[OpenLayers-Dev] OpenLayers for Mobile devices

Beau Anderson elbeau at gmail.com
Thu Dec 23 12:21:44 EST 2010


Wow, I've been working on making OpenLayers behave more like Google Maps on
mobile devices for the last few weeks, with varying degrees of success.  I finally
joined the mailing list and the first e-mail I get is this one!  I'd like to
join in the fray and work together to get this done right.

I've got an existing, successful iPhone application that I'm porting to other
platforms.  There's a lot I want to add to my app's mapping functionality, and
OpenLayers looks like it would fit the bill nicely, but the user experience is
just not as smooth as what Google and Bing provide.  The headache I've been
working on is getting layers that extend from OpenLayers.Grid to act more like
Google Maps, where zooming in or out simply scales the tiles you already have
loaded and then loads new tiles on top of them.  To accomplish this we need to
rethink the way that Grid layers lay out their tiles so that there are
multiple layers of tiles within one Grid (not just the foreground and
background img's in Image.js).
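
Roughly, the structure I have in mind looks like this (this is not existing
OpenLayers API, just an illustrative sketch; createResolutionContainers and
activeContainer are made-up names):

    // One tile container per pre-configured zoom level, all stacked inside
    // the layer's div.  Only the "active" container requests new tiles; the
    // others just keep the imagery they already loaded and get scaled.
    function createResolutionContainers(layerDiv, resolutions) {
        var containers = [];
        for (var i = 0; i < resolutions.length; i++) {
            var div = document.createElement("div");
            div.style.position = "absolute";
            div.style.left = "0px";
            div.style.top = "0px";
            layerDiv.appendChild(div);
            containers.push(div);
        }
        return containers;
    }

    // Pick the container whose native resolution is closest to the current
    // (possibly fractional) map resolution; that one gets the tile requests.
    function activeContainer(containers, resolutions, currentResolution) {
        var best = 0;
        for (var i = 1; i < resolutions.length; i++) {
            if (Math.abs(resolutions[i] - currentResolution) <
                Math.abs(resolutions[best] - currentResolution)) {
                best = i;
            }
        }
        return containers[best];
    }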

The approach that I have working is to use HTML tables, rather than fixed
height/width div's, to house the individual images that make up a tiled grid,
and then to set the height and width of the img tags to 100% so that every
tile is always scaling to fill its cell.  If you add these img's to each cell
in the table, then as you scale the whole table up or down, the tile images
scale along with it, without having to constantly recalculate the size and
position of every loaded tile by hand.  Then, for each pre-configured zoom
level, you create one of these tables, make sure each table understands its
relationship to the current resolution, and layer the tables on top of each
other.  As the zoom level changes fractionally, we scale ALL the tables
appropriately, then recalculate which of our tables should be requesting tiles
and tell the Grid to work on that table.  This works and gives good fractional
zooming to Grid layers (I've had a working version of this, but I wasn't happy
with all my code, so I'm presently refactoring it).  I'm not completely done;
I still have some minor positioning issues, but I'm definitely happy with the
general approach.
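
For anyone who wants to experiment before I finish the refactoring, here is a
minimal, stand-alone sketch of the table trick (plain DOM only, no OpenLayers
classes; buildTileTable, scaleTileTable and the tileUrl callback are names I
made up for illustration, and each of the stacked per-resolution containers
above would hold one of these tables):

    // Build an nRows x nCols table whose cells each hold an <img> stretched
    // to 100% of its cell, so resizing the table rescales every tile at once.
    function buildTileTable(nRows, nCols, tileUrl, tileSize) {
        var table = document.createElement("table");
        table.cellPadding = "0";
        table.cellSpacing = "0";
        table.style.position = "absolute";
        table.style.borderCollapse = "collapse";
        table.style.tableLayout = "fixed";
        table.style.width = (nCols * tileSize) + "px";
        table.style.height = (nRows * tileSize) + "px";
        for (var r = 0; r < nRows; r++) {
            var row = table.insertRow(-1);
            for (var c = 0; c < nCols; c++) {
                var cell = row.insertCell(-1);
                var img = document.createElement("img");
                img.style.display = "block";  // avoid the inline baseline gap
                img.style.width = "100%";     // the cell dictates the size
                img.style.height = "100%";
                img.src = tileUrl(r, c);      // caller supplies the URL scheme
                cell.appendChild(img);
            }
        }
        return table;
    }

    // Fractional zoom: resize the whole table instead of repositioning every
    // tile.  scale = (native resolution of this table) / (current resolution)
    function scaleTileTable(table, nRows, nCols, tileSize, scale) {
        table.style.width = Math.round(nCols * tileSize * scale) + "px";
        table.style.height = Math.round(nRows * tileSize * scale) + "px";
        // the 100%/100% imgs follow their cells automatically
    }

To try it, append the table to a relatively positioned div and call
scaleTileTable with, say, scale = 1.3; every tile resizes in one step, with no
per-tile math.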

I also added a nice fade-in effect for newly loaded tiles which makes the UI
appear MUCH less clunky (it's often funny how a little visual change can
make something look so much more professional).  I'll share it when I get
some time.
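
In the meantime, the rough idea looks something like this (a simplified
sketch, not my actual patch, and not tied to any OpenLayers internals; the
function name and timing values are just placeholders):

    // Start the tile invisible and ramp its opacity up over ~300ms once the
    // image has finished loading.  Attach the handler before setting src so
    // cached images still trigger it.
    function fadeInTile(img, url) {
        img.style.opacity = "0";
        img.onload = function() {
            var opacity = 0;
            var timer = window.setInterval(function() {
                opacity += 0.1;
                if (opacity >= 1) {
                    opacity = 1;
                    window.clearInterval(timer);
                }
                img.style.opacity = opacity;
                // older IE needs the filter syntax instead of style.opacity
                img.style.filter =
                    "alpha(opacity=" + Math.round(opacity * 100) + ")";
            }, 30);
        };
        img.src = url;
    }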

I'd like to collaborate on porting OpenLayers to mobile devices.  May I
suggest that we get organized and come up with an approach for working
together on the actual implementation of each of these issues?

Thanks,


-Beau



On Thu, Dec 23, 2010 at 9:14 AM, Bob Basques <Bob.Basques at ci.stpaul.mn.us> wrote:

>  Benoit,
>
>  I'm old school in all this, so my ideas about a generic approach to
> things will likely seem old fashioned.
>
>  My first stab at putting together a gesture based interface would be to
> use some sort of control overlay in the map area.  This is mostly to provide
> instant feedback to the users more than anything.
>
>
>  Some important points (I think):
>
>  ** Symbols for zooming in (+) and out (-) that are right on the map
> itself.  This might actually be considered a failsafe mode, for when all
> other modes don't work on a device, and it would use essentially existing
> capabilities as far as user controls go in a mapping environment.  Dragging
> seems to be a universal capability (so far) that can be implemented on
> (nearly) all devices.
>
>  ** Some sort of (fairly comprehensive) device type detection (or user
> setup control for a device type) will need to be in place.
>
>  ** With the building of a "failsafe" mode, and making it always
> available, the process of adding the gesture aspects seems to fall into the
> category of an add-on.  As an aside here, with the advent of Netbook /
> Tablet devices becoming more prevalent, there may be some room here to add
> in some key-macro navigation tools as well as an optional navigation
> toolset, but this is likely only interesting to me.  But if this type of
> approach of allowing for more than one type of navigation toolset were
> used, I think the different platform specifics could be approached more
> easily.  Am I talking about a navigation conduit here?  Something that is
> standardized in some form, that many different types of navigation tools
> could be developed against?  There are certain work processes that might
> require direct access to a navigation tool in a very specific manner, for
> example, that a framework like this might help with during development.
>
>  ** Associated with the navigation are layer choosers of some sort.
> Displaying many layer options (or even many of anything) is a problem on a
> smaller form-factor device.  I think this is an associated and just as
> important an aspect of the gesture controls (for mapping) as the navigation
> is.  While most map-based solutions may only present a few layers to the
> end user, I have systems in place that have hundreds of layers available,
> and it's very difficult to present this type of information to the typical
> user, let alone to the ones you might have the option of doing a little
> one-on-one training with.
>
>
>  I would prefer to use a common set of code to accomplish this all with
> some sort of auto-detection of device on the client side.  The gestures
> themselves seem to be varied in their implementation across vendor products
> to one degree or another.  Is there room here for some sort of user
> settings/preferences (like in a desktop application) where the user can
> decide (based on the device capabilities) what gestures can be enabled?
> There could be defaults for known devices.
>
>  bobb
>
>
>
>
> >>> Benoit Quartier <benoit.quartier at camptocamp.com> wrote:
>
> Bob,
>
>   On Tue, Dec 21, 2010 at 5:40 PM, Bob Basques <
> Bob.Basques at ci.stpaul.mn.us> wrote:
>
>>  Benoit,
>>
>>
>>  I couldn't get the zooming to work at all on the N900. But I don't count
>> that as a fault.
>>
>>
>>  I understand the complexities here, especially with regard to the
>> multi-touch aspects vs single touch enabled devices. I think that in the
>> near term the gesture aspects are going to NEED to be targeted at vendor
>> specifics in order to take full advantage of each of them. Hopefully this
>> will flesh out to a standard from the best available, but, in the near term,
>> I'm interested in seeing a process that works for single touch (Could be all
>> phone/mobile devices??) as a foundational chunk of coding. Hopefully this
>> approach would get as many functional mobile devices accounted for as
>> possible. Then it makes sense to attack the vendor specific (extra)
>> capabilities. It seems to be easier to design for the masses where possible
>> (from my experience), and then to enhance for the specialties, don't you
>> think?
>>
>  Yes, I fully agree.  That's why we didn't begin with Apple gestures.  But,
> as you wrote, in the near term we will need to implement these
> vendor-specific gestures.
>
> It would be great to have a set of single-touch events that works on all
> devices (or as many as possible) and, additionally, a set of vendor-specific
> gestures.  What would they be?  Double tap to zoom in, triple tap to zoom
> out?
>
>
>>
>>  I think there are options available for addressing these ideas, if
>> anyone else is interested.
>>
>  I am interested, but I am not sure I understand what you mean.  Could you
> please elaborate?
>
>
>
> _______________________________________________
> Dev mailing list
> Dev at lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/openlayers-dev
>
>