[OpenLayers-Dev] Caching layer images by browsers

Claude Philipona claude.philipona at camptocamp.com
Wed Mar 10 03:26:30 EST 2010


On Tue, Mar 9, 2010 at 20:24, Tim Schaub <tschaub at opengeo.org> wrote:
> Claude Philipona wrote:
>>> So, you can improve the caching situation by having your server (or
>>> gateway cache) use appropriate headers on responses.
>>
>> In a deep analysis of OL browser cache use in Firefox with the
>> Firefox extension "Page Speed", we found out that Firefox is actually
>> not always using the browser cache properly with the OL preseeded tile
>> system, even though the headers are properly set on the HTTP server.
>>
>
> ... snip ...
>
>>
>> We found out that we are facing this kind of Firefox limitation with
>> OL preseeded layers served through Varnish or Squid.
>>
>> More info from:
>>
>> http://code.google.com/speed/page-speed/docs/caching.html#LeverageBrowserCaching
>>
>> Avoid URLs that cause cache collisions in Firefox.
>>
>
> Great info Claude.  Thanks for pointing that out.  I would classify this
> as a browser limitation that OpenLayers has no real control over.
Sure. The problem is mainly on the Firefox side. One workaround that
could be implemented on the OL side would be to organize the tile
hierarchy and naming with a more complex syntax, which would guarantee
that each URL stays out of the cache collision case... but this would
impact all tile specifications... so I think it would be better to
have this issue solved on the Firefox side. It might also be good if
the main OL developers and/or the OL PSC got in touch with the Firefox
team to raise the priority of this limitation.
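
Just to illustrate the collision case (the paths below are made up,
following a typical preseeded/TileCache-style layout), neighbouring
tiles end up with URLs that differ in only a few characters:

  http://tiles.example.org/1.0.0/mylayer/07/000/000/065/000/000/042.png
  http://tiles.example.org/1.0.0/mylayer/07/000/000/065/000/000/043.png

which is exactly the kind of nearly identical URLs that the Page Speed
documentation quoted at the bottom of this mail warns about.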

> The locations exposed for tile resources are not the responsibility of
> OpenLayers.  We cannot force a different URL to resolve to the same
> resource as another - so with OpenLayers alone, you cannot resolve the
> Firefox disambiguation issue.
>
> However, if you configure your server to expose the same resource with
> multiple URLs, then OpenLayers can take advantage of this (as you
> imply).  Using an array for the layer's url property (with different
> subdomains) would be a way to reduce the chance of cache "collisions."
Multiple subdomains certainly help to limit the cache collision case in Firefox.
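
For example, something along these lines (the hostnames below are only
placeholders, and each alias must of course resolve to the same tile
cache on the server side):

  // Hypothetical subdomain aliases that all serve the same tiles.
  // OpenLayers picks one entry per tile (the same tile always maps to
  // the same alias), so tile URLs are spread across the subdomains.
  var urls = [
      "http://tiles1.example.org/service",
      "http://tiles2.example.org/service",
      "http://tiles3.example.org/service",
      "http://tiles4.example.org/service"
  ];
  var layer = new OpenLayers.Layer.WMS(
      "Preseeded layer", urls,
      {layers: "mylayer", format: "image/png"}
  );
  map.addLayer(layer);  // assumes an existing map object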

> At some point, I imagine this Firefox limitation will be gone.  Properly
> using cache related headers will always be good advice.
Of course.
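
For the record, the kind of response headers we aim for on tiles (the
values below are only an example, not a recommendation):

  HTTP/1.1 200 OK
  Content-Type: image/png
  Cache-Control: public, max-age=604800
  Expires: Wed, 17 Mar 2010 03:26:30 GMT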

Claude


>>    The Firefox disk cache hash functions can generate collisions for
>> URLs that differ only slightly, namely only on 8-character boundaries.
>> When resources hash to the same key, only one of the resources is
>> persisted to disk cache; the remaining resources with the same key have
>> to be re-fetched across browser restarts. Thus, if you are using
>> fingerprinting or are otherwise programmatically generating file URLs,
>> to maximize cache hit rate, avoid the Firefox hash collision issue by
>> ensuring that your application generates URLs that differ on more than
>> 8-character boundaries.


