[Tiling] Re: inodes
Frederic Junod
frederic.junod at camptocamp.com
Fri Feb 18 09:12:02 EST 2011
On Fri, Feb 18, 2011 at 2:26 PM, Paul Spencer <pagameba at gmail.com> wrote:
> I'm curious how you work with S3 and tiles - do you serve tiles directly out of S3 to the browser, or go through a server that gets the tile from S3 using s3fs or something like that? How do you handle missing tiles, or do you completely tile everything into S3? What do you think of the performance versus storing tiles on EBS volumes?
We generate all the tiles with TileCache to an S3 bucket using an
OpenLayers.Layer.TileCache compatible format (that's the TileCache
disk format, the same as mod_tile's, I think).
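In case it helps, here is a minimal sketch of that layout (illustrative
Python, not our production code), showing roughly how a tile's z/x/y maps
to the key stored in the bucket:

def tile_key(layer, z, x, y, ext="png"):
    # layer/zz/xxx/xxx/xxx/yyy/yyy/yyy.ext -- the path scheme that
    # OpenLayers.Layer.TileCache expects to find.
    return "/".join((
        layer,
        "%02d" % z,
        "%03d" % (x // 1000000),
        "%03d" % ((x // 1000) % 1000),
        "%03d" % (x % 1000),
        "%03d" % (y // 1000000),
        "%03d" % ((y // 1000) % 1000),
        "%03d.%s" % (y % 1000, ext),
    ))

# e.g. tile_key("basemap", 4, 5, 7) -> "basemap/04/000/000/005/000/000/007.png"

Any S3 client can then upload the rendered tile under that key.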
Between the browser and S3 we have a caching proxy (2 EC2 m2.xlarge
instances running Varnish).
With this setup we deliver, on average, 50 GB / day.
I don't think we could go back to a more "traditional" filesystem: we
have more than 200'000'000 tiles in the bucket and it would be a
nightmare to manage.
Note that this setup only applies to one of our customers! For all
the others we have less data, and we use a standard ext3 partition.
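As an aside, on the inode exhaustion that started this thread: a quick
back-of-the-envelope (purely illustrative numbers) shows why a default
ext3/ext4 format runs out with this kind of layout. It counts one inode
per tile file plus one per directory, assuming full world coverage at
every zoom level:

import math

def estimate_inodes(max_zoom):
    files = dirs = 0
    for z in range(max_zoom + 1):
        n = 2 ** z                         # tiles per axis at this zoom
        x1 = math.ceil(n / 1000000)        # distinct xxx dirs (millions level)
        x2 = math.ceil(n / 1000)           # distinct xxx dirs (thousands level)
        dirs += 1 + x1 + x2 + n            # the zz/ dir and the x directories
        dirs += n * (x1 + x2)              # y dirs under each x leaf (same split)
        files += n * n                     # one file per tile
    return files + dirs

BYTES_PER_INODE = 16384                    # common mke2fs default inode ratio
print(500 * 10**9 // BYTES_PER_INODE)      # ~30.5 million inodes on a 500 GB fs
print(estimate_inodes(13))                 # ~90 million needed up to zoom 13

Reformatting with a smaller bytes-per-inode ratio (mke2fs -i) buys headroom,
but walking a tree that size with du or rsync becomes the real problem long
before the inodes do, as described below.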
Hope this answers your questions.
Regards,
fredj
>
> We are also using EC2, and have resorted to using GlusterFS to combine EBS volumes into distributed, shared volumes exceeding 1 TB, because we found S3 too slow for read/write I/O when dynamically generating tiles.
>
> Cheers
>
> Paul
>
> On 2011-02-18, at 4:41 AM, Frederic Junod wrote:
>
>> Hello Gabriel,
>>
>> We had the same issue on an ext3 partition; our first fix was to
>> rebuild the filesystem with more inodes.
>> It worked, but we realized that the tile directories were impossible
>> to manage: it took more than 2 days to get the directory size with 'du',
>> and rsync never finished (> 3 days just to compute the files to transfer).
>> Now we're using Amazon S3 to store all our tiles.
>>
>> Maybe a distributed filesystem (like HDFS) could be an option?
>>
>> Regards,
>> fredj
>>
>> On Fri, Feb 18, 2011 at 10:17 AM, Gabriel Roldan <groldan at opengeo.org> wrote:
>>> Well, ReiserFS seems to be doing the trick.
>>> BTW, I also tried btrfs but found it started to get slower than the others
>>> as the number of files increased, or at least it seemed that way in my
>>> "benchmarking".
>>> Does anyone have experiences/pointers to posts about tile caches and
>>> disk/fs performance?
>>> Cheers,
>>> Gabriel
>>>
>>> On Fri, Feb 18, 2011 at 5:53 AM, Gabriel Roldan <groldan at opengeo.org> wrote:
>>>>
>>>> Hey Tilers,
>>>> just a question:
>>>> While running a seed task I ran out of free inodes (fs type was ext4, on a
>>>> 500 GB disk, single primary partition) and started to get "no space left on
>>>> device" errors.
>>>> Well, the reason seems obvious, and as I'm sure you've already run into this
>>>> issue or know about it, I'm wondering what you usually recommend/do when
>>>> planning a partition to hold a tile cache.
>>>> Cheers,
>>>> Gabriel
>>>> --
>>>> Gabriel Roldan
>>>> OpenGeo - http://opengeo.org
>>>> Expert service straight from the developers.
>>>
>>>
>>>
>>> --
>>> Gabriel Roldan
>>> OpenGeo - http://opengeo.org
>>> Expert service straight from the developers.
>>>
>>
>>
>>
>> --
>> Frédéric Junod
>> Camptocamp SA
>
>
--
Frédéric Junod
Camptocamp SA