[mapserver-users] mapcache in a cluster
traviskirstine at gmail.com
Mon Nov 21 10:35:15 EST 2011
When running the seeder (I haven't tested on-the-fly generation) the
process runs extremely slowly. The seeder is launched using all
available CPUs (24) on the server. We are running MapServer as the
When I launch the seeder using a cache on the same file system the
seeder will spawn 18-24 concurrent WMS requests and mapcache_seed will
be running at 250-350% CPU.
If I launch the same seeding process using an NFS-mounted cache the
performance drops: the WMS requests fall to only one concurrent request
(if that) and mapcache_seed runs at less than 60% CPU. The
server is basically idle.
I rechecked the mapcache configuration and we are writing the
lock files to the same mounted directory as the cache; I'm not sure if
this could be an issue. I will try writing the lock files locally to
see if this has any effect.
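For what it's worth, the lock directory is set in mapcache.xml; a minimal fragment (the local path here is hypothetical, and the element name should be checked against the MapCache docs for your version):

```xml
<!-- mapcache.xml: write lock files to fast local disk instead of the
     NFS-mounted cache. /tmp/mapcache_locks is a hypothetical path. -->
<lock_dir>/tmp/mapcache_locks</lock_dir>
```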
On 21 November 2011 09:39, thomas bonfort <thomas.bonfort at gmail.com> wrote:
> What kind of performance issues? The current locking code only uses
> the presence/absence of a file for its locking functions, and does
> not rely on flock/fcntl.
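The scheme described above (an atomically created lock file whose mere presence is the lock, rather than flock/fcntl) can be sketched in Python. This is an illustration only, not MapCache's actual C implementation; the function names and timeouts are made up:

```python
import errno
import os
import time


def acquire_lock(path, timeout=30.0, retry_interval=0.1):
    """Take a lock by atomically creating a file; its presence is the lock."""
    deadline = time.time() + timeout
    while True:
        try:
            # O_CREAT | O_EXCL fails if the file already exists, so creation
            # doubles as an atomic test-and-set -- no flock()/fcntl() needed.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            if time.time() >= deadline:
                return False  # someone else holds the lock; gave up waiting
            time.sleep(retry_interval)


def release_lock(path):
    """Release the lock by removing the file."""
    os.remove(path)
```

Note that atomic O_EXCL creation was historically unreliable on old NFS versions, which is exactly why this class of scheme needs care on network filesystems.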
> On Mon, Nov 21, 2011 at 15:16, Travis Kirstine <traviskirstine at gmail.com> wrote:
>> We have been running into some performance issues with mapcache and NFS.
>> We feel the issue may be related to how NFS locks files/directories
>> compared to SMB. We are trying a few things on our end (disabling
>> locking, NFSv4, etc.). Do you have any ideas?
>> On 20 October 2011 12:19, thomas bonfort <thomas.bonfort at gmail.com> wrote:
>>> So, this discussion inspired me to completely rework the locking
>>> mechanism in mapcache, to stop relying on file locks which have their
>>> quirks on network filesystems.
>>> I have tried using multiple apache instances configured to use an
>>> SMB-mounted lock directory and hammered both instances on an identical
>>> unseeded area to force locking, and ended up with absolutely no
>>> duplicate WMS requests or failed requests for the clients.
>>> The code is committed in trunk. Thanks for bringing this up, this
>>> allowed me to really simplify the locking code and remove a lot of
>>> unneeded stuff :)
>>> On Thu, Oct 20, 2011 at 17:08, Travis Kirstine <traviskirstine at gmail.com> wrote:
>>>> Andreas and Thomas
>>>> Thanks for your responses. I have discussed this with some of our IT
>>>> staff and they had a similar solution to Andreas's, using GFS. Their
>>>> comments are below:
>>>> "I suspect this scheme is not reliable over NFS. The problem is that
>>>> directory updates are not synchronized across multiple nodes. I had a
>>>> similar issue with the IMAP e-mail protocol. Our current workaround
>>>> is to force each user to leverage a single server.
>>>> It seems like there are some tweaks to disable directory attribute
>>>> caching, but these can trigger slower performance.
>>>> The only workaround is to use GFS, which I found to have its own issues."
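For reference, attribute caching is controlled by NFS mount options; a hypothetical /etc/fstab line (server, export, and mount point are made up, and noac can slow other workloads considerably, so measure before adopting it):

```
# /etc/fstab -- noac disables NFS attribute caching so file/directory
# metadata is revalidated on every access; actimeo=0 is the per-timeout form.
nfsserver:/export/tilecache  /var/cache/mapcache  nfs  noac,vers=3  0  0
```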
>>>> On 20 October 2011 05:32, Eichner, Andreas - SID-NLKM
>>>> <Andreas.Eichner at sid.sachsen.de> wrote:
>>>>> We use TileCache.py on two servers with the cache on an OCFS2 on a
>>>>> shared LUN in the SAN. No known issues with that so far. Note: spurious
>>>>> stale lock files already occurred on a single machine. There seemed to
>>>>> be issues with lots of requests and a very slow upstream server. I used
>>>>> a cron job to delete lock files older than 5 minutes or so.
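Such a cleanup can be a one-line cron job; a sketch (the lock directory and the .lck suffix are assumptions, so check what your TileCache/MapCache setup actually writes):

```
# Every 5 minutes, delete lock files not modified for more than 5 minutes.
*/5 * * * * find /var/cache/tilecache/locks -maxdepth 1 -name '*.lck' -mmin +5 -delete
```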
>>>>> As Thomas noted, if the lock files are created on a shared filesystem
>>>>> and you make sure the filesystem you use is able to lock files properly
>>>>> (read the docs carefully!) there's no reason why it should not work.
>>>> mapserver-users mailing list
>>>> mapserver-users at lists.osgeo.org