shapefile optimization for dynamic data

Stephen Woodbridge woodbri at SWOODBRIDGE.COM
Mon Mar 27 22:16:20 EST 2006


Chris,

I would have suggested that also, but I assumed that the use case 
presented, an embedded system, most likely excluded that as an 
option; maybe that was a poor assumption on my part. If you can 
install postgres/postgis, that would be ideal; otherwise I think 
something like what I suggested would probably work best.

Ben, let us know what works best for you.

-Steve

Chris Tweedie wrote:
> Ben, for the situation you described I would have thought pushing the 
> GPS points into a database would make much more sense than trying to 
> add to/remake the shapefile, especially at that frequency.
> 
> Have you looked into using postgis (or any other db for that matter) as 
> your datasource?
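> 
> Something along these lines in the mapfile, for instance (untested 
> sketch; the connection parameters, table, and column names are all 
> placeholders):
> 
>    LAYER
>      NAME "gps_points"
>      TYPE POINT
>      STATUS ON
>      CONNECTIONTYPE POSTGIS
>      CONNECTION "host=localhost dbname=gps user=mapuser"
>      DATA "the_geom FROM gps_points USING UNIQUE gid USING SRID=4326"
>      CLASS
>        COLOR 255 0 0
>      END
>    END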
> 
> Chris
> 
> Quoting Stephen Woodbridge <woodbri at SWOODBRIDGE.COM>:
> 
>> Using shptree will not help you much in this scenario because of 
>> the frequency of updating the file. Your best bet would be to use 
>> multiple files and a tile index that you would have to add the new 
>> files to as they are created. Then you can run shptree on the 
>> non-active files, but not on the active file. That will probably be 
>> the best approach. Also make sure you run shptree on the tileindex.
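>>
>> For example, roughly (the file names here are just illustrative):
>>
>>    # spatially index each closed (non-active) tile
>>    shptree track_0001.shp
>>    shptree track_0002.shp
>>
>>    # build the tile index; tile4ms wants the shapefile names
>>    # without the .shp extension, if I remember right
>>    ls track_*.shp | sed 's/\.shp$//' > tiles.txt
>>    tile4ms tiles.txt trackindex
>>
>>    # index the tile index itself
>>    shptree trackindex.shp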
>>
>> If a shapefile does not have a .qix spatial index, then MapServer 
>> creates one on the fly and throws it away. If you are adding a point 
>> a second, the file is probably getting updated faster than you can 
>> index and render it. Using the tileindex should really help in this 
>> case too, because only the files that intersect your current display 
>> window need to be opened and examined.
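>>
>> The layer then points at the tile index instead of a single 
>> shapefile, something like this (untested; the names are 
>> placeholders, and "location" is the tile-path column that tile4ms 
>> writes into the index dbf):
>>
>>    LAYER
>>      NAME "tracks"
>>      TYPE POINT
>>      STATUS ON
>>      TILEINDEX "trackindex"
>>      TILEITEM "location"
>>      CLASS
>>        COLOR 255 0 0
>>      END
>>    END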
>>
>> -Steve W.
>>
>> Ben Eisenbraun wrote:
>>
>>> Hi-
>>>
>>> I'm building a MapServer application with dynamic data coming in 
>>> every second, and I'm not sure how to organize the shapefiles for 
>>> the fastest access.
>>>
>>> I'm collecting data via a GPS and a sensor that reports a data 
>>> point once per second.  I'm using MapServer CGI to generate an 
>>> overlay on a map via a JavaScript frontend that auto-refreshes 
>>> every few seconds.  The application has to run on a low-power 
>>> embedded hardware device (roughly a P2-266), and I'm running into 
>>> performance problems once I've collected a few thousand data 
>>> points.  The MapServer CGI process tends to consume all the CPU 
>>> trying to render the overlays.
>>>
>>> Up to now, I've been using shpadd/dbfadd to add the data points to 
>>> a single shapefile.  I've tried using shptree to index the 
>>> shapefile, but in my testing it doesn't seem to speed up rendering 
>>> time at all.  The largest a shapefile should ever get is about 
>>> 25,000 data points; I'm not sure if that's large or small compared 
>>> to other people's data.
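>>>
>>> (Each reading gets appended roughly like this; the coordinates and 
>>> attribute values below are just placeholders:
>>>
>>>    shpadd track -71.0912 42.3601
>>>    dbfadd track.dbf "2006-03-27T22:16:20" 17.4
>>>
>>> where track.shp/track.dbf were created up front with 
>>> shpcreate/dbfcreate.)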
>>>
>>> I've thought about breaking up that single shapefile into multiple 
>>> shapefiles of a given size, but I'm not sure if that would be a win 
>>> for this type of situation.
>>>
>>> Given that I'm constantly updating the source shapefile, what are my 
>>> options
>>> for optimizing it?
>>>
>>> Thanks for any tips or pointers.
>>>
>>> -ben
>>>
>>> -- 
>>> simplicity is the most difficult thing to secure in this world; it is 
>>> the last limit of experience and the last effort of genius.     
>>> <george sand>
>>>
>>
> 


