About performances
percy
percyd at PDX.EDU
Thu Mar 8 20:44:20 PST 2007
Benoit, to be explicit about Frank's step #4:
> 4) Produce reduced complexity datasets for overview maps that would
> otherwise include hundreds of thousands of features.
What I did for the Oregon Geology map
(http://geospatial.research.pdx.edu/ogdc/) to simplify the polygons was
to load the data into PostGIS and then execute a command like this:
pgsql2shp -f simplelith -h localhost -u mapserve geology \
    "select simplify(the_geom, 1000) as the_geom, gnlith_u from lithology"
This connects to the geology database and writes out a shapefile called
simplelith from the table lithology with a reduced set of vertices;
simplify() applies Douglas-Peucker generalization, here with a tolerance
of 1000 feet (the data's map units).
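If the data isn't in PostGIS to begin with, loading a shapefile first is
a one-liner with shp2pgsql; the SRID and file/table names below are
placeholders for whatever your data actually uses:

shp2pgsql -s 2913 lithology.shp lithology | psql -d geology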
I then serve this shapefile at zoomed-out scales.
At the initial scale this "generalization" is virtually undetectable,
and it cut my load time from ~30 seconds down to 4 seconds.
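In the mapfile this just means two layers for the same theme with
complementary scale limits. A minimal sketch (the layer names, data
paths and the 1:250,000 threshold are illustrative, not from my actual
map):

LAYER
  NAME "lith_overview"
  TYPE POLYGON
  STATUS ON
  DATA "simplelith"    # the generalized shapefile from above
  MINSCALE 250000      # drawn only when zoomed out beyond 1:250,000
  CLASS
    COLOR 200 200 170
  END
END

LAYER
  NAME "lith_detail"
  TYPE POLYGON
  STATUS ON
  DATA "lithology"     # hypothetical full-detail shapefile
  MAXSCALE 250000      # drawn only when zoomed in closer than 1:250,000
  CLASS
    COLOR 200 200 170
  END
END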
That's one approach!
:-)
Frank Warmerdam wrote:
> Benoit Myard wrote:
>> Hi list,
>>
>> We are in the process of replacing our custom GIS tools with MapServer
>> because it offers greater flexibility and more features. We also
>> appreciate its open approach to the GIS universe and the ability to
>> integrate it in a wide variety of tools.
>>
>> We currently work with E00 data, which we convert to Arc/Info Vector
>> Coverage (with AVCE00), but we are experiencing major performance issues.
>>
>> For a reduced set of data (about 2 MB), MapServer takes up to 15
>> seconds to render a rather small map (75,000 km²) on pretty decent
>> hardware (a Sun-Fire-T1000 running Solaris 10). Note that MapServer
>> doesn't have to reproject the data.
>>
>> We wonder if the slowness we're experiencing is due to the data format
>> used; if so, which format would you recommend?
>
> Benoit,
>
> I don't know the details of your data, or what rendering options you
> are using. But if you are rendering polygon layers from Arc/Info binary
> coverages, the polygons have to be assembled from arcs "on the fly",
> and this can result in quite a bit of extra seeking around to find data.
>
> I would suggest translating the layer(s) you want to use to shapefiles
> and comparing the performance. If it is still slow, then I think you
> will need to provide additional detail or examples of what you are
> doing so that others can investigate.
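An easy way to do that translation is OGR's ogr2ogr, pointed at the
binary coverage directory. The paths below are placeholders; run ogrinfo
on the coverage first to see which layers it actually exposes (typically
ARC, PAL, LAB and so on):

ogrinfo geology_cov
ogr2ogr -f "ESRI Shapefile" geology_shp geology_cov PAL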
>
>> Also, do you think that using the FastCGI approach would significantly
>> improve performance? I read that FastCGI is good for databases but not
>> of much use for file-based geographic data; is that still true, and
>> what's the current status of FastCGI support in MapServer?
>
> FastCGI helps to make up for high connection costs. For files, that
> would equate to formats that are expensive to open. I doubt very much
> that applies in this case.
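For what it's worth, the textbook case where FastCGI does pay off is a
database-backed layer, where the long-lived process can hold its
connection open between requests. A hypothetical PostGIS layer set up
that way (the names are placeholders):

LAYER
  NAME "lithology_pg"
  TYPE POLYGON
  STATUS ON
  CONNECTIONTYPE POSTGIS
  CONNECTION "host=localhost dbname=geology user=mapserve"
  DATA "the_geom from lithology"
  PROCESSING "CLOSE_CONNECTION=DEFER"  # reuse the connection across requests
  CLASS
    COLOR 200 200 170
  END
END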
>
>> Could you share performance reports (data formats used, amount of data
>> available, time to render and hardware specs) with the list so that we
>> can compare? Are any of you aware of recent performance benchmarks, so
>> that we have a reference?
>>
>> Do you have any ideas or tips to speed up MapServer besides tile caching?
>
> 1) Use shapefiles.
> 2) Break source data into tiles if you have a very large dataset.
> 3) Spatially index your files.
> 4) Produce reduced complexity datasets for overview maps that would
>    otherwise include hundreds of thousands of features.
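To make 2) and 3) concrete, MapServer ships command-line utilities for
both; a rough sketch with placeholder filenames:

# 3) build a quadtree spatial index (.qix) next to each shapefile
shptree lithology.shp

# 2) build a tile index shapefile over a set of tiles
#    (tiles.txt lists the tile shapefile paths, one per line)
tile4ms tiles.txt lith_index

The resulting lith_index.shp is then referenced from the LAYER via
TILEINDEX "lith_index" and TILEITEM "location".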
>
> I think you will find other performance hints in the email archives and
> perhaps among the web site documentation. The key is to set it up so
> that MapServer only needs to process a reasonable number of features for
> a given map request, and so that it can find them quickly.
>
> Best regards,
--
David Percy
Geospatial Data Manager
Geology Department
Portland State University
http://gisgeek.pdx.edu
503-725-3373