[postgis-users] newby performance load to mapserver compared to shapefile
pramsey at refractions.net
Wed Jul 18 07:41:06 PDT 2007
(a) As you indicate later, you are drawing *ALL* your data, which
means the database is being used as nothing more than a big file.
For 400000 polygons, this is not a "real world" use case, since the
map you draw will be essentially meaningless (a 1000x1000 image has
only 1M pixels, so your 400K polygons will average 2.5 pixels each,
and that's the "best case" scenario).
(b) Is it possible that your table is either very wide (100s of
columns) or your geometries very big (1000s of vertices)? If, in
combination, the data per row is > 8K, then each row will be TOASTed
into a side table, which dramatically increases the overhead of
accessing the data. This could easily account for a 20:1 performance
difference versus reading directly off a simple shapefile.
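To check whether TOAST is in play, a couple of diagnostic queries can help. This is just a sketch; the table name (mytable) and geometry column (the_geom, the shp2pgsql default) are placeholders for your actual names:

```sql
-- Average and maximum vertex counts per geometry:
-- very large values suggest rows big enough to be TOASTed.
SELECT avg(ST_NPoints(the_geom)), max(ST_NPoints(the_geom))
FROM mytable;

-- Compare the main table's on-disk size with its total size
-- (including TOAST and indexes): a large gap means most of the
-- data lives in the TOAST side table.
SELECT pg_size_pretty(pg_relation_size('mytable')) AS main_table,
       pg_size_pretty(pg_total_relation_size('mytable')) AS with_toast;
```

If the totals are dominated by TOAST, simplifying geometries or trimming unused columns should narrow the gap with the shapefile.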
On 17-Jul-07, at 11:49 PM, francesco.pirotti at unipd.it wrote:
> Dear Users,
> I have been flirting with PostGIS data for a while, but now I have
> come across a benchmark issue which baffles me. I loaded a big bunch
> of polygon data (431094 rows in postgres 8.1), importing with the
> shp2pgsql utility (thus with GIST index and all)... I ran VACUUM
> ANALYZE on the database, but the time mapserver takes to draw all the
> data is about 20 times slower than a shapefile.
> Is this normal?
> Thank you for your time.
> Francesco Pirotti
> postgis-users mailing list
> postgis-users at postgis.refractions.net