Contours again.

Frank Warmerdam warmerdam at POBOX.COM
Fri Oct 15 12:04:11 PDT 2004


Bob Basques wrote:
> Yes, the source data was all organized into 1/2 x 1/2 mile tiles (2000 x
> 2000 ft).  When assembled into a single SHP file they were NOT joined
> together.  They are still polylines though.  I'm starting to wonder now
> though about exploding them into their respective line segments.  The
> data size will double though if I do that, with all the extra end points
> to manage.
>
> Even though the contours are chopped up in 1/2 mile increments, they are
> still quite large polylines, with hundreds of vertices each.  I could
> chop them up further, down to a 500 foot grid tile perhaps.  The big
> concern for me was labelling.  I didn't want to end up with separate
> segments that were labelled too often.  I could get away with a single
> label per object with the tiling method of aggregation.  Separating
> every segment out seems like going backwards for some reason.  :c)
>
> I don't recall now if I mentioned this or not, but the contours are at
> one foot increments, quite dense, and 560+ megs for the *.SHP file.  The
> qix file was 4+ megs.  As a matter of fact, this is the densest dataset
> that I need to publish (so far).  Next I have the DEM that was used to
> generate the contours.  :c)

Bob,

I'm really surprised that the qix file gave you so little speed improvement,
given that your data would appear to be very well organized for spatial
indexing.  I don't think exploding things into individual line segments
will help in any useful way, and it will make labelling hell.  De-densification
might be called for though.  Splitting the data into tiles with a tileindex,
and building a spatial index for both the tileindex and each tile shapefile,
could also help some (see the sketch below).

However, the modest performance gain you got with the spatial index
suggests to me that the bottleneck might not be where we would expect it to
be.  Some profiling might be in order.
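A crude first pass, assuming you have the shp2img utility handy, is just to
time a typical request from the command line and watch how the number
changes as you toggle things off and on (the extent values below are made
up, substitute a real window from your data):

    # render one typical map window and time it
    time shp2img -m contours.map -o test.png -e 560000 140000 562000 142000

    # repeat with the .qix removed, with the LABEL block commented out,
    # etc., and see which change actually moves the wall-clock time

That usually narrows down whether the time is going into reading, drawing
or labelling before reaching for a real profiler.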

I am assuming that your map view request is for a small area of the total
dataset, right?  An overview render of this dataset will inevitably require
a *huge* amount of reading and rendering work.
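If overview views are a possibility, you may want to simply suppress the
contour layer beyond some cutoff scale; something like this in the layer
definition (the 1:50,000 cutoff is just a guess, adjust to taste):

    LAYER
      NAME "contours"
      ...
      MAXSCALE 50000   # skip this layer entirely when zoomed out past 1:50,000
    END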

If you felt like dumping a working mapfile and the full dataset on a CD and
sending it to me, I could try and do a bit of profiling to see what is
happening.  As Paul Ramsey discovered with his "label cache collision
checking" performance issue, the bottleneck isn't always where you think
it will be.

Umm, you aren't labelling these contours, are you?  I feel bad asking all
these questions that you and Ed already hashed through, but it is often the
unasked question that unearths the real issue.
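If you are labelling them, one thing that helps keep the label cache from
exploding is to thin the labels in the LABEL block itself.  Something along
these lines, where the ELEV attribute name is just a guess at your
elevation field:

    LAYER
      NAME "contours"
      TYPE LINE
      LABELITEM "ELEV"
      CLASS
        COLOR 150 100 50
        LABEL
          TYPE BITMAP
          COLOR 0 0 0
          MINDISTANCE 300   # minimum pixels between duplicate labels
          BUFFER 4          # padding around each label in the collision test
        END
      END
    END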

Best regards,
--
---------------------------------------+--------------------------------------
I set the clouds in motion - turn up   | Frank Warmerdam, warmerdam at pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush    | Geospatial Programmer for Rent
