[Benchmarking] Interested in searching for a bottleneck

Paul Ramsey pramsey at cleverelephant.ca
Wed Nov 25 13:36:51 EST 2009


My intuition is that the individual requests are running so fast that
there's very little room to see differences. I would suggest some
"crazy, non real world" tests to tease out places where performance
differs (draw 100000 features into a map, do that while reprojecting
them, draw 100 polygons made up to 10, 100, 1000, 10000, 100000
vertices, do that while reprojecting them, draw a complex 5000x5000
image and change the image encoding format from JPEG to PNG to GIF to
PNG8, etc).
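
To make that concrete, here is a rough sketch in Python of the kind of
sweep I mean. The WMS endpoint, layer name, and bounding box are
placeholders (and the PNG8 format string differs between servers), so
adjust everything to the actual setup:

import time
import urllib.parse
import urllib.request

WMS_URL = "http://localhost:8080/wms"  # placeholder endpoint
LAYER = "polygons"                     # placeholder layer name

def get_map(fmt, size):
    """Time a single GetMap request for the given format and image size."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": LAYER, "STYLES": "", "SRS": "EPSG:4326",
        "BBOX": "-180,-90,180,90",       # placeholder extent
        "WIDTH": str(size), "HEIGHT": str(size),
        "FORMAT": fmt,
    }
    url = WMS_URL + "?" + urllib.parse.urlencode(params)
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()                      # drain the response body
    return time.perf_counter() - start

# Sweep output formats and image sizes; PNG8 is left out because its
# format string varies between servers.
for fmt in ("image/jpeg", "image/png", "image/gif"):
    for size in (1000, 2500, 5000):
        print("%s %dx%d: %.1f ms" % (fmt, size, size, get_map(fmt, size) * 1000))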

When the map draw takes 25 ms, there's not a lot of room to see
differences: a 10% difference translates to only about 2.5 ms, which is
almost unmeasurable.
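
One way around that, sketched below with a placeholder request URL, is
to average a large number of identical requests so a difference of a
couple of milliseconds can rise above the per-request noise:

import statistics
import time
import urllib.request

# Placeholder request; substitute a real GetMap URL for the server under test.
URL = "http://localhost:8080/wms?SERVICE=WMS&REQUEST=GetMap&..."

def timed_request(url):
    """Return the wall-clock time for one request, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

samples = [timed_request(URL) for _ in range(500)]
print("mean %.2f ms, stdev %.2f ms over %d requests"
      % (statistics.mean(samples), statistics.stdev(samples), len(samples)))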

P.

On Wed, Nov 25, 2009 at 10:32 AM, Arnulf Christl <seven at arnulf.us> wrote:
> Hey,
> I would be interested in looking into the current results a bit more, because
> it seems almost impossible to me that we should get such similar results from
> two such different software packages.
>
> My intuition tells me there is a bottleneck in the transmission between the
> two boxes.
>
> Another reason could be that the overall processing time needed to render
> the geometries is much smaller than the time required to retrieve them,
> regardless of whether that is file access or database access. If only 5% of
> the overall request time is spent in the mapping software, that would explain
> the similarity of the results.
>
> Maybe you already know this, but how is the processor time split between
> the map applications and the database / file access?
>
> Regards,
>
> --
> Arnulf Christl
>
> Exploring Space, Time and Mind
> http://arnulf.us/
