[Benchmarking] Ideas for next year

johann.sorel at geomatys.com
Sat Sep 17 11:39:19 EDT 2011


Hi,

Just my thoughts: last year's event had many more competitors, while this year 
it was mainly Mapnik vs MapServer.
So I understand why you want to compare both projects.

This year was my first participation as a developer of the 
Constellation server, and I was frustrated most of the time.
I was hoping to work on improvements to our engine, but in the end I spent 
70% of my time on a parser to convert
Mapfiles to SLD. Without this effort, both Constellation and GeoServer 
would have been out of the benchmark.
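
Just to give an idea of the kind of translation involved, here is a minimal 
sketch in Python of a single line-style conversion; the rule name and colour 
triple below are made up, and the real converter of course had to deal with 
expressions, scale ranges, labels and so on:

def mapfile_color_to_hex(r, g, b):
    # Mapfile "COLOR r g b" triples become SLD hex colours
    return "#{:02x}{:02x}{:02x}".format(r, g, b)

def line_rule_to_sld(name, color, width):
    # Emit an SLD 1.0 Rule with a LineSymbolizer for a solid stroke
    return (
        "<Rule>\n"
        "  <Name>%s</Name>\n"
        "  <LineSymbolizer>\n"
        "    <Stroke>\n"
        "      <CssParameter name=\"stroke\">%s</CssParameter>\n"
        "      <CssParameter name=\"stroke-width\">%d</CssParameter>\n"
        "    </Stroke>\n"
        "  </LineSymbolizer>\n"
        "</Rule>"
    ) % (name, mapfile_color_to_hex(*color), width)

# A Mapfile block like
#   CLASS
#     NAME "roads"
#     STYLE COLOR 255 0 0 WIDTH 2 END
#   END
# would then come out as the equivalent SLD rule:
print(line_rule_to_sld("roads", (255, 0, 0), 2))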

So, definitely, if we intend to have more competitors next year (and not 
even fewer), the objectives need to be described in a neutral way 
for all teams, for both styling and data. I'm not saying it must be OGC 
SLD/SE; a text describing the expected result is enough, and each team can 
then implement it with its own style model.

Talking about data: only about 3 or 4 weeks before the benchmark it was 
decided to use BIL files for pseudo-hillshading. Since both MapServer 
and Mapnik rely on GDAL/OGR they had no problem, but that is not the case 
for everyone, so I also hope that last-minute changes related to data formats 
will not happen in the future.
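
For projects that do use GDAL, the format really is transparent; a sketch 
like this is all it takes to read the elevation values (the file name is 
hypothetical, and the ESRI .hdr sidecar is assumed to sit next to the .bil):

from osgeo import gdal

ds = gdal.Open("dem.bil")            # GDAL's EHdr driver reads BIL/.hdr rasters directly
band = ds.GetRasterBand(1)
elevation = band.ReadAsArray()       # numpy array of elevation values

print(ds.RasterXSize, ds.RasterYSize, elevation.min(), elevation.max())

# A hillshade could then be produced on the command line with e.g.:
#   gdaldem hillshade dem.bil hillshade.tif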

I also noticed that these tests did not involve vector reprojection. After 
all, we are providing mapping servers, not painting servers, so 
reprojection should have a larger place in the tests. I think running 
queries in ten or more different projections would be nice.
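
Something along these lines would already be a start: request the same data 
in a handful of CRSs and time each answer. The endpoint, layer name and 
per-CRS bounding boxes below are only placeholders (each BBOX has to be 
expressed in the coordinates of its SRS):

import time
import urllib.parse
import urllib.request

WMS_URL = "http://localhost:8080/wms"          # hypothetical endpoint
CRS_BBOXES = {
    "EPSG:4326": "-180,-90,180,90",
    "EPSG:3857": "-20037508,-20037508,20037508,20037508",
    # ... eight or more further projections would go here
}

for srs, bbox in CRS_BBOXES.items():
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": "roads", "STYLES": "", "SRS": srs, "BBOX": bbox,
        "WIDTH": "1024", "HEIGHT": "512", "FORMAT": "image/png",
    }
    url = WMS_URL + "?" + urllib.parse.urlencode(params)
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    print(srs, "%.3f s" % (time.time() - start))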

johann


On 17/09/2011 07:23, Iván Sánchez Ortega wrote:
> Hi all,
>
> During some beers at the Wynkoop, I had an idea that I think is worth sharing.
>
> Until now, the results focus on throughput per number of concurrent requests.
> This is fine, but other metrics are possible.
>
> Then, I heard that Mapnik excels at requests with few vector features, while
> Mapserver does a very good job when there are many vector features to be
> rendered.
>
> You can guess where this goes. I will propose that, for next year (years?),
> requests should be classified into groups depending on the number of features
> contained in that extent. e.g. requests with <10 feats, 10-50, 50-100,
> 100-500, >500. Measure latency/throughput for every group, put the results in
> a graph.
>
>
>
> I don't know if this is feasible. Anyway, will see you tomorrow at the Code
> Sprint,
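
For the record, a rough sketch of the grouping Iván describes; the 
measurements below are invented, and real feature counts would have to come 
from the data or the benchmark logs:

from bisect import bisect_right
from collections import defaultdict

EDGES = [10, 50, 100, 500]     # bucket boundaries: <10, 10-50, 50-100, 100-500, >500
LABELS = ["<10", "10-50", "50-100", "100-500", ">500"]

def bucket(feature_count):
    return LABELS[bisect_right(EDGES, feature_count)]

# (feature_count, latency_seconds) pairs for each GetMap request
measurements = [(3, 0.04), (42, 0.11), (250, 0.35), (800, 1.20), (7, 0.05)]

by_bucket = defaultdict(list)
for count, latency in measurements:
    by_bucket[bucket(count)].append(latency)

for label in LABELS:
    latencies = by_bucket.get(label)
    if latencies:
        print("%8s  n=%d  mean latency=%.3f s"
              % (label, len(latencies), sum(latencies) / len(latencies)))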


