[Benchmarking] Ideas for next year

Martin Desruisseaux martin.desruisseaux at geomatys.fr
Wed Sep 21 15:45:45 EDT 2011


Hello Thomas

Thanks for your proposals; I think they would be good.

On 21/09/11 18:44, thomas bonfort wrote:
> With this year's exercise those confidence numbers would have been
> meaningless anyway, as for a given run there were very different kinds
> of requests being rendered (e.g. from a 20x20 map with no features
> to an 800x800 map of a densely featured area).

I agree, and this is why I said (at the meeting in the bar) that those 
confidence intervals were not what I was looking for. In a previous email, I 
posted a graphic of confidence intervals based on multiple runs (5) of the full 
test.

> As for the confidence interval, I am not opposed but think it will be
> difficult to set up without requesting exactly the same data over and
> over again, and that will raise the same concerns of data caching as
> last year.

I don't think it needs to be the same data. If the number of requests is 
large (maybe 1000), it may be sufficient to ensure that there are approximately 
the same numbers of 20x20 maps, 100x100 maps, etc. Statistically, sensitivity 
to random values usually becomes smaller as the number of samples becomes larger.
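A minimal sketch of that idea, with an invented timing model and an assumed even split across map sizes (all numbers here are illustrative, not benchmark data), might look like:

```python
import math
import random

random.seed(42)

def simulated_render_time(size):
    # Hypothetical model: render time roughly scales with pixel count,
    # plus some Gaussian noise. Invented for illustration only.
    base = (size * size) / 1e5
    return base + random.gauss(0, base * 0.2 + 0.001)

# Roughly the same number of requests per map size, ~1000 total.
sizes = [20, 100, 800]
samples = [simulated_render_time(s) for s in sizes for _ in range(334)]

# 95% confidence interval for the mean response time
# (normal approximation, justified by the large sample count).
n = len(samples)
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / (n - 1)
half_width = 1.96 * math.sqrt(var / n)

print(f"n={n}, mean={mean:.3f}s, "
      f"95% CI = [{mean - half_width:.3f}, {mean + half_width:.3f}]")
```

With a mixed but balanced workload like this, the interval narrows as n grows (the half-width shrinks like 1/sqrt(n)), so exact request repetition is not required.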

Anyway, this variation may actually be what we really want to show. If we had run 
the same FOSS4G tests in exactly the same configuration but with a different set 
of requests, we would probably have got slightly different curves. We probably 
want to show at FOSS4G what the average performance is, rather than the 
performance for one particular set of requests.

     Martin
