[Benchmarking] Testing setup (following up Adrian Custer questions)

Adrian Custer adrian.custer at geomatys.fr
Mon Aug 9 07:01:39 EDT 2010


On Thu, 2010-08-05 at 16:20 +0200, Andrea Aime wrote:
> > # how will the jmx file be designed? (need one .csv file per thread so all requests are different) 
> 
> I was planning to modify last year's jmx to have a progression
> of 1, 2, 4, 8, 16, 32, 64 threads, each group making enough requests
> to stabilize the average (I'd say 200, 200, 400, 400, 400, 400, 800,
> 800). As usual I'm open to suggestions, but I'd suggest avoiding
> too many requests; we have many servers and we cannot afford the total
> run to take several hours.
> 
> As far as I know one csv is sufficient: all threads pick the next value
> from the shared csv as if it were a shared input queue (and roll over to
> the start if it ends, but we'll generate enough requests to make sure
> no two requests are ever repeated in the same session).
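(As a rough sketch, and not the script actually used last year: assuming the
quoted per-step request counts, and a hypothetical GetMap BBOX column layout
and output filename, a single shared CSV with enough distinct rows for the
whole progression could be generated along these lines.)

    # Minimal sketch: build one shared CSV large enough that JMeter's
    # CSV Data Set never has to roll over within a run, so no request
    # parameters are ever repeated.
    import csv
    import random

    # Request counts per step, as quoted above; their sum is the minimum
    # number of distinct rows the shared CSV must contain.
    requests_per_step = [200, 200, 400, 400, 400, 400, 800, 800]
    total_requests = sum(requests_per_step)  # 3600

    random.seed(42)  # reproducible test data

    # Column layout (minx, miny, maxx, maxy for a WMS GetMap BBOX) and the
    # filename are assumptions for illustration only.
    with open("getmap_bboxes.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for _ in range(total_requests):
            # Hypothetical random BBOX inside a fixed extent; real data
            # would come from the layer actually being benchmarked.
            minx = random.uniform(-180.0, 170.0)
            miny = random.uniform(-90.0, 80.0)
            writer.writerow([round(minx, 4), round(miny, 4),
                             round(minx + 10.0, 4), round(miny + 10.0, 4)])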

Could you please point me to the file you actually used for last year's
test in the SVN repository? None of the files I have retrieved from there
and subsequently fed to JMeter showed this behaviour.

thank you,
--adrian
