[Benchmarking] Testing setup (following up Adrian Custer questions)

Andrea Aime aaime at opengeo.org
Mon Aug 9 07:08:46 EDT 2010


Adrian Custer wrote:
> On Thu, 2010-08-05 at 16:20 +0200, Andrea Aime wrote:
>>> # how will the jmx file be designed? (need one .csv file per thread so all requests are different) 
>> I was planning to modify last year's jmx to have a progression
>> of 1, 2, 4, 8, 16, 32, 64 threads, each group making enough requests
>> to stabilize the average (I'd say 200, 200, 400, 400, 400, 400, 800,
>> 800). As usual I'm open to suggestions, but I'd suggest avoiding
>> too many requests: we have many servers and we cannot afford the
>> total run to take several hours.
>>
>> As far as I know one csv is sufficient: all threads pick the next
>> value from the shared csv as if it were a shared input queue (and
>> roll over to the start if it ends, but we'll generate enough requests
>> to make sure no two requests are ever run in the same session; see
>> the sketches below).
> 
> Could you please point me to the file you actually used for last
> year's test in SVN?

http://svn.osgeo.org/osgeo/foss4g/benchmarking/scripts/mapserver/raster/bluemarble.csv
http://svn.osgeo.org/osgeo/foss4g/benchmarking/scripts/csv/dallas.csv
http://svn.osgeo.org/osgeo/foss4g/benchmarking/scripts/csv/texas.csv
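
In case the input ever needs to be regenerated, here is a minimal
sketch that produces a CSV of unique GetMap bounding boxes. The
semicolon layout (minx;miny;maxx;maxy), the extent and the file name
are assumptions for illustration, not necessarily the format of the
files above:

import csv
import random

# Generate unique request parameters, one per line, so that no two
# requests in the run are ever the same. Layout and extent are
# illustrative assumptions.
FULL_EXTENT = (-180.0, -90.0, 180.0, 90.0)
# Sum of the per-step request counts quoted above:
TOTAL_REQUESTS = 200 + 200 + 400 + 400 + 400 + 400 + 800 + 800  # 3600

def random_bbox(extent, width=0.5, height=0.25):
    minx = random.uniform(extent[0], extent[2] - width)
    miny = random.uniform(extent[1], extent[3] - height)
    return (minx, miny, minx + width, miny + height)

seen = set()
with open("requests.csv", "w", newline="") as out:
    writer = csv.writer(out, delimiter=";")
    while len(seen) < TOTAL_REQUESTS:
        bbox = random_bbox(FULL_EXTENT)
        if bbox not in seen:
            seen.add(bbox)
            writer.writerow(["%.6f" % v for v in bbox])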

> All the files I retrieved from there and subsequently fed to jmeter
> did not show this behaviour.

I actually did not check; I just trusted the documentation:

http://jakarta.apache.org/jmeter/usermanual/component_reference.html#CSV_Data_Set_Config

Quoting:

By default, the file is only opened once, and each thread will use a 
different line from the file. However the order in which lines are 
passed to threads depends on the order in which they execute, which may 
vary between iterations. Lines are read at the start of each test 
iteration. The file name and mode are resolved in the first iteration.
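
So a single shared file should behave like a queue across all the
threads, rolling over at the end. As a mental model only, here is a
small Python emulation of that documented behaviour (this is not
JMeter code, and the file name is the hypothetical one from the sketch
above):

import threading

# Emulate the documented CSV Data Set Config semantics: one shared
# file, each thread takes the next unread line, rolling over at EOF.
class SharedCsv:
    def __init__(self, path):
        with open(path) as f:
            self._lines = f.read().splitlines()
        self._pos = 0
        self._lock = threading.Lock()

    def next_line(self):
        with self._lock:
            line = self._lines[self._pos % len(self._lines)]  # rollover
            self._pos += 1
            return line

source = SharedCsv("requests.csv")

def worker(requests):
    for _ in range(requests):
        params = source.next_line()
        # build and issue the GetMap request from `params` here

threads = [threading.Thread(target=worker, args=(10,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()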

Cheers
Andrea


-- 
Andrea Aime
OpenGeo - http://opengeo.org
Expert service straight from the developers.

