[Benchmarking] WMS benchmarking for large raster formats: presentation at upcoming FOSS4G NA

Michael Billmire mgbillmi at mtu.edu
Tue Apr 9 09:59:36 PDT 2013


Hello WMS benchmarkers!

I'm writing because my abstract "WMS Server Benchmarking for Large Raster
Formats" was accepted for the upcoming FOSS4G North America conference in
Minneapolis at the end of May.

I wanted to make sure this list was made aware of this in hopes of getting
some constructive feedback on our methods or other recommendations for
additional tests or on other software platforms to test. Here's the
abstract in full:

"Prompted by a client's need to serve ~250GB of JPEG2000 imagery, we
evaluated several opensource (MapServer, GeoServer) and several proprietary
(ERDAS ApolloIWS, ArcGIS Server) WMS platforms for usability and speed of
return of large raster datasets.

The four platforms were configured on virtual machines with identical
system specifications. Our test data was a series of 260 images covering
the Great Lakes shoreline that we converted into three formats for
evaluation: a mosaicked TIF, a mosaicked JP2, and platform-specific
virtual mosaics.
Following previous FOSS4G WMS Benchmarking exercises, HTTP return metrics
were evaluated using Apache JMeter. We evaluated return speed at three zoom
levels in order to account for potential differences in serving
highest-resolution vs. overview data.

ArcGIS Server and GeoServer had the fastest return times for the TIF
format, with MapServer also performing well. ERDAS Apollo had slow return
times for the TIF format but was extremely fast with the JP2 format. ERDAS
Apollo was also generally the fastest at returning the virtual mosaic
format, although ArcGIS Server and MapServer had very comparable results.

Taking both usability and performance into account, it is difficult to
identify a clear preference. ERDAS Apollo excelled in speed tests (aside
from the TIF format), but had many usability issues. MapServer and ArcGIS
Server were well-rounded in terms of usability and performance.
GeoServer's usability impressed, though the quality of virtual mosaicking
was low compared to the other platforms."
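
For anyone who wants to replicate the data-preparation step: we built the
mosaics with GDAL. Below is a minimal sketch using GDAL's Python bindings;
the file names, driver choice, and creation options are illustrative
placeholders, not our exact settings.

# Minimal sketch of the mosaic preparation via GDAL's Python API.
# Paths and options are hypothetical placeholders.
import glob

from osgeo import gdal

gdal.UseExceptions()

# Placeholder path to the 260 source tiles
tiles = sorted(glob.glob("shoreline_tiles/*.tif"))

# Virtual mosaic: a lightweight index over the tiles (GDAL's VRT format;
# each server platform also has its own native virtual-mosaic mechanism).
vrt = gdal.BuildVRT("shoreline_mosaic.vrt", tiles)

# Physical mosaics: flatten the tiles into a single TIF and a single JP2.
gdal.Translate("shoreline_mosaic.tif", vrt,
               creationOptions=["TILED=YES", "COMPRESS=DEFLATE"])
gdal.Translate("shoreline_mosaic.jp2", vrt,
               format="JP2OpenJPEG")  # any installed JPEG2000 driver works

vrt = None  # close the dataset so the VRT is flushed to disk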

The notable differences between the "official" benchmarking exercises of
years past and our tests are as follows:
- We evaluated only the serving of raster datasets
- Our tests were 'low stress' for the most part: 100 threads issuing the
same request over 1000 seconds (a rough Python sketch of this setup
follows below)
- Throughput, which seemed to be the primary metric for evaluating server
performance in the previous exercises, ended up being nearly identical for
every test and platform, so we focused instead on the response-time
results from JMeter
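
Here is that rough Python equivalent of the 'low stress' JMeter setup: 100
threads repeating the same GetMap request until a deadline, with response
times recorded. The endpoint, layer name, and bounding box are
placeholders; the actual measurements were done with Apache JMeter.

# Rough Python equivalent of the JMeter setup: N threads repeating the
# same WMS GetMap request for a fixed duration, recording response times.
# Host, layer name, and BBOX are hypothetical, not our real endpoints.
import statistics
import threading
import time
import urllib.parse
import urllib.request

BASE = "http://example.com/wms"  # hypothetical WMS endpoint
PARAMS = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "shoreline_mosaic", "STYLES": "",
    "SRS": "EPSG:4326", "FORMAT": "image/png",
    "WIDTH": "512", "HEIGHT": "512",
    # One of the three zoom levels: a wide bbox is served from overviews,
    # a narrow one forces reads of the full-resolution imagery.
    "BBOX": "-92.0,41.0,-76.0,49.0",
}
URL = BASE + "?" + urllib.parse.urlencode(PARAMS)

THREADS = 100       # concurrent request threads
DURATION = 1000.0   # seconds
times = []          # response times in seconds
lock = threading.Lock()

def worker(deadline):
    # Issue the identical request repeatedly until the deadline passes.
    while time.monotonic() < deadline:
        start = time.monotonic()
        with urllib.request.urlopen(URL) as resp:
            resp.read()  # drain the image body
        elapsed = time.monotonic() - start
        with lock:
            times.append(elapsed)

deadline = time.monotonic() + DURATION
pool = [threading.Thread(target=worker, args=(deadline,))
        for _ in range(THREADS)]
for t in pool:
    t.start()
for t in pool:
    t.join()

print(f"{len(times)} requests, median {statistics.median(times):.3f} s")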

I've already been in communication with Andrea Aime regarding our GeoServer
results (the virtual mosaicking quality problems mentioned in the abstract
turned out to be at least partially, if not entirely, due to user error). If
anyone would like more details or clarification of our methods, or if there
are other platforms you think we should evaluate, please let us know!

cheers,
Mike

-- 
Michael Billmire
Research Scientist
Michigan Tech Research Institute (MTRI)
3600 Green Ct. Suite 100
Ann Arbor, MI 48105

michael.billmire at mtu.edu
work: 734.913.6853
cell: 513.739.0686
fax: 734.913.688