[mapserver-commits] r11292 - trunk/docs/en/development/tests

svn at osgeo.org
Sun Mar 20 16:48:01 EDT 2011


Author: warmerdam
Date: 2011-03-20 13:48:01 -0700 (Sun, 20 Mar 2011)
New Revision: 11292

Modified:
   trunk/docs/en/development/tests/autotest.txt
Log:
updated with [STRIP:] and otherwise brought up to date

Modified: trunk/docs/en/development/tests/autotest.txt
===================================================================
--- trunk/docs/en/development/tests/autotest.txt	2011-03-20 20:18:34 UTC (rev 11291)
+++ trunk/docs/en/development/tests/autotest.txt	2011-03-20 20:48:01 UTC (rev 11292)
@@ -8,7 +8,7 @@
 :Contact:       warmerdam at pobox.com
 :Revision: $Revision$
 :Date: $Date$
-:Last Updated: 2007/8/31
+:Last Updated: 2011/3/20
 
 .. contents:: Table of Contents
     :depth: 2
@@ -72,40 +72,44 @@
 
   python.exe run_test.py
 
-The results in the misc directory might look something like this::
+The results in the gdal directory might look something like this::
 
  warmerda at gdal2200[164]% run_test.py
- version = MapServer version 3.7 (development) OUTPUT=PNG OUTPUT=JPEG OUTPUT=WBMP
- SUPPORTS=PROJ SUPPORTS=TTF SUPPORTS=WMS_SERVER SUPPORTS=GD2_RGB INPUT=TIFF 
- INPUT=EPPL7 INPUT=JPEG INPUT=OGR INPUT=GDAL INPUT=SHAPEFILE
+ version = MapServer version 6.0.0-beta2 OUTPUT=GIF OUTPUT=PNG OUTPUT=JPEG SUPPORTS=PROJ SUPPORTS=AGG SUPPORTS=FREETYPE SUPPORTS=ICONV SUPPORTS=WMS_SERVER SUPPORTS=WMS_CLIENT SUPPORTS=WFS_SERVER SUPPORTS=WCS_SERVER SUPPORTS=SOS_SERVER SUPPORTS=THREADS SUPPORTS=GEOS INPUT=POSTGIS INPUT=OGR INPUT=GDAL INPUT=SHAPEFILE
 
- Processing: rgba_scalebar.map
-    results match.
- Processing: tr_scalebar.map
-    results match.
- Processing: tr_label_rgb.map
-    results match.
- Processing: ogr_direct.map
-    results match.
- Processing: ogr_select.map
-    results match.
- Processing: ogr_join.map
-    results match.
- Test done:
-    0 tested skipped
-    6 tests succeeded
-    0 tests failed
-    0 test results initialized
+ Processing: grayalpha.map
+     results match.
+ Processing: class16.map
+     result images match, though files differ.
+ Processing: nonsquare_multiraw.map
+     results match.
+ Processing: bilinear_float.map
+     results match.
+ Processing: processing_scale_auto.map
+     results match.
+ Processing: grayalpha_plug.map
+     result images match, though files differ.
+ Processing: processing_bands.map
+     results match.
+ ...
+ Processing: 256color_overdose_cmt.map
+     results match.
+ Test done (100.00% success):
+ 0 tested skipped
+ 69 tests succeeded
+ 0 tests failed
+ 0 test results initialized
 
 In general you are hoping to see that no tests failed.
 
+
 Checking Failures
 ------------------------------------------------------------------------------
 
 Because most msautotest tests are comparing generated images to expected
 images, the tests are very sensitive to subtle rounding differences on
-different systems, and subtle rendering changes in libraries like freetype
-and gd. So it is quite common to see some failures.
+different systems, and subtle rendering changes in libraries like freetype,
+gd, and agg. So it is quite common to see some failures.
 
 These failures then need to be reviewed manually to see if the differences
 are acceptable and just indicating differences in rounding/rendering or
@@ -115,17 +119,7 @@
 that allows images to be layers, and toggled on and off to visually
 highlight what is changing. OpenEV can be used for this.
 
-PerceptualDiff
-------------------------------------------------------------------------------
 
-If you install the PerceptualDiff program (http://pdiff.sourceforge.net/) and
-it is in the path, then the autotest will attempt to use it as a last fallback
-when comparing images. If images are found to be "perceptually" the same the
-test will pass with the message "result images perceptually match, though
-files differ." This can dramatically cut down the number of apparent failures
-that on close inspection are for all intents and purposes identical. Building
-PerceptualDiff is a bit of a hassle.
-
 Background
 ------------------------------------------------------------------------------
 
@@ -158,9 +152,45 @@
   dependent components in the future (there is a start on this in the mspython
   directory).
 
-Test Script Internal Functioning
-..............................................................................
 
+Result Comparisons
+------------------------------------------------------------------------------
+
+For shp2img tests, the output file generated by processing a map is put in
+results/<mapfilebasename>.png (regardless of whether it is PNG or not). So
+when gdal/256_overlay_res.map is processed, the output file is written to
+gdal/results/256_overlay_res.png. This is compared to
+gdal/expected/256_overlay_res.png. If they differ the test fails, and the
+"wrong" result file is left behind for investigation. If they match, the
+result file is deleted. If there is no corresponding expected file, the
+result file is moved to the expected directory (and reported as an
+"initialized" test) ready to be committed to SVN.
+
+For tests using RUN_PARMS, the output filename is specified in the RUN_PARMS
+statement, but otherwise the comparisons are done similarly.
+
+The initial comparison of files is done as a binary file comparison.  If
+that fails, for image files, there is an attempt to compare the image checksums
+using the GDAL Python bindings.  If the GDAL Python bindings are not available
+this step is quietly skipped.  
+
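The comparison cascade described above can be sketched in Python roughly as
follows. This is a simplified illustration only, not the harness's actual
code; ``images_match`` is a hypothetical helper name, and the PerceptualDiff
fallback described below is omitted::

```python
import filecmp

def images_match(result_path, expected_path):
    """Compare a result image against the expected image, roughly
    following the cascade described above (hypothetical helper)."""
    # Step 1: exact binary comparison.
    if filecmp.cmp(result_path, expected_path, shallow=False):
        return True
    # Step 2: compare per-band raster checksums with the GDAL Python
    # bindings; if they are not installed this step is quietly skipped.
    try:
        from osgeo import gdal
    except ImportError:
        return False
    def checksums(path):
        ds = gdal.Open(path)
        if ds is None:
            return None
        return [ds.GetRasterBand(b + 1).Checksum()
                for b in range(ds.RasterCount)]
    left, right = checksums(result_path), checksums(expected_path)
    return left is not None and left == right
```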
+If you install the PerceptualDiff program (http://pdiff.sourceforge.net/) and
+it is in the path, then the autotest will attempt to use it as a last fallback
+when comparing images. If images are found to be "perceptually" the same the
+test will pass with the message "result images perceptually match, though
+files differ." This can dramatically cut down the number of apparent failures
+that on close inspection are for all intents and purposes identical. Building
+PerceptualDiff is a bit of a hassle, and it will miss some significant
+differences, so its use is of mixed value.
+
+For non-image results, such as xml and html output, the special image 
+comparisons are skipped. 
+
+
+
+REQUIRES - Handling Build Options
+------------------------------------------------------------------------------
+
 Because MapServer can be built with many possible extensions, such as support
 for OGR, GDAL, and PROJ.4, it is desirable to have the testsuite automatically
-detect which tests should be run based on the configuratio of MapServer. This
+detect which tests should be run based on the configuration of MapServer. This
@@ -177,25 +207,19 @@
 
 In addition, individual .map files can have additional requirements expressed
 as a REQUIRES: comment in the mapfile. If the requirements are not met the map
-will be skipped (and listsed in the summary as a skipped test). For example
+will be skipped (and listed in the summary as a skipped test). For example
 gdal/256_overlay_res.map has the following line to indicate it requires
 projection support (in addition to the INPUT=GDAL and OUTPUT=PNG required by
 all files in the directory):
 
 ::
 
-    REQUIRES: SUPPORTS=PROJ
+    # REQUIRES: SUPPORTS=PROJ
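
The REQUIRES check amounts to matching the tokens in the comment against the
tokens reported in the MapServer version line. The following Python sketch is
illustrative only; ``requires_met`` and its parsing details are assumptions,
not the harness's actual code::

```python
import re

def requires_met(mapfile_text, version_line):
    """Check a mapfile's '# REQUIRES:' comments against the tokens
    (e.g. SUPPORTS=PROJ, INPUT=GDAL) in the MapServer version output."""
    have = set(version_line.split())
    for match in re.finditer(r"#\s*REQUIRES:\s*(.+)", mapfile_text):
        for token in match.group(1).split():
            if token not in have:
                return False  # requirement missing -> skip this test
    return True
```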
 
-The output files generated by processing a map is put in the file
-results/<mapfilebasename>.png (regardless of whether it is PNG or not). So
-when gdal/256_overlay_res.map is processed, the output file is written to
-gdal/results/256_overlay_res.png. This is compared to
-gdal/expected/256_overlay_res.png. If they differ the test fails, and the
-"wrong" result file is left behind for investigation. If they match the result
-file is deleted. If there is no corresponding expected file the results file
-is moved to the expected directory (and reported as an "initialized" test)
-ready to be committed to CVS.
 
+RUN_PARMS: Tests not using shp2img
+------------------------------------------------------------------------------
+
 There is also a RUN_PARMS keyword that may be placed in map files to override
 a bunch of behaviour. The default behaviour is to process map files with
 shp2img, but other programs such as mapserv or scalebar can be requested, and
@@ -219,15 +243,52 @@
 
 For web services that generate images that would normally be prefixed with the
-Content-type header, use [RESULT_NOMIME] to instruct the test harnass to
-script off any http headers before doing the comparison.
+Content-type header, use [RESULT_NOMIME] to instruct the test harness to
+strip off any http headers before doing the comparison.  This is particularly
+valuable for image results so the files can be compared using special image
+comparisons.
 
 ::
 
     # Generate simple PNG.
     # RUN_PARMS: wcs_simple.png [MAPSERV] QUERY_STRING='map=[MAPFILE]&SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCoverage&WIDTH=120&HEIGHT=90&FORMAT=GDPNG&BBOX=0,0,400,300&COVERAGE=grey&CRS=EPSG:32611' > [RESULT_DEMIME]
 
+Result File Preprocessing
+------------------------------------------------------------------------------
+
+As mentioned above the [RESULT_DEMIME] directive can be used for image file
+output from web services (ie. WMS GetMap requests).  
+
+For text, XML and HTML output it can also be helpful to apply other 
+pre-processing to the output file to make comparisons easier.  The
+[RESULT_DEVERSION] directive in the RUN_PARMS will apply several translations
+to the output file including:
+
+- stripping out the MapServer version string which changes depending on build 
+  options and version.
+
+- manipulating the format of exponential numbers to be consistent across 
+  platforms (changes windows e+0nn format to e+nn).
+
+- strip the last decimal place off floating point numbers to avoid 
+  unnecessary sensitivity to platform specific number handling.
+
+- blank out timestamps to avoid "current time" sensitivity. 
+
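The translations listed above can be approximated in Python as below. The
regular expressions are illustrative guesses at the patterns involved, not
the harness's actual implementation::

```python
import re

def deversion(text):
    """Approximate the [RESULT_DEVERSION] translations (illustrative
    patterns only -- the real harness may differ in detail)."""
    # Strip the MapServer version banner, which varies by build options.
    text = re.sub(r"MapServer version [^\n]*", "MapServer version", text)
    # Normalize Windows-style exponents: e+012 -> e+12.
    text = re.sub(r"([eE][+-])0(\d\d)\b", r"\1\2", text)
    # Drop the last decimal place of floating point numbers.
    text = re.sub(r"(\d+\.\d+)\d\b", r"\1", text)
    # Blank out ISO-style timestamps to avoid "current time" noise.
    text = re.sub(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", "TIMESTAMP", text)
    return text
```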
+In some cases it is also helpful to strip out lines matching a particular
+pattern.  The [STRIP:xxx] directive drops all lines containing the indicated 
+substring.  Multiple [STRIP:xxx] directives may be included in the command 
+string if desired.  For instance, the error reports from the runtime 
+substitution validation test (misc/runtime_sub.map) produce error messages
+with an absolute path in them, which changes for each person.  The following
+directive will drop any text lines in the result that contain the string
+ShapefileOpen, which will be error messages in this case::
+
+ # RUN_PARMS: runtime_sub_test001.txt [MAPSERV] QUERY_STRING='map=[MAPFILE]
+ &mode=map&layer=layer1&name1=bdry_counpy2' > [RESULT_DEVERSION] [STRIP:ShapefileOpen] 
+
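The [STRIP:xxx] filtering itself amounts to dropping matching lines, along
these lines (a sketch under the assumptions above; ``apply_strips`` is a
hypothetical name, not the harness's actual function)::

```python
def apply_strips(text, substrings):
    """Drop every line containing any of the [STRIP:xxx] substrings."""
    kept = [line for line in text.splitlines()
            if not any(sub in line for sub in substrings)]
    return "\n".join(kept)
```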
+
 What If A Test Fails?
-..............................................................................
+------------------------------------------------------------------------------
 
 When running the test suite, it is common for some tests to fail depending on
 vagaries of floating point on the platform, or harmless changes in
@@ -250,6 +311,9 @@
 - Add a high level script in the msautotest directory that runs the subscripts
   in all the subdirectories and produces a summary report.
 
+- Add something to run tests on the server and report on changes.
+
+
 Adding New Tests
 ------------------------------------------------------------------------------
 
@@ -282,9 +346,8 @@
   the results/<testname>.png file. Move the results/<testname>.png file into
   the expected directory when you are happy with it.
  
-- add the .map file, and the expected/<testname>.png file to CVS when you are
-  happy with them. Make sure the .png file (and any supporting data files) are
-  marked as binary files. For example,
+- add the .map file, and the expected/<testname>.png file to SVN when you are
+  happy with them.  For example,
 
 ::
 


