[SoC] Weekly report 9: Testing framework for GRASS GIS

Vaclav Petras wenzeslaus at gmail.com
Fri Jul 18 08:51:19 PDT 2014

1. What did you get done this week?

I added assert methods for testing vector map equality based on v.info
output and on the difference of GRASS vector ASCII files, most notably the
assertVectorEqualsVector() and assertVectorEqualsAscii() methods.
Comparison of two vector maps is therefore now possible. I also added
assert methods for 3D rasters, but tests for some of them are failing, and
some refactoring will be needed in the future.
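The ASCII-based comparison idea can be illustrated with a standalone
sketch. This is a hypothetical, simplified helper for illustration only,
not the actual gunittest implementation; the real methods work with GRASS
maps and v.out.ascii output, and the name ascii_maps_equal is made up here.

```python
def ascii_maps_equal(ascii_a, ascii_b):
    """Compare two GRASS vector ASCII dumps given as strings.

    Simplification: maps are considered equal when their dumps contain
    the same non-empty lines after stripping whitespace, regardless of
    feature order.
    """
    def normalize(text):
        return sorted(
            line.strip() for line in text.splitlines() if line.strip()
        )
    return normalize(ascii_a) == normalize(ascii_b)

a = "P 1 1\n 10 20\n"
b = " P 1 1\n10 20\n"
print(ascii_maps_equal(a, b))  # True: same features, different whitespace
```

A real assert method would additionally report *where* the dumps differ
rather than just returning a boolean.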

Additionally, I included two experimental methods for vector comparison
based on buffering and overlays, but I'm not sure whether these are more
useful than the ones based on ASCII files and v.info. Perhaps they could
serve as inspiration for reporting how two vectors differ when they are
not equal.

There are no special methods for testing temporal datasets yet, but
comparison of key-value outputs is applicable.
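The key-value comparison mentioned above can be sketched as follows. This
is an illustrative standalone function, assuming the shell-script style
"key=value" text that several GRASS modules print; parse_key_value is a
hypothetical name, not the framework's actual API.

```python
def parse_key_value(text, sep="="):
    """Parse key=value lines into a dict, skipping malformed lines."""
    result = {}
    for line in text.splitlines():
        if sep in line:
            key, value = line.split(sep, 1)
            result[key.strip()] = value.strip()
    return result

expected = "north=220750\nsouth=220000\n"
actual = "south=220000\nnorth=220750\n"
# Once parsed into dicts, the order of lines no longer matters.
print(parse_key_value(expected) == parse_key_value(actual))  # True
```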

By the way, there are currently 19 passing test files and 5 failing ones
(each test file contains multiple tests). And thanks to the tests I also
stumbled upon a bug revealed by stricter type checking (the fix should be
in r61272).

2. What do you plan on doing next week?

The plan was to implement basic test result reports. I've already done
part of this, so next I will improve report generation for individual test
files and add different summaries to the main report page (which
summarizes all test files).

The structure of the report mirrors the source code (where we look for the
`testsuite` directories). The difference is that the `testsuite` directory
is replaced by several directories, one for each test file. Each of these
directories has its own `index.html`. The test case classes and methods
will probably not have their own directories.
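The mapping from a test file to its report page might look like the sketch
below. This is my illustrative reading of the layout described above, not
the actual implementation; the function name and the `testreport` root are
assumptions (the root matches the `testreport/index.html` path shown in
the usage notes at the end of this mail).

```python
import posixpath

def report_index_path(test_file, report_root="testreport"):
    """Map a test file path to the index.html of its report directory.

    The `testsuite` component is replaced by a directory named after
    the test file (without the .py extension), as described above.
    """
    directory, name = posixpath.split(test_file)
    parts = [p for p in directory.split("/") if p]
    if parts and parts[-1] == "testsuite":
        parts = parts[:-1]
    base = posixpath.splitext(name)[0]
    return posixpath.join(report_root, *parts, base, "index.html")

print(report_index_path("raster/r.gwflow/testsuite/validation_7x7_grid.py"))
# testreport/raster/r.gwflow/validation_7x7_grid/index.html
```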

The `index.html` of one test file is the central page of the report. It
contains basic information about individual tests (including descriptions)
with links to more details such as stdout, stderr, and additional files
created during the test. Information about tested modules will be gathered
separately (using test metadata or logging rather than information about a
directory).

The primary goal, however, is to add parseable outputs such as a key-value
file with the number of successful and failed tests. I'm also considering
XML in the XUnit test result format, which is understood by some
applications and report tools (although the definition of this XML format
does not seem to be clearly standardized).
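The two parseable outputs could be as small as the sketch below. This is a
hedged illustration, not the planned implementation: the key names
(tests_total etc.) and the minimal XUnit-style attributes are my
assumptions, and as noted above the XUnit XML schema itself is not clearly
standardized.

```python
import xml.etree.ElementTree as ET

def keyvalue_summary(passed, failed):
    """Machine-readable key-value summary of one test run."""
    total = passed + failed
    return "tests_total={0}\ntests_passed={1}\ntests_failed={2}\n".format(
        total, passed, failed)

def xunit_summary(passed, failed, name="gunittest"):
    """The same counts as a minimal JUnit/XUnit-style <testsuite> element."""
    suite = ET.Element("testsuite", name=name,
                       tests=str(passed + failed), failures=str(failed))
    return ET.tostring(suite, encoding="unicode")

# Using the counts from this week's run (19 passing, 5 failing files):
print(keyvalue_summary(19, 5))
print(xunit_summary(19, 5))
```

A key-value file like this is trivial to parse from shell or Python, while
the XML variant is what CI tools such as Jenkins typically consume.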

It might also be beneficial to write a script that gathers information
from different test runs.

3. Are you blocked on anything?

I'm not really blocked, but there are a few things on the testing
framework to-do list that I might not be able to solve in the coming
weeks.

First, distribution is still not completely solved (see the previous
week's discussion). Second, the system for different locations and data
requirements still needs to be tested. Third, I have postponed integration
with the build system and improvements to the command line interface.

Also, some tests, including the one comparing 3D raster maps, are failing
for me with the following error (the same as in #2074):

ERROR: Rast3d_get_double_region: error in Rast3d_get_tile_ptr.
Region coordinates x 0 y 0 z 0 tile index 0 offset 0



To compile the latest documentation:
  cd grass-source-code-root-dir
  make sphinxdoc
  # open file "dist.../docs/html/libpython/gunittest_testing.html"

To run all tests in all subdirectories in a given location:
  python -m grass.gunittest.main gisdbasepath nc_smp_location_name nc
  # open file "testreport/index.html"

To run one test file in the current location and mapset:
  cd raster/r.gwflow/testsuite/
  python validation_7x7_grid.py
  # see the output in terminal (stdout and stderr)

Previous week discussion: