[GRASS-dev] GSoC idea: Testing framework
Sören Gebbert
soerengebbert at googlemail.com
Sun Feb 2 06:46:51 PST 2014
Hi,
2014-01-31 Vaclav Petras <wenzeslaus at gmail.com>:
> Hi all,
>
> I would like to apply to GSoC this year with the idea of testing framework
> for GRASS. I probably don't have to explain the need for it.
This is really great. IMHO we desperately need it!
>
> Sören suggested that he would be my mentor in case my application is
> successful. I hope also that I will get the feedback from all developers,
> now or in the future, because it is crucial that GRASS developers will
> actually use this framework.
I would be happy to be your mentor for this project.
> I described the idea shortly on the Trac GSoC 2014 wiki page. I plan to
> include more notes on a separate page in the next weeks, but the basic idea
> should be clear. Some discussion is also in #2105. Perhaps the most
> innovative idea is that different types of tests should be supported
> (e.g. Python doctest and shell scripts), although it would always be driven
> from Python. For example, it seems that doctest is very convenient for
> modules which have standard input and output (see the recent doctest for
> the r.category module).
Here are some suggestions, reflecting some already mentioned concepts
and ideas: I would avoid implementing tests for modules using
doctests, and I would not support shell script tests at all.
Why no shell script tests?
If you use shell script tests, you will not have fine-grained control
over how modules are called within the script, and you can't switch
valgrind support on or off when calling modules. The input/output
validation must be implemented in the shell scripts themselves. This
will lead to a lot of redundant shell code, since the validation
should be done by the test framework, which is written in Python. You
cannot determine from outside the script what the pre- and
postprocessing steps are, or which location should be used for
testing, without parsing the scripts and defining a special shell
script syntax. How do you determine that a test failed? Using return
values, or by parsing the script output? It is really hard to
determine afterwards at what point a shell test failed, or what the
reason was, without reading massive amounts of logged output. But
most important: shell script tests do not work on Windows.
I will successively rewrite all of the shell script tests that I have
written (about 90% of them) to use the new test framework and Python
unittests.
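To illustrate the fine-grained-control argument above, here is a
rough sketch of how a Python-side runner could make valgrind a single
switch instead of duplicated shell code. The run_module helper and
its use_valgrind flag are invented for this example, not an existing
API; the demo call runs the Python interpreter as a stand-in for a
GRASS module:

```python
import subprocess
import sys

def run_module(cmd, use_valgrind=False):
    # Hypothetical framework helper: the framework, not each test
    # script, decides how a module process is launched and how its
    # output is captured for validation.
    if use_valgrind:
        cmd = ["valgrind", "--error-exitcode=99"] + cmd
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout

# The same test runs unchanged with or without valgrind; a shell
# script would have to duplicate this launch logic in every file.
rc, out = run_module([sys.executable, "-c", "print('r.category ok')"])
```

Validation then lives in one place in the framework, and a test never
needs to know how its process was started.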
Why no doctests for modules?
Unfortunately, r.category is not a good example of a module doctest:
there is no syntax highlighting by default in any editor. :)
How do you define the target location? How do you define the pre- and
postprocessing steps, so that the test breaks if the
preprocessing/setup goes wrong? If the preprocessing/setup fails, all
following dependent tests will fail as well, which should be avoided.
IMHO doctests are well suited to implementing tests for Python library
functions and classes that do not need any specific location/data
setup. Here is a good example[1], and here a bad example[2] that
should be rewritten as unittests.
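As an illustration of the library case, a self-contained doctest
needs no location or data setup at all. The helper below is invented
for this example (it is not a GRASS function), but it shows the kind
of pure function where doctests shine:

```python
def parse_map_name(name, default_mapset="PERMANENT"):
    """Split a fully qualified map name into (name, mapset).

    >>> parse_map_name("elevation@user1")
    ('elevation', 'user1')
    >>> parse_map_name("elevation")
    ('elevation', 'PERMANENT')
    """
    if "@" in name:
        base, mapset = name.split("@", 1)
        return base, mapset
    return name, default_mapset

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

The doctest doubles as documentation, and nothing in it depends on a
running GRASS session.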
For all other cases I strongly suggest using unittests. Here is an
example[3] that makes use of the class- and object-specific setup and
teardown concepts.
How should the framework work?
It should be possible to call tests from within GRASS by simply
invoking "python test.py" or "python.exe test.py" in the test suite
directory of the module.
It should be possible to invoke tests using the make system outside of
GRASS. Hence, calling "make tests" in a module directory will perform
all tests that are located in the module's test suite directory.
Invoking "make tests" in "lib/" will perform all library tests, and so
on.
The GRASS test framework should provide an interface to define which
location should be used for testing. If no location was provided, the
current location will be used when the test was invoked from inside
GRASS, or the demolocation when the make system was used.
The test framework will always create a temporary mapset in which the
test will be performed. The test framework will clean up the temporary
files at the end, but the test developer can tell the test framework
to skip this. Any existing maps that should be used for testing must
be located in the PERMANENT mapset of the target location.
When writing unittests, the target location will be set at import
time, and so will the cleanup behavior:
{{{
import grass.testsuite as testsuite
testsuite.set_target_location("nc")  # Shortcut for the North Carolina location
testsuite.set_clean_up(True)  # Set to False to investigate the created output
}}}
The location or mapset switch will be performed while running the
unittest. Parts of this functionality are already implemented in the
wps-grass-bridge[4]. Hence a testsuite.init() call is needed in the
unittest scripts:
{{{
if __name__ == '__main__':
    testsuite.init()
    unittest.main()
}}}
The make system can simply start GRASS using the demolocation and
invoke Python to run the tests. Location and mapset switching will be
done by the test framework when init() is called.
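Roughly, the mapset switch inside init() could boil down to rewriting
the GISRC file of the current session. The sketch below is a heavy
simplification (a real implementation must also create the mapset
directory and its WIND file, and testsuite.init itself is a proposal,
not existing code); the demo fabricates a GISRC file so the rewrite
can be shown in isolation:

```python
import os
import re
import tempfile

def switch_mapset(gisrc_path, new_mapset):
    # Point the session at another mapset by rewriting the MAPSET
    # entry of the GISRC file (simplified sketch, not the full
    # session handling a real framework would need).
    with open(gisrc_path) as f:
        content = f.read()
    content = re.sub(r"(?m)^MAPSET:.*$", "MAPSET: " + new_mapset, content)
    with open(gisrc_path, "w") as f:
        f.write(content)

# Demonstrate on a fabricated GISRC file.
gisrc = os.path.join(tempfile.mkdtemp(), "gisrc")
with open(gisrc, "w") as f:
    f.write("GISDBASE: /grassdata\nLOCATION_NAME: nc\nMAPSET: PERMANENT\n")
switch_mapset(gisrc, "tmp_test_mapset")
```

Because only the framework touches the session state, every test gets
its temporary mapset the same way, regardless of how it was started.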
Sorry for this huge mail; I think we should rather use the trac wiki
for detailed discussion. :)
Best regards
Soeren
[1] http://trac.osgeo.org/grass/browser/grass/trunk/lib/python/temporal/temporal_extent.py#L30
[2] http://trac.osgeo.org/grass/browser/grass/trunk/lib/python/temporal/space_time_datasets.py#L21
[3] http://trac.osgeo.org/grass/browser/grass/trunk/lib/python/temporal/unittests_register.py
[4] https://code.google.com/p/wps-grass-bridge/source/browse/trunk/gms/GrassModuleStarter.py#271
>
> Best regards,
> Vaclav
>
>
> http://trac.osgeo.org/grass/wiki/GSoC/2014#TestingframeworkforGRASSGIS
> http://trac.osgeo.org/grass/ticket/2105
> http://trac.osgeo.org/grass/browser/grass/trunk/raster/r.category/test_rcategory_doctest.txt