[GRASS-dev] [GRASS GIS] #2105: Missing guidelines for testing
GRASS GIS
trac at osgeo.org
Mon Oct 28 22:11:20 PDT 2013
#2105: Missing guidelines for testing
--------------------------------------------------------+-------------------
Reporter: wenzeslaus | Owner: grass-dev@…
Type: task | Status: new
Priority: normal | Milestone: 7.0.0
Component: Tests | Version: svn-trunk
Keywords: testing, tests, PyUnit, doctest, testsuite | Platform: All
Cpu: Unspecified |
--------------------------------------------------------+-------------------
Comment(by hamish):
Running a test of a C module is very easy and clear in shell script.
Any added complication which Bourne scripting may have trouble with
quickly becomes a test of that complication and not of the module you
are trying to test, regardless of the language. So any non-module
complications should be avoided, and thus shouldn't be a factor in
the test_suite design.
Follow the K.I.S.S. principle: for the C module tests, ~three lines
of shell script plus comments are much more readable than 10-30 lines
of Python boilerplate, IMHO.
The Makefiles and build system require the UNIX command line tools
already so the cross-platform argument is moot, especially if you
want to run it from a 'make test'.
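As a concrete illustration of the "~three lines of shell" pattern, here is a
minimal sketch; the `echo` and the expected values are invented stand-ins for
a real module call (e.g. r.stats on a standard dataset), so the sketch is
self-contained:

```shell
# Minimal shell test pattern: run the module, compare against a known-good
# answer, report pass/fail.  `echo` simulates the module call here; a real
# test would do something like: actual=$(r.stats -c input=elevation)
expected="100 200 300"
actual=$(echo "100 200 300")
[ "$actual" = "$expected" ] && echo "PASS" || { echo "FAIL"; exit 1; }
```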
That's not to say that results couldn't be written to a file like
the build failures are, and then some Python script or otherwise
could collect them into an HTML page. (I'd just focus on the failures;
scrolling past 350 passing "green" modules to find the one "red"
one serves little purpose, IMO.) Or just print failures at the end
of the build log, like missing man pages and failed module builds.
Simple, simple, simple, visible, and to the point.
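The "focus on the failures" idea can be as simple as grepping a results log;
the log name and contents below are invented for illustration:

```shell
# Print only the failing modules from a results log, the same way missing
# man pages are reported at the end of the build log.  The file name and
# its contents are hypothetical.
printf 'r.cost: PASS\nr.stats: FAIL\nr.mapcalc: PASS\n' > test_results.log
grep 'FAIL' test_results.log    # prints only the red one: "r.stats: FAIL"
rm -f test_results.log
```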
For valgrind results, sure, a webpage table on adhoc or a new osgeo
buildbot VM would be great.
see e.g. http://www.gpsdrive.de/build_cluster/results.shtml
For inspiration, you will find various test_suite/ directories in
raster/, with example runs based on either standard datasets or
synthetic maps; e.g. see r.cost, r.stats, and r.mapcalc. An exhaustive
set of tests of all flag and option combinations is probably not
possible; we just have to select something representative for each
module.
The idea so far with those was just to collect some working example
for each of the C modules, and decide on how to automate it all later.
The grass-addons/grass6/general/g.md5sum module was written with test
suites in mind; see its help page. There is also
grass-addons/grass7/general/g.compare.md5 to look at, but beware:
different architectures, CPU models, compilers, OSs, etc. may handle
floating-point precision in slightly different ways. So instead of a
results hash like g.md5sum could store, you'd want to store a copy of
a full results map for each module, then compare it with the new
output and check that no difference exceeds some epsilon. The disk use
could be big.. And what should the threshold be? Should the double
precision be rounded from %.17g to %.15g to mask any local FP
implementation differences? It's all a bit sloppy.. For G6's g.md5sum
I took the slightly reduced precision approach, but since r.out.ascii
uses %.*f not %.*g for dp=, that's not right yet either.
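A sketch of the epsilon comparison described above, assuming the reference
and result maps have been dumped to plain ASCII (e.g. via r.out.ascii). The
file names, cell values, and the 1e-6 threshold are all placeholders; the
printf lines simulate the dumps so the sketch is self-contained:

```shell
# Compare a stored reference dump against a fresh result cell by cell,
# passing only if no absolute difference exceeds an epsilon (here 1e-6,
# an arbitrary placeholder).  The printf lines simulate r.out.ascii dumps.
printf '1.0 2.5 3.25\n' > reference.txt
printf '1.000000001 2.5 3.25\n' > result.txt   # tiny cross-platform FP drift

awk 'NR==FNR { for (i = 1; i <= NF; i++) ref[FNR" "i] = $i; next }
     { for (i = 1; i <= NF; i++) {
           d = $i - ref[FNR" "i]; if (d < 0) d = -d
           if (d > 1e-6) bad = 1
       } }
     END { print (bad ? "FAIL" : "PASS"); exit bad }' reference.txt result.txt

rm -f reference.txt result.txt
```

The drift above stays below the threshold, so this prints PASS; a genuinely
different cell value would flip it to FAIL with a nonzero exit code.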
> Trac wiki syntax does not support our tradition of references done
> through `[1]`. The trac syntax is becoming an issue for us,
not sure what to say except to try something else like "1." or (1)
when in the trac.. easy to do, and it helps to keep the mind flexible. :-)
Another thing: remember to backtick any mention of `r3.*` modules,
since rev 3 of the svn (CVS) was a huge commit.
2c & regards,
Hamish
--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2105#comment:6>
GRASS GIS <http://grass.osgeo.org>
More information about the grass-dev mailing list