[SoC] Weekly report 11: Testing framework for GRASS GIS

Vaclav Petras wenzeslaus at gmail.com
Fri Aug 1 11:02:59 PDT 2014


1. What did you get done this week?

The original plan was to rewrite some existing tests, but I decided that
it would be more beneficial to work on the reports, to get them online,
and to enable other people to do the same. In particular, I improved
report generation for tests executed at different times and in different
locations. As a proof of concept, I created a cron job to run the tests
every day. Results are published at:

http://fatra.cnr.ncsu.edu/grassgistests/
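
A crontab entry for such a daily run could look roughly like this (the
script and log paths are made up for illustration; the actual setup is
linked at the end of this report under "Example of cron job"):

    # hypothetical crontab entry: run all tests at 7:00 every day
    # and append the test runner's output to a log file
    0 7 * * * /home/tester/run_grass_tests.sh >> /home/tester/cron.log 2>&1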

The whole report structure is really general, so here are some links to
more focused pages. Here is a page with an overview of results for tests
running in one location:

http://fatra.cnr.ncsu.edu/grassgistests/summary_report/nc/

Here is a page with a list of all testsuites for a given location and date:

http://fatra.cnr.ncsu.edu/grassgistests/reports_for_date-2014-08-01-07-00/report_for_nc_spm_08_grass7_nc/testsuites.html

Here is a page for one testsuite for a given location and date. It
contains a list of test files and an overview of test and code authors.

http://fatra.cnr.ncsu.edu/grassgistests/reports_for_date-2014-08-01-07-00/report_for_nc_spm_08_grass7_nc/lib/python/temporal/index.html

Finally, here is an example of the most useful page. It is a result page
for one test file, containing the general status, the number of individual
tests, and the error output if tests are failing. The error output
contains information about the individual tests.

http://fatra.cnr.ncsu.edu/grassgistests/reports_for_date-2014-08-01-07-00/report_for_nc_spm_08_grass7_nc/lib/python/gunittest/test_assertions_rast3d/index.html

When browsing the test results, it is important to remember the system of
tests described in the documentation. Individual tests are methods of test
case classes. One or more test case classes are in a test file, which is a
Python file executable as a standalone script. Each directory in the
source code can contain a directory named testsuite, which can contain one
or more Python files called test files. The automated testing scripts go
through the source code and execute all test files in all testsuite
directories.
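
To make this structure concrete, here is a minimal sketch of a test file
(the import paths and the tested module call are assumptions based on the
current gunittest layout and the NC sample dataset; details may differ):

    # testsuite/test_example.py -- a hypothetical test file
    from grass.gunittest.case import TestCase
    from grass.gunittest.main import test


    class TestElevationInfo(TestCase):
        # a test case class; each test_* method is one individual test

        def test_info_runs(self):
            # assertModule runs a GRASS module and fails the test if
            # the module does not succeed; 'elevation' is a raster map
            # from the NC sample dataset
            self.assertModule("r.info", map="elevation")


    if __name__ == "__main__":
        # makes the file executable as a standalone script
        test()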

I again did not focus on the CSS of the HTML reports. Also, the content
is only what is necessary; for example, the summary tables could be
shown on more pages.

There are currently 21 successfully running test files and 6 failing
test files, 10 successfully running testsuites and 4 failing testsuites,
and 751 successfully running tests and 46 failing tests (518 of the tests
are from the PyGRASS module test). This applies to the NC SPM location
and to the configuration of the system on the computer where the tests
were executed.

When creating scripts for more complicated test executions, I improved
the current system with a better, more general API and better handling of
some special situations. I also improved the key-value summaries with
aggregated test results and introduced functions which shorten the file
paths in outputs, so that you don't have to read through long, useless
paths (useless especially when the result is published online).
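
The shortening can be pictured as making report paths relative to the
source tree root, roughly like this (the function name and the paths are
hypothetical; the actual implementation may differ):

    import os

    def shorten_path(path, base):
        # hypothetical helper: return the path relative to the source
        # tree root instead of the full absolute path of the test run
        return os.path.relpath(path, start=base)

    # prints 'lib/python/gunittest/testsuite/test_x.py'
    print(shorten_path(
        "/home/tester/grass/trunk/lib/python/gunittest/testsuite/test_x.py",
        "/home/tester/grass/trunk"))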

I did not introduce any XML or JSON results since there is no immediate
use for them (and they can always be generated from the existing
key-value files, assuming the same amount of information). When creating
the summary reports from other summaries, I have seen how database
storage would be useful, because it would allow powerful queries at all
levels of the report without the need to summarize over and over again
manually. Even working at the level of individual tests, with hundreds of
tests from different times, would not be a problem. SQLite seems like a
good choice here except for its issues with parallelization, and since
parallelization (e.g. on the level of tests in different locations) is
something we want to do, another database (e.g. PostgreSQL) would be a
better choice. Although I think that it would be pretty straightforward,
I cannot make this within GSoC.
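
To illustrate why JSON can be generated later, a converter for the
key-value files could look like this (the 'key=value' line format is
assumed here for illustration; the real summary files may differ):

    import json

    def keyvalue_to_json(text):
        # hypothetical converter from 'key=value' lines to JSON
        result = {}
        for line in text.splitlines():
            if "=" in line:
                key, value = line.split("=", 1)
                result[key.strip()] = value.strip()
        return json.dumps(result, indent=2)

    print(keyvalue_to_json("tests=751\nfailures=46"))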

It should also be noted that I haven't tried any third-party
build/test/publish service. From what I have seen so far, it seems that
more steps would be needed to integrate with something like, for example,
a GitHub repository. Also, the (XML) format of test results might be an
issue, which I discussed last time. Finally, I was not able to find out
how large sets of testing data are handled. That's why I have decided to
create a less sophisticated but sufficient custom solution.

2. What do you plan on doing next week?

The plan for week 12 is to work on documentation, and that is what I
intend to do.

Also, I need to add some information to the reports, most importantly
information about the platform and configuration, and also descriptive
names for the test reports. This might be harder since these data might
come from outside, so they have to be passed from the top to the very
bottom, and I should consider carefully what is really needed.
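
The platform and configuration information could be gathered at the top
level roughly like this (which keys are collected, and how they are
passed down to the report writers, is still to be decided; this is only
a sketch):

    import platform
    import sys

    def collect_platform_info():
        # a sketch of the platform details to be passed down into the
        # report writers; the set of keys is an assumption
        return {
            "system": platform.system(),
            "release": platform.release(),
            "machine": platform.machine(),
            "python": sys.version.split()[0],
        }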

3. Are you blocked on anything?

The online documentation for the gunittest package is not generated,
although it compiles locally for me, so I'm not sure whether everybody
can compile it and access it now.

A blocker for future improvements of the reports might actually be the
need for database storage of test results, which requires both the test
result and report classes to be able to write to a database, and the
HTML report generator classes to be able to read from it.

Wiki:
http://trac.osgeo.org/grass/wiki/GSoC/2014/TestingFrameworkForGRASS#Week11

Doc:
http://grass.osgeo.org/grass71/manuals/libpython/gunittest_testing.html

Broken gunittest package doc:
http://grass.osgeo.org/grass71/manuals/libpython/gunittest.html

Example of cron job:
http://trac.osgeo.org/grass/browser/grass/trunk/lib/python/docs/src/gunittest_running_tests.rst?rev=61498

Example test result online:
http://fatra.cnr.ncsu.edu/grassgistests/