[GRASS5] Automated test-system for basic functionalities.

Roger Bivand Roger.Bivand at nhh.no
Tue Jan 10 07:54:40 EST 2006


On Tue, 10 Jan 2006, Sören Gebbert wrote:

> On Tuesday 10 January 2006 10:28, Roger Bivand wrote:
> > On Tue, 10 Jan 2006, Radim Blazek wrote:
> > 
> > > On 1/9/06, Sören Gebbert <soerengebbert at gmx.de> wrote:
> > > > > It only tests if a module runs, not the results, right?
> > > > Yes, that's one main task of the framework. It should start the user-specified GRASS programs and/or test scripts
> > > > and catch/process the exit status (signals) of the programs/scripts. Other tasks of the framework are to parse the output (stdout and stderr)
> > > > for known errors (usage messages, fatal errors, segfaults and so on) and to create a readable summary and a logfile.
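> > > > A rough sketch of that core loop (untested; $LOGFILE, $SUMMARY and the error patterns are only placeholders):
> > > >
> > > > # run one module test: record the exit status and scan the
> > > > # captured output for known error patterns
> > > > run_test () {
> > > >     "$@" > "$LOGFILE" 2>&1
> > > >     status=$?
> > > >     if [ $status -ne 0 ] ; then
> > > >         echo "FAILED (exit $status): $*" >> "$SUMMARY"
> > > >     elif grep -q -e "Usage:" -e "ERROR:" -e "Segmentation fault" "$LOGFILE" ; then
> > > >         echo "SUSPICIOUS OUTPUT: $*" >> "$SUMMARY"
> > > >     else
> > > >         echo "OK: $*" >> "$SUMMARY"
> > > >     fi
> > > > }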
> > > 
> > > > > We must get it running on Windows (native).
> > > > Then Windows should ship with a bash ;). Seriously, the framework prototype runs only within GRASS and uses the GRASS shell
> > > > variables and the shell output of some programs. And if GRASS shell scripts (r.mremove) run in GRASS on Windows,
> > > > then the framework should run too.
> > > > But if we decide to create a test framework that works outside GRASS, creating locations and such, we may use another language (Perl?).
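> > > > For illustration, the prototype picks up the session environment roughly like this (a minimal sketch, using the usual g.gisenv idiom):
> > > >
> > > > # read the current GRASS session variables into the shell
> > > > eval `g.gisenv`
> > > > echo "Testing in $GISDBASE/$LOCATION_NAME/$MAPSET"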
> > > 
> > > Let's say that MSYS can be installed for testing. Anyway, I would
> 
> I'll take a look at it.
> 
> > > like to separate the test description from the script running it.
> 
> But if the test description is a shell script, the user/dev will have more options for writing tests.
> 
> We could define the test description in XML and write the framework in C/C++,
> but in that case I am not able to write the framework (I don't have the time or the knowledge). :(
> 
> I just wanted a small module test framework. If you and other devs want a large, sophisticated test suite,
> maybe we should take a look at the mass of frameworks and test suites that are freely available?
> 
> > > 
> > > > > But how can you check if module's output is correct?
> > > > The framework should not be smart enough to check whether every output is correct (I think that is impossible
> > > > to implement; every test may produce different output). But it should provide the functionality so the dev/user can build test functions to check the output (r.info -r). The framework
> > > > should handle these functions and integrate them into the reports.
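> > > > Such a test function could look like this (untested sketch; the expected
> > > > values are made up, and it assumes r.info -r prints shell-style min=/max= lines):
> > > >
> > > > # check a raster's range with r.info -r, e.g. expecting min=10 max=10
> > > > check_range () {
> > > >     eval `r.info -r map="$1"`      # sets $min and $max
> > > >     if [ "$min" = "$2" -a "$max" = "$3" ] ; then
> > > >         echo "range OK: $1"
> > > >     else
> > > >         echo "range MISMATCH: $1 (min=$min max=$max)" >&2
> > > >         return 1
> > > >     fi
> > > > }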
> > > 
> > > Without output data verification it is almost useless in my opinion.
> > > I have two examples in mind that I saw recently:
> > > - new bug introduced in v.to.rast (last line segment not rasterized)
> > > - bug in v.in.ascii on Windows (missing header file -> wrong output)
> 
> Please don't misunderstand me, I didn't say we should not validate the output data, but the test script
> writer should provide this information.
> And you are right on the point that the framework should know how to compare/remove the data. ;)
> 
> > > 
> > > That is why I suggested a small mapset with reference (etalon) output data
> > > which can be used for verification of test output.
> 
> I completely agree.
> Indeed, that's an important point: to have one or more validation mapsets.
> + added to the must-have list
> 
> > 
> > I agree, we need to compare the target output data with the generated
> > data. Given that they are most often binary, we can't use diff (the R test
> > suite uses diff on plain-text files), but could we use checksums or something like a file
> > digest to show same/different?
> 
> Interesting point; I don't know which one is better/faster. Maybe a combination: one for ASCII and the other for binary files?
> + added to the must-have list
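> For example (untested; whether file and md5sum behave this way everywhere is an open question):
>
> # compare a generated file against its validation copy:
> # diff for ASCII files, an md5 digest for binary files
> compare_files () {
>     if file "$1" | grep -q text ; then
>         diff "$1" "$2" > /dev/null
>     else
>         sum1=`md5sum < "$1"` ; sum2=`md5sum < "$2"`
>         test "$sum1" = "$sum2"
>     fi
> }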
> 
> Many thanks for your suggestions!
> 
> Example: maybe a test description could look like this?
> 
> #########################################
> #Test script for r.mapcalc, written by me
> 
> #Some important variables, needed by the framework
> NeedValidationMapset="True"  #only run this test in a validation mapset
> ValidateOutputData="True"    #validate the output data
> OutputDataType="raster"      #needed to compare and remove the data
> 
> Programm="r.mapcalc"         #the program to be tested
> NumberOfTests=3              #number of tests
> 
> #define the output map names
> OutputData[0]="grassTestOutputRmapcalc0" 
> OutputData[1]="grassTestOutputRmapcalc1"
> OutputData[2]="grassTestOutputRmapcalcNoValidation"
> 
> #define the Validation map names
> ValidationData[0]="r_mapcalc_validation_int_10"
> ValidationData[1]="r_mapcalc_validation_float_10"
> ValidationData[2]="none"                             #no validation is needed, just an example
> 
> #define the command line options
> GrassProgrammOptions[0]="${OutputData[0]} = 10"
> GrassProgrammOptions[1]="${OutputData[1]} = 10.0"
> GrassProgrammOptions[2]="${OutputData[2]} = 10.0"
> 
> #Now run the test
> TestProgramm       #a framework function: runs the program and compares the output
> 
> #Take care the produced output is removed
> RemoveTestedData        #a framework function; the framework knows the data type
> 
> ########################################
>  
> What do you think?

Shell scripts will be fine. Maybe we need a template that specifies some
things and checkpoints others (platform, some environment variables, GRASS
environment variables), reports them in a text file, then runs the command
with the specified option values, and finally prints some representation of
the output for those options/environment variables. Programs outputting ^H
(backspace characters, as in percentage progress displays) will be a
problem. The output should include something like a checksum of any binary
files created.

Each command test script must have a benchmark output file, then the fresh
output should be diff'ed against the benchmark, and some grepping done to
remove false hits (only report if important output differs). That way, we
don't need validation data in place, just a text representation of some
file digest like md5sum in the benchmark file. In:

http://www.ci.tuwien.ac.at/Conferences/DSC-2003/Drafts/MurrellHornik.pdf

diff is actually used to detect differences in binary graphics files; that
might be enough. I feel that, to start with, a simple shell script accepting
arguments to pass through should be enough, trapping errors and reporting
back in a way that is easy to understand.
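Something along these lines, perhaps (untested; the file names and the grep
filter patterns are invented):

# run one test and diff its text output against a stored benchmark,
# grepping away lines (dates, mapset names) that always differ
run_and_compare () {
    name=$1 ; shift
    "$@" > "$name.out" 2>&1
    diff "$name.benchmark" "$name.out" | grep -v -e "date" -e "mapset" \
        > "$name.diff"
    if [ -s "$name.diff" ] ; then
        echo "DIFFERS: $name (see $name.diff)"
    else
        echo "OK: $name"
    fi
}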

Roger

> 
> Best regards
> Soeren
> 
> > 
> > Roger
> > 
> > > 
> > > Radim
> > > 
> > > _______________________________________________
> > > grass5 mailing list
> > > grass5 at grass.itc.it
> > > http://grass.itc.it/mailman/listinfo/grass5
> > > 
> > 
> 

-- 
Roger Bivand
Economic Geography Section, Department of Economics, Norwegian School of
Economics and Business Administration, Helleveien 30, N-5045 Bergen,
Norway. voice: +47 55 95 93 55; fax +47 55 95 95 43
e-mail: Roger.Bivand at nhh.no



