[GRASS-dev] Fwd: Re: Upcoming 7.2.0: review which addons to move to core

Paulo van Breugel p.vanbreugel at gmail.com
Wed Oct 5 07:29:00 PDT 2016


On 05-10-16 15:20, Moritz Lennert wrote:
> On 05/10/16 14:24, Sören Gebbert wrote:
>> Hi,
>>
>> 2016-10-05 10:20 GMT+02:00 Moritz Lennert <mlennert at club.worldonline.be>:
>>
>>     [sent this from the wrong address, so it didn't get through to 
>> the list]
>>
>>
>>     -------- Original message --------
>>     Sent: 5 October 2016 00:41:20 GMT+02:00
>>
>>
>>
>>     On 4 October 2016 22:55:35 GMT+02:00, "Anna Petrášová"
>>     <kratochanna at gmail.com> wrote:
>>     >On Tue, Oct 4, 2016 at 4:22 PM, Markus Metz
>>     ><markus.metz.giswork at gmail.com> wrote:
>>     >> On Tue, Oct 4, 2016 at 5:42 PM, Sören Gebbert
>>     >> <soerengebbert at googlemail.com> wrote:
>>     >>> Hi,
>>     >>>>
>>     >>>>
>>     >>>> >
>>     >>>> > You are very welcome to write the missing tests for core
>>     >>>> > modules.
>>     >>>> >
>>     >>>> > However, I don't understand the argument that because many
>>     >>>> > core modules have no tests, new modules therefore don't need
>>     >>>> > them. If the developers of an addon module are serious about
>>     >>>> > making their modules usable and maintainable for others, then
>>     >>>> > they have to implement tests. It's an integral part of the
>>     >>>> > development process, and GRASS has a beautiful test
>>     >>>> > environment that makes writing tests easy. Tests and
>>     >>>> > documentation are part of coding, not something special. I
>>     >>>> > don't think this is a hard requirement.
>>     >>>> >
>>     >>>> > There is a nice statement that is not far from the truth:
>>     >>>> > untested code is broken code.
>>     >>>>
>>     >>>> these gunittests only test if a module output stays the
>>     >>>> same. This
>>     >>>
>>     >>>
>>     >>> This is simply wrong, please read the gunittest documentation.
>>     >>
>>     >> but then why does
>>     >>>
>>     >>> The gunittest for the v.stream.order addon is an example of how
>>     >>> it's done:
>>     >>> https://trac.osgeo.org/grass/browser/grass-addons/grass7/vector/v.stream.order/testsuite/test_stream_order.py
>>     >>
>>     >> assume certain order numbers for features 4 and 7? What if these
>>     >> order numbers are wrong?
>>     >>
>>     >> Recently I fixed bugs in r.stream.order, related to stream length
>>     >> calculations which are in turn used to determine stream orders.
>>     >> The gunittest did not pick up 1) the bugs, 2) the bug fixes.
>>     >>
>>     >>>
>>     >>> You can write gunittests that test every flag, every option,
>>     >>> their combinations, and any output of a module. I have
>>     >>> implemented plenty of tests that check for correct error
>>     >>> handling. Writing tests is effort, but you have to do it anyway.
>>     >>> Why not implement a gunittest for every feature while developing
>>     >>> the module?
>>     >>>>
>>     >>>>
>>     >>>> My guess for the r.stream.* modules is at least 40 man-hours of
>>     >>>> testing to make sure they work correctly. That includes
>>     >>>> evaluation of float usage, handling of NULL data, and comparison
>>     >>>> of results with and without the -m flag. Testing should be done
>>     >>>> with both high-res (LIDAR) and low-res (e.g. SRTM) DEMs.
>>     >>>
>>     >>>
>>     >>> Tests can be performed on artificial data that test all aspects
>>     >>> of the algorithm. Tests that show the correctness of the
>>     >>> algorithm for specific small cases should be preferred. However,
>>     >>> large data should not be an obstacle to writing a test.
>>     >>
>>     >> I agree, for tests during development, not for gunittests.
>>     >>
>>     >> From the examples I read, gunittests expect a specific output. If
>>     >> the expected output (obtained with an assumed correct version of
>>     >> the module) is wrong, the gunittest is bogus. Gunittests are OK to
>>     >> make sure the output does not change, but not OK to make sure the
>>     >> output is correct. Two random examples are r.stream.order and
>>     >> r.univar.
>>     >
>>     >
>>     >I am not sure why we are discussing this; it's pretty obvious that
>>     >gunittests can serve to a) test inputs/outputs, b) catch changes in
>>     >results (whether correct or incorrect), and c) test the correctness
>>     >of results. It just depends on how you write them, and yes, for
>>     >some modules c) is more difficult to implement than for others.
>>
>>
>>     Well, I agree with Markus that unittests are not a panacea and that
>>     we should not fall into the trap of thinking that these tests will
>>     guarantee that the results of our modules are correct.
>>
>>
>> Then I live in a parallel universe. Simple question: how do you test
>> your software? How do you assure the correct functionality of your
>> software? Why is it impossible to implement your approach of testing in
>> a dedicated gunittest? How do you assure software quality if you don't
>> provide tools so that other developers are able to test your software
>> for correctness? Regression tests are not possible then, because the
>> effect of changes in the core libraries cannot be easily detected in
>> modules without tests.
>
>
> Please note that I was speaking about unit tests here. I don't know
> how well our testing framework works for integration testing. Maybe
> we also need to be clearer about what we mean by tests during such
> discussions?
>
> Good discussion, though! :-)
>
>>
>> Can you explain to me why the developers of the sophisticated software
>> system VTK [1] implement unit and integration tests for all software
>> components to assure the correct functionality of the framework? They
>> didn't see the trap? They are delusional to think that tests assure
>> software quality?
>>
>> Why is test-driven development [2] an integral part of agile software
>> development approaches like Scrum or extreme programming? They didn't
>> see the trap? They are delusional to think that tests assure software
>> quality?
>>
>> [1] http://www.vtk.org/overview/
>> [2] https://en.wikipedia.org/wiki/Test-driven_development
>>
>>
>>     However, I do agree that these tests are useful in detecting if any
>>     changes to the code change the output, thus raising a flag that the
>>     developer has to at least take into account.
>>
>>     I'll try to write some tests for the OBIA tools when I find the
>>     time, although I do agree with Markus that it wouldn't be useful to
>>     try to write tests that would cover each and every possible corner
>>     case...
>>
>>
>> Why is it "not useful" to write tests for all the cases the software is
>> designed to solve? It is indeed a lot of effort, but it is useful.
>
> I would say the question is rather, first, whether that is at all
> possible, and, second, whether by thinking that it is, we become too
> confident in our tests providing information that they really aren't
> trying to provide.
>
> But I'm no expert whatsoever on the topic (I am not a computer 
> scientist, just a scientist programming some tools with my very 
> limited capabilities), so I don't want to stretch this discussion out. 
> I do recommend reading this, though:
> http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
>
> I also like the table close to the top of
>
> http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/ 
>
>
> (attached as image)
>
> And let's remember that this all started as the question of what 
> should be required for a module to move from addons to core. The 
> question, therefore, is to find the right balance between necessary 
> effort and our desire to offer functionality to users. This also 
> raises the question of why it would be better for a given module to be 
> in core rather than in extensions. We could also imagine the opposite 
> direction, i.e. moving modules from core to extensions to lighten the 
> workload of maintaining core, while still offering the same 
> functionality.
>
> IMHO, the largest advantage of having a module in core is that when 
> someone changes internal library APIs, they generally check all of 
> core and modify what needs to be modified, but this is not necessarily 
> the case for extensions...
>
> Maybe we should ask the users whether this distinction between core 
> modules and extensions is really relevant for them, or whether most 
> are perfectly happy to just install extensions.

Since you are asking :-) , as a user, my main interest is in good 
documentation and reproducible examples (which I can then also use to 
see whether outputs make sense to me as a user). In that respect there 
is no inherent difference between core modules and extensions. What is 
different is that many (most?) of the core functions are accessible 
through the menu. I personally don't find that very important, 
especially with the modules tab giving fairly easy access to extensions, 
but I can imagine that for new / other users, especially those more 
inclined to menu-driven applications, this may make a difference.
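To make concrete for myself the point above that tests can check the correctness of results on small artificial cases, here is a tiny, self-contained sketch. It uses plain Python unittest, not the actual gunittest framework, and the strahler_order helper is hypothetical, not the real r.stream.order code; the idea is just that on artificial data the correct answers are known in advance:

```python
import unittest


def strahler_order(tributaries):
    """Strahler order of a stream segment, given the orders of its
    direct tributaries (empty list = headwater segment, order 1)."""
    if not tributaries:
        return 1
    top = max(tributaries)
    # The order increases only when two or more tributaries
    # share the maximum order; otherwise the maximum carries through.
    if tributaries.count(top) >= 2:
        return top + 1
    return top


class TestStrahlerOrder(unittest.TestCase):
    def test_headwater(self):
        # A segment with no tributaries is first order.
        self.assertEqual(strahler_order([]), 1)

    def test_equal_orders_merge(self):
        # Two first-order streams join into a second-order stream.
        self.assertEqual(strahler_order([1, 1]), 2)

    def test_unequal_orders_merge(self):
        # A lower-order tributary does not raise the order.
        self.assertEqual(strahler_order([2, 1]), 2)

    def test_three_way_junction(self):
        # Three order-2 segments at one junction still increment only once.
        self.assertEqual(strahler_order([2, 2, 2]), 3)


if __name__ == "__main__":
    unittest.main()
```

A gunittest for a real module would follow the same pattern, only with module calls and map comparisons instead of a pure function; the point is that each expected value is derived from the definition of the algorithm, not from a previous run of the module.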

>
> Moritz
>
>
> _______________________________________________
> grass-dev mailing list
> grass-dev at lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/grass-dev

