[GRASS-dev] Upcoming 7.2.0: review which addons to move to core
markus.metz.giswork at gmail.com
Tue Oct 4 07:13:54 PDT 2016
On Sun, Oct 2, 2016 at 9:43 PM, Sören Gebbert
<soerengebbert at googlemail.com> wrote:
> 2016-10-02 13:24 GMT+02:00 Moritz Lennert <mlennert at club.worldonline.be>:
>> On 01/10/16 21:25, Blumentrath, Stefan wrote:
>>> Sounds fair enough as requirements for new core modules. “Maintainable
>>> code” would in practice mean “the module has undergone a code review by a
>>> core developer”?
>>> Those requirements would add to Markus' requirement of “maturity”, which
>>> I would interpret as “the module has been tested in practice and its
>>> options and flags are consolidated” (so no major changes are expected).
>>> I am afraid it seems that only very few of the suggested modules are
>>> covered by unit tests. Most of them have good documentation. No idea
>>> about the maintainability of the code...
>>> How should we proceed with this topic? Should the named modules (and
>>> from my point of view Moritz' OBIA modules would be very welcome too)
>>> be moved to core?
>> They definitely do not meet the enounced criteria, yet. No tests and,
>> AFAIK, most of them have only been used in-house by my colleagues.
>> So, I'm happy to have them live as addons for now.
>> This said, I think the requirement of tests is something I would like to
>> see discussed a bit more. This is a pretty heavy requirement and many
>> current core modules do not have unit tests...
> You are very welcome to write the missing tests for core modules.
> However, I don't understand the argument that because many core modules
> have no tests, new modules don't need them either. If developers of addon
> modules are serious about making their modules usable and maintainable
> for others, then they have to implement tests. It's an integral part of
> the development process, and GRASS has a beautiful test environment that
> makes writing tests easy. Tests and documentation are part of coding,
> not something special. I don't think this is a hard requirement to meet.
> There is a nice statement that is not far from the truth: untested code
> is broken code.
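For readers unfamiliar with the test environment mentioned above: GRASS tests subclass grass.gunittest.case.TestCase and need a running GRASS session, so here is a minimal stdlib-only sketch of the same pattern, with a plain function standing in for a module call. All names and reference values below are invented for illustration.

```python
# Sketch of the gunittest pattern using only the stdlib: a plain function
# stands in for a GRASS module run, and the test compares its output
# against recorded reference statistics (the regression-style check).
import unittest

def run_module(elevation):
    """Stand-in for a module run: univariate statistics, skipping NULLs."""
    cells = [c for row in elevation for c in row if c is not None]
    return {"n": len(cells), "min": min(cells), "max": max(cells)}

class TestModuleOutput(unittest.TestCase):
    def test_univar_reference(self):
        dem = [[100, 120, None],
               [110, 130, 125]]
        stats = run_module(dem)
        # Reference values recorded from a known-good run.
        self.assertEqual(stats["n"], 5)
        self.assertEqual(stats["min"], 100)
        self.assertEqual(stats["max"], 130)

suite = unittest.TestLoader().loadTestsFromTestCase(TestModuleOutput)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The real framework adds raster-aware assertions on top of this shape, so a test can compare map statistics rather than Python values.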
These gunittests only test whether a module's output stays the same. This
does not mean that the module's output is correct. Tested code means first
of all that the code has been tested with all sorts of input data and
combinations of input data and flags. All these tests, e.g. what I did
for i.segment or r.stream.* (where I am not even the main author),
should IMHO not go into a gunittest framework, because then running
gunittests would take a very long time. In short, simply adding
gunittests to addon modules is not enough; code needs to be tested
more thoroughly during development than what can be done with gunittests.
My guess for the r.stream.* modules is at least 40 man hours of
testing to make sure they work correctly. That includes evaluation of
float usage, handling of NULL data, comparison of results with and
without the -m flag. Testing should be done with both high-res (LIDAR)
and low-res (e.g. SRTM) DEMs.
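The distinction drawn here, output stability versus correctness, can be shown in a few lines. A sketch with an invented stand-in function (none of these names come from GRASS): a NULL-handling bug survives a regression check because the stored reference was produced by the same buggy code, yet it fails against the analytically known answer.

```python
# Why "output stays the same" is not "output is correct": this stand-in
# statistic has a NULL-handling bug, treating NULL (None) cells as 0
# instead of skipping them. The recorded reference below was
# (hypothetically) generated by the same buggy code in an earlier run.

def mean_elevation(dem):
    """Buggy: NULL cells are counted as 0 rather than excluded."""
    cells = [0 if c is None else c for row in dem for c in row]
    return sum(cells) / len(cells)

dem = [[10.0, 20.0], [None, 30.0]]

# Regression check: matches the value the buggy code produced last time.
recorded_reference = 15.0
stable = mean_elevation(dem) == recorded_reference   # test "passes"

# Correctness check against the analytically known answer (NULLs excluded):
expected = (10.0 + 20.0 + 30.0) / 3
correct = mean_elevation(dem) == expected            # bug exposed
```

A gunittest built on the recorded reference would stay green forever; only a comparison against independently known results, as described for the r.stream.* testing above, catches the bug.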
>> One thing we could think about is activating the toolbox idea a bit more
>> and creating a specific OBIA toolbox in addons.
>>> Identified candidates could be added to core once they fulfill the
>>> requirements above. Would that happen only in minor releases or would
>>> that also be possible in point releases?
>> Adding modules to core is not an API change, so I don't see why they can't
>> be added at any time. But then again, having a series of new modules can be
>> sufficient to justify a new minor release ;-)
>>> Or is that already too much formality and if someone wishes to see an
>>> addon in core that is simply discussed on ML?
>> Generally, I would think that discussion on ML is the best way to handle
>> this.
>> grass-dev mailing list
>> grass-dev at lists.osgeo.org