[Qgis-developer] Continuous Integration / Testing with TravisCI

Yves Jacolin yjacolin at free.fr
Tue Nov 18 11:33:03 PST 2014


On Tuesday, 18 November 2014, 15:52:12, Matthias Kuhn wrote:
> Hi all,
> 
> You may have noticed that there is a new symbol on the QGIS GitHub page
> that (hopefully) says "build passing". [1]
> 
> Over the last week I have been busy fixing tests for the integration
> with Travis CI.
> 
> Right now, whenever somebody pushes a change to
>  * a branch in the qgis repository (this affects "master" and the
> release-x_y branches)
>  * a pull request for the qgis repository
> a service called Travis CI [2] is triggered, which compiles the
> source code and runs the test suite there. The advantage is that we
> now have a reference platform (Ubuntu Precise for the moment) on which
> all tests pass. This defined environment is a big step forward: it
> makes test runs comparable, and if something goes wrong we know that
> the code is responsible and not some other variable.
> Currently all tests (with a few exceptions, which are disabled) are
> passing. It would be excellent if it stayed that way!
> 
> What does that mean for developers?
> 
> **Use pull requests**
> Whenever you open a pull request, you first get the chance to test your
> changes without making the nice green symbol on the front page go red.
> You can keep pushing to the pull request until the tests go green. For
> reviewers this is also nice: you don't need to spend time on pull
> requests that don't build or don't pass the tests.
> 
> **Write tests**
> If you implement a new feature or fix a bug, write a test for it. If
> somebody later breaks the test, the pull request or the symbol on the
> GitHub project page will turn red, so it is immediately clear which
> commit broke it. The test is what guards your feature or your bugfix!
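> 
> As a minimal sketch of what such a test can look like (the buffer
> scenario and the expected values here are made up for illustration):
> 
>     import unittest
>     from qgis.core import QgsGeometry
> 
>     class TestBufferRegression(unittest.TestCase):
>         """Hypothetical regression test guarding a buffer bugfix."""
> 
>         def test_point_buffer_has_area(self):
>             point = QgsGeometry.fromWkt('POINT(1 1)')
>             # buffer(distance, segments): segments per quarter circle
>             buffered = point.buffer(2.0, 8)
>             # A regression would typically yield a null geometry or
>             # one without any area
>             self.assertIsNotNone(buffered)
>             self.assertGreater(buffered.area(), 0)
> 
>     if __name__ == '__main__':
>         unittest.main()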
> 
> **Try to avoid rendering tests**
> A big portion of the fixing effort has been spent on rendering tests.
> The problem with rendering tests is that a lot of them show slight
> inconsistencies across systems. Gradients, anti-aliasing, fonts... all
> of this may render differently without really being a problem. But it
> produces a (false) alarm on another system, and one needs to take
> measures. Such measures exist: defining a color tolerance, allowed
> pixel mismatches, known anomalies, and since last week also alternative
> reference pictures... But in the end this means extra work and false
> alarms once in a while. We do create software that produces visual
> output, so rendering tests are required. But think very carefully about
> the alternatives before writing such a test. Maybe you can extract the
> WKT from a geometry and compare that against a known WKT string
> instead? That is less brittle, faster to run and much easier to
> maintain than a rendering test!
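> 
> A sketch of what such a test could look like (the simplify call and
> the tolerance are merely an illustrative example; it compares against
> a parsed reference geometry rather than a raw string to stay
> independent of WKT formatting):
> 
>     import unittest
>     from qgis.core import QgsGeometry
> 
>     class TestSimplify(unittest.TestCase):
>         """Compare geometries directly instead of rendered images."""
> 
>         def test_simplify_removes_small_deviation(self):
>             geom = QgsGeometry.fromWkt('LINESTRING(0 0, 5 0.1, 10 0)')
>             simplified = geom.simplify(1.0)  # tolerance in map units
>             expected = QgsGeometry.fromWkt('LINESTRING(0 0, 10 0)')
>             # Comparing against a reference geometry avoids depending
>             # on the exact formatting of the exported WKT string
>             self.assertTrue(simplified.equals(expected))
> 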
> If you really need to write a rendering test, it is likely to fail on
> Travis in the beginning. To make it pass on Travis, create a pull
> request and wait for it to be tested. The results will be uploaded to
> the OTB CDash [3], where you get information about what exactly needs
> adjusting. How many pixels failed? A visual comparison shows whether
> adjusting the color tolerance a bit may help. Is an anomaly ok? Is
> there a need to add another reference picture?
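> 
> For reference, the existing rendering tests expose exactly these knobs
> through the QgsRenderChecker helper; roughly like this (a sketch: the
> control name, tolerance and mismatch count are placeholder values, and
> the map settings still need to be configured for a real test):
> 
>     import unittest
>     from qgis.core import QgsMapSettings, QgsRenderChecker
> 
>     class TestMyRendering(unittest.TestCase):
>         def test_render_matches_reference(self):
>             settings = QgsMapSettings()
>             # ... add layers, set extent and output size here ...
>             checker = QgsRenderChecker()
>             checker.setControlName('expected_myrendering')
>             checker.setMapSettings(settings)
>             checker.setColorTolerance(10)  # per-channel tolerance
>             # the second argument is the number of mismatching
>             # pixels that are still acceptable
>             self.assertTrue(checker.runTest('myrendering', 20))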
> 
> **C++ vs. Python tests**
> I made an earlier call for having only one or the other. That was
> based on a misinterpretation of test results on my side. Sorry.
> My current opinion is: do what you feel comfortable with. If you prefer
> to write C++ tests, do so. If you prefer to write Python tests, do so.
> If you want both, do so. The important thing is: DO IT!
> If you really need advice: go for Python tests. They test the same
> functionality as the C++ tests but also exercise the Python API, and
> therefore cover a slightly bigger area.
> 
> Which tests are not fixed (on the reference platform)?
> 
>  * The atlas tests had interdependencies that made some tests pass only
> because the environment was left in a good state by previous ones.
> These tests have now been changed to be independent of one another, and
> there is a PR that should fix the issues and re-enable the tests. [4]
>  * The symbology tests fail due to problems with SLD loading. There is a
> pull request that partly reverts a commit that broke things and
> re-enables the test. [5]
>  * The server PAL labeling tests produced very different rendering
> results. I could imagine that they would be better off if they would
> also use the QGIS test font? New reference images could also be
> introduced. Dakota cartography's call.
>  * The server canvas labeling tests crash on exit because of a thread
> that has not yet finished. A multithreading issue? Something else?
>  * Recently the WCS test started to fail. I guess the server it uses
> for testing does not always respond? We should either fix the server,
> disable the test, or increase the timeout...
> 
> Where to go from here?
> 
> There are two things I would like to see:
> 
>   Many more tests :)
>   More platforms: Travis can also handle Mac, and we could ask them to
> enable Mac testing for us, but we'd first need to fix the tests there.
> There is also a service called AppVeyor that runs tests for Windows;
> again, we'd first need to fix the tests there.
> 
> I am very happy that there was such great interest in the crowdfunding
> campaign that made it possible to do this work. THANK YOU to everybody
> who helped to take the testing integration to the next level.
> 
> Matthias
> 
> [1] https://github.com/qgis/QGIS
> [2] http://travis-ci.org/
> [3] http://dash.orfeo-toolbox.org/index.php?project=QGIS
> 
Matthias,

At Camptocamp we often use Travis CI together with Coveralls
(https://coveralls.io/), which gives statistics on test coverage.

Would it be worth adding it?

Y.
-- 
Yves Jacolin

