[mapserver-dev] the state of msautotest

Lime, Steve D (DNR) Steve.Lime at state.mn.us
Fri Sep 14 08:32:14 PDT 2012


The query stuff is there to exercise the myriad query modes, expression and filter types, and templates. Some tests are replicated for PostGIS sources. I put it in place as part of the lexer/parser refresh. It's a big enough topic that it didn't feel right mixing it in with the rendering tests. 
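
For illustration only, here is a rough sketch of what a single query test
boils down to: feed one request through the mapserv CGI binary and compare
its output against a stored expected result. The map file name, query
string, and paths below are made up for the example; they only mirror the
general msautotest layout of a .map file plus an expected/ directory.

    # Hypothetical example, not the actual msautotest harness.
    import os
    import subprocess

    env = dict(os.environ,
               REQUEST_METHOD="GET",
               QUERY_STRING="map=query/point_query.map&mode=itemquery"
                            "&qlayer=lakes&qstring=NAME='Okabena'")

    # mapserv is assumed to be on the PATH; its CGI output goes to stdout.
    result = subprocess.run(["mapserv"], env=env,
                            capture_output=True, text=True)

    with open("query/expected/point_query.txt") as f:
        expected = f.read()

    print("PASS" if result.stdout == expected else
          "FAIL: output differs from expected result")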

Steve

-----Original Message-----
From: mapserver-dev-bounces at lists.osgeo.org [mailto:mapserver-dev-bounces at lists.osgeo.org] On Behalf Of Stephan Meißl
Sent: Friday, September 14, 2012 9:01 AM
To: mapserver-dev at lists.osgeo.org
Subject: Re: [mapserver-dev] the state of msautotest

Hi Thomas,

Thanks for the credits, but as usual you did it in almost no time and with very little help. Simply amazing!

I see you test everything in misc, gdal, renderers, and wxs. What about query, and also mspython and php? To be honest I've never used the latter ones, but I guess there's a reason why they are there.

And then there are also some MapScript tests, at least for python and java. Is anybody using those? Should we add them to be tested via Travis as well?
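
As a rough idea of what a Python MapScript check run under Travis could look
like (a sketch only: it assumes the mapscript module is importable and that a
small test.map and result/ directory exist, none of which come from this
thread):

    # Minimal smoke test for the Python MapScript bindings: load a map
    # file, draw it, and make sure we got an image back.
    import unittest
    import mapscript

    class DrawTest(unittest.TestCase):
        def test_draw(self):
            mapobj = mapscript.mapObj("test.map")    # hypothetical map file
            img = mapobj.draw()
            self.assertIsNotNone(img)
            img.save("result/test_draw.png")         # hypothetical output path

    if __name__ == "__main__":
        unittest.main()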

I'm very much in favor of completely adopting it. Question: what happens if somebody breaks the build? I vote for something involving beers at the next FOSS4G :D

cu
Stephan


On Thu, 2012-09-13 at 19:00 +0200, thomas bonfort wrote:
> I have set up a Travis account for mapserver and added the necessary 
> files to trigger a test run each time some code is committed to master 
> and branch-6-2. With Stephan's help, we went through the entire 
> test suite to update the expected results so we can start off with a 
> clean slate. However, we still have 10 failing tests that need to be 
> acted upon (the diffs can be seen at 
> http://travis-ci.org/#!/mapserver/mapserver/builds/2437309, scrolling 
> down to the bottom of the build/test run). I have tried to set the 
> config such that I am the only one getting spammed by failed test 
> logs. If you do receive such emails without having committed any 
> code, please get in touch with me.
> 
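
Travis marks a build as failed whenever the test command exits with a
non-zero status, so wiring the suite in essentially comes down to a driver
along these lines. This is purely illustrative: the per-suite run_test.py
invocation is an assumption about the harness, not a description of what
was actually committed.

    # Illustrative driver: run each test directory's suite and exit
    # non-zero if anything failed, so CI marks the build as broken.
    import subprocess
    import sys

    SUITES = ["misc", "gdal", "renderers", "wxs", "query"]

    failed = []
    for suite in SUITES:
        rc = subprocess.call([sys.executable, "run_test.py"], cwd=suite)
        if rc != 0:
            failed.append(suite)

    if failed:
        print("failing suites: " + ", ".join(failed))
        sys.exit(1)
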
> Hope you like it; this has the potential to become a very valuable 
> tool once we iron out the remaining failing tests.
> 
> regards,
> thomas
> 
> On Thu, Sep 13, 2012 at 3:00 PM, Lime, Steve D (DNR) 
> <Steve.Lime at state.mn.us> wrote:
> > Ditto... I'd actually like to see the tests more tightly integrated 
> > into the build process, so they come with the source and you can do 
> > a 'make test' easily enough.
> Steve, a "make test" target could be added rather simply now. However, 
> the output of the test run will be as error-prone as it was 
> beforehand, i.e. you run the risk of ending up with a number of false 
> positives due to your architecture, library versions, and compiled-in 
> options.
> >
> > Steve
> >
> > ________________________________________
> > From: mapserver-dev-bounces at lists.osgeo.org 
> > [mapserver-dev-bounces at lists.osgeo.org] on behalf of Daniel 
> > Morissette [dmorissette at mapgears.com]
> > Sent: Wednesday, September 12, 2012 8:20 AM
> > To: mapserver-dev at lists.osgeo.org
> > Subject: Re: [mapserver-dev] the state of msautotest
> >
> > I am very supportive of making the msautotests easier to use and 
> > more reliable. Sounds like a few interesting options are on the table already.
> >
> > Daniel
> >
> > On 12-09-12 7:14 AM, thomas bonfort wrote:
> >> After maybe a couple of beers too many with Stephan the other day, 
> >> we came to talk about mapserver's autotests and their current 
> >> deficiencies with respect to how we are using them.
> >>
> >> To summarize, here are the principal issues that arose:
> >>
> >> - The tests aren't run on each commit, which is understandable as 
> >> it takes additional effort to do so
> >> - We have a large number of failing tests, which can be attributed to:
> >>    - the aforementioned reason: is a failing test a result of the 
> >> last commit, or was it already failing beforehand?
> >>    - image comparison tests can produce false positives, due to 
> >> different CPU architectures and underlying dependency libraries 
> >> (namely freetype); a tolerant comparison sketch follows this list
> >>    - image and xml/gml tests can diverge depending on the 
> >> compile-time configuration options that were chosen.
> >> - we've ended up in a chicken-and-egg situation where tests aren't 
> >> being run because they are failing, and tests fail in the long run 
> >> because they are never used or updated.
> >>
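One way to take the sting out of those image-comparison false positives is
to compare with a small tolerance rather than byte-for-byte. The sketch
below uses Pillow and an RMS threshold; both the library choice and the
threshold value are assumptions for illustration, not what msautotest
currently does.

    # Sketch: compare a rendered image against the expected one with an
    # RMS tolerance, so tiny antialiasing/freetype differences don't
    # fail the test outright.
    import math
    from PIL import Image, ImageChops

    def images_match(result_path, expected_path, rms_threshold=2.0):
        result = Image.open(result_path).convert("RGB")
        expected = Image.open(expected_path).convert("RGB")
        if result.size != expected.size:
            return False
        diff = ImageChops.difference(result, expected)
        hist = diff.histogram()  # 3 bands x 256 bins, concatenated
        samples = result.size[0] * result.size[1] * 3
        rms = math.sqrt(sum(count * (i % 256) ** 2
                            for i, count in enumerate(hist)) / samples)
        return rms <= rms_threshold
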
> >> To alleviate this, we'd like to set the ball rolling on these ideas:
> >> - provide a "reference" platform with a kitchen-sink (i.e. compiled 
> >> with all options) mapserver instance. This could be an OSGeo ad-hoc 
> >> VM or a dedicated server provided by one of us or our users 
> >> (the second option might be better given the limited resources 
> >> available at OSGeo).
> >> - go through all the current tests, make sure they pass on this 
> >> reference platform, and update the expected results if not (see the 
> >> sketch after this list).
> >> - automatically run the whole test suite on each commit, and 
> >> provide a web page or email alert when a test has failed. Does 
> >> anyone have experience with a software solution that could provide 
> >> that to us?
> >>
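For the "update the expected results" step, the workflow would presumably
be: run the suite on the reference platform, review the diffs by hand, then
promote the fresh output. A hypothetical helper for that last step, assuming
the result/ and expected/ directory layout that msautotest uses:

    # Sketch: after manually reviewing the diffs, copy freshly generated
    # results over the stored expected files so the suite starts clean.
    import glob
    import os
    import shutil

    for result_path in glob.glob("result/*"):
        expected_path = os.path.join("expected",
                                     os.path.basename(result_path))
        shutil.copyfile(result_path, expected_path)
        print("updated", expected_path)
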
> >> Thoughts? It's certainly not a miracle solution, as it will still 
> >> require action when a test starts failing; however, we believe that 
> >> automating the runs and removing the false positives will be a nice 
> >> step forward (and give more peace of mind to the release manager 
> >> when packaging a release :) )
> >>
> >> cheers,
> >> thomas


_______________________________________________
mapserver-dev mailing list
mapserver-dev at lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/mapserver-dev