[gdal-dev] autotest questions/issues

Greg Troxel gdt at lexort.com
Wed Sep 1 06:54:27 PDT 2021

Robert Coup <robert.coup at koordinates.com> writes:

Thanks for taking the time to respond to me.  It's becoming a lot clearer.

I should be clear that these issues are of course minor, mostly doc
issues, and are not meant to bear on whether the RC before us is fit to
release.

>> * autotest is not in the distribution tarball
> Yeah, it's never been in the release tarballs. And the release tarballs
> start one folder level down (gdal/) as well.

I had no idea about that, and was just looking for autotest in the repo
since it wasn't in the tarball.  I see it now of course.  Certainly it
makes sense to have code and tests in the same repo.

> TBH I think the release tarballs should just match/be an archive of the git
> tag. But that's above my pay grade :-)

That's not quite the traditional view with autotools, but it is the
github way.  Still, it is traditional to have the tarball's contents
match the repo.

> @Debian and other package maintainers — are you just getting the Git tags
> these days or do you actually use the release tarballs?

I am the maintainer for gdal in pkgsrc and I use release tarballs.  That
has been the traditional approach since the beginning of packaging; it
avoids the packaging system having to deal with N different
version-control backends (not just "git", but various git hosting), and
allows distributed tarballs to be cached and easily moved around on
CD/DVD.  As I see it, the release tarballs are in fact the released
products intended for users, and are therefore the proper thing to use.
Packaging in general is encapsulating the standard user build and adding
metadata, within a system that keeps track of everything.

Good question of course also for Debian, RH, FreeBSD, OpenBSD, macports,
homebrew, etc.

>>   + Does it test the gdal that is installed in one's PATH (and
>>   PYTHONPATH), or is it testing the gdal that is in the source tree?
> CONTRIBUTING.md has a guide to run `scripts/setdevenv.sh` which sets up
> in-tree execution for utilities/libraries and the tests.

I see that now.  I expected to be able to read the README in autotest
and understand the big view of testing.  It feels like autotest is sort
of a separable part of gdal, given that it is not in the tarball.

(That script uses bash rather than POSIX sh; with any luck that's just
the #! line.  It does use [[ in tests, which POSIX shell doesn't
specify.  It might be nice to set POSIXLY_CORRECT in the environment to
flush out these issues.)

It's mostly obvious, and entirely understandable with slight effort, how
to set env vars to point to the built-but-not-installed version, so
there's no real problem -- and for now I want to test releases.

>>   + Is one supposed to check out the corresponding branch (or really
>>   tag), or is it ok to test 3.3.2rc2 with master?
> The tests should match the code, so 3.3.2rc2 tests run against 3.3.2rc2
> code.

So is it helpful to file a doc bug, or are you inclined (since you
actually understand the test plan) to adjust the README.md?

>> * no clear path to test a not-yet-installed gdal
>> When building a new gdal, it seems a good idea to run tests before
>> installing it.  autoconf has done this for years with make check.  I see
>> no obvious way to do this.
> setdevenv.sh? Adding some sort of `make test` might be a cleaner approach
> though.

Yes, in general I take the view that it's good if a project meets the
standard interfaces so that people don't have to understand anything
about the project's test setup to run it.

It might also be nice to have a scheme that uses variables set during
the build, perhaps in a generated test_env.sh (that just has "export
GDAL_FOO=/bar" lines), instead of reaching in to figure them out again.
The main build already knows all these paths, and it would be easy to
write such a file.
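A hedged sketch of what such a build-generated test_env.sh could look
like; the directory layout and variable names below are illustrative
guesses, not GDAL's actual build layout.

```shell
# Stand-in for the build tree; a real build system would know this path.
BUILDDIR="${TMPDIR:-/tmp}/gdal-build-example"
mkdir -p "$BUILDDIR"

# The build writes out the paths it already knows; \$VAR stays literal
# in the generated file so existing values are preserved when sourced.
cat > "$BUILDDIR/test_env.sh" <<EOF
export GDAL_DATA="$BUILDDIR/data"
export LD_LIBRARY_PATH="$BUILDDIR/.libs:\$LD_LIBRARY_PATH"
export PATH="$BUILDDIR/apps:\$PATH"
export PYTHONPATH="$BUILDDIR/swig/python:\$PYTHONPATH"
EOF

# The test harness then only needs to source it:
. "$BUILDDIR/test_env.sh"
echo "GDAL_DATA=$GDAL_DATA"
```

The point of the design is that the build emits the file once, and the
test runner never has to rediscover the paths.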

>> * puzzling test dependencies
>>   It's not really clear if a missing dependency (vs requirements.txt)
>>   will cause problems (assuming pytest and lxml are there), in terms of
>>   just skipping some things, bad output, or perhaps some missing pretty
>>   printing.
> a mixture. Is there some particular reason installing everything in
> requirements.txt from PyPi is a problem?

In pkgsrc, we don't install from PyPi because that results in bits on
the system that are not managed by the packaging system.  Their presence
is unrecorded, and hence they won't get updated, won't be on the list of
needed packages when doing a bulk build on another system, and their
files will show up as spurious, violating the mostly-true "all
files in the packaging prefix are recorded as being part of some
package" invariant.

However, what we do is create a package for that, which does the
equivalent download/install; that amounts to the same thing but stays
within the packaging system, ensuring system-wide options are followed
and the files are recorded as belonging to the package.  That's not a
big deal, especially for pure python using normal setuptools/distutils,
and I'll get around to it.

I expect that other packaging systems have a similar approach.

I was really coming at this from the point of view that normally a
package lists required dependencies and optional dependencies, and I had
no idea what would happen if one were missing.

>>   I think pytest-env is
>>     https://pypi.org/project/pytest-env/#history
>>   but that appears unmaintained, with a last release in 2017.   Perhaps
>>   there are no outstanding bugs and no changes and it is actually
>>   maintained but there is no reason to change -- but that is highly
>>   implausible.   It seems boutique, based on not being in pkgsrc, which
>>   means no one else has needed it, even though we have 20K packages.
> It's about 45 lines of code. Pytest plugins are often fairly minimal, and
> just do One Thing.

Perhaps add a link to PyPi in the README, plus a statement that
something that appears unmaintained really is the intended dependency.
It just seemed more likely that I had failed to find the right code than
that the last release really was 4 years ago.  I had a similar
experience a few days ago with something totally unrelated; in that case
the homepage, with code from 2017, was out of date, and there were more
recent releases someplace else.

>>   [Now I see that osr tests fail with a warning about env if this is
>>   missing, but really the test should be skipped or hard fail and not
>>   run without the env.]
> Yes, I think it should hard-fail

Should I file a bug?  What I got was

/home/n0/gdt/SOFTWARE/GEO/GDAL/gdal/autotest/osr/osr_pci.py:200: AssertionError
=============================== warnings summary ===============================
  /usr/pkg/lib/python3.8/site-packages/_pytest/config/__init__.py:1233: PytestConfigWarning: Unknown config option: env
    self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")

-- Docs: https://docs.pytest.org/en/stable/warnings.html

which I think led to testing without the env var.
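For reference, pytest-env's documented mechanism is an `env` key in
pytest.ini, the very config option the warning above flags as unknown
when the plugin is missing.  A sketch of the shape (the variable shown
is a made-up example, not necessarily what GDAL's pytest.ini sets):

```shell
# Print an example pytest.ini fragment as pytest-env expects it.
# GDAL_EXAMPLE_OPTION is hypothetical, for illustration only.
cat <<'EOF'
[pytest]
env =
    GDAL_EXAMPLE_OPTION=YES
EOF
```

Without the plugin, pytest silently ignores the `env` section (apart
from that warning), so the tests run without those variables set.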

>> * I'm guessing the expectation is that all the tests pass.  With
>>   autotest from master and 3.3.2rc2 installed (base lib and python
>>   bindings), I get:
> In general all the tests pass. There's skips/xfails for particular known
> issues, but maybe you've found a bug? Maybe BSD-specific, BSD isn't part of
> CI.

It would be really nice to add one.  I do see there is macOS, which is
at least !Linux.  In packaging it is fairly typical to see problems from
code that makes beyond-POSIX assumptions that of course worked in the
development environment, but so far I have no hint of that here.

>> * skipping tests after a failure.
>> It would be nice to have hints in README.md about skipping tests, but it
>> seems easy enough to run osr and then the following ones.
> you mean, stopping on the first failure? `pytest -x ...`
> autotest/README.md has some docs on running specific tests / modules / etc

It does, but I didn't understand how to solve my issue, which is a
python segfault (likely in gdal binary code loaded in a module, I
speculate without basis) that ended the entire test run.  I worked
around that by removing ogr/ogr_mitab.py and rerunning.
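For what it's worth, stock pytest options (not GDAL-specific) cover both
cases without deleting files; the commands below are printed rather than
run, since they assume a checkout of autotest/ and an installed pytest.

```shell
# Stop at the first failure, skip a module that crashes the interpreter,
# or deselect a single known-bad test:
for cmd in \
    "pytest -x osr" \
    "pytest --ignore=ogr/ogr_mitab.py ogr" \
    "pytest --deselect ogr/ogr_mitab.py::test_ogr_mitab_5 ogr"
do
    echo "$cmd"
done
```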

> GDAL has some odd test layouts with particular inter-test dependencies from
> when the test suite was bulk-ported via automation to work under pytest —
> this made it a lot saner, but some of the "test 18 depends on test 17
> passing" issues remain.

Yes, I did see that caution and was trying to avoid doing anything out
of order.

At this point I think I've run into something else that might need
explaining in the README.  When we build gdal for pkgsrc, we have most
drivers and most dependencies, but I suspect some are missing, mostly
because the dependencies aren't packaged and nobody is complaining about
a missing format.  I wonder how much of my test trouble stems from that.
Without ogr_mitab, there are a lot of tests that end up with

  E       AttributeError: 'NoneType' object has no attribute 'ExecuteSQL'

which isn't the real problem.

I also saw things like

  ERROR 1: Driver GTM is considered for removal in GDAL 3.5. You are
  invited to convert any dataset in that format to another more common
  one. If you need this driver in future GDAL versions, create a ticket
  at https://github.com/OSGeo/gdal (look first for an existing one) to
  explain how critical it is for you (but the GDAL project may still
  remove it), and to enable it now, set the
  GDAL_ENABLE_DEPRECATED_DRIVER_GTM configuration option / environment
  variable to YES
  ERROR 1: GPSTrackMaker driver failed to create

and I wonder if the tests should be setting that variable themselves.
Alternatively those tests could be in a separate driver file and not
normally invoked.
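The variable name comes straight from the error message above; exporting
it in the test environment (or listing it under pytest-env's `env`
option) opts the deprecated GTM driver back in so its tests can run:

```shell
# Re-enable the deprecated GTM driver, per the GDAL 3.x error message.
export GDAL_ENABLE_DEPRECATED_DRIVER_GTM=YES
echo "GDAL_ENABLE_DEPRECATED_DRIVER_GTM=$GDAL_ENABLE_DEPRECATED_DRIVER_GTM"
```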

I looked into the ogr_mitab.py failure, and it looks like the crash
might just be a secondary failure -- something like using a NULL pointer
that wasn't expected, or C code missing input validation.

Test session starts (platform: netbsd9, Python 3.8.11, pytest 6.2.4, pytest-sugar 0.9.4)
cachedir: .pytest_cache
rootdir: /home/n0/gdt/SOFTWARE/GEO/GDAL/gdal/autotest, configfile: pytest.ini
plugins: sugar-0.9.4
collecting ...
 ogr/ogr_mitab.py::test_ogr_mitab_2 ✓                                                                                                                         2% ▎

――――――――――――――――――――――――――――――――――――――――――――――――――― test_ogr_mitab_3 ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

    def test_ogr_mitab_3():

        gdaltest.mapinfo_ds = ogr.Open('tmp')
>       gdaltest.mapinfo_lyr = gdaltest.mapinfo_ds.GetLayer(0)
E       AttributeError: 'NoneType' object has no attribute 'GetLayer'

/home/n0/gdt/SOFTWARE/GEO/GDAL/gdal/autotest/ogr/ogr_mitab.py:124: AttributeError

 ogr/ogr_mitab.py::test_ogr_mitab_3 ⨯                                                                                                                         3% ▍

―――――――――――――――――――――――――――――――――――――――――――――――――――――― test_ogr_mitab_4 ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

    def test_ogr_mitab_4():

>       sql_lyr = gdaltest.mapinfo_ds.ExecuteSQL(
            "select * from tpoly where prfedea = '35043413'")
E       AttributeError: 'NoneType' object has no attribute 'ExecuteSQL'

/home/n0/gdt/SOFTWARE/GEO/GDAL/gdal/autotest/ogr/ogr_mitab.py:157: AttributeError

 ogr/ogr_mitab.py::test_ogr_mitab_4 ⨯       5%

Fatal Python error: Segmentation fault
Current thread 0x000078c2de374800 (most recent call first):
  File "/usr/pkg/lib/python3.8/site-packages/osgeo/ogr.py", line 1143 in SetAttributeFilter
  File "/home/n0/gdt/SOFTWARE/GEO/GDAL/gdal/autotest/ogr/ogr_mitab.py", line 177 in test_ogr_mitab_5
  File "/usr/pkg/lib/python3.8/site-packages/_pytest/python.py", line 183 in pytest_pyfunc_call