[gdal-dev] Testing the driver
Even Rouault
even.rouault at spatialys.com
Fri Feb 9 02:47:43 PST 2024
Abel,
On 09/02/2024 at 10:55, Abel Pau via gdal-dev wrote:
>
> Hi,
>
> I am at the last steps before making a pull request for the MiraMon
> driver.
> I need to write some documentation and formalize the tests.
>
> After that, I’ll do the pull request on GitHub.
>
Before issuing the pull request, I'd suggest that you first push to your
fork on GitHub and look at the Actions tab. That will allow you to fix a
lot of things on your side before issuing the PR itself.
>
> I am a little confused about the testing. I can use pytest or ctest,
> right? Which one is preferred? Are there any differences from the
> official documentation?
>
ctest is just the CMake way of launching the test suite. It will execute
the C++ tests of autotest/cpp directly and, for tests written in Python,
will launch "pytest autotest/XXXXX" for each directory.
"ctest --test-dir $build_dir -R autotest_ogr -V" will run all the
autotest/ogr tests, which can already take quite a while.
To test your own development, you may have a more pleasant experience by
directly running just the tests for your driver with something like
"pytest autotest/ogr/ogr_miramon.py". (Be careful on Windows: the
content of $build_dir/autotest is copied from $source_dir/autotest each
time "cmake" is run, so if you edit your test .py file directly in the
build directory, be very careful not to accidentally lose your work, and
make sure to copy its content back to the source directory first. That
is admittedly an annoying point of the current test setup on Windows,
compared to Unix, where we use symbolic links.)
Before running pytest, set the environment so that PYTHONPATH points to
something like $build_dir/swig/python/Release or
$build_dir/swig/python/Debug (I believe you're on Windows?). If you look
at the first lines output by the above
"ctest --test-dir $build_dir -R autotest_ogr -V" invocation, you'll
actually see the PYTHONPATH value to specify.
You also need to first install pytest and the other testing dependencies
with: python -m pip install -r autotest/requirements.txt
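For reference, a minimal sketch of how autotest/ogr/ogr_miramon.py could
start would be something like the snippet below. The driver short name
"MiraMon" and the test file path are assumptions on my side; adjust them
to whatever your driver actually registers and ships.

import pytest

from osgeo import ogr

# Skip the whole module if the driver is not available in this build;
# require_driver is the custom marker used throughout the autotest suite.
pytestmark = pytest.mark.require_driver("MiraMon")


def test_ogr_miramon_basic_open():
    # Hypothetical small test file under autotest/ogr/data/miramon/
    ds = ogr.Open("data/miramon/simple_points")
    assert ds is not None
    assert ds.GetLayerCount() >= 1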
> Is there a minimal test to create?
>
A maximal test suite, you mean ;-) You should aim for a "reasonable"
coverage of the code you wrote. Aiming to test the nominal code paths of
your driver is desirable (testing the error cases generally requires a
lot more effort).
>
> Can you recommend some drivers that test things like:
>
> ·Read a point/arc/polygon layer from some format (GML, KML, GPKG, ...)
> and assert the number of read objects
>
> ·Read a point layer and assert some points (3D included) and some of
> the field values
>
> ·The same with arcs and polygons
>
> ·Create some layers from our own format into other formats and compare
> the results with some “good” reference results.
>
> ·Create multiple layers from another format (like GPX) and verify the
> names of the created files...
>
You don't necessarily need to use other formats. It is actually better
if the tests of a format don't depend too much on other formats, to keep
things isolated.
To test the read part of your driver, add an autotest/ogr/data/miramon
directory with *small* test files, ideally at most a few KB each to keep
the size of the GDAL repository reasonable; a few features per file,
with different types of geometries and attributes, is often enough for
unit testing. Then use the OGR Python API to open each file and iterate
over its layers and features to check their content. Those files should
ideally have been produced by the MiraMon software and not by the
writing side of your driver, to check the interoperability of your
driver with a "reference" piece of software.
For the write side of the driver, you can for example run
gdal.VectorTranslate(dest, source) on those files, and use the same test
function again to validate that the read side of your driver likes what
the write side has produced. An alternative is to do a binary comparison
of the file generated by your driver with a reference test file stored
in, for example, autotest/ogr/data/miramon/ref_output. But this may
sometimes be a fragile approach if the output of your driver changes in
the future (it would require regenerating the reference test files).
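As a sketch of that round-trip approach (again, the file names and the
output format short name "MiraMon" are assumptions):

from osgeo import gdal, ogr


def test_ogr_miramon_write_roundtrip(tmp_path):
    src_path = "data/miramon/simple_points"  # hypothetical reference file
    out_path = str(tmp_path / "roundtrip_points")

    # Write with the driver ("MiraMon" as output format name is assumed).
    out_ds = gdal.VectorTranslate(out_path, src_path, format="MiraMon")
    assert out_ds is not None
    out_ds = None  # close the dataset so everything is flushed to disk

    # Re-read the output with the read side of the driver and compare.
    src_ds = ogr.Open(src_path)
    dst_ds = ogr.Open(out_path)
    assert dst_ds is not None
    src_lyr = src_ds.GetLayer(0)
    dst_lyr = dst_ds.GetLayer(0)
    assert dst_lyr.GetFeatureCount() == src_lyr.GetFeatureCount()
    assert dst_lyr.GetGeomType() == src_lyr.GetGeomType()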
I'd suggest your test suite also include a test that runs the
"test_ogrsf" command line utility, which is a kind of compliance test
suite that checks a number of expectations for a driver, such as
GetFeatureCount() returning the same number of features as iterating
with GetNextFeature(), etc.
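The usual pattern for that in autotest looks something like the snippet
below, using the gdaltest and test_cli_utilities helpers from
autotest/pymod; the data path passed to -ro is a placeholder.

import pytest

import gdaltest
import test_cli_utilities


def test_ogr_miramon_test_ogrsf():
    # Skip if the test_ogrsf utility was not built.
    if test_cli_utilities.get_test_ogrsf_path() is None:
        pytest.skip("test_ogrsf utility not available")

    ret = gdaltest.runexternal(
        test_cli_utilities.get_test_ogrsf_path()
        + " -ro data/miramon/simple_points"
    )
    assert "INFO" in ret
    assert "ERROR" not in ret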
It is difficult to point at a "reference" test suite, as all drivers
have their particularities and may need specific tests. Potential
sources of inspiration:
- autotest/ogr/ogr_gtfs.py: shows very simple testing of the read side
of a driver, and includes a test_ogrsf test
- autotest/ogr/ogr_csv.py: has examples where the writing side of the
driver is checked by opening the output file and checking that some
strings are present in it (only easily doable with text-based formats)
- autotest/ogr/ogr_openfilegdb_write.py: extensive testing of the
writing side of a driver. A lot of it will be specific to that format
and irrelevant to your concern, but you should at least find in it
examples of all the possible aspects of how to test the write side of a
driver.
Even
--
http://www.spatialys.com
My software is free, but my time generally not.