[gdal-dev] Best way to get GPU accelerated GDAL binaries?

Arthur Nieuwland a.nieuwland at sobolt.com
Mon Nov 9 02:40:02 PST 2020


Hello mailing list,

I am trying to use gdalwarp and gdal_translate to do manipulations on a
large raster using a vector file. The process is taking a very long time
(about a week, by my estimate), presumably because the raster is large and
the vector file is complex.

For that reason I want to accelerate the calculations using GPU/OpenCL
acceleration. What is the best way to get such binaries? Is there maybe a
docker image with such binaries? Or a tgz prebuilt with CUDA/OpenCL support?

I tried compiling it myself: GDAL 2.4.2 on Ubuntu 16.04 with CUDA 9.0.
While the compilation succeeded, running the binaries resulted in a core
dump. If compiling is the best way, which configuration (Linux distro /
CUDA version / ...) works well?
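For context, the build recipe I followed was roughly the one below. This is
only a sketch of GDAL 2.4.2's autotools build; --with-opencl is, as far as I
can tell, the configure flag that enables the OpenCL warp kernels, and the
exact steps may need adjusting for other setups:

    # Build GDAL 2.4.2 with OpenCL-accelerated warping (sketch)
    wget https://download.osgeo.org/gdal/2.4.2/gdal-2.4.2.tar.gz
    tar xzf gdal-2.4.2.tar.gz
    cd gdal-2.4.2
    ./configure --with-opencl    # enable the OpenCL warp kernels
    make -j$(nproc)
    sudo make install
    sudo ldconfig                # refresh the shared-library cache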

Lastly, does GPU acceleration make masking with gdalwarp faster? The raster
is 409600 by 241600 pixels, data type Byte stored with 2 bits per pixel. The
command I want to run is:

    gdalwarp -cutline <complex gpkg> -srcnodata 3 -dstnodata 3 \
        -co BIGTIFF=YES -co TILED=YES -multi in.vrt out.tif

To summarize, I'd like to know:

1. Do prebuilt binaries exist with GPU acceleration enabled?
2. If I need to compile them myself, what are the best versions to do so
with?
3. Optionally, will GPU acceleration help me mask my input raster?
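In case the GPU turns out not to help with the cutline step, I am also
considering CPU-side tuning of the same command, roughly as below. This is a
sketch; the thread count, warp memory (-wm, in MB), and cache size are
guesses I would adjust for the machine at hand:

    gdalwarp -cutline <complex gpkg> -srcnodata 3 -dstnodata 3 \
        -multi -wo NUM_THREADS=ALL_CPUS -wm 1024 \
        --config GDAL_CACHEMAX 2048 \
        -co BIGTIFF=YES -co TILED=YES \
        in.vrt out.tif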

Thanks in advance,
Arthur Nieuwland
