[QGIS-Developer] Raster pipeline issues

Nyall Dawson nyall.dawson at gmail.com
Wed Apr 15 21:09:38 PDT 2020


On Thu, 16 Apr 2020 at 07:46, Martin Dobias <wonder.sk at gmail.com> wrote:
>
> Hi all
>
> With the recent addition of the contour renderer for rasters I have
> realized there are artifacts coming into the renderer from the earlier
> stages (mainly the raster data provider). Another issue is that raster
> resampling (nearest neighbor / bilinear / bicubic) is only applied at
> the end of the raster processing pipeline (i.e. after the contour
> renderer has been run), which does not really help the contour
> renderer produce smooth contour lines (rather, one gets somewhat
> pixelated, blurry output). Moreover, hopefully I am not the only one
> seeing occasional subtle "jumps" of raster data when zooming/panning
> the map, which I would attribute to some small math errors in the QGIS
> raster code.

I agree that the current approach has issues. I'm glad to see this
getting wider attention!

> Looking at the GDAL raster data provider in QGIS, there's quite a bit
> of code complexity, with copying of data between temporary buffers and
> dealing with limitations that existed in GDAL < 2.0 (e.g. non-integer
> source windows).

Frustratingly, we've actually got 3 different sources of resampling
happening during raster rendering:
1. Internal resampling happening inside GDAL
2. A very coarse resampling run happening inside QgsGdalProvider::readBlock
3. The raster pipeline resampling, which happens **after** the renderer
   (a rough PyQGIS sketch of how this one is configured follows below).
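
For reference, here's a rough PyQGIS sketch of where that third stage
is configured today (the layer path is just a placeholder) - note that
these resamplers operate on the image the renderer has already
produced, not on the raw band values:

    from qgis.core import (QgsRasterLayer, QgsBilinearRasterResampler,
                           QgsCubicRasterResampler)

    # Hypothetical layer path - run from the QGIS Python console
    layer = QgsRasterLayer('/data/dem.tif', 'dem')

    # The resample filter sits in the raster pipe AFTER the renderer,
    # so it smooths rendered colors rather than raw float values
    rf = layer.resampleFilter()
    rf.setZoomedInResampler(QgsBilinearRasterResampler())
    rf.setZoomedOutResampler(QgsCubicRasterResampler())
    rf.setMaxOversampling(2.0)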

> I am wondering what people think about simplifying the QGIS raster pipeline:
> - get rid of the last stage of the pipeline - resampling - and make it
> the responsibility of data providers (i.e. GDAL) to return resampled data

It's important to note that these are two very different things. I
recently did some work on the resampler to avoid edge effects, and in
my initial experiments I attempted moving the resampling right up to
the provider level, exactly as you describe. The results, however, are
HUGELY different from the current resample-after-rendering approach.
Unfortunately I junked the code from that early approach, so I can't
show screenshots, but I'll try to describe it in words :p

Imagine a lower-resolution float raster being resampled up to a higher
resolution (e.g. zoomed in past 1:1). You might be taking 100x100
pixels from the GDAL provider, then applying a discrete color ramp to
these, before resampling up to 1000x1000 for rendering. The result will
be a blurry version of the discrete colors - i.e. the "hard"
boundaries between color-mapped pixels are "softened" when enlarging
this 100x100 colored block out to 1000x1000.

The other approach is to do the resampling first, in GDAL. What we get
then is 1000x1000 pixels of resampled float values to begin with,
smoothly interpolated from the 100x100 original pixels. This is then
rendered using the discrete color ramp, without any further resampling
or smoothing. What you get here looks more like smoothed contours of
the discrete color bands, with NO color smoothing of the rendered
pixels. The result looks much higher resolution (because we are
interpolating the floating point values on a 1000x1000 grid), but the
"boundaries" between the discrete color values are "hard" and have no
antialiasing.

I honestly think there are valid use cases for both pre-render and
post-render resampling as a result. We can't directly replace the
current post-render approach with a forced pre-render approach without
people getting VERY different rendering of their existing projects.

That said, the benefits of pre-render resampling are huge, including
that we can instantly expose all of GDAL's inbuilt resampling methods
(adding average, mode, max, min, median, q1 and q3). The results are
also more accurate, because we are doing a single resampling run
instead of three. And, for some purposes, it results in a more visually
pleasing render.
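
At the read level this is roughly what GDAL can already do for us -
a sketch using the GDAL Python bindings with a placeholder path
(bilinear/cubic/average/mode and friends are exposed as GRIORA_*
constants at the RasterIO level; if I recall correctly the min/max/
median/q1/q3 ones live on the warp/overview side):

    from osgeo import gdal

    gdal.UseExceptions()
    ds = gdal.Open('/data/dem.tif')  # placeholder dataset
    band = ds.GetRasterBand(1)

    # Ask GDAL for a 1000x1000 buffer of the whole band - the
    # interpolation of the raw values happens inside GDAL, so there is
    # nothing left for QGIS to resample after rendering.
    resampled = band.ReadAsArray(
        0, 0, band.XSize, band.YSize,
        buf_xsize=1000, buf_ysize=1000,
        resample_alg=gdal.GRIORA_Bilinear,
    )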

> - get rid of the reprojector stage of the pipeline - and again make it
> the responsibility of data providers (i.e. GDAL) to return already
> warped data

This sounds like a good idea, BUT we'd need to be very careful that
any transformation happening inside GDAL correctly uses the same
coordinate operation as you'd get if the transformation was done on
the QGIS side. That is, you'd have to evaluate the coordinate operation
using the transform context, pass this to GDAL, and make sure that
GDAL also has access to the same PROJ search paths as QGIS does.
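
Very roughly, the kind of plumbing I mean (PyQGIS plus the GDAL Python
bindings, GDAL 3.x; the paths and CRSs are placeholders, and this is
only a sketch of the idea, not how it would actually be wired into the
provider):

    from qgis.core import QgsProject, QgsCoordinateReferenceSystem
    from osgeo import gdal

    src_crs = QgsCoordinateReferenceSystem('EPSG:4326')
    dst_crs = QgsCoordinateReferenceSystem('EPSG:3857')

    # Ask the project's transform context which coordinate operation
    # QGIS itself would use ('' means the default operation applies)
    ctx = QgsProject.instance().transformContext()
    op = ctx.calculateCoordinateOperation(src_crs, dst_crs)

    warp_kwargs = dict(dstSRS=dst_crs.authid(), resampleAlg='bilinear')
    if op:
        # Force GDAL to use exactly the operation QGIS selected
        warp_kwargs['coordinateOperation'] = op

    # GDAL also needs to see the same PROJ search paths as QGIS,
    # e.g. via osr.SetPROJSearchPaths(), or grids may not be found.
    gdal.Warp('/tmp/warped.tif', '/data/dem.tif', **warp_kwargs)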

> - any other raster data provider that is not able to do it internally
> would just use GDAL routines

> My hope is that we could get rid of a lot of raster related code and
> at the same time use GDAL's optimized code even more. I am not saying
> I will be working on this anytime soon, but I would be interested to
> hear what others think about the current state of the raster pipeline
> :-)
>
> Cheers
> Martin

Ok, so we expect to see this done by May then? :D

Nyall

