[gdal-dev] downsampling geotiff with a low-pass filter
Even Rouault
even.rouault at spatialys.com
Sat Nov 17 04:46:21 PST 2018
Kay,
> This will be a long post, please forgive my verbosity, but this is not a
> trivial topic.
Thanks a lot for the in-depth analysis.
>
> 1.
>
> I'll start with cubic spline resampling. Thanks to Even for pointing me
> to the GDAL code. It looks to me as if the sample of the B-spline basis
> function is applied directly to the source data, with no prefiltering
> applied to the source data.
I confirm.
> The prefilter is a high-pass filter which
> 'sharpens' the original signal by just so much that applying the
> reconstruction filter will produce the original signal. The problem with
> prefiltering is that it's done with an IIR filter and has theoretically
> infinite support (usually it's enough to have a handful of support
> either end, forgive my sloppy formulation), which makes margin treatment
> and handling of no-data value a dicey issue. This is probably one reason
> why prefiltering tends to be omitted.
So there would be a need for a high-pass prefilter, after a low-pass one,
before applying the resampling function?
How would that high-pass prefilter be implemented? Do you have references
for that?
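From a quick look at the literature (M. Unser, "Splines: A Perfect Fit for
Signal and Image Processing", IEEE Signal Processing Magazine, 1999), the
classic implementation seems to be one causal and one anti-causal first-order
recursion, with pole sqrt(3) - 2 for the cubic B-spline; this is also what
scipy.ndimage.spline_filter1d does. An untested 1-D sketch in Python/NumPy,
assuming mirror boundaries and a truncated initialisation:

import numpy as np

def cubic_bspline_prefilter(s, tol=1e-9):
    # Direct cubic B-spline transform: "sharpens" s so that sampling the
    # B-spline reconstruction of the returned coefficients gives back s.
    z = np.sqrt(3.0) - 2.0                     # pole of 6 / (z + 4 + 1/z)
    c = np.asarray(s, dtype=np.float64) * 6.0  # overall gain (1-z)(1-1/z) = 6
    n = len(c)
    # causal initialisation: geometric sum truncated once |z|**k < tol
    horizon = min(n, int(np.ceil(np.log(tol) / np.log(abs(z)))))
    c[0] += np.sum(c[1:horizon] * z ** np.arange(1, horizon))
    for k in range(1, n):                      # causal pass
        c[k] += z * c[k - 1]
    c[-1] = (z / (z * z - 1.0)) * (z * c[-2] + c[-1])
    for k in range(n - 2, -1, -1):             # anti-causal pass
        c[k] = z * (c[k + 1] - c[k])
    return c

The truncated initial sum is where the "theoretically infinite support" you
mention shows up in practice: |z|**k drops below 1e-9 after about 16 samples,
which also gives an idea of the extra margin a chunked implementation would
need.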
Does that high-pass prefilter apply only to cubic B-spline, or to other
methods as well? Currently the code logic is common to Bicubic (Catmull-Rom),
Bicubic-BSpline and Lanczos; they differ only by the kernel radius and the
weighting function applied.
>
> 2.
> I'd encourage GDAL to consider
> adopting this method to avoid the worst of downsampling errors that come
> with '-r average'.
I somehow remember people mentioning this in the past too. (Ah, and "GDAL"
would be an abstract concept if it were not embodied by its contributors,
like you, potentially ;-) )
>
> 3.
>
> It's also possible to apply a low-pass filter no matter
> if there is high-frequency content: if there is no high-frequency
> content, a good low-pass filter has no effect.
It would probably be faster, computationally, to just apply the filter than
to first check whether it is needed. One point to keep in mind is that, for
memory reasons, and also because users want to process big images in chunks,
GDAL operates on chunks and not on the whole image, so "infinite support" is
at best limited to the source window passed to the function.
> What has to be understood here is that the filter has to match the
> desired scale change. The kernel has to be produced by a process taking
> the scale change into account, which can be done for a gaussian by
> choosing an appropriate standard deviation.
For a down-to-earth approach, are there recommended formulas to apply? Does
the result depend only on the scaling factor?
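Searching around a bit, one heuristic I can find is sigma = (s - 1) / 2 for
a downsampling factor s, which is what scikit-image uses by default for its
anti-aliasing, and which indeed depends only on the scaling factor. A sketch
(the function names are mine):

import numpy as np

def antialias_sigma(scale):
    # Heuristic: sigma = (s - 1) / 2 for downsampling factor s >= 1
    # (scikit-image's default anti_aliasing_sigma).
    return max(0.0, (scale - 1.0) / 2.0)

def gaussian_kernel(scale, truncate=4.0):
    # Separable 1-D kernel matched to the scale change.
    sigma = antialias_sigma(scale)
    if sigma == 0.0:
        return np.array([1.0])  # no scale change -> identity kernel
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

For s = 4 this gives sigma = 1.5 and, with the usual 4-sigma truncation, a
13-tap kernel.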
>
> Still with me? Thanks for your patience! I hope I have understood all
> the source code and response posts correctly, don't hesitate to correct me.
While we are assessing the correctness of the implementation, one point that
has sometimes been raised is the potential impact of the "pixel-is-area"
versus "pixel-is-point" convention. Typically an imaging sensor reports the
number of photons collected over an area in a range of wavelengths, so the
value of the pixel applies to the whole area it covers. Other datasets might
be the result of a variable evaluated at the centre of the pixel. GDAL
currently computes the distances to the pixel centre when computing the
weights of the resampling function, so it would seem to me that this is the
natural way to handle "pixel-is-point" datasets, and I'm wondering how good
it is for "pixel-is-area" ones.
Even
--
Spatialys - Geospatial professional services
http://www.spatialys.com