[gdal-dev] downsampling geotiff with a low-pass filter

Kay F. Jahnke kfj at yahoo.com
Sat Nov 17 02:04:13 PST 2018


Thank you all for responding. Rather than replying to each individual 
post, I'll try to summarize what I have gleaned so far; please correct 
me if I have misunderstood something. This will be a long post, so 
please forgive my verbosity, but this is not a trivial topic.

1.

I'll start with cubic spline resampling. Thanks to Even for pointing me 
to the GDAL code. It looks to me as if the sampled B-spline basis 
function is applied directly to the source data, with no prefiltering 
of the source data: the sampled basis function is simply used to 
produce a weighted average of source data values. If prefiltering is 
indeed omitted, the resulting interpolation has a slightly smoothing 
effect and does not fulfill the interpolation criterion, meaning that 
evaluating the result at locations coinciding with source data points 
does not precisely reproduce the source data. This seems to be what's 
happening: without a scale change, applying cubic spline resampling 
seems to slightly smooth the signal.

Applying the B-spline evaluation without prior prefiltering is commonly 
done - sometimes deliberately, because the slight smoothing is intended 
and the resulting signal stays within the convex hull of the 
coefficients (so there won't be overshoots, which are possible with 
prefiltered coefficients) - and sometimes erroneously, because the need 
for prefiltering is not known. For slight downscaling, this smoothing 
may be 'just right', but it's only 'about right' for a small range of 
scaling factors. Note here that B-spline prefiltering is not the 
low-pass filter which has to be applied to the source data before 
resampling.

B-splines are not 'direct' interpolators. Direct interpolators 
approximate the continuous signal by applying some operation to the 
given samples. B-splines instead apply an operation to a set of 
coefficients, which are generated from the original samples by 
'prefiltering' them. This can be explained simply: the 'reconstruction' 
filter for a given B-spline is a low-pass filter, so applying it to the 
original signal will 'blur' that signal. The prefilter is a high-pass 
filter which 'sharpens' the original signal by just so much that 
applying the reconstruction filter reproduces the original signal. The 
problem with prefiltering is that it's done with an IIR filter and has 
theoretically infinite support (in practice a handful of samples of 
support at either end is enough, forgive my sloppy formulation), which 
makes margin treatment and the handling of no-data values a dicey 
issue. This is probably one reason why prefiltering tends to be 
omitted.
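
To make the difference concrete, here's a minimal sketch in Python. It 
uses scipy's B-spline machinery rather than GDAL's, so it only 
illustrates the principle, not GDAL's code: a cubic B-spline is 
evaluated exactly at the sample points, with and without prefiltering.

    import numpy as np
    from scipy.ndimage import map_coordinates

    signal = np.random.rand(20)
    # evaluate exactly at the sample points
    coords = np.arange(20, dtype=float).reshape(1, -1)

    # prefilter=True: coefficients come from the IIR prefilter, so the
    # reconstruction satisfies the interpolation criterion
    exact = map_coordinates(signal, coords, order=3,
                            mode='mirror', prefilter=True)

    # prefilter=False: the B-spline basis is applied to the raw samples,
    # which acts as a mild low-pass and does not reproduce them
    smoothed = map_coordinates(signal, coords, order=3,
                               mode='mirror', prefilter=False)

    print("with prefilter:   ", np.max(np.abs(exact - signal)))
    print("without prefilter:", np.max(np.abs(smoothed - signal)))

With prefiltering the deviation is at the level of numerical noise; 
without it the samples come out visibly smoothed.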

2.

Looking at the source code, what I understand from it is that, as I 
stated in my initial post, upsampling is indeed well covered, but the 
only 'filter' which makes sense in a downsampling context is 'average', 
with all the disadvantages it has. In the GRASS codebase (thanks to 
Markus for pointing to it) I noticed that the need for a specialized 
downsampling technique is acknowledged, and there is a filtering method 
which is definitely better suited than just averaging over contributing 
source data values: the 'weight according to area' option (-w), which 
can be passed to the r.resamp.stats module. This technique is also used 
in astronomy - I first saw it in OpenCV, where it is one of the 
standard interpolators ('area'). It produces output data which preserve 
the energy content of the input signal: the input is treated as if it 
were composed of small rectangles of uniform value, and each target 
value is the average over the part of that pattern covered by the 
(larger) target pixel. This method is definitely preferable to using 
r.resamp.stats without the -w flag. It also has the nice properties of 
small support and of producing a signal which will not over- or 
undershoot. I'd encourage GDAL to consider adopting this method to 
avoid the worst of the downsampling errors that come with '-r average'.
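
For reference, this is what the 'area' interpolator looks like from 
Python via OpenCV - a quick sketch, assuming numpy and opencv-python 
are installed; the random array just stands in for a raster band:

    import numpy as np
    import cv2

    src = np.random.rand(1000, 1500).astype(np.float32)
    # dsize is (width, height): factor 3 in x, factor 4 in y
    dst = cv2.resize(src, (500, 250), interpolation=cv2.INTER_AREA)
    # the mean is preserved up to floating-point rounding
    print(src.mean(), dst.mean())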

3.

Downsampling usually has to be preceded by an appropriate low-pass 
filter. Let me reiterate when such a filter is needed:

The highest representable frequency in the target (downsampled) signal 
is given by the Nyquist frequency for this signal's sampling rate. If 
the source signal has frequency content higher than that, it *must* be 
low-pass filtered before resampling, because otherwise the 
high-frequency content will 'alias' into the target signal and produce 
artifacts. If such high-frequency content is not present, resampling can 
proceed without previous low-pass filtering.
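
As a tiny demonstration of what aliasing does (plain numpy, nothing 
GDAL-specific): a sine above the target Nyquist frequency, decimated 
without a low-pass filter, reappears at a lower frequency.

    import numpy as np

    n = 256
    x = np.arange(n)
    f_src = 0.4                             # cycles per source pixel
    signal = np.sin(2 * np.pi * f_src * x)  # fine at the source rate
    decimated = signal[::4]                 # target Nyquist: 0.125 c/px

    spectrum = np.abs(np.fft.rfft(decimated))
    peak_bin = np.argmax(spectrum[1:]) + 1    # skip the DC bin
    apparent = peak_bin / len(decimated) / 4  # back to cycles/source px
    print("true:", f_src, "apparent after decimation:", apparent)
    # prints roughly 0.1 - the 0.4 component has aliased down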

Oftentimes the high-frequency content is small, and especially when 
this comes together with a small amount of downscaling, omitting the 
low-pass filter will only produce small errors which are not easily 
seen, so the result may pass as correct, looking 'okay', when in fact 
it is slightly wrong, but not noticeably so. With increasing 
high-frequency content and larger downscaling factors, the error grows 
and may 'spoil' the signal so much that it becomes noticeable.

So, first, we have to establish whether there is a need for low-pass 
filtering, and the advice would be to use it if there is relevant 
high-frequency content. This might be seen from an FFT of the signal. 
If the FFT is at hand, the high-frequency part can be removed in the 
frequency domain, and after the IFFT the signal is 'eligible' for 
resampling without further ado. This is a straightforward approach, but 
it does not play well with no-data values. It's also possible to apply 
a low-pass filter whether or not there is high-frequency content: if 
there is no high-frequency content, a good low-pass filter has no 
effect. So now we come to the next question, namely 'what is a good 
low-pass filter?'.
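
For illustration, a bare-bones numpy sketch of that frequency-domain 
approach (my own code, not taken from any of the packages mentioned). 
Note that a hard cutoff like this can ring near sharp edges, and it 
assumes the band contains no no-data values:

    import numpy as np

    def fft_lowpass(band, factor):
        """remove content above the Nyquist of a grid downsampled
        by 'factor'"""
        spectrum = np.fft.rfft2(band)
        fy = np.fft.fftfreq(band.shape[0])[:, None]   # row freqs, c/px
        fx = np.fft.rfftfreq(band.shape[1])[None, :]  # col freqs, c/px
        cutoff = 0.5 / factor            # Nyquist of the target grid
        spectrum[(np.abs(fy) > cutoff) | (fx > cutoff)] = 0.0
        return np.fft.irfft2(spectrum, s=band.shape)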

4.

Thanks to Jakob for pointing me to your Python code. I'll happily use 
Python :D

Using a gaussian to smooth (or low-pass-filter) the source signal is a 
valid approach, especially since the radius of the gaussian can be 
chosen to match the desired degree of smoothing, which, in a 
downsampling context, depends on the scale change. Using a truncated 
gaussian (the 'true gaussian' is infinitely large) as a convolution 
kernel is common and well-understood. The suppression of high-frequency 
content is not as sharp as one might wish; there are filters with 
smaller transition bands and better stop-band attenuation, but they have 
to be custom-designed for a given downsampling factor, and from what 
I've seen this is not commonly done in general-purpose software.

I'd say that using a gaussian - or another convolution-based low-pass 
filter - is a good idea and should outperform the 'weight according to 
area' method, with the downside of the wider support, which makes the 
handling of no-data values more difficult. Gaussian kernels will also 
not produce overshoots (because all weights are positive), which is a 
plus.

What has to be understood here is that the filter has to match the 
desired scale change. The kernel has to be produced by a process taking 
the scale change into account, which can be done for a gaussian by 
choosing an appropriate standard deviation. So a general-purpose 
downsampling algorithm would perform these steps (a rough sketch 
follows the list):

- generate a custom filter (like, a kernel) given the desired scale change

- apply this filter to the source data

- use an interpolator over the low-pass filtered signal to obtain the 
target values for the given target locations.
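
Here's a rough sketch of those three steps, assuming numpy and scipy; 
the rule of thumb sigma = factor / 2 used below is just one common 
heuristic for matching the gaussian to the scale change, not a 
prescription:

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def downsample(band, factor):
        # 1. generate a filter matched to the desired scale change
        sigma = factor / 2.0
        # 2. apply the low-pass filter to the source data
        smoothed = gaussian_filter(band, sigma)
        # 3. interpolate the filtered signal at the target pixel centres
        out_rows = int(band.shape[0] / factor)
        out_cols = int(band.shape[1] / factor)
        rows = (np.arange(out_rows) + 0.5) * factor - 0.5
        cols = (np.arange(out_cols) + 0.5) * factor - 0.5
        grid = np.meshgrid(rows, cols, indexing='ij')
        return map_coordinates(smoothed, grid, order=3)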

Not taking the scale change into account and using a fixed kernel will 
only be appropriate for a specific scale change, so it's no good for the 
general purpose case, but can be used for specific scale changes, like 
halving the resolution ('half-band filter'), which may be repeated 
several times to cover larger scale changes.
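
For the fixed-kernel case, a minimal sketch of repeated halving - here 
with the binomial 1 4 6 4 1 kernel, which is a common stand-in for a 
proper half-band filter:

    import numpy as np
    from scipy.ndimage import convolve1d

    def halve(band):
        k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
        # separable 2D low-pass, then take every other sample
        low = convolve1d(convolve1d(band, k, axis=0), k, axis=1)
        return low[::2, ::2]

    def downsample_pow2(band, steps):
        for _ in range(steps):   # e.g. steps=3 for a factor-8 reduction
            band = halve(band)
        return band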

5.

This gets me to Jukka's response pointing me to VRT files with 
KernelFilteredSource. I hadn't seen/noticed this before, thanks for 
pointing it out! With the proviso that the kernel suits the scaling 
factor (so, it has to be computed for a desired scale change) this looks 
like an excellent way of dealing with the problem: The filter is 
conceptually attached to the source data where it belongs, and 
interpolation of target values operates on the filtered data. Neat. It 
leaves the user with having to figure out a suitable kernel - a kernel 
derived from a truncated (or otherwise windowed) gaussian (needs to be 
normalized, though) would be a good starting point. I'll try that and 
report back. It also fits well with my current workflow of using shell 
scripts and gdal.
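
As a starting point, here is a small helper (my own sketch) which 
produces a normalized, truncated gaussian kernel whose coefficients can 
be pasted into the Coefs element of a KernelFilteredSource; 'size' must 
be odd:

    import numpy as np

    def gaussian_coefs(size, sigma):
        r = np.arange(size) - size // 2
        g1d = np.exp(-0.5 * (r / sigma) ** 2)
        k = np.outer(g1d, g1d)       # separable 2D gaussian
        k /= k.sum()                 # normalize to unit sum
        return " ".join("%.6f" % c for c in k.ravel())

    # paste the output into the VRT's <Coefs> element
    print(gaussian_coefs(5, 1.2))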

Also, thanks to Joaquim. The code you link to also looks like it will 
suit the problem, but I'll admit to a bit of overload here and defer 
looking at it in more detail to sometime later. What I found especially 
attractive is 'grdfft', which performs operations in the frequency 
domain, which is, as I have pointed out above, a very clean and 
desirable way of treating the problem and should be ideal, but might be 
difficult to use if there are no-data values - and the FFT and IFFT are 
certainly computationally intensive. I think that the GRASS code base also 
offers FFT and IFFT, so it might be possible to use GRASS to the same 
effect.

6.

To conclude:

- downsampling without previous low-pass filtering is normally an error.

- upsampling is a very different matter and requires no previous 
filtering; it's necessary to see the distinction.

- there is a wide spectrum of possible low-pass filters, but the filter 
has to be customized to the given scaling factor. There is no 'one 
kernel fits all' approach.

- once the low-pass filtered signal has been obtained, it can be 
downsampled using any of the available methods, but preferably *not* 
nearest neighbour. Cubic spline is good here.

Still with me? Thanks for your patience! I hope I have understood all 
the source code and the response posts correctly; don't hesitate to 
correct me.

Kay

