[GRASS-user] increase performance of r.neighbors
Moritz Lennert
mlennert at club.worldonline.be
Fri May 26 09:20:42 PDT 2017
On Fri, 26 May 2017 11:04:14 -0400,
Giuseppe Amatulli <giuseppe.amatulli at gmail.com> wrote:
> Hi all,
> I'm running r.neighbors for a global 250m raster with window-size
> setting larger than 100 pixels.
That is a very, very large raster and a
very, very large window size. If you cover the whole of the earth and
my calculations are correct, you should have over 8 billion pixels. And
you are asking the computer to compute, for each of these pixels, a
statistic based on over 10,000 pixel values (100x100).
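To put rough numbers on this (a back-of-envelope sketch; the exact pixel count depends on the projection and region settings, which the original post does not give):

```python
# Back-of-envelope arithmetic for a global 250 m raster.
# Assumed figures; the real count depends on the projection used.
EARTH_CIRCUMFERENCE_M = 40_075_000   # at the equator, approx.
MERIDIAN_LENGTH_M = 20_004_000       # pole to pole, approx.
RES_M = 250                          # cell resolution in metres

cols = EARTH_CIRCUMFERENCE_M // RES_M
rows = MERIDIAN_LENGTH_M // RES_M
total_pixels = cols * rows
print(f"{total_pixels:,} pixels")          # well over 8 billion

window = 100                               # window size in pixels
values_per_window = window * window
print(f"{values_per_window:,} values per output pixel")

# Distance from the centre pixel to the window edge:
print(window // 2 * RES_M / 1000, "km")    # 12.5 km
```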
Does it really make sense in your use case to calculate a statistic
based on pixels some of which are 12.5 km away from your pixel?
> The computation takes almost 1 week and I'm wondering if there is a
> way to speed up the process.
>
> I know how to set up a multi-region & multi-core computation working
> in tiles, but I would like to avoid that because of the differences
> I would encounter at the tile borders (and tile overlap would be
> required).
If you work with sufficient tile overlap, you won't have the border
effect: you can simply drop the entire overlap area after the
calculation.
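The overlap needed is half the window size on each tile edge (50 pixels for a 100-pixel window): every output pixel inside the trimmed core then sees exactly the same neighbourhood as in a single-region run. A minimal sketch of the geometry, using a hypothetical helper (not a GRASS API):

```python
def tile_bounds(n_cols, tile_width, window):
    """Yield (read_start, read_end, keep_start, keep_end) column ranges
    for 1-D tiling with enough overlap for a `window`-pixel neighbourhood.
    The halo (window // 2 on each side) is read but dropped afterwards."""
    halo = window // 2
    for start in range(0, n_cols, tile_width):
        end = min(start + tile_width, n_cols)
        read_start = max(0, start - halo)   # extend the read region left
        read_end = min(n_cols, end + halo)  # and right, clamped to the raster
        yield read_start, read_end, start, end

# Example: 1000 columns, 300-column tiles, 101-pixel window (halo = 50).
for rs, re, ks, ke in tile_bounds(1000, 300, 101):
    print(f"read [{rs}, {re})  keep [{ks}, {ke})")
```

The same ranges apply per row; in GRASS terms, each (read_start, read_end) pair becomes a `g.region` setting for one r.neighbors run, and the kept cores tile the raster with no gaps and no border effect.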
>
> Is it possible to run r.neighbors in parallel or increase the memory
> that r.neighbors would use (as developed in r.watershed)?
AFAIK, r.neighbors already uses all available memory, but I'm not sure
that memory is the bottleneck here.
And internally, no parallelization is implemented.
So for running it in parallel, tiling is the only option, AFAIK.
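A small self-contained demonstration (plain NumPy, not GRASS itself) that tiling with a half-window halo reproduces the single-pass result exactly, so the tiles can be handed to independent r.neighbors jobs and stitched back together:

```python
import numpy as np

def mean_filter(a, window):
    """Naive moving-window mean with edge truncation (the same idea as
    r.neighbors method=average; for illustration only, not optimized)."""
    h = window // 2
    out = np.empty(a.shape, dtype=float)
    rows, cols = a.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = a[max(0, i - h):i + h + 1,
                          max(0, j - h):j + h + 1].mean()
    return out

rng = np.random.default_rng(0)
raster = rng.random((40, 60))
window = 7
h = window // 2          # halo: half the window on each side

# Single-pass reference result over the whole raster.
full = mean_filter(raster, window)

# Two column tiles, each read with an h-column halo; the halo columns
# are dropped before stitching.
mid = 30
left = mean_filter(raster[:, :mid + h], window)[:, :mid]
right = mean_filter(raster[:, mid - h:], window)[:, h:]
stitched = np.concatenate([left, right], axis=1)

print(np.allclose(stitched, full))   # the border effect disappears
```

The two tile computations are completely independent, so in practice each one can be a separate r.neighbors process on its own region (or tiles from r.tile), run in parallel.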
Moritz