[GRASS-dev] lidar tools update in grass7

Soeren Gebbert soerengebbert at googlemail.com
Mon May 3 06:11:22 EDT 2010


Hello Markus,

2010/5/3 Markus Metz <markus.metz.giswork at googlemail.com>:
> On Wed, Apr 28, 2010, Soeren Gebbert wrote:
>>>
>>> Hmm, if I understand the code right, only the innermost for loop [of the tchol solver] can be
>>> executed in parallel because the first two for-loops need results of
>>> previous runs (not possible to run i + 1 before i, same for j). But I don't
>>> know that parallel stuff, I would in any case first optimize the code
>>> without parallelization, only then add multithreading.
>>
>> I fully agree. The band Cholesky solver is well suited for the job but
>> not designed for parallelization. Thus I implemented an iterative
>> Conjugate Gradient (CG) solver for band matrices (just by replacing
>> the matrix-vector multiplication) to replace the Cholesky band
>> solver. Well, the Cholesky solver outperforms the CG solver by a
>> factor of 10+. I have partly parallelized the CG solver for band
>> matrices, but even the speedup on an 8-core system is too small to
>> compete with the Cholesky band solver. Besides that, the created
>> band matrices are badly conditioned, so the CG solver may fail for
>> subregions.
>
> IMHO, it doesn't really make sense to replace a function with another
> function that's 10+ times slower and produces worse results... If you
> need a parallelized version and a 10+ core system to achieve the same
> speed, and the results are not as good as with the original tchol
> solver, I would suggest keeping the original tchol solver until
> the CG solver is faster and produces identical results (also for
> asymmetrical matrices).

Sorry for my misleading words, I did not suggest using the parallel
CG algorithm instead of the Cholesky solver. I just wanted to say
that I tried to find an alternative and failed.
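
For the record, the idea was only to swap the matrix-vector product
inside an otherwise standard CG iteration. A toy sketch in plain
Python (the band storage scheme and function names here are my own,
not the GRASS C code):

```python
# Sketch: conjugate gradient where the only matrix-specific part is
# the band matrix-vector product. The symmetric band matrix is stored
# in upper-band form: band[i][k] holds A[i][i+k] for k = 0..bandwidth
# (zero-padded past the matrix border).

def band_matvec(band, x):
    """y = A * x for a symmetric band matrix in upper-band storage."""
    n = len(x)
    bw = len(band[0]) - 1
    y = [0.0] * n
    for i in range(n):
        y[i] += band[i][0] * x[i]          # main diagonal
        for k in range(1, bw + 1):         # off-diagonals, used twice
            j = i + k                      # because A is symmetric
            if j < n:
                y[i] += band[i][k] * x[j]
                y[j] += band[i][k] * x[i]
    return y

def cg_band(band, b, max_iter=1000, tol=1e-10):
    """Plain CG; assumes A is symmetric positive definite."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                            # r = b - A*x with x = 0
    p = list(r)
    rr = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = band_matvec(band, p)
        alpha = rr / sum(p[i] * Ap[i] for i in range(n))
        for i in range(n):
            x[i] += alpha * p[i]
            r[i] -= alpha * Ap[i]
        rr_new = sum(v * v for v in r)
        if rr_new < tol * tol:
            break
        beta = rr_new / rr
        for i in range(n):
            p[i] = r[i] + beta * p[i]
        rr = rr_new
    return x
```

Only the inner loops of band_matvec and the vector updates are
candidates for parallelization here, which is exactly why the speedup
stays modest compared to the sequential Cholesky factorization.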

>
>> Hence, I would suggest using MPI parallelization or something else on
>> a per-process basis, to concurrently compute the subregions which are
>> created by default. Maybe a Python script using overlapping tiling
>> and subprocess can do the job?
>
> The handling of the overlapping tiles is deeply embedded in the module
> code; at first glance it seems to me that you would have to completely
> translate the module to Python and then call the interpolation
> routines from Python. Seems messy to me. The subregion tiling is a bit
> tricky and I spent quite some time fixing and optimizing it.
>

I don't intend to move the tiling from the C code into the
Python code, but to have a Python module which is able to tile huge
regions into smaller subregions, avoiding edge effects using
overlapping strategies. These subregions should still be large enough
so that the lidar tools can use their own tiling strategies. No
interpolation is called from the Python modules.
If that means re-implementing the lidar tiling mechanism in Python,
with spline estimation and all the fancy algorithms, it would indeed
be too messy. I was hoping that there are simpler overlapping
strategies avoiding edge effects (large overlapping edges which are
cut to smaller ones when patching the large tiles using a pixel
averaging strategy ...).
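
To make the idea concrete, here is a minimal sketch of what I have in
mind (hypothetical helper functions, not an existing GRASS module):
each subregion is extended by an overlap, only the extended tile is
handed to the lidar tool, and overlapping pixels are averaged when
patching.

```python
# Sketch of the outer overlapping-tiling idea. Bounds are in cells:
# (row_start, row_end, col_start, col_end), end exclusive.

def make_tiles(n_rows, n_cols, tile, overlap):
    """Return (core, extended) bound pairs. The extended tile is what
    gets interpolated; the overlap exists only to push edge effects
    away from the core that is actually kept."""
    tiles = []
    for r0 in range(0, n_rows, tile):
        for c0 in range(0, n_cols, tile):
            r1 = min(r0 + tile, n_rows)
            c1 = min(c0 + tile, n_cols)
            core = (r0, r1, c0, c1)
            ext = (max(r0 - overlap, 0), min(r1 + overlap, n_rows),
                   max(c0 - overlap, 0), min(c1 + overlap, n_cols))
            tiles.append((core, ext))
    return tiles

def patch(n_rows, n_cols, results):
    """results: list of (bounds, 2-D value list) per tile; the tiles
    are assumed to cover the whole region. Pixels covered by several
    tiles are averaged."""
    acc = [[0.0] * n_cols for _ in range(n_rows)]
    cnt = [[0] * n_cols for _ in range(n_rows)]
    for (r0, r1, c0, c1), vals in results:
        for i in range(r0, r1):
            for j in range(c0, c1):
                acc[i][j] += vals[i - r0][j - c0]
                cnt[i][j] += 1
    return [[acc[i][j] / cnt[i][j] for j in range(n_cols)]
            for i in range(n_rows)]
```

The extended tiles could then be processed concurrently, e.g. one
subprocess per tile, without touching the C code at all.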

[snip]
> As above, IMHO that would be a complete rewrite of the modules in
> python. Maybe because I am more familiar with C. Changing the C code
> so that tiles can be processed in parallel should not be too hard.

I see. That would mean refactoring the C code, putting the
consolidated tiling algorithms into the lidar library and implementing
MPI parallelism in the lidarlib and each module. That's IMHO a lot of
work too. :)

[snip]
> r.tileset does not do the same as the subregion tiling done in the
> lidartools because of the overlapping. The outermost overlapping edge
> is used for interpolation but the values interpolated there are never
> used; it exists to avoid edge effects along the borders of the
> subregion. The interpolation results of the second overlapping zone
> are used but weighted according to their distance to the core zone.
> The interpolation results of the core zone are used as is. The extents
> of the overlapping zones dramatically influence accuracy and speed,
> and the current design should be kept in order to avoid edge effects.
> AFAICT, the interpolated surfaces are seamless, no edge effects
> whatsoever visible in various diagnostic maps calculated.

Please excuse my ignorance, I am not familiar with the lidar tiling
algorithm. I thought the size of the overlapping edges could be
computed in Python and passed to r.tileset (parameter
overlap). After the lidar interpolation, the created tiles could be
patched by cutting the overlapping edges and using pixel averaging.
This will, as you mentioned above, not be sufficient to avoid edge
effects.
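
If I read your zone description correctly, the weighting could be
sketched like this (my own reading, not the actual lidar library
code): the outermost edge gets weight 0, the second zone a weight
growing linearly toward the core, the core itself weight 1.

```python
# Sketch of the three-zone weighting: for a cell at a given distance
# (in cells) from the outer tile border, return the weight its
# interpolated value gets when tiles are blended together.

def zone_weight(dist_into_overlap, edge, zone):
    """edge: width of the discarded outermost zone; zone: width of the
    distance-weighted second zone; everything further in is core."""
    if dist_into_overlap < edge:
        return 0.0                  # interpolated but never used
    if dist_into_overlap < edge + zone:
        # linear ramp from the discarded edge toward the core
        return (dist_into_overlap - edge + 1) / (zone + 1.0)
    return 1.0                      # core zone, used as is
```

When patching, each cell would then be the weighted average
sum(w * v) / sum(w) over all tiles covering it, which should give a
seamless transition between neighbouring tiles.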

So there will be no good alternative to a native MPI lidar version?

Best regards
Soeren
