[GRASS-dev] r.surf.contour inefficiency?

Glynn Clements glynn at gclements.plus.com
Fri Feb 6 22:00:55 EST 2009


Hamish wrote:

> while running r.surf.contour I notice that 33% of the cpu is dealing with
> "sys" leaving ~ 65% to run the program. with r.surf.contour using 97% of
> the CPU, top says:
> 
> Cpu(s):  0.3%us, 33.1%sy, 64.9%ni,  0.0%id, ...
> 
> (nice'd because I run GRASS at low priority to keep processing from
> slowing down desktop tasks while number crunching)
> 
> there is no big disk I/O going on -- is using that much "sys" CPU
> indicative of some massive inefficiency in the program? It's a module
> which has historically taken a ridiculously long time to run and I don't
> understand why the kernel needs to be involved so much.
> (linux 2.6.22, 1.5GB+ free memory)
> 
> 
> ideas?

I presume that it's the use of the segment library causing a lot of
I/O.

Can r.surf.contour reasonably be used on gigabyte-sized maps? If not,
I'd suggest eliminating the use of the segment library.

In general, there isn't much point in using segmentation with
algorithms whose time complexity is worse than linear in the size of
the input. On modern systems, the run time will typically become
prohibitive before memory usage does.

If it can reasonably be used on maps which won't necessarily fit into
memory, try increasing the number of segments in memory (the last
parameter to cseg_open(), currently hardcoded to 8).

Even if that reduces the sys time, segment_get() is still vastly
slower than array access. If segmentation is useful, the code in
r.proj[.seg] should be significantly more efficient, due to the use of
macros, power-of-two sizes, and compile-time constants.

-- 
Glynn Clements <glynn at gclements.plus.com>
