[GRASS-user] r.sun use - automatically stopped process ?

Hamish hamish_b at yahoo.com
Mon Apr 22 21:24:35 PDT 2013


simogeo wrote:
> As said before, I use Ubuntu 12.04 in 64 bits. The
> wiki page <http://grasswiki.osgeo.org/wiki/Large_raster_data_processing> mentions
> only memory usage limits for 32-bit systems. After your
> first reply Markus, I thought the 2^31 memory limit might
> also affect 64-bit systems.
...
> With res=5 or res=1 it works well, but with the raster's
> resolution (0.2) the process is killed again.

Hi,

just looking in the code, I see a few things which might be
suspicious in the INPUT_part() function,
  https://trac.osgeo.org/grass/browser/grass/trunk/raster/r.sun/main.c#L761
maybe something there needs to be off_t instead of int?

but mainly I think it's just that the module wants a lot of RAM,
and the process gets killed when it asks for too much.
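
if you want to confirm that it's the kernel's out-of-memory killer
doing it (and not a GRASS error), on Linux the kernel log usually
records the kill; something like this should show it:

dmesg | grep -i "killed process"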


here are some tests on a few-months-old 6.4.svn build
(6.4.3svn.50937) on 64-bit Linux.

# Mode 2 (integrated daily irradiation) at spearfish;
# the resolution is passed in as the script's argument via ${*}
g.region rast=elevation.10m res=${*}
r.sun -s elevation.10m lin=2.5 alb=0.2 day=172 \
   beam_rad=b.172 diff_rad=d.172 \
   refl_rad=r.172 insol_time=it.172
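
to repeat that at several resolutions you could wrap it in a small
loop; a sketch, where the resolution values and the per-resolution
output names are only illustrative:

for res in 20 10 5 1 ; do
   g.region rast=elevation.10m res=$res
   r.sun -s elevation.10m lin=2.5 alb=0.2 day=172 \
      beam_rad=b.172.$res diff_rad=d.172.$res \
      refl_rad=r.172.$res insol_time=it.172.$res
done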

as you can see from the results below, allocating >4GB of RAM works
ok for me, so it is likely not a LFS/32-bit/64-bit problem.

   rows    cols        cells   approx. RAM
  27960   37980   1061920800   >16 GB, into swap (~19 GB?)
  22368   30384    679629312   12 GB
  18640   25320    471964800   8.8 GB
  13980   18990    265480200   5 GB
   6990    9495     66370050   1.2 GB
   2796    3798     10619208   0.2 GB  (run time: 37m42s)

plotting it out, memory use seems to grow linearly with
number of cells. from earlier experiments, time does as well.

by my calcs, a 60000x60000 cell region would want ~64GB of RAM.
However, it would take a very long time to get there: in the last
example above, a region a bit bigger than 3000x3000 cells took
half an hour on a fast i7 CPU (a few months old) with 16GB of RAM.
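
the back-of-the-envelope version of that, taking the ~19GB figure
from the 1.06e9 cell run above as the baseline, is roughly:

# 19e9 bytes / 1061920800 cells ~= 18 bytes per cell
# 60000 x 60000 cells * 18 bytes ~= 64e9 bytes
awk 'BEGIN { printf "%.0f GB\n", 60000*60000 * (19e9/1061920800) / 1e9 }'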

I don't think Seth's GPU OpenCL acceleration is going to help
there, since GPU RAM is often limited and the I/O to it is a
bottleneck, and even running 8 or 16x faster than a multithreaded
CPU run would still take too long for Mode 2 daily-integration runs.

So I think your best bet is to start with a coarser resolution,
then make it finer until you hit a time or RAM limit. Do you
really have a DEM that fine anyway? LiDAR is often binned to a
2m cell size...
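
e.g. something along these lines (the map name is just a
placeholder for your DEM); rerun the same r.sun command after
each g.region change:

g.region -a rast=your_dem res=5
# -a snaps the region bounds to the resolution; res=5 means
# ~625x fewer cells than res=0.2 and ~25x fewer than res=1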


Hamish

