[GRASS-user] Simultaneous r.horizon processes

Collin Bode collin at berkeley.edu
Thu Apr 19 14:23:32 EDT 2012


Hamish, Markus, 

I have compiled the OpenCL code and got it working with GRASS 7.0svn on Ubuntu 11.10, but it is severely memory-constrained: your map has to fit in your video RAM (1 GB for me). You can't use memory partitioning unless you have already run r.horizon, unfortunately, and r.horizon was never ported to OpenCL. OpenCL is exciting, but for large datasets it is not yet useful :-(.

Are there any optimization tricks we could apply to either r.horizon or its equivalent inside r.sun? For example, distant mountains do not need to be at 0.5 m resolution, as in Daniel's dataset, or 2 m, as in mine: 10-30 m is sufficient to provide shading from 10 km away. It would be orders of magnitude faster to specify a coarse 'regional' map for large-scale topographic shading, overlaid with a smaller tile of high-resolution elevation.
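One workaround along these lines is possible today, since GRASS resamples rasters to the current region on read: run r.horizon once at coarse resolution over the wide region, then switch back to the fine tile for r.sun and reuse the coarse horizon maps. This is only a sketch -- the map names, step, and distance values below are illustrative, and the parameter names are as in GRASS 7 (check your version's manual):

```shell
# 1. Coarse horizons over the full region (30 m is enough for distant shading):
g.region raster=dem_regional res=30
r.horizon elevation=dem_regional step=30 maxdistance=10000 output=horangle

# 2. Fine-resolution r.sun over the small tile, reusing the coarse horizons
#    (the horangle_* rasters are resampled to the current region on read):
g.region raster=dem_tile res=2
r.sun elevation=dem_tile horizon_basename=horangle horizon_step=30 \
      day=172 beam_rad=beam172
```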

I really want to just use r.sun and never run r.horizon again, but unless I can get access to a cluster with 10 GB of RAM per node, I can't. It simply takes too long to process.

Collin 

On Apr 18, 2012, at 12:31 PM, Hamish wrote:

> Daniel wrote:
>> What do you mean, r.sun can do multithreading? I've heard that
>> r.sun uses multithreading in GRASS 7, but is that implemented
>> in GRASS 6? Or are you talking about "poor man's
>> multithreading," like on the GRASS wiki?
> 
> there is OpenCL GPU accel. support, but it has not yet been
> merged into grass 7. (mea culpa)
> 
> for r.sun being run 365 (or whatever) times in a row the "poor
> man's" method is fine, in fact the r3.in.xyz script in addons
> is perhaps the most efficient multi-CPUing in grass to date.
> (to my surprise)
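That "poor man's" method can be as simple as a shell loop that launches background jobs in batches. A self-contained skeleton, with the actual r.sun call replaced by a placeholder and illustrative MAXJOBS/NDAYS values:

```shell
#!/bin/bash
# Poor man's multi-CPU: launch background jobs in batches of MAXJOBS.
# The real work would be e.g.:  r.sun elevation=dem day=$d beam_rad=beam_$d
MAXJOBS=4
NDAYS=12                              # stand-in for 365
results=$(mktemp)
for d in $(seq 1 "$NDAYS"); do
    echo "day $d" >> "$results" &     # placeholder for the r.sun call
    # after every MAXJOBS launches, wait for the whole batch to finish;
    # testing '$? -ne 0' here is where you would retry failed runs
    [ $((d % MAXJOBS)) -eq 0 ] && wait
done
wait                                  # catch the final partial batch
finished=$(wc -l < "$results")
rm -f "$results"
echo "$finished runs finished"
```

Capping the batch size by available RAM rather than CPU count, as suggested below, just means picking MAXJOBS accordingly.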
> 
> 
> I've just read through the r.horizon code in devbr6 and I don't
> see anything which makes the module unable to be run multiple
> times in the same mapset. (no external region setting, no
> generically named temp files, no gratuitous use of grass library
> global variables)   ... are you running under NFS or similar as
> addressed by Markus's script? aka maybe the trouble is rooted
> elsewhere?
> 
>> I did a little debugging today and think it's due to the large
>> size of my study area (~36 km², 0.5m resolution).
> 
> ~12000x12000 cells, how much ram does r.horizon use?
> maybe processes are being killed as you run out of RAM.
> in that case, set the max number of parallel jobs so that
> everything fits into memory without going into swap space,
> instead of using the number of CPUs. error handling in the
> script (testing for '$? -ne 0') could help retry failed runs.
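For scale, a back-of-envelope estimate: ~36 km² at 0.5 m resolution is roughly 12000 x 12000 = 144 million cells, so a single double-precision map is already over a gigabyte. The bytes-per-cell and buffer-count figures below are assumptions, not measurements of r.horizon:

```shell
# Rough RAM estimate (assumptions: 8 bytes/cell, 3 in-memory buffers).
cells=$((12000 * 12000))              # ~36 km^2 at 0.5 m resolution
bytes_per_cell=8                      # assume double precision (CELL/FCELL differ)
n_buffers=3                           # guess at elevation + working buffers
per_map_mib=$((cells * bytes_per_cell / 1024 / 1024))
total_mib=$((per_map_mib * n_buffers))
echo "per map: ${per_map_mib} MiB, estimated total: ${total_mib} MiB"
```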
> 
>> If I spatially partition the area and then stitch everything
>> back together, I hope it works - tests on smaller regions have
>> worked correctly thus far but I'll need to wait a while to see
>> the real results.
> 
> for r.horizon the mountains in the distance can matter (that's
> the whole point), so I'd be careful about cutting up the region.
> temporarily lowering the region resolution during r.horizon
> may be a less-bad compromise.
> 
> 
> FWIW I've tentatively given up on using r.horizon; see the "r.sun
> commissioning trials" trac ticket and wiki page. since the sun's
> placement changes each day, sub-degree placement would need so
> many horizon maps that loading them all becomes more expensive
> than just re-calculating the horizon on the fly with the exact
> placement. (I generally aim for slow exactness rather than fast
> processing time, though; YMMV)
> but maybe I don't correctly understand what r.horizon is doing..
> 
> 
> Hamish


