[GRASS-user] Simultaneous r.horizon processes

Collin Bode collin at berkeley.edu
Thu Apr 19 14:23:25 EDT 2012


Daniel,

Wow, I may have done the math wrong, but I think you would need ~10GB of RAM per r.horizon process to run your map without constraining the area. So I would be inclined to agree with Hamish that you may be RAM-constrained. I have had some success using g.region to "tile" my dataset into rectangles that are long east-west (sunrise-sunset) and short north-south (summer-winter), but I had to make sure I had at least 25% overlap to cover edge effects.
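
Roughly what I mean, as an untested sketch (the helper name make_tiles() and the strip count are just made up for illustration; it assumes the GRASS Python scripting library and a region already set to the full study area):

    # rough idea, untested: split the current region into east-west strips
    # with ~25% extra height north-south to cover edge effects
    import grass.script as grass

    def make_tiles(n_strips=4, overlap=0.25):
        """Return a list of {n, s, e, w} dicts covering the current region."""
        reg = grass.region()                       # wraps `g.region -g`
        north, south = float(reg['n']), float(reg['s'])
        height = (north - south) / n_strips
        pad = height * overlap                     # overlap against edge effects
        tiles = []
        for i in range(n_strips):
            s = south + i * height
            tiles.append({'n': min(s + height + pad, north),
                          's': max(s - pad, south),
                          'e': reg['e'],           # keep the full east-west extent
                          'w': reg['w']})
        return tiles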

To clarify, I meant "poor man's multithreading", i.e. running one instance of r.horizon per CPU core, as in the sketch below.  I used the multiprocessing library to make sure I created only as many processes as there were cores.  I am still learning Python, but unlike bash, I don't think you can do that easily without the multiprocessing library.
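
The per-core part, again as an untested sketch: each worker takes one tile, sets a per-process temporary region, and writes to its own output prefix.  The r.horizon option names here (elevin, horizonstep, horizon) and the 'elev' map are from memory / made up, so check r.horizon --help on your version:

    # rough sketch, untested: one r.horizon per core via multiprocessing.Pool,
    # each worker with a per-process temporary region and a unique output prefix
    import multiprocessing
    import grass.script as grass

    def run_tile(args):
        idx, b = args                          # b = {'n':..., 's':..., 'e':..., 'w':...}
        grass.use_temp_region()                # sets WIND_OVERRIDE for this process only
        grass.run_command('g.region', n=b['n'], s=b['s'], e=b['e'], w=b['w'])
        grass.run_command('r.horizon',
                          elevin='elev',       # made-up DEM name; option may be 'elevation' in GRASS 7
                          horizonstep=30,      # may be called 'step' depending on version
                          horizon='horangle_t%03d' % idx)   # unique prefix per tile
        grass.del_temp_region()

    if __name__ == '__main__':
        tiles = make_tiles(overlap=0.25)       # from the earlier sketch
        pool = multiprocessing.Pool(multiprocessing.cpu_count())
        pool.map(run_tile, list(enumerate(tiles)))
        pool.close()
        pool.join()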

I was wrong about r.horizon producing incorrect values.  When I looked back at the output, the maps were overwriting each other because I had several processes in the same mapset writing to an output map of the same name; giving each process its own output name avoids that.

Good luck!

Collin Bode
UC Berkeley



On Apr 18, 2012, at 12:31 PM, Hamish wrote:

> Daniel wrote:
>> What do you mean, r.sun can do multithreading? I've heard that
>> r.sun uses multithreading in GRASS 7, but is that implemented
>> in GRASS 6? Or are you talking about "poor man's
>> multithreading," like on the GRASS wiki?
> 
> there is OpenCL GPU accel. support, but it has not yet been
> merged into grass 7. (mea culpa)
> 
> for r.sun being run 365 (or however many) times in a row, the
> "poor man's" method is fine; in fact the r3.in.xyz script in
> addons is perhaps the most efficient multi-CPU use in grass to
> date. (to my surprise)
> 
> 
> I've just read through the r.horizon code in devbr6 and I don't
> see anything which makes the module unable to be run multiple
> times in the same mapset. (no external region setting, no
> generically named temp files, no gratuitous use of grass library
> global variables)   ... are you running under NFS or similar as
> addressed by Markus's script? aka maybe the trouble is rooted
> elsewhere?
> 
>> I did a little debugging today and think it's due to the large
>> size of my study area (~36 km², 0.5m resolution).
> 
> 72000x72000, how much RAM does r.horizon use?
> maybe processes are being killed as you run out of RAM.
> in that case set the max number of parallel jobs by what fits
> into memory without going into swap, rather than by the number
> of CPUs. and error handling in the script (test for '$? -ne 0')
> could help retry failed runs.
> 
>> If I spatially partition the area and then stitch everything
>> back together, I hope it works - tests on smaller regions have
>> worked correctly thus far but I'll need to wait a while to see
>> the real results.
> 
> for r.horizon the mountains in the distance can matter (that's
> the whole point) so I'd be careful with cutting up the region.
> temporarily lowering the region resolution during r.horizon
> may be less-bad of a compromise.
> 
> 
> FWIW I've tentatively given up on using r.horizon, see the "r.sun
> commissioning trials" trac ticket and wiki page. since sun
> placement changes each day, and for sub-degree placement of the
> sun you need so many horizon maps that loading them all becomes
> more expensive than just re-calculating the horizon on the fly
> with the exact placement. (I generally aim for slow exactness
> rather than fast processing time though, YMMV)
> but maybe I don't correctly understand what r.horizon is doing...
> 
> 
> Hamish


