[gdal-dev] CUDA PyCUDA and GDAL

Shaun Kolomeitz shaun.kolomeitz at derm.qld.gov.au
Thu Nov 19 21:27:14 EST 2009


Can you expand on what you mean by a "gross issue" (non-optimised / user
ignorance / RTFM?)? Are there compounding issues in the process?
Is there a "magic combination of switches" on a 32-bit system (3GB RAM,
dual Xeons) that I should be using? Does it depend much on the
source/destination image format - normally IMG or ECW/SID to TIF/PNG? I
currently use a window size of 500MB and GDAL_CACHEMAX of 512MB, and I
thought that was supposed to be optimal? I also have a high-end
consumer 64-bit Core i7 XE / 12GB RAM with twin SSDs in RAID 0 available
at home, and am curious whether the settings for optimising processing in
64-bit (for gdal_retile, gdalwarp and gdal_translate) can be bumped up.
It's sounding like CUDA is perhaps not going to give us that great a
return on our investment. How does one go about locating a suitably
experienced GDAL developer? (Of course, you would have to be the
ultimate guru here :-)
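[Editor's note: the cache and window settings discussed above can be passed to gdalwarp on the command line. A minimal sketch follows; the flags (`--config GDAL_CACHEMAX`, `-wm`, `-multi`) are real gdalwarp options, but the memory values are simply the ones quoted in this thread, not verified optima, and the file names are placeholders.]

```shell
# Sketch only: 512MB block cache and 500MB warp window, as discussed above;
# input.img and output.tif are placeholder file names.
CACHEMAX=512   # raster block cache in MB (--config GDAL_CACHEMAX)
WARPMEM=500    # warp working memory in MB (-wm)

# -multi lets gdalwarp overlap I/O with computation on multi-core machines.
CMD="gdalwarp --config GDAL_CACHEMAX $CACHEMAX -wm $WARPMEM -multi -co TILED=YES input.img output.tif"
echo "$CMD"
```

On a 64-bit machine with 12GB RAM, both values could plausibly be raised well beyond these 32-bit-era figures, though the useful ceiling depends on the workload.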

Many thanks,

-----Original Message-----
From: Frank Warmerdam [mailto:warmerdam at pobox.com] 
Sent: Thursday, 19 November 2009 11:20 AM
To: Shaun Kolomeitz
Cc: Seth Price; gdal-dev at lists.osgeo.org
Subject: Re: [gdal-dev] CUDA PyCUDA and GDAL

Shaun Kolomeitz wrote:
> Thanks Seth,
> It makes sense that the slowest part of the whole equation would be
> disk operations, and there must be quite a number of disk reads/writes
> when processing imagery. Currently we use RAID arrays that push data
> through at a rate of 300MB/s; granted, if these were SSDs in RAID 0 we
> could push beyond 1GB/s. Currently, to process (mosaic) an 80GB image
> takes several days to complete. This is also on 32-bit hardware, and I
> suspect it is only single-threaded, so we're limited to 3GB RAM. From what
> I understood, the most optimal caching size in GDAL is 500MB, using a
> window (unless that has changed).
> If you can easily lay your hands on your GSoC application, then that
> would be great. We are discussing what might be possible with a very
> talented coder, who eats these types of "challenges" for breakfast!
> Perhaps a better approach would be to use something like grid
> computing, e.g. Condor, to break up the processing?
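[Editor's note: the grid-computing idea above can be sketched without any scheduler-specific code: cut the source into tiles, then emit one independent gdalwarp command per tile for Condor (or plain `xargs -P`) to run in parallel. This is a hypothetical sketch, not a verified recipe; in practice `gdal_retile.py -ps 10000 10000 -targetDir tiles src.img` would produce the tiles, which are stubbed out here with `touch`, and EPSG:4326 is only an example target SRS.]

```shell
# Hypothetical split-and-queue sketch; tile files are empty stand-ins
# for real gdal_retile.py output.
mkdir -p tiles warped
touch tiles/tile_1.tif tiles/tile_2.tif   # stand-ins for gdal_retile.py output

# One independent warp job per tile; each line can be dispatched to a
# separate Condor slot (or run with: xargs -P 4 -I{} sh -c '{}' < warp_jobs.txt)
for t in tiles/*.tif; do
    echo "gdalwarp -t_srs EPSG:4326 $t warped/$(basename "$t")"
done > warp_jobs.txt
```

Because each tile is warped independently, this sidesteps the single-threaded, 3GB-per-process limit mentioned above at the cost of a final mosaicking step.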


I suspect that there is a gross issue with how the warping is being done,
and that it could be sped up substantially. On my two-year-old consumer-grade
desktop I'm able to warp/reproject around 30GB/hour with gdalwarp.

I don't want to dissuade you from investigating CUDA options, but
you might be able to get a ten-fold improvement by hiring an experienced
gdalwarp developer for a few hours to investigate your particular
situation.

Best regards,
I set the clouds in motion - turn up   | Frank Warmerdam, warmerdam at pobox.com
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush    | Geospatial Programmer for Rent


