[gdal-dev] CUDA PyCUDA and GDAL

Shaun Kolomeitz shaun.kolomeitz at derm.qld.gov.au
Wed Nov 18 17:25:26 EST 2009

Thanks Seth,

It makes sense that the slowest part of the whole equation would be the
disk operations, and there must be quite a number of disk reads/writes
when processing imagery. Currently we use RAID arrays that push data
through at a rate of 300MB/s; granted, if these were SSDs in RAID 0 we
could push beyond 1GB/s. Currently, processing (mosaicking) an 80GB
image takes several days to complete. This is also on 32-bit hardware,
and I suspect it is only single-threaded, so we're limited to 3GB of
RAM. From what I understood, the optimal cache size in GDAL is 500MB,
using a 512MB window (unless that has changed).
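As a rough sanity check on those numbers (a sketch using the 300MB/s and 80GB figures above; the pass count is a guess, not a measurement):

```python
# Back-of-envelope: how much of the runtime can raw I/O account for?
raid_rate_mb_s = 300       # current RAID throughput (from above)
image_mb = 80 * 1024       # 80GB image

one_pass_min = image_mb / raid_rate_mb_s / 60
print(f"one sequential pass: {one_pass_min:.1f} min")
# Even ten full read/write passes would finish in under an hour at
# 300MB/s, so a multi-day runtime hints that the bottleneck may be the
# CPU (or the access pattern), not raw disk bandwidth alone.
```

At ~4.5 minutes per sequential pass, disk bandwidth by itself doesn't explain days of processing, which fits Seth's point below that the CPU-heavy resampling is where CUDA could help.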
If you can easily lay your hands on your GSoC application then that
would be great. We are discussing what might be possible with a very
talented coder, who eats these types of "challenges" for breakfast!
Perhaps a better approach would be a grid computing one, using something
like Condor to break up the processing?
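For the grid computing idea, the natural unit of work is a tile of the output raster. A minimal sketch of the tiling side (the 4096-pixel tile size and raster dimensions are made-up examples; the Condor job-submission side is not shown):

```python
# Split a large raster extent into windows that could each be submitted
# as an independent job (e.g. one Condor job per window).
def tile_extents(width, height, tile=4096):
    """Yield (xoff, yoff, xsize, ysize) windows covering the raster."""
    for yoff in range(0, height, tile):
        for xoff in range(0, width, tile):
            yield (xoff, yoff,
                   min(tile, width - xoff),
                   min(tile, height - yoff))

# Each window could then be cut out with e.g. gdal_translate -srcwin.
jobs = list(tile_extents(200_000, 150_000))
print(len(jobs), "independent jobs")
```

Because the windows are independent, the jobs need no coordination beyond a final merge step, which is what makes this kind of workload a good fit for Condor-style scheduling.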


-----Original Message-----
From: Seth Price [mailto:seth at pricepages.org] 
Sent: Thursday, 19 November 2009 8:07 AM
To: Shaun Kolomeitz
Cc: gdal-dev at lists.osgeo.org
Subject: Re: [gdal-dev] CUDA PyCUDA and GDAL

I've been intending for a while to work on either CUDA or OpenCL with
GDAL & GRASS. I applied to do this for the Google Summer of Code, but
wasn't accepted this past summer. I'll probably work on it someday, just
to make sure my thesis work gets finished within budget.

However, I'm mostly interested in speeding up the resampling routines.
They should be able to get close to the theoretical maximum on CUDA. I
don't know about the routines you mention without looking closer at the
code. For example, image reading is probably limited by the disk speed,
so it wouldn't be faster in CUDA. Translates are another operation which
doesn't involve much CPU time compared to disk time, so it would also be
difficult to speed up with CUDA. For these operations your best option
might be to replace your hard drive with an SSD.
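One way to check whether reads really are disk-limited is to time a plain sequential read and compare it against the array's rated throughput. A quick sketch (the scratch-file approach is illustrative; a real test should use a file larger than RAM so the OS page cache doesn't flatter the numbers):

```python
import os
import tempfile
import time

def read_throughput_mb_s(path, block=8 * 1024 * 1024):
    """Read the whole file sequentially and report MB/s."""
    size = os.path.getsize(path)
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    return size / (1024 * 1024) / (time.perf_counter() - t0)

# Demo against a small scratch file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(32 * 1024 * 1024))
print(f"{read_throughput_mb_s(tmp.name):.0f} MB/s")
os.unlink(tmp.name)
```

If the measured rate is close to what the hardware can deliver, a faster GPU won't help those operations; only faster storage (or fewer passes over the data) will.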

I'm not familiar with image mosaics in GDAL, but I would guess that they
are heavy on the resampling when generating a quality final image. This
is something where each output pixel depends on the nearest ~16 input
pixels. It takes a lot of CPU time to process all those pixels, and it
would benefit from CUDA.
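The "~16 input pixels" is the 4x4 neighbourhood used by bicubic resampling. A minimal sketch of the per-pixel weight computation, using a Catmull-Rom kernel (this is an illustration of the arithmetic, not GDAL's actual implementation):

```python
def cubic(t):
    """Catmull-Rom cubic kernel: weight of a sample at distance t."""
    t = abs(t)
    if t < 1:
        return 1.5 * t**3 - 2.5 * t**2 + 1.0
    if t < 2:
        return -0.5 * t**3 + 2.5 * t**2 - 4.0 * t + 2.0
    return 0.0

def bicubic_weights(fx, fy):
    """4x4 weights for an output pixel at fractional offset (fx, fy)."""
    wx = [cubic(fx - dx) for dx in (-1, 0, 1, 2)]
    wy = [cubic(fy - dy) for dy in (-1, 0, 1, 2)]
    return [[wyv * wxv for wxv in wx] for wyv in wy]

w = bicubic_weights(0.5, 0.5)
print(sum(sum(row) for row in w))  # the 16 weights sum to 1
```

Every output pixel repeats this 16-tap weighted sum independently of its neighbours, which is exactly the embarrassingly parallel, arithmetic-heavy pattern that maps well onto CUDA.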

If you want, I could hunt down my GSoC application which would go into a
bit more detail.

On Wed, November 18, 2009 2:46 pm, Shaun Kolomeitz wrote:
> I've heard a lot about the power of NVIDIA CUDA and am curious about
> ways in which we could leverage it to increase the performance of
> 1) image mosaics, 2) translates, and 3) image reading/rendering
> (especially highly compressed images).
> I also see that there is PyCUDA as well. I am unsure how (or if)
> either could be used to run (even portions of) GDAL?
> If anyone has any pointers it would be nice to know.
> Many thanks,
> Shaun Kolomeitz
> Principal Project Officer
> Business and Asset Services
> Queensland Parks and Wildlife Service
> As of 26 March 2009 the Department of Natural Resources and
> Water/Environmental Protection Agency integrated to form the
> Department of Environment and Resource Management
