[gdal-dev] Raster calculation optimisation
damian.dixon at gmail.com
Thu Jun 15 08:22:57 PDT 2017
Personally I would agree with your approach of looking at the cost of
the formula parser first.
However, if you do change your code to read/write blocks, then there are
opportunities for speeding up the processing.
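To make the block-based reading concrete, here is a minimal sketch of how you could enumerate the read windows for a raster, given its pixel dimensions and native block size. This is plain Python illustrating only the window arithmetic; in real code you would get the block size from the band (e.g. via GetBlockSize() in the GDAL bindings) and pass each window to your read/write calls.

```python
def block_windows(xsize, ysize, xbs, ybs):
    """Yield (xoff, yoff, win_x, win_y) windows covering a raster of
    xsize x ysize pixels, stepping by the native block size (xbs, ybs)
    and clipping partial blocks at the right and bottom edges."""
    for yoff in range(0, ysize, ybs):
        win_y = min(ybs, ysize - yoff)
        for xoff in range(0, xsize, xbs):
            win_x = min(xbs, xsize - xoff)
            yield xoff, yoff, win_x, win_y
```

Reading whole blocks this way keeps each request aligned with how the data is stored on disk, so GDAL does not have to assemble a row from many tiles.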
Once you have the block-size reading working, consider using multiple
threads.
You will need to use a separate GDALDataset for each thread.
You will, however, need to consider how you write the results out,
as you will not be able to write to a single output. You will have to write
a separate raster for each thread.
This is the approach I have taken.
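A rough sketch of that partitioning, in Python rather than C#: split the block windows into one chunk per thread, and have each worker process its own chunk (in real code each worker would open its own dataset handle and write to its own output file, e.g. something like "output_0.tif" - that naming is illustrative, not from the thread). Here the per-thread work is stubbed out as a pixel count so the structure is runnable on its own.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(blocks, n_workers):
    """Split a list of block windows into n_workers contiguous chunks,
    one per thread."""
    chunk = -(-len(blocks) // n_workers)  # ceiling division
    return [blocks[i:i + chunk] for i in range(0, len(blocks), chunk)]

def process_chunk(worker_id, chunk):
    # Placeholder for per-thread work: a real worker would open its own
    # GDALDataset, process each window, and write its own output raster.
    # Here we just count the pixels covered by this chunk.
    return sum(w * h for (_, _, w, h) in chunk)

blocks = [(x, y, 256, 256) for y in range(0, 1024, 256)
                           for x in range(0, 1024, 256)]
chunks = partition(blocks, 4)
with ThreadPoolExecutor(max_workers=4) as pool:
    totals = list(pool.map(process_chunk, range(len(chunks)), chunks))
```

The separate per-thread outputs can be mosaicked afterwards (a VRT is one option), which avoids any locking on a shared write handle.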
I can't help you with writing the blocks back, as once I've got the raster
data I process it into a format that allows me to load and display the data
much more quickly than going back to GDAL for the data.
You need to find the equivalent to:
Also these may help...
On 15 June 2017 at 14:04, Paul Meems <bontepaarden at gmail.com> wrote:
> Thanks all for your suggestions.
> @Rutger and @Damian:
> Thanks for your suggestion about the blocks.
> I had a look at Rutger's links,
> I create the input file myself so I can add 'TILES=YES' but I'm not sure
> how to change my calculation code.
> I see in this first link xbs, ybs = b1.GetBlockSize() But I don't see
> when to use the xbs or ybs variables.
> I assume I need to change the reading of the data: band.ReadRaster(0,
> line, sizeX, 1, scanline, sizeX, 1, 0, 0); but I'm not sure how.
> And how should I write back the block?
> BTW. The main bottleneck seems to be in the formula parser used.
> With the formula it takes 68 s; without the formula - just setting the
> pixel value - it takes 3.3 s, and with the formula written in code, pixelValue =
> (float)Math.Exp(3.1 + 0.9 * f);, it only takes 3.4 s.
> So not using the math parser has the highest benefit. I will do that first,
> but I also want to understand the block reading and writing.
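Your timings make the point well: re-evaluating a parsed expression per pixel dominates everything else. Sketched in Python (your code is C#, but the idea is identical), hard-coding the formula from your message turns the per-pixel cost into a single function call:

```python
import math

def apply_formula(values):
    # The formula quoted in the thread: exp(3.1 + 0.9 * f).
    # Evaluating it directly, instead of re-interpreting a parsed
    # expression tree for every pixel, is where the 68 s -> 3.4 s
    # difference comes from.
    return [math.exp(3.1 + 0.9 * f) for f in values]
```

If the formula must stay user-supplied, compiling it once to a delegate (or vectorising it over a whole block) gets you most of the same win.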
> @Jan, thanks for your link to the other parser. I had a quick look and it
> looks very promising. Sadly I couldn't get their C# example working.
> I will look at it as well.
> Just a more general question.
> Wouldn't it make sense for GDAL to provide a gdal_calculate tool that
> is also 'librified', like VectorTranslate?
> It seems lots of people are implementing this on their own.
> 2017-06-14 9:29 GMT+02:00 Rutger <kassies at gmail.com>:
>> Damian Dixon wrote
>> > It is usually better to process the pixels in a block rather than
>> > across each row especially if you are processing a TIFF as these are
>> > usually stored as tiles (blocks).
>> Other layouts are common as well. For example, the Landsat TIFFs provided
>> by the USGS have a row-based layout. If you can choose it yourself, blocks
>> are preferred in my opinion, since GDAL VRTs have a fixed blocksize
>> of 128x128. So when writing TIFFs, setting "TILED=YES" is a good default.
>> I think you're spot on in mentioning the blocks. Don't assume the layout at
>> all; look at the blocksize of the file and use it. If the blocks are
>> relatively small (memory-wise), using a multiple of the size can increase
>> performance a bit more. So if it's row-based and you have plenty of memory
>> to spare, why not read blocks of (xsize, 128). Or if the blocksize is
>> 128x128, use blocks of 256x256, etc.
>> If the volume of data is really large, increasing GDAL's block cache can be
>> helpful. Although it's best to avoid relying on the cache (if possible) by
>> specifying an appropriate blocksize.
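Picking that "multiple of the native block size" can be reduced to a small calculation: given a row-based layout, choose the largest multiple of the native block height whose read buffer still fits a memory budget. A minimal sketch of that arithmetic (the function name and budget parameter are mine, not from the thread):

```python
def rows_per_read(xsize, ybs, bytes_per_pixel, budget_bytes):
    """For a row-based layout (native block = xsize x ybs rows), pick
    how many rows to read at once: the largest multiple of ybs whose
    buffer fits in budget_bytes, but never less than one native block."""
    row_bytes = xsize * bytes_per_pixel
    max_rows = max(budget_bytes // row_bytes, 1)
    multiples = max(max_rows // ybs, 1)
    return multiples * ybs
```

So with a 10000-pixel-wide float32 raster stored in 1-row strips and a 10 MB budget, you would read 250 rows per request instead of 1, cutting the number of read calls accordingly.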
>> Here are a few mini-benchmarks:
>> gdal-dev mailing list
>> gdal-dev at lists.osgeo.org