[Gdal-dev] using GDALDataset* in "point" mode and memory allocation

Gregory, Matthew matt.gregory at oregonstate.edu
Mon Feb 2 17:19:45 EST 2004


I am working on an app which runs a model in point, list, or window mode, determined at run time.  I have a class with a GDALDataset* as a private member; this dataset is opened at run initialization through GDALOpen, e.g.

  _varLayer = (GDALDataset *) GDALOpen( _varFileName.c_str(), GA_ReadOnly );

This class exists throughout the entire model run.  What I've noticed when running in point or list (set of points) mode is that I'm ramping up memory pretty quickly through numerous calls to RasterIO, e.g.

  GDALRasterBand* tempBand = _varLayer->GetRasterBand( 1 );
  tempBand->RasterIO(GF_Read, col, row, 1, 1, ptr, 1, 1, GDT_Float64, 0, 0);

I'm guessing that this is NOT the intended way to use RasterIO, i.e. one pixel at a time.  It looks like each call to RasterIO allocates memory within GetBlockRef(), either for cached raster blocks or for the actual reads of the data (I'm a bit fuzzy on this).

Note that everything cleans up properly when my class goes out of scope, but I'm a bit worried that users may run out of dynamic memory if they run a huge set of points.

1.  Is there a better way to free memory after each point is run?  I imagine that means letting my GDALDataset* go out of scope each time, which obviously wouldn't be beneficial in terms of speed.
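For (1), I suppose I could release cached blocks explicitly instead of destroying the dataset, something like the following (assuming FlushCache() and GDALSetCacheMax() do what I think they do; the cache size here is an arbitrary example):

```cpp
// Cap the global raster block cache once at initialization
// (40 MB is just an illustrative figure).
GDALSetCacheMax( 40 * 1024 * 1024 );

// ... then after each point (or every N points), drop the blocks
// this dataset is holding without closing it:
_varLayer->FlushCache();
```

That would keep the GDALDataset* alive for speed while bounding how much the cache can grow.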

2.  Would it be more economical to front-load a number of tiles at initialization, rather than reading them potentially one at a time?
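What I have in mind for (2) is reading a whole window into a local buffer with a single RasterIO call, then indexing into it per point (the window offsets and sizes below are placeholders):

```cpp
// Sketch: read a winXSize x winYSize window in one call instead of
// one pixel per call.  winXOff/winYOff/winXSize/winYSize are
// illustrative names, not from my actual code.
std::vector<double> buf( winXSize * winYSize );
GDALRasterBand* tempBand = _varLayer->GetRasterBand( 1 );
tempBand->RasterIO( GF_Read, winXOff, winYOff, winXSize, winYSize,
                    &buf[0], winXSize, winYSize, GDT_Float64, 0, 0 );

// Value for a point at (col, row) inside the window:
double v = buf[ (row - winYOff) * winXSize + (col - winXOff) ];
```

The buffer is then owned by my class, so its lifetime (and the memory) is under my control rather than the block cache's.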

I'm interested to hear comments from anybody else using GDAL this way.  Please straighten me out if I'm way off base ...

Thanks, matt

Matt Gregory
Faculty Research Assistant
Department of Forest Science
Oregon State University
541.750.7285
matt.gregory at orst.edu


