[gdal-dev] Memory use in GDALDriver::CreateCopy()

ozy sjahputera sjahputerao at gmail.com
Wed Jan 13 12:18:50 EST 2010


Update:

after more than 20 minutes of being non-responsive, the OS finally regained
functionality and promptly killed gdal_translate when it was about 80% of
the way through the process.


On Wed, Jan 13, 2010 at 11:14 AM, ozy sjahputera <sjahputerao at gmail.com> wrote:

> Hi Even,
>
> yes, I tried:
> gdal_translate -of "NITF" -co "ICORDS=G" -co "BLOCKXSIZE=128" -co
> "BLOCKYSIZE=128"  NITF_IM:0:input.ntf output.ntf
>
> I monitored the memory use with top; it increased steadily until it reached
> 98.4% (I have 8 GB of RAM and 140 GB of local disk for swap etc.) before the
> node died (not just the program, but the whole system stopped responding).
>
> My GDAL version is 1.6.2.
>
> gdalinfo on this image shows a raster size of (37504, 98772) and
> Block=37504x1.
> The image is compressed with the JPEG2000 option and contains two subdatasets
> (data and cloud data; I used only the data subdataset for the gdal_translate
> test).
>
> Band info from gdalinfo:
> Band 1 Block=37504x1 Type=UInt16, ColorInterp=Gray
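>
> To double-check the block layout programmatically, a quick sketch along
> these lines should print [37504, 1] for this image:
>
> import gdal
> ds = gdal.Open('NITF_IM:0:input.ntf')
> print ds.GetRasterBand(1).GetBlockSize()   # [blockxsize, blockysize]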
>
> Ozy
>
>
> On Tue, Jan 12, 2010 at 5:38 PM, Even Rouault <
> even.rouault at mines-paris.org> wrote:
>
>> Ozy,
>>
>> Did you try with gdal_translate -of NITF src.tif output.tif -co
>> BLOCKSIZE=128? Does it give similar results?
>>
>> I'm a bit surprised that you even managed to read a 40Kx100K NITF file
>> organized as scanlines. Until very recently, there was a limit that
>> prevented reading blocks with one dimension larger than 9999. This was
>> fixed recently in trunk (see ticket
>> http://trac.osgeo.org/gdal/ticket/3263) and in branches/1.6, but the fix
>> has not yet made it into an official release. So which GDAL version are
>> you using?
>>
>> Does the output of gdalinfo on your scanline-oriented input NITF give
>> something like:
>> Band 1 Block=40000x1 Type=Byte, ColorInterp=Gray
>>
>> Is your input NITF compressed or uncompressed ?
>>
>> Anyway, with the latest trunk, I've simulated creating a similarly large
>> NITF image with the following Python snippet:
>>
>> import gdal
>> ds = gdal.GetDriverByName('NITF').Create('scanline.ntf', 40000, 100000)
>> ds = None
>>
>> and then creating the tiled NITF:
>>
>> gdal_translate -of NITF scanline.ntf tiled.ntf -co BLOCKSIZE=128
>>
>> The memory consumption is very reasonable (less than 50 MB: the default
>> block cache size of 40 MB plus temporary buffers), so I'm not sure why
>> you would see steadily increasing memory use.
>>
>> ozy sjahputera wrote:
>> > I was trying to make a copy of a very large NITF image (about 40Kx100K
>> > pixels) using GDALDriver::CreateCopy(). The new file was set to have a
>> > different block size (the input was a scanline image; the output was to
>> > have a 128x128 block size). The program keeps getting killed by the
>> > system (Linux). I monitored the memory use of the program while it was
>> > executing CreateCopy, and it increased steadily as the progress
>> > indicator from CreateCopy moved forward.
>> >
>> > Why does CreateCopy() use so much memory? I have not perused the
>> > source code of CreateCopy() yet, but I am guessing it employs
>> > RasterIO() to perform the read/write?
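>> >
>> > For what it's worth, my mental model of the generic CreateCopy() path
>> > is a chunked RasterIO() copy, roughly like this Python sketch
>> > (illustrative only; not the actual GDAL source):
>> >
>> > import gdal
>> > src = gdal.Open('input.ntf')
>> > dst = gdal.GetDriverByName('NITF').Create('output.ntf',
>> >           src.RasterXSize, src.RasterYSize, src.RasterCount,
>> >           src.GetRasterBand(1).DataType,
>> >           ['BLOCKXSIZE=128', 'BLOCKYSIZE=128'])
>> > # copy a strip of scanlines at a time through the block cache
>> > for y in range(0, src.RasterYSize, 128):
>> >     n = min(128, src.RasterYSize - y)
>> >     buf = src.ReadRaster(0, y, src.RasterXSize, n)
>> >     dst.WriteRaster(0, y, src.RasterXSize, n, buf)
>> > dst = None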
>> >
>> > I tried different sizes for the GDAL cache: 64 MB, 256 MB, 512 MB,
>> > 1 GB, and 2 GB. The program got killed at all of these cache sizes. In
>> > fact, my Linux box became unresponsive when I set GDALSetCacheMax() to
>> > 64 MB.
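>> >
>> > For reference, the cache limit can be set along these lines (a minimal
>> > sketch through the Python bindings; in C/C++ the equivalent call is
>> > GDALSetCacheMax()):
>> >
>> > import gdal
>> > gdal.SetCacheMax(64 * 1024 * 1024)   # block cache limit in bytes (64 MB)
>> > print gdal.GetCacheMax()             # verify the new limit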
>> >
>> > Thank you.
>> > Ozy
>> >
>> >