Hi Even,

Yes, I tried:

gdal_translate -of "NITF" -co "ICORDS=G" -co "BLOCKXSIZE=128" -co "BLOCKYSIZE=128" NITF_IM:0:input.ntf output.ntf

I monitored the memory use with top and it increased steadily until it reached 98.4% (I have 8 GB of RAM and 140 GB of local disk for swap etc.) before the node died (not just the program; the whole system stopped responding).
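In case it helps, my CreateCopy-based program does essentially what the rough Python sketch below shows (the file names and the 64 MB cache value are just placeholders, and the exact binding calls may differ slightly from what I actually use):

from osgeo import gdal   # plain "import gdal" also works with the older bindings

# Keep the block cache small (same idea as GDALSetCacheMax in C);
# 64 MB here is just an example value.
gdal.SetCacheMax(64 * 1024 * 1024)

# Open the image subdataset, same syntax as on the command line.
src = gdal.Open('NITF_IM:0:input.ntf')

# CreateCopy with the same creation options as the gdal_translate call
# (strict=0, then the list of creation options).
driver = gdal.GetDriverByName('NITF')
dst = driver.CreateCopy('output.ntf', src, 0,
                        ['ICORDS=G', 'BLOCKXSIZE=128', 'BLOCKYSIZE=128'])
dst = None  # flush and close

Either way (gdal_translate or CreateCopy directly), the memory use climbs steadily during the copy.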
My GDAL version is 1.6.2.

gdalinfo on this image shows a raster size of 37504 x 98772 and Block=37504x1.
The image is compressed with the JPEG2000 option and contains two subdatasets (data and cloud data; I used only the data subdataset for the gdal_translate test).
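For completeness, this is roughly how I checked the subdatasets and block layout from Python (just a sketch, with a placeholder file name):

from osgeo import gdal

ds = gdal.Open('input.ntf')

# List the subdatasets (image data and cloud data in my case).
for name, description in ds.GetSubDatasets():
    print('%s - %s' % (name, description))

# Open the image subdataset and report its size, block layout and type.
img = gdal.Open('NITF_IM:0:input.ntf')
band = img.GetRasterBand(1)
print('Size : %d x %d' % (img.RasterXSize, img.RasterYSize))
print('Block: %s' % band.GetBlockSize())
print('Type : %s' % gdal.GetDataTypeName(band.DataType))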
Band info from gdalinfo:

Band 1 Block=37504x1 Type=UInt16, ColorInterp=Gray

Ozy

On Tue, Jan 12, 2010 at 5:38 PM, Even Rouault <even.rouault@mines-paris.org> wrote:
> Ozy,
>
> Did you try with gdal_translate -of NITF src.tif output.tif -co
> BLOCKSIZE=128? Does it give similar results?
>
> I'm a bit surprised that you even managed to read a 40Kx100K large NITF
> file organized as scanlines. Until very recently there was a limit that
> prevented reading blocks with one dimension bigger than 9999. This was
> fixed recently in trunk (see ticket
> http://trac.osgeo.org/gdal/ticket/3263) and in branches/1.6, but the fix
> has not yet made it into an officially released version. So which GDAL
> version are you using?
>
> Does the output of gdalinfo on your scanline-oriented input NITF give
> something like:
>
> Band 1 Block=40000x1 Type=Byte, ColorInterp=Gray
>
> Is your input NITF compressed or uncompressed?
>
> Anyway, with the latest trunk, I've simulated creating a similarly large
> NITF image with the following Python snippet:
>
> import gdal
> ds = gdal.GetDriverByName('NITF').Create('scanline.ntf', 40000, 100000)
> ds = None
>
> and then creating the tiled NITF:
>
> gdal_translate -of NITF scanline.ntf tiled.ntf -co BLOCKSIZE=128
>
> The memory consumption is very reasonable (less than 50 MB: the default
> block cache size of 40 MB plus temporary buffers), so I'm not clear why
> you would have a problem of increasing memory use.
>
> ozy sjahputera wrote:
>> I was trying to make a copy of a very large NITF image (about 40Kx100K
>> pixels) using GDALDriver::CreateCopy(). The new file was set to have a
>> different block size (the input was a scanline image, the output is to
>> have a 128x128 block size). The program keeps getting killed by the
>> system (Linux). I monitored the memory use of the program as it was
>> executing CreateCopy, and the memory use was steadily increasing as the
>> progress indicator from CreateCopy was moving forward.
>>
>> Why does CreateCopy() use so much memory? I have not perused the
>> source code of CreateCopy() yet, but I am guessing it employs
>> RasterIO() to perform the read/write?
>>
>> I tried different sizes for the GDAL cache: 64MB, 256MB, 512MB,
>> 1GB, and 2GB. The program got killed with all these cache sizes. In
>> fact, my Linux box became unresponsive when I set GDALSetCacheMax() to
>> 64MB.
>>
>> Thank you.
>> Ozy