[gdal-dev] Large NITF file performance problems
Martin Chapman
mchapman at texelinc.com
Wed May 14 19:08:55 EDT 2008
Jason,
Is your program rendering the data to the screen or just processing it
somehow? As a benchmark for you, I can render NITF files that are 1.8 GB
and bigger in one to two seconds on my machine, using GDAL over a Samba
network connection to a Linux machine (100 Mbit/s Ethernet). The trick is
resampling the image on the fly with RasterIO() so it fits the screen size.
There is a HUGE performance difference when you do this.
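For anyone who wants a concrete starting point, here is a minimal sketch of
that kind of on-the-fly resampled read with the GDAL Python bindings; the
file name and the 1024x768 target are placeholders, not anything from a real
application.

#!/usr/bin/env python
# Sketch: read a large NITF decimated to roughly screen size by letting
# RasterIO()/ReadRaster() deliver into a small buffer, instead of reading
# every full-resolution pixel. 'huge.ntf' and 1024x768 are example values.
import gdal

ds = gdal.Open('huge.ntf')
band = ds.GetRasterBand(1)

screen_x, screen_y = 1024, 768
# Request the whole band, but into a screen-sized buffer; GDAL resamples
# on the fly and touches far fewer pixels on disk.
data = band.ReadRaster(0, 0, ds.RasterXSize, ds.RasterYSize,
                       buf_xsize=screen_x, buf_ysize=screen_y)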
If you are just making copies of the image, then I would look at your
physical disk subsystem. If you do not have a write cache enabler, then
it may be your customer's disk that is slow and not the software. I have run
into that situation before with HP Smart Arrays. You can validate that this
is the case by comparing your CPU usage to the average disk write queue
length. If the CPU usage is less than your write queue, it means your disk
subsystem is not able to keep up with the CPU; basically, GDAL is faster than
your hardware. My guess is that if everything is working fine, your CPU usage
should be somewhere between 70 and 100%.
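As a rough way to watch for that while a copy runs, here is a small sketch
that samples CPU load and write throughput together. It uses the psutil
package, which is my own assumption and not part of GDAL, and write
throughput is only a stand-in for the PerfMon write queue length counter; if
CPU sits low while writes have plateaued, the disk is likely the bottleneck.

# Sketch: sample CPU usage and disk write throughput once a second while a
# large copy runs. psutil is an assumed third-party dependency; throughput
# is used here as a stand-in for the disk write queue length counter.
import time
import psutil

prev = psutil.disk_io_counters()
for _ in range(30):
    time.sleep(1)
    cpu = psutil.cpu_percent(interval=None)
    cur = psutil.disk_io_counters()
    mb_written = (cur.write_bytes - prev.write_bytes) / (1024.0 * 1024.0)
    prev = cur
    print('CPU %5.1f %%   writes %7.1f MB/s' % (cpu, mb_written))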
Martin
From: gdal-dev-bounces at lists.osgeo.org
[mailto:gdal-dev-bounces at lists.osgeo.org] On Behalf Of Jason Beverage
Sent: Wednesday, May 14, 2008 12:24 PM
To: Even Rouault
Cc: gdal-dev at lists.osgeo.org
Subject: Re: [gdal-dev] Large NITF file performance problems
Hi Even,
Thanks for your sample file, it really helped. It loads quickly in our
application and in OpenEV, so I'll have to go down another route to find
out what's wrong with the customer's data.
However, it does appear that there is a problem somewhere when doing a
CreateCopy(), as Ivan pointed out. I can translate that file to a GeoTIFF in
about 40 seconds, but writing NITF has been going for about 5 minutes now and
it's still on "0" with no progress reported.
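In case it helps to reproduce this outside of our application, here is a
minimal sketch of the CreateCopy() path in the Python bindings; the file
names are placeholders. Passing gdal.TermProgress as the callback at least
shows whether the NITF driver is advancing at all.

# Sketch of the CreateCopy() path that appears to hang; file names are
# placeholders. gdal.TermProgress prints progress to the terminal.
import gdal

src = gdal.Open('big_input.tif')
dst = gdal.GetDriverByName('NITF').CreateCopy('big_output.ntf', src,
                                              callback=gdal.TermProgress)
dst = None
src = None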
Thanks,
Jason
On Tue, May 13, 2008 at 2:45 PM, Even Rouault <even.rouault at mines-paris.org>
wrote:
Hi,
I don't really understand why there would be issues with large file sizes
(below 4 GB, of course).
I've tried the following small Python script to generate a 1.2 GB NITF file
filled with '1' as the pixel value:
#!/usr/bin/env python
import gdal

new_ds = gdal.GetDriverByName('NITF').Create('largentf.ntf', 100000, 12000, 1)
new_ds.GetRasterBand(1).Fill(1)
new_ds = None
It runs in less than one minute on my slow machine. I can open the resulting
file with OpenEV and scroll through it quite smoothly. gdalinfo -checksum
largentf.ntf also runs in about 3 minutes, which seems reasonable.
A gdalinfo on the file shows that it is automatically tiled in blocks of
256x256, and by looking at the code I can see that 256x256 tiling is
automatically activated when either the number of lines or the number of
columns of the file is larger than 9999.
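If you want to rule that tiling logic in or out on your side, something like
the following sketch forces an explicit block size when creating a test
file; it assumes your GDAL build's NITF driver accepts the
BLOCKXSIZE/BLOCKYSIZE creation options, and the sizes are just examples.

# Sketch: create a test NITF with an explicit block size instead of relying
# on the automatic 256x256 tiling. Assumes the NITF driver in your build
# supports the BLOCKXSIZE/BLOCKYSIZE creation options.
import gdal

drv = gdal.GetDriverByName('NITF')
ds = drv.Create('tiled_test.ntf', 100000, 12000, 1,
                options=['BLOCKXSIZE=256', 'BLOCKYSIZE=256'])
ds.GetRasterBand(1).Fill(1)
ds = None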
So I don't see any obvious reason why you get poor performance. You could
probably break with a debugger to see where it idles during gdal_translate?
I'm emailing you the bzip2 of the 1.2 GB file, which is only 1 KB, so you can
test on the same file as me.
As far as your customer is concerned, maybe there is an issue with
compression (for example, a very large mono-block JPEG image?). A 'gdalinfo'
on the files could maybe give some hints.
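If you would rather check from Python than scan the full gdalinfo output, a
short sketch like this reports the compression and blocking; the file name is
a placeholder, and the COMPRESSION item may come back as None if the driver
does not report it.

# Sketch: report compression and block layout for a NITF file.
# 'customer.ntf' is a placeholder; COMPRESSION may be None if unreported.
import gdal

ds = gdal.Open('customer.ntf')
print('Compression: %s' % ds.GetMetadataItem('COMPRESSION', 'IMAGE_STRUCTURE'))
print('Block size:  %s' % ds.GetRasterBand(1).GetBlockSize())
print('Raster size: %d x %d' % (ds.RasterXSize, ds.RasterYSize))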
On Tuesday 13 May 2008 at 18:21:15, Jason Beverage wrote:
> Hi all,
>
> I've got a customer who is using large NITF files (~1.5 GB) and is seeing
> ridiculously slow load times in our application. I don't have access to
> his data, so I can't test directly, but it seems like the NITF driver may
> have some issues with large file sizes (> 1GB).
>
> To test on my end, I created a few different GeoTIFF files (600 MB, 800 MB,
> and 1.2 GB) and tried to convert them to NITF using gdal_translate.
> Converting the 600 and 800 MB files worked just fine and had very
> reasonable speed (a few minutes). However, when I tried to use
> gdal_translate on the 1.2 GB file, the process hung at 0% forever and I
> had to kill it after waiting for a very long time. It seems as if there is
> something magical about this 1 GB boundary.
>
> Does anyone have any ideas or suggestions as to what could be causing this
> issue?
>
> Thanks!
>
> Jason