[gdal-dev] GetStatistics and signed 8-bit IMGs and GeoTIFFs

Even Rouault even.rouault at mines-paris.org
Thu Sep 24 05:18:48 EDT 2009


Quoting Jason Roberts <jason.roberts at duke.edu>:

Please open a Trac ticket about that. This should be fixable within GDAL itself
by testing for the presence of PIXELTYPE=SIGNEDBYTE and, in that case, casting
the values to signed bytes.
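
In the meantime, a caller-side workaround along the following lines should do
(an untested sketch; the file path is just an example, and nodata values are
not handled):

import numpy
from osgeo import gdal, gdalconst

dataset = gdal.Open(r'C:\Temp\test_int8.img', gdalconst.GA_ReadOnly)
band = dataset.GetRasterBand(1)

if band.GetMetadata('IMAGE_STRUCTURE').get('PIXELTYPE') == 'SIGNEDBYTE':
    # GDT_Byte pixels come back as uint8; reinterpret them as int8 so that
    # 128..255 map back to -128..-1, then compute the statistics directly.
    data = band.ReadAsArray().view(numpy.int8)
    stats = [float(data.min()), float(data.max()),
             float(data.mean()), float(data.std())]
else:
    stats = band.GetStatistics(False, True)  # [min, max, mean, stddev]

For the five sample values below (-128, -1, 0, 1, 127) that would give
min -128, max 127, mean -0.2 and a standard deviation of about 80.6, instead
of the unsigned [0.0, 255.0, 102.2, 95.2...] reported further down.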

> Ok, here is how a GDAL caller can detect that a file is signed 8-bit:
>
>
>
> >>> band.GetMetadata('IMAGE_STRUCTURE')
> {'PIXELTYPE': 'SIGNEDBYTE'}
>
>
>
> Thanks to Frank W for giving me a clue about this a week ago that I promptly
> forgot about. Sorry about that.
>
>
>
> So, does anyone know whether band.GetStatistics is supposed to work for
> these signed 8-bit files? If not, I guess I will write a workaround so my
> app will calculate statistics itself for signed 8-bit files.
>
>
>
> Thanks,
>
> Jason
>
>
>
> From: Jason Roberts [mailto:jason.roberts at duke.edu]
> Sent: Wednesday, September 23, 2009 4:18 PM
> To: 'gdal-dev'
> Subject: RE: [gdal-dev] GetStatistics and signed 8-bit IMGs and GeoTIFFs
>
>
>
> Someone suggested that I try this with 1.6.2, without saying whether it might
> be a bug or by design. The behavior is the same with 1.6.2, using binaries
> produced by Tamas Szekeres (thanks, Tamas, for providing those on your
> website).
>
>
>
> Back to the original question: is it a bug or by design? How does the GDAL
> caller determine through GDAL that a given file is signed 8-bit or unsigned
> 8-bit?
>
>
>
> Thanks again,
>
> Jason
>
>
>
> From: gdal-dev-bounces at lists.osgeo.org
> [mailto:gdal-dev-bounces at lists.osgeo.org] On Behalf Of Jason Roberts
> Sent: Wednesday, September 23, 2009 3:11 PM
> To: 'gdal-dev'
> Subject: [gdal-dev] GetStatistics and signed 8-bit IMGs and GeoTIFFs
>
>
>
> Is band.GetStatistics() supposed to work with signed 8-bit integer rasters
> with the HFA or GTiff driver?
>
>
>
> With GDAL 1.6.0, I can successfully create a signed 8-bit raster with the
> HFA or GTiff driver using the PIXELTYPE=SIGNEDBYTE option:
>
>
>
> >>> a = numpy.array([[-128, -1, 0, 1, 127]], dtype='int8')
> >>> a
> array([[-128,   -1,    0,    1,  127]], dtype=int8)
> >>> b = numpy.cast['uint8'](a)
> >>> print b
> [[128 255   0   1 127]]
> >>>
> >>> gdal.UseExceptions()
> >>> driver = gdal.GetDriverByName('HFA')
> >>> dataset = driver.Create(r'C:\Temp\test_int8.img', 5, 1, 1,
> gdal.GDT_Byte, ['PIXELTYPE=SIGNEDBYTE'])
> >>> band = dataset.GetRasterBand(1)
> >>> band.WriteArray(b)
> 0
> >>> del band, dataset, driver
>
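> The GTiff case is the same apart from the driver name and file extension;
> presumably something like:
>
> >>> driver = gdal.GetDriverByName('GTiff')
> >>> dataset = driver.Create(r'C:\Temp\test_int8.tif', 5, 1, 1,
> gdal.GDT_Byte, ['PIXELTYPE=SIGNEDBYTE'])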
>
>
> When I look at that .img raster with ArcGIS, it is reported as signed 8-bit
> integer and the five values show up correctly. But when I open it with GDAL
> and call band.GetStatistics(), it looks like the calculation is performed on
> unsigned 8-bit integers:
>
>
>
> >>> dataset = gdal.Open(r'C:\Temp\test_int8.img', gdalconst.GA_ReadOnly)
> >>> band = dataset.GetRasterBand(1)
> >>> band.GetStatistics(False, True)
> [0.0, 255.0, 102.2, 95.199579831005551]
>
>
>
> I am wondering if this is a bug or by design. If by design, is there any way
> my application can tell through GDAL that it is a signed 8-bit file rather
> than an unsigned 8-bit file?
>
>
>
> Thanks very much for your help,
>
>
>
> Jason
>
>
>
>



