[gdal-dev] Extracting cell values from a big image and many smaller images (Fork)
Lucena, Ivan
ivan.lucena at pmldnet.com
Mon Mar 3 16:03:40 EST 2008
Limei, Frank,
So we are definitely talking about different issues.
I am trying to measure the time consumed in querying cells through a
series of raster bands. I did that for a folder full of single-band
files, varying the number of files and the size of the individual
files. No tiles, no compression.
Now I am trying to group those files into multi-band formats like
GeoTIFF, HDF5, netCDF, HFA, etc. I will then try tiles and BIP.
Then I will test the same query from Oracle GeoRaster, PostgreSQL
(TerraLib schema), and SQL Server (ArcSDE schema). For this last one I
don't know exactly what language/environment I should use... :|
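
For reference, the per-cell query being timed can be sketched like this
with the osgeo Python bindings (the file name and cell coordinates are
hypothetical):

    import time
    from osgeo import gdal

    ds = gdal.Open('stack.tif')  # hypothetical multi-band test file
    t0 = time.time()
    # Query a single cell through every band, one RasterIO call per band.
    values = [ds.GetRasterBand(b + 1).ReadAsArray(100, 200, 1, 1)[0, 0]
              for b in range(ds.RasterCount)]
    print(ds.RasterCount, 'bands queried in', time.time() - t0, 'seconds')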
Anyway, I am having problems creating an unsigned 16-bit integer
GeoTIFF with more than 40 bands, or a byte GeoTIFF with more than 80
bands. Is there any limitation on that in GDAL + Python?
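
A creation call of this shape reproduces the setup described above
(sizes hypothetical). One guess: halving the sample size doubles the
band ceiling, which looks like the 4 GiB classic TIFF size limit; if
so, the GTiff driver's BIGTIFF=YES creation option may help:

    from osgeo import gdal

    drv = gdal.GetDriverByName('GTiff')
    # BIGTIFF=YES lifts the 4 GiB classic-TIFF cap (assumption: that is
    # the limit being hit here).
    ds = drv.Create('stack.tif', 10000, 10000, 80, gdal.GDT_UInt16,
                    options=['BIGTIFF=YES'])
    if ds is None:
        print('Create failed:', gdal.GetLastErrorMsg())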
Best regards,
Ivan
> Frank Warmerdam wrote:
>> - If you will be accessing your huge image in local chunks, consider
>> organizing it as a tiled image. Perhaps an Imagine (HFA) file or
>> a BigTIFF with tiling.
Limei Ran wrote:
> Hi Frank:
>
> Thank you very much for your help. I am still puzzled about which GDAL
> classes I should use in our application. I should have given you more
> details about our goal in the previous email. Our goal for the program
> is to compute the percentage of each NLCD 30 m land use type within
> each of the 12 km modeling grid cells, like a zonal computation in
> ArcGIS. The 12 km modeling grid covers more than 2/3 of the US (for
> meteorological modeling) plus parts of Canada and Mexico. For areas
> outside the US, we want to use MODIS 1 km data for the land use
> percentage calculation.
> I created a program using OGR classes to generate the 12 km modeling
> grid shapefile with modeling grid IDs. Then I used ogr2ogr for the
> projection conversion, gdal_translate to create a 30 m cell zero-value
> grid with the modeling grid extent, and gdal_rasterize to rasterize
> the shapefile into that zero-value image as you suggested (UInt32 to
> hold the IDs). I also clipped the 1 km MODIS image using the modeling
> area and regridded it into 30 m cells.
>
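
For reference, the gdal_rasterize step can also be done
programmatically; a rough sketch with the osgeo Python bindings, using
hypothetical file and attribute names:

    from osgeo import gdal, ogr

    shp = ogr.Open('grid_12km.shp')                  # hypothetical shapefile
    layer = shp.GetLayer(0)
    target = gdal.Open('grid_30m_zero.tif', gdal.GA_Update)  # zero-value grid
    # Burn each polygon's grid-ID attribute into band 1.
    gdal.RasterizeLayer(target, [1], layer, options=['ATTRIBUTE=GRID_ID'])
    target = None                                    # flush to disk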
> Now I have this created image (116474 x 100960 cells with a 30 m cell
> size, 47 GB as a *.bil file), 14 NLCD 30 m land use images, and the
> MODIS 30 m land use image. I am developing a C++ program using GDAL to
> compute a table with each modeling grid ID and all the land use
> percentages falling into that grid cell.
>
> The logic in my mind is to read through each NLCD image and match each
> pixel against the modeling grid image to tally the land use types
> falling into each modeling grid cell. After going through the NLCD
> images, the no-data areas will be calculated using the MODIS image
> data. But reading one pixel at a time with RasterIO() will take a long
> time to finish, like you said. What is the best way to read through
> those images in a GDAL program? Are there any image addition classes
> or zonal function classes I can use within the GDAL C++ API?
>
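
As far as I know, GDAL itself has no zonal statistics class, but the
tally can be done with block-wise reads. A minimal sketch, written with
the osgeo Python bindings rather than C++, assuming both rasters share
the same 30 m grid and extent, and using hypothetical file names:

    import numpy as np
    from osgeo import gdal

    zones = gdal.Open('grid_ids_30m.bil')      # hypothetical: burned grid IDs
    lu_ds = gdal.Open('nlcd_30m.tif')          # hypothetical: one NLCD image
    zb, lb = zones.GetRasterBand(1), lu_ds.GetRasterBand(1)

    counts = {}                                # (grid_id, class) -> pixel count
    xsize, ysize, chunk = lu_ds.RasterXSize, lu_ds.RasterYSize, 512
    for row in range(0, ysize, chunk):
        n = min(chunk, ysize - row)
        z = zb.ReadAsArray(0, row, xsize, n).ravel().astype(np.int64)
        c = lb.ReadAsArray(0, row, xsize, n).ravel().astype(np.int64)
        # Pack (grid ID, land use class) into one integer so numpy can
        # tally the pairs in bulk; 256 exceeds any NLCD class code.
        pairs, tallies = np.unique(z * 256 + c, return_counts=True)
        for p, t in zip(pairs, tallies):
            key = (int(p // 256), int(p % 256))
            counts[key] = counts.get(key, 0) + int(t)
    # Percentages follow by normalizing the counts over each grid ID.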
> Thank you very much,
>
>
> Limei
>
>
> Frank Warmerdam wrote:
>
>> Limei Ran wrote:
>>
>>>
>>> Hi:
>>>
>>> I am using the GDAL C++ library to create a program. The program
>>> will ultimately generate a statistics table with cell values from a
>>> very big modeling grid domain image (almost the whole US) and many
>>> smaller land use images within the big image.
>>>
>>> I need to go through all the small image pixels to match grid cell
>>> values in the big image. There are many ways to read image data in
>>> lines and blocks with the GDALRasterBand class.
>>>
>>> Since I am new to using the GDAL libraries, I would appreciate any
>>> suggestions you might have on accessing the images efficiently.
>>
>>
>> Limei,
>>
>> I'm not exactly clear on what you want to do, but a couple of hints:
>>
>> - Avoid doing many one-pixel reads with RasterIO(). There is quite
>> a bit of overhead in each call, so you should only do one-pixel
>> reads when that is all you really need. I believe that even with
>> caching, using one-pixel reads to read a whole scanline would easily
>> be an order of magnitude slower than doing one full scanline read
>> (see the first sketch after this list).
>>
>> - If you will be accessing your huge image in local chunks, consider
>> organizing it as a tiled image. Perhaps an Imagine (HFA) file or
>> a BigTIFF with tiling (see the second sketch below).
>>
>> - If you need precision, and your small land use images are for
>> reasonably small areas, I would suggest just loading all the data
>> from the big image that matches the area of the small image in one
>> gulp (one RasterIO() call). Then do your matching analysis and move
>> on to the next (see the third sketch below).
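
A sketch of the first hint, using the osgeo Python bindings and a
hypothetical file, just to show the call pattern:

    from osgeo import gdal

    band = gdal.Open('example.tif').GetRasterBand(1)  # hypothetical file
    width = band.XSize

    # One call for the whole scanline: one block fetch, one buffer copy.
    line = band.ReadAsArray(0, 0, width, 1)

    # Pixel at a time: the same data, but 'width' separate RasterIO
    # calls, each paying the per-call overhead described above.
    pixels = [band.ReadAsArray(x, 0, 1, 1)[0, 0] for x in range(width)]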
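
A sketch of the second hint, converting the huge image to a tiled
BigTIFF (file names hypothetical):

    from osgeo import gdal

    src = gdal.Open('huge.bil')                    # hypothetical source
    drv = gdal.GetDriverByName('GTiff')
    # A tiled layout means local windows touch only the tiles they
    # overlap; BIGTIFF=YES allows files past the 4 GiB classic-TIFF limit.
    dst = drv.CreateCopy('huge_tiled.tif', src,
                         options=['TILED=YES', 'BLOCKXSIZE=256',
                                  'BLOCKYSIZE=256', 'BIGTIFF=YES'])
    dst = None                                     # flush and close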
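
And a sketch of the third hint: compute the big image's pixel window
covering one small image from the two geotransforms, then read it in a
single call (assumes both rasters are north-up in the same projection;
file names hypothetical):

    from osgeo import gdal

    big = gdal.Open('grid_ids_30m.bil')      # hypothetical big zone image
    small = gdal.Open('nlcd_tile.tif')       # hypothetical small image
    bg, sg = big.GetGeoTransform(), small.GetGeoTransform()

    # Pixel offset of the small image's origin inside the big image.
    xoff = int(round((sg[0] - bg[0]) / bg[1]))
    yoff = int(round((sg[3] - bg[3]) / bg[5]))

    # One gulp: the whole matching window in a single RasterIO call.
    window = big.GetRasterBand(1).ReadAsArray(
        xoff, yoff, small.RasterXSize, small.RasterYSize)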
>>
>> Best regards,