[gdal-dev] Re: HDF-EOS vs. GDAL: order of dimensions

Lucena, Ivan ivan.lucena at pmldnet.com
Thu Dec 20 09:15:54 EST 2007


Shmakov,

> 	How do I use HDFexplorer in, say, Shell scripts?

Download from www.space-research.org.

> 	Besides, is it available for GNU/Linux platform?

You can run it on Wine, I guess.

> 	So far I have nothing against your proposal.

It is just a suggestion and I may be overlooking some issues, so I hope 
that this discussion will help.

> 	However, it would be useful if the application could provide
> 	options to the driver without creating a helper file.  Consider
> 	running a web application (say, a kind of dataset explorer),
> 	which is, for the sake of security, restricted so that it cannot
> 	write any files.  It will be impossible for this application to

There is no need to write files in the client application.

> 	allow user to specify any options for the GDAL driver not
> 	already in some helper file.  Allowing for options to be passed
> 	directly from the application to GDAL seems to solve this
> 	problem.

We already have examples of knowledge-base files in the <gdal>\data 
folder, e.g. for reference systems. So why not have an 
hdf-knowledge-base file that tells the driver about specific issues with 
specific HDF products?

People can also add to this file and send their discoveries to be 
included in GDAL trunk, just like we do with the reference system files.

Users should only need to create helper files in the data folder in very 
specific cases, like in my hypothetical aerial photography example.
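To make the idea concrete, here is a sketch of what an entry in such an 
hdf-knowledge-base file could look like. The format, keywords, and 
dimension-role names below are entirely invented for illustration; 
nothing like this exists in GDAL today:

```
# Hypothetical hdf-knowledge-base entry (format invented for illustration).
# Each DATASET line maps a {product name, dataset name} pair to the roles
# of the stored dimensions, in storage order.
PRODUCT "MODIS Level 2 Cloud Properties"
  DATASET "Cloud_Mask_1km"      DIMS YDim,XDim,Band
  DATASET "Radiance_Variance"   DIMS Band,YDim,XDim
```

The driver would fall back to its current heuristics for any pair not 
listed in the file.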

> 	I believe that the driver should be given options already read
> 	and parsed, and not the helper file name or its contents.  This
> 	way, both the options parsing and the whole options concept
> 	could be much generalized among the drivers.  And it requires
> 	introducing an ``options'' structure, which may be initialized
> 	either by GDAL reading the helper file, or by the calling
> 	application itself.

I am dealing with a real scenario where, because of the complexity, 
variability, and quantity of HDF files, I can't see how open options 
would play a role. That should be a separate thread, IMHO.

But this is how I would like to see it working:

There are 350 HDF files that I want to serve in a webapp. The first 
thing to do is to create a catalog. I am not a big MapServer expert, but 
I guess you would use *gdaltindex* for that. That will happen on the 
server machine just once, during some sort of data preparation process, 
and then once in a while when you update it with new files.

When gdaltindex runs, the HDF4 driver will look for an hdf-helper file 
in the same folder as the data file and then in <gdal>\data. If a 
{product name, dataset name} pair in the data file matches a pair in the 
helper file, the driver will use the dimension order from the helper 
file, e.g.:

SUBDATASET_52_NAME=HDF4_EOS:EOS_SWATH:"MYD06_L2.A2006220.hdf":mod06:Cloud_Mask_1km
SUBDATASET_52_DESC=[2040x1354x2] Cloud_Mask_1km mod06 (8-bit integer)

--> This dataset has 2 bands of 2040x1354.

SUBDATASET_28_NAME=HDF4_EOS:EOS_SWATH:"MYD06_L2.A2006220.hdf":mod06:Radiance_Variance
SUBDATASET_28_DESC=[7x408x270] Radiance_Variance mod06 (16-bit integer)

--> This dataset has 7 bands of 408x270.

That is because this file is a well-known problematic product that 
doesn't provide dimension-order information for some datasets:

Product: "MODIS Level 2 Cloud Properties"
Dataset: "Cloud_Mask_1km"
Dataset: "Radiance_Variance"

Does it make sense?

Cheers,

Lucena
