[gdal-dev] help using gdal_grid ...

Paul Spencer pagameba at gmail.com
Sun Jun 1 21:16:08 EDT 2008


Hi,

I have a set of x,y,z data in text files.  There are about 1800
individual files, each with several thousand points, for a total of
about 14.5 million entries.

I would like to convert this into a DEM so that I can generate
contours and a hillshade.
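
For context, once the DEM exists I expect the contour step to look
something like this (the 10 m interval and the attribute name are
placeholders):

   gdal_contour -a elev -i 10.0 dem.tif contours.shp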

My first attempt was to concatenate all the files into a single
file, create a VRT file, and run gdal_grid on the result.  It took
about 4 hours for gdal_grid (running at 99.5% of one core on my
MacBook Pro) to output the first '0' in the progress monitor, so I
abandoned that approach :)
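
In case the details matter, the setup was roughly the following (the
field_1..field_3 column names assume a headerless CSV, which is how
the OGR CSV driver names unlabelled numeric columns, and the invdist
power is just the default I started from, so treat this as a sketch):

   <OGRVRTDataSource>
     <OGRVRTLayer name="points">
       <SrcDataSource>all_points.csv</SrcDataSource>
       <LayerSRS>EPSG:2036</LayerSRS>
       <GeometryType>wkbPoint</GeometryType>
       <GeometryField encoding="PointFromColumns"
                      x="field_1" y="field_2" z="field_3"/>
     </OGRVRTLayer>
   </OGRVRTDataSource>

   gdal_grid -a invdist:power=2.0 -ot Float32 \
             -txe 2302990 2716502 -tye 7388827 7597784 \
             -outsize 5900 3000 -l points all_points.vrt dem.tif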

So I would like some advice on how to tune the parameters to gdal_grid  
to get reasonable results in a more suitable amount of time.

The data has been collected at intervals of approximately 70 meters
or less, depending on the terrain.

The area of coverage is in EPSG:2036 and spans

2302989.998137,7597784.001173 - 2716502.001863,7388826.998827

which is about 413,500 m x 209,000 m.

Questions:

* what is a reasonable -outsize value?  Originally I thought 5900 x
3000 based on the ~70 m sample spacing, but perhaps that is way too
big?
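
For reference, the arithmetic behind that guess:

   (2716502 - 2302990) m / 70 m = 413512 / 70 ≈ 5907 columns
   (7597784 - 7388827) m / 70 m = 208957 / 70 ≈ 2985 rows

which I rounded to 5900 x 3000.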

* invdist seems to be the slowest algorithm, based on some quick
tests on individual files.  Is there much difference between average
and nearest?  What values of radius1 and radius2 will be fastest
while still producing reasonable results at the -outsize above?
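
For concreteness, the full-size runs I have in mind look like this
(the 100 m radii are just a guess on my part; radius1/radius2 are in
georeferenced units, so with ~70 m sample spacing a 100 m search
ellipse should usually catch at least one point):

   gdal_grid -a average:radius1=100.0:radius2=100.0 -ot Float32 \
             -outsize 5900 3000 -l points all_points.vrt dem_avg.tif
   gdal_grid -a nearest:radius1=100.0:radius2=100.0 -ot Float32 \
             -outsize 5900 3000 -l points all_points.vrt dem_near.tif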

* would it be better to convert from CSV to something else (shp?) first?
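
That is, something like this (reusing the all_points.vrt from above):

   ogr2ogr -f "ESRI Shapefile" points.shp all_points.vrt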

* would it be better to process the individual input files and then
run gdal_merge.py on the results?
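
That is, roughly the following, assuming one small VRT per input
file, each with its layer named "points" (when -txe/-tye are omitted,
gdal_grid takes the extent from the input points; the per-tile
-outsize is just a placeholder):

   for f in tiles/*.vrt; do
       gdal_grid -a average:radius1=100.0:radius2=100.0 -ot Float32 \
                 -outsize 512 512 -l points "$f" "${f%.vrt}.tif"
   done
   gdal_merge.py -o dem.tif tiles/*.tif

One concern: points just outside a tile cannot contribute to cells
near that tile's edge, so the merged DEM may show seams along tile
boundaries.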

Cheers

Paul

__________________________________________

   Paul Spencer
   Chief Technology Officer
   DM Solutions Group Inc
   http://www.dmsolutions.ca/

