[gdal-dev] help using gdal_grid ...

Tamas Szekeres szekerest at gmail.com
Mon Jun 2 05:17:52 EDT 2008


Paul,

In recent days I've run into the same performance problem, due to the
large number of scattered points. I was able to get a significant
performance improvement by reducing the number of floating point
operations in the inner loop of the invdist implementation. I've also
created a custom invdist for the case where the power is 2.0, so that
the pow() function can be eliminated entirely in that special case.
I've been thinking of contributing these changes back, but I'm
hesitant to mess things up with funky optimizations inside the code.
Maybe a bug report along with a patch would be more reasonable from
me.
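[Editor's note: a minimal sketch of the optimization Tamas describes. This is
not the GDAL source; it assumes a plain inverse-distance-weighting loop, and
shows why power=2.0 is special: the weight 1/d^p needs pow(d2, p/2) in
general, but for p=2 it is just 1/d2, so both pow() and the square root
disappear from the inner loop.]

```python
import math

def idw_general(points, x, y, power):
    """General inverse-distance weighting: one pow() call per sample point."""
    num = den = 0.0
    for px, py, pz in points:
        d2 = (px - x) ** 2 + (py - y) ** 2  # squared distance, no sqrt yet
        if d2 == 0.0:
            return pz  # query point coincides with a sample
        w = 1.0 / math.pow(d2, power / 2.0)  # 1/d^p via squared distance
        num += pz * w
        den += w
    return num / den

def idw_power2(points, x, y):
    """Special case power=2.0: weight is 1/d^2 exactly, so pow() vanishes."""
    num = den = 0.0
    for px, py, pz in points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return pz
        w = 1.0 / d2  # no pow(), no sqrt()
        num += pz * w
        den += w
    return num / den
```

For millions of points per output cell neighbourhood, dropping one pow() per
point in the inner loop is exactly the kind of change that can yield the
speedup described above; both functions produce identical results when
power=2.0.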

Best regards,

Tamas



2008/6/2 Paul Spencer <pagameba at gmail.com>:
> Hi,
>
> I have a set of x,y,z data in text files.  There are about 1800 individual
> files.  Each file has several thousand points.  The sum total is about 14.5
> million entries.
>
> I would like to convert this into a DEM so I can make contours and
> hillshade.
>
> My first attempt has been to concatenate all the files into a single file,
> create a VRT file, and run gdal_grid on the resulting file.  It took about 4
> hours for gdal_grid (running at 99.5% of one core on my macbookpro) to
> output the first '0' in the progress monitor so I abandoned this process :)
>
> So I would like some advice on how to tune the parameters to gdal_grid to
> get reasonable results in a more suitable amount of time.
>
> The data has been collected at approximately 70 meter intervals or less
> depending on the terrain.
>
> The area of coverage is in EPSG:2036 and is
>
> 2302989.998137,7597784.001173 - 2716502.001863,7388826.998827
>
> which is about 413000 m x 209000 m
>
> Questions:
>
> * what is a reasonable -outsize value?  Originally I thought 5900 x 3000
> based on the 70 m per measurement thing, but perhaps that is way too big?
>
> * invdist seems to be the slowest algorithm based on some quick tests on
> individual files.  Is there much difference between average and nearest?
>  What values of radius1 and radius2 will work the fastest while still
> producing reasonable results of the -outsize above?
>
> * would it be better to convert from CSV to something else (shp?) first?
>
> * would it be better to process individual input files then run
> gdal_merge.py on the result?
>
> Cheers
>
> Paul
>
> __________________________________________
>
>  Paul Spencer
>  Chief Technology Officer
>  DM Solutions Group Inc
>  http://www.dmsolutions.ca/
> _______________________________________________
> gdal-dev mailing list
> gdal-dev at lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/gdal-dev
>
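[Editor's note: Paul's back-of-envelope -outsize can be checked directly from
the extent he quotes. Assuming a cell size equal to the ~70 m sampling
interval (a hypothetical but natural choice), the grid dimensions come out
very close to his 5900 x 3000 estimate:]

```python
# Extent quoted in the email (EPSG:2036 coordinates, metres).
xmin, ymax = 2302989.998137, 7597784.001173
xmax, ymin = 2716502.001863, 7388826.998827

spacing = 70.0  # approximate sample interval in metres (assumption)

cols = round((xmax - xmin) / spacing)  # width  / cell size
rows = round((ymax - ymin) / spacing)  # height / cell size

print(cols, rows)  # 5907 2985
```

At that resolution each output cell corresponds to roughly one input sample,
so a larger -outsize would mostly interpolate between measurements rather
than add information.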

