[gdal-dev] Gdal-Grid lidar.
Even Rouault
even.rouault at spatialys.com
Mon Jun 15 12:24:41 PDT 2015
On Monday, June 15, 2015 21:00:15, Nicolas Cadieux wrote:
> Thanks Even,
> One last question about the buffer. Is 2GB the max buffer size for
> gdal_grid? If I select that, am I likely to run into problems with some
> values?
The variable is a uint32, so you could probably go up to 4 GB.
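
For reference, a quick back-of-the-envelope check of those limits (plain
Python, nothing GDAL-specific; just arithmetic on the numbers from this
thread):

    # Largest value a uint32 can hold, i.e. the hard ceiling on the
    # buffer size variable.
    print(2**32 - 1)                         # 4294967295 bytes, just under 4 GiB
    # The 1.5 GB setting reported earlier in the thread, for comparison:
    print((512 + 512 + 514) * 1024 * 1024)   # 1612709888 bytes, ~1.5 GiB
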
> Apart from trial and error, is there a balance that should be
> struck between available computer memory (currently at 64GB), input file
> size and the buffer, or can I simply select a 2GB buffer?
As you're exploring new territories, I'd expect you to report how this works
;-)
>
> Thank you very much for your help.
>
> Nicolas Cadieux M.Sc.
> Les Entreprises Archéotec inc.
> 8548, rue Saint-Denis Montréal H2P 2H2
> Telephone: 514.381.5112 Fax: 514.381.4995
> www.archeotec.ca
>
> On Jun 12, 2015 17:29, Even Rouault <even.rouault at spatialys.com> wrote:
> > On Friday, June 12, 2015 23:18:26, Nicolas Cadieux wrote:
> > > Hi Even,
> > > Thank you very much. I increased the buffer to 1.5 GB
> > > ((512+512+514)*1024*1024) and I can now work much, much faster with
> > > gdal_grid.
> > >
> > > Does gdal_fillnodata have the same type of variable that could speed
> > > things up? It is also slow and does not seem to be sped up by
> > > "ALL_CPUS".
> >
> > No, no tunable for fillnodata, and it doesn't support multi-threaded
> > parallelization.
> >
> > > I will use warp and the raster calculator in the future, so if you
> > > have more magic up your sleeve, I would appreciate it.
> > >
> > > Cheers and thanks again.
> > >
> > > Nicolas Cadieux M.Sc.
> > > Les Entreprises Archéotec inc.
> > > 8548, rue Saint-Denis Montréal H2P 2H2
> > > Telephone: 514.381.5112 Fax: 514.381.4995
> > > www.archeotec.ca
> > >
> > > On Jun 5, 2015 19:13, Even Rouault <even.rouault at spatialys.com> wrote:
> > > > On Saturday, June 6, 2015 00:18:45, Nicolas Cadieux wrote:
> > > > > Hi,
> > > > > I have been using gdal_grid to interpolate LiDAR data for some time
> > > > > now. This would be a typical command:
> > > > >
> > > > > gdal_grid -l P0344_sol2004_XYZI_MTM8_CGVD28_150529 -txe 348733.0
> > > > > 401135.0 -tye 5568587.0 5773796.0 -a
> > > > > nearest:radius1=3.0:radius2=3.0:angle=0.0:nodata=0.0 -outsize
> > > > > 52402.0 205209.0 -of GTiff
> > > > > H:/Nicolas/pytemp/OutPut\P0344_sol2004_XYZI_MTM8_CGVD28_150529.vrt
> > > > > H:/Nicolas/pytemp/OutPut\P0344_sol2004_XYZI_MTM8_CGVD28_150529.tif
> > > > > --config GDAL_NUM_THREADS ALL_CPUS
> > > > >
> > > > > As you can see, this is a huge raster and the .csv file (pointed
> > > > > to by a .vrt) contains over a billion points. I am trying to do
> > > > > this in one shot to avoid boundary effects. (I know I can split
> > > > > this .csv file into smaller bits and make this more efficient.)
> > > > > The computer I am using has 12 cores, 64GB of memory (smaller than
> > > > > the .csv file) and a 1TB PCIe solid state drive. It should manage
> > > > > the task, but it has been running for 5 days with less than 10%
> > > > > done. I have access to a supercomputer with GDAL if all else
> > > > > fails.
> > > > >
> > > > > Here is my question. "--config GDAL_NUM_THREADS ALL_CPUS" and
> > > > > "--config GDAL_NUM_THREADS 12" do not seem to work with the
> > > > > nearest neighbor algorithm. (They seem to work with the -a count
> > > > > algorithm.) If I use this on a smaller data set, I see no
> > > > > difference in speed or CPU usage between using "ALL_CPUS" and not
> > > > > using this switch. Is this a bug or simply a limitation of the
> > > > > nearest neighbor algorithm?
> > > >
> > > > Nicolas,
> > > >
> > > > The threading mechanism is generic for all algorithms. Add "--debug
> > > > on" and you should see traces like "GDAL_GRID: Using 12 threads".
> > > > Now, a reason for it not being efficient is that the work buffer
> > > > used by gdal_grid (16 MB) is too small relative to the speed of the
> > > > nearest algorithm. Hmm, actually, looking more closely at the code,
> > > > I see that the quad tree used to index the points is rebuilt for
> > > > each work buffer. This is likely the main cause of the inefficiency,
> > > > and why you don't see measurable differences between non-threaded
> > > > and threaded computations (since the building of the quad tree is
> > > > not multi-threaded). If you compiled from source, you could try
> > > > changing the "const GUInt32 nDesiredBufferSize = 16*1024*1024;" line
> > > > in apps/gdal_grid.cpp to a much higher value. A more proper solution
> > > > would be to avoid rebuilding the quad tree for each work buffer, as
> > > > sketched below. This is mainly code restructuring.
> > > >
> > > > An alternative to gdal_grid would be to use gdal_rasterize +
> > > > gdal_fillnodata. This is usually much, much faster, so you could
> > > > check whether it gives the results you expect.
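
For example, mirroring the extent and size of the gdal_grid command quoted
above (illustrative only: the Z attribute name and the file names are
assumptions, and the options should be checked against the gdal_rasterize
and gdal_fillnodata documentation):

    gdal_rasterize -l P0344_sol2004_XYZI_MTM8_CGVD28_150529 -a Z \
        -te 348733.0 5568587.0 401135.0 5773796.0 -ts 52402 205209 \
        -a_nodata 0 -of GTiff input.vrt rasterized.tif
    gdal_fillnodata.py -md 3 rasterized.tif filled.tif
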
> > > >
> > > > You could also try scipy:
> > > > http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata
> > > > Not sure how it behaves with that number of points.
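
A minimal sketch of what that might look like ("nearest" matches the
-a nearest algorithm used above; the file name and X,Y,Z column layout are
assumptions, and the full 52402 x 205209 grid would not fit in memory in a
single call, so a small trial window is used):

    # Minimal scipy.interpolate.griddata sketch; the CSV layout is an
    # assumption.
    import numpy as np
    from scipy.interpolate import griddata

    pts = np.loadtxt("points.csv", delimiter=",")
    # A small trial window; the full 52402 x 205209 grid would need
    # tiling rather than one call.
    gx, gy = np.mgrid[348733.0:349733.0:1000j, 5568587.0:5569587.0:1000j]
    z = griddata(pts[:, :2], pts[:, 2], (gx, gy), method="nearest")
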
> > > >
> > > > Even
> > > >
> > > > > I don't want to move this
> > > > > to a supercomputer if it will not multi-thread properly.
> > > > >
> > > > > Thanks,
> > > > > Nicolas Cadieux
> > > > >
> > > > > (GDAL 1.11.2, installed via the OSGeo4W 64-bit installer with
> > > > > QGIS 2.8 on Windows 7 64-bit.) I use GDAL from the command line,
> > > > > not from QGIS.
> > > > >
> > > > >
> > > > > Nicolas Cadieux M.Sc.
> > > > > Les Entreprises Archéotec inc.
> > > > > 8548, rue Saint-Denis Montréal H2P 2H2
> > > > > Telephone: 514.381.5112 Fax: 514.381.4995
> > > > > www.archeotec.ca
> > > > > _______________________________________________
> > > > > gdal-dev mailing list
> > > > > gdal-dev at lists.osgeo.org
> > > > > http://lists.osgeo.org/mailman/listinfo/gdal-dev
--
Spatialys - Geospatial professional services
http://www.spatialys.com