[pdal] Memory management with PDAL/Docker
Matt Beckley
beckley at unavco.org
Mon Apr 1 06:26:01 PDT 2019
Is there a way to optimize Docker or PDAL to run larger queries
on EPT files? Or is there some documentation on memory management, or
settings I should be using to facilitate larger queries? I have been
trying to run the JSON pipeline below, and it only succeeds if I set my
Docker memory limit above 6GB; anything less fails. When it does run, the
resulting LAZ file is only about 150 million points - so, not huge by
lidar standards. I've tried increasing the "threads" option to the EPT
reader, but that didn't seem to help.
{
  "pipeline": [
    {
      "type": "readers.ept",
      "filename": "https://s3-us-west-2.amazonaws.com/usgs-lidar-public/USGS_LPC_HI_Oahu_2012_LAS_2015",
      "bounds": "([-17606581.532235783, -17598784.955350697], [2441398.6834285296, 2448889.512200477])"
    },
    {
      "type": "writers.las",
      "filename": "HI_LargeOutput_Local.laz"
    }
  ]
}
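One workaround I've been considering (a sketch, not a confirmed fix): since the
EPT reader's memory use scales with the size of the queried bounds, the large
box could be split into an N x N grid of sub-tiles and each tile run as its own
pipeline, keeping peak memory per run small. The `split_bounds` /
`tile_pipeline` helpers and the tile count below are illustrative assumptions;
the URL and bounds are the ones from the pipeline above.

```python
import json

def split_bounds(xmin, xmax, ymin, ymax, n=4):
    """Yield (xmin, xmax, ymin, ymax) for each cell of an n x n grid."""
    dx = (xmax - xmin) / n
    dy = (ymax - ymin) / n
    for i in range(n):
        for j in range(n):
            yield (xmin + i * dx, xmin + (i + 1) * dx,
                   ymin + j * dy, ymin + (j + 1) * dy)

def tile_pipeline(url, bounds, out_name):
    """Build a PDAL pipeline dict reading one sub-tile from an EPT source."""
    x0, x1, y0, y1 = bounds
    return {"pipeline": [
        {"type": "readers.ept",
         "filename": url,
         "bounds": f"([{x0},{x1}],[{y0},{y1}])"},
        {"type": "writers.las", "filename": out_name},
    ]}

if __name__ == "__main__":
    url = ("https://s3-us-west-2.amazonaws.com/usgs-lidar-public/"
           "USGS_LPC_HI_Oahu_2012_LAS_2015")
    tiles = list(split_bounds(-17606581.532235783, -17598784.955350697,
                              2441398.6834285296, 2448889.512200477, n=4))
    # Write one pipeline file per tile; each can be run separately with
    # `pdal pipeline tile_<k>.json` so only one tile is in memory at a time.
    for k, b in enumerate(tiles):
        print(json.dumps(tile_pipeline(url, b, f"HI_tile_{k}.laz")))
```

The resulting tiles could be merged afterwards (e.g. with `pdal merge`), at the
cost of extra runs.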
Any ideas would be appreciated.
Thanks!
matt.
---------------------------
Matthew Beckley
Data Engineer
UNAVCO/OpenTopography
beckley at unavco.org
303-381-7487