[pdal] pdal pipeline giving garbage in postgres

Howard Butler howard at hobu.co
Mon Sep 28 07:27:28 PDT 2015


> On Sep 28, 2015, at 9:19 AM, Tom van Tilburg <tom.van.tilburg at gmail.com> wrote:
> 
> 1) I'm not sure what you mean by "seed" file. Do you mean creating it in a Postgres table? Or more like a config file that is read by pdal pipeline?

I just mean the first file loaded that creates the table and sets the schema for the point cloud entry.
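For example, a minimal seed pipeline might look like the following sketch (the connection string, table name, and file path are placeholders for your own values):

    <?xml version="1.0" encoding="utf-8"?>
    <Pipeline version="1.0">
        <Writer type="writers.pgpointcloud">
            <Option name="connection">host='localhost' dbname='pointclouds' user='me'</Option>
            <Option name="table">patches</Option>
            <Reader type="readers.las">
                <Option name="filename">/tmp/mylasfiles/seed.las</Option>
            </Reader>
        </Writer>
    </Pipeline>

Running that once creates the table and establishes the schema; subsequent loads then append to it.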


> It would be nice if the pipeline file/lasreader could deal with a directory instead of a file.
> So in the pipeline.xml file it would look like:
> <Reader type="readers.las">
>     <Option name="directory">/tmp/mylasfiles</Option>
> </Reader>
> All files have the same schema, I don't have to worry about that.

We are working to copy GDAL's workflow, where the user first creates a GDAL-like tile index and then provides that index to PDAL pipeline operations. The workflow might look something like:

> $ pdal tindex files/*.las las-files.sqlite
> $ pdal pipeline pipeline.xml --readers.tindex.filename=las-files.sqlite --writers.pgpointcloud.connection="pg-connection-string-details"
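In that scenario, pipeline.xml would point at the (still prospective) tile-index reader instead of an individual file. Roughly, with the stage and option names taken from the command line above and everything else a placeholder:

    <?xml version="1.0" encoding="utf-8"?>
    <Pipeline version="1.0">
        <Writer type="writers.pgpointcloud">
            <Option name="connection">host='localhost' dbname='pointclouds' user='me'</Option>
            <Option name="table">patches</Option>
            <Reader type="readers.tindex">
                <Option name="filename">las-files.sqlite</Option>
            </Reader>
        </Writer>
    </Pipeline>

The --readers.tindex.filename and --writers.pgpointcloud.connection flags would then override those values per run.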



> 2) Exactly
> 
> I'm not unhappy with the way I'm doing it now, by the way, but people with fewer batch-processing skills could be spared a lot of worry if it could all be handled from the pdal pipeline command.

Insight into how people actually want to move their data through a workflow will help us. The development team has a lot of experience doing this with Oracle, but that's not necessarily the way you'd want to do it in Postgres.
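In the meantime, the batch route Tom describes amounts to a small shell loop around a single pipeline, overriding the reader's filename per file with the same per-stage flags shown above (paths here are placeholders):

    # the seed load creates the table and schema; the loop appends the rest
    pdal pipeline pipeline.xml --readers.las.filename=/tmp/mylasfiles/seed.las
    for f in /tmp/mylasfiles/*.las; do
        [ "$f" = "/tmp/mylasfiles/seed.las" ] && continue   # already loaded
        pdal pipeline pipeline.xml --readers.las.filename="$f"
    done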

Howard



