[pdal] pdal pipeline giving garbage in postgres

Tom van Tilburg tom.van.tilburg at gmail.com
Mon Sep 28 08:29:33 PDT 2015


It only now occurs to me that you can pass command-line parameters to the
pipeline as well....
That makes the batching a lot easier; I couldn't ask for more :)
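For reference, the batching I have in mind is roughly the shell loop below: re-run the same pipeline.xml once per file, overriding the reader's input on the command line. The directory, connection string, and option names are placeholders for my actual setup, and it is written as a dry run (drop the leading `echo` to really execute):

```shell
# Dry-run sketch of batch-loading a directory of LAS files through one
# pipeline.xml by overriding the reader's filename on each invocation.
# Directory and connection string are placeholders.
dir=/tmp/mylasfiles
for f in "$dir"/*.las; do
    [ -e "$f" ] || continue    # no matches: the glob stayed literal, skip
    echo pdal pipeline pipeline.xml \
        --readers.las.filename="$f" \
        --writers.pgpointcloud.connection="dbname=pointclouds"
done
```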

What would be the benefit of creating a tindex first for the point clouds?
It seems that it would take a considerable amount of extra time to go
through the tiling process. For rasters I can see the use, but I thought the
chipper already deals with a kind of irregular tiling which can be used by
a PostGIS spatial index.
Or is the tile index just a list of LAS files in an SQLite database? Personally
I would stick to batching in that case; it gives me more control over the
process.
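For what it's worth, the chipper-based loading I mean looks roughly like the pipeline below (connection string, table name, and chip capacity are placeholders, not my actual values):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Pipeline version="1.0">
  <Writer type="writers.pgpointcloud">
    <Option name="connection">dbname=pointclouds</Option>
    <Option name="table">patches</Option>
    <Filter type="filters.chipper">
      <!-- split the cloud into patches of at most ~400 points -->
      <Option name="capacity">400</Option>
      <Reader type="readers.las">
        <Option name="filename">input.las</Option>
      </Reader>
    </Filter>
  </Writer>
</Pipeline>
```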

Best,
 Tom

On Mon, 28 Sep 2015 at 16:27 Howard Butler <howard at hobu.co> wrote:

>
> > On Sep 28, 2015, at 9:19 AM, Tom van Tilburg <tom.van.tilburg at gmail.com>
> wrote:
> >
> > 1) I'm not sure what you mean with "seed" file. Do you mean create it in
> a postgres table? Or more like a config file that is read by pdal pipeline?
>
> I just mean the first file loaded that creates the table and sets the
> schema for the point cloud entry.
>
>
> > It would be nice if the pipeline file/lasreader could deal with a
> directory instead of a file.
> > So in the pipeline.xml file it would look like:
> > <Reader type="readers.las">
> >                     <Option name="directory">/tmp/mylasfiles</Option>
> > All files have the same schema, I don't have to worry about that.
>
> We are working to copy GDAL's workflow where the user would create a
> GDAL-like tile index and then provide that tile index to PDAL pipeline
> operations. The workflow might be something like:
>
> > $ pdal tindex files/*.las las-files.sqlite
> > $ pdal pipeline pipeline.xml --readers.tindex.filename=las-files.sqlite
> --writers.pgpointcloud.connection="pg-connection-string-details"
>
>
>
> > 2) Exactly
> >
> > I'm not unhappy with the way I'm doing it now by the way, but people
> with less skills in batch processing could be saved a lot of worries if it
> could be all handled from the pdal pipeline command.
>
> Insight into how people actually want to workflow the data will help us.
> The developer team has a lot of experience with workflowing data with
> Oracle, but that's not necessarily the way you'd want to do it in pg.
>
> Howard
>
>
>

