[pgpointcloud] RLE and SIGBITS heuristics

Rémi Cura remi.cura at gmail.com
Wed Apr 15 09:53:19 PDT 2015


The ratio is the size in PostgreSQL (as reported by pgAdmin) divided by the size of the original files on disk.
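
For reference, a minimal sketch of how the PG side of that ratio could be
computed ('patches' is a hypothetical table name here; note that
pg_total_relation_size also counts TOAST and indexes):

  -- Total on-disk size of the patch table in PostgreSQL.
  SELECT pg_size_pretty(pg_total_relation_size('patches'));

  -- ratio = PG size / size of the original .las files on disk;
  -- the file size in bytes below is a placeholder, not a measurement.
  SELECT pg_total_relation_size('patches') / 130.0e9 AS ratio;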

I tested three ways to transfer: writing from Postgres through a COPY (ASCII
PLY), reading from a client with Python (uncompressed patches), and streaming
to a browser through JavaScript (ASCII X Y Z time intensity).
The first two are parallelized 7 ways.
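
For the COPY path, the query looks roughly like the sketch below ('patches'
and its pcpatch column 'pa' are hypothetical names; PC_Explode and PC_Get are
stock pgpointcloud functions, and dimension names depend on your schema; the
PLY header would be written separately, this only streams the point records):

  -- Explode patches to points, read dimensions, stream as ASCII.
  COPY (
    SELECT PC_Get(pt, 'x') AS x, PC_Get(pt, 'y') AS y, PC_Get(pt, 'z') AS z
    FROM (SELECT PC_Explode(pa) AS pt FROM patches) AS pts
  ) TO STDOUT (FORMAT text);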

I just ran pc_compress to estimate compression and decompression time
(for the moment only on 500 million LAS points).
I get roughly 5 million pts/sec for either compression or decompression
(1 core)
(tested by timing uncompress, then compress(uncompress), then
uncompress(compress(uncompress))).
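
In case someone wants to reproduce the timing, the three queries look roughly
like this ('patches'/'pa' are again hypothetical names; PC_Uncompress and
PC_Compress are the stock pgpointcloud functions, and the 'rle,sigbits,zlib'
config is just an example for a three-dimension schema, one entry per
dimension):

  \timing on

  -- (a) one decompression pass
  SELECT count(PC_Uncompress(pa)) FROM patches;

  -- (b) adds one compression pass on top of (a)
  SELECT count(PC_Compress(PC_Uncompress(pa), 'dimensional',
               'rle,sigbits,zlib'))
  FROM patches;

  -- (c) adds a second decompression pass on top of (b)
  SELECT count(PC_Uncompress(PC_Compress(PC_Uncompress(pa), 'dimensional',
               'rle,sigbits,zlib')))
  FROM patches;

Subtracting (a) from (b) isolates the compression cost, and (b) from (c) the
extra decompression cost.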

Overall I would say pgpointcloud compression is pretty fast (I'm hoping
these figures are not due to some caching).

2015-04-15 18:30 GMT+02:00 Sandro Santilli <strk at keybit.net>:

> On Wed, Apr 15, 2015 at 06:09:52PM +0200, Rémi Cura wrote:
> > Here are some facts about pgpointcloud compression
> >
> > 5.2 billion LAS points (the usual suspects):
> > ratio 4.36 (compared to .las files)
>
> How do you compute that ratio? Is that LAS/PG size?
> How do you determine PG size?
>
> > The system can write from 1 million down to 0.2 million pts/sec to a
> > client, depending on the point type
>
> How do you transfer those points to the client, and in which format?
>
> > I'm in the process of measuring compression/decompression time on those
> > data. I tried to get the best ratio (thanks to recent work of @strk on
> > uint64_t).
>
> Did you try pc_compress to manually tweak per-dimension compression yet?
>
> --strk;
>