<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Hi there,</p>
<p>I have a question about using Entwine and was hoping somebody
could help. The use case is merging point clouds that have been
generated on different machines; each of these point clouds is
part of the same final dataset. Entwine works great with the
current workflow:</p>
<p>entwine scan -i a.las b.las ... -o output/</p>
<p>for f in a b ...; do<br>
&nbsp;&nbsp;entwine build -i output/scan.json -o output/ --run 1<br>
done</p>
<p>The "--run 1" flag is there to keep memory usage low. On small
datasets the runtime is excellent, but as the number of models grows
the runtime starts to increase quite a bit, so I'm looking for ways
to speed up generation of the EPT index. In particular, since I
generate the various LAS files on different machines, I was wondering
whether each machine could contribute its part of the index from its
own LAS files (with the index mapped to a network location), or
whether there is a supported workflow in which each machine builds
its own EPT index and all the EPT indexes are then merged into one.
I don't think this is possible, but I wanted to check.</p>
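<p>For concreteness, the kind of distributed workflow I have in mind
would look roughly like the sketch below. The flags and command names
here ("--subset", "merge") and the shared output path are only my
guesses at what such an interface might look like, not something I
have verified against the Entwine CLI:</p>
<pre># Hypothetical sketch: each of four machines builds one part of the
# index into a shared location, then the pieces are combined.

# On machine 1 (machines 2-4 would use "--subset 2 4", etc.):
entwine build -i output/scan.json -o /mnt/shared/ept --subset 1 4

# Once all subsets have finished, on any one machine:
entwine merge /mnt/shared/ept</pre>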
<p>Thank you for any help,</p>
<p>-Piero<br>
</p>
</body>
</html>