[libpc] RE: quick perf look

Howard Butler hobu.inc at gmail.com
Wed Mar 2 11:24:17 EST 2011


Not surprised.  I assume you were doing these tests in release mode as well?  If not, pc2pc would probably win by more.


On Mar 1, 2011, at 5:59 PM, Michael P. Gerlek wrote:

> I couldn't let that one rest, so I quickly hacked on the "native"
> LasReader/Writer classes just enough to get MtStHelens to seem to work:
> pc2pc clocked in at 1.8 secs, against las2las at 2.5 secs.
> 
> *SWISH*!
> 
> 
> [This is not a validated result, as the native Las classes are half-baked,
> but they do basically seem to be functional enough to get a ballpark point
> format 0 result.  Don't try this at home.]
> 
> -mpg
> 
> 
> 
>> -----Original Message-----
>> From: Michael P. Gerlek [mailto:mpg at flaxen.com]
>> Sent: Tuesday, March 01, 2011 3:29 PM
>> To: libpc at lists.osgeo.org
>> Subject: quick perf look
>> 
>> I ran "pc2pc -i MtStHelens.las -o x.las" against "las2las -i MtStHelens.las -o
>> x.las" this afternoon.  I was getting about 3.0 secs for pc2pc and 2.5 secs for
>> las2las.  [The writer still isn't bit for bit right in the headers, but it's close
>> enough for this exercise.]
>> 
>> [Yes, this is using the "LiblasReader/Writer" path, so we're really just looking
>> at overhead here.  If we used the native LasReader/Writer, I expect us to be
>> in much better shape.]
>> 
>> 3.0-vs-2.5 isn't as good as I'd like, but it's not unreasonable.  Looking at the
>> profiles for pc2pc, functions in the libpc:: namespace seem to account for
>> ~15-20% of the runtime, but I'm not really confident of the amount of sampling
>> for such a short run.
>> 
>> A big part of that, however, does seem to be the code that moves the
>> individual fields between the liblas::Point and the libpc::PointData -- on both
>> the read and write sides, the code to move the fields is a long, branchless
>> sequence of calls to Point.SetFoo() and Point.GetFoo().  If I could use Point's
>> direct SetData/GetData calls to map into the PointData buffer, a lot of that
>> might go away.
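
[For illustration only: a minimal C++ sketch of the two copy strategies described
above, using made-up stand-in types (FieldPoint, RawBuffer) rather than the real
liblas::Point / libpc::PointData APIs, which differ in detail.]

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Stand-in for a per-field accessor API like liblas::Point (hypothetical).
    struct FieldPoint {
        double   x, y, z;
        uint16_t intensity;
        double   GetX() const      { return x; }
        void     SetX(double v)    { x = v; }
        // ... one Get/Set pair per field ...
    };

    // Stand-in for a packed point buffer like libpc::PointData (hypothetical).
    struct RawBuffer {
        std::vector<uint8_t> bytes;
        size_t pointSize = 0;
        uint8_t* pointAt(size_t i) { return &bytes[i * pointSize]; }
    };

    // Strategy 1: the current glue -- a long, branchless run of per-field
    // getter/setter calls for every point.
    void copyFieldwise(const FieldPoint& src, FieldPoint& dst) {
        dst.SetX(src.GetX());
        // dst.SetY(src.GetY()); dst.SetIntensity(src.GetIntensity()); ...
    }

    // Strategy 2: the SetData/GetData idea -- map the point's packed bytes
    // straight into the buffer, skipping the per-field hops.
    void copyRaw(const uint8_t* srcRecord, RawBuffer& dst, size_t i) {
        std::memcpy(dst.pointAt(i), srcRecord, dst.pointSize);
    }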
>> 
>> So anyway, I'm not too worried at this point.
>> 
>> -mpg
> 
> 


