[Liblas-devel] I/O performance? -- your help requested!
Michael P. Gerlek
mpg at flaxen.com
Fri Feb 4 11:39:08 EST 2011
Do you have 5-10 minutes to spare today?
Your libLAS team (well, me anyway) is wondering about I/O performance of the
liblas kit -- specifically, when doing binary reading and writing, is there
any fundamental performance difference between using C-style FILE* I/O and
C++-style stream I/O? And if streams are better, would boost's stream be
better still? If you google around a bit, you'll find lots of contradictory
(and sometimes overly passionate) statements about this topic. At the end
of the day, though, the consensus seems to be that:
(1) you need to be "smart" if you're using C++ I/O -- it is easy to shoot
yourself in the foot
(2) modern C++ streams are implemented on top of the native OS APIs
(3) under Visual Studio, FILE* operations and streams are both implemented
using the Win32 APIs, but streams take an additional lock (which some claim
is unnecessary)
and, most importantly,
(4) performance varies greatly with different I/O patterns, e.g. large
sequential block reads vs small random reads
Very fortunately, we already happen to have a rough, first-order I/O
performance test built into the laszip tree. If you have that tree built
(http://hg.liblas.org/zip), in Release mode, could you please send me the
results of running the "laszippertest" test app, as follows?
time ./laszippertest -n 1000000
time ./laszippertest -n 1000000
time ./laszippertest -n 1000000
time ./laszippertest -n 1000000 -s
time ./laszippertest -n 1000000 -s
time ./laszippertest -n 1000000 -s
The first three runs will encode and decode 1 million random points using
C-style FILE* I/O, and the second three will do the same with C++ streams.
This is not a perfect test, but it approximates the real I/O pattern that
libLAS uses.
Oh, and be sure to include the kind of platform (processor speed, compiler,
OS) you're running it on.
Thanks much!
-mpg