[Liblas-devel] Cached Reader bottleneck

Gary Huber gary at garyhuberart.com
Wed Oct 20 18:20:59 EDT 2010


Howard,

I've run into a problem trying to read a billion-point LAS file to test 
the index. It looks like I run out of memory in 
CachedReaderImpl::m_mask while it is adding vector members one at a 
time, trying to reach the number of points in the file. It's in the 
loop on line 88 of cachedreader.cpp.
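
The pattern at issue looks roughly like this (a simplified sketch; the 
actual member and count names in cachedreader.cpp differ):

    #include <cstdint>
    #include <vector>

    std::vector<std::uint8_t> m_mask;

    void build_mask(std::uint64_t num_points)
    {
        for (std::uint64_t i = 0; i < num_points; ++i)
            m_mask.push_back(0); // reallocates and copies whenever capacity fills
    }

Every time capacity runs out, the vector allocates a larger buffer and 
copies the old contents across, so the old and new buffers coexist for 
a moment.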

Is there a way to override point caching, or will I just hit a similar 
wall somewhere else if I do? I wonder if it wouldn't be faster and take 
less memory to resize that array once and then set the values in a loop. 
Repeatedly reallocating and copying that array as it grows is not a good 
way to go: I'm stalling after adding 689 million array elements, it 
takes a long time to hit that error, and at each reallocation the 
process is probably holding twice the memory it actually needs.
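
The one-shot version I tried is along these lines (again a sketch with 
assumed names; std::vector::resize does a single allocation and 
zero-fills the new elements):

    #include <cstdint>
    #include <vector>

    std::vector<std::uint8_t> m_mask;

    void build_mask(std::uint64_t num_points)
    {
        // One allocation up front instead of incremental growth:
        m_mask.resize(num_points, 0);
        // m_mask.assign(num_points, 0); // equivalent if the vector may already hold data
    }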

I made the change here and, as I expected, it's much faster and I no 
longer hit my memory limit. This looks like a way to speed up the 
reading of any large LAS file; I can check it in if you want to look at 
it.

-Gary
