[GRASS-user] [GRASSLIST:1174] Working with very large data sets

David Finlayson david.p.finlayson at gmail.com
Tue Aug 8 01:36:25 EDT 2006


I am working with an interferometric sidescan sonar system that produces
about 2 GB of elevation and amplitude data per hour. Our raw data density
could support resolutions up to 0.1 m, but we currently can't handle the
data volume at that resolution, so we decimate down to 1 m via a variety of
filters. Still, even at 1 m resolution, our datasets run into the hundreds
of MB, and most current software just doesn't handle the data volumes well.
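For what it's worth, the kind of decimation described above can be sketched as a block-median filter: bin the scattered soundings into grid cells and keep one median depth per cell, much as GMT's blockmedian does. This is a minimal illustrative sketch; the function name and signature are my own, not part of any GRASS or GMT API.

```python
from collections import defaultdict
from math import floor
from statistics import median

def block_median(x, y, z, cell=1.0):
    """Decimate scattered (x, y, z) soundings to one median depth per
    grid cell of size `cell` metres (illustrative sketch only)."""
    cells = defaultdict(list)
    for xi, yi, zi in zip(x, y, z):
        # Assign each sounding to the cell containing it.
        cells[(floor(xi / cell), floor(yi / cell))].append(zi)
    # Emit one representative point per cell: cell centre + median depth.
    return [((cx + 0.5) * cell, (cy + 0.5) * cell, median(zs))
            for (cx, cy), zs in cells.items()]
```

The median (rather than mean) keeps sonar outliers from pulling the cell value around, which is why blockmedian is usually preferred over blockmean for bathymetry.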

Any thoughts on processing and working with these data volumes (LIDAR
folks)? I have struggled to provide a good product to our researchers using
both proprietary (Fledermaus, ArcGIS) and non-proprietary (GMT, GRASS, my
own scripts) post-processing software. Nothing is working very well. The
proprietary tools seem easier at first but become difficult to automate.
The non-proprietary tools are easy to automate but often can't handle the
data volumes without first downsampling the data (GMT does pretty well if
you stick to line-by-line processing, but that doesn't always work).

Just curious what workflows/software others are using. In particular, I'd
love to keep the whole process FOSS if possible. I don't trust black boxes.

Cheers,

-- 
David Finlayson

