[postgis-users] design comments

Bruce Rindahl rindahl at lrcwe.com
Wed Feb 11 15:36:28 PST 2009


Since you are going to write some processing code anyway, have you 
considered taking your results from Google Maps and creating a command 
script to something like GRASS and its raster commands?
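
For instance (untested, and the map names and coordinates below are 
placeholders), a small Python driver could shell out to the GRASS 
modules:

    import subprocess

    # set the region to match the raster, then sample the category
    # rasters at a point of interest
    subprocess.check_call(['g.region', 'rast=landcover'])
    subprocess.check_call(['r.what', 'input=landcover,soils',
                           'east_north=514300,4422000'])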

Bruce

Paul Ramsey wrote:
> Pivoting a raster coverage into a table of points is an unbelievably
> bad idea. Don't do it.  To the extent that the wktraster project
> starts to output some raster *processing* tools that sit on top of the
> storage abstraction, it will become useful for you.  In the meantime,
> there is no reason you should have to read several files into memory
> to get raster data out, as long as your files are well organized. You
> should be able to read directly out of, for example, uncompressed TIFF
> files, without loading any extra data into memory. You can even
> compress them, if you use an internal tiling scheme, to get both fast
> access and smaller size.
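>
> For instance, something along these lines (untested; assumes the GDAL
> Python bindings, and the file name is a placeholder) reads just the
> window you need:
>
>     from osgeo import gdal
>
>     ds = gdal.Open('landcover.tif')   # a tiled, uncompressed TIFF
>     band = ds.GetRasterBand(1)
>     # read only a 256x256 window; GDAL fetches just the tiles that
>     # overlap it rather than the whole file
>     data = band.ReadAsArray(1024, 2048, 256, 256)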
>
> P.
>
> On Wed, Feb 11, 2009 at 2:17 PM, Theller, Lawrence O.
> <theller at purdue.edu> wrote:
>   
>> Hi from an excited new user; the raster discussion really piques my
>> interest. I would welcome some design comments.
>>
>> We now have an online model that reads a large collection (1 TB) of raster
>> files, queries an Oracle DB (tabular, not spatial), does some manipulations,
>> and presents results with MapServer. The user interface starts and stops
>> with MapServer, displaying five states' worth of GIS files that I manage;
>> I'd like to get out of that business by moving to Google Maps.
>>
>> I have designed a grand replacement strategy from reading web pages, and I
>> now realize I need some feedback from experienced users. My hope is to
>> convert the raster files into xyz point data stored in PostgreSQL --
>> stored in large watershed subsets of a state as a table of points, indexed
>> by small watersheds among other things, with 5 to 10 attributes per point.
>> A typical (8-digit watershed) table would then be maybe 52 million points
>> with 15 attributes, and a state would have dozens of tables this size. We
>> would present the user with a Google Maps interface; they draw a box, which
>> is passed to us; we run a spatial query to find which watershed they are
>> in, spatially query the appropriate table of points with attributes, and
>> manipulate the results.
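>>
>> Roughly, I picture the two-step lookup going something like this (a
>> sketch only; connection details, table names, and column names are
>> invented):
>>
>>     import psycopg2
>>
>>     box_wkt = 'POLYGON((-87 40,-86 40,-86 41,-87 41,-87 40))'
>>     conn = psycopg2.connect('dbname=lthia')   # placeholder
>>     cur = conn.cursor()
>>     # 1. which watershed table covers the user's box?
>>     cur.execute("SELECT huc8 FROM watersheds "
>>                 "WHERE ST_Intersects(geom, ST_GeomFromText(%s, 4326))",
>>                 (box_wkt,))
>>     huc8 = cur.fetchone()[0]
>>     # 2. pull the points and their attributes inside the box
>>     cur.execute("SELECT soil, landcover, ST_AsKML(geom) "
>>                 "FROM points_" + huc8 +
>>                 " WHERE ST_Intersects(geom, ST_GeomFromText(%s, 4326))",
>>                 (box_wkt,))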
>>
>> I hope this is faster than the current method, which is to find the
>> appropriate 5 physical raster files (say 50 megabytes each), read them
>> into memory, query out the appropriate subarea, and manipulate it. The
>> results currently include both an ASCII raster and an xy generate file,
>> and then a shapefile. This output process I would replace with just KML
>> output (only points with changed attributes get output).
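>>
>> (For the KML step I imagine just templating placemarks; a minimal
>> sketch, with made-up point tuples of (x, y, description):)
>>
>>     def write_kml(points, path):
>>         # write the changed points out as KML placemarks
>>         f = open(path, 'w')
>>         f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
>>         f.write('<kml xmlns="http://www.opengis.net/kml/2.2">'
>>                 '<Document>\n')
>>         for x, y, desc in points:
>>             f.write('<Placemark><description>%s</description>'
>>                     '<Point><coordinates>%f,%f</coordinates></Point>'
>>                     '</Placemark>\n' % (desc, x, y))
>>         f.write('</Document></kml>\n')
>>         f.close()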
>>
>>  So the real question is: will the database efficiently handle a spatial
>> query into 52 million well-indexed points, or should I stick with reading
>> raster layer upon layer? By the way, these are categorical rasters, not
>> image rasters; for example, soil type and landcover type with derivatives.
>> They are not .tifs at this point, either; we use a homebrew binary format
>> we make from ESRI rasters. The query will generally be a (KML/GML) polygon.
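>>
>> I assume the indexing would be the key; something like this (table and
>> column names are placeholders), so the polygon query can skip most of
>> the 52 million rows:
>>
>>     import psycopg2
>>
>>     conn = psycopg2.connect('dbname=lthia')   # placeholder
>>     cur = conn.cursor()
>>     cur.execute('CREATE INDEX points_07120001_gix '
>>                 'ON points_07120001 USING GIST (geom)')
>>     conn.commit()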
>>
>> Any thoughts are welcome.
>>
>>
>>
>> Regards,
>>
>> Larry Theller
>>
>> Ag and Bio Engineering
>>
>> Purdue University
>>
>> LTHIA model: http://cobweb.ecn.purdue.edu/~watergen/owls/htmls/reg5.htm
>> (select a state)
>>