From what I've been able to find regarding toasted columns, they're stored in the database as a separate table, outside the referencing row, and each value is limited to 1GB. The 1GB limit may be significant, since some raster formats can be larger. Not a show-stopper, but it would require repackaging very large jpeg2000s, for example.
<br><br>Given that raster archives can be many TB in size, what are the ramifications for backup and restore of a database containing toasted rasters?<br><br>Since toasted values are in a separate table, can I back up and restore the primary table rows sans toasted columns, and manage the toast in slices? What happens if my raster table gets, um, toasted? Hopefully, a full multi-TB restore/reorg would not be necessary before I'm running again.
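<br><br>For what it's worth, here's a rough sketch of the "slices" idea as I picture it: a seekable read-only wrapper that pulls a large value down one slice at a time. The fetch_slice hook and the slicing semantics are my assumptions, not a working database client; below it's backed by an in-memory blob standing in for the real table.<br><br>

```python
import io

class SlicedBlobReader(io.RawIOBase):
    """Seekable, read-only view over a large value fetched in slices.

    fetch_slice(offset, length) stands in for a database round trip
    (e.g. a substring-style partial read of a toasted column); here it
    is just a callable, so the database details stay out of the sketch.
    """
    def __init__(self, fetch_slice, size, slice_bytes=1 << 20):
        self.fetch_slice = fetch_slice
        self.size = size
        self.slice_bytes = slice_bytes  # max bytes per round trip
        self.pos = 0

    def seekable(self):
        return True

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.pos = offset
        elif whence == io.SEEK_CUR:
            self.pos += offset
        elif whence == io.SEEK_END:
            self.pos = self.size + offset
        return self.pos

    def read(self, n=-1):
        # Clamp the request to what's left, then satisfy it slice by slice.
        if n < 0 or self.pos + n > self.size:
            n = max(self.size - self.pos, 0)
        out = []
        while n > 0:
            chunk = self.fetch_slice(self.pos, min(n, self.slice_bytes))
            if not chunk:
                break
            out.append(chunk)
            self.pos += len(chunk)
            n -= len(chunk)
        return b"".join(out)

# Stand-in backend: an in-memory blob instead of a real raster column.
blob = bytes(range(256)) * 4096  # ~1 MB fake raster
reader = SlicedBlobReader(lambda o, l: blob[o:o + l], len(blob), slice_bytes=4096)
reader.seek(-16, io.SEEK_END)
tail = reader.read()  # last 16 bytes, fetched without touching the rest
```

The point being: if partial reads like this are cheap on the database side, restore and repair could in principle work a slice at a time instead of round-tripping whole multi-GB values.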
<br><br>John Novak<br><br> <br><br><div><span class="gmail_quote">On 10/24/06, <b class="gmail_sendername">Frank Warmerdam</b> <<a href="mailto:warmerdam@pobox.com">warmerdam@pobox.com</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>Steve,<br><br>Excellent summary of the usual sequence of these conversations.<br><br>Marshall, Steve wrote:<br>> The code that really seems well-adapted to this problem is the Kakadu<br>> package written by David Taubman, one of the originators of JPEG 2000.
<br>> Unfortunately, this library is not open source. Kakadu has been<br>> included as an optional component of other open source projects, like<br>> GDAL. However, I think Kakadu's license would come into conflict with
<br>> the GPL used by PostGIS. I'm not a lawyer, but I think this<br>> conflict could be overcome if PostGIS could be released under dual<br>> licenses, such as GPL or LGPL. Whether or not it's desirable to include
<br>> Kakadu in a PostGIS extension is another question.<br><br>As mentioned by others, I think the licensing conflicts may be more<br>fundamental than you suggest.<br><br>By the way, I've implemented "virtual" jpeg2000 access in GDAL for
<br>both the Kakadu library and the ECW library. The ECW library supports<br>both jpeg2000 and ecw, by the way. I suspect it is also possible for the<br>MrSID library, but I haven't done it myself.<br><br>> If anyone has knowledge of other JPEG 2000 codecs that have these
<br>> low-level access capabilities, I'd be very happy to hear about them.<br>> Also, if I've mischaracterized any of the codecs, I'd love to be corrected.<br>><br>> In any event, I'm curious to see if there is significant interest in an
<br>> implementation of JPEG2000 raster data type within PostGIS. If so, I<br>> think I could dedicate a significant amount of my time over the next<br>> several months, as well as perhaps some funding from my employer,
<br>> depending upon whether some of the issues I raised above can be resolved.<br><br>I'm not sure I entirely buy into the idea of using jpeg2000 as a special<br>native raster data type. There are a variety of limitations, and odd
<br>performance characteristics with jpeg2000 which would make me nervous.<br><br>I would suggest instead the same general approach of treating BLOBs<br>in the database as seekable objects from which components could be<br>
extracted, but instead of restricting things to jpeg2000, allow any<br>GDAL supported raster format to be in the database, and use GDAL to<br>access them in the database.<br><br>Actually, when I say "any" format, I really mean any format in GDAL
<br>that supports the virtualization interface where all IO is routed<br>through VSIFOpenL() and related functions. This is a lot of the<br>drivers, but by no means all. This would also allow use of formats<br>like tiled GeoTIFF with overviews, from which chunks or overviews
<br>can be efficiently extracted with much better understood performance<br>characteristics.<br><br>If you have the time to work on a prototype, then I'd say go for it.<br>If you are interested in using GDAL, then I'd be willing to provide
<br>advice and answers on GDAL issues.<br><br>PS. I'd encourage you to do some sort of performance testing to see<br>if there is a lot of overhead in doing random seeks in large<br>toasted blobs in postgres. I don't understand the mechanism well,
<br>but it seems like it could be an issue, and better to find out<br>sooner rather than later.<br><br>Best regards,<br>--<br>---------------------------------------+--------------------------------------<br>I set the clouds in motion - turn up | Frank Warmerdam,
<a href="mailto:warmerdam@pobox.com">warmerdam@pobox.com</a><br>light and sound - activate the windows | <a href="http://pobox.com/~warmerdam">http://pobox.com/~warmerdam</a><br>and watch the world go round - Rush | President OSGeo,
<a href="http://osgeo.org">http://osgeo.org</a><br><br>_______________________________________________<br>postgis-users mailing list<br><a href="mailto:postgis-users@postgis.refractions.net">postgis-users@postgis.refractions.net
</a><br><a href="http://postgis.refractions.net/mailman/listinfo/postgis-users">http://postgis.refractions.net/mailman/listinfo/postgis-users</a><br></blockquote></div><br>
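<br>Re Frank's PS about seek overhead in toasted blobs: I don't have database numbers yet, but a baseline along these lines could be adapted. This sketch times random slice reads against a local file as a stand-in for the blob; swapping the file reads for per-slice database queries (details assumed, not shown here) would give the actual comparison.<br><br>

```python
import os
import random
import tempfile
import time

# Stand-in for a large toasted blob: a temp file on disk. In the real
# benchmark, each (seek, read) pair would become one partial-read query.
SIZE = 8 * 1024 * 1024   # 8 MB "blob"
SLICE = 64 * 1024        # 64 KB per random read

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(SIZE))
    path = f.name

random.seed(42)  # reproducible access pattern
offsets = [random.randrange(0, SIZE - SLICE) for _ in range(200)]

start = time.perf_counter()
total = 0
with open(path, "rb") as f:
    for off in offsets:
        f.seek(off)
        total += len(f.read(SLICE))
elapsed = time.perf_counter() - start

os.unlink(path)
print(f"{len(offsets)} random {SLICE}-byte reads, {total} bytes in {elapsed:.3f}s")
```

If the database numbers come out badly worse than the local-file baseline, that would be the "find out sooner rather than later" result.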