[postgis-devel] [raster] Memory management and IO concerns

Pierre Racine Pierre.Racine at sbf.ulaval.ca
Wed Jun 29 07:23:44 PDT 2011


> If this is correct, the size limitation on in-database rasters is not the full 32Tb (or
> whatever) of the Postgresql backend, but the amount of memory (and swap)
> present on the server, minus whatever is being used by other processes
> (including other raster operations) on the same server.  Truly large images will
> have to be out-database rasters which GDAL understands how to
> "chunk"/"block"/"tile". Unless I'm missing something.

What you're missing is that big rasters should be tiled. Period. The relevant limits are 1 GB per tile (PostgreSQL's maximum field size) and 32 TB per table (per image, if you store one image per table).
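For example, something along these lines (just a sketch; the option names and the file/table names are from memory and only placeholders, so check them against your raster2pgsql.py --help):

  python raster2pgsql.py -r landsat_band1.tif -t landsat_band1 -s 4326 -k 256x256 -I -o landsat_band1.sql
  psql -d mydb -f landsat_band1.sql

With tiling at load time, each 256x256 tile becomes its own row, so no single value gets anywhere near the 1 GB field limit and the server never has to hold the whole image in one piece.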

> Actually, I've already hit something else which may be a more stringent limit:
> when trying to load a single band of a Landsat image (not tiled, loaded as a
> single raster), I got an out of memory error. I assumed this was in the parser, as
> the SQL file generated by raster2pgsql.py used a single "INSERT" statement to
> plop in the entire 64Mb binary image. I hadn't thought of this before, but large
> rasters may need to be loaded via many "ST_SetRasterDataBlock" calls rather
> than one INSERT.

This looks like a real issue (the limit should be 1 GB, and a 64 MB raster is well under that). You should file a ticket for that.
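In the meantime, a quick way to see how big the stored values really are (assuming the usual rid/rast columns produced by the loader):

  SELECT rid, pg_column_size(rast) AS bytes
  FROM landsat_band1
  ORDER BY bytes DESC
  LIMIT 10;

pg_column_size() reports the (possibly compressed) on-disk size of each raster value, so it shows at a glance whether any tile is close to the 1 GB limit.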

Pierre


