[gdal-dev] Filesize too large when writing compressed float's to a Geotiff from Python
Rutger
kassies at gmail.com
Thu Jun 4 00:29:24 PDT 2015
Even,
Thanks for the suggestions; the first two work well. I'll have a look at
ds.WriteRaster, which seems an interesting approach since it also avoids
unnecessary looping over the bands.
Writing per block is what I usually do; maybe that's why I never noticed it
before. I ran into it now while fetching and writing a dataset from OPeNDAP,
whereas I usually read blocks from GTiffs.
It makes sense that the order in which the data is written/stored affects
the compression performance, but I don't understand why it would differ
between integers and floats.
Regards,
Rutger
Even Rouault wrote:
> On Wednesday, June 3, 2015 at 15:21:07, Rutger wrote:
>
> Rutger,
>
> the issue is that you write data band after band, whereas by default the
> GTiff driver creates pixel-interleaved datasets. So some blocks in the
> GTiff might be reread and rewritten several times as the data from the
> various bands comes in.
>
> Several fixes/workarounds:
> - if you have sufficient RAM to hold another copy of the uncompressed
> dataset, increase GDAL_CACHEMAX
> - or add options = [ 'INTERLEAVE=BAND' ] in the Create() call to create
> a band-interleaved dataset
> - a more involved fix: since there's no dataset-level WriteArray() in
> GDAL Python for now, you would have to iterate block by block and, for
> each block, write the corresponding region of each band
> - you could also use Dataset.WriteRaster() if you can get a buffer from
> the numpy array
>
> Even