[gdal-dev] warp memory limit default too low and does not have a config option
Etienne Tourigny
etourigny.dev at gmail.com
Tue Feb 7 22:56:54 EST 2012
On Tue, Feb 7, 2012 at 9:13 PM, Even Rouault
<even.rouault at mines-paris.org> wrote:
> On Monday, 06 February 2012 02:41:40, Etienne Tourigny wrote:
>> Hi all,
>>
>> The default memory limit for warp operations is 64 MB, which is far too
>> low on recent hardware and leads to I/O inefficiencies when warping
>> large images.
>>
>> This can be changed using the -wm option of gdalwarp or, when using the
>> API, by changing the value of GDALWarpOptions::dfWarpMemoryLimit.
>> More information here:
>> http://trac.osgeo.org/gdal/wiki/UserDocs/GdalWarp#WillincreasingRAMincreasethespeedofgdalwarp
>>
>> However, it would be better to do both of the following:
>>
>> 1) Change the default value of 64 MB to something higher, perhaps
>> depending on the machine's configuration (say 1/4 of RAM); this
>> possibility is mentioned in gdalwarpoperation.cpp.
>
> Finding good default values is quite difficult and somewhat arbitrary.
> There's however something to be aware of: in the trac entry you quote
> above, there's a mention of a use case where more RAM for
> dfWarpMemoryLimit will actually decrease performance. So if you intend
> to increase it substantially (which would be the case for 1/4 of RAM on
> modern configurations), that might make things significantly worse for
> that use case. That's definitely something to at least benchmark, and
> likely fix in the warping code. The use case, if I remember well, is
> "gdalwarp smallimage bigimage" (the real use case generally being
> "gdalwarp smallimage1 smallimage2 ... smallimageN bigmosaicofallsmallimages").
> If you increase dfWarpMemoryLimit, we will currently read an area of
> bigimage much larger than needed to include the extent of smallimage,
> which causes excessive read operations before warping, and excessive
> writes afterwards.
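For anyone following along on the API side: dfWarpMemoryLimit is a plain
field on GDALWarpOptions, set through the usual warp-options workflow. A
minimal sketch of that (hypothetical filenames, a single-band mapping,
value in bytes; this is roughly the standard warp API pattern, not code
from GDAL itself):

#include "cpl_conv.h"
#include "gdal.h"
#include "gdalwarper.h"

int main()
{
    GDALAllRegister();

    /* Hypothetical filenames, for illustration only. */
    GDALDatasetH hSrcDS = GDALOpen("src.tif", GA_ReadOnly);
    GDALDatasetH hDstDS = GDALOpen("dst.tif", GA_Update);
    if (hSrcDS == NULL || hDstDS == NULL)
        return 1;

    GDALWarpOptions *psWO = GDALCreateWarpOptions();
    psWO->hSrcDS = hSrcDS;
    psWO->hDstDS = hDstDS;
    psWO->nBandCount = 1;
    psWO->panSrcBands = (int *) CPLMalloc(sizeof(int));
    psWO->panSrcBands[0] = 1;
    psWO->panDstBands = (int *) CPLMalloc(sizeof(int));
    psWO->panDstBands[0] = 1;

    /* The API-side equivalent of gdalwarp -wm: raise the working
       buffer limit from the 64 MB default to 512 MB (in bytes). */
    psWO->dfWarpMemoryLimit = 512.0 * 1024 * 1024;

    psWO->pTransformerArg = GDALCreateGenImgProjTransformer(
        hSrcDS, GDALGetProjectionRef(hSrcDS),
        hDstDS, GDALGetProjectionRef(hDstDS),
        FALSE, 0.0, 1);
    psWO->pfnTransformer = GDALGenImgProjTransform;

    GDALWarpOperation oOperation;
    if (oOperation.Initialize(psWO) == CE_None)
        oOperation.ChunkAndWarpImage(0, 0,
                                     GDALGetRasterXSize(hDstDS),
                                     GDALGetRasterYSize(hDstDS));

    GDALDestroyGenImgProjTransformer(psWO->pTransformerArg);
    GDALDestroyWarpOptions(psWO);
    GDALClose(hSrcDS);
    GDALClose(hDstDS);
    return 0;
}

Anyway, back to your point: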
Is the problematic case you cite common, or is it the exception?
Perhaps the -wm option is a good place to set a lower memory limit for
that case, with the default being higher (instead of the other way
around)?
Which is more common: 1) many small images being merged into a larger
one, 2) warping of large images, or 3) merging of large and small
images?
It would be nice, as you suggest, to have some benchmarks. Would the
three cases I just mentioned be representative enough? What would be
typical sizes for each case? Perhaps a compromise could be found that
minimizes performance penalties on average.
The ticket mentioned above has a proposed patch (by you); perhaps it
could be used in some benchmarks?
On a side note, I *think* that most users stick with the default value
and do not use the -wm option unless they really dig into the docs;
therefore it would be nice to have a better default.
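To make 1) concrete: if the default were derived from the machine, it
could look roughly like this (purely my sketch, not code from
gdalwarpoperation.cpp; sysconf is POSIX-specific, so a real patch would
need portable equivalents on other platforms):

#include <unistd.h>

/* Sketch: derive a default warp memory limit as 1/4 of physical RAM,
   falling back to the current 64 MB when it cannot be determined.
   Value in bytes, matching dfWarpMemoryLimit. */
static double GuessDefaultWarpMemoryLimit()
{
    const long nPages = sysconf(_SC_PHYS_PAGES);
    const long nPageSize = sysconf(_SC_PAGE_SIZE);

    if (nPages > 0 && nPageSize > 0)
        return 0.25 * (double) nPages * (double) nPageSize;

    return 64.0 * 1024 * 1024;
}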
>
>> 2) Add a config option so that the default can be set for all
>> operations by the user (which overrides 1, but can be overridden for
>> each operation)
>
> Not sure I understand how option 2) is different from the -wm option
> and how it would be implemented ...
Simply a config option, like the one for the raster I/O cache
(GDAL_CACHEMAX). This would be a simple fix that allows users to set
the default for their typical usage (and select another value with -wm
when needed). There might then be no need to change the built-in
default value.
I have a small patch that implements this; should I create a ticket to
centralize this discussion and post the patch?
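To give an idea of the shape of it (GDAL_WARP_MEMORY_LIMIT below is a
placeholder name I am using for illustration; the actual patch may name
it differently):

#include "cpl_conv.h"

/* Sketch: consult a config option for the default warp memory limit,
   mirroring how GDAL_CACHEMAX works for the raster block cache. */
static double GetDefaultWarpMemoryLimit()
{
    const char *pszLimit =
        CPLGetConfigOption("GDAL_WARP_MEMORY_LIMIT", NULL);

    if (pszLimit != NULL)
        return CPLAtof(pszLimit) * 1024 * 1024;  /* value given in MB */

    return 64.0 * 1024 * 1024;  /* current hard-coded default, bytes */
}

The idea is that gdalwarpoperation.cpp would consult this where it
currently hard-codes 64 MB, with -wm / dfWarpMemoryLimit still taking
precedence for each operation.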
cheers,
Etienne