[gdal-dev] Gdalinfo slow with big Rasterlite tables
Rahkonen Jukka
Jukka.Rahkonen at mmmtike.fi
Sun Aug 19 13:51:19 PDT 2012
Even Rouault wrote:
> Jukka,
>>
>> Is gdalinfo perhaps walking through every single tile in the
>> rasterlite table to gather the image layer info? Could
>> it be done in a more effective way on the
>> GDAL side?
> The Rasterlite driver needs to fetch the extent of the "XXXX_metadata" layers
> to establish the extent of the raster XXXX, which might take a long time when
> there are a lot of tiles.
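(For the record, without any cached statistics, establishing that extent
means scanning the geometry of every tile in the metadata table. Conceptually
it amounts to something like the SQL below; the geometry column name and the
exact query the driver issues are my assumption, not taken from the driver
source:

  SELECT Min(MbrMinX(geometry)), Min(MbrMinY(geometry)),
         Max(MbrMaxX(geometry)), Max(MbrMaxY(geometry))
  FROM t0080_metadata;

With 780270 tile rows that is a full-table scan on every gdalinfo call.)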
>>
>> When it comes to GDAL, could it make sense to cache the
>> gdalinfo results for Rasterlite layers? Three minutes is a
>> rather long time, and my 153600 x 249600 pixel layer at
>> 5 meter resolution, with 780270 rows/tiles in the table, is
>> not exceptionally big. If the time increases with the tile
>> count, it would mean 12 minutes for getting gdalinfo from the
>> 2.5 meter resolution layer and 48 minutes from the 1.25 meter
>> layer...
>
> Funny, because independently of the issue you raise here, I was working on
> improving the performance of GetFeatureCount() and GetExtent() on Spatialite
> DBs. In Spatialite 3.0, there is a SQL function, triggered by "SELECT
> UpdateLayerStatistics()", that creates a "layer_statistics" table caching
> both the row count and the extent.
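(Side note: assuming the standard Spatialite 3.0 schema, those cached values
can be inspected directly, e.g.:

  SELECT table_name, row_count,
         extent_min_x, extent_min_y,
         extent_max_x, extent_max_y
  FROM layer_statistics;

The column names here are the ones used by the Spatialite 3.0
layer_statistics table and may differ in other versions.)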
> I've just pushed an improvement (r24800) in which the SQLite driver can use
> those cached values, if they are up to date. Whether they are is determined
> by checking that the timestamp of the last 'UpdateLayerStatistics' event
> recorded in the 'spatialite_history' table matches the timestamp of the file.
> When creating a new Spatialite DB or updating it with the OGR API, the SQLite
> driver makes sure that the statistics are kept up to date automatically.
> However, if a third-party tool edits the DB, it is then necessary to run:
> ogrinfo the.db -sql "SELECT UpdateLayerStatistics()". (The driver plays it
> safe and will not use stale statistics, to avoid returning false results.)
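(For anyone checking their own DB: the last recorded event can be inspected
with something along these lines, assuming the spatialite_history table
carries 'event' and 'timestamp' columns as in Spatialite 3.0:

  SELECT event, timestamp
  FROM spatialite_history
  ORDER BY timestamp DESC
  LIMIT 1;
)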
> I've just made marginal changes (r24801) in the Rasterlite driver so that the
> above caching mechanism works automatically in simple gdal_translate and
> gdaladdo scenarios. I would expect it to solve your performance
> problem, although I have not checked that.
Yes, that makes gdalinfo fast. With my biggest layer the time went down from 3 minutes to 3 seconds.

However, my gdal_translate test fails. It used to take three minutes before the zero appeared in the progress bar, but after that the translation itself took only a few seconds. After updating to GDAL r24803 the program shows the zero percent progress within a couple of seconds, but unfortunately nothing happens in any reasonable time after that.
>gdal_translate -of Gtiff -outsize 1% 1% RASTERLITE:test.sqlite,table=t0080 test.tif
Input file size is 153600, 249600
0
Overviews are OK, so taking the one percent downsample should be fast.
Overviews: 76800x124800, 38400x62400, 19200x31200, 9600x15600, 4800x7800,
2400x3900, 1200x1950, 600x975, 300x488
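(For reference, the overview factors above are powers of two; on a Rasterlite
datasource such overviews are typically built with gdaladdo, along the lines
of:

  gdaladdo RASTERLITE:test.sqlite,table=t0080 2 4 8 16 32 64 128 256 512

The level list here is inferred from the overview sizes printed above, not
from the actual command used.)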
-Jukka-