[Gdal-dev] GDAL raster block caching issues
Steve Soule
stsyo3lwdia4 at vexcel.com
Fri Aug 26 18:40:37 EDT 2005
I originally sent the message below to Frank directly, but he asked me
to send a copy to gdal-dev for archiving and general discussion. Here
is the message:
In this discussion, "client" means the programmer writing an
application program that uses GDAL. "User" means a non-programmer
using such an application program.
In this document, I use the acronym LRUL to mean "least recently used
list". This is a data structure and algorithm used by GDAL to keep
track of flushable raster blocks in order from most recently to least
recently used, so that when the total size of cached blocks passes
a limit, the least recently used blocks are automatically flushed.
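To make the mechanism concrete, here is a minimal, self-contained sketch of an
LRUL-style cache. This is not GDAL's actual implementation; the class name, the
integer block key, and the members are invented for illustration, and writing
out dirty blocks is reduced to a comment.

#include <cstddef>
#include <list>
#include <map>
#include <utility>

// Illustration only -- not GDAL code.  Blocks are identified by a key (in
// GDAL it would be a band plus block offsets) and kept in a list ordered
// from most to least recently used; when the total size of cached blocks
// exceeds the limit, blocks are flushed from the tail of the list.
class BlockLRUL
{
    typedef std::list<int> Order;
    typedef std::map<int, std::pair<Order::iterator, std::size_t> > Index;

public:
    explicit BlockLRUL(std::size_t maxBytes) : maxBytes_(maxBytes), usedBytes_(0) {}

    // Record a use of a block: move (or insert) it at the front of the
    // list, then flush least recently used blocks until under the limit.
    void Touch(int key, std::size_t blockBytes)
    {
        Index::iterator it = index_.find(key);
        if (it != index_.end())
        {
            usedBytes_ -= it->second.second;      // drop the old accounting
            order_.erase(it->second.first);
            index_.erase(it);
        }
        usedBytes_ += blockBytes;
        order_.push_front(key);
        index_[key] = std::make_pair(order_.begin(), blockBytes);

        while (usedBytes_ > maxBytes_ && order_.size() > 1)
        {
            int victim = order_.back();           // least recently used block;
            usedBytes_ -= index_[victim].second;  // a real cache would write it
            index_.erase(victim);                 // to disk first if it is dirty
            order_.pop_back();
        }
    }

private:
    std::size_t maxBytes_, usedBytes_;
    Order order_;                                 // front = most recently used
    Index index_;
};

GDAL's real cache works at the level of GDALRasterBlock objects and also has to
cooperate with block locking, but the ordering-and-flushing idea is the same.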
Issue 1: Global vs. dataset LRUL
Currently, the LRUL is global, that is, it contains blocks from all
open datasets. I think it would be better if each dataset had its
own LRUL (or possibly each raster band). This would have the following
advantages:
1. Thread-safety. A global LRUL is difficult to make thread-safe.
This work has not yet been done in GDAL. One thing that would be
particularly difficult to resolve is how to handle dirty blocks, that
is, blocks that need to be written to disk before they can be flushed.
A dataset or raster band level LRUL would be trivially thread-safe.
2. Cache size flexibility. With a global LRUL, there's one cache
limit (nCacheMax) for the entire process. This is simple but
not flexible. When you have more than one dataset open, it may be
that some datasets need caching more than others. In particular,
datasets accessed over the network are more likely to need caching
than datasets stored on a local disk. If each dataset had its own
cache limit, it would be easy to tailor the limit to the individual
dataset's block loading time (see the sketch after point 5 below).
3. Matches user behavior better. The application where dataset LRUL
could give a big performance improvement over global LRUL is in image
viewing and marking. In such an application, the user typically has
from two to six images open at once. They tend to scroll image one
and mark a point, then scroll image two and mark a point, and so on
until the point has been marked in all images, then start over with
image one. With global LRUL, if the scrolling is sufficient to consume
all of the cache, then by the time the user gets back to image one,
all of image one's raster blocks have been flushed. So you get nearly
100% cache misses. With dataset LRUL, this problem disappears.
This isn't an artificial example. In my mind, there are two types
of applications for GDAL: image processing and user interface.
Image processing applications typically process all of the blocks
in an image in order; caching those blocks is useless. It's in user
interface applications that caching is really useful. For applications
where the user is looking at a single image, there's no difference
between global and dataset LRUL because there's only one dataset.
So it's only in applications where the user needs more than one
image on the screen at once that global versus dataset LRUL makes
a difference. Such applications usually involve examining two or
more images of the same thing and comparing, matching, or marking.
FotoG (my main project) is such an application. We did a user test
of FotoG a few weeks ago, and I observed that most users spent most of
their time sitting and waiting for the screen to update after issuing
a scroll or zoom command. Turning on GDAL's block caching didn't help
much. After scratching my head about this for a few weeks, I came to
the realization that the problem would probably be greatly reduced if
GDAL used dataset LRUL instead of global LRUL. Users have a strong
tendency to think in parallel when more than one image is visible.
This leads them to try to complete a fundamental task in all open
images before moving on to the next fundamental task. Dataset LRUL
matches this work-in-parallel mindset much better than global LRUL.
4. Allows more cache algorithm flexibility. If you ever decide
to support a different cache flushing algorithm than LRUL, it is
highly likely that the client will want to make the choice of cache
flushing algorithm for each dataset, not for the whole application.
With global LRUL, all datasets must use the same algorithm.
5. Makes turning off write caching possible. I personally would
prefer not to have write caching in my applications. If the cache
size were a property of the dataset rather than being global, I could
turn off caching for datasets in which I'm writing data.
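To make points 1, 2, and 5 concrete, here is a purely hypothetical sketch of
where the per-dataset knobs might live. Nothing like this exists in GDAL today;
all of the names are invented for illustration.

#include <cstddef>
#include <pthread.h>

// Hypothetical sketch only -- not GDAL code.
struct DatasetBlockCache
{
    pthread_mutex_t mutex;        // point 1: contention stays within one dataset
    std::size_t     maxBytes;     // point 2: each dataset gets its own limit
    bool            cacheWrites;  // point 5: write caching can be disabled here
    // ... plus the dataset's own LRUL of raster blocks ...
};

int main()
{
    // A slow, network-mounted image gets a generous cache; a file that is
    // only being written gets none.
    DatasetBlockCache network = { PTHREAD_MUTEX_INITIALIZER, 200u * 1024 * 1024, true };
    DatasetBlockCache output  = { PTHREAD_MUTEX_INITIALIZER, 0, false };
    (void) network;
    (void) output;
    return 0;
}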
Issue 2: Client-controlled block flushing
Currently, if block caching is turned on in GDAL, the algorithm used
for flushing cache blocks is LRUL. It would be nice if the client
could manually control loading and flushing of blocks in order to
use a different flushing algorithm. After some thought on how to
do this, I realized that the capability already exists in GDAL:
to override LRUL for a block, you lock it with GetLockedBlockRef,
and when you want to return caching control to LRUL, you call DropLock.
Though these two functions are technically part of the GDAL API (they're
declared public in gdal_priv.h), it may be that you don't want them to be
officially part of the API. If so, perhaps you should provide an alternate
mechanism for overriding LRUL.
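For example, a client that wants to pin a particular block while working with
it, and then return it to normal LRUL control, might do something along these
lines (a sketch based on the declarations in gdal_priv.h; the function name is
just for illustration):

#include "gdal_priv.h"

// Sketch: pin block (0,0) of band 1 so the LRUL cannot flush it, then
// hand flushing control back to the LRUL when done.
void PinAndRelease(GDALDataset *poDS)
{
    GDALRasterBand  *poBand  = poDS->GetRasterBand(1);
    GDALRasterBlock *poBlock = poBand->GetLockedBlockRef(0, 0);
    if (poBlock == NULL)
        return;                      // block could not be read

    // While the lock is held, the block stays in the cache regardless of
    // what the LRUL would otherwise decide to flush.
    void *pData = poBlock->GetDataRef();
    (void) pData;                    // ... work with the block's data here ...

    poBlock->DropLock();             // the block is now an ordinary LRUL entry again
}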
Issue 4: OS versus GDAL caching
How does GDAL's block caching interact with the OS's caching of disk
blocks? The OS uses any main memory not used by processes for caching
disk blocks. So it's possible for a chunk of image data to be cached
in main memory twice: once in the GDAL raster block cache, and once
in the OS disk block cache. This double-caching may be sufficiently
wasteful to be a problem. In particular, by splitting main memory
between these two caches, GDAL's block caching may prevent either
cache from being effective.
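As a side note, the client can at least observe GDAL's half of the split at
run time. Here is a small sketch using the cache calls declared in gdal.h:

#include "gdal.h"
#include <iostream>

// Sketch: report how much memory GDAL's raster block cache may use and how
// much it currently holds -- memory that is then unavailable to the OS for
// its own disk block cache.
int main()
{
    GDALAllRegister();
    GDALSetCacheMax(100 * 1024 * 1024);               // 100 MB block cache
    std::cout << "cache max:  " << GDALGetCacheMax()  << " bytes\n"
              << "cache used: " << GDALGetCacheUsed() << " bytes\n";
    return 0;
}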
To test this, I wrote the following simple test program that reads
an image from disk one raster block at a time, then repeats the whole
read a number of times specified on the command line. Also specified
on the command line is GDAL's raster block cache limit.
#include "gdal_priv.h"
#include <iostream>
#include <vector>
using namespace std;
int main(int argc, char **argv)
{
if (argc != 4)
{
cerr << "Usage: gdaltest cachemb repeat image\n";
return 1;
}
const char *filename = argv[3];
int cache = atoi(argv[1]), repeat = atoi(argv[2]);
GDALAllRegister();
GDALSetCacheMax(cache * (1024 * 1024));
GDALDataset *ds = (GDALDataset *)GDALOpen(filename, GA_ReadOnly);
if (!ds)
{
cerr << "GDALOpen failed on " << filename << "\n";
return 1;
}
int xsize = ds->GetRasterXSize();
int ysize = ds->GetRasterYSize();
GDALRasterBand *rb = ds->GetRasterBand(1);
int blockx, blocky;
rb->GetBlockSize(&blockx, &blocky);
vector<unsigned short> imgdata(blockx * blocky);
for (int iter = 0; iter <= repeat; ++iter)
for (int row = 0; row < ysize; row += blocky)
{
int chunky = blocky;
if (chunky > ysize - row) chunky = ysize - row;
for (int col = 0; col < xsize; col += blockx)
{
int chunkx = blockx;
if (chunkx > xsize - col) chunkx = xsize - col;
if (rb->RasterIO(GF_Read, col, row, chunkx, chunky,
&imgdata[0],
chunkx, chunky, GDT_UInt16, 0, 0) == CE_Failure)
{
cerr << "Reading raster data failed on " << filename <<
"\n";
return 1;
}
}
}
}
I executed this test program on two files, one 172 MB in size and the
other 380 MB. Both were 16-bit unsigned integer TIFFs. My computer has
768 MB of memory, so image 1 fits in memory twice and image 2 doesn't.
To ensure that each image was not already sitting in the OS disk block
cache from one run to the next, I made ten copies of each image and ran
each test ten times, once on each copy. I ran tests with repeat counts
of zero through three and GDAL cache sizes of 0, 1, 10, 100, 200, and
400 MB. I timed the runs in seconds and averaged over the ten runs for
each parameter set. Here are the results:
file 1 (172 MB), run time in seconds:

                     cache size (MB)
repeat       0       1      10     100     200     400
     0     8.6     8.6     8.6     8.6     8.6     8.6
     1    10.0    10.7    11.1    11.5     8.6     8.7
     2    11.2    13.0    12.2    14.3     8.7     8.9
     3    12.0    13.0    14.6    15.3     8.9     9.1

file 2 (380 MB), run time in seconds:

                     cache size (MB)
repeat       0       1      10     100     200     400
     0    19.6    19.6    19.6    19.6    19.6    19.7
     1    27.4    32.2    30.6    32.2    37.2    19.9
     2    36.2    45.2    42.1    51.3    44.6    20.5
     3    37.4    46.5    44.2    51.6    70.4    20.7
Here's my analysis of the trends in this data:
1. If GDAL's raster block cache is large enough for the entire file
to fit in the cache (columns 200 and 400 for image 1, and column
400 for image 2), the raster block cache gets 100% cache hits, and
you see very little increase in loading time for each repeat. This,
of course, is the primary benefit of GDAL's raster block cache.
2. If GDAL's raster block cache is off (column 0), each additional
repetition adds a significant amount of time (unlike the 100% cache-hit
columns), though each repetition costs less than the one before it.
This is the case where we're relying completely on the OS disk block
cache, and using as little memory as possible so that the OS disk block
cache is as large as possible. The fact that the time goes up so much
from repeat 0 to repeat 1 suggests that the OS's cache flushing
algorithm is not LRUL (at least not on Linux, which is the OS I used
for these tests); it doesn't appear to be deterministic, predictable,
or even consistent. So for this test, which was designed specifically
to test LRUL caches, the OS cache doesn't perform as well as GDAL's
LRUL cache.
3. As GDAL's raster block cache size increases, execution time
increases (as long as GDAL's cache size remains too small for the
image). I believe this is the effect this test program was designed
to test. The raster block cache is getting 100% cache misses, so the
raster block cache is rendered useless, and it takes memory away from
the OS's disk block cache, making that cache less effective.
4. Setting GDAL's cache size to 1 MB significantly hurts performance
relative to 0 MB. I don't know what causes this. I don't think
it's because of increased level 2 processor cache misses, because
that would affect the last columns (the 100% raster block cache hit
columns) as well.
5. For image 1, setting GDAL's cache size to 400 MB rather than 200 MB
hurt performance significantly. I really don't understand this trend.
From my understanding of the GDAL source, I would expect this to have
absolutely no effect: comparing an integer (the current cache in use)
against 400 takes no more time than comparing it against 200. Since
this trend is not relevant to this discussion, I didn't bother to
track down the cause.
My conclusion? I don't really have one. On the one hand, one could
claim that this test shows that even in the best case, GDAL's cache
isn't enough better than the OS cache to be worth it, and that the OS
cache could outperform GDAL's cache in real-world examples. If so,
then removing GDAL's raster block cache would simplify the code and
improve performance. On the other hand, one could claim that this
test shows that the OS cache performs sufficiently poorly compared to
GDAL's cache that GDAL's cache is clearly of great benefit. The only
trick is setting the cache size to maximize the benefit.