[gdal-dev] Re: Raster Strategies Examples
Edi.Karadumi
edikaradumi at gmail.com
Tue Aug 17 03:11:04 EDT 2010
Thanks, I have read one of your posts:
>Glen,
>
> As Frank has pointed out in his reply (and Ed has said for several
> years),
>the answer is to pyramid your data. This means constructing layers at
>different resolutions, then setting MINSCALE and MAXSCALE for each of the
>layers in the map file. The goal is to improve the performance of MapServer
>by limiting the number of files it has to open to render a requested image.
>
> For example, I've set up a site to render NASA's GeoCover global Landsat
>mosaic. To handle the range of requests from full resolution (14 meters per
>pixel) to full world-wide coverage (20 km per pixel), I set up six levels
>(and I may add one more) of the pyramid (view in a fixed-pitch font such as
>Courier):
>
>Lvl  Map Scale  Resolution      Tile Size                      Largest Pixels  Number of Tiles  Tile Size  Total Size  Format
>---  ---------  --------------  -----------------------------  --------------  ---------------  ---------  ----------  ------
>6    1:50k      14 meters       1:250k-ish (1/16 of 5x6 deg)   10k x 10k       14064            26 MB      338 GB      ECW
>5    1:250k     50 meters       1:250k (1 deg H x 2 deg W)     2k x 2k         14921            3 MB       38.4 GB     ECW
>4    1:1m       200 meters      ~1:1m (5 deg H x 6 deg W)      3k x 2k         36x60 = 2160     20 MB      26.1 GB     TIFF
>3    1:5m       30 sec (1 km)   15 deg H x 30 deg W            2k x 3k         12x12 = 144      20 MB      2.4 GB      TIFF
>2    1:20m      2 min (4 km)    45 deg H x 90 deg W            1k x 2k         4x4 = 16         10 MB      0.167 GB    TIFF
>1    1:100m     10 min (20 km)  180 deg H x 360 deg W          1k x 2k         1                7 MB       0.007 GB    TIFF
>
>Disclaimer: The above is just an example of pyramiding a large raster
>dataset (tuning by adding levels or adjusting scale will likely be
>required). And I've broken Ed's Rule #3 ("For best performance, don't
>compress your data") since levels 5 and 6 are ECW, and Rule #4 ("Don't
>re-project your data on-the-fly") since levels 4,5,6 are in UTM.
>
>Brent Fraser
>GeoAnalytic Inc.
>Calgary, Alberta
but it is still confusing for me, because I haven't worked with raster data
for a long time. My pixel size is 0.08 m and I use the default resolution,
72 dpi. My images are 9375x6250 pixels. Now I am having problems calculating
how many overviews I should build: maybe I should not build the 128 overview
but instead create another copy of the mosaic data at that point? Or should I
build the 256 overview and then build the additional external layers starting
at 512? I know these are just some math calculations, but I'm having problems
with them.
Regards
Edi Karadumi
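A rough way to work this out, assuming (as stated above) 0.08 m pixels and
MapServer's default resolution of 72 dpi, where one screen pixel is about
0.0254 / 72 = 0.00035 m: a layer is shown at roughly one data pixel per screen
pixel when the scale denominator equals its ground pixel size divided by
0.00035 m. For 0.08 m data that gives approximately:

  base data (0.08 m)       ~1:230
  overview 128 (10.24 m)   ~1:29,000
  factor 256 (20.5 m)      ~1:58,000
  factor 512 (41 m)        ~1:116,000
  factor 1024 (82 m)       ~1:232,000
  factor 4096 (328 m)      ~1:929,000

so covering a zoom-out to 1:1,200,000 takes layers out to roughly factor
4096-8192. Also, at factor 128 each 9375x6250 tile shrinks to about 73x49
pixels, so beyond that point merged external layers are usually more useful
than further internal overviews. These figures are only a sketch derived from
the numbers quoted in this thread.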
On Mon, Aug 9, 2010 at 4:24 PM, Brent Fraser <bfraser at geoanalytic.com>
wrote:
The steps you've listed below are correct for maximizing performance when
viewing the raster at (or around) the native resolution of the data.
However, as you've found out, the performance can still be poor when trying
to view the entire dataset.
The basic rule is that your application (mapserver?) should not have to open
more than 4 to 8 raster files to render the view. With that in mind (and
since disk space is not a problem), create "layers" of the raster dataset at
different scales, say a factor of 2, until you have only one image.
For example, if the resolution of your original data is 1 meter per pixel,
your layers would look like the following. Note the first eight layers are
handled by the internal overviews you created with gdaladdo.
Pixel   Map Scale:
Size:   1:n
-----   ----------
1 m     4,000
2       8,000
4       16,000
8       32,000
16      64,000
32      128,000
64      250,000
128     500,000
Additional External Layers:
256     1,000,000
512     2,000,000
1024    4,000,000
2048    8,000,000
4096    16,000,000
To create the additional external layers, use GDAL's gdalbuildvrt to create
a VRT file of the original dataset. You can then use gdal_retile.py or
MapTiler to build the layers.
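As a minimal sketch of that workflow (the list file name, tile size, resampling
choice, and number of levels below are only illustrative, not from this thread):

  gdalbuildvrt -input_file_list all_tiles.txt mosaic.vrt
  gdal_retile.py -v -r bilinear -levels 5 -ps 2048 2048 -co "TILED=YES" \
      -targetDir pyramid mosaic.vrt

Here all_tiles.txt is simply a text file listing the source TIFFs (handy with
~6000 files), and each pyramid level written under the target directory can
then be exposed as its own layer with its own MINSCALE/MAXSCALE range in the
map file.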
Best Regards,
Brent Fraser
Edi KARADUMI wrote:
I have read many strategies for raster performance, but I still have problems
with my case. The posts that I have read explain the strategy, but they are not
very detailed. I'm new to MapServer, so I have problems implementing them.
My case is:
- about 6000 tiles that form the map
- 170 MB each tile
- approximately 1.2 TB of images
- I have a 10 TB HD, so I think disk space is not a problem for me
- tile format is TIFF
- tile size is 9375x6250 pixels
The strategy I have implemented and the problems I have:
- First I divided the tiles into 60 folders, to improve disk seek/read
performance.
- Then I converted the source files into internally tiled GeoTIFFs with the
command
  gdal_translate -of GTiff -co "TILED=YES" original.tif tiled.tif
- I used gdaladdo to add internal overviews (a scripted loop over all tiles
for these two conversion steps is sketched after this list):
  gdaladdo -r average tiled.tif 2 4 8 16 32 64 128
- Created a tile index using
  gdaltindex -write_absolute_path MapAll.shp //server/Maps/Subfolder1/*.tif
- Created a spatial index (.qix file):
  shptree MapAll.shp
- Then added the layer to the mapfile without the .shp extension so the
application can use the .qix.
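As a minimal sketch, the two per-tile steps above could be run over all the
tiles roughly like this (the paths and output naming are only assumptions):

  # Illustrative loop: make a tiled copy of each source TIFF, then build
  # internal overviews on that copy (paths/names are assumptions).
  for f in /server/Maps/Subfolder*/*.tif; do
      out="${f%.tif}_tiled.tif"
      gdal_translate -of GTiff -co "TILED=YES" "$f" "$out"
      gdaladdo -r average "$out" 2 4 8 16 32 64 128
  done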
As you may know, I have very slow performance when I zoom out, and I'm stuck
here. As I have read, I should make a copy of the tiles at reduced resolution,
merge the tiles together, and use min/max scale to show different layers at
different scales. The min/max scales I zoom in/out between are 1:100 and
1:1,200,000. Now my questions are:
- How can I calculate the scale at which I should create another layer of the
tiles, or should I find it with some tests?
- What should the resolution of the new layer be?
- Is there any tool or program to merge the tiles? Merging 6000 tiles with
gdalwarp by writing the commands myself is frustrating (a possible merge
command is sketched after this list).
- How many tiles should I merge together to create the new layer (i.e. how
many tiles should the new layer have)? I know that at each zoom scale it is
better for only one tile to appear, but I don't know how to calculate it.
- Should the tiles I merge be the originals, or the ones with internal tiling
and overviews?
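One possible way to do that merge without hand-writing thousands of gdalwarp
commands, sketched here with illustrative file names and an illustrative
reduction factor (12.5%, i.e. a factor of 8), is to build a VRT of all the
tiles and translate it to a single reduced-resolution copy:

  # Illustrative only: virtual mosaic of every tile, then one shrunken
  # GeoTIFF written from it, with its own overviews added afterwards.
  gdalbuildvrt -input_file_list all_tiles.txt full_mosaic.vrt
  gdal_translate -of GTiff -co "TILED=YES" -outsize 12.5% 12.5% \
      full_mosaic.vrt mosaic_8x.tif
  gdaladdo -r average mosaic_8x.tif 2 4 8 16 32 64

That single file could then serve the coarser MINSCALE/MAXSCALE ranges, along
the lines of Brent's layer table above.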