[gdal-dev] Quick question about gdal2tiles

Petr Pridal - Klokan Technologies GmbH petr.pridal at klokantech.com
Mon Feb 2 17:24:19 PST 2015


Hi Jorge,

Rendering your sample file down to zoom level 19 would generate over 2
billion tiles - probably tens of terabytes of data. I guess that is not what
you want to do. Typically the overzooming is done on the client side in a
JavaScript viewer, or by the tile server hosting the tiles.
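
For a rough sense of the numbers (back-of-the-envelope only, not based on
the actual extent of your file): the global tile grid has 4^z tiles at zoom
z, i.e. 4^19 ≈ 2.7 x 10^11 tiles at zoom 19, so even a raster covering
about one percent of that extent already means a couple of billion 256x256
tiles.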

The tiling utilities suggest the native zoom level automatically, based on
the raster resolution of the input.
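
For example, if you simply omit the -z option, gdal2tiles.py works out the
minimum and maximum zoom levels itself from the resolution of the input (a
sketch of the invocation; the file and directory names are just placeholders):

$ gdal2tiles.py -r cubic -w all merged.vrt tms_dir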

In case you want to render large datasets composed of multiple files
efficiently, I would recommend trying our MapTiler Pro command-line
utility. It is significantly faster than GDAL2Tiles and produces optimised
and correct tiles, while handling merging of multiple files, dateline-crossing
issues, reprojection and merging of data in different SRSs, direct output
to MBTiles, OGC WMTS-compatible export, and many more features.

Usage on the command line is quite easy:

$ maptiler -o merged_output_dir input1.tif input2.tif input3.tif

Input files can stay in their original SRS - every extra reprojection
degrades the visual quality of the output.
Merging with a VRT is not recommended - it slows down the tile rendering,
because the VRT introduces artificial blocks; for performance it is better
to pass the original files directly.
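
For comparison, gdal2tiles takes a single input, so with multiple files you
are more or less forced into the VRT step that introduces those blocks (a
sketch of that workflow, with hypothetical file names):

$ gdalbuildvrt merged.vrt input1.tif input2.tif input3.tif
$ gdal2tiles.py -r cubic merged.vrt tms_dir

MapTiler avoids this by accepting the list of original files directly, as
in the example above.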

MapTiler is a complete C/C++ reimplementation of my GDAL2Tiles utility.
There are several significant improvements in the merging and internal
tiling process, including multi-threaded CPU parallelization and automatic
file-size optimisation (PNG color quantization and JPEG tweaks).
Installation is possible with .deb or .rpm packages, binaries for Mac OS X
or Windows, or Docker.

A small file like your sample mentioned above can be rendered directly
with MapTiler Free in the graphical user interface, which is available
for download at: http://www.maptiler.com/download/

If you drop us an email, we will also send you the Pro version with the
command-line interface for your free testing.

Best regards,

Petr Pridal, PhD
Klokan Technologies GmbH

On Wed, Jan 28, 2015 at 8:17 PM, Jorge Arévalo <jorge at cartodb.com> wrote:

>  Hi,
>
> I'm working with a patched version of gdal2tiles, which makes use of
> parallelization:
> http://gis.stackexchange.com/questions/7743/performance-of-google-map-tile-creation-processes/74446#74446
>
> I want to create a complete TMS cache from raster imagery. No assumptions
> about SRS or data type of the input data.
>
> I want the tiling process to be as fast as possible (gdal2tiles is a really
> heavy process). Do you have any recommendations about the data or the
> parameters used?
>
> Right now, I'm doing this
>
> Step 1: build vrt from input images
>
> gdal_vrtmerge.py -o merged.vrt <list of input tif files>
>
> Step 2: build tiles from vrt file
>
> gdal2tiles.py -r cubic -s epsg:XXXX -z 0-19 -w all merged.vrt tms_dir
>
> Even with parallelization, the process still feels really slow. Would it be
> faster if, for example, I converted all my input files to epsg:3857? Or if I
> scaled them to 8-bit? Or if I used the near resampling method instead of cubic?
> (I'm using cubic because I'm working with continuous data: satellite
> images. Am I doing it right?)
>
> Any other tips?
>
> Many thanks in advance
>
> Best regards
>
>
> --
> Jorge Arévalo
>
> CartoDB
> http://cartodb.com/
>
>
> _______________________________________________
> gdal-dev mailing list
> gdal-dev at lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/gdal-dev
>



-- 
Petr Pridal, Ph.D.
(managing director)

Klokan Technologies GmbH
Hofnerstrasse 96, 6314 Unterageri, Switzerland
Tel: +41 (0)41 511 26 12
Email: info at klokantech.com
Web: http://www.klokantech.com/

