[gdal-dev] Failing to write to s3 using GTiff driver.

William Roper roper.william at gmail.com
Tue Aug 16 04:11:23 PDT 2022


Thanks for the reply Even.

I've run with "--debug on" and I think the interesting bit is:

GDAL: GDALWarpKernel()::GWKCubicNoMasksOrDstDensityOnlyByte()
> Src=0,15684,6x73 Dst=686,22448,98x92
> WARP: Using 1 threads
> GDAL: GDALWarpKernel()::GWKCubicNoMasksOrDstDensityOnlyByte()
> Src=0,15750,4x72 Dst=686,22540,98x92
> WARP: Using 1 threads
> .COG: Reprojecting source dataset: end
> GTiff: ScanDirectories()
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.tmp.msk)=0
>  response_code=404
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.tmp.MSK)=0
>  response_code=404
> GDAL: GDALDefaultOverviews::OverviewScan()
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.tmp.ovr)=0
>  response_code=404
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.tmp.OVR)=0
>  response_code=404
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.aux)=0
>  response_code=404
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.AUX)=0
>  response_code=404
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.tmp.aux)=0
>  response_code=404
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.tmp.AUX)=0
>  response_code=404
> COG: Generating overviews of the imagery: start
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.ovr.tmp)=0
>  response_code=404
> S3: /vsis3/my-bucket/test_batch_output/test.tif.ovr.tmp is not a object
> GTiff: File being created as a BigTIFF.
> ERROR 1: Only read-only mode is supported for /vsicurl
> COG: Generating overviews of the imagery: end
> S3: UploadId:
> Rok8InTcYwFMoHYA3kVAwlLDycIxreczdLSODBpLA5exAshfS46ZclK.3.2.a_EAAY3vhoJQnc7DsVk9X3pzsRetSAZ.oc39kDwRe755ftroj5rH9BRXLPaNCcnuTGZP
> S3: Etag for part 1 is "52647419376831771b40964bad80d86e"
> S3: Etag for part 2 is "4bea862d0878bbe1dbe717aa8cea48c8"
> S3: Etag for part 3 is "5ba3c9e4ea924f06cdc2ee614e5627c0"
> S3: Etag for part 4 is "07e53a2e07a2575d1b2d89cf8040ebe8"
> GDAL:
> GDALClose(/vsis3/my-bucket/test_batch_output/test.tif.warped.tif.tmp,
> this=0x55a288913060)
> S3: GetFileSize(
> https://my-bucket.s3.amazonaws.com/test_batch_output/test.tif.warped.tif.tmp)=207439264
>  response_code=206
> GDAL: GDALClose(/vsis3/my-bucket/dtm/test.tif, this=0x55a2882e8570)
> GDAL: In GDALDestroy - unloading GDAL shared library.
>

 It looks to me like GDAL attempted to write a lot of the temporary files to
S3 rather than locally? The command I used is:

gdal_translate \
  --config AWS_SECRET_ACCESS_KEY <hidden> \
  --config AWS_ACCESS_KEY_ID <hidden> \
  --config CPL_VSIL_USE_TEMP_FILE_FOR_RANDOM_WRITE YES \
  --config CPL_TMPDIR /tmp \
  -of COG \
  -co TILING_SCHEME=GoogleMapsCompatible \
  -co COMPRESS=DEFLATE \
  -co NUM_THREADS=ALL_CPUS \
  --debug on \
  /vsis3/my-bucket/dtm/test.tif \
  /vsis3/my-bucket/test_batch_output/test.tif

This is on Ubuntu 22.04 using GDAL 3.4.1.

The IAM user has full control over the bucket (checked with the "aws s3api
get-bucket-acl" command), and I can read from and write to the bucket using
the "aws s3" command. So if it's an S3 permissions issue, then I'm failing
to give GDAL the necessary info; I understood AWS_SECRET_ACCESS_KEY and
AWS_ACCESS_KEY_ID were sufficient.
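
For what it's worth, the same credentials can also be supplied as
environment variables, which GDAL's /vsis3/ handler reads as well. A
minimal sketch, with placeholder key values:

```shell
# Placeholder values; GDAL picks these up from the environment,
# as an alternative to passing them via --config.
export AWS_ACCESS_KEY_ID=<my-access-key>
export AWS_SECRET_ACCESS_KEY=<my-secret-key>

# Sanity check that the credentials are seen: reading metadata
# through /vsis3/ should succeed before attempting any conversion.
gdalinfo /vsis3/my-bucket/dtm/test.tif
```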

Thanks again for taking a look!
Will



On Tue, 16 Aug 2022 at 08:18, Even Rouault <even.rouault at spatialys.com>
wrote:

> William,
>
> can you add "--debug on" and retry to see if more hints are emitted about
> what's going on.
>
> Even
> Le 15/08/2022 à 11:45, William Roper a écrit :
>
> Hi list,
>
> I'm trying to convert a GeoTIFF on S3 into a COG that will also live on
> S3. I'm working on EC2 instances or Fargate containers because local
> download and upload speeds are a bit slow.
>
> gdalwarp \
>   --config AWS_SECRET_ACCESS_KEY <my-secret-key> \
>   --config AWS_ACCESS_KEY_ID <my-access-key> \
>   --config CPL_VSIL_USE_TEMP_FILE_FOR_RANDOM_WRITE YES \
>   --config CPL_TMPDIR /tmp \
>   -of COG \
>   -co TILING_SCHEME=GoogleMapsCompatible \
>   -co COMPRESS=DEFLATE \
>   -co NUM_THREADS=ALL_CPUS \
>   /vsis3/my-bucket/original-old-geotif.tif \
>   /vsis3/my-bucket/shiny-new-cog.tif
>
>
> And get the following error:
>
> 0...10...20...30...ERROR 1: Only read-only mode is supported for /vsicurl
>
>
> I get the same issue running in a docker container (osgeo/gdal:latest) or
> on an Ubuntu EC2 instance with gdal-bin installed via apt (version
> 3.4.1).
>
> I can work around it by writing the COG to the local machine and then
> uploading it to S3, but I wondered what I was doing wrong / not
> understanding.
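>
> Roughly, the workaround looks like this (bucket and file names are the
> ones from this message; adjust as needed):

```shell
# Write the COG to local disk first, where random writes are supported,
# then upload the finished file to S3 with the AWS CLI.
gdalwarp \
  -of COG \
  -co TILING_SCHEME=GoogleMapsCompatible \
  -co COMPRESS=DEFLATE \
  -co NUM_THREADS=ALL_CPUS \
  /vsis3/my-bucket/original-old-geotif.tif \
  /tmp/shiny-new-cog.tif

aws s3 cp /tmp/shiny-new-cog.tif s3://my-bucket/shiny-new-cog.tif
```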
>
> Many thanks
> Will
>
>
>
> _______________________________________________
> gdal-dev mailing list
> gdal-dev at lists.osgeo.org
> https://lists.osgeo.org/mailman/listinfo/gdal-dev
>
> -- http://www.spatialys.com
> My software is free, but my time generally not.
>
>
