[gdal-dev] RE: progressive rendering

Even Rouault even.rouault at mines-paris.org
Wed Sep 3 18:42:01 EDT 2008


Just a quick review of the latest state of the RFC, after a first read-through 
of it... 

- Do we really need to introduce the new classes GDALAsyncDataset and 
GDALAsyncRasterBand? I raised that point in a previous email. --> a potential 
problem for the corresponding C API but, I agree, not critical. We just need 
to be aware of it.
- Whether we allow classical RasterIO to be mixed with CreateRasterIOContext 
should be clearly stated. Or maybe just state that it is "left to the driver 
implementation: RTFM of the driver, where it is specified, and check the 
returned error codes carefully!"
- Same remark for 2 or more CreateRasterIOContext calls being used "at the 
same time".
- I don't really see how "static GDALDataset *GDALAsyncDatasetOpen" can be 
used by GDALOpen(). The current GDALOpen method iterates over each DRIVER 
object and calls the pfnOpen() function pointer on it. It only has knowledge 
of the driver object, so it couldn't make any use of GDALAsyncDatasetOpen(). 
Do we need to open the dataset in a particular way for progressive/async 
rendering? Can't the arguments just be provided in the dataset name? 
Currently, most drivers opening datasets that are not real files use a syntax 
like "DRIVER_NAME:host=XXXXX:dbname=XXXXX:user=XXXX:". This could be a way of 
initiating async IO mode if necessary. Or maybe the papszOptions should be 
used at the CreateRasterIOContext() level. That's a convenient way of 
extending things, but we should avoid abusing it, too... I guess your 
knowledge and experience of JPIP can guide the choice. My feeling is that it 
would be more appropriate to add it to CreateRasterIOContext(), as this will 
define a new transaction with the server. It seems unlikely that we would 
need to change the parameters of this transaction during it. This is just an 
opinion, though.
- I don't understand how SetView() interacts with the corresponding parameters 
in the initial CreateRasterIOContext().
- A really detailed question... but when, and by whom, is the 
GDALRasterIOContext * freed? I think one of my proposals was to add a 
symmetrical method to CreateRasterIOContext, something like 
"void the_good_class_name::DeleteRasterIOContext(GDALRasterIOContext *);". 
Then CancelIO() would be unnecessary.
- Do we really need the GDALAsyncRasterBand class, or the corresponding 
functionality? My feeling is that we don't. If the user is only interested 
in one band, they can do that with CreateRasterIOContext() at the dataset 
level.


On Wednesday 03 September 2008 19:24:40, Norman Barker wrote:
> Hi Tamas,
> Comments inline, thanks again for your thoughts.
> -----Original Message-----
> From: Tamas Szekeres [mailto:szekerest at gmail.com]
> Sent: Wednesday, September 03, 2008 10:59 AM
> To: Norman Barker
> Cc: gdal-dev at lists.osgeo.org
> Subject: Re: [gdal-dev] RE: progressive rendering
> 2008/9/3 Norman Barker <nbarker at ittvis.com>:
> > <ncb>
> > It would be useful to maintain both ReadNextBlock and RasterIO: in the
> > case of JPIP, it can be advantageous (if the client can handle it) to
> > request regions of a server image on tile boundaries -> ReadNextBlock,
> > but in most cases I think RasterIO will be called.
> >
> > If you think it will just be easier to keep RasterIO, then I am fine
> > with that.
> > </ncb>
> Norman,
> I don't see the difference between the functionality of ReadNextBlock
> and RasterIO. I guess it was only a name change when Adam renamed my
> ReadNextBlock to RasterIO, and I didn't object to it.
> Can you explain a possible difference from the perspective of the JPIP
> driver?
> I admit the RasterIO region has already been determined when
> CreateRasterIOContext is called. I think we cannot safely change these
> parameters within the same IO context.
> <ncb>
> Isn't there a function in GDAL to get the most appropriate block size?
> It will certainly be possible to advertise this through JPIP, whereas
> with RasterIO you could specify a region that covers many blocks. Again,
> I defer to you on this; we can make it simple and just keep the one
> function.
> </ncb>
> > <ncb>
> > In comparison to a callback function, I think this behaviour is
> > synchronous: we are blocking until a timeout, and then calling again
> > for another time period, even if it returns immediately. However,
> > this isn't a bad thing.
> > </ncb>
> I consider that the blocking will occur until a valid update is ready
> to be rendered by the user. In the meantime the user won't really be
> able to render anything, so I guess waiting here is not a big issue.
> However, if you want to let the user do other work within this thread,
> then implement the call to return immediately when the timeout is set
> to 0. Then status = IO_PENDING will show that there is nothing new to
> be found in the buffer and RasterIO should be called again soon.
> <ncb>
> Ok, I have got it now - we have a polling thread on the status - sounds
> good to me.
> </ncb>
> > I don't think the driver would be required to copy padding data into
> > the user buffer, since the user will be notified about the update
> > region that should be copied over.
> >
> > <ncb>
> > In the case of JPIP (however unfortunate), the server is allowed to
> > adjust the region of the image returned; this may require some
> > padding of the requested client buffer.
> Hmmm... That's difficult for me to follow. Won't it result in ugly
> effects in the image display?
> <ncb>
> It can result in ugly effects on the display; however, those 'ugly'
> regions will be filled in by subsequent requests to the server. It is
> part of the JPIP spec, so I mentioned it.
> Many thanks.
> </ncb>
> Best regards,
> Tamas
> _______________________________________________
> gdal-dev mailing list
> gdal-dev at lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/gdal-dev
