[gdal-dev] GSoC Image Correlator
Even Rouault
even.rouault at mines-paris.org
Mon Aug 27 08:50:14 PDT 2012
(Answering and forwarding to the list since you're apparently not subscribed.)
> Even,
> My implementation is based on the SURF algorithm.
> http://en.wikipedia.org/wiki/SURF (brief summary and a number of external
> links)
> http://www.vision.ee.ethz.ch/~surf/papers.html (here you can download the
> original PDF paper, which describes the algorithm in full)
Thanks for the links
> Normalization is a convenient technique.
> The luminosity image (buffer) should have values in [0, 1], because this
> image is used to create an integral representation (each entry being the
> sum of all values above and to the left of it). So, if normalization is not
> performed, the data type can eventually overflow. Furthermore, the
> threshold for feature point detection would be drastically different for
> each image (that's why it would be impossible to use a single threshold for
> most photos).
> Therefore, it's a very important requirement. (And I don't have enough
> information about how stable the results would be without it.)
> I noticed that I've used a "255" value. It was a quick decision made when I
> was working with my image samples. I agree that it should be replaced with
> a more appropriate value, something like "RasterBand->GetMaxValue()".
> As I mentioned above, normalization is a significant step and it has to be
> done before feature point detection.
Ok, that's what I had imagined. To sum it up, the constraint is that
GDALIntegralImage::Initialize receives a buffer of values in the [0,1] range.
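For the record, here is a rough sketch of how I picture that front-end step.
The Rec. 601 luma weights, the R,G,B band order and the 8-bit assumption are
mine, not necessarily what your code does:

#include "gdal_priv.h"
#include <vector>

/* Sketch: build a [0,1] luminosity buffer from 8-bit R,G,B bands, suitable
 * as input for GDALIntegralImage::Initialize(). Assumes the whole image
 * fits in RAM. */
static bool ComputeNormalizedLuminosity(GDALDataset *poDS,
                                        std::vector<double> &adfLum)
{
    const int nX = poDS->GetRasterXSize();
    const int nY = poDS->GetRasterYSize();
    const size_t nPixels = static_cast<size_t>(nX) * nY;
    std::vector<GByte> abyR(nPixels), abyG(nPixels), abyB(nPixels);

    if (poDS->GetRasterBand(1)->RasterIO(GF_Read, 0, 0, nX, nY, &abyR[0],
            nX, nY, GDT_Byte, 0, 0) != CE_None ||
        poDS->GetRasterBand(2)->RasterIO(GF_Read, 0, 0, nX, nY, &abyG[0],
            nX, nY, GDT_Byte, 0, 0) != CE_None ||
        poDS->GetRasterBand(3)->RasterIO(GF_Read, 0, 0, nX, nY, &abyB[0],
            nX, nY, GDT_Byte, 0, 0) != CE_None)
        return false;

    adfLum.resize(nPixels);
    for (size_t i = 0; i < nPixels; i++)
    {
        /* Dividing by 255 yields the [0,1] range the integral image needs. */
        adfLum[i] = (0.299 * abyR[i] + 0.587 * abyG[i] + 0.114 * abyB[i])
                    / 255.0;
    }
    return true;
}

And the overflow concern is real: for a 20000x20000 8-bit image, the
bottom-right entry of the integral image can reach 255 * 4e8, i.e. about
1e11, far beyond a 32-bit integer accumulator, whereas with [0,1] values
summed in a double buffer it stays below 4e8.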
>
> > The other important point is that the algorithms assume that each image
> > can entirely fit in RAM.
> >
> It's a weak part of the current implementation. For now the algorithm deals
> with the entire image, because initially I planned to work with small
> copies. (Otherwise, I think it's possible to cut the photo into pieces and
> run the algorithm on the pieces one by one. It will not detect points near
> the corners of the pieces, but it is still a possible solution.)
> I think that in further development the feature point detection algorithm
> could be implemented using several threads in the case of huge rasters.
> Every thread would use only a fragment of the image.
> By the way, I haven't yet written some tricky methods which make the
> algorithm faster. I'm going to implement these refinements in the future.
> There is still a lot of work.
Making the algorithm work with piece-wise loaded imagery can indeed be very
complicated. And if you try to correlate images taken by sensors with very
different viewing angles, then you could need to correlate parts that are not
at all at the same position in the 2 images.
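For reference, the usual mitigation for the "points near the corners of the
pieces" issue mentioned above is to make the tiles overlap by at least the
detector's border radius, and to keep only the points falling in each tile's
non-overlapping core. A rough sketch, where the tile size, the margin and
DetectOnWindow() are all made up for illustration:

#include <algorithm>

/* Hypothetical: runs the detector on a window and reports only the points
 * inside the given core rectangle, so each point is found exactly once. */
void DetectOnWindow(int nXStart, int nYStart, int nXEnd, int nYEnd,
                    int nCoreXOff, int nCoreYOff, int nCoreSize);

void ProcessByTiles(int nRasterXSize, int nRasterYSize)
{
    const int nTileSize = 1024; /* to be tuned to the available RAM */
    const int nMargin = 64;     /* >= the detector's border radius */

    for (int nYOff = 0; nYOff < nRasterYSize; nYOff += nTileSize)
    {
        for (int nXOff = 0; nXOff < nRasterXSize; nXOff += nTileSize)
        {
            /* Expand each tile by the margin, clamped to the raster. */
            const int nXStart = std::max(0, nXOff - nMargin);
            const int nYStart = std::max(0, nYOff - nMargin);
            const int nXEnd =
                std::min(nRasterXSize, nXOff + nTileSize + nMargin);
            const int nYEnd =
                std::min(nRasterYSize, nYOff + nTileSize + nMargin);

            DetectOnWindow(nXStart, nYStart, nXEnd, nYEnd,
                           nXOff, nYOff, nTileSize);
        }
    }
}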
IMO, it is OK for now that the algorithm has this limitation. It just needs to
be mentioned in the doc, and catching allocation exceptions should be
sufficient to handle those situations (we don't want C++ exceptions to
propagate up to C callers).
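Concretely, I mean something like the following at the C entry points. The
function name is only illustrative, not an existing GDAL function:

#include "gdal.h"
#include <new>

/* Illustrative sketch: shield C callers from C++ allocation failures. */
CPLErr GDALComputeMatchingPoints(GDALDatasetH hDS1, GDALDatasetH hDS2)
{
    try
    {
        /* ... load the images of hDS1 and hDS2 entirely in RAM and run
         * the correlator ... */
        return CE_None;
    }
    catch (const std::bad_alloc &)
    {
        CPLError(CE_Failure, CPLE_OutOfMemory,
                 "Not enough memory to run the image correlator");
        return CE_Failure;
    }
}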
>
> > Because I see that the first processing step is to compute a luminosity
> > buffer from the R,G,B bands. This could perhaps be done in a different
> > API, like:
> >
> > GDALRasterBandH GDALCreateLuminosityBandFromRGB(GDALDatasetH hDS);
> >
> > void GDALFreeLuminosityBand(GDALRasterBandH);
> >
> It's a possible modification.
>
> Overall, I'm not confident about making changes to the API; it's a tricky
> task for me right now.
I didn't imply that you needed to make them ;-) I was curious about Frank's
and your opinions. That's something that other GDAL committers could do. This
only touches the "front-end" of the algorithm, so it does not require an
in-depth knowledge of it. Actually, the RGB to luminosity converter could be
done as an independent step (which could be useful for other purposes).
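To make the intent concrete, here is how the proposed functions could be
used. None of this exists in GDAL yet:

#include "gdal.h"

/* Prototypes of the *proposed* API, copied from above. */
GDALRasterBandH GDALCreateLuminosityBandFromRGB(GDALDatasetH hDS);
void GDALFreeLuminosityBand(GDALRasterBandH);

int main()
{
    GDALAllRegister();
    GDALDatasetH hDS = GDALOpen("rgb.tif", GA_ReadOnly);
    if (hDS != NULL)
    {
        GDALRasterBandH hLum = GDALCreateLuminosityBandFromRGB(hDS);
        /* ... feed hLum to the feature point detector ... */
        GDALFreeLuminosityBand(hLum);
        GDALClose(hDS);
    }
    return 0;
}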
>
> Best regards,
> Andrew Migal
>