[gdal-dev] GSoC Image Correlator

Even Rouault even.rouault at mines-paris.org
Mon Aug 27 11:04:42 PDT 2012


On Monday, August 27, 2012 at 18:12:13, Frank Warmerdam wrote:
> On Mon, Aug 27, 2012 at 11:50 AM, Even Rouault
> 
> <even.rouault at mines-paris.org> wrote:
> > Making algorithm work with imagery that is piece-wise loaded can be very
> > complicated indeed. And if you try to correlate images that are taken by
> > sensors that have very different angles, then you could need to
> > correlate parts that are not at all at the same position in the 2
> > images.
> 
> Even,
> 
> I think it will actually not be too hard to remove this problem
> with loading the whole image.  It seems that the reference
> point objects (the somewhat poorly named GDALFeaturePoint)
> stores a "descriptor" representation of the point with it and so
> after these are collected from one of the images it is not
> necessary to keep the original image in memory.

Good point. Then the simplest solution I can imagine would be to call 
GDALIntegralImage::Initialize() and GDALSimpleSURF::ExtractFeaturePoints() on 
image extracts (for example horizontal swaths), and to shift the y of the 
returned points by the y of the top of the swath.
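
In pseudo-C++, something like the sketch below (untested; the
GDALIntegralImage / GDALSimpleSURF / GDALFeaturePoint signatures are only my
recollection of the GSoC code, ReadSwathAsLuminosity() / FreeSwath() are
made-up helpers that would wrap the RasterIO and RGB-to-luminosity steps, and
the overlap handling discussed further down is left out):

#include <algorithm>
#include <vector>

#include "gdal_priv.h"
#include "gdal_simplesurf.h"   /* header name of the GSoC classes, assumed */

/* Hypothetical helpers (not in GDAL): read nRows rows starting at nYOff as a
 * luminosity buffer suitable for GDALIntegralImage, and free it afterwards. */
double **ReadSwathAsLuminosity( GDALDataset *poDS, int nYOff, int nRows );
void FreeSwath( double **padfSwath, int nRows );

std::vector<GDALFeaturePoint> CollectPointsBySwath( GDALDataset *poDS,
                                                    int nSwathHeight,
                                                    double dfThreshold )
{
    std::vector<GDALFeaturePoint> aoAllPoints;
    const int nXSize = poDS->GetRasterXSize();
    const int nYSize = poDS->GetRasterYSize();

    for( int nYOff = 0; nYOff < nYSize; nYOff += nSwathHeight )
    {
        const int nRows = std::min(nSwathHeight, nYSize - nYOff);

        double **padfSwath = ReadSwathAsLuminosity(poDS, nYOff, nRows);

        GDALIntegralImage oIntegral;
        oIntegral.Initialize((const double **)padfSwath, nRows, nXSize);

        /* Octave range chosen only for illustration. */
        GDALSimpleSURF oSURF(1, 2);
        std::vector<GDALFeaturePoint> *poSwathPoints =
            oSURF.ExtractFeaturePoints(&oIntegral, dfThreshold);

        /* Shift the y of the returned points from swath-relative to
         * image-relative coordinates. */
        for( size_t i = 0; i < poSwathPoints->size(); i++ )
        {
            GDALFeaturePoint oPoint = (*poSwathPoints)[i];
            oPoint.SetY(oPoint.GetY() + nYOff);
            aoAllPoints.push_back(oPoint);
        }

        delete poSwathPoints;
        FreeSwath(padfSwath, nRows);
    }

    return aoAllPoints;
}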

But I feel there is a risk of not identifying features located at the top or 
bottom of each swath (or perhaps of getting false positives, but that would be 
annoying for the general non-windowed case). What makes me believe that is 
that I see a lot of loops iterating over each pixel, with computations based 
on a neighborhood of pixels. It is then likely that no useful computation can 
be made on border pixels.
So the algorithm would probably need to make sure that there is some overlap 
between successive swaths (typically of the order of 2 times the window 
radius), and to discard the feature points that are too close to the border 
of a swath.

An ASCII art example of what I mean:

-------------------------------------------------------


      1


                   2

      3
                                      4
++++++++++++++++++++++
               5
~~~~~~~~~~~~~~~~~~~~~~
                           6
-------------------------------------------------------
       7

                    8





++++++++++++++++++++++

Let's say that the first processing swath is limited by the two ------ lines, 
and the second swath is limited by the two ++++++ lines. The distance between 
~~~~~~ and ------- (or between ~~~~~~ and ++++++) would be the window radius.
When processing the first swath, you would keep 1, 2, 3, 4 and 5, but would 
reject 6 (if it is detected by the algorithm) because it is potentially 
affected by edge effects. When processing the second swath, you would ignore 
5 for the same reason, and keep 6, 7, 8, etc.
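
The acceptance test itself could be as simple as the following (again just a
sketch; nPointY would be GDALFeaturePoint::GetY() after the shift to
full-image coordinates, and nRadius the window radius):

/* Returns true if a point detected in the swath starting at nSwathYOff
 * (nSwathRows rows) should be kept, i.e. if it is not within nRadius of a
 * swath border that is not an actual image border. */
static bool KeepPointFromSwath( int nPointY, int nSwathYOff, int nSwathRows,
                                int nRadius, int nImageYSize )
{
    const int nSwathYEnd = nSwathYOff + nSwathRows;

    /* Top margin: reject, unless the swath starts at the image top. */
    const int nMinY = (nSwathYOff == 0) ? 0 : nSwathYOff + nRadius;

    /* Bottom margin: reject, unless the swath ends at the image bottom. */
    const int nMaxY = (nSwathYEnd >= nImageYSize) ? nImageYSize
                                                  : nSwathYEnd - nRadius;

    return nPointY >= nMinY && nPointY < nMaxY;
}

With an overlap of exactly 2 times the window radius, the acceptance intervals
of successive swaths abut, so each point should be kept by exactly one swath.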


> 
> I'm still not sure how to handle the image scaling and
> conversion to a luminosity image more smoothly.
> 
> Best regards,

