[GRASS-user] Workflow of a classification project with orthophotos
Nikos Alexandris
nikos.alexandris at felis.uni-freiburg.de
Thu Jul 31 14:39:07 EDT 2008
On Thu, 2008-07-31 at 11:17 -0700, Jonathan Greenberg wrote:
> Nikos:
>
> Performing relative radiometric normalization is a *requirement* for
> applying a single classification to multiple images (also for change
> detection). Unfortunately, it is not an algorithm that is available (to
> my knowledge), out-of-the-box, on ANY remote sensing platform (GRASS,
> ENVI, etc.). However, you can do the radiometric normalization yourself
> -- the idea is that pixels in the overlap zone between two images which
> are invariant (e.g. have not changed in structure, spectral properties
> or, in more complex architectures like trees, sun angle) should be
> linearly related to their counterpart in the other image. Assuming
> this, you can either manually choose a set of "pseudoinvariant" targets
> (pairs of pixels which are at the same location and are not changing)
> between the two images, and calculate an orthogonal regression to
> generate gains and offsets. One of those images, therefore, becomes
> your "reference" and the other one your "target". The gains/offsets are
> applied to the target image.
>
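[The orthogonal-regression step described above could be sketched as follows. This is an illustrative example with made-up pixel values, not code from GRASS or any other package; the function name and inputs are hypothetical. Orthogonal (total least squares) regression is computed here from the leading principal axis of the paired pixel values.]

```python
import numpy as np

def orthogonal_gain_offset(target_px, reference_px):
    """Orthogonal (total least squares) regression of reference on target.

    Returns (gain, offset) so that gain * target + offset approximates
    the reference radiometry at the pseudoinvariant pixel pairs.
    """
    x = np.asarray(target_px, dtype=float)
    y = np.asarray(reference_px, dtype=float)
    # Covariance matrix of the (target, reference) pixel pairs
    cov = np.cov(x, y)
    # The leading eigenvector of the covariance matrix is the first
    # principal axis of the point cloud, i.e. the orthogonal fit line
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]
    gain = vy / vx                    # sign of the eigenvector cancels out
    offset = y.mean() - gain * x.mean()
    return gain, offset

# Hypothetical pseudoinvariant pixel pairs (same locations in both images)
target_vals = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
reference_vals = 1.5 * target_vals + 5.0      # perfectly linear toy example
gain, offset = orthogonal_gain_offset(target_vals, reference_vals)

# The fitted gain/offset are then applied to the whole target image
normalized = gain * target_vals + offset
```

[In practice `target_vals`/`reference_vals` would be arrays of digital numbers sampled at the manually chosen invariant locations in each image.]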
> There are automated algorithms for doing the pseudoinvariant pixel
> selection (search for "radiometric normalization remote sensing" on
> google scholar), or if you assume that the images do not change between
> dates and are WELL rectified to one another, you can extract the ENTIRE
> overlap zone between the two images and calculate the regressions based
> on those. This last suggestion is probably the fastest, but also incurs
> the most error and I wouldn't necessarily recommend it.
>
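[The whole-overlap variant could be sketched like this, assuming the two images are already co-registered 2-D arrays on a common grid, with NaN outside each image's footprint. The arrays here are hypothetical toy data, and an ordinary least-squares fit stands in for the regression; as noted above, regressing on every overlap pixel is fast but assumes nothing changed between the acquisitions.]

```python
import numpy as np

# Hypothetical co-registered images on a common 4x6 grid; NaN marks
# pixels outside each image's footprint, so columns 2-3 overlap.
ref_img = np.full((4, 6), np.nan)
tgt_img = np.full((4, 6), np.nan)
ref_img[:, 2:] = np.arange(16, dtype=float).reshape(4, 4) * 2.0 - 1.0
tgt_img[:, :4] = np.arange(16, dtype=float).reshape(4, 4)

# The overlap zone is wherever both images have data
overlap = ~np.isnan(ref_img) & ~np.isnan(tgt_img)
x = tgt_img[overlap]
y = ref_img[overlap]

# Ordinary least-squares fit over every overlap pixel
gain, offset = np.polyfit(x, y, 1)
normalized_tgt = gain * tgt_img + offset
```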
> This would be a VERY good algorithm to add to GRASS -- if anyone is
> interested in pursuing coding this, I can help design the algorithm
> (including which are the best automated invariant target selection
> algorithms).
>
> --j
Jonathan,
thank you very much for your reply. I've done my homework: I have already
read previous posts of yours as well as from other people, and I already
know this process, as I performed it in a change detection project [1].
It's a time-consuming process even for just 2 images. My real BIG
question is: how do Open Source professionals perform image normalisation
for aerial photos... let's say 300 photos? I cannot imagine that people
sit down and extract pseudoinvariant targets for 300 photos (unless they
are paid a lot for that).
As I wrote, the mosaic that I work on is a MESS, and the people do not
provide the original data, so I don't have any overlapping zones at
all :D So I'll forget about the normalisation anyway!
The next possible solution for mapping my forest gaps (see my first and
second mails) is, I think, to somehow extract segments and then identify
the forest gaps visually. Segmentation would save me, since it's faster
to recognise homogeneous gaps that way. For now I am kind of disappointed,
since I can't get i.smap to do this segmentation-only task. And of course
I cannot collect training samples for 300 photos.
Any Open Source alternatives for image segmentation?
[1] Details: I performed an empirical image normalisation, that is, a
regression-based normalisation, for burned-area mapping with MODIS
satellite imagery (a pre-fire and a post-fire image), more or less the
way you describe it. I intend to participate in FOSS4G in South Africa
(although other difficulties do not allow me to attend the upcoming
conference). I have a step-by-step document of more than 120 pages, and
I don't know anybody with experience who would like to have a look at
it, so it's still under heavy correction :-)
P.S. If anyone is interested in having a look at my step-by-step document,
I invite them for a free vacation at my home in Central Greece :D