[GRASS-user] Advice for starting a GRASS project

Ned Horning nedh at lightlink.com
Fri Jan 9 06:53:35 EST 2009


I'd like to get some feedback on my proposed approach for a land cover classification
project in GRASS. Basically I'm looking for pointers to help me avoid too many
learning curve pitfalls. I have the 3rd edition of the GRASS book (a great resource)
and lots of image processing experience using other software but limited GRASS
experience.

After several years of failed attempts at learning GRASS I have (nearly) decided
to do a large project with it, to force myself to become proficient enough to
use GRASS on a regular basis. In the past I would always start a GRASS project,
get frustrated, and then turn to the proprietary software I was very familiar
with to save time. I have been encouraging other people to use GRASS and figure
it's time I did the same. 

The project is fairly straightforward, but it involves processing about 30 fairly
large (1–2 GB each) 2.5 m SPOT and 1 m IKONOS images. The initial step is to
create a shrub / non-shrub map from the high resolution imagery, and then to use
the shrub presence locations (from the shrub / non-shrub map) together with several
Landsat TM/ETM+ images to train regression trees that produce a percent shrub
cover map. Many sets of images were acquired in the same pass on the same day and
could be mosaicked, but the mosaics would then be near or over the BigTIFF limits. 

These are the steps I expect to follow. Any comments or answers to my questions
would be greatly appreciated.

1) Import SPOT and IKONOS images into GRASS [one location for each image]
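For step 1, a minimal sketch of the import, assuming a GRASS 6.x build with GDAL support; `r.in.gdal` can create a new location that matches each image's own projection, which fits the one-location-per-image plan (the paths and location names below are placeholders):

```shell
# Start GRASS in any existing location, then import each scene into
# its own new location derived from the image's projection metadata.
# (File paths and location names are placeholders.)
r.in.gdal input=/data/spot/scene01.tif output=spot_scene01 \
    location=spot_scene01_loc

# Multi-band imagery comes in as one raster per band:
# spot_scene01.1, spot_scene01.2, ... in the new location.
```

This could be wrapped in a shell loop over all 30 scenes so the import runs unattended.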

2) Shrub / non-shrub classification using GRASS. I am trying to think of ways
to avoid having to pick training data from each image (to save time) since there
are many sets of images that were collected consecutively on the same path on
the same day and therefore have similar illumination conditions. 
I could mosaic them, but I'm concerned the images will be too cumbersome to handle
(~3–8 GB). How does GRASS perform with large images on a four-year-old computer?
To avoid mosaicking the images I could collect training statistics from one
(or maybe combine statistics from two images?) and apply those statistics to
each of the images in the set using a supervised classification algorithm. I
may just end up picking training data from each image, but that will be very
time-consuming. I am considering using i.smap. Is this feasible with large images?
A maximum likelihood algorithm would probably work fine, but I thought it would
be interesting to try i.smap. 
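If i.smap turns out to be workable, the GRASS 6 sequence is roughly the following: group the bands, generate signatures from a training raster, then classify. All map and group names here are placeholders; the training raster would come from digitized shrub / non-shrub areas (e.g. a vector map converted with v.to.rast):

```shell
# Group the bands of one scene for classification (names are placeholders)
i.group group=spot01 subgroup=spot01 \
    input=spot_scene01.1,spot_scene01.2,spot_scene01.3

# Generate signatures for i.smap from a training raster whose
# categories mark shrub and non-shrub areas
i.gensigset trainingmap=shrub_training group=spot01 subgroup=spot01 \
    signaturefile=sig_shrub

# Sequential maximum a posteriori (SMAP) classification
i.smap group=spot01 subgroup=spot01 signaturefile=sig_shrub \
    output=shrub_nonshrub
```

Because i.smap reads a saved signature file, the same signatures could in principle be reused across the scenes from the same pass, which is the time-saver described above, though the results would be worth spot-checking on each scene.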

3) Project the shrub / non-shrub maps and the SPOT/IKONOS images to Albers using
GDAL [automate using a script]
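Step 3 could be scripted outside GRASS with gdalwarp; a sketch, assuming the target Albers parameters are supplied as a PROJ.4 string (the projection parameters and paths shown are examples only and would need to be replaced):

```shell
#!/bin/sh
# Reproject every exported GeoTIFF to Albers Equal Area.
# The projection parameters below are placeholders, not recommendations.
ALBERS='+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=23 +lon_0=-96 +datum=NAD83'

for f in /data/export/*.tif; do
    # Nearest-neighbour resampling preserves the class values
    # of the shrub / non-shrub maps
    gdalwarp -t_srs "$ALBERS" -r near "$f" "${f%.tif}_albers.tif"
done
```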

4) Georeference the shrub / non-shrub map to the reference Landsat images [might
do this with ENVI or ERDAS unless GRASS is a good choice]
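GRASS can handle this interactively with i.points/i.rectify, but if step 4 ends up scripted, GDAL offers a non-interactive route: attach ground control points with gdal_translate, then warp against the reference system. A sketch with made-up GCP coordinates:

```shell
# Attach GCPs as "pixel line easting northing" -- the values here
# are placeholders, not real control points
gdal_translate \
    -gcp 100 200 345000 4890000 \
    -gcp 2500 180 352000 4889500 \
    -gcp 1300 2100 348000 4882000 \
    shrub_map.tif shrub_map_gcp.tif

# Warp using a first-order polynomial fitted to the GCPs;
# nearest-neighbour keeps the shrub / non-shrub classes intact
gdalwarp -r near -order 1 shrub_map_gcp.tif shrub_map_georef.tif
```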

5) Create a percent shrub cover map using a regression tree algorithm, Landsat
imagery, and shrub location data from the high resolution shrub / non-shrub map.
I will probably do this using proprietary software unless I can do it easily in
GRASS, since a method has already been established for another project. 

All the best, 

Ned
