sensitivity analysis

James Darrell McCauley mccauley at ecn.purdue.edu
Sat Jun 25 19:16:29 EDT 1994


mike camann (camann at pick.uga.edu) writes on 25 Jun 94:
>mis-classifications in the underlying data layers.  To do this, I need
>to randomly select a fixed percentage of polygons within each map
>layer, and reclassify them to random categories within the set of
>possible categories for that map.

when you say "polygons," are you talking about vector data? If so,
you might want to copy the vector map and consecutively
label each polygon (1-400). This could be done fairly quickly
with a little awk (on the dig_att file, assuming all polygons
are already labeled).
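
For instance, something along these lines (an awk one-liner over the last
field would do the same job). This is only a sketch: it assumes each
dig_att record is a single line of the form TYPE EAST NORTH CATEGORY, so
check that against your files first; the file name "relabel.c" is just for
illustration.

#include <stdio.h>

/* read a dig_att-style file on stdin and number the 'A' (area)
 * records consecutively 1, 2, 3, ...
 * usage: relabel < dig_att > dig_att.new
 */
int main(void)
{
  char line[256], type[8];
  double east, north;
  long cat, next = 1;

  while (fgets(line, sizeof line, stdin) != NULL) {
    if (sscanf(line, "%7s %lf %lf %ld", type, &east, &north, &cat) == 4
        && type[0] == 'A')
      printf("A  %.2f  %.2f  %ld\n", east, north, next++);
    else
      fputs(line, stdout);      /* pass other records through untouched */
  }
  return 0;
}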

For raster data, you might look at r.clump to make contiguous
areas separate categories (1-400).

>each to a random value between 1 and 50 (I'm excluding NO_DATA from the

You'll probably have to write a little 10-20 line program to do this.
What distribution should the random values have? Uniform? Normal?
There's probably enough in the source for s.rand and s.perturb to
get you started with this.

First, select 20 random polygon labels between 1 and 400 (uniformly,
without replacement, I suppose). Then generate a random category between
1 and 50 for each of those 20 and you're done.
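
A rough sketch of both steps (the 1-400 labels, the sample of 20, and the
1-50 category range are just the numbers from above; a partial
Fisher-Yates shuffle does the sampling without replacement, and the output
is printed as "old = new" pairs, which should be close to what r.reclass
takes as rules):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NPOLY 400   /* polygon labels run 1..NPOLY */
#define NPICK  20   /* how many polygons to reclassify */
#define NCATS  50   /* new category drawn uniformly from 1..NCATS */

int main(void)
{
  int label[NPOLY];
  int i;

  srand((unsigned) time(NULL));

  for (i = 0; i < NPOLY; i++)
    label[i] = i + 1;

  /* partial Fisher-Yates shuffle: after NPICK swaps, label[0..NPICK-1]
   * holds a uniform sample drawn without replacement.  (rand() % n has
   * a slight modulo bias; fine for a sketch, use a better RNG if it
   * matters.) */
  for (i = 0; i < NPICK; i++) {
    int j = i + rand() % (NPOLY - i);
    int tmp = label[i];
    label[i] = label[j];
    label[j] = tmp;
  }

  /* print "old_label = new_category" pairs */
  for (i = 0; i < NPICK; i++)
    printf("%d = %d\n", label[i], 1 + rand() % NCATS);

  return 0;
}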

depending upon how critical this work is (e.g., are people's lives
involved?), you may want to be careful when selecting a RNG. The
appended code uses the one distributed with the OS (which is not
known to be super-good).

--Darrell


#include <stdlib.h>
#include <stdio.h>
#include <time.h>

/* uniform integer in [lo,hi], built on the system rand() */
#define RANDOM(lo,hi) ((int)((double)rand()/((double)RAND_MAX+1.0)*((hi)-(lo)+1))+(lo))

int main(void)
{
  int i;

  srand((unsigned) time(NULL));   /* seed, or you get the same sequence every run */
  for (i = 0; i < 20; i++)        /* note: this draws with replacement */
    printf("%d\n", RANDOM(1, 400));
  return 0;
}


