David Mark's suggestion
westerve at marla.urban.uiuc.edu
Wed Jan 22 10:25:04 EST 1992
Providing an automated capability to estimate execution time of GRASS commands
has been stuck in the rumination-over-beer stage. We find the following
hurdles to be problematic - some more so than others:
1) Working under UNIX, GRASS has the wonderful advantage of utilizing virtual
memory. A program running entirely in memory without system page swapping
can run many times faster than the same program running while page swapping
is taking place. Whether or not the system will be busy swapping depends
on the entire system load. With multi-user multi-tasking systems it is
very difficult to estimate up front wall-clock execution time.
2) GRASS is a multi-platform system. It runs on a wide range of platforms
(Cray - workstations - PC's), each of which can have anywhere from one to
hundreds of megabytes of main memory and from almost no swap space to
gigabytes of it. Estimating wall-clock time across such different machines
is problematic.
3) Variable load. Assume that a quick up-front estimate can be made of the
memory and CPU requirements of a GIS operation, and that current system
characteristics can be measured accurately (CPU speed, available real
memory, available virtual memory, amount of allocated memory actually
active (being paged in and out), and speed of associated peripherals such
as plotters). It is still difficult to anticipate future system use. A
program running continuously (at low load) could record a history of
system loads.
At run-time, the GIS program could compare its CPU needs, together with the
time-of-day, day-of-the-week, and holiday schedule, against the historical
information to make a best guess at the anticipated clock-time. Of course,
unanticipated high use will throw off the estimate.
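The low-load recorder described above could be sketched as a small script
run periodically (e.g. from cron). This is only an illustration of the
idea, not part of GRASS; the log path, sampling scheme, and reliance on
the "load average" field of uptime's output are all assumptions:

```shell
#!/bin/sh
# Hypothetical load-history sampler: append day-of-week, hour-of-day,
# and the 1-minute load average to a history file. Run once a minute
# (e.g. via cron) to accumulate the history the estimator would consult.
LOG=${LOG:-/tmp/loadhist.log}

# uptime's trailing fields are the 1-, 5-, and 15-minute load averages;
# keep only the 1-minute figure.
load=`uptime | sed 's/.*load average[s]*: //' | cut -d, -f1 | tr -d ' '`

# Record: <day-of-week 0-6> <hour 00-23> <load>
echo "`date '+%w %H'` $load" >> "$LOG"
```

A GIS program could then average the recorded loads for the matching
day-of-week and hour before guessing at wall-clock time.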
Our current approach has been to leave time estimation to the user. After
a very few repetitions of a given operation, the user readily gets a feel
for how the software performs on the given hardware.
One approach we have taken is to run, by default, some of the more
CPU-intensive operations in the background. The user automatically gets
mail upon completion of the task. Duane Marble once responded to GIS-L
complaints about this procedure by suggesting that if a GIS operation takes
that long, its algorithm should be overhauled. While that is always a
possibility,
some operations are inherently difficult to speed up. For example, consider
the following analyses on a 4000x4000 raster map (16M of pixels): proximity
analysis (buffer zones) - especially on a latitude-longitude projection,
many neighborhood operations, application of an expert-system rule base to
individual pixels, and many of the statistical operations involved in image
processing. Many vector operations can similarly take a very long time to
complete. Hence the conversation that I am continuing above.