[GRASS5] Parallel GRASS modules for high performance

Yann Chemin ychemin at gmail.com
Sat Nov 26 03:49:10 EST 2005


What are, generally, the things that change through release cycles?

Our experience with very simple MPI parallelization is that it can be
incorporated into a code base relatively easily. The module can still be
compiled and run sequentially, but with some #ifdef conditions the
sequential loop may be skipped and another version of the loop compiled
with MPI instead, if the build is configured for it. In that case the
code carries both types of infrastructure (and is a bit longer for it).
People can keep porting the sequential code at any time, and if a
"parallel" user turns up, the compilation troubleshooting will most
likely be limited to the MPI code.
We have not tried that #ifdef trick yet, but it is something we want to
end up doing with our MPI and NinfG GRASS modules, as sketched below.
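
A minimal sketch of the idea (untested; HAVE_MPI and process_row are
illustrative names, not actual GRASS symbols - a configure option such
as --with-mpi would be expected to define the macro):

    /* One source file carries both loops; building with -DHAVE_MPI
     * selects the MPI path, otherwise the plain loop is compiled. */
    #include <stdio.h>
    #ifdef HAVE_MPI
    #include <mpi.h>
    #endif

    static void process_row(int row)
    {
        /* stand-in for the real per-row work of the module */
        printf("processing row %d\n", row);
    }

    int main(int argc, char **argv)
    {
        int row, nrows = 1000;

    #ifdef HAVE_MPI
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* MPI version: each process handles every size-th row */
        for (row = rank; row < nrows; row += size)
            process_row(row);
        MPI_Finalize();
    #else
        /* sequential version: compiled when MPI is not configured */
        for (row = 0; row < nrows; row++)
            process_row(row);
    #endif
        return 0;
    }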

0.02 cents

On 11/26/05, Helena Mitasova <hmitaso at unity.ncsu.edu> wrote:
>
> You must have missed a lot in your search - numerous parallel
> versions of interpolation and other modules have been written since
> 1993. There should be a link to a parallel IDW on the GRASS web
> site, and a parallel version of the s.surf.rst module is here:
> http://skagit.meas.ncsu.edu/~helena/grasswork/grasscontrib/rstmods2fixed.tar.gz
>
> The problem with these implementations is that unless the developer
> is committed to keeping them up to date, they die pretty quickly
> (e.g. the rst modules work with GRASS5 but not GRASS6). So I have
> been begging everybody who tries to do parallel work for GRASS to do
> the parallelization on top of the modules rather than within them,
> so that it is minimally dependent on changes within the modules. For
> example, v.surf.rst can be run efficiently by splitting the region
> into smaller overlapping subregions, sending each subregion to a
> different processor, and then patching the results together. The
> same can be done for r.mapcalc, r.slope.aspect and many other
> modules (there are some exceptions, such as modules that include
> flow routing). This approach may have its own problems, but it is
> definitely more general and has a much better chance of surviving
> beyond one release cycle than writing a parallel version of a
> module.
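>
> To sketch the idea (untested; map names and region bounds are made
> up, and in practice each worker needs its own mapset, since the
> current region lives in the per-mapset WIND file):
>
>     /* fork one worker per strip; each runs v.surf.rst on its own
>      * overlapping subregion, then the pieces are patched together */
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <sys/wait.h>
>     #include <unistd.h>
>
>     #define NPROC   4
>     #define WEST    0.0
>     #define EAST    4000.0
>     #define OVERLAP 100.0  /* overlap so splines blend at the seams */
>
>     int main(void)
>     {
>         char cmd[512];
>         int i;
>
>         for (i = 0; i < NPROC; i++) {
>             if (fork() == 0) {  /* child: interpolate one strip */
>                 double strip = (EAST - WEST) / NPROC;
>                 double w = WEST + i * strip - (i > 0 ? OVERLAP : 0);
>                 double e = WEST + (i + 1) * strip
>                            + (i < NPROC - 1 ? OVERLAP : 0);
>
>                 snprintf(cmd, sizeof(cmd),
>                          "g.region w=%.0f e=%.0f; "
>                          "v.surf.rst input=points elev=elev_part_%d",
>                          w, e, i);
>                 exit(system(cmd));
>             }
>         }
>         while (wait(NULL) > 0)  /* wait for all workers */
>             ;
>
>         /* stitch the strips back into one surface */
>         system("g.region w=0 e=4000; "
>                "r.patch input=elev_part_0,elev_part_1,elev_part_2,"
>                "elev_part_3 output=elev_full");
>         return 0;
>     }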
>
> I have plenty of large data sets (tens to hundreds of millions of
> points), but you need to get GRASS to read them first.
>
> Helena
>
>
>
> On Nov 25, 2005, at 12:33 PM, Muzaffer Ayvaz wrote:
>
> > Hi;
> >
> > Some modules in GRASS, especially the surface generation modules,
> > take a long time with our data and our parameters. I am trying to
> > write parallel versions of these modules with the MPI library, to
> > be able to run GRASS on high-performance parallel machines.
> >
> > I have looked through the mailing list archives, and there was
> > someone trying to do the same thing, but that mail was from 1993.
> > I could not reach this person.
> >
> > Now I want to hear about your experiences. Has anybody tried the
> > same thing? Can anybody tell me about compiling GRASS with this
> > library (MPI) - must mpcc be used to compile? Or do you have
> > large data sets for surface generation?
> >
> > Also, which modules take a long time in addition to the surface
> > generation modules? I mean minutes, hours or more.
> >
> > Thank you all.
> >
> > Yours respectfully,
> >
> > Muzaffer Ayvaz
> >
> Helena Mitasova
> Dept. of Marine, Earth and Atm. Sciences
> 1125 Jordan Hall, NCSU Box 8208,
> Raleigh NC 27695
> http://skagit.meas.ncsu.edu/~helena/
>
> _______________________________________________
> grass5 mailing list
> grass5 at grass.itc.it
> http://grass.itc.it/mailman/listinfo/grass5
>

