[postgis-devel] GSoC 2012 PostGIS Raster Project - Distance Analysis in PostGIS Raster (Qing Liu)

Pierre Racine Pierre.Racine at sbf.ulaval.ca
Mon Jun 25 14:35:42 PDT 2012


Sorry I could not answer this before. I was teaching QGIS intensively... and then took some holidays.

> -----Original Message-----
> From: Paul Ramsey [mailto:pramsey at opengeo.org]
> Sent: Tuesday, June 19, 2012 1:00 PM
> To: Qing Liu
> Cc: postgis-devel at postgis.refractions.net; Pierre Racine
> Subject: Re: [postgis-devel] GSoC 2012 PostGIS Raster Project - Distance Analysis
> in PostGIS Raster (Qing Liu)
> 
> Starting only from points seems fundamentally limiting for no good
> reason. The added "precision" you get by working directly against the
> points seems pretty pointless for most grids. You'll get a lot of
> users forced to convert their input geometries into point sets before
> starting, and then their question will be "why is it slow" and you'll
> say "because you have too many points, thin them" and they'll say
> "how" and you'll say "snap them to a grid basis" and at that point
> they might as well have rasterized anyways.

The idea was not to start from one point but from a coverage of points, however numerous they are. The gain is not in precision, it is in scalability. Give me any point coverage (or line or polygon coverage) and some raster specifications and I will return a (one-tile) distance raster coverage. Producing an intermediate raster is costly if the requested raster is high resolution; carrying a simple point set is much lighter.
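The per-tile computation described here can be sketched in plain Python (a brute-force illustration, not PostGIS code; the tile origin, resolution, and point set are made-up examples):

```python
import math

def distance_tile(points, upper_left, pixel_size, width, height):
    """Compute one tile of a distance raster: each pixel value is the
    Euclidean distance from the pixel center to the nearest input point.
    Brute force; a real implementation would use a spatial (KNN) index."""
    ulx, uly = upper_left
    tile = []
    for row in range(height):
        tile_row = []
        for col in range(width):
            # pixel center in map coordinates
            cx = ulx + (col + 0.5) * pixel_size
            cy = uly - (row + 0.5) * pixel_size
            d = min(math.hypot(cx - px, cy - py) for px, py in points)
            tile_row.append(d)
        tile.append(tile_row)
    return tile

# Only the (light) point set travels with the query; no intermediate
# rasterization of the input geometries is needed.
tile = distance_tile([(0.0, 0.0)], upper_left=(0.0, 2.0), pixel_size=1.0,
                     width=2, height=2)
```

Each tile depends only on the point coverage and the raster specification, which is what makes the approach scale to a tiled output coverage.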

> I still think a generic cost calculator would be more useful than a
> single-purpose distance calculator.

Right. But we were heading toward a more generic "nearest neighbor" solution so the logic could be reused to implement other interpolation tools. I still think computing a simple distance raster is closer to "computing a pixel value based on some neighbors" than computing a cost raster is. The logic behind computing a cost raster is far more complex than any type of interpolation, and it seems inefficient to me to add that complexity to the much simpler problem of computing the Euclidean distance to the nearest point/line/polygon, possibly using KNN indexing to find it.
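A hedged sketch of the reuse in question: one nearest-neighbour search serves both the distance raster (k = 1, keep the distance) and an interpolation method such as inverse-distance weighting (k neighbours, weight their values). The function names and the choice of IDW are mine, for illustration only:

```python
import math

def k_nearest(points, cx, cy, k):
    """Return the k points nearest to (cx, cy) as (distance, point) pairs.
    Brute-force stand-in for a KNN index lookup."""
    return sorted(((math.hypot(cx - p[0], cy - p[1]), p) for p in points))[:k]

def distance_value(points, cx, cy):
    # Distance raster: the pixel value is the distance to the nearest point.
    return k_nearest(points, cx, cy, 1)[0][0]

def idw_value(points, cx, cy, k=3, power=2):
    # Inverse-distance-weighted interpolation: same neighbour search,
    # different combination of the neighbours. Each point is (x, y, value).
    neighbours = k_nearest([(x, y) for x, y, _ in points], cx, cy, k)
    values = {(x, y): v for x, y, v in points}
    num = den = 0.0
    for d, p in neighbours:
        if d == 0.0:
            return values[p]  # pixel center sits exactly on a sample point
        w = 1.0 / d ** power
        num += w * values[p]
        den += w
    return num / den
```

Both pixel values come from the same neighbour search, which is the sense in which a distance raster is "computing a pixel value based on some neighbors".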

We might also want this to work on a tiled coverage (this is not necessarily well stated as a constraint in the document).

Another factor to add to the equation is that a simple distance or interpolation function does not need an extra cost raster as input. For cost distance you have to provide this cost in some way, and it might be heavy. How we could make such an algorithm work with a tiled cost coverage, I still have no idea.

So I think we should go like this:

-Continue to work on producing a distance raster with an approach that opens the way to further interpolation tools.

-Adopt another, more restrictive approach (forget about tiling for now) to produce a cost distance raster.

BTW, before thinking about the cost distance raster, we need a way to produce the cost raster itself. ST_Union from geometries was the way planned long ago, but it is still very inefficient at producing one raster from a set of geometries. Maybe we could think about working with a cost polygon coverage instead, since that's generally what we have in the beginning anyway... Just an idea. I want us to rethink raster/vector analysis in the context of huge raster/vector coverages, not just copy what other software does. Other software is generally bad at working with huge tiled coverages. And don't even think about irregularly tiled ones.
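To make the complexity gap concrete: cost distance needs a shortest-path propagation (typically Dijkstra's algorithm) across the whole cost grid, rather than an independent per-pixel nearest-neighbour lookup, which is also why it is hard to see how it could run one tile at a time. A minimal sketch over a made-up cost grid (4-connected moves; the mean-of-two-cells move cost is one common convention, not a PostGIS specification):

```python
import heapq

def cost_distance(cost, sources):
    """Accumulated-cost surface over a cost grid (Dijkstra, 4-connected).
    Moving between adjacent cells costs the mean of the two cell costs.
    `sources` are (row, col) cells with accumulated cost 0."""
    rows, cols = len(cost), len(cost[0])
    acc = [[float("inf")] * cols for _ in range(rows)]
    heap = [(0.0, r, c) for r, c in sources]
    for _, r, c in heap:
        acc[r][c] = 0.0
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > acc[r][c]:
            continue  # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + (cost[r][c] + cost[nr][nc]) / 2.0
                if nd < acc[nr][nc]:
                    acc[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return acc

# Unlike the per-pixel distance case, every cell's value depends on a path
# through potentially the whole grid, so tiles cannot be computed independently.
acc = cost_distance([[1, 1, 1], [1, 9, 1], [1, 1, 1]], sources=[(0, 0)])
```

The cheapest path around the expensive center cell determines the far corner's value, which is exactly the global dependency that plain Euclidean distance does not have.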

Pierre


