[postgis-users] Geoprocessing & BigData

David Haynes haynesd2 at gmail.com
Thu Jan 28 10:03:30 PST 2016


The way we are thinking of handling something like that now would be to
load the partitioned data onto the nodes. I imagine you want to load all
the data into one table on one node.
This would result in a large table that might be slow even for simple
queries; alternatively, you could use PostgreSQL inheritance and load
the data and create indices by state.
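
To make the inheritance idea concrete, here is a minimal sketch (the
table, column, and state names are hypothetical; on PostgreSQL 9.x this
relies on constraint_exclusion so the planner can skip non-matching
children):

```sql
-- Parent table: holds no rows itself; children hold one state each.
CREATE TABLE blocks (
    gid   serial,
    state char(2) NOT NULL,
    geom  geometry(MultiPolygon, 4269)
);

-- One child per state; the CHECK constraint lets the planner exclude
-- a child when the WHERE clause names a different state.
CREATE TABLE blocks_mn (CHECK (state = 'MN')) INHERITS (blocks);
CREATE TABLE blocks_wi (CHECK (state = 'WI')) INHERITS (blocks);

-- Spatial indices are created per child, as suggested above.
CREATE INDEX blocks_mn_geom_idx ON blocks_mn USING gist (geom);
CREATE INDEX blocks_wi_geom_idx ON blocks_wi USING gist (geom);

-- Queries against the parent transparently reach the children:
-- SELECT count(*) FROM blocks WHERE state = 'MN';
```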

To address the comment that Rémi-C brought up: I have spent a lot of
time investigating Postgres-XC, Postgres-XL, CitusDB, and Greenplum.
None of them really parallelizes your spatial functions. This is what we
have found.

The current implementation of Paragon uses a variant of round-robin
declustering. The declustering algorithm produced 1024 spatial partitions
after processing the dataset in Table 1. The physical storage and
management of the partitions in Paragon is done by taking advantage of
PostgreSQL's sharding feature [16]. We extended the SQL create table
statement to specify spatial declustering parameters, such as the number
of partitions to be created, the declustering method, and a label for the
declustering scheme. To execute a spatial join, the labels of the two
tables being joined must match. This mechanism allows the same spatial
dataset to be partitioned using different declustering schemes.
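
The extended create table statement is not quoted in this thread, so
purely as an illustration (everything after the column list below is
hypothetical syntax, not actual Paragon syntax):

```sql
-- Hypothetical syntax: number of partitions, declustering method, and
-- a scheme label attached at table-creation time; two tables sharing
-- the label 'rr1024' could then be spatially joined.
CREATE TABLE edges (gid serial, geom geometry)
    DECLUSTER BY ROUND_ROBIN PARTITIONS 1024 LABEL 'rr1024';
```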

Table 1. Spatial data used for comparison

Database Table (acronym)   Geometry   Number of Objects
Area-water (Aw)            Polygon               39,334
Area-landmass (Al)         Polygon               55,951
Edge (Ed)                  Polyline           4,173,498



Table 2. Comparison of Query Times: Paragon vs PostgreSQL

Query (acronym)                        PostgreSQL (seconds)   Paragon (seconds)   Speedup
Polygon Overlaps Polygon (Aw_ov_Aw)                    77.3                53.5      1.37
Polyline Touches Polygon (Ed_to_Al)                   452.9               246.0      1.84
Polyline Crosses Polyline (Ed_cr_Ed)                 1693.2              1022.0      1.65



We executed spatial join queries from the Jackpine spatial database
benchmark [6] with Paragon on a two-node cluster. The queries are
expressed in SQL using some of the spatial predicates adopted by the Open
Geospatial Consortium (OGC). For instance, Code 1 shows the "Polyline
Touches Polygon" query from Table 2.

Code 1. Spatial SQL Query

SELECT COUNT(*) FROM edges ed, arealm al WHERE ST_Touches(ed.geom, al.geom);


On Wed, Jan 27, 2016 at 8:56 PM, Lars Aksel Opsahl <Lars.Opsahl at nibio.no>
wrote:

> Hi
>
>
> We have done some testing on this using a single Postgis server.
>
>
> -layer 1 has 7,924,019 rows with 11 columns and about 1 billion points.
>
> -layer 2 has 1,088,614 rows with 20 columns.
>
> Both layers cover all of Norway.
>
>
> I do an "esri" union in a PL/pgSQL function and get a new table with
> 27852836 rows and 30 columns with multipolygons. The size of the new table
> is about 40 GB.
>
>
> This is done in less than 3 hours (real 152m8.248s)
>
>
> I have made a PL/pgSQL function called func_esri.get_esri_union that I
> call as shown below.
>
>
> psql -t -q -o /tmp/vroom2.sql sl -c"drop table IF EXISTS sl_lop.r1; drop
> table IF EXISTS sl_lop.c1; select
> func_esri.get_esri_union('org_ar5arsversjon.ar5_2013_komm_flate id
> geo','org_ar.ar250_flate sl_sdeid geo', 'sl_lop.r1','sl_lop.c1',3000,false)"
>
>
> Then I take the output from this function and use GNU parallel to run the
> generated SQL statements in 20 parallel threads.
>
>
> time parallel -j 20 psql -h vroom2 -U postgres sl -c :::: /tmp/vroom2.sql
>
>
> This is a fast PostGIS server with SSD disks and a lot of memory and CPU.
>
>
> The basic idea is that I use
> https://github.com/larsop/content_balanced_grid/ to make a grid and then
> create SQL statements adjusted for this grid. The size of the cells varies
> a lot. The 3000 parameter in the SQL function limits each cell to a
> maximum of 3000 bounding boxes.
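>
> To illustrate the idea, each generated line in /tmp/vroom2.sql would be
> one self-contained statement restricted to a single grid cell, roughly
> along these lines (the table and column names come from the command
> above, but the statement itself, the SRID, and the envelope coordinates
> are hypothetical placeholders):
>
> ```sql
> -- One statement per grid cell; the bounding-box filter makes the
> -- statements independent, so GNU parallel can run 20 at a time.
> INSERT INTO sl_lop.r1
> SELECT a.id, b.sl_sdeid, ST_Intersection(a.geo, b.geo) AS geo
> FROM org_ar5arsversjon.ar5_2013_komm_flate a,
>      org_ar.ar250_flate b
> WHERE a.geo && ST_MakeEnvelope(x1, y1, x2, y2, 25833)
>   AND ST_Intersects(a.geo, b.geo);
> ```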
>
>
> I will post the code on GitHub as soon as I have time; I need to clean it
> up and add some comments first.
>
>
> We also did a small comparison with ArcGIS, where we ran on a small subset
> of the tables and got a result file with 186,372 rows. That took about 5
> minutes with the ArcGIS software and 1 minute in Postgres. This test was
> run on a smaller Postgres database server.
>
>
> Since the hardware differs between ArcGIS and PostGIS I will not put much
> into this comparison, but my point is that PostGIS scales very well on big
> data, given the right hardware and software.
>
>
> Lars
>
>
> ________________________________
> Fra: postgis-users [postgis-users-bounces at lists.osgeo.org] på vegne av
> Ravi Pavuluri [ravitheja at ymail.com]
> Sendt: 27. januar 2016 21:31
> Til: PostGIS Users Discussion
> Emne: Re: [postgis-users] Geoprocessing & BigData
>
> Hi David,
>
> I am dealing with census blocks/census block groups spanning a few
> million records.
>
> Thanks,
> Ravi.
>
> On Monday, January 25, 2016 10:18 AM, David Haynes <haynesd2 at gmail.com>
> wrote:
>
>
> We have done some work, implementing parallel spatial queries using a
> spatial declustering algorithm. How large are your datasets?
>
> On Mon, Jan 18, 2016 at 1:51 PM, Rémi Cura <remi.cura at gmail.com> wrote:
> Hey,
> if you have one beefy server, you can parallelize by throwing several
> queries working on subsets of your data
> (aka parallel processing through data partitioning).
> One conceptual example: you want to process the world, so you create 20
> workers and a list of countries, and then have the workers process the
> list country by country.
>
> If you think one Postgres server will not be sufficient,
> you could of course shard your data across several servers,
> with options ranging from writing from scratch (you rewrite everything),
> to using existing open source code, to dedicated solutions like
>  Postgres-XC, Greenplum, ...
>
> However, sorry to say this, but in your case it looks like your first
> improvement step will not come from massive parallelism but from first
> better understanding the world of geospatial data and PostGIS.
>
> Cheers,
> Rémi-C
>
> 2016-01-18 19:30 GMT+01:00 Vincent Picavet (ml) <vincent.ml at oslandia.com>:
> Hi Ravi,
>
>
>
>
> On 18/01/2016 19:14, Ravi Pavuluri wrote:
> > Hi All,
> >
> > I am checking if there is a way to quickly process large datasets, such
> > as census blocks, in PostGIS, and also by leveraging a big data
> > platform. I have a few questions in this regard.
> >
> > 1) When I try an intersect of sample census blocks with another polygon
> > layer, PostGIS 2.2 (on Postgres 9.4) takes ~60 minutes (after optimizing
> > based on http://postgis.net/2014/03/14/tip_intersection_faster/ ), while
> > ESRI ArcMap takes ~10 minutes. The PostGIS layers already have spatial
> > indices. Is there any way to optimize this further?
>
> Following the links on your page, here is a good answer from Paul (TL;DR:
> st_intersection is slow, avoid it):
>
> http://gis.stackexchange.com/questions/31310/acquiring-arcgis-like-speed-in-postgis/31562
>
> > 2) What is the equivalent of ESRI Union in PostGIS? I didn't see any
> > out-of-the-box functions, and any tips here are appreciated.
>
> If ESRI Union makes a union, maybe st_union? But I guess there are some
> semantic issues here.
>
> > 3) Is there any way we can expedite these geoprocessing
> > tasks (union/intersect, etc.) using a big data platform (e.g. Hadoop)?
> > Most examples talk about analysis (contains, etc.) but not about
> > geoprocessing on geospatial data. Any input is appreciated.
>
> Lots of people do geoprocessing too with PostGIS, including long-running
> jobs on large volumes of data (worldwide OSM data processing, namely).
> "Big data" is a really subjective term. Are your geoprocessing needs
> really parallelizable? What kind of volumes are we talking about? MB,
> GB, TB? What kind of hardware do you have at hand?
>
> One way to do some sort of map-reduce with PostGIS is to use a bunch of
> servers with FDW connections between a source master and these slaves,
> map the data processing to the slave servers, and reduce it on the main
> server. With a bit of Python as glue code this can be automated and made
> quite efficient, even though this kind of sharding is not automated
> (yet?).
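>
> A minimal sketch of that FDW wiring, with purely illustrative host,
> database, and schema names:
>
> ```sql
> -- On the master: make a worker's tables visible via postgres_fdw.
> CREATE EXTENSION IF NOT EXISTS postgres_fdw;
> CREATE SERVER worker1 FOREIGN DATA WRAPPER postgres_fdw
>     OPTIONS (host 'worker1.example.org', dbname 'gis');
> CREATE USER MAPPING FOR CURRENT_USER SERVER worker1
>     OPTIONS (user 'postgres');
> CREATE SCHEMA worker1_schema;
> IMPORT FOREIGN SCHEMA public FROM SERVER worker1 INTO worker1_schema;
> -- Each worker computes its shard; the master reduces, e.g. summing
> -- per-worker counts with UNION ALL over the foreign tables.
> ```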
>
> Vincent
>
> >
> > Thanks,
> > Ravi.
> >
> >
> > _______________________________________________
> > postgis-users mailing list
> > postgis-users at lists.osgeo.org
> > http://lists.osgeo.org/mailman/listinfo/postgis-users
>