[postgis-users] Huge query, out of memory

Nicolas Gillet - MARKET-IP nicolas.gillet at market-ip.com
Thu Jan 14 02:23:16 PST 2010


Hello,

 

I have a big shape file to process (about 300 MB, 1.5 million records).

I have to aggregate touching linestrings that share the same attributes.
The best approach I could find (with the help of this mailing list) was to
aggregate all the linestrings with the same attributes and then split
apart the ones that don't touch.

 

Here is the query:

SELECT subQ.id, subQ.feattyp, subQ.frc, subQ.shieldnum, subQ.name, null,
       (ST_Dump(merged_geom)).geom AS the_geom
FROM (
    SELECT MAX(id) AS id, feattyp, frc, MAX(shieldnum) AS shieldnum,
           nw.name, ST_LineMerge(ST_Collect(the_geom)) AS merged_geom
    FROM nw
    WHERE nw.frc >= 0
    GROUP BY nw.name, nw.feattyp, nw.frc
) AS subQ;
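Since the inner ST_Collect builds one large multilinestring per attribute combination before ST_LineMerge runs, one possible workaround (a sketch, not something from the original post) is to run the same aggregation one frc class at a time, so each pass holds fewer collected geometries in memory. The target table merged_nw below is a hypothetical name for illustration; the source table and columns follow the query above.

```sql
-- Hypothetical batched variant: run the aggregation one frc class at a
-- time instead of over the whole table at once. Because frc is already
-- part of the GROUP BY, each batch produces exactly the same groups as
-- the original query, just fewer of them per run.
INSERT INTO merged_nw (id, feattyp, frc, shieldnum, name, the_geom)
SELECT subQ.id, subQ.feattyp, subQ.frc, subQ.shieldnum, subQ.name,
       (ST_Dump(merged_geom)).geom
FROM (
    SELECT MAX(id) AS id, feattyp, frc, MAX(shieldnum) AS shieldnum,
           nw.name, ST_LineMerge(ST_Collect(the_geom)) AS merged_geom
    FROM nw
    WHERE nw.frc = 0          -- repeat with frc = 1, 2, ... instead of frc >= 0
    GROUP BY nw.name, nw.feattyp, nw.frc
) AS subQ;
```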

 

This has worked fine on smaller shapes, but on this one I get an
out-of-memory error.

 

I used pgTune to change the database default settings, and the result is as
follows:

 

#custom_variable_classes = ''  # list of custom variable class names
default_statistics_target = 50 # pgtune wizard 2010-01-13
maintenance_work_mem = 120MB # pgtune wizard 2010-01-13
constraint_exclusion = on # pgtune wizard 2010-01-13
effective_cache_size = 1408MB # pgtune wizard 2010-01-13
work_mem = 12MB # pgtune wizard 2010-01-13
wal_buffers = 8MB # pgtune wizard 2010-01-13
checkpoint_segments = 16 # pgtune wizard 2010-01-13
shared_buffers = 480MB # pgtune wizard 2010-01-13
max_connections = 80 # pgtune wizard 2010-01-13

 

All the other parameters have their default values.
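One setting worth noting for this kind of query is work_mem, which bounds the memory available to each sort or hash-aggregate step; the 12MB above is small relative to a 1.5-million-row GROUP BY. A possible experiment (an assumption on my part, not advice from the list) is to raise it for a single session rather than globally:

```sql
-- Hypothetical session-level override: affects only the current
-- connection, so other sessions keep the 12MB default.
SET work_mem = '256MB';
-- ...then run the aggregation query in the same session.
```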

 

What can I do to run my script on this heavy shape file?

 

Thank you

 

Nicolas.
