[postgis-users] Memory problem with ST_Within

Burgholzer, Robert rwburgholzer at deq.virginia.gov
Mon Aug 25 06:27:04 PDT 2008


Birgit,
My guess is that the part of the query that is causing the memory dump
is not the GIST index, but rather the within query, which must iterate
through all of your points.

That said, if you wish to determine whether the GiST index is causing
the problem, you could run the && bounding-box comparison in a separate
query, store the results (along with ID columns to map them back) in a
temp table, and then run the plain "within" query (not "st_within",
which forces index usage) on just the subset of records whose bounding
boxes overlap.
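
For example (just a rough sketch, and I am guessing at your schema here:
adjust the table names points/polygons and the columns gid/the_geom to
whatever you actually have):

CREATE TEMP TABLE candidate_pairs AS
  SELECT pt.gid AS point_id, ply.gid AS poly_id
    FROM points pt, polygons ply
   WHERE pt.the_geom && ply.the_geom;  -- bounding-box overlap, can use the GiST index

SELECT c.point_id, c.poly_id
  FROM candidate_pairs c
  JOIN points pt   ON pt.gid  = c.point_id
  JOIN polygons ply ON ply.gid = c.poly_id
 WHERE within(pt.the_geom, ply.the_geom);  -- exact test only on the reduced set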

I suspect that you will still find the memory dump problems.  

One other approach you could use to winnow down the field of candidates
is to add a second geometry column to your polygon table and fill it
with a simplified version of the geometries (using simplify(geomcol,
tolerance)). Sometimes the geometries can be simplified without losing
any relevant spatial information. In my experience, the best way to
reduce query intensity, improve speed and eliminate memory trouble is to
reduce the number of points that you are analyzing.
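
If you want to try that, something like the following should work (again
only a sketch: substitute your real table and column names, your data's
SRID and a tolerance that suits your data, all of which I am guessing at):

SELECT AddGeometryColumn('polygons', 'geom_simple', 31467, 'GEOMETRY', 2);  -- 31467 = placeholder SRID
UPDATE polygons SET geom_simple = simplify(the_geom, 10.0);                 -- 10.0 = placeholder tolerance in map units
CREATE INDEX polygons_geom_simple_gist ON polygons USING GIST (geom_simple);
VACUUM ANALYZE polygons;

Then run the within test against geom_simple instead of the_geom.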

r.b.

Robert W. Burgholzer
Surface Water Modeler
Office of Water Supply and Planning
Virginia Department of Environmental Quality
rwburgholzer at deq.virginia.gov
804-698-4405
Open Source Modeling Tools:
http://sourceforge.net/projects/npsource/

-----Original Message-----
From: postgis-users-bounces at postgis.refractions.net
[mailto:postgis-users-bounces at postgis.refractions.net] On Behalf Of
Birgit Laggner
Sent: Monday, August 25, 2008 6:34 AM
To: PostGIS Users Discussion
Subject: Re: [postgis-users] Memory problem with ST_Within

Frank Koormann wrote:
> * Birgit Laggner <birgit.laggner at vti.bund.de> [080821 09:59]:
>   
>> Hello Regina, hello Frank,
>>
>> thanks for your suggestions!
>>
>> Leaving out the_geom didn't solve the problem - I still ran out of
>> memory. I attached my postgresql.conf settings and the result of the
>> EXPLAIN ANALYSE SELECT to this e-mail.
>>     
>
> Hm, the explain analyse finishes successfully. Explain analyse actually
> executes the query (in your case in 675 milliseconds), so the problem
> seems to be returning the results to pgAdmin, assuming that the queries
> run on the same data set.
>
> Your memory related settings are already significantly increased.
> Without knowledge of your system I cannot say whether they are too high.
>
> Other points: 
> - What is the full error message?
> - Do you have more success when using psql instead of pgAdmin?
>
> Regards,
>
>         Frank
>   
Hello Frank,

this is actually a misunderstanding: of course, the EXPLAIN ANALYSE did
not run on the full dataset either. Instead, I ran the EXPLAIN ANALYSE on
my test dataset, which I used to check whether my query works at all.

The full error message is:
ERROR: Memory exhausted
SQL Status:53200
Detail: Failed on request of size 32.

I'm still testing whether psql is more successful than pgAdmin. Results
tomorrow...

Thank you!

Regards,

Birgit.

Mark Cave-Ayland wrote:
> Hi Birgit,
>
> If PostgreSQL stops due to lack of memory, it generally dumps 
> information about memory usage into the server log file before it 
> terminates. Can you see any such information in your PostgreSQL log 
> file? If so, posting it here would help greatly, along with the output
> of "SELECT postgis_full_version()".
>
>
> ATB,
>
> Mark. 
Hello Mark,

this is my postgis version:

"POSTGIS="1.3.3" GEOS="3.0.0-CAPI-1.4.1" PROJ="Rel. 4.6.0, 21 Dec 2007" 
USE_STATS"

I attached the memory usage information from the PostgreSQL log file to
this e-mail (pg.log).

Thank you!

Regards,

Birgit.


