[postgis-users] Query that locks up system memory/disk usage
Obe, Regina
robe.dnd at cityofboston.gov
Tue Jul 15 03:26:27 PDT 2008
How about just killing the query with the built-in functions in
PostgreSQL? It won't kill the backend, but it will abruptly stop the
query running in a process. I presume it does something similar to a
kill -2, and it has the advantage of being cross-platform.
1) To look at running backends
SELECT *
FROM pg_stat_activity;
The current_query field will tell you what query is currently running in
each backend process.
2) To kill an annoying query
SELECT pg_cancel_backend(5220);
where 5220 is the procpid listed in pg_stat_activity.
You could also use it to cancel more than one query by doing something
like
SELECT pg_cancel_backend(procpid)
FROM pg_stat_activity
WHERE usename = 'mleahy';
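Along the same lines, here is a sketch (the five-minute cutoff is just a
placeholder, and the column names - procpid, usename, current_query,
query_start - are the 8.3-era pg_stat_activity names) that cancels only
that user's queries which have been running for a while:

```sql
-- Cancel mleahy's queries that have run for more than five minutes.
-- Idle backends report '<IDLE>' in current_query, so they're skipped.
SELECT pg_cancel_backend(procpid)
FROM pg_stat_activity
WHERE usename = 'mleahy'
  AND current_query <> '<IDLE>'
  AND now() - query_start > interval '5 minutes';
```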
Hope that helps,
Regina
-----Original Message-----
From: postgis-users-bounces at postgis.refractions.net
[mailto:postgis-users-bounces at postgis.refractions.net] On Behalf Of Mike
Leahy
Sent: Tuesday, July 15, 2008 1:15 AM
Cc: PostGIS Users Discussion
Subject: Re: [postgis-users] Query that locks up system memory/disk
usage
Kevin,
Perhaps I can reproduce the same situation (maybe tomorrow when the
server isn't occupied) and give this a try. But I think I have already
ruled this out as an option. When I hit ctrl+c in the psql terminal,
I'd see a message that the query was being cancelled...then it would
just sit there and continue grinding away. Unless the ctrl+c somehow
didn't actually reach the postgres process, my guess is that the
attempt to roll back gracefully is just as intensive as the query
itself.
Mike
Kevin Neufeld wrote:
> Paul, while "kill -9" will undoubtedly stop the currently running
> query, it will also take down the entire database cluster, since the
> postmaster must assume the shared memory has become corrupt.
>
> I recommend using a "kill -2" instead, which is the same thing as
> issuing a ctrl^c while in the terminal program. It may take longer,
> since it has to roll back the transaction, but it will do so
> gracefully.
>
> -- Kevin
>
> Paul Ramsey wrote:
>> Break yourself of the subquery habit:
>>
>> select a.* from a join b on (st_dwithin(a.the_geom,b.the_geom,50000))
>> where b.gid = 10;
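If the 50km buffer itself is really wanted, a variation on the same
idea - a sketch using the same hypothetical table names as above - is to
compute the buffer once in a derived table instead of once per compared
row:

```sql
-- Build the buffer a single time, then test each tile against it.
SELECT a.*
FROM a
JOIN (SELECT ST_Buffer(the_geom, 50000) AS buf
      FROM b
      WHERE gid = 10) AS s
  ON ST_Intersects(a.the_geom, s.buf);
```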
>>
>>
>> On your process:
>>
>> ps ax | grep postgres
>>
>> Find the process id that is using all the CPU and just kill -9 it.
>> The glory of running a proper ACID database like PgSQL is that if you
>> don't like what it's doing, you can rip the power cord out of the
>> wall, and it'll still start up clean. (Do not try this with MySQL.)
>>
>> P.
>>
>> On Mon, Jul 14, 2008 at 4:33 PM, Mike Leahy
>> <mgleahy at alumni.uwaterloo.ca> wrote:
>>
>>> Hello list,
>>>
>>> I've run into some situations where running certain queries end up
>>> locking up all of my system's RAM, with constant disk access. I
>>> can't cancel the query by hitting ctrl+c in the psql terminal, by
>>> restarting the service, or even by killing the postmaster. I'm
>>> running on a fairly high-end system, so it's not an issue with CPU
>>> power or available RAM. Here's an example of what I did today that
>>> caused this:
>>>
>>> Table A is a table I imported from a tile index shapefile.
>>>
>>> Table B has several fairly large irregular polygons of different
>>> study areas.
>>>
>>> To get all of the polygons in Table A within a certain distance
>>> (50km) of one of the polygons in Table B, without giving it much
>>> thought I did the following:
>>>
>>> select * from a where st_intersects(a.the_geom,
>>>   buffer((select b.the_geom from b where gid = 10),50000));
>>>
>>> I realize how wrong that is, as it calculates the buffer for every
>>> tile it compares to... I should have done something like:
>>>
>>> select * from a where st_intersects(a.the_geom,
>>>   (select b.the_geom from b where gid = 10))
>>>   or st_distance(a.the_geom,
>>>   (select b.the_geom from b where gid = 10)) <= 50000;
>>>
>>> The problem is... I'm still waiting for the first query to either
>>> finish, or cancel, or something. In the meantime, postmaster is
>>> still using 99% of my memory, and the disk is still thrashing away
>>> (though CPU usage is pretty much at 0). What's the best strategy to
>>> kill the previous query without having to shut down the entire
>>> server?
>>>
>>> Keep in mind that this is just an example of how this can happen for
>>> me - I've had it happen in other more complex situations where it
>>> was less obvious what I was doing wrong in the logic of the query.
>>> I'm just wondering how I can recover from these sorts of mistakes
>>> without potentially damaging the database.
>>>
>>> Regards,
>>> Mike
>>>
>>>
>>> _______________________________________________
>>> postgis-users mailing list
>>> postgis-users at postgis.refractions.net
>>> http://postgis.refractions.net/mailman/listinfo/postgis-users
>>>
>>>