[postgis-users] Query that locks up system memory/disk usage

Kevin Neufeld kneufeld at refractions.net
Tue Jul 15 09:38:58 PDT 2008


Ah yes, I've come across the same thing several times.  For example, if 
you use the pgstattuple module that comes with postgres and perform a 
dead tuple query on a table, it will scan the entire table, even after 
issuing a ctrl^c or kill -2 ... but it will eventually terminate.
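
(For reference, the dead tuple query I mean is something like this -- 
assuming the pgstattuple contrib module is installed, and with the table 
name being just a placeholder:)

-- scans the whole table, so expect it to grind for a while on big tables
SELECT table_len, tuple_count, dead_tuple_count, dead_tuple_percent
FROM pgstattuple('some_big_table');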

However, using top and killing queries with signal 2 has saved me many 
times on a development box with several developers.  I can quickly kill 
that runaway query from a dropped JDBC connection without bringing down 
the whole database.  Usually the query stops right away and the rollback 
is not very intensive, but it depends on what you are doing.
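
(A rough sketch of how I find the backend to signal -- note that the 
pg_stat_activity column names vary by version; 8.x uses procpid and 
current_query:)

-- list the non-idle backends and what they are currently running
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
WHERE current_query NOT LIKE '<IDLE>%';

A kill -2 on the procpid reported there sends the same cancel as ctrl^c.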

-- Kevin

Mike Leahy wrote:
> Kevin,
> 
> Perhaps I can cause the same situation (maybe tomorrow when the server 
> isn't occupied) and give this a try.  But I think I have already ruled 
> this out as an option.  When I would hit ctrl+c in the psql terminal, 
> I'd see a message that the query was being cancelled...then it would 
> just sit there and continue grinding away.  Unless the ctrl+c somehow 
> didn't actually reach the postgres process, my guess is that the 
> attempt to roll back gracefully is just as intensive.
> 
> Mike
> 
> 
> Kevin Neufeld wrote:
>> Paul, while "kill -9" will undoubtedly stop the current running query, 
>> it will also crash the entire database cluster since the shared memory 
>> will become corrupt.
>> I recommend using a "kill -2" instead which is the same thing as 
>> issuing a ctrl^c while in the terminal program.  It may take longer 
>> since it has to rollback the transaction, but it will do so gracefully.
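>>
>> (The same cancel can also be sent from another psql session -- a quick 
>> sketch, where the PID is whatever backend you spotted in top:)
>>
>> -- sends SIGINT to that backend, i.e. the same thing as kill -2
>> SELECT pg_cancel_backend(12345);  -- 12345 = the runaway backend's PID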
>>
>> -- Kevin
>>
>> Paul Ramsey wrote:
>>> Break yourself of the subquery habit:
>>>
>>> select a.* from a join b on (st_dwithin(a.the_geom,b.the_geom,50000))
>>> where b.gid = 10;
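>>>
>>> (And assuming a GiST index on a.the_geom -- something like the 
>>> following, with the index name being just a placeholder -- st_dwithin 
>>> can use an index scan rather than testing every tile:)
>>>
>>> create index a_the_geom_gist on a using gist (the_geom);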
>>>
>>>
>>> On your process:
>>>
>>> ps ax | grep postgres
>>>
>>> Find the process id that is using all the CPU and just kill -9 it. The
>>> glory of running a proper ACID database like PgSQL is that if you
>>> don't like what it's doing, you can rip the power cord out of the
>>> wall, and it'll still start up clean. (Do not try this with MySQL.)
>>>
>>> P.
>>>
>>>> buffer((select b.the_geom from b
>>>> where gid = 10),50000));
>>>
>>> On Mon, Jul 14, 2008 at 4:33 PM, Mike Leahy
>>> <mgleahy at alumni.uwaterloo.ca> wrote:
>>>  
>>>> Hello list,
>>>>
>>>> I've run into some situations where certain queries end up locking up
>>>> all of my system's RAM, with constant disk access.  I can't cancel the
>>>> query by hitting ctrl+c in the psql terminal, by restarting the
>>>> service, or even by killing the postmaster.  I'm running on a fairly
>>>> high-end system, so it's not an issue with CPU power or available RAM.
>>>> Here's an example of what I did today that caused this:
>>>>
>>>> Table A is a table I imported from a tile index shapefile.
>>>>
>>>> Table B has several fairly large irregular polygons of different study
>>>> areas.
>>>>
>>>> To get all of the polygons in Table A within a certain distance (50km)
>>>> of one of the polygons in Table B, without giving it much thought I
>>>> did the following:
>>>>
>>>> select * from a where st_intersects(a.the_geom,buffer((select b.the_geom
>>>> from b where gid = 10),50000));
>>>>
>>>> I realize how wrong that is, as it calculates the buffer for every
>>>> tile it compares to...I should have done something like:
>>>>
>>>> select * from a where st_intersects(a.the_geom,(select b.the_geom from b
>>>> where gid = 10)) or st_distance(a.the_geom,(select b.the_geom from b
>>>> where gid = 10))<=50000;
>>>>
>>>> The problem is...I'm still waiting for the first query to either
>>>> finish, or cancel, or something.  In the meantime, postmaster is still
>>>> using 99% of my memory, and the disk is still thrashing away (though
>>>> CPU usage is pretty much at 0).  What's the best strategy to kill the
>>>> previous query without having to shut down the entire server?
>>>>
>>>> Keep in mind that this is just an example of how this can happen for
>>>> me - I've had it happen in other more complex situations where it was
>>>> less obvious what I was doing wrong in the logic of the query.  I'm
>>>> just wondering how I can recover from these sorts of mistakes without
>>>> potentially damaging the database.
>>>>
>>>> Regards,
>>>> Mike
>>>>
>>>>
> 
> _______________________________________________
> postgis-users mailing list
> postgis-users at postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users


