[postgis-users] Large Databases

Jim C. Nasby jnasby at pervasive.com
Tue Jun 13 16:33:22 PDT 2006


On Tue, Jun 13, 2006 at 02:51:42PM -0400, Robert Burgholzer wrote:
> Very interesting. This is probably why I found that without vacuum full,
> my 2 gigabyte database ballooned to over 10 Gb in a few short weeks. Our
> db is used for the parameterization and calibration of an extremely
> large environmental model, and so parameter values are generated and
> regenerated as many as 10 times per day - causing there to be a whole
> lot of unreleased file space. As one butts up against the partition max
> size, this extra disk space can become an issue.

BTW, proper use of lazy vacuum should eliminate the need for vacuum full
in almost all cases (a sketch of routine lazy vacuuming follows the example
below). The one exception is if you often update the entire table, though
in a case like that you'll almost certainly get better performance from
something like:

BEGIN;
LOCK TABLE table;                       -- block concurrent writes during the rebuild
SELECT blah INTO table_new FROM table;  -- rewrite the live rows into a fresh, compact table
CREATE INDEX ... ON table_new ...;
DROP TABLE table;
ALTER TABLE table_new RENAME TO table;
COMMIT;
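
For the more common case, where only a portion of the table changes between
model runs, a plain (lazy) VACUUM run regularly (or autovacuum, where your
version has it) keeps the dead space reusable instead of letting the files
grow. A minimal sketch, assuming a heavily-updated table named parameters
(the name is just a stand-in, not anything from Robert's schema):

VACUUM ANALYZE parameters;  -- reclaims dead row versions for reuse without
                            -- an exclusive lock, and refreshes planner stats

If the files keep growing despite regular vacuums, the free space map is
probably too small; check max_fsm_pages and max_fsm_relations in
postgresql.conf.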

An alternative to the LOCK in the rewrite above would be creating a rule or
trigger that disallows changes to the table while the rebuild is running,
for example:
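
A minimal sketch of the trigger variant, assuming PL/pgSQL is installed;
the table, trigger, and function names below are placeholders rather than
anything from the original setup:

-- reject all writes while the rebuild transaction is in progress
-- (dollar quoting needs PostgreSQL 8.0 or later)
CREATE FUNCTION block_writes() RETURNS trigger AS $$
BEGIN
    RAISE EXCEPTION 'table is being rebuilt; try again shortly';
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER no_writes
    BEFORE INSERT OR UPDATE OR DELETE ON parameters
    FOR EACH STATEMENT EXECUTE PROCEDURE block_writes();

-- The trigger disappears along with the old table when it is dropped;
-- drop the helper function afterwards if it is no longer needed:
-- DROP FUNCTION block_writes();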
-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby at pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461


