[postgis-users] Large Databases
Michael Fuhr
mike at fuhr.org
Tue Jun 13 09:49:56 PDT 2006
On Tue, Jun 13, 2006 at 11:44:16AM -0400, Robert Burgholzer wrote:
> I just went through a server upgrade, and was initially quite
> disappointed that my 2.0 GHz with 1 GB RAM was taking forever on a
> "vacuum full" of a table with about 50 million records. I abandoned the
VACUUM FULL should seldom be necessary if you regularly perform
ordinary VACUUMs (without FULL) and if your free space map settings
are adequate (see links below). In cases where you do want to
shrink a table, CLUSTER is sometimes faster than VACUUM FULL.
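As a sketch of the two approaches (table and index names here are
illustrative, not from the original post; CLUSTER takes an exclusive
lock on the table while it runs):

```sql
-- Routine maintenance: an ordinary VACUUM marks dead rows reusable
-- without an exclusive lock, so the table rarely needs shrinking.
VACUUM ANALYZE my_big_table;

-- To physically shrink the table, CLUSTER rewrites it in index order
-- and is sometimes faster than VACUUM FULL. This is the 8.x-era
-- syntax; newer releases also accept CLUSTER my_big_table USING ...
CLUSTER my_big_table_pkey ON my_big_table;
```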
> analyze after 6 hours, and I then read the Pg manual, and changed
> maintenance_work_mem in postgresql.conf from the default value as
> follows:
>
> #maintenance_work_mem = 16384 # min 1024, size in KB
> maintenance_work_mem = 163840
>
> I then re-ran the vacuum and it processed in under 5 minutes.
>
> Just food for thought that those settings can really help you out,
> although I have found it difficult to get good solid advice on which
> settings to change and what to change them to.
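One option worth knowing: maintenance_work_mem can also be raised for
a single session, without editing postgresql.conf or reloading the
server (the table name below is illustrative):

```sql
-- Raise the maintenance memory for this session only (value in KB,
-- matching the postgresql.conf setting quoted above), then vacuum.
SET maintenance_work_mem = 163840;
VACUUM ANALYZE my_big_table;
```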
Here are some commonly-cited tuning guides:
http://www.powerpostgresql.com/PerfList
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
--
Michael Fuhr