[postgis-users] Always losing my data to "invalid memory alloc request size"
strk at refractions.net
Sun Jan 15 14:35:29 PST 2006
This doesn't look like a PostGIS issue, or am I missing
something? You don't call any PostGIS function in your
query.
--strk;
On Sun, Jan 15, 2006 at 12:05:35PM -0800, Dustin Moore wrote:
> I'm about a week into my experience with PostGIS, but I'm an old hand at
> MySQL. I have a table 'frame' that has about 1.3 million entries:
>
> Table "public.frame"
> Column | Type | Modifiers
> ----------------+------------------------+----------------------------------------------------------
> frame_id | bigint | not null default
> nextval('frame_frame_id_seq'::regclass)
> frame_file | character varying(512) | not null
> frame_number | integer | not null
> frame_rand | integer |
> frame_loc | geometry | not null
> frame_route_id | bigint |
> Indexes:
> "frame_pkey" PRIMARY KEY, btree (frame_id)
> "frame_file_index" btree (frame_file)
> "index_frame_loc" gist (frame_loc) CLUSTER
> Check constraints:
> "enforce_dims_frame_loc" CHECK (ndims(frame_loc) = 2)
> "enforce_geotype_frame_loc" CHECK (geometrytype(frame_loc) =
> 'POINT'::text OR frame_loc IS NULL)
> "enforce_srid_frame_loc" CHECK (srid(frame_loc) = 4326)
>
>
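
For reference, a minimal sketch of how a table with this shape is
normally built under PostGIS 1.x, reconstructed from the \d output
above; the bigserial shorthand and the AddGeometryColumn call are
assumptions, not the poster's actual DDL:

    CREATE TABLE frame (
        frame_id       bigserial PRIMARY KEY,  -- owns frame_frame_id_seq
        frame_file     varchar(512) NOT NULL,
        frame_number   integer NOT NULL,
        frame_rand     integer,
        frame_route_id bigint
    );

    -- Registers the column in geometry_columns and adds the
    -- dims/geotype/srid CHECK constraints shown above.
    SELECT AddGeometryColumn('public', 'frame', 'frame_loc', 4326, 'POINT', 2);
    ALTER TABLE frame ALTER COLUMN frame_loc SET NOT NULL;

    CREATE INDEX frame_file_index ON frame (frame_file);
    CREATE INDEX index_frame_loc ON frame USING gist (frame_loc);
    CLUSTER index_frame_loc ON frame;
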
> It seems that every time I do a large update I corrupt the database and
> lose everything.
> In particular, a command like this:
>
>     update frame set frame_route_id = route_id from route
>     where frame_file = route_file;
>
> [route being a simple table with a btree index on route_file]
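
One way to narrow a failure like that down is to run the same update in
primary-key slices rather than in one pass over all 1.3 million rows.
This is only a sketch for isolating the problem (the 100000-row batch
size is arbitrary), not a fix for the underlying corruption:

    -- Same update, restricted to one slice of frame_id at a time; a
    -- failing slice points at the rows whose stored values are bad.
    UPDATE frame
       SET frame_route_id = route.route_id
      FROM route
     WHERE frame.frame_file = route.route_file
       AND frame.frame_id BETWEEN 1 AND 100000;
    -- ...then 100001 to 200000, and so on.
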
>
> I get errors like "ERROR: invalid memory alloc request size 1671807492"
> and these errors won't go away. If I then attempt to pg_dump -Fc, it
> will fail with the same error. So basically, the data is gone (or at
> least I can't recover it in its entirety).
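
That error usually means the backend has read a length word it cannot
trust (the request here is about 1.6 GB, over the 1 GB per-allocation
limit), i.e. a damaged stored value rather than real memory pressure.
The same slicing idea applied to plain reads is a common way to find
the broken rows before retrying pg_dump; this is only a sketch and
assumes frame_id itself is intact:

    -- Force every column in a slice to be read; the slice that raises
    -- "invalid memory alloc request size" contains the broken rows.
    SELECT max(length(frame_file)), max(length(astext(frame_loc)))
      FROM frame
     WHERE frame_id BETWEEN 1 AND 100000;
    -- ...repeat for successive frame_id ranges, then delete or repair
    -- the offending rows before retrying pg_dump.
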
>
> I'm running PostGIS 1.1.0 on PostgreSQL 8.2.1 with GEOS and Proj on
> Debian stable. I compiled PostgreSQL from scratch. I've also had the
> same problem with PostGIS 1.0.6.
> My swap file size is 3GB. Are there some memory or buffer settings I
> should be messing with?
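
For what it is worth, the settings usually meant by "memory or buffer
settings" are work_mem, maintenance_work_mem and shared_buffers; a
sketch of raising the session-level ones is below (the values, in kB,
are only examples, and shared_buffers itself can only be changed in
postgresql.conf with a restart):

    -- Session-level overrides; integer values are interpreted as kB.
    SET work_mem = 65536;               -- per sort/hash operation
    SET maintenance_work_mem = 262144;  -- used by VACUUM, CREATE INDEX
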
>
> Based on the mailing list traffic, I see people who have had similar
> trouble using GEOS functions, but I'm not using any GEOS functions in
> the queries that initially fail and corrupt the database.
>
> Is there a tested "stable" configuration of PostGIS and PostgreSQL that
> can handle a table with 100 million rows?