[postgis-devel] Shp2pgsql picks wrong field type?

Mark Cave-Ayland m.cave-ayland at webbased.co.uk
Wed Apr 6 02:19:24 PDT 2005


Hi Markus (and David), 

> -----Original Message-----
> From: postgis-devel-bounces at postgis.refractions.net 
> [mailto:postgis-devel-bounces at postgis.refractions.net] On 
> Behalf Of Markus Schaber
> Sent: 06 April 2005 09:44
> To: PostGIS Development Discussion
> Subject: Re: [postgis-devel] Shp2pgsql picks wrong field type?

(cut)

> This, and optimization possibilities (e.g. if a char(1) field 
> is guaranteed to only contain Y and N we can make it a boolean, 
> or some ints can even be made an int2), is the reason 
> why I invented the -p flag (see my patches). We usually save 
> the output of shp2pgsql and modify it to fulfil our needs, 
> resulting in one file that creates all of our tables, and a 
> second one that adds the indices after data insertion.
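The workflow Markus describes might look something like this (a rough sketch only; the file names, table name, and column are hypothetical, and the -p flag comes from his patches rather than a released shp2pgsql):

```shell
# -p emits just the CREATE TABLE statement, with no data,
# so the schema can be hand-tuned before loading.
shp2pgsql -p roads.shp roads > create_roads.sql

# Edit create_roads.sql by hand, e.g. narrow a char(1) column
# that is known to hold only Y/N values:
#   ALTER TABLE roads ALTER COLUMN oneway TYPE boolean
#     USING (oneway = 'Y');

# Once the tables exist, append the data in a separate step:
shp2pgsql -a -D roads.shp roads | psql -d mydb
```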

I think we'd need to be very careful going this route, because here we tend
to load all shapefiles into PostGIS, do data processing, and then export
them once we're done. If we started changing field types in other ways, it
could cause unexpected behaviour in other applications.

> Another reason is that we can add additional columns (e.g. 
> source of data or release version) that are implicitly set by 
> modifying the columns' DEFAULT values before inserting the 
> data via shp2pgsql -a -D. Thankfully, shp2pgsql includes the 
> column names in its COPY and INSERT statements, so this 
> works great. We have up to about 50 shapefiles with identical 
> columns inserted into the same table, and this way we can 
> easily know which row originated from which shapefile.
> 
> In addition to the speedup, using -D forces an all-or-nothing 
> approach: you do not end up with half-loaded shapefiles.
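The DEFAULT-column trick described above could be sketched like this (hypothetical table, column, and file names; the loop and provenance column are my illustration, not Markus's exact script):

```shell
# Add a provenance column once; its DEFAULT is reset before each
# append-mode load so every row records its source shapefile.
psql -d mydb -c "ALTER TABLE parcels ADD COLUMN source_file text;"

for f in parcels_*.shp; do
    psql -d mydb \
      -c "ALTER TABLE parcels ALTER COLUMN source_file SET DEFAULT '$f';"
    # -a appends to the existing table; -D uses COPY, which is
    # faster and all-or-nothing per file.
    shp2pgsql -a -D "$f" parcels | psql -d mydb
done
```

Because shp2pgsql names its columns explicitly in the COPY statement, the unnamed source_file column falls back to its current DEFAULT for every inserted row.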

Yes, I think I prefer this approach, since I once had a half-loaded
shapefile and might not have noticed if I hadn't been paying attention to
the screen while loading the file.

(cut)

> Maybe you can open the associated dbf file containing this 
> data with any dbf reading tool, and see what the column type 
> is. If it is something numeric, then I would blame the 
> shapefile generator.

The only tool I really have is JUMP - is this good enough? Loading the file
into JUMP, I see that the field type is indeed integer. So I guess we can
blame the tools used to produce TIGER for this one - trust me to pick the
one test file that breaks things ;) However, JUMP has converted the
erroneous attribute to a value of 0 - should shp2pgsql copy this behaviour
but issue a warning? What do other tools do in this circumstance?


Kind regards,

Mark.

------------------------
WebBased Ltd
South West Technology Centre
Tamar Science Park
Plymouth
PL6 8BT 

T: +44 (0)1752 791021
F: +44 (0)1752 791023
W: http://www.webbased.co.uk
