[GRASSLIST:1473] Re: Using v.in.shape
David D Gray
ddgray at armadce.demon.co.uk
Mon Feb 19 16:30:57 EST 2001
Gaetan GUYODO wrote:
> I have the following files in the same directory :
v.in.shape complies with the published standard, which describes the
structure of the .shp and .shx files, and implicitly the .dbf which is
just a standard xbase file.
> and I want to import these files (vector shapefiles)
> I use in grass 5b10, the v.in.shape window in tcltkgrass
There are some new features in the most recent version that are not yet
in the GUI. These allow import of selected shapes based on a single
numeric field, though this still has bugs.
> I give the good file (a.shp), OK
> What is the debugging level ? I let it empty
> I give a log file, OK
These two options have been in this module since it was first developed.
I have left them in, but for now most useful information is written to
standard output, so the important messages will appear in the output
window. Best to leave these blank.
> What is "Snap distance for node during import" ? I let it empty
This defines the granularity of a micro-grid used to quantise the vector
point positions. We often have to do this because the shapefile is a
`double-digitised' format that represents each polygon by the actual
positions of its vertices, so shared boundaries are stored twice - and,
guess what, the two copies are often different! If you set this to the
minimum spacing you would expect at your map's real-life resolution,
points falling inside one grid cell will be represented only once - but
points are not moved unless they have to be.
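To make the idea concrete, here is a rough Python sketch of grid snapping. This is not the actual v.in.shape code (which, as noted, avoids moving points unless it has to); it only illustrates the principle that quantising coordinates to a micro-grid collapses near-duplicate vertices onto one node.

```python
# Illustrative only: snap coordinates to a micro-grid so that two
# slightly different digitisings of the "same" vertex collapse to
# one node.
def snap(coord, snap_dist):
    """Snap a coordinate to the nearest multiple of snap_dist.
    A snap distance of zero (left empty) means no snapping."""
    if snap_dist <= 0:
        return coord
    return round(coord / snap_dist) * snap_dist

# Two copies of a shared-boundary vertex, a few centimetres apart:
a = (553946.12, 168047.57)
b = (553946.09, 168047.60)

snapped_a = tuple(snap(c, 0.5) for c in a)
snapped_b = tuple(snap(c, 0.5) for c in b)
# Both land on the same grid node, so the duplicate edge can merge.
```

With a snap distance of 0.5 map units, both vertices end up at (553946.0, 168047.5); the duplicate boundary can then be stripped as a true duplicate.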
> Colinearity distance, default
The import cleans the map by stripping out duplicate edges where these
overlap but are not identical. This option sets, in degrees, the minimum
angular separation you consider meaningful in your map - anything less
and the segments are treated as collinear.
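A hedged illustration of that test, in Python rather than the module's own code - the threshold idea is what matters, not the exact implementation:

```python
import math

def angle_between(p, q, r):
    """Deviation from a straight line, in degrees, at vertex q
    between segments p-q and q-r (0 means perfectly collinear)."""
    a1 = math.atan2(q[1] - p[1], q[0] - p[0])
    a2 = math.atan2(r[1] - q[1], r[0] - q[0])
    d = abs(math.degrees(a2 - a1)) % 360
    return min(d, 360 - d)

def is_collinear(p, q, r, tol_deg=1.0):
    """True if the bend at q is below the tolerance, so q could be
    dropped when cleaning duplicate, nearly-overlapping edges."""
    return angle_between(p, q, r) <= tol_deg

# A bend of about 0.57 degrees is below a 1-degree tolerance,
# so the middle vertex is treated as collinear:
# is_collinear((0, 0), (100, 0.5), (200, 0.0), tol_deg=1.0)
```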
> Init scale for vector map, default
> Name of field with category attribute number : nothing
> Name of field with category label : nothing
> List fields in database : on
> then I have the following fields :
> I think the "field with category attribute number" is NAME_ID (how can I be sure
> and I think the 2 names "DEPARTEMEN" are not complete
Probably - the xbase standard stores field names in an 11-byte slot,
which allows at most 10 characters plus a terminator. These names are 10
characters long, so they have most likely been truncated; perhaps the
producing application had further restrictions of its own.
> (is it possible to have 2
> fields with the same name ? I don't think so), how to know their complete name ?
Again I think these errors originate from the production of the files.
> In the window, I want to give a few "names of fields with category label", but
> what delimiter do I use (: ; , . or another one, or is it impossible)
Unfortunately, at present, we can only have one main numeric field for
defining a linked data value. In our parlance this is (confusingly)
called the `attribute'; the value, its type and its position are stored
in the dig_att file. The value can be unique but needn't be, so it can
serve either as an entity ID or as a kind of type ID representing, say,
a soil type.
A second set of `categories' in the dig_cats file is optional and can be
used to associate a unique attribute value with a piece of text.
This is clearly an unsatisfactory and confusing setup, and there is a
team currently working to create better support for category data in
GRASS.
You can, however, do the following: create a separate map - effectively
a layer - for each field you want to use. Do not set the `attribute'
option, and give the intended field as the `label' option. This forces a
unique ID field to be used as the attribute, and the value of the field
will be associated with that unique ID in the dig_cats file. Many
modules can use the information in dig_cats, and because it is a text
file it can easily be `hacked' to extract the information you want. As I
say, not entirely satisfactory, but a workaround for now.
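A tiny sketch of what that workaround produces. The field values below are invented for illustration (only the field names NAME_ID and DEPARTEMEN come from your field list): each record's auto-assigned ID is paired with the text of the chosen label field, which is the kind of id-to-text association dig_cats holds.

```python
# Hypothetical .dbf records; the values are made up.
records = [
    {"NAME_ID": 101, "DEPARTEMEN": "Finistere"},
    {"NAME_ID": 102, "DEPARTEMEN": "Morbihan"},
]

# With no `attribute' field set, each record gets its 1-based record
# ID as the attribute; the `label' field's text is tied to that ID.
cats = {i + 1: rec["DEPARTEMEN"] for i, rec in enumerate(records)}
# cats -> {1: "Finistere", 2: "Morbihan"}
```

Repeating this once per field of interest gives you one map per field, each with its own id-to-text table you can consult or post-process.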
> When I try with this information, I get these messages :
> Not assigning category labels
> WARNING: No attribute value field assigned. Using record ID.
This means each imported object - here, each area - is assigned a unique
ID, and this ID is its attribute; the values do not come from any field
in the database.
> Selected information from vector header
> Map Name: a
> Source Date:
> Orig. Scale: 2400
> No snapping will be done
> Reading Vector file.
> Identical lines check
> Compressing Data:
> Compressing NODES.
> Compressing NODES. LINES.
> Compressing NODES. LINES. AREAS.
> Compressing NODES. LINES. AREAS. ISLES.
> Compressing NODES. LINES. AREAS. ISLES. ATTS.
> Compressing Data: DONE.
> Building areas
> Building islands
> Attaching labels
> PNT_TO_AREA failed: (553946.122000, 168047.572000) (Category 1)
> PNT_TO_AREA failed: (580329.888625, 165774.694969) (Category 2)
> PNT_TO_AREA failed: (553820.389375, 165992.972000) (Category 3)
> PNT_TO_AREA failed: (555546.658000, 165060.742000) (Category 4)
> PNT_TO_AREA failed: (584116.122000, 163689.940000) (Category 5)
> PNT_TO_AREA failed: (578261.371750, 164009.422000) (Category 6)
> PNT_TO_AREA failed: (557716.410000, 162928.068000) (Category 7)
> PNT_TO_AREA failed: (611381.458000, 49127.158000) (Category 1290)
> Number of lines: 0
> Number of nodes: 0
> Number of areas: 0
> Number of isles: 0
> Number of atts : 0
> Number of unattached atts : 1290
> Snapped lines : 0
> What do I do to have not a PNT_TO_AREA failed (what's that) ?
The module calls another - ultimately v.build - to create a `topology'
file that assembles the more composite objects, like areas, out of the
primitive data types, which are `arcs' or polyline segments. This step
consults each entry in dig_att and tries to find a valid surrounding
polygon to which the associated value can be assigned. It also checks
the topological integrity of the polygon. If anything is wrong, the area
will not build, and this message is what you get. Also, if it finds the
area has already been labelled, you get a warning like - WARNING:
label <x> matched another label <y> - which also likely indicates an
error.
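The surrounding-polygon search is essentially a point-in-polygon test. Here is a generic ray-casting version in Python, illustrative only and not the v.build code:

```python
def point_in_polygon(pt, poly):
    """Even-odd ray-casting test: True if pt lies inside the closed
    polygon given as a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge cross the horizontal ray to the right of pt?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
# point_in_polygon((5, 5), square) is True: the label attaches.
# point_in_polygon((50, 5), square) is False: no enclosing area,
# which is roughly the situation behind a PNT_TO_AREA failure.
```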
In this case, you get the errors because there is nothing there. This
might be because you are importing into a region that doesn't include
the imported co-ordinate values, i.e. your map lies outside the region.
This is easier to detect in the current version (Beta 11), which has a
`-b' flag that just reports the imported map's bounding box and exits,
so you can create a matching location if needed.
Also, there was a bug in Beta 10 and earlier that caused some
co-ordinate values to be corrupted. This is mostly fixed, though a few
problems still show up on rare occasions.
A general point: there is a lot of map data in circulation, in formats
like ESRI shapefiles, MapInfo MIF files and the like, that is seriously
corrupted. We do our best to develop import filters that convert such
data as reliably as they can to correctly formatted vector maps, but it
is doubtful the job will ever be finished, as a lot of maps are beyond
salvaging.
For now, I would recommend upgrading to Beta 11.
> If you can help me, that will be very good...