[MetaCRS] Responses to some of Norm's wiki comments.
Landon Blake
lblake at ksninc.com
Thu Nov 5 13:05:09 EST 2009
I wanted to respond to some of Norm's wiki comments here:
Norm wrote: "One of the two or three PRIMARY goals of this project is to
produce a file which will be used. If we make it a significant project
just to parse the data and use the test results, it will not get
used."
I agree. I'm just trying to get us started with something. Sometimes you
have to throw something imperfect out to get the conversation started.
Norm wrote: "In this regard:
Coordinates for the test cases are:
a> Always in the order defined by the related CRS. This applies to both
the source and target coordinates; order and units. Thus, the test
application does not need to jump through hoops determining what the CRS
requires and then figure out what the test case is providing. The test
application simply extracts the source and target values and uses them
as is in the order provided."
Agreed.
However, the label prefix clarifies which ordinate value one is
dealing with, and it would allow coordinate conversions to be verified
without all of the CRS information. For example, the current row format
allows a test to verify conversion results with just the user-friendly
name of the test provided in the first column. This allows for
"dumb" tests with no prior knowledge of axis order and units. It
shouldn't be too hard for parsers to separate the prefix from the actual
ordinate value. I think there is a tradeoff here between how much
information we put in each CSV row and how much we expect the tests to
already know. I don't feel strongly about the ordinate value prefix, and
I can remove it if needed.
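As a rough illustration, a parser could split a hypothetical prefix such
as "Lat:" or "X:" from the numeric portion of a field with something like
the Java sketch below; the colon separator and the label names are only
assumptions, since we haven't settled on a prefix syntax:

// Hypothetical prefixed ordinate field, e.g. "Lat:38.2538" or "X:706382.227".
public final class OrdinateField {

    /** Returns the label portion of the field, or "" if there is no prefix. */
    public static String label(String field) {
        int colon = field.indexOf(':');
        return (colon >= 0) ? field.substring(0, colon).trim() : "";
    }

    /** Returns the numeric portion of the field as a double. */
    public static double value(String field) {
        int colon = field.indexOf(':');
        String number = (colon >= 0) ? field.substring(colon + 1) : field;
        return Double.parseDouble(number.trim());
    }
}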
Norm wrote: "b> The tolerance values (order and units) are treated the
same way, and always in relation to the target CRS. The test application
again simply extracts three values and uses them as is in the order
provided in the test database."
See my comments from 'a'.
Norm wrote: "c> Formatting of the coordinates is a tricky problem, but
we must choose something that makes it easy for a test application
developer to use the data file. We should support two or three simple
numeric formats. Obviously a pure signed decimal number is one, and the
other is degrees, minutes, and seconds in a format for which a generic
parsing function is most likely to exist. For obvious reasons, I like
the form supported by CS-MAP, but that might not be the most usable for
an application written in Java or Python. To the degree that code exists
to parse DMS, we should use the most widely supported format so as to
make it as easy as possible for a test application developer to use the
data file."
I can parse just about anything in Java. :] Let's just come to a
consensus on how we want to represent DMS values. I'll parse whatever we
choose.
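Just to make the discussion concrete, here is a minimal Java sketch that
converts a space-separated DMS string such as "122 25 43.21 W" to signed
decimal degrees; the separator and the trailing hemisphere letter are
only assumptions, and the method would change to match whatever format we
pick:

// Converts a DMS string like "122 25 43.21 W" to signed decimal degrees.
public static double dmsToDecimalDegrees(String dms) {
    String[] parts = dms.trim().split("\\s+");
    double degrees = Double.parseDouble(parts[0]);
    double minutes = Double.parseDouble(parts[1]);
    double seconds = Double.parseDouble(parts[2]);
    double value = Math.abs(degrees) + minutes / 60.0 + seconds / 3600.0;
    // South latitudes and west longitudes are negative.
    boolean negative = degrees < 0;
    if (parts.length > 3) {
        String hemisphere = parts[3].toUpperCase();
        negative = hemisphere.equals("S") || hemisphere.equals("W");
    }
    return negative ? -value : value;
}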
Norm wrote: "I would also like to see a "Test Type" field. The following
is a short list of the types of tests I'd like to incorporate into the
file:
* 2D Coordinate Conversion
* 3D Coordinate Conversion
* Geoid Height Determination
* Vertical Datum Shift
* Grid Scale factor (meridional and parallel)
* Convergence Angle
* Datum Shift Calculation
* Geocentric Calculations"
I forgot to put this field in the file. I will add it.
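For what it's worth, the test type could map onto something as simple as
a Java enum on my end; the names below just mirror Norm's list and are
not a proposal for the actual codes stored in the CSV column:

// Possible test types, following Norm's list above.
public enum TestType {
    COORDINATE_CONVERSION_2D,
    COORDINATE_CONVERSION_3D,
    GEOID_HEIGHT,
    VERTICAL_DATUM_SHIFT,
    GRID_SCALE_FACTOR,
    CONVERGENCE_ANGLE,
    DATUM_SHIFT,
    GEOCENTRIC_CALCULATION
}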
Norm wrote: "The function of the ordinate fields in each test type
would/could be slightly different. A specific test type would simply
ignore fields it does not need, and those fields would simply be left
empty by the originator of the test. Generally, the first four value
fields would be reserved for the source information, the second four
fields reserved for the target values, and a third set of four fields
reserved for tolerance information.
Four fields, as mentioned immediately above, might be overkill right now,
but we are already encountering the problem of not only knowing where
something is, but when was it there!!!"
I guess I was trying to come up with something that could be used for
map projection transformations/datum conversions first. I think the more
types of tests we try to support, the more variable our test file format
will need to be. The more variation we support in the test file, the
harder it will be to read and parse. Can we start simply and add support
for additional test types as we move forward?
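If we do end up with the three groups of four value fields Norm
describes, the reading side stays simple. Here is a minimal sketch,
assuming the columns of a group are adjacent and that empty fields are
allowed for values a particular test type does not use; the column
positions are placeholders until the format is settled:

// Reads one group of four value fields (source, target, or tolerance)
// starting at the given column index; empty columns come back as NaN.
public static double[] readValueGroup(String[] columns, int start) {
    double[] values = new double[4];
    for (int i = 0; i < 4; i++) {
        String field = columns[start + i].trim();
        values[i] = field.isEmpty() ? Double.NaN : Double.parseDouble(field);
    }
    return values;
}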
Landon