[PROJ] Proposal for a geodetic deformation model format

Chris Crook ccrook at linz.govt.nz
Thu Dec 5 14:50:06 PST 2019


Hi Even

Thanks for the detailed reading and questions.

> - regarding "The horizontal displacement is represented by metres east and
> north. Typically base datum will be a geographic coordinate system using
> latitude and longitude and it would be more efficient to define displacement
> in terms of degrees.".
> Is it the same convention as PROJ deformation method
> (https://proj.org/operations/transformations/deformation.html) which uses
> grids to store corrections in the ENU space as far as I understand ?
>
> (more a question for Kristian: I presume this ENU space is the same as the
> UVW topocentric coordinate system described in 4.1.2
> "Geocentric/topocentric conversions" of https://www.iogp.org/wp-
> content/uploads/2019/09/373-07-02.pdf ?)
>
> For the easting, northing I presume yes. But I'm not sure if the convention
> you mention for vertical correction "The vertical direction will be the local
> vertical of the coordinate system." corresponds to the U of ENU. Or perhaps
> this falls in your remark "For deformation there is no practical need to
> distinguish gravitational and geometric vertical" ?
> That said, PROJ deformation method is a bit different in that it is meant to
> operate with input and output coordinates in geocentric space: it uses
> internally a geocentric to geodetic conversion to be able to retrieve the
> correction values from the grid(s), and then convert those corrections from
> the ENU space to the geocentric space.

I hadn't realised PROJ would be working in a geocentric space.

What I was thinking when I suggested degrees would be more efficient was that
once you have converted coordinates to geographic lon/lat/ellipsoidal height (as you need
lat/lon to interpolate), applying a correction to the lon/lat coordinates in metres requires
(a little) more calculation than applying corrections in degrees.  This could make more of a
difference for an inverse calculation, as if you are iterating you could end up converting
multiple times between geocentric and geographic (and geocentric to geographic is itself
iterative).
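
To illustrate the extra calculation, here is a minimal Python sketch (names and
structure are mine, not from the proposal) of converting east/north offsets in
metres to offsets in degrees using the GRS80 radii of curvature:

    import math

    A = 6378137.0            # GRS80 semi-major axis (metres)
    F = 1.0 / 298.257222101  # GRS80 flattening
    E2 = F * (2.0 - F)       # first eccentricity squared

    def metres_to_degrees(de_m, dn_m, lat_deg):
        """Convert east/north offsets (metres) to lon/lat offsets (degrees)."""
        lat = math.radians(lat_deg)
        sinlat = math.sin(lat)
        # Prime vertical (nu) and meridional (rho) radii of curvature
        nu = A / math.sqrt(1.0 - E2 * sinlat ** 2)
        rho = A * (1.0 - E2) / (1.0 - E2 * sinlat ** 2) ** 1.5
        dlon = math.degrees(de_m / (nu * math.cos(lat)))
        dlat = math.degrees(dn_m / rho)
        return dlon, dlat

If the grid stored the offsets in degrees this step disappears and the
correction is a simple addition; that is the small saving I had in mind.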

However if we are applying the correction to geocentric coordinates then
this doesn't apply.

For the inverse calculation it is probably sensible to work in geographic coordinates
if the solution is iterated, and then convert to geocentric once the inverse
deformation has been calculated.

> - regarding "Strictly the inverse calculation of the deformation model is an
> iterative calculation, as the location of a point in the interpolation coordinate
> system is not known until it is transformed back to that system."
> I just want to mention that PROJ uses an iterative method for the inverse of
> horizontal shift grids (NTv2 etc), so wouldn't be a problem to use that for
> deformation models as well. Actually PROJ deformation method also uses
> that iterative method.

The NTv2 grids were originally designed for transformations between quite
different datums that pre-dated satellite geodesy, and for which coordinate
changes of a few hundred metres are quite likely, so in that case the iteration
could be quite important.

The coordinate changes due to deformation are likely to be much smaller (hopefully!).
I think 10 metres would be an upper bound.  On the other hand the deformation will
vary more rapidly with location in the event of earthquake related deformation.  I'll
do some testing with the LINZ deformation model to see what difference it makes.

But good to know that PROJ will do it the right way :-)
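
For anyone following along, here is a rough Python sketch (not PROJ's actual
code) of that style of fixed-point iteration for inverting a forward shift:

    def inverse_shift(q, shift, tol=1e-12, max_iter=20):
        """Find p such that p + shift(p) == q, by fixed-point iteration.
        shift is a function returning the (de, dn) correction at a point."""
        p = q
        for _ in range(max_iter):
            de, dn = shift(p)
            p_next = (q[0] - de, q[1] - dn)
            if abs(p_next[0] - p[0]) < tol and abs(p_next[1] - p[1]) < tol:
                return p_next
            p = p_next
        return p

With corrections of at most a few metres, rather than the hundreds of metres of
the old datum transformations, one or two iterations should usually suffice.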

> - regarding nesting of grids, is my understanding correct that if you have a
> grid G, that has subgrids G_1 and G_2, and that a point falls in G and G_1, then
> the corrections of both G and G_1 should be applied ? Just wanted to
> underline this, as it is different from the NTv2 model where you apply one or
> the other (preferably G_1), but not both.

No - I've tried to make this clearer in the document.  The proposal is to use the
same approach as eg NTv2.   A single component (eg deformation due to an
earthquake) may be defined by multiple grids.  At any location only one of those
grids will apply.

However the deformation model as a whole has multiple components (eg
secular velocity, earthquake 1, earthquake 2, ...) and these are evaluated
separately and summed together.
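
A sketch of that evaluation model in Python (the names are illustrative only,
not part of the proposal):

    def evaluate_model(components, lon, lat, epoch):
        """Sum the deformation over all components; within a component at
        most one grid applies at a given location (NTv2-style nesting)."""
        de_total = dn_total = du_total = 0.0
        for component in components:
            grid = component.select_grid(lon, lat)  # finest covering grid, or None
            if grid is None:
                continue
            de, dn, du = grid.interpolate(lon, lat)
            f = component.time_function(epoch)      # scale factor at this epoch
            de_total += f * de
            dn_total += f * dn
            du_total += f * du
        return de_total, dn_total, du_total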

> - it could help for a better understanding to have a small example of a CSV
> file illustrating the different concepts you expose. Nested grids, different
> time functions etc.

Good idea - I'll build a CSV file matching the NZGD2000 deformation model.
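
In the meantime, here is a purely hypothetical sketch of what a few rows might
look like (the column names are invented for illustration, not taken from the
proposal):

    component,grid_file,time_function,event_epoch,factor
    secular velocity,nz_velocity_grid.csv,velocity,2000.0,1.0
    earthquake 1 near field,eq1_nearfield_grid.csv,step,2016.87,1.0
    earthquake 1 far field,eq1_farfield_grid.csv,step,2016.87,1.0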

> - The approach master CSV file + separate grids in dedicated files has its
> merits indeed. The actual grid format, GTiff, HDF5 or whatever raw binary
> format that can support multiple samples per grid node will make little
> difference because the specificities/complexities of the description of the
> deformation model is in the CSV file itself.
>
> - So if an earthquake happens, basically you'll create a new grid in the area that
> is affected, and create a new version of the CSV file where you add an extra
> row that references it, right ? (I'm probably over-simplifying)

That would be the most likely result.  For the last earthquake (Kaikoura) there
would be two rows, as we treated near-field and far-field deformation
differently (we still need to write this up).  Also we had two releases of the
deformation model.  The second release was about a year later, once we
had better information about the total deformation (the update is an awkward
and largely inseparable mix of better estimation of co-seismic deformation
and additional post-seismic deformation).

> - Any indication of the number of grids a deformation model might have ? For
> the NZ case for example

The NZGD2000 model has a secular velocity grid plus grids for 12 events since 2000.0.
That is 124 grids at last count I think, but counting individual grids, not nested
grid sets.  Each event could be represented by 2 or 3 components with different time
functions, so probably between 20 and 40 components in total.

However, as our ability to measure and model deformation quickly improves, the
rate at which we generate grids will increase.

> - Just a detail: regarding "velocity f(t) = (t – t0)/(365.2425 days)", if time and
> velocities are expressed respectively in decimal year and
> some_linear_unit/decimal_year, that should probably be just f(t) = t-t0

Yes - I copied this directly from the LINZ deformation model documentation.
Practically this is no different to expressing the time in decimal years.  I haven't
specified anywhere how dates/times are represented, but we use
a full date/time format rather than decimal years for defining the model, as
that is more intuitive for earthquakes.  And of course most data we are transforming
has a specific date/time.  So time differences are determined in terms of "real"
time units such as seconds, days, or whatever.
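
For example, a minimal sketch of deriving a decimal-year difference from full
date/times (365.2425 days being the mean Gregorian year):

    from datetime import datetime

    def years_between(t0, t):
        """Elapsed time in decimal years, e.g. for scaling a velocity grid."""
        return (t - t0).total_seconds() / (365.2425 * 86400.0)

    # e.g. multiply velocities (units/year) by years_between(model_t0, obs_time)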

> - I'm still having issues to grasp the practical implications of the "reverse
> step". Is my understanding correct that a coordinate computed for example
> for
> NZGD2000 @ 2010.0 with a deformation model of 2019 might potentially
> change with a deformation model of 2020 ? Should the deformation model
> used be also captured in the metadata associated with a coordinate? If so, it
> looks like a change in the deformation model would be better handled by
> defining a revised version of the datum (but I know you don't want to publish
> a new NZGD datum every 6 months). I'm super confused... I'm pretty sure we

It absolutely is confusing!  Technically whenever we change the deformation
model it is a new version of NZGD2000.  However the world, and in particular the
users of NZGD2000 coordinates, was not ready for this, and possibly still isn't.
Practically, the time required to propagate new coordinate systems through the
industry made creating multiple versions of the coordinate system impractical.  Also there
are 30 projected coordinate systems associated with NZGD2000, so we would need
to create multiple versions of those as well.  The very recent changes to coordinate
reference system models provide concepts that will help with this, such as
datum ensembles.  Having PROJ able to retrieve coordinate system definitions
from EPSG is a huge step towards making this practical in the open source stack.
Having software that can handle deformation is another huge step.

The way we have been managing NZGD2000 is to try to protect the NZ users
from dealing with this more than necessary.   This has been very pragmatic.
Since the tools most users have available are not "deformation aware", we
have been managing the coordinate system to allow users to ignore deformation
as far as possible.

"As far as possible" here means providing a coordinate reference system that hides the
deformation.  (I personally think NZGD2000 is better described as a geospatial reference
system than a geodetic datum.)

To take the example of an NZGD2000 coordinate at 2010: when we change
the deformation model we certainly are changing the relationship between
an NZGD2000 coordinate and the corresponding ITRF coordinate.
However we do preserve the relationship between the NZGD2000 coordinate
and the physical location that it refers to.  So you don't need to change the
coordinates stored in your GIS systems etc - they are still correct for the features
they refer to.

"As far as possible" doesn't cover very heterogeneous deformation close to
an earthquake epicentre.  But for example in the case of the 2016 Kaikoura earthquake
the city of Wellington, 150 km from the epicentre, was shifted 20 cm as a whole.
This is absorbed in the deformation model so that the NZGD2000 coordinates of marks do not change.

If you have a data set for Wellington observed in 2010 in terms of an ITRF-based
coordinate system and convert it to NZGD2000, it will use the relationship
between ITRF and NZGD2000 that applied in 2010.  If you had observed the same
physical feature in terms of ITRF in 2019 it would have a different ITRF
coordinate (both because of the earthquake and the 5 cm/year secular
deformation).  However, if you convert that to NZGD2000 using the post-earthquake
version of the deformation model, you will get the same NZGD2000 coordinate
(give or take errors of observation and errors in the deformation model).
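
Back-of-envelope, using the figures above (approximate, for illustration only):

    secular_m = (2019 - 2010) * 0.05  # ~0.45 m of secular motion at 5 cm/year
    coseismic_m = 0.20                # ~0.20 m Kaikoura (2016) shift in Wellington
    itrf_change_m = secular_m + coseismic_m  # ~0.65 m change in ITRF position
    # The NZGD2000 coordinate is unchanged: the deformation model absorbs both terms.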

Nearer the epicentre the situation is different - the deformation is too intense to hide
from users and so in that area the NZGD2000 coordinates of points do change.  LINZ
provided grids of the deformation to allow users to update their data sets to account
for the deformation (as well as updating the deformation model).  The good thing is
that in this region users expect the coordinates to change, and in fact are asking
for updates.

> had that discussion already, but I've forgotten the outcome. I'd be very
> interested in having a concrete example of a workflow of how users are
> supposed to use such deformation models, and which metadata they are
> supposed to store to be able to later reprocess their coordinates.

This is the critical question, and one that we don't have a good answer to!
At the moment there is no workflow, as virtually no software can use the
deformation model.  That is why NZGD2000 is implemented the way it is:
so that NZGD2000 data sets from different epochs can be compared and
used without having to account for deformation.

The nearest there is to a workflow is the process for users near an area
affected by deformation where they can use the grids we provide to update
their data (we provide NTv2 grids for the horizontal component).  We have
been calling this a "patch".

Your question about metadata is key.  The general answer is that (as for
all coordinates) users need to know what the coordinates were referenced
to when they were originally captured, and what transformations have been applied to
them since then.  In areas affected by a patch this is more critical.

Looking to the future this could change in many ways.  We could create
new versions of the datum for each update of the deformation model.  We
could imagine that coordinates are held with their effective epoch, and if
users want to compare data sets in terms of the physical bit of ground they
refer to they will transform the data sets to a common epoch on the fly.
Many possibilities!  We are not there yet, but whatever the outcome, the thing we do
need is the ability to define and apply complex time-dependent transformations.
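
To make the "common epoch" idea concrete, here is a sketch combining the pieces
above (again with illustrative names, reusing evaluate_model and
metres_to_degrees from the earlier sketches):

    def to_common_epoch(lon, lat, epoch_obs, epoch_common, components):
        """Move a coordinate from its observation epoch to a common
        comparison epoch by applying the difference in modelled
        deformation between the two epochs."""
        de1, dn1, _ = evaluate_model(components, lon, lat, epoch_obs)
        de2, dn2, _ = evaluate_model(components, lon, lat, epoch_common)
        dlon, dlat = metres_to_degrees(de2 - de1, dn2 - dn1, lat)
        return lon + dlon, lat + dlat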

Whew! That was a long answer.  I hope this helps a bit.  At the recent FOSS4G Oceania
conference I presented a talk on this.   https://www.youtube.com/watch?v=xdvI9Yrs4iA&t=141s
There is also some information on the LINZ website at https://www.linz.govt.nz/data/geodetic-system/datums-projections-and-heights/geodetic-datums/new-zealand-geodetic-datum-2000-nzgd2000/nzgd2000-deformation-model,
but most of this is detail rather than conceptual.

One thing we are very aware of at LINZ is that we have not had sufficient resource/time
to publish some of the background to how we have implemented recent deformation
model updates.  Conceptually, though, the driver hasn't changed: to keep things
working as simply as we can for the users of NZGD2000 coordinates, most of whom are
probably unaware of the deformation model or the need for one.

> Even
