[gdal-dev] Design for sub-second accuracy in OGR ?

Even Rouault even.rouault at spatialys.com
Sun Apr 5 14:39:02 PDT 2015


On Sunday, 05 April 2015 at 21:49:19, Craig Bruce wrote:
> On 2015-04-05 16:25, Even Rouault wrote:
> 
>     struct {
>         GInt16  Year;
>         GByte   Month;
>         GByte   Day;
>         GByte   Hour;
>         GByte   Minute;
>         GByte   TZFlag;
>         GByte   Precision; /* value in OGRDateTimePrecision */
>         float   Second; /* from 00.000 to 60.999 (millisecond accuracy) */
>     } Date;
>  If it's not too different from what exists, I have found that a good
> general solution is what Unix uses:

Indeed, I should have mentioned what currently exists:

    struct {
        GInt16  Year;
        GByte   Month;
        GByte   Day;
        GByte   Hour;
        GByte   Minute;
        GByte   Second;
        GByte   TZFlag; /* 0=unknown, 1=localtime(ambiguous), 
                           100=GMT, 104=GMT+1, 80=GMT-5, etc */
    } Date;
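
For reference, that TZFlag convention is just 100 = GMT with each unit above or
below 100 meaning 15 minutes; a minimal sketch of decoding it (hypothetical
helper, not an existing OGR function):

    #include <stdio.h>

    /* Hypothetical helper: decode an OGR-style TZFlag into an offset in
     * minutes from GMT (100 = GMT, each unit = 15 minutes).
     * Sets *pbValid = 0 for the unknown/ambiguous-localtime cases. */
    static int TZFlagToOffsetMinutes(unsigned char nTZFlag, int *pbValid)
    {
        if (nTZFlag <= 1)     /* 0 = unknown, 1 = ambiguous local time */
        {
            *pbValid = 0;
            return 0;
        }
        *pbValid = 1;
        return ((int)nTZFlag - 100) * 15;
    }

    int main(void)
    {
        int bValid;
        printf("104 -> %+d min\n", TZFlagToOffsetMinutes(104, &bValid)); /* +60, GMT+1 */
        printf(" 80 -> %+d min\n", TZFlagToOffsetMinutes(80, &bValid));  /* -300, GMT-5 */
        return 0;
    }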

> 
>  struct {
>      GInt64  second;      /* assuming GInt64 and GUInt32 types exist */
>      GUInt32 nanosecond;
>  } Date;
> 

Hmm, the issue with that is that you must convert between the textual form 
(which is what most of the OGR formats I mentioned use), like "YYYY-MM-DD 
HH:MM:SS.sss[+/-hh:mm]", and this representation. That brings up all the 
questions about which convention the timestamp follows (UTC, TAI, whatever..., 
how leap seconds are handled) if we allow people to set/get the raw field 
directly (if it were only for internal purposes, we could do whatever we want). 
Whereas most OGR formats I mention use the textual form, so no conversion is 
needed currently (just splitting each part of the string into its dedicated 
field).
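
To illustrate what I mean by splitting the string, here is a rough sketch
(hypothetical code, simplified, no validation and no timezone suffix) of
separating the textual form into broken-down fields, including a fractional
second:

    #include <stdio.h>

    /* Hypothetical sketch: split "YYYY-MM-DD HH:MM:SS[.sss]" into broken-down
     * fields. No epoch conversion is involved, so the UTC/TAI/leap-second
     * questions never come up. */
    typedef struct
    {
        int   nYear, nMonth, nDay;
        int   nHour, nMinute;
        float fSecond;   /* fractional part carries the milliseconds, if any */
    } ExampleDateTime;

    static int ParseExampleDateTime(const char *pszInput, ExampleDateTime *psDT)
    {
        /* %f accepts both "38" and "38.123", so the same pattern covers
         * second-only and millisecond inputs. */
        return sscanf(pszInput, "%4d-%2d-%2d %2d:%2d:%f",
                      &psDT->nYear, &psDT->nMonth, &psDT->nDay,
                      &psDT->nHour, &psDT->nMinute, &psDT->fSecond) == 6;
    }

    int main(void)
    {
        ExampleDateTime sDT;
        if (ParseExampleDateTime("2015-04-05 19:36:38.123", &sDT))
            printf("%04d-%02d-%02d %02d:%02d:%06.3f\n",
                   sDT.nYear, sDT.nMonth, sDT.nDay,
                   sDT.nHour, sDT.nMinute, sDT.fSecond);
        return 0;
    }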

> 
> This representation is good for nearly 300-billion years with uniform
> resolution of one nanosecond.  Don't take the evolutionary path of
> repeatedly discovering that you don't have enough precision and just jump
> straight to nanoseconds.  Time zones don't matter either;

We probably still want to be able to keep track of the timezone as we do 
currently, so that it can be transported losslessly when doing format 
conversions.

> just represent
> everything in UTC and display it in the local time zone of the client
> (using POSIX localtime_r() which is compatible with this representation). 

One of the annoyances is that time_t is 32-bit on 32-bit Linux. On Windows you 
must use localtime_s(), etc. For other needs, I had introduced portable 
versions of gmtime()/mktime() that work with 64-bit integers in 
port/cpl_time.h, so I'd rather use those if we go down that path (though I'm 
not sure how/if they handle leap seconds ;-)).

But honestly, I'm not really enthusiastic about overhauling how we currently 
deal with date/time.


> Parsing strings isn't a big problem either; I've written code that can
> parse a packed-digit string like "20150405193638" into this structure in
> 27 nanoseconds on a computer that is several years old (by comparison, a
> simple string copy of the same string takes 25 nanoseconds). Date
> arithmetic and comparison is also very simple.  You can add the
> 'Precision' field if you need it.
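
For the curious, a digit-by-digit parse along those lines is indeed short; a 
hypothetical sketch (not existing OGR code):

    #include <stdio.h>

    /* Hypothetical sketch: parse a packed-digit timestamp such as
     * "20150405193638" (YYYYMMDDHHMMSS) with plain digit arithmetic,
     * no sscanf()/strtol(), which is why it can be so fast. */
    typedef struct
    {
        int nYear, nMonth, nDay, nHour, nMinute, nSecond;
    } PackedDateTime;

    static int ParseDigits(const char *p, int nCount)
    {
        int nVal = 0;
        for (int i = 0; i < nCount; i++)
            nVal = nVal * 10 + (p[i] - '0');
        return nVal;
    }

    static void ParsePackedDateTime(const char *pszInput, PackedDateTime *psDT)
    {
        psDT->nYear   = ParseDigits(pszInput,      4);
        psDT->nMonth  = ParseDigits(pszInput + 4,  2);
        psDT->nDay    = ParseDigits(pszInput + 6,  2);
        psDT->nHour   = ParseDigits(pszInput + 8,  2);
        psDT->nMinute = ParseDigits(pszInput + 10, 2);
        psDT->nSecond = ParseDigits(pszInput + 12, 2);
    }

    int main(void)
    {
        PackedDateTime sDT;
        ParsePackedDateTime("20150405193638", &sDT);
        printf("%04d-%02d-%02d %02d:%02d:%02d\n",
               sDT.nYear, sDT.nMonth, sDT.nDay,
               sDT.nHour, sDT.nMinute, sDT.nSecond);
        return 0;
    }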

-- 
Spatialys - Geospatial professional services
http://www.spatialys.com

