[postgis-users] design problem

pcreso at pcreso.com
Fri Mar 1 16:05:52 PST 2013


Hi Bob,

This may be of interest.

We do exactly this with some 350,000,000 (& growing) GPS records for instrument readings from the research vessel Tangaroa. Every reading has both a timestamp & a PostGIS position value. For ease of plotting, the points for each day are also aggregated into a linestring, the daily vessel track.
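
For illustration, a rough sketch of that daily aggregation, assuming a points table named readings(obs_time, geom) (hypothetical names, not our actual schema):

    CREATE TABLE daily_tracks (
        track_date date PRIMARY KEY,
        track      geometry(LineString, 4326)
    );

    -- Build one linestring per day from the time-ordered GPS points:
    INSERT INTO daily_tracks (track_date, track)
    SELECT obs_time::date,
           ST_MakeLine(geom ORDER BY obs_time)
    FROM readings
    GROUP BY obs_time::date;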

Each year is a new table, which is a partition of the parent. As users normally query data within a specified interval, the historic partitions also use a clustered index on timestamp, so that blocks read from disk are likely to include multiple required records in each read, further improving performance. The current year's partition is not clustered, so inserts do not require the table to be re-clustered.
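
A rough sketch of the yearly partition setup (hypothetical names, using the inheritance-based partitioning PostgreSQL offers):

    CREATE TABLE readings (
        obs_time timestamptz NOT NULL,
        geom     geometry(Point, 4326),
        timer    integer
    );

    -- One child per year; the CHECK constraint lets the planner skip
    -- partitions outside the queried interval:
    CREATE TABLE readings_2012 (
        CHECK (obs_time >= '2012-01-01' AND obs_time < '2013-01-01')
    ) INHERITS (readings);

    CREATE INDEX readings_2012_time_idx ON readings_2012 (obs_time);

    -- Physically order a closed-out year by timestamp. Postgres does
    -- not maintain this ordering on later inserts, which is why only
    -- the historic partitions are clustered:
    CLUSTER readings_2012 USING readings_2012_time_idx;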

We have added a further field to ease user access: shorter-period queries usually want readings at a fine resolution (1-10 minutes), while long-period queries (20 years, say) may only want daily or hourly readings.

It has worked very well for us for several years now, & we have a GUI sitting on top of it with MapServer, enabling a map-based view of the vessel track, optionally coloured by the selected reading. The data is also downloadable at the user's selected interval over the specified period. This is not formally released as open source, but we have no problem providing the source to anyone who is interested. It was developed by an external contractor, so support is also available if useful. Ditto the db documentation.

The underlying raw data are in netCDF tables generated by the IFREMER TECHSAS application. The PostGIS summary tables, which are adequate for almost all user purposes, are at 1 minute granularity. Each record's timer value encodes the coarsest interval its timestamp falls on (or something like this approach):

    1   - 1 minute   (eg: 12:01)
    2   - 2 minutes  (12:02; 12:04; 12:06; 12:08)
    4   - 5 minutes  (12:05)
    8   - 10 minutes (12:10; 12:20)
    16  - 15 minutes (12:15; 12:45)
    32  - 30 minutes
    64  - 1 hr  (01:00; 03:00; ...)
    128 - 2 hr  (02:00; 04:00; ...)
    256 - 12 hr (12:00)
    512 - midnight (00:00)

A user can then get 12-hourly records simply by selecting where (timer = 256 or timer = 512) for a very fast result, & a five-minute result with where (timer >= 4).

10 minutes is a bit trickier, because users also wanted a 15-minute option, but not hard: where (timer >= 8 and timer != 16).
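
For what it's worth, a rough sketch of how such a flag can be assigned (hypothetical names; our production loader differs):

    CREATE FUNCTION timer_value(t timestamptz) RETURNS integer AS $$
      SELECT CASE
        WHEN date_part('minute', t) = 0 THEN                 -- on the hour
          CASE
            WHEN date_part('hour', t) = 0  THEN 512          -- midnight
            WHEN date_part('hour', t) = 12 THEN 256          -- midday
            WHEN date_part('hour', t)::int % 2 = 0 THEN 128  -- even hours
            ELSE 64                                          -- odd hours
          END
        WHEN date_part('minute', t)::int % 30 = 0 THEN 32
        WHEN date_part('minute', t)::int % 15 = 0 THEN 16
        WHEN date_part('minute', t)::int % 10 = 0 THEN 8
        WHEN date_part('minute', t)::int %  5 = 0 THEN 4
        WHEN date_part('minute', t)::int %  2 = 0 THEN 2
        ELSE 1
      END;
    $$ LANGUAGE sql STABLE;

    -- Assigned once at load time, eg:
    INSERT INTO readings (obs_time, geom, timer)
    VALUES ('2013-03-01 12:15:00+13',
            ST_SetSRID(ST_MakePoint(174.0, -41.3), 4326),
            timer_value('2013-03-01 12:15:00+13'));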

This was developed to meet our user needs, & may or may not be useful for your purposes...  

HTH,

Brent Wood

--- On Sat, 3/2/13, Basques, Bob (CI-StPaul) <bob.basques at ci.stpaul.mn.us> wrote:

From: Basques, Bob (CI-StPaul) <bob.basques at ci.stpaul.mn.us>
Subject: Re: [postgis-users] design problem
To: "PostGIS Users Discussion" <postgis-users at lists.osgeo.org>
Date: Saturday, March 2, 2013, 11:31 AM

Steve,

Could this inherited-tables approach be used to roll up GPS data by time increments, for example?  Maybe roll up the data by day, week or month?  I need to figure out a way to handle queries against potentially millions of records for reporting purposes.

Bobb



>>  -----Original Message-----
>>  From: postgis-users-bounces at lists.osgeo.org [mailto:postgis-users-bounces at lists.osgeo.org] On Behalf Of Stephen Woodbridge
>>  Sent: Friday, March 01, 2013 4:23 PM
>>  To: postgis-users at lists.osgeo.org
>>  Subject: Re: [postgis-users] design problem
>>  
>>  On 3/1/2013 4:44 PM, Andy Colson wrote:
>>  > On 3/1/2013 3:11 PM, Denise Janson wrote:
>>  >> Hi,
>>  >>
>>  >> I have an application that is going to receive lots of
>>  >> georeferenced files every day. Each file has information on
>>  >> several points. Probably in a few years my application will
>>  >> have a terabyte of point information stored.
>>  >>
>>  >> I think I can do this design in two ways:
>>  >>
>>  >> 1. Two tables, one of “uploaded_files” and another of
>>  >> “points” (one uploadedFile to N points). And I'll have to
>>  >> partition the points table, maybe by month …
>>  >> 2. Or I can create one table per file, having thousands of
>>  >> tables in a few years.
>>  >>
>>  >> Which case is better for my application? Is there any better
>>  >> way to do this?
>>  >>
>>  >
>>  > If performance is a concern, and the files are of any
>>  > meaningful size, you might consider leaving them on the
>>  > filesystem and have the table point to them (full path name
>>  > sort of thing).
>>  >
>>  > Storing the file in PG is possible, and it's nice because
>>  > everything is kept together, but if you have to get to and
>>  > read the files fast, then leave them on the filesystem.
>>  >
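>>  > A minimal sketch of that pointer-table idea (made-up names):
>>  >
>>  >     CREATE TABLE uploaded_files (
>>  >         id        serial PRIMARY KEY,
>>  >         file_path text NOT NULL,            -- full path on disk
>>  >         loaded_at timestamptz DEFAULT now()
>>  >     );
>>  >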
>>  > The lots-of-tables approach is problematic if you ever want
>>  > to write queries that look back in time. It's much harder to
>>  > say: give me every record from the beginning of time at this
>>  > point.
>>  >
>>  > With a good index, PG won't have a problem with a single
>>  > table containing billions of rows. Just try to avoid doing
>>  > bulk operations (like update and delete) on the entire table.
>>  >
>>  >
>>  >  > uploadedFile to N points). And I'll have to partition the
>>  >  > points table,
>>  >
>>  >
>>  > Why will you have to partition it?
>>  
>>  You might want to consider using inherited tables. This way
>>  you can have something like:
>>  
>>  master_table
>>      - table1 inherits from master_table
>>      - table2 inherits from master_table
>>      - etc
>>  
>>  This has the advantage that you can set constraints on the
>>  sub-tables like date_from, date_to or other constraints that
>>  you might need in your queries.
>>  
>>  Then when you make your query on the master_table it will
>>  eliminate all the tables that fail the constraint test, and
>>  this is very fast. Also, if you ever need to make ad hoc
>>  queries on the master_table, you still have a structure that
>>  supports that.
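>>  
>>  For example, a rough sketch (names made up):
>>  
>>      CREATE TABLE master_table (
>>          ts   timestamptz NOT NULL,
>>          geom geometry(Point, 4326)
>>      );
>>  
>>      CREATE TABLE table1 (
>>          CHECK (ts >= '2013-01-01' AND ts < '2013-02-01')
>>      ) INHERITS (master_table);
>>  
>>      -- With constraint_exclusion enabled, this only scans the
>>      -- children whose CHECK constraints can match:
>>      SET constraint_exclusion = on;
>>      SELECT * FROM master_table
>>      WHERE ts >= '2013-01-15' AND ts < '2013-01-20';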
>>  
>>  There might be some issues with inheriting 10s of 1000s of tables.
>>  
>>  The real answer to your design question can only be found by
>>  understanding what your queries are going to look like with
>>  respect to all this data.
>>  
>>  -Steve
>>  
