[postgis-users] design problem

Basques, Bob (CI-StPaul) bob.basques at ci.stpaul.mn.us
Mon Mar 4 07:34:20 PST 2013


Hi Brent,

Very interesting.  I’m getting ready to open source my product as a configuration set of MapServer, GeoMoose, OpenLayers, and Postgres/PostGIS, so maybe there’s some room for cross-pollination here.

We only started collecting data (compared to your description) about 8 months ago.  Our installation is slightly different in that it handles up to 300+ vehicle tracks at a time, with average daily numbers at 120-150 trails.  We’re averaging 1 million new records a month, with snow emergencies increasing that number 2 to 3 times.  We’re just short of 19 million records, which are collected about every 15-20 seconds.  The reporting is staggered from the field to reduce overloading.  We have a few vehicles running at 1 and 3 second reporting intervals, which are a whole other deal.  This may be more the norm moving forward.

We also have a variety of sensors integrated into each device, whose readings are transmitted back as well, so the table structure gets a bit more complicated, but still, we think, in a flexible and manageable manner.

I haven’t actually gotten to the point of separating anything out yet.  Everything is running off the raw table, but I know I’ll need to figure something out eventually.

We have very similar usage needs as well; almost all queries will be time related, which has been fine so far until more than a few days are queried from (100 records or so).  Our desired response times are in the 3-4 sec range, built around the idea of doing on-the-fly reporting via the web.  I understand that I may need to set up some predefined long-running functions at some point, but I’ve avoided it so far.

I’ve pondered all sorts of approaches.  One that’s probably overkill but very automatable would be to use a table/domain structure per vehicle, with separate collection tables per vehicle.  This allows for all sorts of control and granularity down to the individual asset, but seems like overkill on the surface.  It starts to get into some interesting issues with respect to adding new assets automatically as well.

Anyway, enough about us.  How might we make our systems complement each other?

Bobb



From: pcreso at pcreso.com [mailto:pcreso at pcreso.com]
Sent: Friday, March 01, 2013 6:06 PM
To: PostGIS Users Discussion
Cc: Basques, Bob (CI-StPaul)
Subject: Re: [postgis-users] design problem

Hi Bob,

This may be of interest.

We do exactly this with some 350,000,000 (& growing) GPS records for instrument readings from the research vessel Tangaroa. Every reading has both a time & a PostGIS position value. For ease of plotting, the points for each day are also aggregated into a linestring, the daily vessel track.
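For anyone wanting the same daily-track aggregation, a minimal sketch might look like the following (the table and column names gps_readings, obs_time and geom are hypothetical, not taken from Brent's actual schema):

-- Hypothetical names; roll each day's points up into a single
-- daily vessel-track linestring, ordered by observation time.
CREATE TABLE daily_track AS
SELECT obs_time::date                       AS track_date,
       ST_MakeLine(geom ORDER BY obs_time)  AS track_geom
FROM   gps_readings
GROUP  BY obs_time::date;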

Each year is a new table, which is a partition in the parent. As users are normally querying data within a specified interval, the historic partitions also use a clustered index on timestamp, so that blocks read from disk are likely to include multiple required records in each read, further improving performance. The current year's partition does not have a clustered index, so inserts do not require the index to be rebuilt.
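In 2013-era PostgreSQL this kind of yearly partitioning is built from table inheritance plus CHECK constraints (the same pattern Steve describes further down in this thread). A rough sketch, with hypothetical table and column names, one child table per year:

-- Hypothetical names; yearly partitions via inheritance, the idiom
-- used before declarative partitioning existed (PostgreSQL < 10).
CREATE TABLE readings (
    obs_time  timestamptz NOT NULL,
    geom      geometry(Point, 4326),
    value     double precision
);

CREATE TABLE readings_2012 (
    CHECK (obs_time >= '2012-01-01' AND obs_time < '2013-01-01')
) INHERITS (readings);

CREATE INDEX readings_2012_time_idx ON readings_2012 (obs_time);

-- Physically order the closed (historic) partition by timestamp so an
-- interval query touches fewer disk blocks; skip this on the current
-- year's partition that is still receiving inserts.
CLUSTER readings_2012 USING readings_2012_time_idx;

-- With constraint exclusion enabled, a query against the parent skips
-- any child whose CHECK constraint rules it out.
SET constraint_exclusion = partition;
SELECT * FROM readings
WHERE  obs_time BETWEEN '2012-06-01' AND '2012-06-02';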

We have added a further field to ease user access. Shorter-period queries are usually for readings at a fine resolution (1-10 minutes), while long-period queries (20 years, say) may only want daily or hourly readings.

It has worked very well for us for several years now, & we have a GUI sitting on top of it with MapServer enabling a map-based view of the vessel track, optionally coloured by the selected reading. The data is also downloadable at the user's selected interval over the specified period. This is not formally released as Open Source, but we have no problem providing anyone the source if they are interested. It was developed by an external contractor, so support is also available if useful. Ditto the db documentation.

The underlying raw data are in netCDF tables generated by the IFREMER TECHSAS application. The Postgis summary tables, which are adequate for almost all user purposes, are at 1 minute granularity. Records with a timestamp at 1 minute (eg: 12:01) have a timer value of 1, 2 minutes (12:02; 12:04; 12:06; 12:08) are 2, 5 minutes (12:05) are 4, 10 minutes (12:10; 12:20) are 8, 15 minutes (12:15; 12:45) are 16, 30 minutes are 32, 1 hr (01:00; 03:00; ...) are 64, 2 hr (02:00; 04:00; ...) are 128, 12 hr (12:00) are 256, midnight (00:00) are 512 (or something like this approach).

A user can then get 12-hourly records simply by selecting where (timer = 256 or timer = 512) for a very fast result, or a five-minute result with where (timer >= 4).

10 minutes is a bit trickier, because users wanted a 15 minute option, but not hard: where (timer >=8 and timer != 16).
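As a hedged sketch of how such a flag could be assigned and queried (the table and column names readings_1min, id and obs_time are made up, and the exact values mirror Brent's "or something like this" description rather than his real schema):

-- Tag every 1-minute record with the coarsest interval its timestamp
-- falls on, encoded as a power-of-two flag.
ALTER TABLE readings_1min ADD COLUMN timer integer;

UPDATE readings_1min r
SET    timer = CASE
           WHEN m.mins % 1440 = 0 THEN 512  -- midnight
           WHEN m.mins %  720 = 0 THEN 256  -- 12-hourly
           WHEN m.mins %  120 = 0 THEN 128  -- 2-hourly
           WHEN m.mins %   60 = 0 THEN  64  -- hourly
           WHEN m.mins %   30 = 0 THEN  32  -- 30-minute
           WHEN m.mins %   15 = 0 THEN  16  -- 15-minute
           WHEN m.mins %   10 = 0 THEN   8  -- 10-minute
           WHEN m.mins %    5 = 0 THEN   4  -- 5-minute
           WHEN m.mins %    2 = 0 THEN   2  -- 2-minute
           ELSE 1                           -- every minute
       END
FROM  (SELECT id,
              extract(hour   FROM obs_time)::int * 60 +
              extract(minute FROM obs_time)::int AS mins
       FROM readings_1min) m
WHERE  r.id = m.id;

-- The queries Brent mentions:
SELECT * FROM readings_1min WHERE timer = 256 OR timer = 512;  -- 12-hourly
SELECT * FROM readings_1min WHERE timer >= 4;                  -- 5-minute
SELECT * FROM readings_1min WHERE timer >= 8 AND timer <> 16;  -- 10-minute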

This was developed to meet our user needs, & may or may not be useful for your purposes...

HTH,

Brent Wood

--- On Sat, 3/2/13, Basques, Bob (CI-StPaul) <bob.basques at ci.stpaul.mn.us> wrote:

From: Basques, Bob (CI-StPaul) <bob.basques at ci.stpaul.mn.us>
Subject: Re: [postgis-users] design problem
To: "PostGIS Users Discussion" <postgis-users at lists.osgeo.org<mailto:postgis-users at lists.osgeo.org>>
Date: Saturday, March 2, 2013, 11:31 AM
Steve,

Could this process of inherited tables be used to roll up GPS data by time increments for example?  Maybe roll up the data by day, week or month?  I need to figure out a way to handle queries potentially against millions of records for reporting purposes.

Bobb



>>  -----Original Message-----
>>  From: postgis-users-bounces at lists.osgeo.org [mailto:postgis-
>>  users-bounces at lists.osgeo.org] On Behalf Of Stephen Woodbridge
>>  Sent: Friday, March 01, 2013 4:23 PM
>>  To: postgis-users at lists.osgeo.org
>>  Subject: Re: [postgis-users] design problem
>>
>>  On 3/1/2013 4:44 PM, Andy Colson wrote:
>>  > On 3/1/2013 3:11 PM, Denise Janson wrote:
>>  >> Hi,
>>  >>
>>  >> I have an application that is going to receive lots of
>>  >> georeferenced files every day. Each file has information of
>>  >> several points. Probably in a few years my application will
>>  >> have a terabyte of points information stored.
>>  >>
>>  >> I think I can do this design in two ways:
>>  >>
>>  >> 1. Two tables, one of “uploaded_files”, and another of “points”
>>  >> (one uploadedFile to N points). And I'll have to partition the
>>  >> points table, maybe by month … 2. Or I can create one table per
>>  >> file, having thousands of tables in a few years.
>>  >>
>>  >> Which case is better for my application?  Is there any better
>>  >> way to do this?
>>  >>
>>  >
>>  > If performance is a concern, and the files are of any meaningful
>>  > size, you might consider leaving them on the filesystem and have
>>  > the table point to them (full path name sort of thing).
>>  >
>>  > Storing the file in PG is possible, and it's nice because
>>  > everything is kept together, but if you have to get to and read
>>  > the files fast, then leave them on the filesystem.
>>  >
>>  > The lots-of-tables approach is problematic if you ever want to
>>  > write queries that look back in time.  It's much harder to say,
>>  > give me every record from the beginning of time at this point.
>>  >
>>  > With a good index, PG won't have a problem with a single table
>>  > containing billions of rows.  Just try to avoid doing bulk
>>  > operations (like update and delete) on the entire table.
>>  >
>>  >
>>  >  > uploadedFile to N points). And I'll have to partition the
>>  >  > points table,
>>  >
>>  >
>>  > Why will you have to partition it?
>>
>>  You might want to consider using inherited tables. This way you
>>  can have something like:
>>
>>  master_table
>>      - table1 inherits from master_table
>>      - table2 inherits from master_table
>>      - etc
>>
>>  This has the advantage that you can set constraints on the
>>  sub-tables like date_from, date_to or other constraints that you
>>  might need in your queries.
>>
>>  Then when you make your query on the master_table it will
>>  eliminate all the tables that fail the constraint test, and this
>>  is very fast. Also, if you ever need to make ad hoc queries on the
>>  master_table you still have a structure that supports that.
>>
>>  There might be some issues with inheriting 10s of 1000s of tables.
>>
>>  The real answer to your design question can only be found by
>>  understanding what your queries are going to look like with
>>  respect to all this data.
>>
>>  -Steve
>>
>>  _______________________________________________
>>  postgis-users mailing list
>>  postgis-users at lists.osgeo.org
>>  http://lists.osgeo.org/cgi-bin/mailman/listinfo/postgis-users
_______________________________________________
postgis-users mailing list
postgis-users at lists.osgeo.org
http://lists.osgeo.org/cgi-bin/mailman/listinfo/postgis-users



