[pgrouting-dev] Time dependent data and input format
Stephen Woodbridge
woodbri at swoodbridge.com
Tue Jul 19 00:59:35 EDT 2011
On 7/19/2011 12:24 AM, Jay Mahadeokar wrote:
>
>
> On Tue, Jul 19, 2011 at 9:05 AM, Stephen Woodbridge
> <woodbri at swoodbridge.com> wrote:
>
> Jay, Daniel,
>
> I think we need to get our heads around this or it will be very hard
> for Jay to complete this as a useful tool.
>
>
> Jay says:
>
> What I'm not clear on is: how do we package this in the overall
> pgRouting project?
>
>
> This is exactly the problem. So here is my 2 rupees :) on this:
>
>
> Wow, that was innovative! :)
Well, probably not; sorry, that is what happens when I'm too tired and
try to be cute. <sigh>
>
> I think the core algorithm needs to be generic so it works with any
> time frames that someone might want to use.
>
>
> As far as I think, this part is ready.
>
> Then we layer over that application implementations using wrappers
> or additional code. Since most people today are using pgRouting
> for vehicle routing applications, I think we need to implement an
> easy-to-use wrapper that deals with vehicle applications of date and
> time-of-day dependencies. This does not preclude someone else coming
> in later and writing a new wrapper to deal with their application
> working in years or microsecond intervals.
>
>
> Now, a general pgRouting query goes somewhat like this:
>
> 1. Pass parameters to pgRouting query
> 2. Extract data from database according to sql parameter passed
> 3. Generate the graph and other parameters using data extracted
> 4. Pass data and other parameters to internal function
> 5. Function calculates the result
> 6. Convert the result into pgRouting result and return it.
>
> So, we have steps 3,4,5 and 6 ready.
>
> For tdsp, in step 2 we need to query two tables separately. The query
> to the ways table will be generic. The query to the time_dep_costs
> table should take care of the cyclic nature of the data, among other things.
>
> We can have two approaches:
>
> 1. Take a specific case like day and time, and write code that
> handles it. When we bundle this with pgRouting, the query will become
> specific to that format. Whenever a new data format comes along,
> someone will need to re-code step 2. Since this is internal to
> pgRouting, it cannot be done by an application developer; the whole
> thing would need to be recompiled as a separate query before use.
>
> 2. We could implement specific functions that can be used as the query
> to time_dep_costs. Those can be installed as separate pgRouting
> functions themselves. For simple applications where the data is cyclic
> in intervals of 24 hours, we can just provide a PL/pgSQL function and
> it should serve the purpose. I am not good at PL/pgSQL, but I believe
> it is a very flexible tool and can be used for complex data formats as
> well (thoughts?)
>
> I think approach 2 is much better, since it keeps the main tdsp query
> independent of the data format. We just need to write PL/pgSQL
> functions for different data formats and supply those as the sql
> argument to the tdsp query to retrieve data from the time_dep_costs table.
>
> If you think this is a good approach, I can start learning PL/pgSQL in
> detail and work on writing retrieval functions for specific data formats.
>
> Is this model right? I can go ahead and start work on that.
Yes, I think approach 2 is the better model because the application
developer should not be required to recompile anything.
PL/pgSQL is very straightforward, and I think we can all help with the
learning curve, or even code some of it if we have time. If you assume
a set-returning function, then you have inputs and a record structure
for your results. The function takes the inputs and returns a set of
records. So for example:
-- define an output record structure as a type
-- using pgr_ as namespace for the function, change as appropriate
CREATE TYPE pgr_time_dep_costs AS (
    edge_id int,
    time_start float,
    time_stop float,
    avg_speed float,
    transit_time float
);

CREATE OR REPLACE FUNCTION pgr_get_time_dep_costs(schedule_tab text,
    start_date_time timestamp, max_time_duration_sec int, ...)
  RETURNS SETOF pgr_time_dep_costs AS
$BODY$
DECLARE
    rec pgr_time_dep_costs;  -- avoid "out" as a name; OUT is a parameter keyword
BEGIN
    LOOP
        -- compute values of rec from the input, and
        -- EXIT the loop when there are no more rows to return
        rec.edge_id      := ...;
        rec.time_start   := ...;
        rec.time_stop    := ...;
        rec.avg_speed    := ...;
        rec.transit_time := ...;
        RETURN NEXT rec;  -- return one output record
    END LOOP;
    RETURN;  -- exit the procedure
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
This is pretty basic, but it should help get you started.
-Steve
> For this project we need a concrete implementation that allows
> someone to create a table of time-based restrictions of some kind on
> edges and, reasonably easily, say "generate a route starting at X and
> use these time dependent restrictions".
>
> I do not think it needs to deal with every single possible case, but
> it should cover a reasonable set of cases. We can always expand on
> this after GSoC. I think it would be good if, in the documentation, we
> add some notes about things we thought of but did not support, and a
> short paragraph of our thoughts on how to support them in the future.
>
> You need to size this to the time you have available so you can be
> successful. I will let Daniel be the final arbiter on this, as I will
> be out of town again later this week and my time here seems to be on
> overload lately.
>
> The key idea here is layering of tasks, and structure. We need to
> bridge the gap between low-level generic code and higher-level
> application-specific presentation, of which there may be many
> depending on the application in discussion.
>
> Best regards,
> -Steve
>
>
> On 7/18/2011 11:03 PM, Jay Mahadeokar wrote:
>
>
>
> On Mon, Jul 18, 2011 at 11:21 PM, Stephen Woodbridge
> <woodbri at swoodbridge.com> wrote:
>
> Hi Jay,
>
> I think there are a few important points to this problem:
>
> 1. The time-based adjustments need to be stored in real human
> terms, ie: similar to a crontab time specification or similar to the
> Travel Time Analysis you found.
>
> 2. The query to pass the appropriate time dependent info can easily
> be normalized to the route start time by subtracting the route
> start time from the feature start and end times, except when it gets
> complicated, like rolling over from a weekday schedule to a weekend
> schedule on a Friday-to-Saturday transition. Likewise for holiday
> and event schedules.
>
> This seems to be the hard part if there is a complicated schedule.
> So what we need is to define what the inputs and outputs of this
> query are, something like:
>
> INPUTS:
> Start date:
> Start time:
> Max duration time:
> Start location of route:
> Ending location of route:
>
> OUTPUTS:
> Segment id:
> time offset from:
> time offset to:
> avg speed:
> transit time:
> restriction:
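Steve's point 2 above (normalizing feature times to offsets from the route start by subtraction) can be sketched roughly as follows. This is an illustrative Python sketch, not pgRouting code; the function name and the clamping/dropping behavior are my assumptions:

```python
from datetime import datetime

def normalize_to_route_start(route_start, feature_start, feature_end):
    """Express a feature's time window as offsets (seconds) from the
    route start time, by subtracting the route start.

    Windows that ended before departure are dropped (None); windows
    already in progress at departure are clamped to offset 0."""
    offset_from = (feature_start - route_start).total_seconds()
    offset_to = (feature_end - route_start).total_seconds()
    if offset_to <= 0:
        return None  # the window ended before the route starts
    return (max(offset_from, 0.0), offset_to)

# A 9-10 AM restriction, for a route departing Monday 08:00:
depart = datetime(2011, 7, 18, 8, 0)
normalize_to_route_start(depart,
                         datetime(2011, 7, 18, 9, 0),
                         datetime(2011, 7, 18, 10, 0))
# -> (3600.0, 7200.0), i.e. 1 to 2 hours after departure
```

The schedule-rollover cases Steve mentions (weekday/weekend, holidays) would sit on top of this subtraction step, not inside it.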
>
>
> Hi Steve,
>
> Well, this was our initial plan, but we had not kept the query like
> this since we wanted to keep the query general. I think you or Daniel
> had pointed out that we should not restrict our query to time units
> like dates, hours etc., since in some applications the time interval
> could be milliseconds and in others it could be years. So I think if
> we leave retrieval from the time_dep_costs table as an sql query
> parameter, that would be more flexible, as we had concluded earlier.
>
> 3. I assume, because your code only looks at an interval from the
> start, that it is not limited to a restricted time window like 12
> hr, or 24 hr, etc.
>
> more below ...
>
>
> On 7/18/2011 7:49 AM, Jay Mahadeokar wrote:
>
> Hi all,
>
> Hi, I am working on creating wrapper functions for tdsp,
> corresponding to the wrappers with and without bounding box as in
> dijkstra_shortest_paths.
>
> As discussed many times, the TDSP algorithm expects the start_time
> as 0 and all times offset from the start time. For the data I had
> created, I had taken time from 0 to 24, corresponding to 24 hours,
> and assigned different travel_time for different time periods as
> per my intuition.
>
> If the start time is, say, 23:30, our query for retrieving entries
> from time_dep_costs should do the calculation properly, such that
> travel times are taken cyclically.
>
>
> What do you mean by cyclically? I think the times should be linear,
> and repeating if they are cyclical. Is this what you mean?
>
>
> Yes.
>
> For example, if a route starts Monday at 8AM, a segment has a
> 9AM-10AM restriction, and the route max duration is 48 hrs, then we
> should see two records:
>
> ID, From, To, MPH, ...
> 27, 0900, 1000, 25, ...
> 27, 3300, 3400, 25, ...
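One way to materialize the two records Steve describes is to unroll the cyclic window over the route's maximum duration. A small Python sketch, times in hours on a linear axis starting at midnight of the departure day; the function name and the 24-hour period default are illustrative assumptions:

```python
def expand_cyclic_window(win_from, win_to, route_start, max_duration, period=24):
    """Unroll a restriction window that repeats every `period` hours into
    explicit (from, to) records covering the trip, on a linear time axis.
    Windows that end before the route departs are skipped."""
    records = []
    horizon = route_start + max_duration
    t_from, t_to = win_from, win_to
    while t_from < horizon:
        if t_to > route_start:  # window overlaps the trip
            records.append((t_from, t_to))
        t_from += period
        t_to += period
    return records

# Monday 8AM start (hour 8), 48 h max duration, daily 9-10 AM restriction:
expand_cyclic_window(9, 10, 8, 48)
# -> [(9, 10), (33, 34)] -- matching the 0900-1000 and 3300-3400 records
```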
>
>
>
> Instead of keeping this as two records, we can add a parameter
> called is_cyclic to the query. If that is true, then once we run
> out of the range, we can wrap back to zero.
>
> That would need many modifications to the current code, but I guess
> it is manageable.
>
> Would that be ok?
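The is_cyclic idea, wrapping the lookup time back into the stored range instead of materializing repeated records, could look like this. An illustrative Python sketch; the schedule shape and function name are assumptions, not pgRouting API:

```python
def lookup_travel_time(schedule, t, is_cyclic=True, period=24.0):
    """schedule: list of (t_from, t_to, travel_time) tuples covering one
    period (e.g. 0-24 hours). With is_cyclic, any time past the end of
    the range wraps back to the start via t mod period."""
    if is_cyclic:
        t = t % period
    for t_from, t_to, travel_time in schedule:
        if t_from <= t < t_to:
            return travel_time
    raise ValueError("no schedule entry covers t=%r" % t)

# One day of travel times for an edge (hours -> minutes):
day = [(0, 7, 10.0), (7, 10, 25.0), (10, 17, 12.0),
       (17, 20, 22.0), (20, 24, 10.0)]
lookup_travel_time(day, 33.5)  # 33.5 wraps to 9.5 -> 25.0 (morning rush)
```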
>
>
> I'm not even sure how I would compute this; I think this puts a lot
> of burden on the user to figure it out and get it correct. The
> obvious flip side of this would be for us to define a rigid format
> for entering schedule information, where we target support for
> something like 80+% of the common schedule definitions, and then
> figure this out once for everyone. I'm not sure if you feel this
> will fit in your schedule or not?
>
>
> We can surely provide functionality for some common formats. I will
> try to do that within the GSoC timeline if it works out, or we can
> work on it later too. What I'm not clear on is: how do we package
> this in the overall pgRouting project?
>
>
> Thoughts?
>
> -Steve
>
> Now, this query should depend on the format of the time dependent
> data and its units. I think it is best that we leave this task to
> the user, so that he writes a proper sql query such that the values
> are properly retrieved as required by the function.
>
> Do you see any specific cases or data formats where we can provide
> wrapper functions for the above task?
>
>
>
> On Sun, Jun 12, 2011 at 8:26 PM, Stephen Woodbridge
> <woodbri at swoodbridge.com> wrote:
>
> On 6/12/2011 12:33 AM, Jay Mahadeokar wrote:
>
>
>
> On Sun, Jun 12, 2011 at 1:07 AM, Stephen Woodbridge
> <woodbri at swoodbridge.com> wrote:
>
> On 6/11/2011 12:58 PM, Jay Mahadeokar wrote:
>
>
>
> On Thu, Jun 9, 2011 at 9:16 AM, Jay Mahadeokar
> <jai.mahadeokar at gmail.com> wrote:
>
> Hi,
>
> Just a recap to our planned tdsp
> query model:
>
>
>
> I agree. So, now can we assume we have the following database
> tables, as per things discussed till now:
>
> 1. Ways: (On lines of the one in pgrouting-workshop)
>
>    * gid
>    * source
>    * target
>    * name
>    * the_geom
>    * static_length (in case time dependent routing is not used)
>    * any other attributes...
>
> 2. TimeVaryingCosts (Any other suitable name?)
>
>    * gid
>    * start_time_and_date (may divide this into two columns:
>      day_of_week and time_of_day?)
>    * travel_time / arrival_time
>
> 3. TurnRestrictions (time dependent)
>
>    * node/source
>    * time_from
>    * time_to
>    * node_to
>    * ancestors
>
> If only a static shortest path query is made, then only table 1 is
> required.
> If a time-dependent shortest path query is made, tables 1 and 2
> will be used.
> If a time-dependent shortest path query with turn restrictions is
> made, all 3 tables will be used.
>
>
> So, for a general time dependent query, we will fetch the edge
> information from the ways table and the time dependent cost
> information from the TimeDepCosts table. (See previous msg above.)
>
> The query on the ways table can be supplied as usual, as in a
> normal dijkstra shortest_path() query.
>
> How do we form the query on the TimeDepCosts table? I see two
> options:
>
> 1. We generate the query according to the gids obtained from the
> query on the ways table. We will have to query all the time slots
> available, or form a query so that the time window is appropriate
> and we have sufficient travel_time info to reach the destination.
>
> Here we can form a query like:
>
> Select * from TimeDepCosts where gid in (list of gids queried from
> the ways table).
>
> We can allow the user more filtering parameters according to
> start_time, date? But this will make our query specific to the
> database design. As already discussed, we can't predict the time
> units / requirements of applications, so it would be better if we
> keep the query independent of the database design.
>
> That's the reason our weight_map assumes the start time from the
> source as 0, and all travel times offset from that start time. A
> function that will generate such a weight map is to be written.
>
> 2. We allow the user to pass the query for the TimeDepCosts table
> too. So we will have our pgsql function as:
>
> CREATE OR REPLACE FUNCTION shortest_path(
>     ways_sql text,
>     time_dep_cost_sql text,
>     source_id integer,
>     target_id integer,
>     directed boolean,
>     has_reverse_cost boolean)
> RETURNS SETOF path_result
>
>
> This way, the user can have more control over the amount of data
> being queried. We will only consider the time dependent windows
> that fit the query.
>
> An example of the time_dep_cost_sql can be:
>
> Select * from TimeDepCosts where "USER_SUPPLIED_CONSTRAINTS".
>
> The user can now supply his constraints according to the time
> window he needs, start times, holidays etc., according to the
> database table design.
>
> The problem with this is, we cannot filter out edges according to
> bounding box etc., since the the_geom field is not in this table.
> This is because that information would be redundant.
>
> Thoughts? Any idea that combines the merits of both approaches?
>
>
>
> Hi all,
>
> I am writing some background code that will be needed for the
> above functionality.
>
> Any input on the above mentioned point? I suppose I should not
> start coding for any specific approach until we agree upon a
> viable query strategy.
>
>
> Hi Jay,
>
> I always find it easiest to answer these questions by thinking
> about the problem in terms of use cases, so:
>
> A web app, where someone enters a start and destination and some
> constraints like departure time or arrival time, and maybe
> restrictions on modes of travel.
>
> Are there other ones we are trying to support? These should
> probably be documented as part of the project description.
>
>
> I agree, we need to document the use-cases. A
> separate
> thread
> for this
> discussion?
>
>
> Anyway, assuming the above, the application, which presumably has
> some basic knowledge of the time constraints, could do something
> like:
>
> Get the straight-line distance between start and destination,
> double it, and apply some average transportation speed to
> approximate the time window. The numbers can be changed to whatever
> is acceptable to the application, as long as it over-estimates the
> duration of travel. It can then specify the CONSTRAINTS in terms of
> a start and end time. We do something similar with our spatial
> bounding box used to collect the edges needed to build the graph.
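Steve's over-estimation heuristic can be written down directly. A hedged Python sketch; the planar coordinates in km, the 40 km/h default, and the function name are illustrative assumptions (a real application would use great-circle distance on lon/lat and an application-appropriate speed):

```python
import math

def estimate_time_window_hours(start_xy, end_xy,
                               avg_speed_kmh=40.0, slack_factor=2.0):
    """Approximate an upper bound on trip duration: take the straight-line
    distance, double it (slack_factor), and divide by an assumed average
    travel speed. Over-estimating is fine; under-estimating is not."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    straight_km = math.hypot(dx, dy)
    return slack_factor * straight_km / avg_speed_kmh

estimate_time_window_hours((0.0, 0.0), (30.0, 40.0))
# 50 km straight-line -> 2.5 hour window
```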
>
> Now as far as collecting the constraints and making them accessible
> for the Boost_dijkstra, that is another problem. This problem boils
> down to the following:
>
>
> Just a correction: in this case we will not be using boost_dijkstra,
> but instead the core_tdsp() function which we have built ourselves.
>
>
> 1. Is there an acceptable abstraction that we can use internally to
> represent various data sets that we might want to load? Or can we
> define a schema that will work for some set of constraints we wish
> to implement initially?
>
>
> Well, we had agreed upon the following schema for testing purposes:
>
>
> Yes, you are correct; sorry, I was not paying attention when I
> wrote that.
>
> 1. Ways: (On lines of the one in pgrouting-workshop)
>
>    * gid
>    * source
>    * target
>    * name
>    * the_geom
>    * static_length (in case time dependent routing is not used)
>    * any other attributes
>
> 2. TimeDepCosts
>
>    * gid
>    * day_of_week
>    * time_of_day
>    * travel_time / arrival_time
>
> The time_dep_costs table schema can vary according to the
> application, its needs, and the nature of the data available.
>
> 2. Do we want to move this data into some static memory structures
> for the analysis, like we do for the graph, and then dispose of
> them after the analysis?
>
>
> We are doing exactly that! The Weight_Map data structure that is
> supplied to core_tdsp() is designed specifically to fulfil the
> above functionality. It assumes the start time as 0, and all other
> times offset from the start time. So there will be schema-specific
> wrapper functions that will help generate the weight_map data
> structure.
>
> Which brings me back to my original question (a couple of messages
> back): what is the best way to generate the weight_map, i.e.
> retrieve data from the time dependent cost table?
>
> Well, I think for now I am going to go with approach #1, i.e. the
> user will supply his query. This might retrieve unwanted data,
> since we can't
>
>
> Yes, I think this is the way to go. It is up to the user, who
> presumably knows his data, to formulate an efficient query to
> retrieve the needed data.
>
>
> filter by the_geom in table #2. But I think we can refine this by
> adding constraints like: where gid IN
> (list_of_edgeids_retrieved_from_ways)
>
>
> A query something like:
>
> select a.*
>   from table2 a, table1 b
>  where b.the_geom && <BBOX_OF_REGION>
>    and b.gid = a.gid
>    and <TIME_CONSTRAINTS>;
>
> This would allow the planner to pick the best index.
>
>
> So, I guess this approach combines the merits of both approaches
> discussed before. We can provide flexibility to the application as
> well as constrain the number of rows retrieved.
>
> Thoughts?
>
>
> Yes, I think this is a good plan.
>
> I think that the bulk of the roads in the network will probably not
> have time dependent data associated with them. If you look at what
> Navteq and the Traffic people (whoever they are) are doing, you
> will notice that mostly they only track major highways in/out of
> major metropolitan areas. They do not track that information on
> minor roads or in rural areas. So I think that the number of
> segments likely to have time dependent data for any given request
> is going to be around 5% of the total segments. Even if someone
> were to collect, say, GPS cellphone average speeds over a much
> wider range of segments, I still think it is manageable. Anyway, we
> just have to solve the problem; they have to represent that data in
> a way that we can handle quickly, or this will not be a solution
> for them.
>
> -Steve
>
> 3. or do we want to construct a method that can dynamically request
> constraint information from the database when it is needed by a
> visitor? (2. is probably much more efficient and the better way to
> go from a complexity and stability point of view.)
>
> Don't wait on me for additional input as I will be gone through
> June 21st. Keep it simple and modular and we can change or add to
> it in the future, and I'm sure whatever you come up with will be
> fine. After all, it is code and it can be changed if we need to. :)
>
> Project seems to be moving along fine.
>
> -Steve
>
> Project seems to be moving along fine.
>
> -Steve
>
> _______________________________________________
> pgrouting-dev mailing list
> pgrouting-dev at lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/pgrouting-dev
>
>
>
>
> --
> Regards,
> -Jay Mahadeokar
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> pgrouting-dev mailing list
> pgrouting-dev at lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/pgrouting-dev