<div dir="ltr"><br><div class="gmail_quote"><br><div dir="ltr"><div><div><div><div><div><div><div><div>I was finally able to test with your data.<br> <br></div>I worked only on the big polygon with 40k rings.<br></div>First I dumped the rings:<br><img style="margin-right:0px" alt="Images intégrées 1" src="cid:ii_14be015a0ed15799" height="196" width="223"><br><br></div>Then I created a grid, aiming for about 300k cells, which fits in my memory:<br><img style="margin-right:0px" alt="Images intégrées 2" src="cid:ii_14be01663762d9e4" height="126" width="190"><br><br></div>Then comes another dump: every segment (pair of successive points) of every ring is put into a table.<br></div>This is the special trick that makes computing the intersection geometry so fast. The table is annoyingly big (2.4 million rows).<br><img style="margin-right:0px" alt="Images intégrées 3" src="cid:ii_14be019e3bd01b2b" height="160" width="190"><br><br><br></div>The next step is crucial: we find the mapping between the segments and the grid (which segment is in which cell).<br></div>We naturally keep every segment that crosses a cell, so we don't have any precision issues.<br><div><img style="margin-right:0px" alt="Images intégrées 1" src="cid:ii_14be01cff8c6fbe7" height="173" width="190"><br><br></div><div>Then comes the geometry crunching:<br></div><div>for each cell, we group the segments and use ST_Polygonize to compute the possible surfaces (parts of squares).<br></div><div>This is the step that would be very slow without the trick of dumping rings and then dumping segments.<br></div><div><img style="margin-right:0px" alt="Images intégrées 2" src="cid:ii_14be0241a1fa68a1" height="145" width="190"><br><br><br></div><div><br></div><div>We then create a "squarized" version of the rings (conceptually, each ring is replaced with a generalized version).<br></div><div>The idea is simple: on simple cells we can use the squarized version of the rings, which is a lot faster than using the actual geometry.<br></div><div><img style="margin-right:0px" 
alt="Images intégrées 3" src="cid:ii_14be02e2430840a5" height="162" width="224"><br><br>Then comes what should be a major speed-up, but it doesn't speed things up in your case.<br></div><div>This is because there are very few "simple" cells: the inner rings almost fill up the polygon.<br></div><div>So the squarized rings allow us to rule out only a small number of cells.<br><img style="margin-right:0px" alt="Images intégrées 4" src="cid:ii_14be0303054bf10c" height="224" width="156"><br><br></div><div>Now we have simplified the problem as much as possible.<br></div>Up to this point it is very fast.<br><br></div>The last part (and I haven't found how to accelerate it) is to decide whether each part of a square is inside the outer ring and outside the inner rings, or not.<br><div><div><div><div><div><br></div><div><br></div><div><div><div><br><div><div><br><div><div><div>Cheers,</div><div>Rémi-C</div></div></div></div></div></div></div></div></div></div></div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">2015-02-28 10:05 GMT+01:00 Mark Wynter <span dir="ltr"><<a href="mailto:mark@dimensionaledge.com" target="_blank">mark@dimensionaledge.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div>I will post tomorrow <br><br>Sent from my iPhone</div><div><div><div><br>On 28 Feb 2015, at 5:54 pm, John Abraham <<a href="mailto:jea@hbaspecto.com" target="_blank">jea@hbaspecto.com</a>> wrote:<br><br></div><blockquote type="cite"><div>That is so awesome. That makes total sense to dump the rings and do them each separately. <div><br></div><div>I have a client visiting the office Monday and Tuesday, so can't work on this much until Wednesday. 
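<div><br></div><div>The ring/segment pipeline Rémi describes above can be sketched roughly in SQL as follows. All table and column names (big_polygons, grid_cells, and so on) are assumed for illustration, and a real implementation would also need to node the segments with the cell boundaries (e.g. with ST_Node) before polygonizing; this is a sketch, not Rémi's actual code.</div><div><pre>
-- Sketch only: names are assumed, not Rémi's actual code.
-- 1. Dump every ring of every polygon (ST_DumpRings returns one polygon per ring).
CREATE TABLE rings AS
SELECT row_number() OVER () AS ring_id,
       ST_ExteriorRing((ST_DumpRings(geom)).geom) AS geom   -- ring as a linestring
FROM big_polygons;

-- 2. Dump every segment (pair of successive points) of every ring.
CREATE TABLE segments AS
SELECT r.ring_id,
       ST_MakeLine(ST_PointN(r.geom, n), ST_PointN(r.geom, n + 1)) AS geom
FROM rings r
CROSS JOIN LATERAL generate_series(1, ST_NPoints(r.geom) - 1) AS n;

-- 3. Map segments to grid cells: a segment belongs to every cell it crosses.
CREATE TABLE segment_cell AS
SELECT c.cell_id, s.geom
FROM grid_cells c
JOIN segments s ON ST_Intersects(c.geom, s.geom);

-- 4. Per cell, polygonize the segments together with the cell boundary
--    (in practice the inputs must first be noded together).
SELECT cell_id, (ST_Dump(ST_Polygonize(geom))).geom AS geom
FROM (SELECT cell_id, geom FROM segment_cell
      UNION ALL
      SELECT cell_id, ST_Boundary(geom) FROM grid_cells) pieces
GROUP BY cell_id;
</pre></div>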
</div><div><br></div><div>Are you going to post to the list?</div><div><br></div><div>--</div><div>John</div><div><br><div><blockquote type="cite"><div>On Feb 27, 2015, at 10:23 PM, Mark Wynter <<a href="mailto:mark@dimensionaledge.com" target="_blank">mark@dimensionaledge.com</a>> wrote:</div><br><div><div style="word-wrap:break-word"><div>Hi John</div><div><br></div><div>I have a version 2 tutorial solution that tiles lc_class34 (not 32, my mistake earlier). </div><div>This involves using ST_DumpRings and using the path value to distinguish between exterior and interior rings.</div><div>We tile and then use ST_Difference to remove the negative space of the interior rings.</div><div><br></div><div>I’ve experimented with different tile sizes on a 2 vCPU machine. </div><div>32000 x 32000 tiles - Total run time = 170 seconds </div><div>8000 x 8000 tiles - Total run time = 845 seconds </div><div>4000 x 4000 tiles - Total run time = 2408 seconds</div><div><br></div><div>Could probably go smaller again. The smaller the tile size, the quicker your downstream ST_Intersection queries will be with your polygon overlay.</div><div><br></div><div>One way to offset the increase in tiling time (to produce smaller tiles) is to parallelise over more CPUs. </div><div>The alternative to more CPUs is to use a quadgrid tiling algorithm which recursively subdivides tiles into smaller tiles if, say, ST_NPoints exceeds a certain threshold. </div><div><br></div><div>I use quadgrids quite a bit - an example is on the front slider at <a href="http://www.dimensionaledge.com/" target="_blank">www.dimensionaledge.com</a>. 
The quadgrid SQL algorithm sits in my private code repo and I’m keeping it close to my chest (for now).</div><div><br></div><div><br></div><div><span><Screen Shot 2015-02-28 at 3.50.24 pm.png></span><span><Screen Shot 2015-02-28 at 3.50.58 pm.png></span></div><br><div><div>On 26 Feb 2015, at 4:19 am, John Abraham <<a href="mailto:jea@hbaspecto.com" target="_blank">jea@hbaspecto.com</a>> wrote:</div><br><blockquote type="cite">Wow guys. 131 seconds is a lot faster than 10 days.<br><br>Remi, if you can work out a fast intersection method that uses points and lines as a preprocessor before dealing with the polygons, I think it would be a great addition to PostGIS. ST_Intersection in PostGIS is often quite a bit slower than the implementations in Geomedia, MapInfo, and (I hear) ArcGIS, so any functionality that results in speed improvements would be great.<br><br>Mark, I can't wait to figure out why your system was fast! I was following your (preliminary) tutorial and gridding the data was progressing very slowly. <br><br>I have a provincial boundary file, but there seems to be much ambiguity in GIS representations of the provincial boundary, so I won't send you the one I have. I can try to assemble one from other sources.<br><br>--<br>John Abraham<br><a href="mailto:jea@hbaspecto.com" target="_blank">jea@hbaspecto.com</a><br>403-232-1060<br><br>On Feb 25, 2015, at 6:28 AM, Mark Wynter <<a href="mailto:mark@dimensionaledge.com" target="_blank">mark@dimensionaledge.com</a>> wrote:<br><br><blockquote type="cite">Hi John<br><br>I’ve just crunched your whole dataset. The process takes 131 seconds for the vector tiling (using a 16 CPU machine), plus another 170 seconds for data prep at the start, including making the polygons valid.<br>For a 2 CPU machine it will take circa 15 minutes, or double that using a single CPU.<br><br>Only one small issue outstanding - and it relates to clipping the regular grid prior to tiling. 
For clipping I used the union of the abmiw2wlcv_48tiles as supplied with the data - the problem is that the abmiw2wlcv_48tiles are rough and ready, which produces voids. The voids using my method unfortunately get the same feature class as lc_class = 32. You’ll see this clearly in the second screenshot.<br>The way around this is to clip the regular grid using a high-res shapefile of the Alberta provincial boundary prior to tile-crunching the lancover_polygons_2010 table. This is easily done - I just didn’t have the boundary data.<br><br>I need to get some sleep. John, Remi, I will share the code with you tomorrow. For others, I’ll be posting a tutorial that steps through the methods touched upon in this thread.<br><br>John, the only difference between my tutorial and its application to your land cover data was a few tweaks to the data prep stage. Otherwise it is the same code pattern (no modification at all needed to the worker_function). It was great to test the code with your data.<br><br>Speak tomorrow.<br>Mark<br><br><br><br><br><Screen Shot 2015-02-25 at 11.29.15 pm.png><br><br><br><Screen Shot 2015-02-25 at 11.39.04 pm.png><br><br><br><Screen Shot 2015-02-25 at 11.41.41 pm.png><br><br></blockquote><br></blockquote></div><br></div></div></blockquote></div><br></div></div></blockquote></div></div></div></blockquote></div><br></div>
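<div>The ring-aware tiling Mark describes earlier in the thread (ST_DumpRings with the path value, then ST_Difference to remove the hole space) can be sketched as below. The path returned by ST_DumpRings is {0} for the exterior ring and {n} for interior rings; the table names (landcover_polys, tiles) are assumed, and this is a sketch rather than Mark's actual worker_function.</div><div><pre>
-- Sketch only: table names are assumed.
WITH rings AS (
  SELECT gid,
         (ST_DumpRings(geom)).path[1] AS ring_pos,   -- 0 = exterior, >0 = hole
         (ST_DumpRings(geom)).geom    AS geom
  FROM landcover_polys
),
tiled AS (
  -- tile the exterior rings
  SELECT r.gid, t.tile_id, ST_Intersection(r.geom, t.geom) AS geom
  FROM rings r
  JOIN tiles t ON ST_Intersects(r.geom, t.geom)
  WHERE r.ring_pos = 0
)
-- remove the negative space of the interior rings
SELECT td.gid, td.tile_id,
       COALESCE(ST_Difference(td.geom, ST_Union(h.geom)), td.geom) AS geom
FROM tiled td
LEFT JOIN rings h
       ON h.gid = td.gid AND h.ring_pos > 0 AND ST_Intersects(h.geom, td.geom)
GROUP BY td.gid, td.tile_id, td.geom;
</pre></div>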
</div></div></div><br></div>
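<div>Mark's quadgrid SQL is private, but the general idea he mentions - recursively splitting any tile whose contents exceed an ST_NPoints budget into four quadrants - can be sketched generically. Everything below (the polys table, seed extent, SRID, depth limit, and vertex threshold) is assumed purely for illustration and is not Mark's implementation.</div><div><pre>
-- Generic quadtree-tiling sketch, not Mark's implementation.
-- Split a tile into four quadrants while it covers too many vertices.
WITH RECURSIVE quad(depth, geom) AS (
  SELECT 0, ST_MakeEnvelope(0, 0, 32000, 32000, 0)   -- seed extent/SRID assumed
  UNION ALL
  SELECT q.depth + 1, sub.geom
  FROM quad q
  CROSS JOIN LATERAL (
    SELECT ST_MakeEnvelope(ST_XMin(q.geom) + dx * s.w,
                           ST_YMin(q.geom) + dy * s.h,
                           ST_XMin(q.geom) + (dx + 1) * s.w,
                           ST_YMin(q.geom) + (dy + 1) * s.h,
                           ST_SRID(q.geom)) AS geom
    FROM (SELECT (ST_XMax(q.geom) - ST_XMin(q.geom)) / 2 AS w,
                 (ST_YMax(q.geom) - ST_YMin(q.geom)) / 2 AS h) s,
         generate_series(0, 1) AS dx, generate_series(0, 1) AS dy
  ) sub
  WHERE q.depth &lt; 8                                 -- depth limit assumed
    AND (SELECT COALESCE(sum(ST_NPoints(p.geom)), 0)
         FROM polys p WHERE ST_Intersects(p.geom, q.geom)) > 10000
)
SELECT geom FROM quad;   -- filtering to leaf tiles only is omitted for brevity
</pre></div>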