<div dir="ltr"><div><div>Computing finished successfuly on your data.<br></div>The change I did is :<br>ST_Polygonize(ST_Node(ST_Collect(geom)) )<br><br></div>It is awfully slow tough (1 hour for all you dataset I think)<br><br>Cheers,<br>Rémi-C<br></div><div class="gmail_extra"><br><div class="gmail_quote">2015-02-18 13:07 GMT+01:00 Rémi Cura <span dir="ltr"><<a href="mailto:remi.cura@gmail.com" target="_blank">remi.cura@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div><div><div><div>Hey,<br></div>last chance for you ^^<br><br></div>I successfully was able to remove errors in 2 ways :<br></div> - first your geometry array contains a lot of duplicated geometry. <br> ( rest of this is based on geom_set_id= 1)<br></div> I suspect you forgot a where in an inner join, or something like this. Removing the duplicate in your data seems to solve the problem<br></div><div> (removing duplicate : DISTINCT ON (geom))<br></div><div> I generated each pair of geometry in this set, none gave the error individually (which lessen the chance of a legit GEOS problem)<br></div> - second, without removing the duplicates, a combination of tanslate and snaptogrid(0.5) was sufficient to also remove the error.<br><br></div>I would recommend that you analyse a little bit your data.<br></div>For instance, simply deuplicating on geom make your data set going from <br></div>--15317 geometry in arrays to --9934<br><br></div>Snapping to a 0.5 grid before deduplicating further reduce the data set to 9617 (which might indicates that somewhere in your workflow you have a precision related issue).<br><br></div>I never used ARCGIS, but I can bet you that ARCGIS is not precision-safe.<br></div>The only product that is truly safe is CGAL, which comes with other type of constraints.<br></div>You could also probably use GRASS safely if cleaning the data with v.clean on import.<br><br></div><div>Anyway, there is no safe tool, only safe way to use it. You could very easily create another SRS that is a translation of your original srs to increase precision.<br></div><div>So you simply change your workflow to ST_Transform before computing, then ST_Transform after computing.<br></div><div><br></div>Lastly, I don't see the interest of using an array of geom here because you don't need to be able to do my_array[N] (no need to access).<br>So you could simply use a geometry collection, so your input is a geom, and not a geom[].<br><br></div>Cheers,<br>Rémi-C<br><div><div><div><div><div><div><div><div><br><div><div><br></div></div></div></div></div></div></div></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="h5">2015-02-18 10:39 GMT+01:00 BladeOfLight16 <span dir="ltr"><<a href="mailto:bladeoflight16@gmail.com" target="_blank">bladeoflight16@gmail.com</a>></span>:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr"><div><span>On Mon, Feb 9, 2015 at 6:25 AM, Sandro Santilli <span dir="ltr"><<a href="mailto:strk@keybit.net" target="_blank">strk@keybit.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">First of all I confirm it still happens with GEOS="3.5.0dev-CAPI-1.9.0 r4038".<br>
Second, I took a look at a random set (geom_set_id=1) and I found it pretty<br>
big. That's to say you could probably further reduce the dataset for the<br>
ticket. That set contains 109 polygons, I can get the error by attempting<br>
to union the boundaries of the first 40 in that set, and I'm sure you can<br>
further reduce the input.<br></blockquote><div><br></div></span>Thanks for taking a look. I'll work on doing that when I can find the time, but I don't expect that to be a fast process at all. Even just checking for the pairwise case took a decent amount of time to develop the query and took overnight to finish running (even with optimizations like a bounding box intersection test). I don't really have any good heuristic that could narrow down the possibilies for reproducing, so I don't see much option other than to brute force it possibly with some kind of filter. That's why I didn't put more effort into shrinking the input set to begin with.<br><br></div>Is a PostGIS database dump an okay format to provide the shapes, or would you prefer something else? I suppose I could dump groups into shapefiles or something like that if it's more convenient.<br><div><div class="gmail_extra"><br><div class="gmail_quote"><span>On Mon, Feb 9, 2015 at 7:00 AM, Rémi Cura <span dir="ltr"><<a href="mailto:remi.cura@gmail.com" target="_blank">remi.cura@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>this is a precision related issue, coordinates are way too big and should be translated.</div></div></blockquote><div><br></div></span><div>I understand what you mean by that (as in floating point problems due to size), but these coordinates are pretty typical. This is a standard UTM projection, zone 15N in middle America. It's even predefined in PostGIS' list of spatial references. I'm told ESRI had this kind of problem years ago, but they dealt with it as far as I know. While I would choose PostGIS over ESRI any day, this could be viewed by my coworkers as a good argument against using PostGIS; it represents a serious reliability concern since it applies to a broad range of functions and 2D projections. Basically, if you use any projection with coordinates of this size (of which there are a good number), there seems to be no telling when any function will just blow up in your face at random.<br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote"><span>On Mon, Feb 9, 2015 at 7:04 AM, Rémi Cura <span dir="ltr"><<a href="mailto:remi.cura@gmail.com" target="_blank">remi.cura@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-size:12.8px"><div>Hey,<br></div>I executed your data,<br></div><div style="font-size:12.8px">the following command solve the problem (with very recent GEOS for me)<br>(POSTGIS="2.2.0dev r12846" GEOS="3.5.0dev-CAPI-1.9.0 r0" PROJ="Rel. 4.8.0, 6 March 2012" GDAL="GDAL 2.0.0dev, released 2014/04/16" LIBXML="2.8.0" RASTER)<br></div></div></blockquote><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div> </div></blockquote></span><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div>[Snip] <br></div></blockquote><span><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div> </div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-size:12.8px"></div><div style="font-size:12.8px">The change compared to your approach : convert input to table of simple polygons, (no array, no multi).<br></div><div style="font-size:12.8px">Then translate to improve precision in geos computing<br><div style="font-size:12.8px">Then the union.<br></div><div style="font-size:12.8px">I don't really understand what you are trying to do,<br></div>but ist_union seems dangerous and quit ineffective for that .</div></div></blockquote><div><br></div></span><div>I'm looking at this function call in your code: ST_Union( ST_MakePolygon(ST_ExteriorRing(geom)) ). That call seems to remove all holes and then create the union of all the covered areas (a single multipolygon that covers all areas covered by the originals). That is not what I'm trying to do; I already have an outer boundary that I could use for that purpose.<br><br>Here's what I'm trying to do. I'm trying to create an overlay (as I said in the first sentence of my original e-mail but didn't elaborate on), in the sense that the term is used by these two articles: <a href="http://boundlessgeo.com/2014/10/postgis-training-creating-overlays/" target="_blank">http://boundlessgeo.com/2014/10/postgis-training-creating-overlays/</a> or <a href="http://trac.osgeo.org/postgis/wiki/UsersWikiExamplesOverlayTables" target="_blank">http://trac.osgeo.org/postgis/wiki/UsersWikiExamplesOverlayTables</a>. What they do is they create a sort of Venn diagram, if you will; they take all the polygons and create a new polygon for each area with a different set of overlappying polygons. Both of them use ST_Union in the same way I am: they take a set of linestrings and union them to create a fully noded linestring, and then they use this linework to create a bunch of new polygons. I'm leveraging ST_Boundary instead of ST_ExteriorRing in my code because I need to preserve holes, but that shouldn't change the results as far as I know. In what way is ST_Union "dangerous" and "quite ineffective" for the purpose of noding lingstrings? If that's true, I'm apparently not the only one who has some misconceptions, since both these articles use it the same way. I should also note that I don't show the whole process here; I'm only showing the part where I'm noding the linestrings because that's where this error occurs. Once I have the polygons, I also go back and relate them to the attributed rows in the original tables (of which there are several).<br><br></div><div>That said, I'm trying out the translate and cutting out multi, but I am still using an array in my actual code for a reason. Namely, I need to be able to do this with <i>different</i> sets of geometries. These geometries come as the result of selecting polygons from 3 or 4 different tables, each having completely disparate sets of attributes. So basically, I need to be able to use an arbitrary query to get the
group of polygons to be passed into a reusable overlay process. As a result, a function is a natural fit. I opted to use a normal function that takes an array. The only other option I can think of is some kind of aggregate function, which I didn't investigate doing as I'm not sure whether that might be more or less reliable. If you think an aggregate would be better or know of a better way to accomplish that, I'm all ears.<br></div><span><div style="font-size:12.8px"> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-size:12.8px">Of course reducing the number of useless points before union make it 10 times faster .<br><br>DRoP TABLE IF EXISTS unioned_poly ; <br>CREATE TABLE unioned_poly AS <br>SELECT ST_Union( <br> ST_Buffer(<br> ST_MakePolygon(<br> ST_ExteriorRing(<br> ST_SImplifyPreserveTopology(<br> geom<br> ,10<br> )<br> )<br> )<br> ,1 ) <br> )<br>FROM unique_polygon<br>GROUP BY geom_set_id<br></div><div style="font-size:12.8px">(17 sec)<br></div></div></blockquote><div><br></div></span><div>10 meters is a lot of precision to lose. That's over a tenth of a football field. I might be willing to simplify by 1 meter at most, but even that's a little big for my taste. My client and users expect to see the same shapes that got input come back out.<br><br></div><div>Thanks again to everyone who took a look.<br></div></div></div></div></div>
<br></div></div><span class="">_______________________________________________<br>
postgis-users mailing list<br>
<a href="mailto:postgis-users@lists.osgeo.org" target="_blank">postgis-users@lists.osgeo.org</a><br>
<a href="http://lists.osgeo.org/cgi-bin/mailman/listinfo/postgis-users" target="_blank">http://lists.osgeo.org/cgi-bin/mailman/listinfo/postgis-users</a><br></span></blockquote></div><br></div>
</blockquote></div><br></div>