<html>
<head>
<style type="text/css">
<!--
body { margin-left: 4px; margin-right: 4px; font-variant: normal; margin-top: 4px; margin-bottom: 1px; line-height: normal }
p { margin-top: 0; margin-bottom: 0 }
-->
</style>
</head>
<body style="margin-left: 4px; margin-right: 4px; margin-top: 4px; margin-bottom: 1px">
<p style="margin-bottom: 0; margin-top: 0">
<font face="Comic Sans MS" size="3">Rene,</font> </p>
<br>
<p style="margin-bottom: 0; margin-top: 0">
<font face="Comic Sans MS" size="3">> how could we standardize for those future uses?</font> </p>
<br>
<p style="margin-bottom: 0; margin-top: 0">
<font face="Comic Sans MS" size="3">I was thinking more along the lines of a standard file size more than anything. Nto all deliverable are even able to accomplish capturing a whole contract in a single file, so if even a separation into more than one file is needed, why not come up with some form of sizing standard, I realize that there are uses other than those that I have, but some form of data tiling seems appropriate with this discussion.</font> </p>
<br>
<p style="margin-bottom: 0; margin-top: 0">
<font face="Comic Sans MS" size="3">bobb</font> </p>
<br>
<p style="margin-bottom: 0; margin-top: 0">
<br>
<br>
>>> "René A. Enguehard" <ahugenerd@gmail.com> wrote:<br> </p>
<div style="margin-right: 0; margin-left: 15px; margin-top: 0; border-left: solid 1px #050505; margin-bottom: 0; background-color: #f3f3f3; padding-left: 7px">
<p style="margin-bottom: 0; margin-top: 0">
I agree, primarily because I just got a dataset from the city that was a<br>5 GB raster. I know hard drive space is cheap and so is processing power,<br>but still, it took literally hours to get anything meaningful out of it.<br>Picking a more appropriate resolution, better compression, and eventually<br>switching file formats would have helped immensely but wasn't done, since<br>the prevailing attitude is that bigger is better. This attitude is<br>really the same as in the programming world, where programs keep getting<br>slower and slower (in terms of time complexity) but it's deemed "okay"<br>since computers are also getting faster.<br><br>I don't think this attitude is going to change any time soon, though, and<br>making some form of standard would simply not work. How could we<br>standardize what resolution and compression we should be using on<br>specific datasets for specific applications? There are uses we haven't<br>even thought up yet; how could we standardize for those future uses?<br><br>Just my $0.02<br>R<br><br>Bob Basques wrote:<br>><br>> All,<br>><br>><br>> Ok, I'm probably going to get someone irritated, but here goes . . .<br>><br>><br>> Why not approach this from the other end of the spectrum and work at<br>> making the original files smaller. Work with the providers to make<br>> the images smaller in the first place, or at least come up with a<br>> maximum practical size to work with. I mean, if this is the only (or<br>> biggest) reason for implementing JP2, then getting folks to make<br>> smaller deliverables seems like a better long-term approach.<br>><br>><br>> Here's my reasoning: we're never (ever?) going to hit the top end of<br>> how big files get; resolution just keeps going up and up, so<br>> there is always going to be some upper limit that will need to be<br>> breached somehow. Working out a proper method for segregating the<br>> data up front (dare I say it), as some sort of standard (which can be<br>> adjusted as time passes), will make everything work nicely. Then everyone<br>> will work with the tools that are available; if tools to<br>> handle larger datasets become available, and the community feels there<br>> is a reason/need for these new larger files to be handled, then<br>> they get to change the standard.<br>><br>><br>> bobb<br>><br>><br>><br>><br>><br>><br>> >>> "Fawcett, David" &lt;David.Fawcett@state.mn.us&gt; wrote:<br>><br>><br>> I realize that there are likely not a large number of people who have<br>> the expertise and experience to write this kind of code.<br>><br>> Is this a project that should be shopped around for funding? Google<br>> Summer of Code? A grant from our ~benevolent overlord Google? Some<br>> other foundation or org interested in open data formats?<br>><br>> David.<br>> -----Original Message-----<br>> From: discuss-bounces@lists.osgeo.org<br>> [mailto:discuss-bounces@lists.osgeo.org] On Behalf Of Michael P. Gerlek<br>> Sent: Thursday, August 20, 2009 4:36 PM<br>> To: OSGeo Discussions<br>> Subject: RE: [OSGeo-Discuss] Open File Formats and Proprietary<br>> Algorithms<br>> &lt;snip&gt;<br>><br>><br>> > Do you know why there hasn't been a broader adoption of JP2?<br>><br>> Not through lack of trying on my part :-)<br>><br>> I think the two biggest reasons are:<br>><br>> (1) The algorithms for handling large images in memory really are rocket<br>> science, and no one in the FOSS community has gotten the "itch"<br>> badly enough to go and do the work needed inside the existing<br>> open source packages.
Hopefully someday someone will.<br><br>_______________________________________________<br>Discuss mailing list<br>Discuss@lists.osgeo.org<br><a href="http://lists.osgeo.org/mailman/listinfo/discuss">http://lists.osgeo.org/mailman/listinfo/discuss</a><br>
</p>
</div>
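<br>
<p style="margin-bottom: 0; margin-top: 0">
<font face="Comic Sans MS" size="3">P.S. On René's point above about resolution and compression: that part is nearly a one-liner with GDAL's Python bindings. Another sketch, where the file names and the 50% resample are made up:</font> </p>
<pre>
# Sketch only: downsample and recompress an oversized raster in one pass.
# File names and the 50% figure are illustrative, not from the thread.
from osgeo import gdal

gdal.Translate(
    "city_half_res.tif",
    "city_raster.tif",            # e.g. the 5 GB city dataset mentioned above
    widthPct=50, heightPct=50,    # pick a more appropriate resolution
    creationOptions=["COMPRESS=DEFLATE", "TILED=YES"],
)
</pre>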
</body>
</html>