[OSGeo-Discuss] Open File Formats and Proprietary Algorithms

"René A. Enguehard" ahugenerd at gmail.com
Mon Aug 24 08:04:55 PDT 2009


I agree, primarily because I just got a dataset from the city that was a 
5 GB raster. I know hard drive space is cheap and so is processing 
power, but it still took literally hours to get anything meaningful out 
of it. Picking a more appropriate resolution, using better compression, 
and eventually switching file formats would have helped immensely, but 
none of that was done, since the prevailing attitude is that bigger is 
better. It's the same attitude as in the programming world, where 
programs keep getting slower in practice but that's deemed "okay" since 
computers are also getting faster.
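
For what it's worth, the fix is nearly a one-liner once somebody decides 
to do it. Here's a rough sketch using GDAL's Python bindings (a recent 
GDAL; the filenames are made up): resample to half resolution and 
recompress into a tiled GeoTIFF.

    from osgeo import gdal

    gdal.UseExceptions()  # raise exceptions instead of returning error codes

    # Halve the width/height and recompress into a tiled,
    # DEFLATE-compressed GeoTIFF: a fraction of the original size,
    # and tiling means windowed reads no longer drag in whole scanlines.
    gdal.Translate(
        "city_raster_small.tif",   # hypothetical output
        "city_raster.tif",         # the hypothetical 5 GB delivery
        widthPct=50,
        heightPct=50,
        resampleAlg="average",     # average neighbours when shrinking
        creationOptions=["COMPRESS=DEFLATE", "TILED=YES"],
    )

Of course, whether 50% is the "appropriate" resolution depends entirely 
on the application, and that's the rub.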

I don't think this attitude is going to change any time soon, though, 
and making some form of standard simply would not work. How could we 
standardize what resolution and compression to use on specific datasets 
for specific applications? There are uses we haven't even thought up 
yet; how could we standardize for those?

Just my $0.02
R

Bob Basques wrote:
>
> All,
>
>
> Ok, I'm probably going to get someone irritated, but here goes . . .
>
>
> Why not approach this from the other end of the spectrum and work at 
> making the original files smaller?  Work with the providers to make 
> the images smaller in the first place, or at least come up with a 
> maximum practical size to work with.  I mean, if file size is the only 
> (or the biggest) reason for implementing JP2, then getting folks to 
> make smaller deliverables seems like a better long-term approach.
>
>
> Here's my reasoning: we're never (ever?) going to hit the top end on 
> how big files can get, since resolution just keeps going up and up, so 
> there will always be some upper limit that needs to be breached 
> somehow.  Working out a proper method for segregating the data up 
> front (dare I say it), as some sort of standard that can be adjusted 
> as time passes, would make everything work nicely with whatever tools 
> are available.  If tools that handle larger datasets become available, 
> and the community feels there is a reason/need for these new, larger 
> files to be handled, then it gets to change the standard.
>
>
> bobb
>
> >>> "Fawcett, David" <David.Fawcett at state.mn.us> wrote:
>
>
> I realize that there are likely not a large number of people who have
> the expertise and experience to write this kind of code. 
>
> Is this a project that should be shopped around for funding?  Google
> Summer of Code?  A grant from our ~benevolent overlord Google?  Some
> other foundation or org interested in open data formats? 
>
> David.
> -----Original Message-----
> From: discuss-bounces at lists.osgeo.org
> [mailto:discuss-bounces at lists.osgeo.org] On Behalf Of Michael P. Gerlek
> Sent: Thursday, August 20, 2009 4:36 PM
> To: OSGeo Discussions
> Subject: RE: [OSGeo-Discuss] Open File Formats and Proprietary
> Algorithms
> <snip>
>
>
> > Do you know why there hasn't been a broader adoption of JP2?
>
> Not through lack of trying on my part :-)
>
> I think the two biggest reasons are:
>
> (1) The algorithms for handling large images in memory really are rocket
> science, and no one in the FOSS community has gotten the "itch" badly
> enough to go and do the work needed inside the existing open source
> packages.  Hopefully someday someone will.
>
>



