[OpenLayers-Dev] How to handle large datasets?

joe.tito joetito1 at gmail.com
Mon Aug 29 10:25:01 EDT 2011


My company is trying to expand our current application to handle large
data sets. Right now it comfortably handles 2,000-3,000 points, but we'd
like to handle at LEAST 20,000 points. More would be nice, but 20,000 is
our minimum requirement.

*So far we have:*
1) Created our own simplified WFS that only pulls points within the current
map extent (simulates what a BBOX Strategy does). 
2) Enabled a Clustering Strategy to simplify viewing 2,000 or so points at
any one time (a rough sketch of the two strategies wired together follows
this list).
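
For context, the stock OpenLayers 2 way of combining the two strategies looks
roughly like the snippet below; our own simplified WFS stands in for the
standard protocol, and the endpoint, feature type, and namespace here are
just placeholders. It also assumes an existing "map" object.

// Rough sketch only: the BBOX strategy refetches features for the current
// extent, the Cluster strategy groups them client-side for display.
// URL, featureType, and featureNS below are placeholders.
var vectors = new OpenLayers.Layer.Vector("Points", {
    strategies: [
        // refetch when the view moves outside the previously fetched bbox
        new OpenLayers.Strategy.BBOX({ratio: 1.5, resFactor: 2}),
        // group nearby features into single cluster features
        new OpenLayers.Strategy.Cluster({distance: 40, threshold: 3})
    ],
    protocol: new OpenLayers.Protocol.WFS({
        url: "http://example.com/geoserver/wfs",  // placeholder endpoint
        featureType: "points",                    // placeholder type name
        featureNS: "http://example.com/ns"        // placeholder namespace
    })
});
map.addLayer(vectors);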

My question to everyone is: how have others solved this problem? We have a
couple of server-side ideas in mind to "pre-cluster" our data (an illustrative
sketch of that idea follows below), but we want to see how others have tackled
this before we attack it. If anyone has links to other relevant posts or
websites, please share!
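
As a rough illustration of the kind of thing "pre-cluster" could mean,
grid-based aggregation on the server would return one feature per occupied
grid cell instead of every raw point. The sketch below is purely
illustrative; the function and field names are made up, and it is not
necessarily what we'll end up doing.

// Purely illustrative: snap each point to a grid cell sized for the target
// zoom level and return one aggregate point (centroid + count) per cell.
// preCluster, cellSize, and the x/y/count fields are hypothetical names.
function preCluster(points, cellSize) {
    var cells = {};
    points.forEach(function (p) {
        var key = Math.floor(p.x / cellSize) + ":" + Math.floor(p.y / cellSize);
        if (!cells[key]) {
            cells[key] = {sumX: 0, sumY: 0, count: 0};
        }
        cells[key].sumX += p.x;
        cells[key].sumY += p.y;
        cells[key].count += 1;
    });
    // one output point per occupied cell: the cell centroid plus a count
    return Object.keys(cells).map(function (key) {
        var c = cells[key];
        return {x: c.sumX / c.count, y: c.sumY / c.count, count: c.count};
    });
}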


