[Benchmarking] data block caching technique

Andrea Aime aaime at opengeo.org
Mon Sep 6 16:03:49 EDT 2010


On Mon, Sep 6, 2010 at 9:24 PM, Liujian (LJ) Qian <LJ.Qian at oracle.com> wrote:
> Hi,
> This is just my opinion on how to improve future benchmarks. I think in
> order for the tests to be more realistic, we need to make sure the
> total amount of hot data that goes through the rendering pipe is much
> bigger than the amount of available physical memory. So for instance, on
> an 8GB box we should be rendering 16GB of raw data for a full run. My
> guesstimate for this year's vector working set (from those 2152 query
> windows) is around 2GB, which, as some teams discovered, can be coerced
> into the memory cache (at the OS level) with some conscious effort from
> the map server.

My guess is that it's actually around 6GB. If it were only 2GB, wouldn't
everyone be CPU bound? It would also explain why reading just a bit
more keeps the server disk bound instead of eventually breaking
it free.
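
The sizing rule being discussed (make the hot data at least a couple of
times larger than physical RAM, so the OS page cache cannot absorb the
whole working set) can be sketched as a quick check. This is just an
illustration of the arithmetic; the function name and the 2x factor are
assumptions, not part of any benchmark spec:

```python
GIB = 1 << 30  # one gibibyte in bytes

def dataset_defeats_cache(total_data_bytes, ram_bytes, factor=2.0):
    """True if the raw data is at least `factor` times physical RAM,
    so the OS page cache cannot hold the whole working set and the
    benchmark stays I/O bound rather than cache bound."""
    return total_data_bytes >= factor * ram_bytes

# The example from the thread: an 8GB box should render ~16GB of raw data.
print(dataset_defeats_cache(16 * GIB, 8 * GIB))  # True: cache can't hold it
print(dataset_defeats_cache(2 * GIB, 8 * GIB))   # False: ~2GB fits in cache
```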

Cheers
Andrea
