[Benchmarking] data block caching technique

Liujian (LJ) Qian LJ.Qian at oracle.com
Mon Sep 6 16:16:19 EDT 2010


  Or that.  Note, however, that most blocks (as cached by the file 
system) will probably contain data that does not belong to the working 
set.  In other words, I assume the shapefiles do not always store 
geographically adjacent records in the same or neighboring blocks; but 
I don't really know how well clustered the .shp and .dbf files actually 
are.  A rough way to measure that is sketched below.


LJ


On 9/6/2010 4:03 PM, Andrea Aime wrote:
> On Mon, Sep 6, 2010 at 9:24 PM, Liujian (LJ) Qian <LJ.Qian at oracle.com> wrote:
>> Hi,
>> This is just my opinion on how to improve future benchmarks. I think
>> that for the tests to be more realistic, we need to make sure the total
>> amount of hot data that goes through the rendering pipe is much bigger
>> than the amount of available physical memory.  So for instance, on an
>> 8GB box we should be rendering 16GB of raw data for a full run.  My
>> guesstimate for this year's vector working set (from those 2152 query
>> windows) is around 2GB, which, as some team discovered, can be coerced
>> into the OS-level memory cache with some conscious effort from its map
>> server.
> My guess is that it's actually around 6GB. If it were only 2GB, wouldn't
> everyone be CPU bound? It would also explain why reading just a bit more
> keeps the server disk bound instead of eventually breaking it free.
>
> Cheers
> Andrea
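
On the cache-coercion point above: if the goal is to keep a run disk 
bound, it may be worth explicitly evicting the dataset from the OS page 
cache between runs rather than relying on data volume alone. A minimal 
sketch (Linux, Python 3.3+; the data path is hypothetical):

    import os

    def drop_page_cache(path):
        # Ask the kernel to discard clean cached pages for one file;
        # offset=0, length=0 covers the whole file.
        fd = os.open(path, os.O_RDONLY)
        try:
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        finally:
            os.close(fd)

    for ext in (".shp", ".shx", ".dbf"):
        drop_page_cache("/data/roads" + ext)

A blunter alternative is echo 3 > /proc/sys/vm/drop_caches (root only, 
and it drops everything), or simply sizing the dataset well past RAM as 
suggested above.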


