Page Speed Optimization Libraries
1.6.29.3
net_instaweb::CacheBatcher Class Reference
#include "cache_batcher.h"
Public Member Functions

  CacheBatcher (CacheInterface *cache, AbstractMutex *mutex, Statistics *statistics)
      Does not take ownership of the cache. Takes ownership of the mutex.
  virtual void Get (const GoogleString &key, Callback *callback)
  virtual void Put (const GoogleString &key, SharedString *value)
  virtual void Delete (const GoogleString &key)
  virtual GoogleString Name () const
  virtual bool IsBlocking () const
  int last_batch_size () const
  void set_max_queue_size (size_t n)
      For testing.
  void set_max_parallel_lookups (size_t n)
  int Pending ()
      This is used to help synchronize tests.
  virtual bool IsHealthy () const
  virtual void ShutDown ()

Static Public Member Functions

  static void InitStats (Statistics *statistics)
  static GoogleString FormatName (StringPiece cache, int parallelism, int max)

Static Public Attributes

  static const int kDefaultMaxParallelLookups = 1
  static const size_t kDefaultMaxQueueSize = 1000
Batches up cache lookups to exploit cache implementations that support MultiGet. Up to a fixed limit, outstanding cache lookups are passed through as single-key Gets as soon as they are received, to avoid adding latency. Above that limit, keys and callbacks are queued until one of the outstanding Gets completes; the queued requests are then issued as a single MultiGet.
There is also a maximum queue size. If Gets stream in faster than they are completed and the queue overflows, then we respond with a fast kNotFound.
Note that this class is designed for use with an asynchronous cache implementation. To use this with a blocking cache implementation, please wrap the blocking cache in an AsyncCache.
static void net_instaweb::CacheBatcher::InitStats(Statistics *statistics) [static]
Startup-time (pre-construction) initialization of statistics variables so the correct-sized shared memory can be constructed in the root Apache process.
virtual bool net_instaweb::CacheBatcher::IsBlocking() const [inline, virtual]
Note: CacheBatcher cannot do any batching if given a blocking cache; however, it remains functional, so the blocking bit is passed through.
const int net_instaweb::CacheBatcher::kDefaultMaxParallelLookups = 1 [static]
Only a bounded number of lookups are performed in parallel. Note that this bound is independent of the number of keys in each lookup.
By setting the default at 1, we get maximum batching and minimize the number of parallel lookups we do. Note that independent of this count, there is already substantial lookup parallelism because each Apache process has its own batcher, and there can be multiple Apache servers talking to the same cache.
Further, load tests performed while developing this feature indicated that the best value was 1.
const size_t net_instaweb::CacheBatcher::kDefaultMaxQueueSize = 1000 [static]
We batch up cache lookups until outstanding ones are complete. However, we bound the queue size in order to avoid exhausting memory. When the thread queues are saturated, we drop the requests, calling the callback immediately with kNotFound.