Page Speed Optimization Libraries 1.2.24.1

net_instaweb::CacheBatcher Class Reference

#include "cache_batcher.h"
Public Member Functions

  CacheBatcher (CacheInterface *cache, AbstractMutex *mutex, Statistics *statistics)
      Takes ownership of the cache and mutex.
  virtual void Get (const GoogleString &key, Callback *callback)
  virtual void Put (const GoogleString &key, SharedString *value)
  virtual void Delete (const GoogleString &key)
  virtual const char * Name () const
      The name of this CacheInterface -- used for logging and debugging.
  virtual bool IsBlocking () const
  int last_batch_size () const
  void set_max_queue_size (size_t n)
      For testing.
  void set_max_parallel_lookups (size_t n)
  int Pending ()
      This is used to help synchronize tests.
  virtual bool IsHealthy () const
  virtual void ShutDown ()

Static Public Member Functions

  static void InitStats (Statistics *statistics)

Static Public Attributes

  static const int kDefaultMaxParallelLookups = 1
  static const size_t kDefaultMaxQueueSize = 1000
Batches up cache lookups to exploit implementations that have MultiGet support. Up to a fixed limit, outstanding cache lookups are passed through as single-key Gets as they arrive, to avoid adding latency. Above that limit, keys and callbacks are queued until one of the outstanding Gets completes. When that occurs, the queued requests are passed along as a single MultiGet request.
There is also a maximum queue size. If Gets stream in faster than they are completed and the queue overflows, then we respond with a fast kNotFound.
Note that this class is designed for use with an asynchronous cache implementation. To use this with a blocking cache implementation, please wrap the blocking cache in an AsyncCache.
virtual void net_instaweb::CacheBatcher::Get (const GoogleString &key, Callback *callback) [virtual]
Initiates a cache fetch, calling callback->ValidateCandidate() and then callback->Done(state) when done.
Note: implementations should normally invoke the callback via ValidateAndReportResult, which will combine ValidateCandidate() and Done() together properly.
Implements net_instaweb::CacheInterface.
static void net_instaweb::CacheBatcher::InitStats (Statistics *statistics) [static]
Startup-time (pre-construction) initialization of statistics variables so the correct-sized shared memory can be constructed in the root Apache process.
virtual bool net_instaweb::CacheBatcher::IsBlocking () const [inline, virtual]
Note: CacheBatcher cannot do any batching if given a blocking cache; however, it is still functional, so we pass on the bit.
Implements net_instaweb::CacheInterface.
virtual bool net_instaweb::CacheBatcher::IsHealthy () const [inline, virtual]
Returns true if the cache is in a healthy state. Memory and file-based caches can simply return 'true'. But for server-based caches, it is handy to be able to query to see whether it is in a good state. It should be safe to call this frequently -- the implementation shouldn't do much more than check a bool flag under mutex.
Implements net_instaweb::CacheInterface.
virtual void net_instaweb::CacheBatcher::Put (const GoogleString &key, SharedString *value) [virtual]
Puts a value into the cache. The value that is passed in is not modified, but the SharedString is passed by non-const pointer because its reference count is bumped.
Implements net_instaweb::CacheInterface.
virtual void net_instaweb::CacheBatcher::ShutDown () [virtual]
Stops all cache activity. Further Put/Delete calls will be dropped, and MultiGet/Get will call the callback with kNotFound immediately. Note there is no Enable(); once the cache is stopped it is stopped forever. This function is intended for use during process shutdown.
Implements net_instaweb::CacheInterface.
const int net_instaweb::CacheBatcher::kDefaultMaxParallelLookups = 1 [static]
We are only willing to do a bounded number of parallel lookups. Note that this is independent of the number of keys in each lookup.
By setting the default at 1, we get maximum batching and minimize the number of parallel lookups we do. Note that independent of this count, there is already substantial lookup parallelism because each Apache process has its own batcher, and there can be multiple Apache servers talking to the same cache.
Further, the load-tests performed while developing this feature indicated that the best value was '1'.
const size_t net_instaweb::CacheBatcher::kDefaultMaxQueueSize = 1000 [static]
We batch up cache lookups until outstanding ones are complete. However, we bound the queue size in order to avoid exhausting memory. When the thread queues are saturated, we drop the requests, calling the callback immediately with kNotFound.