f7ddfe17a3
This speeds things up by ~2x. The vast majority of the time is now spent in `LruCache` moving things around the linked lists. We do this via two things (see the sketch after this list):

1. Don't create a deferred per key during bulk set operations in `DeferredCache`. Instead, only create them if a subsequent caller asks for the key.
2. Add a bulk lookup API to `DeferredCache` rather than use a loop.
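To make the two changes concrete, here is a minimal sketch of the idea in Python. It is not the actual `DeferredCache` implementation; the class and method names (`SketchDeferredCache`, `set_bulk`, `get_bulk`, `_waiters`) are illustrative only, and the real code sits on top of `LruCache` rather than plain dicts.

```python
# Illustrative sketch only, not Synapse's DeferredCache. Completed values go
# straight into a dict on bulk set, and a Deferred is only created when a
# caller asks for a key that is still pending, rather than one per key.
from typing import Dict, Generic, Iterable, List, Tuple, TypeVar

from twisted.internet import defer

KT = TypeVar("KT")
VT = TypeVar("VT")


class SketchDeferredCache(Generic[KT, VT]):
    def __init__(self) -> None:
        self._completed: Dict[KT, VT] = {}
        # Deferreds are allocated lazily, only when someone actually waits
        # on a key that has not completed yet.
        self._waiters: Dict[KT, List[defer.Deferred]] = {}

    def set_bulk(self, entries: Dict[KT, VT]) -> None:
        """Point 1: bulk set writes values directly, with no per-key Deferred."""
        for key, value in entries.items():
            self._completed[key] = value
            # Resolve any Deferreds that readers created while the key
            # was still pending.
            for waiter in self._waiters.pop(key, []):
                waiter.callback(value)

    def get_bulk(self, keys: Iterable[KT]) -> Tuple[Dict[KT, VT], List[KT]]:
        """Point 2: one bulk lookup call instead of a per-key get() loop."""
        hits: Dict[KT, VT] = {}
        misses: List[KT] = []
        for key in keys:
            if key in self._completed:
                hits[key] = self._completed[key]
            else:
                misses.append(key)
        return hits, misses

    def get(self, key: KT) -> defer.Deferred:
        """Single-key lookup; the only place a Deferred is created."""
        if key in self._completed:
            return defer.succeed(self._completed[key])
        d: defer.Deferred = defer.Deferred()
        self._waiters.setdefault(key, []).append(d)
        return d
```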
`__init__.py`
`cached_call.py`
`deferred_cache.py`
`descriptors.py`
`dictionary_cache.py`
`expiringcache.py`
`lrucache.py`
`response_cache.py`
`stream_change_cache.py`
`treecache.py`
`ttlcache.py`