Commit Graph

63 Commits

Author SHA1 Message Date
V02460 affcc2cc36 Fix LruCache callback deduplication (#6213) 2019-11-07 09:43:51 +00:00
Erik Johnston e6c7e239ef Update docstring 2019-10-29 11:48:30 +00:00
Erik Johnston d0d8a22c13 Quick fix to ensure cache descriptors always return deferreds 2019-10-28 13:33:04 +00:00
Amber Brown 864f144543
Fix up some typechecking (#6150)
* type checking fixes

* changelog
2019-10-02 05:29:01 -07:00
Richard van der Hoff 618bd1ee76
Fix some error cases in the caching layer. (#5749)
There was some inconsistent behaviour in the caching layer around how
exceptions were handled - particularly synchronously-thrown ones.

This seems to be most easily handled by pushing the creation of
ObservableDeferreds down from CacheDescriptor to the Cache.
2019-07-25 15:59:45 +01:00
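A minimal sketch of the shape this change gives the caching layer, using plain Twisted deferreds instead of Synapse's ObservableDeferred; the class and method names here are illustrative, not the real ones:

```python
from twisted.internet import defer
from twisted.python import failure


class SketchCache:
    """Toy cache: one in-flight lookup per key, shared by every caller."""

    def __init__(self):
        self._pending = {}   # key -> list of Deferreds waiting on the lookup
        self._values = {}    # key -> completed value

    def get_or_fetch(self, key, fetch):
        if key in self._values:
            return defer.succeed(self._values[key])
        if key in self._pending:
            waiter = defer.Deferred()
            self._pending[key].append(waiter)
            return waiter

        self._pending[key] = []
        # maybeDeferred turns a synchronously-thrown exception into a failed
        # Deferred, so sync and async errors take exactly the same path.
        d = defer.maybeDeferred(fetch)

        def _done(result):
            waiters = self._pending.pop(key, [])
            if isinstance(result, failure.Failure):
                for w in waiters:
                    w.errback(result)     # failures are not cached
            else:
                self._values[key] = result
                for w in waiters:
                    w.callback(result)
            return result

        d.addBoth(_done)
        return d
```

Because `maybeDeferred` funnels synchronous exceptions into the same errback path, the cache no longer needs separate handling for them.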
Richard van der Hoff 418635e68a
Add a prometheus metric for active cache lookups. (#5750)
* Add a prometheus metric for active cache lookups.

* changelog
2019-07-24 11:33:13 +01:00
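A rough sketch of this kind of metric using prometheus_client directly; the metric and label names are illustrative rather than the ones Synapse registers:

```python
from prometheus_client import Gauge

# Illustrative metric/label names, not the ones Synapse actually exports.
pending_lookups = Gauge(
    "cache_pending_lookups",
    "Number of lookup functions currently running for a cache",
    ["cache_name"],
)


def instrumented_lookup(cache_name, lookup, *args):
    gauge = pending_lookups.labels(cache_name)
    gauge.inc()          # lookup started
    try:
        return lookup(*args)
    finally:
        gauge.dec()      # lookup finished; for a deferred-returning lookup
                         # you would decrement when the deferred fires instead
```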
Amber Brown 4806651744
Replace returnValue with return (#5736) 2019-07-23 23:00:55 +10:00
Amber Brown 463b072b12
Move logging utilities out of the side drawer of util/ and into logging/ (#5606) 2019-07-04 00:07:04 +10:00
Amber Brown 32e7c9e7f2
Run Black. (#5482) 2019-06-20 19:32:02 +10:00
Amber Brown b37c472419
Rename async to async_helpers because `async` is a keyword on Python 3.7 (#3678) 2018-08-10 23:50:21 +10:00
Richard van der Hoff a8cbce0ced fix invalidation 2018-07-27 16:17:17 +01:00
Richard van der Hoff f102c05856 Rewrite cache list decorator
Because it was complicated and annoyed me. I suspect this will be more
efficient too.
2018-07-27 13:47:04 +01:00
Amber Brown 49af402019 run isort 2018-07-09 16:09:20 +10:00
Erik Johnston 042eedfa2b Add hacky cache factor override system 2018-06-04 15:39:28 +01:00
Amber Brown c936a52a9e
Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (#3307) 2018-05-31 19:03:47 +10:00
Amber Brown debff7ae09
Merge pull request #3281 from NotAFile/py3-six-isinstance
remaining isinstance fixes
2018-05-30 12:44:46 +10:00
Adrian Tschira dd068ca979 remaining isinstance fixes
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 20:55:08 +02:00
Amber Brown 85ba83eb51 fixes 2018-05-22 16:28:23 -05:00
Amber Brown df9f72d9e5 replacing portions 2018-05-21 19:47:37 -05:00
Richard van der Hoff 01afc563c3 Fix overzealous cache invalidation
Fixes an issue where a cache invalidation would invalidate *all* pending
entries, rather than just the entry that we intended to invalidate.
2018-04-05 16:24:04 +01:00
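The distinction this fix draws, in miniature (illustrative names):

```python
class PendingCache:
    """Toy cache of in-flight lookups, keyed per entry."""

    def __init__(self):
        self._pending = {}            # key -> in-flight lookup

    def invalidate(self, key):
        # Buggy behaviour: self._pending.clear() drops *all* pending entries.
        # Fixed behaviour: drop only the entry we actually meant to invalidate.
        self._pending.pop(key, None)
```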
Richard van der Hoff bc496df192 report metrics on number of cache evictions 2018-02-05 15:34:01 +00:00
Erik Johnston b5e8d529e6 Define CACHE_SIZE_FACTOR once 2017-07-04 09:56:44 +01:00
Erik Johnston bd7bb5df71 Pull out if statement from for loop 2017-05-22 15:12:19 +01:00
Erik Johnston e3417a06e2 Update list cache to handle one arg case
We updated the normal cache descriptors to handle caches with a single
argument specially, so that the key isn't a 1-tuple. We need to update
the cache list to be aware of this.
2017-05-22 15:04:42 +01:00
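A hypothetical sketch of the keying rule this refers to; the helper name is made up for illustration:

```python
def make_cache_key(args, num_args):
    """Build the key the way the single-argument optimisation expects."""
    if num_args == 1:
        return args[0]                 # e.g. "!room:example.org", not a 1-tuple
    return tuple(args[:num_args])

# A cachedList-style batch lookup must build its per-item keys the same way,
# or it will miss (and duplicate) the entries the plain descriptor stored.
```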
Erik Johnston ffad4fe35b Don't update event cache hit ratio from get_joined_users
Otherwise the hit ratio of plain get_events gets completely skewed by
calls to get_joined_users* functions.
2017-05-08 16:06:17 +01:00
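One hedged way to express that, with an opt-out flag whose name here is purely illustrative:

```python
def cache_get(cache, metrics, key, default=None, update_metrics=True):
    """Look up `key`, optionally without touching the hit/miss counters."""
    if key in cache:
        if update_metrics:
            metrics["hits"] += 1
        return cache[key]
    if update_metrics:
        metrics["misses"] += 1
    return default

# Bulk internal callers (the get_joined_users* paths) would pass
# update_metrics=False so the plain get_events hit ratio stays meaningful.
```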
Erik Johnston d2d8ed4884 Optimise caches with single key 2017-05-04 14:18:46 +01:00
Erik Johnston 119cb9bbcf Reduce cache size by not storing deferreds
Currently the cache descriptors store deferreds rather than raw values,
this is a simple way of triggering only one database hit and sharing the
result if two callers attempt to get the same value.

However, there are a few caches that simply store a mapping from string
to string (or int). These caches can have a large number of entries,
under the assumption that each entry is small. However, the size of a
deferred (specifically the size of ObservableDeferred) is significantly
larger than that of the raw value: roughly 2KB vs 32 bytes.

This PR therefore changes the cache descriptors to store the raw values
rather than the deferreds.

As a side effect, cached storage functions now return either a deferred or
the actual value, as the cached list descriptor already does. This is
fine, as we always end up just yielding on the returned value
eventually, which handles that case correctly.
2017-04-25 10:23:11 +01:00
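A simplified sketch of the resulting shape (names illustrative): completed results live as plain values, and only in-flight lookups are held as deferreds:

```python
class ValueCache:
    def __init__(self):
        self._values = {}    # key -> raw value (small, long-lived)
        self._pending = {}   # key -> Deferred for an in-flight lookup

    def set(self, key, deferred):
        self._pending[key] = deferred

        def _completed(value):
            self._pending.pop(key, None)
            self._values[key] = value      # keep only the small raw value
            return value

        def _failed(f):
            self._pending.pop(key, None)   # don't cache failures
            return f

        deferred.addCallbacks(_completed, _failed)

    def get(self, key):
        # Returns either a raw value or a deferred; callers yield on the
        # result either way, so both shapes are handled uniformly.
        if key in self._values:
            return self._values[key]
        return self._pending.get(key)
```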
Erik Johnston 4d17add8de Remove unused instance variable 2017-03-31 09:38:27 +01:00
Erik Johnston 6194a64ae9 Doc new instance variables 2017-03-30 14:19:10 +01:00
Erik Johnston 014fee93b3 Manually calculate cache key as getcallargs is expensive
This is because getcallargs recomputes the getargspec, amongst other
things, which we don't need to do as it's already been done.
2017-03-30 14:14:46 +01:00
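Roughly what that optimisation looks like; a sketch assuming a method whose first parameter is `self`, with illustrative names:

```python
import inspect


def make_key_builder(func, num_args):
    """Inspect `func` once at decoration time, then build cache keys with
    cheap positional logic instead of calling inspect.getcallargs() on
    every cache access."""
    spec = inspect.getfullargspec(func)
    arg_names = spec.args[1 : num_args + 1]          # skip `self`
    defaults = dict(
        zip(spec.args[len(spec.args) - len(spec.defaults or ()):],
            spec.defaults or ())
    )

    def build_key(args, kwargs):
        # `args` here are the call's positional arguments after `self`.
        key = []
        for i, name in enumerate(arg_names):
            if i < len(args):
                key.append(args[i])
            elif name in kwargs:
                key.append(kwargs[name])
            else:
                key.append(defaults[name])
        return tuple(key)

    return build_key
```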
Erik Johnston 86780a8bc3 Don't convert to deferreds when not necessary 2017-03-30 14:14:36 +01:00
Richard van der Hoff f9b4bb05e0 Fix the logcontext handling in the cache wrappers (#2077)
The cache wrappers had a habit of leaking the logcontext into the reactor while
the lookup function was running, and then not restoring it correctly when the
lookup function had completed. It's all the fault of
`preserve_context_over_{fn,deferred}` which are basically a bit broken.
2017-03-30 13:22:24 +01:00
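A toy illustration of the rule being broken here, using a plain module-level variable rather than Synapse's real LoggingContext machinery:

```python
from twisted.internet import defer

current_context = "sentinel"           # stand-in for the per-request logcontext


def run_lookup_with_context(lookup):
    global current_context
    caller_ctx = current_context

    d = defer.maybeDeferred(lookup)    # the lookup runs in the caller's context

    # Reset to the sentinel before handing control back to the reactor, so
    # unrelated reactor callbacks do not inherit ("leak") the caller's context.
    current_context = "sentinel"

    def restore(result):
        global current_context
        current_context = caller_ctx   # put the caller's context back
        return result

    d.addBoth(restore)
    return d
```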
Richard van der Hoff 95f21c7a66 Fix caching of remote servers' signature keys
The `@cached` decorator on `KeyStore._get_server_verify_key` was missing
its `num_args` parameter, which meant that it was returning the wrong key for
any server which had more than one recorded key.

By way of a fix, change the default for `num_args` to be *all* arguments. To
implement that, factor out a common base class for `CacheDescriptor` and `CacheListDescriptor`.
2017-03-22 15:11:30 +00:00
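An illustrative (not Synapse's actual) decorator showing why the `num_args` default matters:

```python
def cached(num_args=None):
    def decorator(func):
        cache = {}

        def wrapper(self, *args):
            # Default: key on *all* arguments, not just the first few.
            n = len(args) if num_args is None else num_args
            key = args[:n]
            if key not in cache:
                cache[key] = func(self, *args)
            return cache[key]

        return wrapper

    return decorator

# With num_args=1, a lookup for ("server", "key_id_2") would wrongly hit the
# entry stored for ("server", "key_id_1"); keying on all arguments avoids
# that class of bug.
```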
Erik Johnston 6b61060b51 Comment 2017-02-02 14:47:15 +00:00
Erik Johnston 9efcc3f3be Comment 2017-02-02 13:50:22 +00:00
Erik Johnston 37b4c7d8a9 Fix typo in return type 2017-01-17 14:43:32 +00:00
Erik Johnston d6c75cb7c2 Rename and comment tree_to_leaves_iterator 2017-01-17 11:47:03 +00:00
Erik Johnston f85b6ca494 Speed up cache size calculation
Instead of recalculating the size of the cache repeatedly, which can take
a long time now that it can use a callback, cache the size and
update it on insertion and deletion.

This requires changing the cache descriptors to have two caches, one for
pending deferreds and the other for the actual values. There's no reason
to evict from the pending deferreds as they won't take up any more
memory.
2017-01-17 11:18:13 +00:00
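A hedged sketch of the bookkeeping this describes; the class is illustrative, not Synapse's LruCache:

```python
class SizedCache:
    """Keeps a running size total instead of re-walking the whole cache."""

    def __init__(self, size_callback=None):
        self._data = {}
        self._sizes = {}
        self._size_callback = size_callback or (lambda value: 1)
        self.total_size = 0

    def __setitem__(self, key, value):
        self.pop(key)                          # drop any old entry's cost first
        cost = self._size_callback(value)
        self._data[key] = value
        self._sizes[key] = cost
        self.total_size += cost                # adjust on insertion...

    def pop(self, key, default=None):
        if key in self._data:
            self.total_size -= self._sizes.pop(key)   # ...and on deletion
            return self._data.pop(key)
        return default
```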
Erik Johnston 2fae34bd2c Optionally measure size of cache by sum of length of values 2017-01-13 17:46:17 +00:00
Erik Johnston 45fd2c8942 Ensure invalidation list does not grow unboundedly 2016-08-19 16:09:16 +01:00
Erik Johnston c0d7d9d642 Rename to on_invalidate 2016-08-19 15:13:58 +01:00
Erik Johnston dc76a3e909 Make cache_context an explicit option 2016-08-19 15:02:38 +01:00
Erik Johnston ba214a5e32 Remove lru option 2016-08-19 14:17:11 +01:00
Erik Johnston 4161ff2fc4 Add concept of cache contexts 2016-08-19 14:17:07 +01:00
Erik Johnston 3b096c5f5c Merge branch 'erikj/cache_perf' of github.com:matrix-org/synapse into develop 2016-06-03 12:00:33 +01:00
Erik Johnston 58a224a651 Pull out update_results_dict 2016-06-03 11:47:07 +01:00
Erik Johnston 73c7112433 Change CacheMetrics to be quicker
We change it so that each cache has an individual CacheMetric, instead
of having one global CacheMetric. This means that when a cache tries to
increment a counter, it does not need to go through so many layers of indirection.
2016-06-03 11:26:52 +01:00
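In miniature, the per-instance arrangement looks like this (illustrative classes, not the real ones):

```python
class CacheMetric:
    """Counters owned by a single cache, so incrementing is one attribute bump."""

    def __init__(self, name):
        self.name = name
        self.hits = 0
        self.misses = 0


class MetricisedCache:
    def __init__(self, name):
        self._data = {}
        self.metrics = CacheMetric(name)   # per-instance, not one global metric

    def get(self, key, default=None):
        if key in self._data:
            self.metrics.hits += 1
            return self._data[key]
        self.metrics.misses += 1
        return default
```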
Erik Johnston e043ede4a2 Small optimisation to CacheListDescriptor 2016-06-03 11:19:22 +01:00
Erik Johnston 597013caa5 Make cachedList go a bit faster 2016-06-03 11:13:29 +01:00
Mark Haines 87f2dec8d4 Make the cache objects be per instance rather than being global 2016-04-06 13:08:05 +01:00