Commit Graph

200 Commits

Author SHA1 Message Date
Eric Eastwood 0a00b7ff14
Update black, and run auto formatting over the codebase (#9381)
- Update black version to the latest
 - Run black auto formatting over the codebase
    - Run autoformatting according to [`docs/code_style.md`](80d6dc9783/docs/code_style.md)
 - Update `code_style.md` docs around installing black to use the correct version
2021-02-16 22:32:34 +00:00
Richard van der Hoff 3b754aea27
Clean up caching/locking of OIDC metadata load (#9362)
Ensure that we lock correctly to prevent multiple concurrent metadata load
requests, and generally clean up the way we construct the metadata cache.
2021-02-16 16:27:38 +00:00
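A minimal sketch of the "lock while populating a cache" pattern this commit describes, written with asyncio rather than Synapse's Twisted internals: concurrent callers wait for a single in-flight metadata load instead of each issuing their own request. The class and method names here are illustrative, not Synapse's OIDC code.

```python
import asyncio
from typing import Awaitable, Callable, Optional


class MetadataCache:
    def __init__(self, loader: Callable[[], Awaitable[dict]]):
        self._loader = loader          # async callable that fetches the metadata
        self._lock = asyncio.Lock()
        self._cached: Optional[dict] = None

    async def get(self) -> dict:
        if self._cached is not None:
            return self._cached
        async with self._lock:
            # Re-check after acquiring the lock: another task may have
            # populated the cache while we were waiting.
            if self._cached is None:
                self._cached = await self._loader()
            return self._cached
```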
Patrick Cloke 1b4d5d6acf
Empty iterables should count towards cache usage. (#9028) 2021-01-06 12:33:20 -05:00
Richard van der Hoff cbc82aa09f
Implement and use an @lru_cache decorator (#8595)
We don't always need the full power of a DeferredCache.
2020-10-30 11:43:17 +00:00
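A rough sketch of the idea behind an `@lru_cache` decorator: plain synchronous memoisation backed by an LRU map, without DeferredCache's deferred/invalidation machinery. The decorator below is illustrative only and not Synapse's real `synapse.util.caches` API.

```python
from collections import OrderedDict
from functools import wraps


def lru_cache(max_size=128):
    def decorator(fn):
        cache = OrderedDict()

        @wraps(fn)
        def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)      # mark as most recently used
                return cache[args]
            result = fn(*args)
            cache[args] = result
            if len(cache) > max_size:
                cache.popitem(last=False)    # evict the least recently used entry
            return result

        return wrapper
    return decorator
```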
Richard van der Hoff b28aaeb3a5
Optimise CacheDescriptor (#8594)
don't bother constructing a CacheContext unless we need one.
2020-10-21 22:57:45 +01:00
Richard van der Hoff c13820bcee fix failure case 2020-10-21 18:54:53 +01:00
Richard van der Hoff 2b3af01791 optimise DeferredCache.set 2020-10-21 17:55:53 +01:00
Richard van der Hoff 1f4269700c Push some deferred wrangling down into DeferredCache 2020-10-21 15:39:25 +01:00
Richard van der Hoff 96e7d3c4a0
Fix 'LruCache' object has no attribute '_on_resize' (#8591)
We need to make sure we are ready for the `set_cache_factor` callback.
2020-10-19 21:13:50 +01:00
Richard van der Hoff 903d11c43a
Add `DeferredCache.get_immediate` method (#8568)
* Add `DeferredCache.get_immediate` method

A bunch of things that are currently calling `DeferredCache.get` are only
really interested in the result if it's completed. We can optimise and simplify
this case.

* Remove unused 'default' parameter to DeferredCache.get()

* another get_immediate instance
2020-10-19 15:00:12 +01:00
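A hedged sketch of the `get_immediate` idea: callers that only care about already-completed results get the value (or a default) back synchronously, instead of receiving a deferred to wait on. The real DeferredCache also tracks pending deferreds, invalidation callbacks and metrics; this only shows the lookup shape, with made-up names.

```python
_SENTINEL = object()


class TinyDeferredCache:
    def __init__(self):
        self._completed = {}   # key -> value, for lookups that have finished
        self._pending = {}     # key -> in-flight deferred/awaitable

    def get_immediate(self, key, default=None):
        # Only consult completed results; never block on a pending lookup.
        value = self._completed.get(key, _SENTINEL)
        return default if value is _SENTINEL else value
```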
Richard van der Hoff 97647b33c2
Replace DeferredCache with LruCache where possible (#8563)
Most of these uses don't need a full-blown DeferredCache; LruCache is lighter and more appropriate.
2020-10-19 12:20:29 +01:00
Richard van der Hoff 6d7b22041d review comments 2020-10-16 16:25:15 +01:00
Richard van der Hoff 995cc615a0
Apply suggestions from code review
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2020-10-16 16:14:42 +01:00
Richard van der Hoff 0ec0bc3886 type annotations for LruCache 2020-10-16 15:56:39 +01:00
Richard van der Hoff 3ee17585cd
Make LruCache register its own metrics (#8561)
rather than have everything that instantiates an LruCache manage metrics
separately, have LruCache do it itself.
2020-10-16 15:51:57 +01:00
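An illustrative sketch of "the cache registers its own metrics": the cache constructor adds itself to a shared registry, so code that instantiates a cache no longer has to wire up hit/miss bookkeeping separately. The registry and names below are invented for the example, not Synapse's metrics API.

```python
CACHE_METRICS = {}


class SelfRegisteringCache:
    def __init__(self, name, max_size=100):
        self._data = {}
        self.hits = 0
        self.misses = 0
        CACHE_METRICS[name] = self   # registration happens inside the cache itself

    def get(self, key, default=None):
        if key in self._data:
            self.hits += 1
            return self._data[key]
        self.misses += 1
        return default

    def set(self, key, value):
        self._data[key] = value
```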
Richard van der Hoff 8075504a60
Enable mypy for synapse.util.caches (#8547)
This seemed to entail dragging in a type stub for SortedList.
2020-10-15 11:44:39 +01:00
Richard van der Hoff 4182bb812f move DeferredCache into its own module 2020-10-14 23:38:14 +01:00
Richard van der Hoff 9f87da0a84 Rename Cache->DeferredCache 2020-10-14 23:38:14 +01:00
Richard van der Hoff 7eff59ec91 Add some more type annotations to Cache 2020-10-14 23:38:14 +01:00
Patrick Cloke 1781bbe319
Add type hints to response cache. (#8507) 2020-10-09 11:35:11 -04:00
Patrick Cloke aec294ee0d
Use slots in attrs classes where possible (#8296)
slots use less memory (and attribute access is faster) while slightly
limiting the flexibility of the class attributes. This focuses on objects
which are instantiated "often" and for short periods of time.
2020-09-14 12:50:06 -04:00
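An example of the attrs pattern this commit describes: `slots=True` stores attributes in `__slots__` rather than a per-instance `__dict__`, which saves memory for objects created frequently. The class below is a hypothetical example, not one of the classes changed in the PR.

```python
import attr


@attr.s(slots=True)
class CacheEntry:
    key = attr.ib()
    value = attr.ib()
    expiry_ms = attr.ib(default=0)


entry = CacheEntry(key="!room:example.org", value=b"...", expiry_ms=60_000)
# entry.unknown = 1  # would raise AttributeError: slotted classes have no __dict__
```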
Patrick Cloke c619253db8
Stop sub-classing object (#8249) 2020-09-04 06:54:56 -04:00
Erik Johnston 208e1d3eb3
Fix typing for `@cached` wrapped functions (#8240)
This requires adding a mypy plugin to fiddle with the type signatures a bit.
2020-09-03 15:38:32 +01:00
Patrick Cloke d294f0e7e1
Remove the unused inlineCallbacks code-paths in the caching code (#8119) 2020-08-19 07:09:07 -04:00
Patrick Cloke 4e874ed593
Remove unnecessary maybeDeferred calls (#8044) 2020-08-07 09:44:48 -04:00
Patrick Cloke 38e1fac886
Fix some spelling mistakes / typos. (#7811) 2020-07-09 09:52:58 -04:00
Dagfinn Ilmari Mannsåker a3f11567d9
Replace all remaining six usage with native Python 3 equivalents (#7704) 2020-06-16 08:51:47 -04:00
Patrick Cloke bd6dc17221
Replace iteritems/itervalues/iterkeys with native versions. (#7692) 2020-06-15 07:03:36 -04:00
Erik Johnston eefc6b3a0d
Don't apply cache factor to event cache. (#7578)
This is already correctly done when we instantiate the cache, but wasn't
when it got reloaded (which always happens at least once on startup).
2020-05-27 12:04:37 +01:00
Richard van der Hoff d4676910c9 remove miscellaneous PY2 code 2020-05-15 19:37:41 +01:00
Amber Brown 7cb8b4bc67
Allow configuration of Synapse's cache without using synctl or environment variables (#6391) 2020-05-11 18:45:23 +01:00
Erik Johnston f9073893af Speed up fetching device lists changes in sync.
Currently we copy `users_who_share_room` needlessly about three times,
which is expensive when the set is large (which it can easily be).
2020-05-05 17:40:29 +01:00
Richard van der Hoff 13683a3a22
Extend StreamChangeCache to support multiple entities per stream ID (#7303)
First some background: StreamChangeCache is used to keep track of what "entities" have 
changed since a given stream ID. So for example, we might use it to keep track of when the last
to-device message for a given user was received [1], and hence whether we need to pull any to-device messages from the database on a sync [2].

Now, it turns out that StreamChangeCache didn't support more than one thing being changed at
a given stream_id (this was part of the problem with #7206). However, it's entirely valid to send
to-device messages to more than one user at a time.

As it turns out, this did in fact work, because *some* methods of StreamChangeCache coped
ok with having multiple things changing on the same stream ID, and it seems we never actually
use the methods which don't work on the stream change caches where we allow multiple
changes at the same stream ID. But that feels horribly fragile, hence: let's update
StreamChangeCache to properly support this, and add some typing and some more tests while
we're at it.

[1]: https://github.com/matrix-org/synapse/blob/release-v1.12.3/synapse/storage/data_stores/main/deviceinbox.py#L301
[2]: https://github.com/matrix-org/synapse/blob/release-v1.12.3/synapse/storage/data_stores/main/deviceinbox.py#L47-L51
2020-04-22 13:45:40 +01:00
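A rough sketch of the data-structure change described above: instead of mapping each stream ID to a single entity, map it to a *set* of entities, so several users can legitimately change at the same stream position. The real StreamChangeCache also tracks the earliest known stream position and answers "has this entity changed since X" queries efficiently; this only shows the core shape, with made-up names.

```python
from collections import defaultdict
from typing import Dict, Iterable, Set


class MiniStreamChangeCache:
    def __init__(self):
        self._entity_to_key: Dict[str, int] = {}             # entity -> last stream ID it changed at
        self._cache: Dict[int, Set[str]] = defaultdict(set)  # stream ID -> entities changed there

    def entity_has_changed(self, entity: str, stream_id: int) -> None:
        self._cache[stream_id].add(entity)
        self._entity_to_key[entity] = stream_id

    def get_entities_changed(self, entities: Iterable[str], stream_id: int) -> Set[str]:
        # Return the subset of `entities` that changed strictly after `stream_id`.
        return {
            e for e in entities
            if self._entity_to_key.get(e, -1) > stream_id
        }
```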
Richard van der Hoff 0f8f02bc39
On catchup, process each row with its own stream id (#7286)
Other parts of the code (such as the StreamChangeCache) assume that there will
not be multiple changes with the same stream id.

This code was introduced in #7024, and I hope this fixes #7206.
2020-04-20 11:43:29 +01:00
Erik Johnston ed630ea17c
Reduce amount of logging at INFO level. (#6862)
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.

Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
2020-02-06 13:31:05 +00:00
Hubert Chathi cb2db17994
look up cross-signing keys from the DB in bulk (#6486) 2019-12-12 12:03:28 -05:00
Erik Johnston f166a8d1f5 Remove SnapshotCache in favour of ResponseCache 2019-12-09 13:42:49 +00:00
V02460 affcc2cc36 Fix LruCache callback deduplication (#6213) 2019-11-07 09:43:51 +00:00
Andrew Morgan 54fef094b3
Remove usage of deprecated logger.warn method from codebase (#6271)
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
2019-10-31 10:23:24 +00:00
Erik Johnston e6c7e239ef Update docstring 2019-10-29 11:48:30 +00:00
Erik Johnston d0d8a22c13 Quick fix to ensure cache descriptors always return deferreds 2019-10-28 13:33:04 +00:00
Amber Brown 864f144543
Fix up some typechecking (#6150)
* type checking fixes

* changelog
2019-10-02 05:29:01 -07:00
Erik Johnston 17e1e80726 Retry well-known lookup before expiry.
This gives a bit of a grace period where we can attempt to refetch a
remote `well-known`, while still using the cached result if that fails.

Hopefully this will make the well-known resolution a bit more tolerant
of failures, rather than it immediately treating failures as "no result"
and caching that for an hour.
2019-08-13 16:20:38 +01:00
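A sketch of the "retry before expiry" idea described above: keep serving the cached well-known result for a grace period after its nominal TTL, attempt a refetch once the TTL has passed, and fall back to the stale (but still within grace) value only if the refetch fails. The times, names and synchronous `fetch` callable are illustrative assumptions, not Synapse's actual resolver.

```python
import time

TTL = 3600    # nominal cache lifetime, seconds
GRACE = 600   # extra window in which a stale value may still be used


class WellKnownEntry:
    def __init__(self, value):
        self.value = value
        self.fetched_at = time.time()

    @property
    def expired(self):
        return time.time() > self.fetched_at + TTL

    @property
    def usable(self):
        return time.time() <= self.fetched_at + TTL + GRACE


def lookup(cache, server_name, fetch):
    entry = cache.get(server_name)
    if entry is not None and not entry.expired:
        return entry.value
    try:
        cache[server_name] = WellKnownEntry(fetch(server_name))
        return cache[server_name].value
    except Exception:
        if entry is not None and entry.usable:
            return entry.value   # refetch failed: fall back to the stale value
        raise
```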
Richard van der Hoff 618bd1ee76
Fix some error cases in the caching layer. (#5749)
There was some inconsistent behaviour in the caching layer around how
exceptions were handled - particularly synchronously-thrown ones.

This seems to be most easily handled by pushing the creation of
ObservableDeferreds down from CacheDescriptor to the Cache.
2019-07-25 15:59:45 +01:00
Richard van der Hoff 418635e68a
Add a prometheus metric for active cache lookups. (#5750)
* Add a prometheus metric for active cache lookups.

* changelog
2019-07-24 11:33:13 +01:00
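A minimal sketch of tracking "lookups currently in flight" with a Prometheus gauge, using `prometheus_client` directly rather than Synapse's metrics wrappers: increment when a cache miss starts a load, decrement when it completes. The metric name and helper are assumptions for the example.

```python
from prometheus_client import Gauge

active_cache_lookups = Gauge(
    "example_cache_pending_lookups",
    "Number of cache lookups currently in flight",
    ["cache_name"],
)


def tracked_load(cache_name, load_fn, *args):
    active_cache_lookups.labels(cache_name).inc()
    try:
        return load_fn(*args)
    finally:
        active_cache_lookups.labels(cache_name).dec()
```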
Amber Brown 4806651744
Replace returnValue with return (#5736) 2019-07-23 23:00:55 +10:00
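An illustration of the mechanical change in that commit: on Python 3, a generator decorated with Twisted's `@inlineCallbacks` can simply `return value`, so calls to `defer.returnValue(value)` become plain returns. The function below is a hypothetical example, not code from the PR.

```python
from twisted.internet import defer


@defer.inlineCallbacks
def get_thing(fetch):
    value = yield fetch()
    return value          # previously: defer.returnValue(value)
```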
Amber Brown 463b072b12
Move logging utilities out of the side drawer of util/ and into logging/ (#5606) 2019-07-04 00:07:04 +10:00
Andrew Morgan ef8c62758c
Prevent multiple upgrades on the same room at once (#5051)
Closes #4583

Does slightly less than #5045, which prevented a room from being upgraded multiple times, one after another. This PR still allows that, but just prevents two from happening at the same time.

Mostly just to mitigate the fact that servers are slow and it can take a moment for the room upgrade to actually complete. We don't want people sending another upgrade request just because they think the first one didn't go through.
2019-06-25 14:19:21 +01:00
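A hedged sketch of serialising upgrades per room: hold a lock keyed by room ID so that a second upgrade request for the same room either waits for, or is rejected while, the first is in progress. Synapse has its own helper (a Linearizer) for this kind of per-key serialisation; the version below just shows the shape with stdlib asyncio primitives and made-up names.

```python
import asyncio
from collections import defaultdict

_room_locks = defaultdict(asyncio.Lock)


async def upgrade_room(room_id: str, do_upgrade):
    lock = _room_locks[room_id]
    if lock.locked():
        # An upgrade for this room is already running; don't start another.
        raise RuntimeError("Room upgrade already in progress")
    async with lock:
        return await do_upgrade(room_id)
```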
Amber Brown 32e7c9e7f2
Run Black. (#5482) 2019-06-20 19:32:02 +10:00
Richard van der Hoff bc5f6e1797
Add a caching layer to .well-known responses (#4516) 2019-01-30 10:55:25 +00:00