* Accept a Sequence of events in synapse.appservice
This avoids some casts/ignores in the tests I'm about to fix up. `List[Mock]`
is not a subtype of `List[EventBase]` because `List` is invariant, but
`Sequence[Mock]` is a subtype of `Sequence[EventBase]` because `Sequence` is
read-only and therefore covariant. (Presumably `Mock` itself is considered
compatible with anything, much like `Any`.)
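To illustrate the variance rule with plain classes (a generic sketch, not from the patch; `Animal`/`Dog` are stand-ins):
```python
from typing import List, Sequence

class Animal: ...
class Dog(Animal): ...

def feed_all(animals: Sequence[Animal]) -> None:
    ...  # read-only access, so accepting a Sequence of any subtype is safe

def rehome_all(animals: List[Animal]) -> None:
    animals.append(Animal())  # legal for List[Animal]: this is why List must be invariant

dogs: List[Dog] = [Dog()]
feed_all(dogs)    # OK: Sequence is covariant, so List[Dog] is a Sequence[Animal]
rehome_all(dogs)  # mypy error: would let a plain Animal into a List[Dog]
```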
* make tests.appservice.test_scheduler pass mypy
* Extra hints in tests.appservice.test_scheduler
* Extra hints in tests.appservice.test_api
* Extra hints in tests.appservice.test_appservice
* Disallow untyped defs
* Changelog
Co-authored-by: Brad Murray <brad@beeper.com>
Co-authored-by: Nick Barrett <nick@beeper.com>
Copy the suppress_edits push rule from Beeper to implement MSC3958.
9415a1284b/rust/src/push/base_rules.rs (L98-L114)
Ensure that the list of servers in a partial state room always contains
the server we joined off.
Also refactor `get_partial_state_servers_at_join` to return `None` when
the given room is no longer partial-stated, to explicitly indicate
whether the room has partial state. Otherwise it's not clear whether an
empty list means that the room has full state, or that the room is
partial-stated but the server we joined off told us that there are no
servers in the room.
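A sketch of the revised signature implied by the description (the exact annotation in Synapse may differ):
```python
from typing import List, Optional

async def get_partial_state_servers_at_join(self, room_id: str) -> Optional[List[str]]:
    """Return the servers that were in the room when we partial-joined it,
    or None if the room no longer has partial state.

    Distinguishing None from an empty list removes the ambiguity between
    "the room has full state" and "partial state, but no servers reported".
    """
    ...
```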
Signed-off-by: Sean Quah <seanq@matrix.org>
Since pyo3-log is initialized very early in the Python start-up, it
caches the state of the loggers before they're fully initialized (and
thus are essentially disabled). Whenever we reload the logging
configuration, we now also tell pyo3-log to discard any cached logging
configuration it has; it will refetch the current logging configuration
from Python at the next point it logs.
This fixes Rust log lines not appearing in the homeserver logs.
If a sync request does not need to calculate per-room entries, is not
generating presence, and is not generating device list data (e.g. during
initial sync), avoid the expensive calculation of room-specific data.
This is a micro-optimisation for clients syncing simply to receive
to-device information.
This expands the previous optimisation from being only for initial
sync to being for all sync requests.
It also inverts some of the logic to be inclusive instead of exclusive.
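A hedged sketch of the inclusive form of the check (all names here are hypothetical, not Synapse's actual attributes):
```python
# Inclusive: name the things that require the expensive per-room work,
# rather than excluding one special case. All names are hypothetical.
calculate_room_data = (
    sync_request.needs_per_room_entries
    or sync_request.generates_presence
    or sync_request.generates_device_lists
)
if calculate_room_data:
    rooms = await compute_room_specific_data(sync_request)
else:
    rooms = []  # e.g. a client polling only for to-device messages
```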
The `parse_enum` helper pulls an enum value from the query string
(by delegating down to the `parse_string` helper with values generated
from the enum).
This is used to pull out "f" and "b" in most places, and then we thread
the resulting `Direction` enum throughout more code.
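A minimal sketch of the idea (an illustration only; Synapse's actual helper has a different signature):
```python
from enum import Enum
from typing import Mapping, Type, TypeVar

class Direction(Enum):
    FORWARDS = "f"
    BACKWARDS = "b"

E = TypeVar("E", bound=Enum)

def parse_enum(args: Mapping[str, str], name: str, enum_cls: Type[E], default: E) -> E:
    # Delegate to a parse_string-style lookup, with the allowed values
    # generated from the enum's members.
    allowed = [member.value for member in enum_cls]
    raw = args.get(name, default.value)
    if raw not in allowed:
        raise ValueError(f"Query parameter {name!r} must be one of {allowed}")
    return enum_cls(raw)

# e.g. ?dir=b on a /messages request:
direction = parse_enum({"dir": "b"}, "dir", Direction, Direction.FORWARDS)
assert direction is Direction.BACKWARDS
```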
The previous assumption was that the `stream_id` column was unique
(for a (room ID, receipt type, user ID) tuple), but this turned out to be
incorrect.
Now find the max stream ID, then map this back to a database-specific
row identifier and delete other rows which match the (room ID, receipt type,
user ID) tuple, but *not* the row ID.
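In SQLite terms the deletion could look something like the following (a hypothetical sketch using `rowid` as the database-specific row identifier; the actual migration, table, and column names may differ):
```python
# Hypothetical sketch of the dedup, not the actual Synapse migration.
# `txn` is assumed to be a database cursor.
sql = """
    DELETE FROM receipts_linearized
    WHERE room_id = ? AND receipt_type = ? AND user_id = ?
      AND rowid != (
        SELECT rowid FROM receipts_linearized
        WHERE room_id = ? AND receipt_type = ? AND user_id = ?
        ORDER BY stream_id DESC
        LIMIT 1
      )
"""
txn.execute(sql, (room_id, receipt_type, user_id) * 2)
```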
`run_in_background` calls re-use the current logging context. When they
are not awaited, they can complete after the current logging context has
been marked as finished, which leads to log spam. Use
`run_as_background_process` instead.
Fixes one of the instances of #13090.
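The change in pattern, roughly (the method being scheduled here is hypothetical; imagine this inside a handler method):
```python
from synapse.metrics.background_process_metrics import run_as_background_process

# Before: borrows the caller's logging context; if the caller finishes first,
# the un-awaited call logs into a context already marked as finished.
#     run_in_background(self._send_receipts)

# After: runs under its own background-process logging context.
run_as_background_process("send_receipts", self._send_receipts)
```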
Signed-off-by: Sean Quah <seanq@matrix.org>
#14910 fixed the regression introduced by #13873 where sqlite database
migrations would no longer run inside a transaction. However, it
committed the transaction before Synapse updated its bookkeeping of
which migrations have been run, which means that migrations may be run
again after they have completed successfully.
Leave the transaction open at the end of `executescript`, to restore the
old, correct behaviour. Also make the PostgreSQL behaviour consistent
with SQLite.
Fixes #14909.
Signed-off-by: Sean Quah <seanq@matrix.org>
* Better test for bad values in power levels events
The previous test only checked that Synapse didn't raise an exception,
but didn't check that we had correctly interpreted the value of the
dodgy power level.
It also conflated two things: bad room notification levels, and bad user
levels. There _is_ logic for converting the latter to integers, but we
should test it separately.
* Check we ignore types that don't convert to int
* Handle `None` values in `notifications.room`
* Changelog
* Also test that bad values are rejected by event auth
* Docstring
* linter scripttttttttt
* Test boolean values in PL content
* Reject boolean power levels
* Changelog
* Prefer `type(x) is int` to `isinstance(x, int)`
This covered all additional instances I could see where `x` was
user-controlled; see the sketch after the listing below for why the
distinction matters.
The remaining cases are
```
$ rg -s 'isinstance.*[^_]int'
tests/replication/_base.py
576: if isinstance(obj, int):
synapse/util/caches/stream_change_cache.py
136: assert isinstance(stream_pos, int)
214: assert isinstance(stream_pos, int)
246: assert isinstance(stream_pos, int)
267: assert isinstance(stream_pos, int)
synapse/replication/tcp/external_cache.py
133: if isinstance(result, int):
synapse/metrics/__init__.py
100: if isinstance(calls, (int, float)):
synapse/handlers/appservice.py
262: assert isinstance(new_token, int)
synapse/config/_util.py
62: if isinstance(p, int):
```
which cover metrics, logic related to `jsonschema`, and replication and
data streams. AFAICS these are all internal to Synapse.
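The reason for the preference (background, not part of the patch): `bool` is a subclass of `int` in Python, so `isinstance` lets booleans through where only real integers are wanted:
```python
power_level = True  # a dodgy user-supplied value in event content

isinstance(power_level, int)  # True: bool is a subclass of int
type(power_level) is int      # False: rejects bools, accepts only real ints
```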
* Changelog
MSC3952 defines push rules which search for mentions in a list of
Matrix IDs in the event body, instead of searching the entire event
body for the user's display name / localpart.
This is implemented behind an experimental configuration flag and
does not yet implement the backwards compatibility pieces of the MSC.
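For context, an event using intentional mentions looks roughly like this (field names as proposed by the MSC; the surrounding structure is a sketch):
```python
event_content = {
    "body": "Hello Alice!",
    "m.mentions": {
        "user_ids": ["@alice:example.org"],  # explicit list of mentioned Matrix IDs
        "room": False,  # set to True for an @room mention
    },
}
```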
The `/relations` endpoint did not properly handle "live tokens"
(i.e. sync tokens). To handle this properly, we abstract the code that
`/messages` uses and re-use it.
* Batch look-ups to see if rooms are partial stated (a sketch of the
batched helper follows this commit's bullet list).
* Fix issues found in linting.
* Fix typo.
* Apply suggestions from code review
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Clarify comments.
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Also improve the cache size while we're at it
* is_partial_state_rooms -> is_partial_state_room_batched
* Run `black`
* Improve annotation for `simple_select_many_batch`
* Fix is_partial_state_room_batched impl
* Okay, _actually_ fix impl
* Update description.
* Update synapse/storage/databases/main/room.py
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
* Run black.
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
Co-authored-by: David Robertson <davidr@element.io>
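A hypothetical sketch of the batched helper described above (the method name and `simple_select_many_batch` appear in the bullets; the exact signature and return handling are assumptions):
```python
from typing import Collection, Mapping

async def is_partial_state_room_batched(
    self, room_ids: Collection[str]
) -> Mapping[str, bool]:
    """Check, in one query, which of the given rooms have partial state."""
    rows = await self.db_pool.simple_select_many_batch(
        table="partial_state_rooms",
        column="room_id",
        iterable=room_ids,
        retcols=("room_id",),
        desc="is_partial_state_room_batched",
    )
    partial_state_rooms = {row["room_id"] for row in rows}
    return {room_id: room_id in partial_state_rooms for room_id in room_ids}
```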
On startup, the `_device_list_id_gen` stream id generator is initialized
using the maximum stream id seen in a list of tables. When we started
populating the `device_list_remote_pending` table in #13913, we forgot
to add it to the aforementioned list of tables, so the stream id
generator can hand out old stream ids after a restart. The end result is
that Synapse can fail to handle device list update EDUs after a restart
when a partial state join is in progress.
Add the `device_list_remote_pending` table to the list of tables to
consider when initializing the `_device_list_id_gen` stream id generator.
Signed-off-by: Sean Quah <seanq@matrix.org>
Destination was being used incorrectly (a single destination instead
of a list of destinations was being passed).
This also updates some of the types in the area to not use `Collection[str]`,
which is a footgun: `str` is itself a `Collection[str]`, so passing a single
destination by mistake still type-checks.
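To illustrate the footgun (a generic example, not the code in question):
```python
from typing import Collection

def send_to(destinations: Collection[str]) -> None:
    for destination in destinations:
        print(destination)

send_to(["matrix.org"])  # intended usage
send_to("matrix.org")    # also passes type checking! str is a Collection[str],
                         # so this silently iterates over single characters
```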
* Bump the client-side timeout for /state
to give faster-joins resyncs a chance to complete for large rooms.
We have seen this fare poorly (~90s for Matrix HQ's /state) in testing,
causing the resync to advance to another HS which hasn't seen our join yet.
* Changelog
* Milliseconds!!!!
#13873 introduced a regression which causes sqlite database migrations
to no longer run inside a transaction. Wrap them in a transaction again,
to avoid database corruption when migrations are interrupted.
Fixes #14909.
Signed-off-by: Sean Quah <seanq@matrix.org>
* Request partial joins by default
This is a little sloppy, but we are trying to gain confidence in faster
joins in the upcoming RC.
Admins can still opt out by adding the following to their Synapse
config:
```yaml
experimental:
  faster_joins: false
```
We may revert this change before the release proper, depending on how
testing in the wild goes.
* Changelog
* Try to fix the backfill test failures
* Upgrade notes
* Postgres compat?
* Allow `AbstractSet` in `StrCollection`
Or else frozensets are excluded. This will be useful in an upcoming
commit where I plan to change a function that accepts `List[str]` to
accept `StrCollection` instead.
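A sketch of the widened alias (the exact definition in Synapse may differ slightly):
```python
from typing import AbstractSet, List, Tuple, Union

# Tuples, lists or (frozen)sets of strings, but crucially NOT plain `str`,
# so a bare string cannot be passed where a collection is expected.
StrCollection = Union[Tuple[str, ...], List[str], AbstractSet[str]]
```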
* `rooms_to_exclude` -> `rooms_to_exclude_globally`
I am about to make use of this exclusion mechanism to exclude rooms for
a specific user and a specific sync. This rename helps to clarify the
distinction between the global config and the rooms to exclude for a
specific sync.
* Better function names for internal sync methods
* Track a list of excluded rooms on SyncResultBuilder
I plan to feed it a list of partially-stated rooms to ignore for this sync.
* Exclude partial state rooms during eager sync
using the mechanism established in the previous commit
* Track un-partial-state stream in sync tokens
So that we can work out which rooms have become fully-stated during a
given sync period.
* Fix mutation of `@cached` return value
This was fouling up a complement test added alongside this PR.
Excluding a room would mean the set of forgotten rooms in the cache
would be extended. This means that room could be erroneously considered
forgotten in the future.
Introduced in #12310, Synapse 1.57.0. I don't think this had any
user-visible side effects (until now).
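An illustrative sketch of that bug pattern (the method name is an assumption; this is not the actual Synapse code):
```python
# The cache hands back a shared object; mutating it poisons future reads.
forgotten = await store.get_forgotten_rooms_for_user(user_id)  # @cached method
forgotten.add(room_id)  # BUG: extends the cached set seen by every later caller,
                        # so the room is erroneously considered forgotten

# Fix: work on a copy instead of mutating the cached value.
forgotten = set(forgotten) | {room_id}
```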
* SyncResultBuilder: track rooms to force as newly joined
Similar plan as before. We've omitted rooms from certain sync responses;
now we establish the mechanism to reintroduce them into future syncs.
* Read new field, to present rooms as newly joined
* Force un-partial-stated rooms to be newly-joined
for eager incremental syncs only, provided they're still fully stated
* Notify user stream listeners to wake up long polling syncs
* Changelog
* Typo fix
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Unnecessary list cast
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Rephrase comment
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Another comment
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Fixup merge(?)
* Poke notifier when receiving un-partial-stated msg over replication
* Fixup merge whoops
Thanks MV :)
Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
* Faster joins: Update room stats and user directory on workers when done
When finishing a partial state join to a room, we update the current
state of the room without persisting additional events. Workers receive
notice of the current state update over replication, but neglect to wake
the room stats and user directory updaters, which then get incidentally
triggered the next time an event is persisted or an unrelated event
persister sends out a stream position update.
We wake the room stats and user directory updaters at the appropriate
time in this commit.
Part of #12814 and #12815.
Signed-off-by: Sean Quah <seanq@matrix.org>
* fixup comment
Signed-off-by: Sean Quah <seanq@matrix.org>
* Enable Complement tests for Faster Remote Room Joins on worker-mode
* (dangerous) Add an override to allow Complement to use FRRJ under workers
* Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
* Fix race where we didn't send out replication notification
* MORE HACKS
* Fix get_un_partial_stated_rooms_token to take instance_name
* Fix bad merge
* Remove warning
* Correctly advance un_partial_stated_room_stream
* Fix merge
* Add another notify_replication
* Fixups
* Create a separate ReplicationNotifier
* Fix test
* Fix portdb
* Fix presence test
* Newsfile
* Apply suggestions from code review
* Update changelog.d/14752.misc
Co-authored-by: Erik Johnston <erik@matrix.org>
* lint
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
Co-authored-by: Erik Johnston <erik@matrix.org>
* Avoid clearing out forward extremities when doing a second remote join
When joining a restricted room where the local homeserver does not have
a user able to issue invites, we perform a second remote join. We want
to avoid clearing out forward extremities in this case because the
forward extremities we have are up to date and clearing out forward
extremities creates a window in which the room can get bricked if
Synapse crashes.
Signed-off-by: Sean Quah <seanq@matrix.org>
* Do a full join when doing a second remote join into a full state room
We cannot persist a partial state join event into a joined full state
room, so we perform a full state join for such rooms instead. As a
future optimization, we could always perform a partial state join and
compute or retrieve the full state ourselves if necessary.
Signed-off-by: Sean Quah <seanq@matrix.org>
* Add lock around partial state flag for rooms
Signed-off-by: Sean Quah <seanq@matrix.org>
* Preserve partial state info when doing a second partial state join
Signed-off-by: Sean Quah <seanq@matrix.org>
* Add newsfile
* Add a TODO(faster_joins) marker
Signed-off-by: Sean Quah <seanq@matrix.org>
Now that we wait for stream positions whenever we do an HTTP replication
hit, we need to be less brutal in the case where we do time out (as we
have bugs around this).
Currently, we will try to start a new partial state sync every time we
perform a remote join, which is undesirable if there is already one
running for a given room.
We intend to perform remote joins whenever additional local users wish
to join a partial state room, so let's ensure that we do not start more
than one concurrent partial state sync for any given room.
------------------------------------------------------------------------
There is a race condition where the homeserver leaves a room and later
rejoins while the partial state sync from the previous membership is
still running. There is no guarantee that the previous partial state
sync will process the latest join, so we restart it if needed.
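A generic sketch of the "at most one sync per room, restart if a newer join arrives" bookkeeping (an illustration of the described behaviour, not Synapse's implementation):
```python
from typing import Awaitable, Callable, Set

class PartialStateSyncTracker:
    def __init__(self) -> None:
        self._syncing: Set[str] = set()
        self._restart: Set[str] = set()

    async def maybe_sync(
        self, room_id: str, do_sync: Callable[[str], Awaitable[None]]
    ) -> None:
        if room_id in self._syncing:
            # A sync from a previous membership may still be running and is
            # not guaranteed to see the latest join, so ask it to restart.
            self._restart.add(room_id)
            return
        self._syncing.add(room_id)
        try:
            while True:
                await do_sync(room_id)
                if room_id not in self._restart:
                    break
                self._restart.discard(room_id)
        finally:
            self._syncing.discard(room_id)
```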
Signed-off-by: Sean Quah <seanq@matrix.org>
* Change Documentation to have v10 as default room version
* Change Default Room version to 10
* Add changelog entry for default room version swap
* Add changelog entry for v10 default room version in docs
* Clarify doc changelog entry
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
* Improve Documentation changes.
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
* Update Changelog entry to have correct format
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
* Update Spec Version to 1.5
* Only need 1 changelog.
* Fix test.
* Update "Changed in" line
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Patrick Cloke <patrickc@matrix.org>
Serving partial join responses is no longer experimental. They will only be
served under the stable identifier if the undocumented config flag
`experimental.msc3706_enabled` is set to true.
Synapse continues to request a partial join only if the undocumented config
flag `experimental.faster_joins` is set to true; this setting remains present
and unaffected.
We were incorrectly checking if the *local* token had been advanced, rather than the token for the remote instance.
In practice, I don't think this has caused any bugs due to where we use `wait_for_stream_position`, as critically we don't use it on instances that also write to the given streams (and so the local token will lag behind all remote tokens).
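The fix in spirit (a hedged sketch; the attribute and method names may differ from Synapse's):
```python
# Before: compared against our own position, which on a non-writer instance
# lags behind the writers:
#     current = stream.current_token(self._instance_name)

# After: compare against the position of the remote instance we are waiting for:
current = stream.current_token(instance_name)
if current >= position:
    ...  # the remote instance's write is now visible locally
```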