Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

Erik Johnston, 2023-05-12 11:09:10 +01:00
commit 7c74fe01d3
159 changed files with 2342 additions and 850 deletions

@@ -22,7 +22,7 @@ jobs:
path: book
- name: 📤 Deploy to Netlify
uses: matrix-org/netlify-pr-preview@v1
uses: matrix-org/netlify-pr-preview@v2
with:
path: book
owner: ${{ github.event.workflow_run.head_repository.owner.login }}

@@ -107,14 +107,15 @@ jobs:
uses: dtolnay/rust-toolchain@1.58.1
- uses: Swatinem/rust-cache@v2
# NB: I have two concerns with this action:
# 1. We occasionally see odd mypy problems that aren't reproducible
# locally with clean caches. I suspect some dodgy caching behaviour.
# 2. The action uses GHA machinery that's deprecated
# (https://github.com/AustinScola/mypy-cache-github-action/issues/277)
# It may be simpler to use actions/cache ourselves to restore .mypy_cache.
# Cribbed from
# https://github.com/AustinScola/mypy-cache-github-action/blob/85ea4f2972abed39b33bd02c36e341b28ca59213/src/restore.ts#L10-L17
- name: Restore/persist mypy's cache
uses: AustinScola/mypy-cache-github-action@df56268388422ee282636ee2c7a9cc55ec644a41
uses: actions/cache@v3
with:
path: |
.mypy_cache
key: mypy-cache-${{ github.sha }}
restore-keys: mypy-cache-
- name: Run mypy
run: poetry run mypy
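The "clean caches" comparison mentioned in the comments above can be reproduced locally by discarding mypy's incremental cache before a run; a minimal sketch:

```sh
# Remove mypy's incremental cache to force a from-scratch run,
# then invoke mypy the same way the workflow does.
rm -rf .mypy_cache
poetry run mypy
```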

@@ -1,3 +1,9 @@
Synapse 1.83.0 (2023-05-09)
===========================

No significant changes since 1.83.0rc1.

Synapse 1.83.0rc1 (2023-05-02)
==============================

Cargo.lock (generated)

@@ -323,18 +323,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "serde"
version = "1.0.160"
version = "1.0.162"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bb2f3770c8bce3bcda7e149193a069a0f4365bda1fa5cd88e03bca26afc1216c"
checksum = "71b2f6e1ab5c2b98c05f0f35b236b22e8df7ead6ffbf51d7808da7f8817e7ab6"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.160"
version = "1.0.162"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "291a097c63d8497e00160b166a967a4a79c64f3facdd01cbd7502231688d77df"
checksum = "a2a0814352fd64b58489904a44ea8d90cb1a91dcb6b4f5ebabc32c8318e93cb6"
dependencies = [
"proc-macro2",
"quote",

changelog.d/15025.misc (new file)

@@ -0,0 +1 @@
Use oEmbed to generate URL previews for YouTube Shorts.

@@ -0,0 +1 @@
Add an option to prevent media downloads from configured domains.

@@ -0,0 +1 @@
Add `forget_rooms_on_leave` config option to automatically forget rooms when users leave them or are removed from them.

@@ -0,0 +1 @@
Add redis TLS configuration options.

changelog.d/15437.misc (new file)

@@ -0,0 +1 @@
Make the `thread_id` column on `event_push_actions`, `event_push_actions_staging`, and `event_push_summary` non-null.

changelog.d/15470.misc (new file)

@@ -0,0 +1 @@
Create new `Client` for use with HTTP Replication between workers. Contributed by Jason Little.

changelog.d/15491.misc (new file)

@@ -0,0 +1 @@
Remove need for `worker_replication_*` based settings in worker configuration yaml by placing this data directly on the `instance_map` instead.

changelog.d/15509.misc (new file)

@@ -0,0 +1 @@
Bump pyicu from 2.10.2 to 2.11.

@@ -0,0 +1 @@
Add a config option to delay push notifications by a random amount, to discourage time-based profiling.

changelog.d/15522.misc (new file)

@@ -0,0 +1 @@
Remove references to supporting per-user flag for [MSC2654](https://github.com/matrix-org/matrix-spec-proposals/pull/2654) (#15522).

changelog.d/15523.bugfix (new file)

@@ -0,0 +1 @@
Don't fail on federation over TOR where SRV queries are not supported. Contributed by Zdzichu.

changelog.d/15527.misc (new file)

@@ -0,0 +1 @@
Don't use a trusted key server when running the demo scripts.

@@ -0,0 +1 @@
Stabilize support for [MSC2659](https://github.com/matrix-org/matrix-spec-proposals/pull/2659): application service ping endpoint. Contributed by Tulir @ Beeper.

changelog.d/15529.misc (new file)

@@ -0,0 +1 @@
Speed up rebuilding of the user directory for local users.

changelog.d/15531.misc (new file)

@@ -0,0 +1 @@
Speed up deleting of old rows in `event_push_actions`.

changelog.d/15532.misc (new file)

@@ -0,0 +1 @@
Install the `xmlsec` and `mdbook` packages and switch back to the upstream [cachix/devenv](https://github.com/cachix/devenv) repo in the nix development environment.

changelog.d/15533.misc (new file)

@@ -0,0 +1 @@
Install the `xmlsec` and `mdbook` packages and switch back to the upstream [cachix/devenv](https://github.com/cachix/devenv) repo in the nix development environment.

changelog.d/15534.misc (new file)

@@ -0,0 +1 @@
Implement [MSC3987](https://github.com/matrix-org/matrix-spec-proposals/pull/3987) by removing `"dont_notify"` from the list of actions in default push rules.

changelog.d/15535.misc (new file)

@@ -0,0 +1 @@
Move various module API callback registration methods to a dedicated class.

@@ -0,0 +1 @@
Implement [MSC4009](https://github.com/matrix-org/matrix-spec-proposals/pull/4009) to expand the supported characters in Matrix IDs.

changelog.d/15539.misc (new file)

@@ -0,0 +1 @@
Proxy `/user/devices` federation queries to application services for [MSC3984](https://github.com/matrix-org/matrix-spec-proposals/pull/3984).

changelog.d/15542.misc (new file)

@@ -0,0 +1 @@
Factor out an `is_mine_server_name` method.

changelog.d/15543.misc (new file)

@@ -0,0 +1 @@
Allow running Complement tests using [podman](https://podman.io/) by adding a `PODMAN` environment variable to `scripts-dev/complement.sh`.

changelog.d/15544.doc (new file)

@@ -0,0 +1 @@
Clarify documentation of the "Create or modify account" Admin API.

changelog.d/15545.misc (new file)

@@ -0,0 +1 @@
Install the `xmlsec` and `mdbook` packages and switch back to the upstream [cachix/devenv](https://github.com/cachix/devenv) repo in the nix development environment.

changelog.d/15548.misc (new file)

@@ -0,0 +1 @@
Bump serde from 1.0.160 to 1.0.162.

changelog.d/15549.misc (new file)

@@ -0,0 +1 @@
Bump types-setuptools from 67.6.0.5 to 67.7.0.1.

changelog.d/15550.misc (new file)

@@ -0,0 +1 @@
Bump sentry-sdk from 1.19.1 to 1.22.1.

changelog.d/15551.misc (new file)

@@ -0,0 +1 @@
Bump ruff from 0.0.259 to 0.0.265.

changelog.d/15552.misc (new file)

@@ -0,0 +1 @@
Bump hiredis from 2.2.2 to 2.2.3.

changelog.d/15553.misc (new file)

@@ -0,0 +1 @@
Bump types-requests from 2.29.0.0 to 2.30.0.0.

changelog.d/15554.bugfix (new file)

@@ -0,0 +1 @@
Experimental support for [MSC4010](https://github.com/matrix-org/matrix-spec-proposals/pull/4010) which rejects setting the `"m.push_rules"` via account data.

changelog.d/15555.bugfix (new file)

@@ -0,0 +1 @@
Experimental support for [MSC4010](https://github.com/matrix-org/matrix-spec-proposals/pull/4010) which rejects setting the `"m.push_rules"` via account data.

changelog.d/15560.doc (new file)

@@ -0,0 +1 @@
Fix path to the `statistics/database/rooms` admin API in documentation.

changelog.d/15562.misc (new file)

@@ -0,0 +1 @@
Declare unstable support for [MSC3391](https://github.com/matrix-org/matrix-spec-proposals/pull/3391) under `/_matrix/client/versions` if the experimental implementation is enabled.

changelog.d/15566.bugfix (new file)

@@ -0,0 +1 @@
Require at least poetry-core v1.2.0.

changelog.d/15567.docker (new file)

@@ -0,0 +1 @@
Add pkg-config package to Stage 0 to be able to build Dockerfile on ppc64le architecture.

@@ -0,0 +1 @@
Print full error and stack-trace of any exception that occurs during startup/initialization.

changelog.d/15570.misc (new file)

@@ -0,0 +1 @@
Allow `pip install` to use setuptools_rust 1.6.0 when building Synapse.

changelog.d/15576.misc (new file)

@@ -0,0 +1 @@
Deal with upcoming Github Actions deprecations.

@@ -70,6 +70,10 @@ redis:
port: 6379
# dbid: <redis_logical_db_id>
# password: <secret_password>
# use_tls: True
# certificate_file: <path_to_certificate>
# private_key_file: <path_to_private_key>
# ca_file: <path_to_ca_certificate>
```
This assumes that your Redis service is called `redis` in your Docker Compose file.

debian/changelog (vendored)

@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.83.0) stable; urgency=medium

  * New Synapse release 1.83.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 09 May 2023 18:13:37 +0200

matrix-synapse-py3 (1.83.0~rc1) stable; urgency=medium

  * New Synapse release 1.83.0rc1.

@@ -46,7 +46,7 @@ for port in 8080 8081 8082; do
echo ''
# Warning, this heredoc depends on the interaction of tabs and spaces.
# Please don't accidentaly bork me with your fancy settings.
# Please don't accidentally bork me with your fancy settings.
listeners=$(cat <<-PORTLISTENERS
# Configure server to listen on both $https_port and $port
# This overrides some of the default settings above
@@ -80,12 +80,8 @@ for port in 8080 8081 8082; do
echo "tls_certificate_path: \"$DIR/$port/localhost:$port.tls.crt\""
echo "tls_private_key_path: \"$DIR/$port/localhost:$port.tls.key\""
# Ignore keys from the trusted keys server
echo '# Ignore keys from the trusted keys server'
echo 'trusted_key_servers:'
echo ' - server_name: "matrix.org"'
echo ' accept_keys_insecurely: true'
echo ''
# Request keys directly from servers contacted over federation
echo 'trusted_key_servers: []'
# Allow the servers to communicate over localhost.
allow_list=$(cat <<-ALLOW_LIST

@@ -37,7 +37,7 @@ RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq \
build-essential curl git libffi-dev libssl-dev \
build-essential curl git libffi-dev libssl-dev pkg-config \
&& rm -rf /var/lib/apt/lists/*
# Install rust and ensure it's in the PATH.

@@ -6,10 +6,6 @@
worker_app: "{{ app }}"
worker_name: "{{ name }}"
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: {{ port }}

@@ -69,6 +69,9 @@ import yaml
from jinja2 import Environment, FileSystemLoader
MAIN_PROCESS_HTTP_LISTENER_PORT = 8080
MAIN_PROCESS_INSTANCE_NAME = "main"
MAIN_PROCESS_LOCALHOST_ADDRESS = "127.0.0.1"
MAIN_PROCESS_REPLICATION_PORT = 9093
# A simple name used as a placeholder in the WORKERS_CONFIG below. This will be replaced
# during processing with the name of the worker.
@@ -719,8 +722,8 @@ def generate_worker_files(
# shared config file.
listeners = [
{
"port": 9093,
"bind_address": "127.0.0.1",
"port": MAIN_PROCESS_REPLICATION_PORT,
"bind_address": MAIN_PROCESS_LOCALHOST_ADDRESS,
"type": "http",
"resources": [{"names": ["replication"]}],
}
@@ -870,6 +873,14 @@ def generate_worker_files(
workers_in_use = len(requested_worker_types) > 0
# If there are workers, add the main process to the instance_map too.
if workers_in_use:
instance_map = shared_config.setdefault("instance_map", {})
instance_map[MAIN_PROCESS_INSTANCE_NAME] = {
"host": MAIN_PROCESS_LOCALHOST_ADDRESS,
"port": MAIN_PROCESS_REPLICATION_PORT,
}
# Shared homeserver config
convert(
"/conf/shared.yaml.j2",

@@ -1,10 +1,12 @@
# Experimental Features API
This API allows a server administrator to enable or disable some experimental features on a per-user
basis. Currently supported features are [msc3026](https://github.com/matrix-org/matrix-spec-proposals/pull/3026): busy
presence state enabled, [msc2654](https://github.com/matrix-org/matrix-spec-proposals/pull/2654): enable unread counts,
[msc3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
for another client, and [msc3967](https://github.com/matrix-org/matrix-spec-proposals/pull/3967): do not require
basis. The currently supported features are:
- [MSC3026](https://github.com/matrix-org/matrix-spec-proposals/pull/3026): busy
presence state enabled
- [MSC3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
for another client
- [MSC3967](https://github.com/matrix-org/matrix-spec-proposals/pull/3967): do not require
UIA when first uploading cross-signing keys.
@@ -19,7 +21,7 @@ provide a body containing the user id and listing the features to enable/disable
{
"features": {
"msc3026":true,
"msc2654":true
"msc3881":true
}
}
```
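For illustration, a complete request to this endpoint could look like the sketch below; the endpoint path and token handling are assumptions, since only the request body appears in this hunk:

```sh
# Hedged sketch: enable busy-presence (MSC3026) and remote push toggling
# (MSC3881) for one user via the per-user experimental features admin API.
curl -X PUT "http://localhost:8008/_synapse/admin/v1/experimental_features/@user:example.com" \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"features": {"msc3026": true, "msc3881": true}}'
```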
@@ -46,7 +48,6 @@ user like so:
{
"features": {
"msc3026": true,
"msc2654": true,
"msc3881": false,
"msc3967": false
}

@@ -100,7 +100,7 @@ it should be enough to get a rough idea of where database disk space is going.
The API is:
```
GET /_synapse/admin/v1/statistics/statistics/database/rooms
GET /_synapse/admin/v1/statistics/database/rooms
```
A response body like the following is returned:

@@ -62,7 +62,7 @@ URL parameters:
- `user_id`: fully-qualified user id: for example, `@user:server.com`.
## Create or modify Account
## Create or modify account
This API allows an administrator to create or modify a user account with a
specific `user_id`.
@@ -78,28 +78,29 @@ with a body of:
```json
{
"password": "user_password",
"displayname": "User",
"logout_devices": false,
"displayname": "Alice Marigold",
"avatar_url": "mxc://example.com/abcde12345",
"threepids": [
{
"medium": "email",
"address": "<user_mail_1>"
"address": "alice@example.com"
},
{
"medium": "email",
"address": "<user_mail_2>"
"address": "alice@domain.org"
}
],
"external_ids": [
{
"auth_provider": "<provider1>",
"external_id": "<user_id_provider_1>"
"auth_provider": "example",
"external_id": "12345"
},
{
"auth_provider": "<provider2>",
"external_id": "<user_id_provider_2>"
"auth_provider": "example2",
"external_id": "abc54321"
}
],
"avatar_url": "<avatar_url>",
"admin": false,
"deactivated": false,
"user_type": null
@@ -112,41 +113,51 @@ Returns HTTP status code:
URL parameters:
- `user_id`: fully-qualified user id: for example, `@user:server.com`.
- `user_id` - A fully-qualified user id. For example, `@user:server.com`.
Body parameters:
- `password` - string, optional. If provided, the user's password is updated and all
- `password` - **string**, optional. If provided, the user's password is updated and all
devices are logged out, unless `logout_devices` is set to `false`.
- `logout_devices` - bool, optional, defaults to `true`. If set to false, devices aren't
- `logout_devices` - **bool**, optional, defaults to `true`. If set to `false`, devices aren't
logged out even when `password` is provided.
- `displayname` - string, optional, defaults to the value of `user_id`.
- `threepids` - array, optional, allows setting the third-party IDs (email, msisdn)
- `medium` - string. Kind of third-party ID, either `email` or `msisdn`.
- `address` - string. Value of third-party ID.
belonging to a user.
- `external_ids` - array, optional. Allow setting the identifier of the external identity
provider for SSO (Single sign-on). Details in the configuration manual under the
sections [sso](../usage/configuration/config_documentation.md#sso) and [oidc_providers](../usage/configuration/config_documentation.md#oidc_providers).
- `auth_provider` - string. ID of the external identity provider. Value of `idp_id`
in the homeserver configuration. Note that no error is raised if the provided
value is not in the homeserver configuration.
- `external_id` - string, user ID in the external identity provider.
- `avatar_url` - string, optional, must be a
- `displayname` - **string**, optional. If set to an empty string (`""`), the user's display name
will be removed.
- `avatar_url` - **string**, optional. Must be a
[MXC URI](https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris).
- `admin` - bool, optional, defaults to `false`.
- `deactivated` - bool, optional. If unspecified, deactivation state will be left
unchanged on existing accounts and set to `false` for new accounts.
A user cannot be erased by deactivating with this API. For details on
deactivating users see [Deactivate Account](#deactivate-account).
- `user_type` - string or null, optional. If provided, the user type will be
adjusted. If `null` given, the user type will be cleared. Other
allowed options are: `bot` and `support`.
If set to an empty string (`""`), the user's avatar is removed.
- `threepids` - **array**, optional. If provided, the user's third-party IDs (email, msisdn) are
entirely replaced with the given list. Each item in the array is an object with the following
fields:
- `medium` - **string**, required. The type of third-party ID, either `email` or `msisdn` (phone number).
- `address` - **string**, required. The third-party ID itself, e.g. `alice@example.com` for `email` or
`447470274584` (for a phone number with country code "44") and `19254857364` (for a phone number
with country code "1") for `msisdn`.
Note: If a threepid is removed from a user via this option, Synapse will also attempt to remove
that threepid from any identity servers it is aware of that have a binding for it.
- `external_ids` - **array**, optional. Allow setting the identifier of the external identity
provider for SSO (Single sign-on). More details are in the configuration manual under the
sections [sso](../usage/configuration/config_documentation.md#sso) and [oidc_providers](../usage/configuration/config_documentation.md#oidc_providers).
- `auth_provider` - **string**, required. The unique, internal ID of the external identity provider.
The same as `idp_id` from the homeserver configuration. Note that no error is raised if the
provided value is not in the homeserver configuration.
- `external_id` - **string**, required. An identifier for the user in the external identity provider.
When the user logs in to the identity provider, this must be the unique ID that they map to.
- `admin` - **bool**, optional, defaults to `false`. Whether the user is a homeserver administrator,
granting them access to the Admin API, among other things.
- `deactivated` - **bool**, optional. If unspecified, deactivation state will be left unchanged.
If the user already exists then optional parameters default to the current value.
Note: the `password` field must also be set if both of the following are true:
- `deactivated` is set to `false` and the user was previously deactivated (you are reactivating this user)
- Users are allowed to set their password on this homeserver (both `password_config.enabled` and
`password_config.localdb_enabled` config options are set to `true`).
Users' passwords are wiped upon account deactivation, hence the need to set a new one here.
In order to re-activate an account, `deactivated` must be set to `false`. If
users do not log in via single-sign-on, a new `password` must be provided.
Note: a user cannot be erased with this API. For more details on
deactivating and erasing users see [Deactivate Account](#deactivate-account).
- `user_type` - **string** or null, optional. If not provided, the user type will
not be changed. If `null` is given, the user type will be cleared.
Other allowed options are: `bot` and `support`.
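As a usage sketch tying the parameters above together (the `/_synapse/admin/v2/users/<user_id>` request line falls outside this hunk, so the path and token handling here are assumptions):

```sh
# Hedged sketch: create or modify @alice:example.com, setting only a
# display name; unspecified optional parameters keep their current values.
curl -X PUT "http://localhost:8008/_synapse/admin/v2/users/@alice:example.com" \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"displayname": "Alice Marigold", "admin": false}'
```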
## List Accounts

@@ -346,6 +346,7 @@ The above will run a monolithic (single-process) Synapse with SQLite as the database.
A safe example would be `WORKER_TYPES="federation_inbound, federation_sender, synchrotron"`.
See the [worker documentation](../workers.md) for additional information on workers.
- Passing `ASYNCIO_REACTOR=1` as an environment variable to use the Twisted asyncio reactor instead of the default one.
- Passing `PODMAN=1` will use the [podman](https://podman.io/) container runtime instead of Docker, as in the sketch below.
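A minimal sketch (the `-run` filter is an illustrative placeholder, passed through to `go test`):

```sh
# Run the Complement suite under podman rather than docker.
PODMAN=1 ./scripts-dev/complement.sh -run TestOutboundFederationSend
```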
To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`, e.g:
```sh

@@ -1,10 +1,6 @@
worker_app: synapse.app.generic_worker
worker_name: generic_worker1
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: 8083

@@ -88,6 +88,84 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.84.0
## Deprecation of `worker_replication_*` configuration settings
When using workers,
* `worker_replication_host`
* `worker_replication_http_port`
* `worker_replication_http_tls`
can now be removed from individual worker YAML configuration ***if*** you add the main process to the `instance_map` in the shared YAML configuration,
using the name `main`.
### Before:
Shared YAML
```yaml
instance_map:
generic_worker1:
host: localhost
port: 5678
tls: false
```
Worker YAML
```yaml
worker_app: synapse.app.generic_worker
worker_name: generic_worker1
worker_replication_host: localhost
worker_replication_http_port: 3456
worker_replication_http_tls: false
worker_listeners:
- type: http
port: 1234
resources:
- names: [client, federation]
- type: http
port: 5678
resources:
- names: [replication]
worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
```
### After:
Shared YAML
```yaml
instance_map:
main:
host: localhost
port: 3456
tls: false
generic_worker1:
host: localhost
port: 5678
tls: false
```
Worker YAML
```yaml
worker_app: synapse.app.generic_worker
worker_name: generic_worker1
worker_listeners:
- type: http
port: 1234
resources:
- names: [client, federation]
- type: http
port: 5678
resources:
- names: [replication]
worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
```
Notes:
* `tls` is optional but mirrors the functionality of `worker_replication_http_tls`
# Upgrading to v1.81.0
## Application service path & authentication deprecations

@@ -1768,6 +1768,30 @@ Example configuration:
max_image_pixels: 35M
```
---
### `prevent_media_downloads_from`
A list of domains to never download media from. Media from these
domains that is already downloaded will not be deleted, but will be
inaccessible to users. This option does not affect admin APIs trying
to download/operate on media.
This will not prevent the listed domains from accessing media themselves.
It simply prevents users on this server from downloading media originating
from the listed servers.
This will have no effect on media originating from the local server.
This only affects media downloaded from other Matrix servers; to
block domains from URL previews, see [`url_preview_url_blacklist`](#url_preview_url_blacklist).
Defaults to an empty list (nothing blocked).
Example configuration:
```yaml
prevent_media_downloads_from:
- evil.example.org
- evil2.example.org
```
---
### `dynamic_thumbnails`
Whether to generate new thumbnails on the fly to precisely match
@@ -3442,6 +3466,9 @@ This option has a number of sub-options. They are as follows:
user has unread messages in. Defaults to true, meaning push clients will see the number of
rooms with unread messages in them. Set to false to instead send the number
of unread messages.
* `jitter_delay`: Delays push notifications by a random amount up to the given
duration. Useful for mitigating timing attacks. Optional, defaults to no
delay. _Added in Synapse 1.84.0._
Example configuration:
```yaml
@@ -3449,6 +3476,7 @@ push:
enabled: true
include_content: false
group_unread_count_by_room: false
jitter_delay: "10s"
```
---
## Rooms
@@ -3695,6 +3723,16 @@ default_power_level_content_override:
trusted_private_chat: null
public_chat: null
```
---
### `forget_rooms_on_leave`
Set to true to automatically forget rooms for users when they leave them, either
normally or via a kick or ban. Defaults to false.
Example configuration:
```yaml
forget_rooms_on_leave: false
```
---
## Opentracing
@@ -3846,15 +3884,20 @@ federation_sender_instances:
### `instance_map`
When using workers this should be a map from [`worker_name`](#worker_name) to the
HTTP replication listener of the worker, if configured.
HTTP replication listener of the worker, if configured, and to the main process.
Each worker declared under [`stream_writers`](../../workers.md#stream-writers) needs
a HTTP replication listener, and that listener should be included in the `instance_map`.
(The main process also needs an HTTP replication listener, but it should not be
listed in the `instance_map`.)
The main process also needs an entry on the `instance_map`, and it should be listed under
`main` **if even one other worker exists**. Ensure the port matches what is declared
inside the `listener` block for a `replication` listener.
Example configuration:
```yaml
instance_map:
main:
host: localhost
port: 8030
worker1:
host: localhost
port: 8034
@@ -3938,9 +3981,16 @@ This setting has the following sub-options:
localhost and 6379
* `password`: Optional password if configured on the Redis instance.
* `dbid`: Optional redis dbid if needs to connect to specific redis logical db.
* `use_tls`: Whether to use a TLS connection. Defaults to false.
* `certificate_file`: Optional path to the certificate file
* `private_key_file`: Optional path to the private key file
* `ca_file`: Optional path to the CA certificate file. Use this or `ca_path`, not both:
* `ca_path`: Optional path to the folder containing the CA certificate file
_Added in Synapse 1.78.0._
_Changed in Synapse 1.84.0: Added use\_tls, certificate\_file, private\_key\_file, ca\_file and ca\_path attributes_
Example configuration:
```yaml
redis:
@@ -3949,6 +3999,10 @@ redis:
port: 6379
password: <secret_password>
dbid: <dbid>
#use_tls: True
#certificate_file: <path_to_the_certificate_file>
#private_key_file: <path_to_the_private_key_file>
#ca_file: <path_to_the_ca_certificate_file>
```
---
## Individual worker configuration
@@ -3986,6 +4040,7 @@ worker_name: generic_worker1
```
---
### `worker_replication_host`
*Deprecated as of version 1.84.0. Place `host` under the `main` entry on the [`instance_map`](#instance_map) in your shared YAML configuration instead.*
The HTTP replication endpoint that it should talk to on the main Synapse process.
The main Synapse process defines this with a `replication` resource in
@@ -3997,6 +4052,7 @@ worker_replication_host: 127.0.0.1
```
---
### `worker_replication_http_port`
*Deprecated as of version 1.84.0. Place `port` under the `main` entry on the [`instance_map`](#instance_map) in your shared YAML configuration instead.*
The HTTP replication port that it should talk to on the main Synapse process.
The main Synapse process defines this with a `replication` resource in
@@ -4008,6 +4064,7 @@ worker_replication_http_port: 9093
```
---
### `worker_replication_http_tls`
*Deprecated as of version 1.84.0. Place `tls` under the `main` entry on the [`instance_map`](#instance_map) in your shared YAML configuration instead.*
Whether TLS should be used for talking to the HTTP replication port on the main
Synapse process.
@@ -4033,9 +4090,9 @@ A worker can handle HTTP requests. To do so, a `worker_listeners` option
must be declared, in the same way as the [`listeners` option](#listeners)
in the shared config.
Workers declared in [`stream_writers`](#stream_writers) will need to include a
`replication` listener here, in order to accept internal HTTP requests from
other workers.
Workers declared in [`stream_writers`](#stream_writers) and [`instance_map`](#instance_map)
will need to include a `replication` listener here, in order to accept internal HTTP
requests from other workers.
Example configuration:
```yaml

@@ -87,12 +87,18 @@ shared configuration file.
### Shared configuration
Normally, only a couple of changes are needed to make an existing configuration
file suitable for use with workers. First, you need to enable an
Normally, only a few changes are needed to make an existing configuration
file suitable for use with workers:
* First, you need to enable an
["HTTP replication listener"](usage/configuration/config_documentation.md#listeners)
for the main process; and secondly, you need to enable
[redis-based replication](usage/configuration/config_documentation.md#redis).
Optionally, a [shared secret](usage/configuration/config_documentation.md#worker_replication_secret)
for the main process
* Secondly, you need to enable
[redis-based replication](usage/configuration/config_documentation.md#redis)
* You will need to add an [`instance_map`](usage/configuration/config_documentation.md#instance_map)
with the `main` process defined, as well as the relevant connection information from
its HTTP `replication` listener (defined in step 1 above). Note that the `host` defined
is the address at which the worker needs to reach the `main` process, not necessarily the same address that the listener is bound to.
* Optionally, a [shared secret](usage/configuration/config_documentation.md#worker_replication_secret)
can be used to authenticate HTTP traffic between workers. For example:
```yaml
@@ -111,6 +117,11 @@ worker_replication_secret: ""
redis:
enabled: true
instance_map:
main:
host: 'localhost'
port: 9093
```
See the [configuration manual](usage/configuration/config_documentation.md)
@@ -130,13 +141,13 @@ In the config file for each worker, you must specify:
* The type of worker ([`worker_app`](usage/configuration/config_documentation.md#worker_app)).
The currently available worker applications are listed [below](#available-worker-applications).
* A unique name for the worker ([`worker_name`](usage/configuration/config_documentation.md#worker_name)).
* The HTTP replication endpoint that it should talk to on the main synapse process
([`worker_replication_host`](usage/configuration/config_documentation.md#worker_replication_host) and
[`worker_replication_http_port`](usage/configuration/config_documentation.md#worker_replication_http_port)).
* If handling HTTP requests, a [`worker_listeners`](usage/configuration/config_documentation.md#worker_listeners) option
with an `http` listener.
* **Synapse 1.72 and older:** if handling the `^/_matrix/client/v3/keys/upload` endpoint, the HTTP URI for
the main process (`worker_main_http_uri`). This config option is no longer required and is ignored when running Synapse 1.73 and newer.
* **Synapse 1.83 and older:** The HTTP replication endpoint that the worker should talk to on the main Synapse process
([`worker_replication_host`](usage/configuration/config_documentation.md#worker_replication_host) and
[`worker_replication_http_port`](usage/configuration/config_documentation.md#worker_replication_http_port)). If using Synapse 1.84 and newer, these are not needed if `main` is defined on the [shared configuration](#shared-configuration) `instance_map`.
For example:
@@ -417,11 +428,14 @@ effects of bursts of events from that bridge on events sent by normal users.
Additionally, the writing of specific streams (such as events) can be moved off
of the main process to a particular worker.
To enable this, the worker must have a
[HTTP `replication` listener](usage/configuration/config_documentation.md#listeners) configured,
have a [`worker_name`](usage/configuration/config_documentation.md#worker_name)
To enable this, the worker must:
* Have an [HTTP `replication` listener](usage/configuration/config_documentation.md#listeners) configured,
* Have a [`worker_name`](usage/configuration/config_documentation.md#worker_name)
and be listed in the [`instance_map`](usage/configuration/config_documentation.md#instance_map)
config. The same worker can handle multiple streams, but unless otherwise documented,
config.
* Have the main process declared on the [`instance_map`](usage/configuration/config_documentation.md#instance_map) as well.
Note: The same worker can handle multiple streams, but unless otherwise documented,
each stream can only have a single writer.
For example, to move event persistence off to a dedicated worker, the shared
@@ -429,6 +443,9 @@ configuration would include:
```yaml
instance_map:
main:
host: localhost
port: 8030
event_persister1:
host: localhost
port: 8034

@@ -8,16 +8,16 @@
"pre-commit-hooks": "pre-commit-hooks"
},
"locked": {
"lastModified": 1682534083,
"narHash": "sha256-lBgFaLNHRQtD3InZbBXzIS8HgZUgcPJ6jiqGa4FJPrk=",
"owner": "anoadragon453",
"lastModified": 1683102061,
"narHash": "sha256-kOphT6V0uQUlFNBP3GBjs7DAU7fyZGGqCs9ue1gNY6E=",
"owner": "cachix",
"repo": "devenv",
"rev": "9694bd0a845dd184d4468cc3d3461089aace787a",
"rev": "ff1f29e41756553174d596cafe3a9fa77595100b",
"type": "github"
},
"original": {
"owner": "anoadragon453",
"ref": "anoa/fix_languages_python",
"owner": "cachix",
"ref": "main",
"repo": "devenv",
"type": "github"
}

@@ -50,11 +50,7 @@
# Output a development shell for x86_64/aarch64 Linux/Darwin (MacOS).
systems.url = "github:nix-systems/default";
# A development environment manager built on Nix. See https://devenv.sh.
# This is temporarily overridden to a fork that fixes a quirk between
# devenv's service and python language features. This can be removed
# when https://github.com/cachix/devenv/pull/559 is merged upstream.
devenv.url = "github:anoadragon453/devenv/anoa/fix_languages_python";
#devenv.url = "github:cachix/devenv/main";
devenv.url = "github:cachix/devenv/main";
# Rust toolchains and rust-analyzer nightly.
fenix = {
url = "github:nix-community/fenix";
@@ -97,9 +93,13 @@
# Native dependencies for unit tests (SyTest also requires OpenSSL).
openssl
xmlsec
# Native dependencies for running Complement.
olm
# For building the Synapse documentation website.
mdbook
];
# Install Python and manage a virtualenv with Poetry.

poetry.lock (generated)

@@ -629,101 +629,101 @@ typing-extensions = {version = ">=3.7.4.3", markers = "python_version < \"3.8\""
[[package]]
name = "hiredis"
version = "2.2.2"
version = "2.2.3"
description = "Python wrapper for hiredis"
category = "main"
optional = true
python-versions = ">=3.7"
files = [
{file = "hiredis-2.2.2-cp310-cp310-macosx_10_12_universal2.whl", hash = "sha256:ba6123ff137275e2f4c31fc74b93813fcbb79160d43f5357163e09638c7743de"},
{file = "hiredis-2.2.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:d995846acc8e3339fb7833cd19bf6f3946ff5157c8488a4df9c51cd119a36870"},
{file = "hiredis-2.2.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:82f869ca44bcafa37cd71cfa1429648fa354d6021dcd72f03a2f66bcb339c546"},
{file = "hiredis-2.2.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa90a5ee7a7f30c3d72d3513914b8f51f953a71b8cbd52a241b6db6685e55645"},
{file = "hiredis-2.2.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:01e2e588392b5fdcc3a6aa0eb62a2eb2a142f829082fa4c3354228029d3aa1ce"},
{file = "hiredis-2.2.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5dac177a6ab8b4eb4d5e74978c29eef7cc9eef14086f814cb3893f7465578044"},
{file = "hiredis-2.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4cb992e3f9753c5a0c637f333c2010d1ad702aebf2d730ee4d484f32b19bae97"},
{file = "hiredis-2.2.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e61c22fda5fc25d31bbced24a8322d33c5cb8cad9ba698634c16edb5b3e79a91"},
{file = "hiredis-2.2.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9873898e26e50cd41415e9d1ea128bfdb60eb26abb4f5be28a4500fd7834dc0c"},
{file = "hiredis-2.2.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2c18b00a382546e19bcda8b83dcca5b6e0dbc238d235723434405f48a18e8f77"},
{file = "hiredis-2.2.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:8c3a6998f6f88d7ca4d082fd26525074df13162b274d7c64034784b6fdc56666"},
{file = "hiredis-2.2.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:0fc1f9a9791d028b2b8afa318ccff734c7fc8861d37a04ca9b3d27c9b05f9718"},
{file = "hiredis-2.2.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5f2cfd323f83985f2bed6ed013107873275025af270485b7d04c338bfb47bd14"},
{file = "hiredis-2.2.2-cp310-cp310-win32.whl", hash = "sha256:55c7e9a9e05f8c0555bfba5c16d98492f8b6db650e56d0c35cc28aeabfc86020"},
{file = "hiredis-2.2.2-cp310-cp310-win_amd64.whl", hash = "sha256:eaff526c2fed31c971b0fa338a25237ae5513550ef75d0b85b9420ec778cca45"},
{file = "hiredis-2.2.2-cp311-cp311-macosx_10_12_universal2.whl", hash = "sha256:688b9b7458b4f3f452fea6ed062c04fa1fd9a69d9223d95c6cb052581aba553b"},
{file = "hiredis-2.2.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:544d52fde3a8dac7854673eac20deca05214758193c01926ffbb0d57c6bf4ffe"},
{file = "hiredis-2.2.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:990916e8b0b4eedddef787e73549b562f8c9e73a7fea82f9b8ff517806774ad0"},
{file = "hiredis-2.2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:10dc34854e9acfb3e7cc4157606e2efcb497b1c6fca07bd6c3be34ae5e413f13"},
{file = "hiredis-2.2.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c446a2007985ae49c2ecd946dd819dea72b931beb5f647ba08655a1a1e133fa8"},
{file = "hiredis-2.2.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:02b9f928dc6cd43ed0f0ffc1c75fb209fb180f004b7e2e19994805f998d247aa"},
{file = "hiredis-2.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2a355aff8dfa02ebfe67f0946dd706e490bddda9ea260afac9cdc43942310c53"},
{file = "hiredis-2.2.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:831461abe5b63e73719621a5f31d8fc175528a05dc09d5a8aa8ef565d6deefa4"},
{file = "hiredis-2.2.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:75349f7c8f77eb0fd33ede4575d1e5b0a902a8176a436bf03293d7fec4bd3894"},
{file = "hiredis-2.2.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:1eb39b34d15220095dc49ad1e1082580d35cd3b6d9741def52988b5075e4ff03"},
{file = "hiredis-2.2.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:a9b306f4e870747eea8b008dcba2e9f1e4acd12b333a684bc1cc120e633a280e"},
{file = "hiredis-2.2.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:03dfb4ab7a2136ce1be305592553f102e1bd91a96068ab2778e3252aed20d9bc"},
{file = "hiredis-2.2.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d8bc89c7e33fecb083a199ade0131a34d20365a8c32239e218da57290987ca9a"},
{file = "hiredis-2.2.2-cp311-cp311-win32.whl", hash = "sha256:ed44b3c711cecde920f238ac35f70ac08744f2079b6369655856e43944464a72"},
{file = "hiredis-2.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:2e2f0ce3e8ab1314a52f562386220f6714fd24d7968a95528135ad04e88cc741"},
{file = "hiredis-2.2.2-cp37-cp37m-macosx_10_12_x86_64.whl", hash = "sha256:e7e61ab75b851aac2d6bc634d03738a242a6ef255a44178437b427c5ebac0a87"},
{file = "hiredis-2.2.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9eb14339e399554bb436cc4628e8aaa3943adf7afcf34aba4cbd1e3e6b9ec7ec"},
{file = "hiredis-2.2.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e4ec57886f20f4298537cb1ab9dbda98594fb8d7c724c5fbf9a4b55329fd4a63"},
{file = "hiredis-2.2.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a89f5afb9827eab07b9c8c585cd4dc95e5232c727508ae2c935d09531abe9e33"},
{file = "hiredis-2.2.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3645590b9234cafd21c8ecfbf252ad9aa1d67629f4bdc98ba3627f48f8f7b5aa"},
{file = "hiredis-2.2.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:99350e89f52186146938bdba0b9c6cd68802c20346707d6ca8366f2d69d89b2f"},
{file = "hiredis-2.2.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b5d290f3d8f7a05c4adbe6c355055b87c7081bfa1eccd1ae5491216307ee5f53"},
{file = "hiredis-2.2.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:c95be6f20377d5995ef41a98314542e194d2dc9c2579d8f130a1aea78d48fd42"},
{file = "hiredis-2.2.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:e4e2da61a04251121cb551f569c3250e6e27e95f2a80f8351c36822eda1f5d2b"},
{file = "hiredis-2.2.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:ac7f8d68826f95a3652e44b0c12bfa74d3aa6531d47d5dbe6a2fbfc7979bc20f"},
{file = "hiredis-2.2.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:359e662324318baadb768d3c4ade8c4bdcfbb313570eb01e15d75dc5db781815"},
{file = "hiredis-2.2.2-cp37-cp37m-win32.whl", hash = "sha256:fd0ca35e2cf44866137cbb5ae7e439fab18a0b0e0e1cf51d45137622d59ec012"},
{file = "hiredis-2.2.2-cp37-cp37m-win_amd64.whl", hash = "sha256:c9488ffb10acc6b121c498875278b0a6715d193742dc92d21a281712169ac06d"},
{file = "hiredis-2.2.2-cp38-cp38-macosx_10_12_universal2.whl", hash = "sha256:1570fe4f93bc1ea487fb566f2b863fd0ed146f643a4ea31e4e07036db9e0c7f8"},
{file = "hiredis-2.2.2-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:8753c561b37cccbda7264c9b4486e206a6318c18377cd647beb3aa41a15a6beb"},
{file = "hiredis-2.2.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a06d0dd84f10be6b15a92edbca2490b64917280f66d8267c63de99b6550308ad"},
{file = "hiredis-2.2.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40ff3f1ec3a4046732e9e41df08dcb1a559847196755d295d43e32528aae39e6"},
{file = "hiredis-2.2.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c24d856e13c02bd9d28a189e47be70cbba6f2c2a4bd85a8cc98819db9e7e3e06"},
{file = "hiredis-2.2.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4ee9fe7cef505e8d925c70bebcc16bfab12aa7af922f948346baffd4730f7b00"},
{file = "hiredis-2.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03ab1d545794bb0e09f3b1e2c8b3adcfacd84f6f2d402bfdcd441a98c0e9643c"},
{file = "hiredis-2.2.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:14dfccf4696d75395c587a5dafafb4f7aa0a5d55309341d10bc2e7f1eaa20771"},
{file = "hiredis-2.2.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:2ddc573809ca4374da1b24b48604f34f3d5f0911fcccfb1c403ff8d8ca31c232"},
{file = "hiredis-2.2.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:24301ca2bf9b2f843b4c3015c90f161798fa3bbc5b95fd494785751b137dbbe2"},
{file = "hiredis-2.2.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:b083a69e158138ffa95740ff6984d328259387b5596908021b3ccb946469ff66"},
{file = "hiredis-2.2.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:8e16dc949cc2e9c5fbcd08de05b5fb61b89ff65738d772863c5c96248628830e"},
{file = "hiredis-2.2.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:674f296c3c89cb53f97aa9ba2508d3f360ad481b9e0c0e3a59b342a15192adaf"},
{file = "hiredis-2.2.2-cp38-cp38-win32.whl", hash = "sha256:20ecbf87aac4f0f33f9c55ae15cb73b485d256c57518c590b7d0c9c152150632"},
{file = "hiredis-2.2.2-cp38-cp38-win_amd64.whl", hash = "sha256:b11960237a3025bf248135e5b497dc4923e83d137eb798fbfe78b40d57c4b156"},
{file = "hiredis-2.2.2-cp39-cp39-macosx_10_12_universal2.whl", hash = "sha256:18103090b8eda9c529830e26594e88b0b1472055785f3ed29b8adc694d03862a"},
{file = "hiredis-2.2.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:d1acb7c957e5343303b3862947df3232dc7395da320b3b9ae076dfaa56ad59dc"},
{file = "hiredis-2.2.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4997f55e1208af95a8fbd0fa187b04c672fcec8f66e49b9ab7fcc45cc1657dc4"},
{file = "hiredis-2.2.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:449e18506d22af40977abd0f5a8979f57f88d4562fe591478a3438d76a15133d"},
{file = "hiredis-2.2.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a32a4474f7a4abdea954f3365608edee3f90f1de9fa05b81d214d4cad04c718a"},
{file = "hiredis-2.2.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e86c800c6941698777fc58419216a66a7f76504f1cea72381d2ee206888e964d"},
{file = "hiredis-2.2.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c73aa295c5369135247ff63aa1fbb116067485d0506cd787cc0c868e72bbee55"},
{file = "hiredis-2.2.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e10a66680023bd5c5a3d605dae0844e3dde60eac5b79e39f51395a2aceaf634"},
{file = "hiredis-2.2.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:03ab760fc96e0c5d36226eb727f30645bf6a53c97f14bfc0a4d0401bfc9b8af7"},
{file = "hiredis-2.2.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:855d258e7f1aee3d7fbd5b1dc87790b1b5016e23d369a97b934a25ae7bc0171f"},
{file = "hiredis-2.2.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:ccc33d87866d213f84f857a98f69c13f94fbf99a3304e328869890c9e49c8d65"},
{file = "hiredis-2.2.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:339af17bb9817f8acb127247c79a99cad63db6738c0fb2aec9fa3d4f35d2a250"},
{file = "hiredis-2.2.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:57f73aa04d0b70ff436fb35fa7ea2b796aa7addbd7ebb8d1aa1f3d1b3e4439f1"},
{file = "hiredis-2.2.2-cp39-cp39-win32.whl", hash = "sha256:e97d4e650b8d933a1229f341db92b610fc52b8d752490235977b63b81fbbc2cb"},
{file = "hiredis-2.2.2-cp39-cp39-win_amd64.whl", hash = "sha256:8d43a7bba66a800279e33229a206861be09c279e261eaa8db4824e59465f4848"},
{file = "hiredis-2.2.2-pp37-pypy37_pp73-macosx_10_12_x86_64.whl", hash = "sha256:632d79fd02b03e8d9fbaebbe40bfe34b920c5d0a9c0ef6270752e0db85208175"},
{file = "hiredis-2.2.2-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3a5fefac31c84143782ec1ebc323c04e733a6e4bfebcef9907a34e47a465e648"},
{file = "hiredis-2.2.2-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5155bc1710df8e21aa48c9b2f4d4e13e4987e1efff363a1ef9c84fae2cc6c145"},
{file = "hiredis-2.2.2-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2f220b71235d2deab1b4b22681c8aee444720d973b80f1b86a4e2a85f6bcf1e1"},
{file = "hiredis-2.2.2-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:f1f1efbe9cc29a3af39cf7eed27225f951aed3f48a1149c7fb74529fb5ab86d4"},
{file = "hiredis-2.2.2-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1f1c44242c18b1f02e6d1162f133d65d00e09cc10d9165dccc78662def72abc2"},
{file = "hiredis-2.2.2-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e0f444d9062f7e487ef42bab2fb2e290f1704afcbca48ad3ec23de63eef0fda"},
{file = "hiredis-2.2.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac15e7e1efca51b4695e540c80c328accb352c9608da7c2df82d1fa1a3c539ef"},
{file = "hiredis-2.2.2-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:20cfbc469400669a5999aa34ccba3872a1e34490ec3d5c84e8c0752c27977b7c"},
{file = "hiredis-2.2.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:bae004a0b978bf62e38d0eef5ab9156f8101d01167b3ca7054bd0994b773e917"},
{file = "hiredis-2.2.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:a1ce725542133dbdda9e8704867ef52651886bd1ef568c6fd997a27404381985"},
{file = "hiredis-2.2.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6e6ea7532221c97fa6d79f7d19d452cd9d1141d759c54279cc4774ce24728f13"},
{file = "hiredis-2.2.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a7114961ed78d708142f6c6eb1d2ed65dc3da4b5ae8a4660ad889dd7fc891971"},
{file = "hiredis-2.2.2-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1b084fbc3e69f99865242f8e1ccd4ea2a34bf6a3983d015d61133377526c0ce2"},
{file = "hiredis-2.2.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2d1ba0799f3487294f72b2157944d5c3a4fb33c99e2d495d63eab98c7ec7234b"},
{file = "hiredis-2.2.2.tar.gz", hash = "sha256:9c270bd0567a9c60673284e000132f603bb4ecbcd707567647a68f85ef45c4d4"},
{file = "hiredis-2.2.3-cp310-cp310-macosx_10_12_universal2.whl", hash = "sha256:9a1a80a8fa767f2fdc3870316a54b84fe9fc09fa6ab6a2686783de6a228a4604"},
{file = "hiredis-2.2.3-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3f006c28c885deb99b670a5a66f367a175ab8955b0374029bad7111f5357dcd4"},
{file = "hiredis-2.2.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ffaf841546905d90ff189de7397aa56413b1ce5e54547f17a98f0ebf3a3b0a3b"},
{file = "hiredis-2.2.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1cadb0ac7ba3babfd804e425946bec9717b320564a1390f163a54af9365a720a"},
{file = "hiredis-2.2.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:33bc4721632ef9708fa44e5df0066053fccc8e65410a2c48573192517a533b48"},
{file = "hiredis-2.2.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:227c5b4bcb60f89008c275d596e4a7b6625a6b3c827b8a66ae582eace7051f71"},
{file = "hiredis-2.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:61995eb826009d99ed8590747bc0da683a5f4fbb4faa8788166bf3810845cd5c"},
{file = "hiredis-2.2.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f969edc851efe23010e0f53a64269f2629a9364135e9ec81c842e8b2277d0c1"},
{file = "hiredis-2.2.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d27e560eefb57914d742a837f1da98d3b29cb22eff013c8023b7cf52ae6e051d"},
{file = "hiredis-2.2.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:3759f4789ae1913b7df278dfc9e8749205b7a106f888cd2903d19461e24a7697"},
{file = "hiredis-2.2.3-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:c6cb613148422c523945cdb8b6bed617856f2602fd8750e33773ede2616e55d5"},
{file = "hiredis-2.2.3-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:1d274d5c511dfc03f83f997d3238eaa9b6ee3f982640979f509373cced891e98"},
{file = "hiredis-2.2.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:3b7fe075e91b9d9cff40eba4fb6a8eff74964d3979a39be9a9ef58b1b4cb3604"},
{file = "hiredis-2.2.3-cp310-cp310-win32.whl", hash = "sha256:77924b0d32fd1f493d3df15d9609ddf9d94c31a364022a6bf6b525ce9da75bea"},
{file = "hiredis-2.2.3-cp310-cp310-win_amd64.whl", hash = "sha256:dcb0569dd5bfe6004658cd0f229efa699a3169dcb4f77bd72e188adda302063d"},
{file = "hiredis-2.2.3-cp311-cp311-macosx_10_12_universal2.whl", hash = "sha256:d115790f18daa99b5c11a506e48923b630ef712e9e4b40482af942c3d40638b8"},
{file = "hiredis-2.2.3-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c3b8be557e08b234774925622e196f0ee36fe4eab66cd19df934d3efd8f3743"},
{file = "hiredis-2.2.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3f5446068197b35a11ccc697720c41879c8657e2e761aaa8311783aac84cef20"},
{file = "hiredis-2.2.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa17a3b22b3726d54d7af20394f65d4a1735a842a4e0f557dc67a90f6965c4bc"},
{file = "hiredis-2.2.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7df645b6b7800e8b748c217fbd6a4ca8361bcb9a1ae6206cc02377833ec8a1aa"},
{file = "hiredis-2.2.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2fb9300959a0048138791f3d68359d61a788574ec9556bddf1fec07f2dbc5320"},
{file = "hiredis-2.2.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2d7e459fe7313925f395148d36d9b7f4f8dac65be06e45d7af356b187cef65fc"},
{file = "hiredis-2.2.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8eceffca3941775b646cd585cd19b275d382de43cc3327d22f7c75d7b003d481"},
{file = "hiredis-2.2.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b17baf702c6e5b4bb66e1281a3efbb1d749c9d06cdb92b665ad81e03118f78fc"},
{file = "hiredis-2.2.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4e43e2b5acaad09cf48c032f7e4926392bb3a3f01854416cf6d82ebff94d5467"},
{file = "hiredis-2.2.3-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:a7205497d7276a81fe92951a29616ef96562ed2f91a02066f72b6f93cb34b40e"},
{file = "hiredis-2.2.3-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:126623b03c31cb6ac3e0d138feb6fcc36dd43dd34fc7da7b7a0c38b5d75bc896"},
{file = "hiredis-2.2.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:071c5814b850574036506a8118034f97c3cbf2fe9947ff45a27b07a48da56240"},
{file = "hiredis-2.2.3-cp311-cp311-win32.whl", hash = "sha256:d1be9e30e675f5bc1cb534633324578f6f0944a1bcffe53242cf632f554f83b6"},
{file = "hiredis-2.2.3-cp311-cp311-win_amd64.whl", hash = "sha256:b9a7c987e161e3c58f992c63b7e26fea7fe0777f3b975799d23d65bbb8cb5899"},
{file = "hiredis-2.2.3-cp37-cp37m-macosx_10_12_x86_64.whl", hash = "sha256:f2dcb8389fa3d453927b1299f46bdb38473c293c8269d5c777d33ea0e526b610"},
{file = "hiredis-2.2.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2df98f5e071320c7d84e8bd07c0542acdd0a7519307fc31774d60e4b842ec4f"},
{file = "hiredis-2.2.3-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:61a72e4a523cdfc521762137559c08dfa360a3caef63620be58c699d1717dac1"},
{file = "hiredis-2.2.3-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c9b9e5bde7030cae83aa900b5bd660decc65afd2db8c400f3c568c815a47ca2a"},
{file = "hiredis-2.2.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cd2614f17e261f72efc2f19f5e5ff2ee19e2296570c0dcf33409e22be30710de"},
{file = "hiredis-2.2.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:46525fbd84523cac75af5bf524bc74aaac848beaf31b142d2df8a787d9b4bbc4"},
{file = "hiredis-2.2.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d1a4ce40ba11da9382c14da31f4f9e88c18f7d294f523decd0fadfb81f51ad18"},
{file = "hiredis-2.2.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:5cda592405bbd29d53942e0389dc3fa77b49c362640210d7e94a10c14a677d4d"},
{file = "hiredis-2.2.3-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:5e6674a017629284ef373b50496d9fb1a89b85a20a7fa100ecd109484ec748e5"},
{file = "hiredis-2.2.3-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:e62ec131816c6120eff40dffe43424e140264a15fa4ab88c301bd6a595913af3"},
{file = "hiredis-2.2.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:17e938d9d3ee92e1adbff361706f1c36cc60eeb3e3eeca7a3a353eae344f4c91"},
{file = "hiredis-2.2.3-cp37-cp37m-win32.whl", hash = "sha256:95d2305fd2a7b179cacb48b10f618872fc565c175f9f62b854e8d1acac3e8a9e"},
{file = "hiredis-2.2.3-cp37-cp37m-win_amd64.whl", hash = "sha256:8f9dbe12f011a9b784f58faecc171d22465bb532c310bd588d769ba79a59ef5a"},
{file = "hiredis-2.2.3-cp38-cp38-macosx_10_12_universal2.whl", hash = "sha256:5a4bcef114fc071d5f52c386c47f35aae0a5b43673197b9288a15b584da8fa3a"},
{file = "hiredis-2.2.3-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:232d0a70519865741ba56e1dfefd160a580ae78c30a1517bad47b3cf95a3bc7d"},
{file = "hiredis-2.2.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9076ce8429785c85f824650735791738de7143f61f43ae9ed83e163c0ca0fa44"},
{file = "hiredis-2.2.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ec58fb7c2062f835595c12f0f02dcda76d0eb0831423cc191d1e18c9276648de"},
{file = "hiredis-2.2.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7f2b34a6444b8f9c1e9f84bd2c639388e5d14f128afd14a869dfb3d9af893aa2"},
{file = "hiredis-2.2.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:818dfd310aa1020a13cd08ee48e116dd8c3bb2e23b8161f8ac4df587dd5093d7"},
{file = "hiredis-2.2.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96d9ea6c8d4cbdeee2e0d43379ce2881e4af0454b00570677c59f33f2531cd38"},
{file = "hiredis-2.2.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f1eadbcd3de55ac42310ff82550d3302cb4efcd4e17d76646a17b6e7004bb42b"},
{file = "hiredis-2.2.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:477c34c4489666dc73cb5e89dafe2617c3e13da1298917f73d55aac4696bd793"},
{file = "hiredis-2.2.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:14824e457e4f5cda685c3345d125da13949bcf3bb1c88eb5d248c8d2c3dee08f"},
{file = "hiredis-2.2.3-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:9cd32326dfa6ce87edf754153b0105aca64486bebe93b9600ccff74fa0b224df"},
{file = "hiredis-2.2.3-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:51341e70b467004dcbec3a6ce8c478d2d6241e0f6b01e4c56764afd5022e1e9d"},
{file = "hiredis-2.2.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2443659c76b226267e2a04dbbb21bc2a3f91aa53bdc0c22964632753ae43a247"},
{file = "hiredis-2.2.3-cp38-cp38-win32.whl", hash = "sha256:4e3e3e31423f888d396b1fc1f936936e52af868ac1ec17dd15e3eeba9dd4de24"},
{file = "hiredis-2.2.3-cp38-cp38-win_amd64.whl", hash = "sha256:20f509e3a1a20d6e5f5794fc37ceb21f70f409101fcfe7a8bde783894d51b369"},
{file = "hiredis-2.2.3-cp39-cp39-macosx_10_12_universal2.whl", hash = "sha256:d20891e3f33803b26d54c77fd5745878497091e33f4bbbdd454cf6e71aee8890"},
{file = "hiredis-2.2.3-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:50171f985e17970f87d5a29e16603d1e5b03bdbf5c2691a37e6c912942a6b657"},
{file = "hiredis-2.2.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9944a2cac25ffe049a7e89f306e11b900640837d1ef38d9be0eaa4a4e2b73a52"},
{file = "hiredis-2.2.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a5c8019ff94988d56eb49b15de76fe83f6b42536d76edeb6565dbf7fe14b973"},
{file = "hiredis-2.2.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a286ded34eb16501002e3713b3130c987366eee2ba0d58c33c72f27778e31676"},
{file = "hiredis-2.2.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b3e974ad15eb32b1f537730dea70b93a4c3db7b026de3ad2b59da49c6f7454d"},
{file = "hiredis-2.2.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:08415ea74c1c29b9d6a4ca3dd0e810dc1af343c1d1d442e15ba133b11ab5be6a"},
{file = "hiredis-2.2.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7e17d04ea58ab8cf3f2dc52e875db16077c6357846006780086fff3189fb199d"},
{file = "hiredis-2.2.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:6ccdcb635dae85b006592f78e32d97f4bc7541cb27829d505f9c7fefcef48298"},
{file = "hiredis-2.2.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:69536b821dd1bc78058a6e7541743f8d82bf2d981b91280b14c4daa6cdc7faba"},
{file = "hiredis-2.2.3-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:3753df5f873d473f055e1f8837bfad0bd3b277c86f3c9bf058c58f14204cd901"},
{file = "hiredis-2.2.3-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:6f88cafe46612b6fa68e6dea49e25bebf160598bba00101caa51cc8c1f18d597"},
{file = "hiredis-2.2.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:33ee3ea5cad3a8cb339352cd230b411eb437a2e75d7736c4899acab32056ccdb"},
{file = "hiredis-2.2.3-cp39-cp39-win32.whl", hash = "sha256:b4f3d06dc16671b88a13ae85d8ca92534c0b637d59e49f0558d040a691246422"},
{file = "hiredis-2.2.3-cp39-cp39-win_amd64.whl", hash = "sha256:4f674e309cd055ee7a48304ceb8cf43265d859faf4d7d01d270ce45e976ae9d3"},
{file = "hiredis-2.2.3-pp37-pypy37_pp73-macosx_10_12_x86_64.whl", hash = "sha256:8f280ab4e043b089777b43b4227bdc2035f88da5072ab36588e0ccf77d45d058"},
{file = "hiredis-2.2.3-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:15c2a551f3b8a26f7940d6ee10b837810201754b8d7e6f6b1391655370882c5a"},
{file = "hiredis-2.2.3-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60c4e3c258eafaab21b174b17270a0cc093718d61cdbde8c03f85ec4bf835343"},
{file = "hiredis-2.2.3-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc36a9dded458d4e37492fe3e619c6c83caae794d26ad925adbce61d592f8428"},
{file = "hiredis-2.2.3-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:4ed68a3b1ccb4313d2a42546fd7e7439ad4745918a48b6c9bcaa61e1e3e42634"},
{file = "hiredis-2.2.3-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3bf4b5bae472630c229518e4a814b1b68f10a3d9b00aeaec45f1a330f03a0251"},
{file = "hiredis-2.2.3-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:33a94d264e6e12a79d9bb8af333b01dc286b9f39c99072ab5fef94ce1f018e17"},
{file = "hiredis-2.2.3-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3fa6811a618653164f918b891a0fa07052bd71a799defa5c44d167cac5557b26"},
{file = "hiredis-2.2.3-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:af33f370be90b48bbaf0dab32decbdcc522b1fa95d109020a963282086518a8e"},
{file = "hiredis-2.2.3-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:b9953d87418ac228f508d93898ab572775e4d3b0eeb886a1a7734553bcdaf291"},
{file = "hiredis-2.2.3-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5e7bb4dd524f50b71c20ef5a12bd61da9b463f8894b18a06130942fe31509881"},
{file = "hiredis-2.2.3-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:89a258424158eb8b3ed9f65548d68998da334ef155d09488c5637723eb1cd697"},
{file = "hiredis-2.2.3-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f4a65276f6ecdebe75f2a53f578fbc40e8d2860658420d5e0611c56bbf5054c"},
{file = "hiredis-2.2.3-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:334f2738700b20faa04a0d813366fb16ed17287430a6b50584161d5ad31ca6d7"},
{file = "hiredis-2.2.3-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:d194decd9608f11c777946f596f31d5aacad13972a0a87829ae1e6f2d26c1885"},
{file = "hiredis-2.2.3.tar.gz", hash = "sha256:e75163773a309e56a9b58165cf5a50e0f84b755f6ff863b2c01a38918fe92daa"},
]
[[package]]
@ -1973,13 +1973,13 @@ plugins = ["importlib-metadata"]
[[package]]
name = "pyicu"
version = "2.10.2"
version = "2.11"
description = "Python extension wrapping the ICU C++ API"
category = "main"
optional = true
python-versions = "*"
files = [
{file = "PyICU-2.10.2.tar.gz", hash = "sha256:0c3309eea7fab6857507ace62403515b60fe096cbfb4f90d14f55ff75c5441c1"},
{file = "PyICU-2.11.tar.gz", hash = "sha256:3ab531264cfe9132b3d2ac5d708da9a4649d25f6e6813730ac88cf040a08a844"},
]
[[package]]
@ -2323,29 +2323,29 @@ jupyter = ["ipywidgets (>=7.5.1,<9)"]
[[package]]
name = "ruff"
version = "0.0.259"
version = "0.0.265"
description = "An extremely fast Python linter, written in Rust."
category = "dev"
optional = false
python-versions = ">=3.7"
files = [
{file = "ruff-0.0.259-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:f3938dc45e2a3f818e9cbd53007265c22246fbfded8837b2c563bf0ebde1a226"},
{file = "ruff-0.0.259-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:22e1e35bf5f12072cd644d22afd9203641ccf258bc14ff91aa1c43dc14f6047d"},
{file = "ruff-0.0.259-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d2fb20e89e85d147c85caa807707a1488bccc1f3854dc3d53533e89b52a0c5ff"},
{file = "ruff-0.0.259-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:49e903bcda19f6bb0725a962c058eb5d61f40d84ef52ed53b61939b69402ab4e"},
{file = "ruff-0.0.259-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:71f0ef1985e9a6696fa97da8459917fa34bdaa2c16bd33bd5edead585b7d44f7"},
{file = "ruff-0.0.259-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:7cfef26619cba184d59aa7fa17b48af5891d51fc0b755a9bc533478a10d4d066"},
{file = "ruff-0.0.259-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:79b02fa17ec1fd8d306ae302cb47fb614b71e1f539997858243769bcbe78c6d9"},
{file = "ruff-0.0.259-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:428507fb321b386dda70d66cd1a8aa0abf51d7c197983d83bb9e4fa5ee60300b"},
{file = "ruff-0.0.259-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c5fbaea9167f1852757f02133e5daacdb8c75b3431343205395da5b10499927a"},
{file = "ruff-0.0.259-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:40ae87f2638484b7e8a7567b04a7af719f1c484c5bf132038b702bb32e1f6577"},
{file = "ruff-0.0.259-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:29e2b77b7d5da6a7dd5cf9b738b511355c5734ece56f78e500d4b5bffd58c1a0"},
{file = "ruff-0.0.259-py3-none-musllinux_1_2_i686.whl", hash = "sha256:5b3c1beacf6037e7f0781d4699d9a2dd4ba2462f475be5b1f45cf84c4ba3c69d"},
{file = "ruff-0.0.259-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:daaea322e7e85f4c13d82be9536309e1c4b8b9851bb0cbc7eeb15d490fd46bf9"},
{file = "ruff-0.0.259-py3-none-win32.whl", hash = "sha256:38704f151323aa5858370a2f792e122cc25e5d1aabe7d42ceeab83da18f0b456"},
{file = "ruff-0.0.259-py3-none-win_amd64.whl", hash = "sha256:aa9449b898287e621942cc71b9327eceb8f0c357e4065fecefb707ef2d978df8"},
{file = "ruff-0.0.259-py3-none-win_arm64.whl", hash = "sha256:e4f39e18702de69faaaee3969934b92d7467285627f99a5b6ecd55a7d9f5d086"},
{file = "ruff-0.0.259.tar.gz", hash = "sha256:8b56496063ab3bfdf72339a5fbebb8bd46e5c5fee25ef11a9f03b208fa0562ec"},
{file = "ruff-0.0.265-py3-none-macosx_10_7_x86_64.whl", hash = "sha256:30ddfe22de6ce4eb1260408f4480bbbce998f954dbf470228a21a9b2c45955e4"},
{file = "ruff-0.0.265-py3-none-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl", hash = "sha256:a11bd0889e88d3342e7bc514554bb4461bf6cc30ec115821c2425cfaac0b1b6a"},
{file = "ruff-0.0.265-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2a9b38bdb40a998cbc677db55b6225a6c4fadcf8819eb30695e1b8470942426b"},
{file = "ruff-0.0.265-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a8b44a245b60512403a6a03a5b5212da274d33862225c5eed3bcf12037eb19bb"},
{file = "ruff-0.0.265-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b279fa55ea175ef953208a6d8bfbcdcffac1c39b38cdb8c2bfafe9222add70bb"},
{file = "ruff-0.0.265-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:5028950f7af9b119d43d91b215d5044976e43b96a0d1458d193ef0dd3c587bf8"},
{file = "ruff-0.0.265-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4057eb539a1d88eb84e9f6a36e0a999e0f261ed850ae5d5817e68968e7b89ed9"},
{file = "ruff-0.0.265-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d586e69ab5cbf521a1910b733412a5735936f6a610d805b89d35b6647e2a66aa"},
{file = "ruff-0.0.265-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa17b13cd3f29fc57d06bf34c31f21d043735cc9a681203d634549b0e41047d1"},
{file = "ruff-0.0.265-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:9ac13b11d9ad3001de9d637974ec5402a67cefdf9fffc3929ab44c2fcbb850a1"},
{file = "ruff-0.0.265-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:62a9578b48cfd292c64ea3d28681dc16b1aa7445b7a7709a2884510fc0822118"},
{file = "ruff-0.0.265-py3-none-musllinux_1_2_i686.whl", hash = "sha256:d0f9967f84da42d28e3d9d9354cc1575f96ed69e6e40a7d4b780a7a0418d9409"},
{file = "ruff-0.0.265-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:1d5a8de2fbaf91ea5699451a06f4074e7a312accfa774ad9327cde3e4fda2081"},
{file = "ruff-0.0.265-py3-none-win32.whl", hash = "sha256:9e9db5ccb810742d621f93272e3cc23b5f277d8d00c4a79668835d26ccbe48dd"},
{file = "ruff-0.0.265-py3-none-win_amd64.whl", hash = "sha256:f54facf286103006171a00ce20388d88ed1d6732db3b49c11feb9bf3d46f90e9"},
{file = "ruff-0.0.265-py3-none-win_arm64.whl", hash = "sha256:c78470656e33d32ddc54e8482b1b0fc6de58f1195586731e5ff1405d74421499"},
{file = "ruff-0.0.265.tar.gz", hash = "sha256:53c17f0dab19ddc22b254b087d1381b601b155acfa8feed514f0d6a413d0ab3a"},
]
[[package]]
@ -2382,19 +2382,19 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]]
name = "sentry-sdk"
version = "1.19.1"
version = "1.22.1"
description = "Python client for Sentry (https://sentry.io)"
category = "main"
optional = true
python-versions = "*"
files = [
{file = "sentry-sdk-1.19.1.tar.gz", hash = "sha256:7ae78bd921981a5010ab540d6bdf3b793659a4db8cccf7f16180702d48a80d84"},
{file = "sentry_sdk-1.19.1-py2.py3-none-any.whl", hash = "sha256:885a11c69df23e53eb281d003b9ff15a5bdfa43d8a2a53589be52104a1b4582f"},
{file = "sentry-sdk-1.22.1.tar.gz", hash = "sha256:052dff5069c6f0d836ee014323576824a9b40836fc003fb12489a1f19c60a3c9"},
{file = "sentry_sdk-1.22.1-py2.py3-none-any.whl", hash = "sha256:c6c6946f8c927adb00af1c5ab6921df38775b2199b9003816d5935a1310352d5"},
]
[package.dependencies]
certifi = "*"
urllib3 = {version = ">=1.26.11", markers = "python_version >= \"3.6\""}
urllib3 = {version = ">=1.26.11,<2.0.0", markers = "python_version >= \"3.6\""}
[package.extras]
aiohttp = ["aiohttp (>=3.5)"]
@ -3109,29 +3109,29 @@ files = [
[[package]]
name = "types-requests"
version = "2.29.0.0"
version = "2.30.0.0"
description = "Typing stubs for requests"
category = "dev"
optional = false
python-versions = "*"
files = [
{file = "types-requests-2.29.0.0.tar.gz", hash = "sha256:c86f4a955d943d2457120dbe719df24ef0924e11177164d10a0373cf311d7b4d"},
{file = "types_requests-2.29.0.0-py3-none-any.whl", hash = "sha256:4cf6e323e856c779fbe8815bb977a5bf5d6c5034713e4c17ff2a9a20610f5b27"},
{file = "types-requests-2.30.0.0.tar.gz", hash = "sha256:dec781054324a70ba64430ae9e62e7e9c8e4618c185a5cb3f87a6738251b5a31"},
{file = "types_requests-2.30.0.0-py3-none-any.whl", hash = "sha256:c6cf08e120ca9f0dc4fa4e32c3f953c3fba222bcc1db6b97695bce8da1ba9864"},
]
[package.dependencies]
types-urllib3 = "<1.27"
types-urllib3 = "*"
[[package]]
name = "types-setuptools"
version = "67.6.0.5"
version = "67.7.0.1"
description = "Typing stubs for setuptools"
category = "dev"
optional = false
python-versions = "*"
files = [
{file = "types-setuptools-67.6.0.5.tar.gz", hash = "sha256:3a708e66c7bdc620e4d0439f344c750c57a4340c895a4c3ed2d0fc4ae8eb9962"},
{file = "types_setuptools-67.6.0.5-py3-none-any.whl", hash = "sha256:dae5a4a659dbb6dba57773440f6e2dbdd8ef282dc136a174a8a59bd33d949945"},
{file = "types-setuptools-67.7.0.1.tar.gz", hash = "sha256:980a2651b2b019809817e1585071596b87fbafcb54433ff3b12445461db23790"},
{file = "types_setuptools-67.7.0.1-py3-none-any.whl", hash = "sha256:471a4ecf6984ffada63ffcfa884bfcb62718bd2d1a1acf8ee5513ec99789ed5e"},
]
[[package]]
@ -3427,4 +3427,4 @@ user-search = ["pyicu"]
[metadata]
lock-version = "2.0"
python-versions = "^3.7.1"
content-hash = "102eed4faa13eab195555ea070f235acd1e3f0ff9cf028afcac6c51b3e409071"
content-hash = "ef3a16dd66177f7141239e1a2d3e07cc14c08f1e4e0c5127184d022bc062da52"

View File

@ -89,7 +89,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.83.0rc1"
version = "1.83.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@ -311,7 +311,7 @@ all = [
# We pin black so that our tests don't start failing on new releases.
isort = ">=5.10.1"
black = ">=22.3.0"
ruff = "0.0.259"
ruff = "0.0.265"
# Typechecking
mypy = "*"
@ -368,7 +368,7 @@ furo = ">=2022.12.7,<2024.0.0"
# system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
requires = ["poetry-core>=1.0.0,<=1.5.0", "setuptools_rust>=1.3,<=1.5.2"]
requires = ["poetry-core>=1.2.0,<=1.5.0", "setuptools_rust>=1.3,<=1.6.0"]
build-backend = "poetry.core.masonry.api"

View File

@ -57,7 +57,7 @@ pub const BASE_PREPEND_OVERRIDE_RULES: &[PushRule] = &[PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.master"),
priority_class: 5,
conditions: Cow::Borrowed(&[]),
actions: Cow::Borrowed(&[Action::DontNotify]),
actions: Cow::Borrowed(&[]),
default: true,
default_enabled: false,
}];
@ -88,7 +88,7 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
pattern: Cow::Borrowed("m.notice"),
},
))]),
actions: Cow::Borrowed(&[Action::DontNotify]),
actions: Cow::Borrowed(&[]),
default: true,
default_enabled: true,
},
@ -122,7 +122,7 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
pattern: Cow::Borrowed("m.room.member"),
},
))]),
actions: Cow::Borrowed(&[Action::DontNotify]),
actions: Cow::Borrowed(&[]),
default: true,
default_enabled: true,
},

View File

@ -140,7 +140,7 @@ impl PushRuleEvaluator {
/// name.
///
/// Returns the set of actions, if any, that match (filtering out any
/// `dont_notify` actions).
/// `dont_notify` and `coalesce` actions).
pub fn run(
&self,
push_rules: &FilteredPushRules,
@ -198,8 +198,9 @@ impl PushRuleEvaluator {
let actions = push_rule
.actions
.iter()
// Filter out "dont_notify" actions, as we don't store them.
.filter(|a| **a != Action::DontNotify)
// Filter out "dont_notify" and "coalesce" actions, as we don't store them
// (since they result in no action by the pushers).
.filter(|a| **a != Action::DontNotify && **a != Action::Coalesce)
.cloned()
.collect();

View File

@ -164,11 +164,13 @@ impl PushRule {
/// The "action" Synapse should perform for a matching push rule.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Action {
DontNotify,
Notify,
Coalesce,
SetTweak(SetTweak),
// Legacy actions that should be understood, but are equivalent to no-ops.
DontNotify,
Coalesce,
// An unrecognized custom action.
Unknown(Value),
}

View File

@ -11,6 +11,11 @@
# filepath of a local Complement checkout or by setting the COMPLEMENT_REF
# environment variable to pull a different branch or commit.
#
# To use the 'podman' command instead of 'docker', set the PODMAN environment
# variable. Example:
#
# PODMAN=1 ./complement.sh
#
# By default Synapse is run in monolith mode. This can be overridden by
# setting the WORKERS environment variable.
#
@ -30,7 +35,6 @@
# Exit if a line returns a non-zero exit code
set -e
# Helper to emit annotations that collapse portions of the log in GitHub Actions
echo_if_github() {
if [[ -n "$GITHUB_WORKFLOW" ]]; then
@ -100,6 +104,16 @@ done
# enable buildkit for the docker builds
export DOCKER_BUILDKIT=1
# Determine whether to use the docker or podman container runtime.
if [ -n "$PODMAN" ]; then
export CONTAINER_RUNTIME=podman
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
export BUILDAH_FORMAT=docker
export COMPLEMENT_HOSTNAME_RUNNING_COMPLEMENT=host.containers.internal
else
export CONTAINER_RUNTIME=docker
fi
# Change to the repository root
cd "$(dirname $0)/.."
@ -126,16 +140,16 @@ if [ -n "$use_editable_synapse" ]; then
editable_mount="$(realpath .):/editable-src:z"
if [ -n "$rebuild_editable_synapse" ]; then
unset skip_docker_build
elif docker inspect complement-synapse-editable &>/dev/null; then
elif $CONTAINER_RUNTIME inspect complement-synapse-editable &>/dev/null; then
# complement-synapse-editable already exists: see if we can still use it:
# - The Rust module must still be importable; it will fail to import if the Rust source has changed.
# - The Poetry lock file must be the same (otherwise we assume dependencies have changed)
# First set up the module in the right place for an editable installation.
docker run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
if (docker run --rm -v $editable_mount --entrypoint 'python' complement-synapse-editable -c 'import synapse.synapse_rust' \
&& docker run --rm -v $editable_mount --entrypoint 'diff' complement-synapse-editable --brief /editable-src/poetry.lock /poetry.lock.bak); then
if ($CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'python' complement-synapse-editable -c 'import synapse.synapse_rust' \
&& $CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'diff' complement-synapse-editable --brief /editable-src/poetry.lock /poetry.lock.bak); then
skip_docker_build=1
else
echo "Editable Synapse image is stale. Will rebuild."
@ -149,25 +163,25 @@ if [ -z "$skip_docker_build" ]; then
# Build a special image designed for use in development with editable
# installs.
docker build -t synapse-editable \
$CONTAINER_RUNTIME build -t synapse-editable \
-f "docker/editable.Dockerfile" .
docker build -t synapse-workers-editable \
$CONTAINER_RUNTIME build -t synapse-workers-editable \
--build-arg FROM=synapse-editable \
-f "docker/Dockerfile-workers" .
docker build -t complement-synapse-editable \
$CONTAINER_RUNTIME build -t complement-synapse-editable \
--build-arg FROM=synapse-workers-editable \
-f "docker/complement/Dockerfile" "docker/complement"
# Prepare the Rust module
docker run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
else
# Build the base Synapse image from the local checkout
echo_if_github "::group::Build Docker image: matrixdotorg/synapse"
docker build -t matrixdotorg/synapse \
$CONTAINER_RUNTIME build -t matrixdotorg/synapse \
--build-arg TEST_ONLY_SKIP_DEP_HASH_VERIFICATION \
--build-arg TEST_ONLY_IGNORE_POETRY_LOCKFILE \
-f "docker/Dockerfile" .
@ -175,12 +189,12 @@ if [ -z "$skip_docker_build" ]; then
# Build the workers docker image (from the base Synapse image we just built).
echo_if_github "::group::Build Docker image: matrixdotorg/synapse-workers"
docker build -t matrixdotorg/synapse-workers -f "docker/Dockerfile-workers" .
$CONTAINER_RUNTIME build -t matrixdotorg/synapse-workers -f "docker/Dockerfile-workers" .
echo_if_github "::endgroup::"
# Build the unified Complement image (from the worker Synapse image we just built).
echo_if_github "::group::Build Docker image: complement/Dockerfile"
docker build -t complement-synapse \
$CONTAINER_RUNTIME build -t complement-synapse \
-f "docker/complement/Dockerfile" "docker/complement"
echo_if_github "::endgroup::"

View File

@ -39,7 +39,7 @@ class AuthBlocking:
self._mau_limits_reserved_threepids = (
hs.config.server.mau_limits_reserved_threepids
)
self._server_name = hs.hostname
self._is_mine_server_name = hs.is_mine_server_name
self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
async def check_auth_blocking(
@ -77,7 +77,7 @@ class AuthBlocking:
if requester:
if requester.authenticated_entity.startswith("@"):
user_id = requester.authenticated_entity
elif requester.authenticated_entity == self._server_name:
elif self._is_mine_server_name(requester.authenticated_entity):
# We never block the server from doing actions on behalf of
# users.
return
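
For illustration, the new `hs.is_mine_server_name` helper that this diff threads through many classes is a named wrapper around the string comparison it replaces. A minimal sketch, assuming it simply compares against the configured hostname (the helper's definition is not part of this excerpt):

    def is_mine_server_name(self, server_name: str) -> bool:
        # Whether a server name refers to this homeserver. Centralising
        # this avoids scattered `x == self.server_name` comparisons.
        return server_name == self.hostname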

View File

@ -257,6 +257,7 @@ class AccountDataTypes:
DIRECT: Final = "m.direct"
IGNORED_USER_LIST: Final = "m.ignored_user_list"
TAG: Final = "m.tag"
PUSH_RULES: Final = "m.push_rules"
class HistoryVisibility:

View File

@ -108,10 +108,10 @@ class Codes(str, Enum):
USER_AWAITING_APPROVAL = "ORG.MATRIX.MSC3866_USER_AWAITING_APPROVAL"
AS_PING_URL_NOT_SET = "FI.MAU.MSC2659_URL_NOT_SET"
AS_PING_BAD_STATUS = "FI.MAU.MSC2659_BAD_STATUS"
AS_PING_CONNECTION_TIMEOUT = "FI.MAU.MSC2659_CONNECTION_TIMEOUT"
AS_PING_CONNECTION_FAILED = "FI.MAU.MSC2659_CONNECTION_FAILED"
AS_PING_URL_NOT_SET = "M_URL_NOT_SET"
AS_PING_BAD_STATUS = "M_BAD_STATUS"
AS_PING_CONNECTION_TIMEOUT = "M_CONNECTION_TIMEOUT"
AS_PING_CONNECTION_FAILED = "M_CONNECTION_FAILED"
# Attempt to send a second annotation with the same event type & annotation key
# MSC2677

View File

@ -21,6 +21,7 @@ import socket
import sys
import traceback
import warnings
from textwrap import indent
from typing import (
TYPE_CHECKING,
Any,
@ -64,7 +65,6 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig, ManholeConfig, TCPListenerConfig
from synapse.crypto import context_factory
from synapse.events.presence_router import load_legacy_presence_router
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.http.site import SynapseSite
from synapse.logging.context import PreserveLoggingContext
@ -73,6 +73,9 @@ from synapse.metrics import install_gc_manager, register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.module_api.callbacks.spamchecker_callbacks import load_legacy_spam_checkers
from synapse.module_api.callbacks.third_party_event_rules_callbacks import (
load_legacy_third_party_event_rules,
)
from synapse.types import ISynapseReactor
from synapse.util import SYNAPSE_VERSION
from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
@ -210,8 +213,12 @@ def handle_startup_exception(e: Exception) -> NoReturn:
# Exceptions that occur between setting up the logging and forking or starting
# the reactor are written to the logs, followed by a summary to stderr.
logger.exception("Exception during startup")
error_string = "".join(traceback.format_exception(e))
indented_error_string = indent(error_string, " ")
quit_with_error(
f"Error during initialisation:\n {e}\nThere may be more information in the logs."
f"Error during initialisation:\n{indented_error_string}\nThere may be more information in the logs."
)
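
The indentation change above makes the whole traceback read as a block under the summary line. A self-contained sketch of the same pattern (the function name here is illustrative; `format_exception(e)` with a bare exception needs Python 3.10+):

    import traceback
    from textwrap import indent

    def format_startup_error(e: Exception) -> str:
        # Join the traceback lines, then indent each one so the error
        # reads as a single block under the summary line.
        error_string = "".join(traceback.format_exception(e))
        indented_error_string = indent(error_string, " ")
        return f"Error during initialisation:\n{indented_error_string}"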

View File

@ -326,7 +326,7 @@ class ApplicationServiceApi(SimpleHttpClient):
assert service.hs_token is not None
await self.post_json_get_json(
uri=f"{service.url}{APP_SERVICE_UNSTABLE_PREFIX}/fi.mau.msc2659/ping",
uri=f"{service.url}{APP_SERVICE_PREFIX}/ping",
post_json={"transaction_id": txn_id},
headers={"Authorization": [f"Bearer {service.hs_token}"]},
)

View File

@ -189,9 +189,6 @@ class ExperimentalConfig(Config):
# MSC3967: Do not require UIA when first uploading cross signing keys
self.msc3967_enabled = experimental.get("msc3967_enabled", False)
# MSC2659: Application service ping endpoint
self.msc2659_enabled = experimental.get("msc2659_enabled", False)
# MSC3981: Recurse relations
self.msc3981_recurse_relations = experimental.get(
"msc3981_recurse_relations", False
@ -199,3 +196,11 @@ class ExperimentalConfig(Config):
# MSC3970: Scope transaction IDs to devices
self.msc3970_enabled = experimental.get("msc3970_enabled", False)
# MSC4009: E.164 Matrix IDs
self.msc4009_e164_mxids = experimental.get("msc4009_e164_mxids", False)
# MSC4010: Do not allow setting m.push_rules account data.
self.msc4010_push_rules_account_data = experimental.get(
"msc4010_push_rules_account_data", False
)

View File

@ -42,11 +42,17 @@ class PushConfig(Config):
# Now check for the one in the 'email' section and honour it,
# with a warning.
push_config = config.get("email") or {}
redact_content = push_config.get("redact_content")
email_push_config = config.get("email") or {}
redact_content = email_push_config.get("redact_content")
if redact_content is not None:
print(
"The 'email.redact_content' option is deprecated: "
"please set push.include_content instead"
)
self.push_include_content = not redact_content
# Whether to apply a random delay to outbound push.
self.push_jitter_delay_ms = None
push_jitter_delay = push_config.get("jitter_delay", None)
if push_jitter_delay:
self.push_jitter_delay_ms = self.parse_duration(push_jitter_delay)

View File

@ -35,3 +35,9 @@ class RedisConfig(Config):
self.redis_port = redis_config.get("port", 6379)
self.redis_dbid = redis_config.get("dbid", None)
self.redis_password = redis_config.get("password")
self.redis_use_tls = redis_config.get("use_tls", False)
self.redis_certificate = redis_config.get("certificate_file", None)
self.redis_private_key = redis_config.get("private_key_file", None)
self.redis_ca_file = redis_config.get("ca_file", None)
self.redis_ca_path = redis_config.get("ca_path", None)
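
A minimal sketch of turning the new redis_* attributes into a TLS context. This uses the stdlib ssl module for brevity; Synapse's replication layer actually wires these through Twisted's TLS machinery, so treat the helper as illustrative:

    import ssl
    from typing import Optional

    def build_redis_ssl_context(cfg: "RedisConfig") -> Optional[ssl.SSLContext]:
        # Hypothetical helper: returns None when TLS is disabled.
        if not cfg.redis_use_tls:
            return None
        # Trust anchors from ca_file/ca_path, falling back to system CAs.
        ctx = ssl.create_default_context(
            cafile=cfg.redis_ca_file, capath=cfg.redis_ca_path
        )
        if cfg.redis_certificate:
            # Present a client certificate if one is configured.
            ctx.load_cert_chain(
                certfile=cfg.redis_certificate, keyfile=cfg.redis_private_key
            )
        return ctx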

View File

@ -137,6 +137,10 @@ class ContentRepositoryConfig(Config):
self.max_image_pixels = self.parse_size(config.get("max_image_pixels", "32M"))
self.max_spider_size = self.parse_size(config.get("max_spider_size", "10M"))
self.prevent_media_downloads_from = config.get(
"prevent_media_downloads_from", []
)
self.media_store_path = self.ensure_directory(
config.get("media_store_path", "media_store")
)
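
A sketch of how the media repository could consult the new `prevent_media_downloads_from` list before fetching remote media; the function name and the response convention are illustrative, not the code added elsewhere in this changeset:

    from typing import Collection

    def is_media_download_blocked(origin: str, blocked: Collection[str]) -> bool:
        # Refuse to fetch remote media whose origin server is on the
        # configured deny-list.
        return origin in blocked

    # Usage sketch: check before any remote fetch, and respond 404 if
    # is_media_download_blocked(server_name, config.prevent_media_downloads_from).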

View File

@ -75,3 +75,7 @@ class RoomConfig(Config):
% preset
)
# We validate the actual overrides when we try to apply them.
# When enabled, users will forget rooms when they leave them, whether via an
# explicit leave, a kick or a ban.
self.forget_on_leave = config.get("forget_rooms_on_leave", False)

View File

@ -39,6 +39,19 @@ The '%s' configuration option is deprecated and will be removed in a future
Synapse version. Please use ``%s: name_of_worker`` instead.
"""
_MISSING_MAIN_PROCESS_INSTANCE_MAP_DATA = """
Missing data for a worker to connect to the main process. Please include '%s' in the
`instance_map` declared in your shared yaml configuration, or optionally (as a deprecated
solution) in every worker's yaml as various `worker_replication_*` settings as defined
in the workers documentation here:
`https://matrix-org.github.io/synapse/latest/workers.html#worker-configuration`
"""
# This allows for a handy knob when it's time to change from 'master' to
# something with less 'history'
MAIN_PROCESS_INSTANCE_NAME = "master"
# Use this to adjust what the main process is known as in the yaml instance_map
MAIN_PROCESS_INSTANCE_MAP_NAME = "main"
logger = logging.getLogger(__name__)
@ -161,27 +174,15 @@ class WorkerConfig(Config):
raise ConfigError("worker_log_config must be a string")
self.worker_log_config = worker_log_config
# The host used to connect to the main synapse
self.worker_replication_host = config.get("worker_replication_host", None)
# The port on the main synapse for TCP replication
if "worker_replication_port" in config:
raise ConfigError(DIRECT_TCP_ERROR, ("worker_replication_port",))
# The port on the main synapse for HTTP replication endpoint
self.worker_replication_http_port = config.get("worker_replication_http_port")
# The tls mode on the main synapse for HTTP replication endpoint.
# For backward compatibility this defaults to False.
self.worker_replication_http_tls = config.get(
"worker_replication_http_tls", False
)
# The shared secret used for authentication when connecting to the main synapse.
self.worker_replication_secret = config.get("worker_replication_secret", None)
self.worker_name = config.get("worker_name", self.worker_app)
self.instance_name = self.worker_name or "master"
self.instance_name = self.worker_name or MAIN_PROCESS_INSTANCE_NAME
# FIXME: Remove this check after a suitable amount of time.
self.worker_main_http_uri = config.get("worker_main_http_uri", None)
@ -215,12 +216,55 @@ class WorkerConfig(Config):
)
# A map from instance name to host/port of their HTTP replication endpoint.
# Check if the main process is declared. Inject it into the map if it's not,
# based first on whether a 'main' block is declared, then on 'worker_replication_*'
# data. If both are available, the instance_map entry takes precedence. The main process
# itself doesn't need this data as it would never have to talk to itself.
instance_map: Dict[str, Any] = config.get("instance_map", {})
if instance_map and self.instance_name is not MAIN_PROCESS_INSTANCE_NAME:
# The host used to connect to the main synapse
main_host = config.get("worker_replication_host", None)
# The port on the main synapse for HTTP replication endpoint
main_port = config.get("worker_replication_http_port")
# The tls mode on the main synapse for HTTP replication endpoint.
# For backward compatibility this defaults to False.
main_tls = config.get("worker_replication_http_tls", False)
# For now, accept 'main' in the instance_map, but the replication system
# expects 'master', so rewrite the key to 'master' until that changes.
if MAIN_PROCESS_INSTANCE_MAP_NAME in instance_map:
instance_map[MAIN_PROCESS_INSTANCE_NAME] = instance_map[
MAIN_PROCESS_INSTANCE_MAP_NAME
]
del instance_map[MAIN_PROCESS_INSTANCE_MAP_NAME]
# This is the backwards-compatibility path: it folds in the legacy
# worker_replication_* settings via setdefault() so nothing already set is overwritten.
elif main_host is not None and main_port is not None:
instance_map.setdefault(
MAIN_PROCESS_INSTANCE_NAME,
{
"host": main_host,
"port": main_port,
"tls": main_tls,
},
)
else:
# If we've gotten here, it means that the main process is not in the
# instance_map and that not enough worker_replication_* variables
# were declared in the worker's yaml.
raise ConfigError(
_MISSING_MAIN_PROCESS_INSTANCE_MAP_DATA
% MAIN_PROCESS_INSTANCE_MAP_NAME
)
self.instance_map: Dict[
str, InstanceLocationConfig
] = parse_and_validate_mapping(
config.get("instance_map", {}),
InstanceLocationConfig,
)
] = parse_and_validate_mapping(instance_map, InstanceLocationConfig)
# Map from type of streams to source, c.f. WriterLocations.
writers = config.get("stream_writers") or {}
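
To make the normalisation above concrete, a worked example (hostnames and ports hypothetical) of the re-keying from 'main' to 'master':

    config = {
        "instance_map": {
            "main": {"host": "synapse-main", "port": 9093},
            "worker1": {"host": "synapse-w1", "port": 9094},
        }
    }

    instance_map = dict(config["instance_map"])
    # The replication system still addresses the main process as
    # 'master', so the 'main' entry is re-keyed before validation:
    instance_map["master"] = instance_map.pop("main")
    assert sorted(instance_map) == ["master", "worker1"]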

View File

@ -173,7 +173,7 @@ class Keyring:
process_batch_callback=self._inner_fetch_key_requests,
)
self._hostname = hs.hostname
self._is_mine_server_name = hs.is_mine_server_name
# build a FetchKeyResult for each of our own keys, to shortcircuit the
# fetcher.
@ -277,7 +277,7 @@ class Keyring:
# If we are the originating server, short-circuit the key-fetch for any keys
# we already have
if verify_request.server_name == self._hostname:
if self._is_mine_server_name(verify_request.server_name):
for key_id in verify_request.key_ids:
if key_id in self._local_verify_keys:
found_keys[key_id] = self._local_verify_keys[key_id]

View File

@ -49,7 +49,7 @@ class FederationBase:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.server_name = hs.hostname
self._is_mine_server_name = hs.is_mine_server_name
self.keyring = hs.get_keyring()
self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
self.store = hs.get_datastores().main

View File

@ -854,7 +854,7 @@ class FederationClient(FederationBase):
for destination in destinations:
# We don't want to ask our own server for information we don't have
if destination == self.server_name:
if self._is_mine_server_name(destination):
continue
try:
@ -1536,7 +1536,7 @@ class FederationClient(FederationBase):
self, destinations: Iterable[str], room_id: str, event_dict: JsonDict
) -> None:
for destination in destinations:
if destination == self.server_name:
if self._is_mine_server_name(destination):
continue
try:

View File

@ -129,6 +129,7 @@ class FederationServer(FederationBase):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.server_name = hs.hostname
self.handler = hs.get_federation_handler()
self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
self._federation_event_handler = hs.get_federation_event_handler()
@ -942,7 +943,7 @@ class FederationServer(FederationBase):
authorising_server = get_domain_from_id(
event.content[EventContentFields.AUTHORISING_USER]
)
if authorising_server != self.server_name:
if not self._is_mine_server_name(authorising_server):
raise SynapseError(
400,
f"Cannot authorise request from resident server: {authorising_server}",

View File

@ -68,6 +68,7 @@ class FederationRemoteSendQueue(AbstractFederationSender):
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self.is_mine_server_name = hs.is_mine_server_name
# We may have multiple federation sender instances, so we need to track
# their positions separately.
@ -198,7 +199,7 @@ class FederationRemoteSendQueue(AbstractFederationSender):
key: Optional[Hashable] = None,
) -> None:
"""As per FederationSender"""
if destination == self.server_name:
if self.is_mine_server_name(destination):
logger.info("Not sending EDU to ourselves")
return

View File

@ -362,6 +362,7 @@ class FederationSender(AbstractFederationSender):
self.clock = hs.get_clock()
self.is_mine_id = hs.is_mine_id
self.is_mine_server_name = hs.is_mine_server_name
self._presence_router: Optional["PresenceRouter"] = None
self._transaction_manager = TransactionManager(hs)
@ -766,7 +767,7 @@ class FederationSender(AbstractFederationSender):
domains = [
d
for d in domains_set
if d != self.server_name
if not self.is_mine_server_name(d)
and self._federation_shard_config.should_handle(self._instance_name, d)
]
if not domains:
@ -832,7 +833,7 @@ class FederationSender(AbstractFederationSender):
assert self.is_mine_id(state.user_id)
for destination in destinations:
if destination == self.server_name:
if self.is_mine_server_name(destination):
continue
if not self._federation_shard_config.should_handle(
self._instance_name, destination
@ -860,7 +861,7 @@ class FederationSender(AbstractFederationSender):
content: content of EDU
key: clobbering key for this edu
"""
if destination == self.server_name:
if self.is_mine_server_name(destination):
logger.info("Not sending EDU to ourselves")
return
@ -897,7 +898,7 @@ class FederationSender(AbstractFederationSender):
queue.send_edu(edu)
def send_device_messages(self, destination: str, immediate: bool = True) -> None:
if destination == self.server_name:
if self.is_mine_server_name(destination):
logger.warning("Not sending device update to ourselves")
return
@ -919,7 +920,7 @@ class FederationSender(AbstractFederationSender):
might have come back.
"""
if destination == self.server_name:
if self.is_mine_server_name(destination):
logger.warning("Not waking up ourselves")
return

View File

@ -58,9 +58,9 @@ class TransportLayerClient:
"""Sends federation HTTP requests to other servers"""
def __init__(self, hs: "HomeServer"):
self.server_name = hs.hostname
self.client = hs.get_federation_http_client()
self._faster_joins_enabled = hs.config.experimental.faster_joins_enabled
self._is_mine_server_name = hs.is_mine_server_name
async def get_room_state_ids(
self, destination: str, room_id: str, event_id: str
@ -235,7 +235,7 @@ class TransportLayerClient:
transaction.transaction_id,
)
if transaction.destination == self.server_name:
if self._is_mine_server_name(transaction.destination):
raise RuntimeError("Transport layer cannot send to itself!")
# FIXME: This is only used by the tests. The actual json sent is

View File

@ -57,6 +57,7 @@ class Authenticator:
self._clock = hs.get_clock()
self.keyring = hs.get_keyring()
self.server_name = hs.hostname
self._is_mine_server_name = hs.is_mine_server_name
self.store = hs.get_datastores().main
self.federation_domain_whitelist = (
hs.config.federation.federation_domain_whitelist
@ -100,7 +101,9 @@ class Authenticator:
json_request["signatures"].setdefault(origin, {})[key] = sig
# if the origin_server sent a destination along it needs to match our own server_name
if destination is not None and destination != self.server_name:
if destination is not None and not self._is_mine_server_name(
destination
):
raise AuthenticationError(
HTTPStatus.UNAUTHORIZED,
"Destination mismatch in auth header",

View File

@ -212,7 +212,7 @@ class AuthHandler:
self._password_enabled_for_login = hs.config.auth.password_enabled_for_login
self._password_enabled_for_reauth = hs.config.auth.password_enabled_for_reauth
self._password_localdb_enabled = hs.config.auth.password_localdb_enabled
self._third_party_rules = hs.get_third_party_event_rules()
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.

View File

@ -39,11 +39,11 @@ class DeactivateAccountHandler:
self._profile_handler = hs.get_profile_handler()
self.user_directory_handler = hs.get_user_directory_handler()
self._server_name = hs.hostname
self._third_party_rules = hs.get_third_party_event_rules()
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
# Flag that indicates whether the process to part users from rooms is running
self._user_parter_running = False
self._third_party_rules = hs.get_third_party_event_rules()
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
# Start the user parter loop so it can resume parting users from rooms where
# it left off (if it has work left to do).

View File

@ -75,10 +75,14 @@ class DeviceWorkerHandler:
self.store = hs.get_datastores().main
self.notifier = hs.get_notifier()
self.state = hs.get_state_handler()
self._appservice_handler = hs.get_application_service_handler()
self._state_storage = hs.get_storage_controllers().state
self._auth_handler = hs.get_auth_handler()
self.server_name = hs.hostname
self._msc3852_enabled = hs.config.experimental.msc3852_enabled
self._query_appservices_for_keys = (
hs.config.experimental.msc3984_appservice_key_query
)
self.device_list_updater = DeviceListWorkerUpdater(hs)
@ -328,6 +332,30 @@ class DeviceWorkerHandler:
user_id, "self_signing"
)
# Check if the application services have any results.
if self._query_appservices_for_keys:
# Query the appservice for all devices for this user.
query: Dict[str, Optional[List[str]]] = {user_id: None}
# Query the appservices for any keys.
appservice_results = await self._appservice_handler.query_keys(query)
# Merge results, overriding anything from the database.
appservice_devices = appservice_results.get("device_keys", {}).get(
user_id, {}
)
# Filter the database results to only those devices that the appservice has
# *not* responded with.
devices = [d for d in devices if d["device_id"] not in appservice_devices]
# Append the appservice response by wrapping each result in another dictionary.
devices.extend(
{"device_id": device_id, "keys": device}
for device_id, device in appservice_devices.items()
)
# TODO Handle cross-signing keys.
return {
"user_id": user_id,
"stream_id": stream_id,

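The merge above lets appservice-supplied device keys take precedence over the database. The same rule in isolation (data shapes simplified; entries only need a `device_id` for the sketch):

    def merge_device_lists(db_devices, appservice_devices):
        # Drop database rows the appservice also answered for, then
        # append the appservice results so they take precedence.
        merged = [d for d in db_devices if d["device_id"] not in appservice_devices]
        merged.extend(
            {"device_id": device_id, "keys": keys}
            for device_id, keys in appservice_devices.items()
        )
        return merged
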
View File

@ -52,7 +52,9 @@ class DirectoryHandler:
self.config = hs.config
self.enable_room_list_search = hs.config.roomdirectory.enable_room_list_search
self.require_membership = hs.config.server.require_membership_for_aliases
self.third_party_event_rules = hs.get_third_party_event_rules()
self._third_party_event_rules = (
hs.get_module_api_callbacks().third_party_event_rules
)
self.server_name = hs.hostname
self.federation = hs.get_federation_client()
@ -503,7 +505,7 @@ class DirectoryHandler:
# Check if publishing is blocked by a third party module
allowed_by_third_party_rules = (
await (
self.third_party_event_rules.check_visibility_can_be_modified(
self._third_party_event_rules.check_visibility_can_be_modified(
room_id, visibility
)
)

View File

@ -29,7 +29,7 @@ from synapse.event_auth import (
)
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.types import StateMap, StrCollection, get_domain_from_id
from synapse.types import StateMap, StrCollection
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -47,6 +47,7 @@ class EventAuthHandler:
self._store = hs.get_datastores().main
self._state_storage_controller = hs.get_storage_controllers().state
self._server_name = hs.hostname
self._is_mine_id = hs.is_mine_id
async def check_auth_rules_from_context(
self,
@ -247,7 +248,7 @@ class EventAuthHandler:
if not await self.is_user_in_rooms(allowed_rooms, user_id):
# If this is a remote request, the user might be in an allowed room
# that we do not know about.
if get_domain_from_id(user_id) != self._server_name:
if not self._is_mine_id(user_id):
for room_id in allowed_rooms:
if not await self._store.is_host_joined(room_id, self._server_name):
raise SynapseError(

View File

@ -141,6 +141,7 @@ class FederationHandler:
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.is_mine_id = hs.is_mine_id
self.is_mine_server_name = hs.is_mine_server_name
self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
self.event_creation_handler = hs.get_event_creation_handler()
self.event_builder_factory = hs.get_event_builder_factory()
@ -169,7 +170,9 @@ class FederationHandler:
self._room_backfill = Linearizer("room_backfill")
self.third_party_event_rules = hs.get_third_party_event_rules()
self._third_party_event_rules = (
hs.get_module_api_callbacks().third_party_event_rules
)
# Tracks running partial state syncs by room ID.
# Partial state syncs currently only run on the main process, so it's okay to
@ -451,7 +454,7 @@ class FederationHandler:
for dom in domains:
# We don't want to ask our own server for information we don't have
if dom == self.server_name:
if self.is_mine_server_name(dom):
continue
try:
@ -1253,7 +1256,7 @@ class FederationHandler:
unpersisted_context,
) = await self.event_creation_handler.create_new_client_event(builder=builder)
event_allowed, _ = await self.third_party_event_rules.check_event_allowed(
event_allowed, _ = await self._third_party_event_rules.check_event_allowed(
event, unpersisted_context
)
if not event_allowed:

View File

@ -157,10 +157,13 @@ class FederationEventHandler:
self._get_room_member_handler = hs.get_room_member_handler
self._federation_client = hs.get_federation_client()
self._third_party_event_rules = hs.get_third_party_event_rules()
self._third_party_event_rules = (
hs.get_module_api_callbacks().third_party_event_rules
)
self._notifier = hs.get_notifier()
self._is_mine_id = hs.is_mine_id
self._is_mine_server_name = hs.is_mine_server_name
self._server_name = hs.hostname
self._instance_name = hs.get_instance_name()
@ -686,7 +689,7 @@ class FederationEventHandler:
server from invalid events (there is probably no point in trying to
re-fetch invalid events from every other HS in the room.)
"""
if dest == self._server_name:
if self._is_mine_server_name(dest):
raise SynapseError(400, "Can't backfill from self.")
events = await self._federation_client.backfill(

View File

@ -77,7 +77,6 @@ from synapse.util.metrics import measure_func
from synapse.visibility import get_effective_room_visibility_from_state
if TYPE_CHECKING:
from synapse.events.third_party_rules import ThirdPartyEventRules
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@ -509,8 +508,8 @@ class EventCreationHandler:
self._bulk_push_rule_evaluator = hs.get_bulk_push_rule_evaluator()
self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
self.third_party_event_rules: "ThirdPartyEventRules" = (
self.hs.get_third_party_event_rules()
self._third_party_event_rules = (
self.hs.get_module_api_callbacks().third_party_event_rules
)
self._block_events_without_consent_error = (
@ -1314,7 +1313,7 @@ class EventCreationHandler:
if requester:
context.app_service = requester.app_service
res, new_content = await self.third_party_event_rules.check_event_allowed(
res, new_content = await self._third_party_event_rules.check_event_allowed(
event, context
)
if res is False:

View File

@ -59,9 +59,9 @@ class ProfileHandler:
self.max_avatar_size = hs.config.server.max_avatar_size
self.allowed_avatar_mimetypes = hs.config.server.allowed_avatar_mimetypes
self.server_name = hs.config.server.server_name
self._is_mine_server_name = hs.is_mine_server_name
self._third_party_rules = hs.get_third_party_event_rules()
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
async def get_profile(self, user_id: str, ignore_backoff: bool = True) -> JsonDict:
target_user = UserID.from_string(user_id)
@ -170,8 +170,8 @@ class ProfileHandler:
displayname_to_set = None
# If the admin changes the display name of a user, the requesting user cannot send
# the join event to update the displayname in the rooms.
# This must be done by the target user himself.
# the join event to update the display name in the rooms.
# This must be done by the target user themselves.
if by_admin:
requester = create_requester(
target_user,
@ -309,7 +309,7 @@ class ProfileHandler:
else:
server_name = host
if server_name == self.server_name:
if self._is_mine_server_name(server_name):
media_info = await self.store.get_local_media(media_id)
else:
media_info = await self.store.get_cached_remote_media(server_name, media_id)

View File

@ -11,14 +11,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, List, Optional, Union
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
import attr
from synapse.api.errors import SynapseError, UnrecognizedRequestError
from synapse.push.clientformat import format_push_rules_for_user
from synapse.storage.push_rule import RuleNotFoundException
from synapse.synapse_rust.push import get_base_rule_ids
from synapse.types import JsonDict
from synapse.types import JsonDict, UserID
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -115,6 +116,17 @@ class PushRulesHandler:
stream_id = self._main_store.get_max_push_rules_stream_id()
self._notifier.on_new_event("push_rules_key", stream_id, users=[user_id])
async def push_rules_for_user(
self, user: UserID
) -> Dict[str, Dict[str, List[Dict[str, Any]]]]:
"""
Push rules aren't really account data, but get formatted as such for /sync.
"""
user_id = user.to_string()
rules_raw = await self._main_store.get_push_rules_for_user(user_id)
rules = format_push_rules_for_user(user, rules_raw)
return rules
def check_actions(actions: List[Union[str, JsonDict]]) -> None:
"""Check if the given actions are spec compliant.
@ -129,6 +141,8 @@ def check_actions(actions: List[Union[str, JsonDict]]) -> None:
raise InvalidRuleException("No actions found")
for a in actions:
# "dont_notify" and "coalesce" are legacy actions. They are allowed, but
# ignored (resulting in no action from the pusher).
if a in ["notify", "dont_notify", "coalesce"]:
pass
elif isinstance(a, dict) and "set_tweak" in a:
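
Putting the two halves of this change together: legacy actions still validate, but are stripped before storage, so they never reach a pusher. A small sketch of that round-trip (the constant name is hypothetical):

    LEGACY_NOOP_ACTIONS = {"dont_notify", "coalesce"}

    def stored_actions(actions):
        # Accepted for spec compatibility, filtered out on write.
        return [a for a in actions if a not in LEGACY_NOOP_ACTIONS]

    assert stored_actions(["notify", "dont_notify"]) == ["notify"]
    assert stored_actions(["coalesce"]) == []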

View File

@ -15,6 +15,7 @@
import logging
from typing import TYPE_CHECKING
from synapse.api.constants import ReceiptTypes
from synapse.util.async_helpers import Linearizer
if TYPE_CHECKING:
@ -42,7 +43,7 @@ class ReadMarkerHandler:
async with self.read_marker_linearizer.queue((room_id, user_id)):
existing_read_marker = await self.store.get_account_data_for_room_and_type(
user_id, room_id, "m.fully_read"
user_id, room_id, ReceiptTypes.FULLY_READ
)
should_update = True
@ -56,5 +57,5 @@ class ReadMarkerHandler:
if should_update:
content = {"event_id": event_id}
await self.account_data_handler.add_account_data_to_room(
user_id, room_id, "m.fully_read", content
user_id, room_id, ReceiptTypes.FULLY_READ, content
)

View File

@ -46,7 +46,7 @@ from synapse.replication.http.register import (
ReplicationRegisterServlet,
)
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import RoomAlias, UserID, create_requester
from synapse.types import GUEST_USER_ID_PATTERN, RoomAlias, UserID, create_requester
from synapse.types.state import StateFilter
if TYPE_CHECKING:
@ -143,10 +143,15 @@ class RegistrationHandler:
assigned_user_id: Optional[str] = None,
inhibit_user_in_use_error: bool = False,
) -> None:
if types.contains_invalid_mxid_characters(localpart):
if types.contains_invalid_mxid_characters(
localpart, self.hs.config.experimental.msc4009_e164_mxids
):
extra_chars = (
"=_-./+" if self.hs.config.experimental.msc4009_e164_mxids else "=_-./"
)
raise SynapseError(
400,
"User ID can only contain characters a-z, 0-9, or '=_-./'",
f"User ID can only contain characters a-z, 0-9, or '{extra_chars}'",
Codes.INVALID_USERNAME,
)
@ -195,16 +200,12 @@ class RegistrationHandler:
errcode=Codes.FORBIDDEN,
)
if guest_access_token is None:
try:
int(localpart)
if guest_access_token is None and GUEST_USER_ID_PATTERN.fullmatch(localpart):
raise SynapseError(
400,
"Numeric user IDs are reserved for guest users.",
errcode=Codes.INVALID_USERNAME,
)
except ValueError:
pass
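
`GUEST_USER_ID_PATTERN` is imported from `synapse.types` and its definition is not part of this excerpt; given the error message, it is presumably equivalent to a fullmatch on purely numeric localparts:

    import re

    # Assumed definition: guest user IDs are all-digit localparts.
    GUEST_USER_ID_PATTERN = re.compile(r"^\d+$")

    assert GUEST_USER_ID_PATTERN.fullmatch("12345")
    assert not GUEST_USER_ID_PATTERN.fullmatch("alice42")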
async def register_user(
self,

View File

@ -160,7 +160,9 @@ class RoomCreationHandler:
)
self._server_notices_mxid = hs.config.servernotices.server_notices_mxid
self.third_party_event_rules = hs.get_third_party_event_rules()
self._third_party_event_rules = (
hs.get_module_api_callbacks().third_party_event_rules
)
async def upgrade_room(
self, requester: Requester, old_room_id: str, new_version: RoomVersion
@ -742,7 +744,7 @@ class RoomCreationHandler:
# Let the third party rules modify the room creation config if needed, or abort
# the room creation entirely with an exception.
await self.third_party_event_rules.on_create_room(
await self._third_party_event_rules.on_create_room(
requester, config, is_requester_admin=is_requester_admin
)
@ -879,7 +881,7 @@ class RoomCreationHandler:
# Check whether this visibility value is blocked by a third party module
allowed_by_third_party_rules = (
await (
self.third_party_event_rules.check_visibility_can_be_modified(
self._third_party_event_rules.check_visibility_can_be_modified(
room_id, visibility
)
)
@ -1731,7 +1733,7 @@ class RoomShutdownHandler:
self.room_member_handler = hs.get_room_member_handler()
self._room_creation_handler = hs.get_room_creation_handler()
self._replication = hs.get_replication_data_handler()
self._third_party_rules = hs.get_third_party_event_rules()
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
self.event_creation_handler = hs.get_event_creation_handler()
self.store = hs.get_datastores().main

View File

@ -16,7 +16,7 @@ import abc
import logging
import random
from http import HTTPStatus
from typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple
from synapse import types
from synapse.api.constants import (
@ -38,7 +38,10 @@ from synapse.event_auth import get_named_level, get_power_level_event
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.handlers.profile import MAX_AVATAR_URL_LEN, MAX_DISPLAYNAME_LEN
from synapse.handlers.state_deltas import MatchChange, StateDeltasHandler
from synapse.logging import opentracing
from synapse.metrics import event_processing_positions
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api import NOT_SPAM
from synapse.types import (
JsonDict,
@ -97,7 +100,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self.clock = hs.get_clock()
self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
self.third_party_event_rules = hs.get_third_party_event_rules()
self._third_party_event_rules = (
hs.get_module_api_callbacks().third_party_event_rules
)
self._server_notices_mxid = self.config.servernotices.server_notices_mxid
self._enable_lookup = hs.config.registration.enable_3pid_lookup
self.allow_per_room_profiles = self.config.server.allow_per_room_profiles
@ -280,9 +285,25 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
"""
raise NotImplementedError()
@abc.abstractmethod
async def forget(self, user: UserID, room_id: str) -> None:
raise NotImplementedError()
user_id = user.to_string()
member = await self._storage_controllers.state.get_current_state_event(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
)
membership = member.membership if member else None
if membership is not None and membership not in [
Membership.LEAVE,
Membership.BAN,
]:
raise SynapseError(400, "User %s in room %s" % (user_id, room_id))
# In the normal case this call is only required if `membership` is not `None`.
# But after the last member has left the room, the background update
# `_background_remove_left_rooms` deletes this room's rows from the table
# `current_state_events`, so `get_current_state_event` returns `None`.
await self.store.forget(user_id, room_id)
async def ratelimit_multiple_invites(
self,
@ -1555,7 +1576,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# can't just rely on the standard ratelimiting of events.
await self._third_party_invite_limiter.ratelimit(requester)
can_invite = await self.third_party_event_rules.check_threepid_can_be_invited(
can_invite = await self._third_party_event_rules.check_threepid_can_be_invited(
medium, address, room_id
)
if not can_invite:
@ -2060,25 +2081,141 @@ class RoomMemberMasterHandler(RoomMemberHandler):
"""Implements RoomMemberHandler._user_left_room"""
user_left_room(self.distributor, target, room_id)
async def forget(self, user: UserID, room_id: str) -> None:
user_id = user.to_string()
member = await self._storage_controllers.state.get_current_state_event(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
class RoomForgetterHandler(StateDeltasHandler):
"""Forgets rooms when they are left, when enabled in the homeserver config.
For the purposes of this feature, kicks, bans and "leaves" via state resolution
weirdness are all considered to be leaves.
Derived from `StatsHandler` and `UserDirectoryHandler`.
"""
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self._hs = hs
self._store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self._clock = hs.get_clock()
self._notifier = hs.get_notifier()
self._room_member_handler = hs.get_room_member_handler()
# The current position in the current_state_delta stream
self.pos: Optional[int] = None
# Guard to ensure we only process deltas one at a time
self._is_processing = False
if hs.config.worker.run_background_tasks:
self._notifier.add_replication_callback(self.notify_new_event)
# We kick this off to pick up outstanding work from before the last restart.
self._clock.call_later(0, self.notify_new_event)
def notify_new_event(self) -> None:
"""Called when there may be more deltas to process"""
if self._is_processing:
return
self._is_processing = True
async def process() -> None:
try:
await self._unsafe_process()
finally:
self._is_processing = False
run_as_background_process("room_forgetter.notify_new_event", process)
async def _unsafe_process(self) -> None:
# If self.pos is None, it means we haven't fetched it from the DB yet
if self.pos is None:
self.pos = await self._store.get_room_forgetter_stream_pos()
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self.pos > room_max_stream_ordering:
# apparently, we've processed more events than exist in the database!
# this can happen if events are removed with history purge or similar.
logger.warning(
"Event stream ordering appears to have gone backwards (%i -> %i): "
"rewinding room forgetter processor",
self.pos,
room_max_stream_ordering,
)
membership = member.membership if member else None
self.pos = room_max_stream_ordering
if membership is not None and membership not in [
Membership.LEAVE,
Membership.BAN,
]:
raise SynapseError(400, "User %s in room %s" % (user_id, room_id))
if not self._hs.config.room.forget_on_leave:
# Update the processing position, so that if the server admin turns the
# feature on at a later date, we don't decide to forget every room that
# has ever been left in the past.
self.pos = self._store.get_room_max_stream_ordering()
await self._store.update_room_forgetter_stream_pos(self.pos)
return
# In the normal case this call is only required if `membership` is not `None`.
# But after the last member has left the room, the background update
# `_background_remove_left_rooms` deletes this room's rows from the table
# `current_state_events`, so `get_current_state_event` returns `None`.
await self.store.forget(user_id, room_id)
# Loop round handling deltas until we're up to date
while True:
# Be sure to read the max stream_ordering *before* checking if there are any outstanding
# deltas, since there is otherwise a chance that we could miss updates which arrive
# after we check the deltas.
room_max_stream_ordering = self._store.get_room_max_stream_ordering()
if self.pos == room_max_stream_ordering:
break
logger.debug(
"Processing room forgetting %s->%s", self.pos, room_max_stream_ordering
)
(
max_pos,
deltas,
) = await self._storage_controllers.state.get_current_state_deltas(
self.pos, room_max_stream_ordering
)
logger.debug("Handling %d state deltas", len(deltas))
await self._handle_deltas(deltas)
self.pos = max_pos
# Expose current event processing position to prometheus
event_processing_positions.labels("room_forgetter").set(max_pos)
await self._store.update_room_forgetter_stream_pos(max_pos)
async def _handle_deltas(self, deltas: List[Dict[str, Any]]) -> None:
"""Called with the state deltas to process"""
for delta in deltas:
typ = delta["type"]
state_key = delta["state_key"]
room_id = delta["room_id"]
event_id = delta["event_id"]
prev_event_id = delta["prev_event_id"]
if typ != EventTypes.Member:
continue
if not self._hs.is_mine_id(state_key):
continue
change = await self._get_key_change(
prev_event_id,
event_id,
key_name="membership",
public_value=Membership.JOIN,
)
is_leave = change is MatchChange.now_false
if is_leave:
try:
await self._room_member_handler.forget(
UserID.from_string(state_key), room_id
)
except SynapseError as e:
if e.code == 400:
# The user is back in the room.
pass
else:
raise
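
For reference, the deltas handled above are plain dicts from the current state delta stream; a representative (hypothetical) example of one that would trigger a forget for a local user:

    delta = {
        "type": "m.room.member",
        "state_key": "@alice:example.com",  # must be a local user
        "room_id": "!room:example.com",
        "event_id": "$leave_event_id",
        "prev_event_id": "$join_event_id",  # membership went join -> leave
    }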
def get_users_which_can_issue_invite(auth_events: StateMap[EventBase]) -> List[str]:

View File

@ -137,6 +137,3 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
await self._notify_change_client(
user_id=target.to_string(), room_id=room_id, change="left"
)
async def forget(self, target: UserID, room_id: str) -> None:
raise RuntimeError("Cannot forget rooms on workers.")

Some files were not shown because too many files have changed in this diff