(instead of everywhere that writes a response; or rather, the subset of places
that write responses where we haven't forgotten it).
This also means that we no longer need the mysterious version_string
attribute on anything with a request handler.
Unfortunately it does mean that we have to pass the version string wherever we
instantiate a SynapseSite, which has been copy-and-pasted 150 times, but that
is code that ought to be cleaned up anyway, really.
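A minimal sketch of the idea (not the actual SynapseSite/SynapseRequest code;
the class names and version string are illustrative): set the Server header
once, in the request class, instead of in every servlet that writes a response:

    from twisted.web.resource import Resource
    from twisted.web.server import Site, Request

    class VersionedRequest(Request):
        def render(self, resrc):
            # Replace the default Server header that twisted.web sets in
            # process(), before the resource renders the response.
            self.setHeader(b"Server", self.site.server_version_string)
            super().render(resrc)

    class VersionedSite(Site):
        requestFactory = VersionedRequest

        def __init__(self, resource, server_version_string, *args, **kwargs):
            super().__init__(resource, *args, **kwargs)
            # The version string now has to be threaded through to every
            # place that builds a site, rather than living on each handler.
            self.server_version_string = server_version_string

    site = VersionedSite(Resource(), b"Synapse/x.y.z")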
Add listen_tcp and listen_ssl, which call Twisted's reactor.listenTCP
and reactor.listenSSL for each of multiple bind addresses.
Signed-off-by: Silke Hofstra <silke@slxh.eu>
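A minimal sketch of what such helpers might look like (the exact signatures in
Synapse may differ; the reactor.listenTCP/listenSSL calls are real Twisted APIs):

    from twisted.internet import reactor

    def listen_tcp(bind_addresses, port, factory, backlog=50):
        """Call reactor.listenTCP once per bind address."""
        ports = []
        for address in bind_addresses:
            ports.append(reactor.listenTCP(port, factory, backlog, address))
        return ports

    def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
        """Call reactor.listenSSL once per bind address."""
        ports = []
        for address in bind_addresses:
            ports.append(
                reactor.listenSSL(port, factory, context_factory, backlog, address)
            )
        return ports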
Binding on 0.0.0.0 when :: is specified in the bind_addresses is now allowed.
This causes a warning explaining the behaviour.
Configuration changed to match.
See #2232
Signed-off-by: Silke Hofstra <silke@slxh.eu>
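A sketch of the kind of check this implies (the function name and config shape
are illustrative, not the actual Synapse code):

    import logging

    logger = logging.getLogger(__name__)

    def warn_on_redundant_wildcards(bind_addresses):
        # Binding '::' normally accepts IPv4 connections as well, so also
        # listing '0.0.0.0' is allowed but redundant; warn and explain
        # rather than failing.
        if "::" in bind_addresses and "0.0.0.0" in bind_addresses:
            logger.warning(
                "Binding on both :: and 0.0.0.0: '::' usually already "
                "accepts IPv4 connections, so 0.0.0.0 is redundant."
            )
        return bind_addresses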
This avoids the scenario where we have four different PreviewUrlResources
configured on a single app, each with its own caches and cache-clearing jobs.
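A sketch of the resulting shape (the class and mount points are stand-ins for
the real media repository code): build the resource once and mount the same
instance everywhere it is needed, so there is a single cache and a single
cache-clearing job:

    from twisted.web.resource import Resource

    class PreviewUrlResource(Resource):
        isLeaf = True

        def __init__(self):
            super().__init__()
            self.cache = {}  # one cache, one expiry job, however many mounts

    shared_preview = PreviewUrlResource()

    media_v1 = Resource()
    media_r0 = Resource()
    media_v1.putChild(b"preview_url", shared_preview)  # same instance reused
    media_r0.putChild(b"preview_url", shared_preview)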
This fixes a class of 'Unexpected logcontext' messages, which were happening
because the logcontext was somewhat arbitrarily swapping between the sentinel
and the `run` logcontext.
The empty string is a valid setting for the bind_address option, so
explicitly check for None here instead.
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
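A sketch of the distinction (the config shape is illustrative): '' is
meaningful to Twisted ("listen on all interfaces"), so the option should only
be skipped when it is actually absent:

    config = {"bind_address": ""}

    bind_addresses = []
    bind_address = config.get("bind_address")

    if bind_address is not None:   # correct: keeps '' as a valid value
        bind_addresses.append(bind_address)

    # A truthiness check would wrongly discard the empty string:
    # if bind_address:
    #     bind_addresses.append(bind_address)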
The existing content can still be downloaded. The last upload to the
matrix.org server was in January 2015, so it is probably safe to remove
the upload API.
Renames ``load_config`` to ``load_or_generate_config``
Adds a method called ``load_config`` that just loads the
config.
The main synapse.app.homeserver will continue to use
``load_or_generate_config`` to retain backwards compatibility.
However, new worker processes can use ``load_config`` to load the
config, avoiding some of the cruft needed to generate it.
As the new ``load_config`` method is only expected to be used by new
code, it removes support for the legacy command-line overrides
that ``load_or_generate_config`` supports.
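A simplified sketch of the split (the signatures and the ``read_config``
helper here are illustrative, not the real ``Config`` API):

    import argparse
    import yaml

    class Config:
        def read_config(self, config_dict):
            # Placeholder for the real per-section config parsing.
            self.raw = config_dict

        @classmethod
        def load_config(cls, description, argv):
            """Just load existing config files: no generation, no legacy
            command-line overrides."""
            parser = argparse.ArgumentParser(description=description)
            parser.add_argument("-c", "--config-path", action="append",
                                required=True)
            args = parser.parse_args(argv)

            config = cls()
            for path in args.config_path:
                with open(path) as f:
                    config.read_config(yaml.safe_load(f))
            return config

        @classmethod
        def load_or_generate_config(cls, description, argv):
            """Used by synapse.app.homeserver: additionally supports
            generating a fresh config and the legacy overrides (omitted)."""
            return cls.load_config(description, argv)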
Add a replication API for extracting the updates that happened on synapse.
This is necessary for replicating the data in synapse to be visible to a
separate service because presence and typing notifications aren't stored
in a database so won't be visible to another process.
This API can be used either to get the raw data, by requesting the tables
themselves, or just to receive notifications of updates, by following the
"streams" meta-stream.
Updates for each requested table are returned as a JSON array of arrays,
with an entry for each row in the table.
Each table is prefixed by a header row giving the name of the table, the
current stream_id position for the table, the number of rows, the number of
columns, and the names of the columns.
This is followed by the rows that have been added to the server since
the requester last asked.
The API has a timeout and is hooked up to the notifier so that a slave
can long poll for updates.
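For illustration, a rough long-polling client; the endpoint path, query
parameters and exact row layout are assumptions for the example rather than
the documented API:

    import requests

    position = 0  # highest stream_id we have seen for the "events" table

    while True:
        # The server holds the request open (up to the timeout) until there
        # is something newer than the position we pass in.
        resp = requests.get(
            "http://localhost:8008/_synapse/replication",
            params={"events": position, "timeout": 30000},
            timeout=60,
        )
        for table in resp.json():
            # Header row: table name, current stream_id, row count,
            # column count, column names; the data rows follow.
            name, stream_id, n_rows, n_cols, columns = table[0]
            for row in table[1:]:
                print(name, dict(zip(columns, row)))
            if name == "events":
                position = stream_id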
Currently we store all access tokens in the DB, and fall back to a database
check if we can't validate the macaroon, so that fallback works for normal
users. Guests' macaroons, however, don't get persisted, so we never find them
in the database. On each restart we generate a new ephemeral key, so guests
lose access after every server restart.
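A sketch of the token check described above (the helper names are
hypothetical; only the pymacaroons calls are real):

    from pymacaroons import Macaroon, Verifier

    def token_is_valid(token, macaroon_secret_key, lookup_token_in_db):
        try:
            macaroon = Macaroon.deserialize(token)
            v = Verifier()
            v.satisfy_general(lambda caveat: True)  # caveat checks elided
            v.verify(macaroon, macaroon_secret_key)
            return True
        except Exception:
            # Fallback: look the token up in the access_tokens table. Guest
            # macaroons are never persisted there, so once the ephemeral key
            # changes on restart they fail both checks and the guest is
            # locked out.
            return lookup_token_in_db(token)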
I tried to fix up the config stuff to be less insane, but gave up, so
instead I bolt on yet another piece of custom one-off insanity.
Also, add some basic tests for config generation and loading.
This is for setting up dependencies that require work on startup. It is
useful for the DataStore, which wants to read a bunch of data from the
database before it finishes initialising.
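A hypothetical sketch of the pattern (the class names and schema are
stand-ins, not the actual Synapse code):

    import sqlite3

    class DataStore:
        def __init__(self, conn):
            self.conn = conn
            self.max_stream_id = None

        def prepare(self):
            # One-off startup work: e.g. reading the current stream position
            # out of the database before the store is used.
            row = self.conn.execute(
                "SELECT COALESCE(MAX(id), 0) FROM events"
            ).fetchone()
            self.max_stream_id = row[0]

    class HomeServer:
        def __init__(self, conn):
            self.conn = conn
            self.datastore = None

        def setup(self):
            """Build dependencies that need startup work, then run it."""
            self.datastore = DataStore(self.conn)
            self.datastore.prepare()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY)")
    hs = HomeServer(conn)
    hs.setup()  # must happen before the server starts serving requests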
This is necessary because otherwise the files won't get picked up by python
packaging. It also fixes the problem where the "static" folder was only found
if synapse was started from that directory.
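For example, something along these lines in setup.py (illustrative, not the
actual Synapse packaging config):

    from setuptools import setup, find_packages

    setup(
        name="synapse",
        packages=find_packages(),
        # Ship the static files inside the package so they are installed
        # with the code and located via the package path rather than the
        # current working directory.
        include_package_data=True,
        package_data={"synapse": ["static/*", "static/*/*"]},
    )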