* Configure and initialise tracer
Includes config options for the tracer and sets up JaegerClient.
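A minimal sketch of what that initialisation might look like with the jaeger_client library (sampler settings and service name here are illustrative, not Synapse's actual config values):
```
from jaeger_client import Config as JaegerConfig

# Illustrative settings only; the real options come from the homeserver config.
config = JaegerConfig(
    config={"sampler": {"type": "const", "param": 1}, "logging": False},
    service_name="synapse",
)
tracer = config.initialize_tracer()
```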
* Scope manager using LogContexts
We piggy-back our tracer scopes by using log context.
The current log context gives us the current scope. If a new scope is
created, we create a stack of scopes in the context.
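Roughly, the scope manager keeps the active scope on the current log context rather than in a thread-local. A self-contained sketch of the idea (a thread-local stands in for the log context here, and all names are illustrative):
```
import threading

from opentracing import Scope, ScopeManager

# Stand-in for the current log context; the real implementation stores the
# scope on Synapse's LoggingContext rather than in a thread-local.
_current_logcontext = threading.local()


class LogContextScopeManager(ScopeManager):
    @property
    def active(self):
        # The current log context gives us the current scope (or None).
        return getattr(_current_logcontext, "scope", None)

    def activate(self, span, finish_on_close):
        scope = _LogContextScope(self, span, finish_on_close)
        _current_logcontext.scope = scope
        return scope


class _LogContextScope(Scope):
    def __init__(self, manager, span, finish_on_close):
        super().__init__(manager, span)
        self._finish_on_close = finish_on_close

    def close(self):
        # Pop ourselves off the log context and optionally finish the span.
        _current_logcontext.scope = None
        if self._finish_on_close:
            self.span.finish()
```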
* jaeger is a dependency now
* Carrier injection and extraction for Twisted Headers
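The inject/extract step just shuttles the span context through a plain dict carrier and copies it to or from the Twisted Headers object; a hedged sketch (the helper names are made up):
```
from opentracing.propagation import Format
from twisted.web.http_headers import Headers


def inject_into_twisted_headers(tracer, span, headers: Headers):
    # Serialise the span context into a dict carrier, then copy each entry
    # onto the outgoing Twisted Headers object.
    carrier = {}
    tracer.inject(span.context, Format.HTTP_HEADERS, carrier)
    for key, value in carrier.items():
        headers.addRawHeader(key, value)


def extract_from_twisted_headers(tracer, headers: Headers):
    # Flatten Twisted's header structure (name -> list of values) back into
    # the dict carrier that opentracing expects.
    carrier = {
        name.decode("ascii"): values[0].decode("ascii")
        for name, values in headers.getAllRawHeaders()
    }
    return tracer.extract(Format.HTTP_HEADERS, carrier)
```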
* Trace federation requests on the way in and out.
The span is created in _started_processing and closed in
_finished_processing because we need a meaningful log context.
* Create logcontext for new scope.
Instead of having a stack of scopes in a logcontext, we create a new
context for the new scope if the current logcontext already has a scope.
* Remove scope from logcontext if logcontext is top level
* Disable tracer if not configured
* typo
* Remove dependence on jaeger internals
* bools
* Set service name
* Explicitly state that the tracer is disabled
* Black is the new black
* Newsfile
* Code style
* Use the new config setup.
* Generate config.
* Copyright
* Rename config to opentracing
* Remove user whitelisting
* Empty whitelist by default
* User ConfigError instead of RuntimeError
* Use isinstance
* Use tag constants for opentracing.
* Remove debug comment; no need to explicitly record the error
* Two errors a "s(c)entry"
* Docstrings!
* Remove debugging brainslip
* Homeserver whitelisting
* Better opentracing config comment
* linting
* Include worker name in service_name
* Make opentracing an optional dependency
* Neater config retrieval
* Clean up dummy tags
* Instantiate tracing as object instead of global class
* Include opentracing as a homeserver member.
* Thread opentracing to the request level
* Reference opentracing through hs
* Instantiate dummy opentracing for tests.
* About to revert, just keeping the unfinished changes just in case
* Revert back to global state, commit number:
9ce4a3d906
* Use class level methods in tracerutils
* Start and stop request spans in a place where we
have access to the authenticated entity
* Seen it, isort it
* Make sure to close the active span.
* I'm getting black and blue from this.
* Logger formatting
Co-Authored-By: Erik Johnston <erik@matrix.org>
* Outdated comment
* Import opentracing at the top
* Return a contextmanager
* Start tracing client requests from the servlet
* Return noop context manager if not tracing
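The "noop context manager if not tracing" idea is roughly the following sketch (`tracing_enabled` is a stand-in for however the homeserver's config is consulted):
```
from contextlib import contextmanager

import opentracing


@contextmanager
def _noop_context_manager(*args, **kwargs):
    # Does nothing, so callers can write `with start_active_span(...)`
    # whether or not tracing is configured.
    yield


def start_active_span(operation_name, tracing_enabled=True, **kwargs):
    # When tracing is off, hand back the do-nothing context manager so call
    # sites don't need to branch.
    if not tracing_enabled:
        return _noop_context_manager()
    return opentracing.tracer.start_active_span(operation_name, **kwargs)
```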
* Explicitly say that these are federation requests
* Include servlet name in client requests
* Use context manager
* Move opentracing to logging/
* Seen it, isort it again!
* Ignore twisted return exceptions on context exit
* Escape the scope
* Scopes should be entered to make them useful.
* Nicer decorator names
* Just one init, init?
* Don't need to close something that isn't open
* Docs make you smarter
Sends password reset emails from the homeserver instead of proxying to the identity server. This is now the default behaviour for security reasons. If you wish to continue proxying password reset requests to the identity server, you must now enable the email.trust_identity_server_for_password_resets option.
This PR is a culmination of 3 smaller PRs which have each been separately reviewed:
* #5308
* #5345
* #5368
This reverts commit ce5bcefc60.
This caused:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/synapse/src/synapse/app/client_reader.py", line 32, in <module>
from synapse.replication.slave.storage import SlavedProfileStore
ImportError: cannot import name 'SlavedProfileStore' from 'synapse.replication.slave.storage' (/home/synapse/src/synapse/replication/slave/storage/__init__.py)
error starting synapse.app.client_reader('/home/synapse/config/workers/client_reader.yaml') (exit code: 1); see above for logs
```
When guest_access changes from allowed to forbidden, all local guest
users should be kicked from the room. This did not happen when the
revocation was received from federation on a worker.
Presumably broken in #4141
This allows registration to be handled by a worker, though the actual
write to the database still happens on master.
Note: due to the in-memory session map all registration requests must be
handled by the same worker.
* Better logging for errors on startup
* Fix "TypeError: '>' not supported" when starting without an existing
certificate
* Fix a bug where an existing certificate would be reprovisioned every day
I wanted to bring listen_tcp into line with listen_ssl in terms of returning a
list of ports, and wanted to check that was a safe thing to do - hence the
logging in `refresh_certificate`.
Also, pull the 'Synapse now listening' message up to homeserver.py, because it
was being duplicated everywhere else.
* Handle listening for ACME requests on IPv6 addresses
The weird url-but-not-actually-a-url string doesn't handle IPv6 addresses
without extra quoting. Building a string which you are about to parse again
seems like a weird choice. Let's just use listenTCP, which is consistent with
what we do elsewhere.
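A plain listenTCP call takes the interface verbatim, so IPv6 addresses need none of that quoting (port and resource below are placeholders):
```
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site

# The interface is passed through as-is, so an IPv6 address like "::" needs
# no escaping. Port and resource are placeholders for the ACME responder.
reactor.listenTCP(8080, Site(Resource()), interface="::")
```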
* Clean up the default ACME config
make it look a bit more consistent with everything else, and tweak the defaults
to listen on port 80.
* newsfile
This allows the OpenID userinfo endpoint to be active even if the
federation resource is not active. The OpenID userinfo endpoint
is called by integration managers to verify user actions using the
client API OpenID access token. Without this verification, the
integration manager cannot know that the access token is valid.
The OpenID userinfo endpoint will be loaded if either the "federation"
or the "openid" resource is defined. The new "openid" resource is
active in the default configuration.
Signed-off-by: Jason Robinson <jasonr@matrix.org>
For all the homeserver classes, only the FrontendProxyServer passes
its reactor when doing the http listen. Looking at previous PRs, it looks
like this was introduced to make it possible to write a test; otherwise,
when you try to run a test with the test homeserver, it tries to
do a real bind to a port. Passing the reactor that the homeserver
is instantiated with should probably be the right thing to do anyway.
Signed-off-by: Jason Robinson <jasonr@matrix.org>
This is largely a precursor for the removal of the bundled webclient. The idea
is to present a page at / which reassures people that something is working, and
to give them some links for next steps.
The welcome page lives at `/_matrix/static/`, so is enabled alongside the other
`static` resources (which, in practice, means the client API is enabled). We'll
redirect to it from `/` if we have nothing better to display there.
It would be nice to have a way to disable it (in the same way that you might
disable the nginx welcome page), but I can't really think of a good way to do
that without a load of ickiness.
It's based on the work done by @krombel for #2601.
This implements both a SAML2 metadata endpoint (at
`/_matrix/saml2/metadata.xml`), and a SAML2 response receiver (at
`/_matrix/saml2/authn_response`). If the SAML2 response matches what's been
configured, we complete the SSO login flow by redirecting to the client url
(aka `RelayState` in SAML2 jargon) with a login token.
What we don't yet have is anything to build a SAML2 request and redirect the
user to the identity provider. That is left as an exercise for the reader.
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
As of #4027, we require psutil to be installed, so it should be in our
dependency list. We can also remove some of the conditional import code
introduced by #992.
Fixes #4062.
Symlinks apparently break setuptools on python3 and alpine
(https://bugs.python.org/issue31940), so let's stop using a symlink and just
use the file directly.
ExpiringCache required that `start()` be called before it would actually
start expiring entries. A number of places didn't do that.
This PR removes `start` from ExpiringCache, and automatically starts the
background reaping process on creation instead.
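A sketch of the resulting shape (illustrative, not Synapse's actual ExpiringCache): the reaping loop is scheduled in the constructor, so there is no separate start() to forget.
```
import time

from twisted.internet import task


class ExpiringCache:
    """A dict-like cache whose reaper starts itself on construction, so
    callers no longer need to remember to call start()."""

    def __init__(self, max_age_ms, reap_interval_ms=30000):
        self._cache = {}  # key -> (inserted_at_ms, value)
        self._max_age_ms = max_age_ms

        # Kick off the background reaping loop immediately; previously this
        # only happened once start() was called (and often wasn't).
        self._reaper = task.LoopingCall(self._prune)
        self._reaper.start(reap_interval_ms / 1000.0, now=False)

    def __setitem__(self, key, value):
        self._cache[key] = (int(time.time() * 1000), value)

    def __getitem__(self, key):
        return self._cache[key][1]

    def _prune(self):
        # Drop anything older than max_age_ms.
        threshold = int(time.time() * 1000) - self._max_age_ms
        for key in [k for k, (ts, _) in self._cache.items() if ts < threshold]:
            del self._cache[key]
```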
We should explicitly close any db connections we open, because failing to do so
can block other transactions as per
https://github.com/matrix-org/synapse/issues/3682.
Let's also try to factor out some of the boilerplate by having server classes
define their datastore class rather than duplicating the whole of `setup`.
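The connection-closing part is just the usual try/finally pattern; a trivial sketch with sqlite3 standing in for the real database engine:
```
import sqlite3


def setup_database(path):
    # Open a connection for the setup work and close it explicitly, even on
    # failure, so it cannot sit holding locks and block other transactions.
    db_conn = sqlite3.connect(path)
    try:
        db_conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
        db_conn.commit()
    finally:
        db_conn.close()
```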
Turns out that the user directory handling is fairly racy, as a bunch
of stuff assumes that the processing happens on master, which it doesn't
when there is a synapse.app.user_dir worker. So let's just call the
function directly until we actually get round to fixing it, since it
doesn't make the situation any worse.
Run the handlers for replication commands as background processes. This should
improve the visibility in our metrics, and reduce the number of "running db
transaction from sentinel context" warnings.
Ideally it means converting the things that fire off deferreds into the night
into things that actually return a Deferred when they are done. I've made a bit
of a stab at this, but it will probably be leaky.
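A sketch of that pattern using Synapse's run_as_background_process helper (the command name and handler below are illustrative):
```
from twisted.internet import defer

from synapse.metrics.background_process_metrics import run_as_background_process


def on_rdata(cmd):
    # Run the real work as a named background process so its db transactions
    # are attributed to it in the metrics, rather than running from the
    # sentinel logcontext.
    return run_as_background_process("replication-RDATA", _process_rdata, cmd)


@defer.inlineCallbacks
def _process_rdata(cmd):
    # ... the actual handling (db writes, notifications, etc.) goes here ...
    yield defer.succeed(None)
```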
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.
Instead, give them their own "background process" logcontexts.
It turns out that looping_call does check the deferred returned by its
callback, and (at least in the case of client_ips), we were relying on this,
and I broke it in #3604.
Update run_as_background_process to return the deferred, and make sure we
return it to clock.looping_call.
This allows us to handle /context/ requests on the client_reader worker
without having to pull in all the various stream handlers (e.g.
presence, typing, pushers, etc.). The only thing the token gets used for
is pagination, and that ignores everything but the room portion of the
token.
(instead of everywhere that writes a response. Or rather, the subset of places
which write responses where we haven't forgotten it).
This also means that we don't have to have the mysterious version_string
attribute in anything with a request handler.
Unfortunately it does mean that we have to pass the version string wherever we
instantiate a SynapseSite, which has been c&ped 150 times, but that is code
that ought to be cleaned up anyway really.
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.
This is unsatisfactory for a number of reasons:
- logging on garbage collection is best-effort and may happen some time after
the error, if at all
- it can be hard to figure out where the error actually happened.
- it is logged as a scary CRITICAL error which (a) I always forget to grep for
and (b) it's not really CRITICAL if a background process we don't care about
fails.
So this is an attempt to add exception handling to everything we fire off into
the background.
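The pattern boils down to attaching an errback that logs the failure at the point where the work is fired off; a minimal sketch (not the actual helper):
```
import logging

from twisted.internet import defer

logger = logging.getLogger(__name__)


def fire_and_forget(desc, f, *args, **kwargs):
    """Run `f` in the background, logging any exception immediately instead
    of waiting for the garbage collector to report an unhandled Deferred
    error. (Illustrative helper, not Synapse's.)"""
    d = defer.maybeDeferred(f, *args, **kwargs)

    def _log_failure(failure):
        logger.error("Background process %s failed: %s", desc, failure)

    d.addErrback(_log_failure)
    return d
```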
Add listen_tcp and listen_ssl which implement Twisted's reactor.listenTCP
and reactor.listenSSL for multiple addresses.
Signed-off-by: Silke Hofstra <silke@slxh.eu>
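A sketch of the idea behind listen_tcp (the SSL counterpart is analogous): loop over the bind addresses, call reactor.listenTCP for each, and return the resulting ports.
```
from twisted.internet import reactor


def listen_tcp(bind_addresses, port, factory, backlog=50):
    """Create TCP listeners on each of the given addresses, mirroring
    reactor.listenTCP but over a list of interfaces. (Sketch of the idea.)"""
    ports = []
    for address in bind_addresses:
        ports.append(
            reactor.listenTCP(port, factory, backlog=backlog, interface=address)
        )
    return ports
```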
Binding on 0.0.0.0 when :: is specified in the bind_addresses is now allowed.
This causes a warning explaining the behaviour.
Configuration changed to match.
See #2232
Signed-off-by: Silke Hofstra <silke@slxh.eu>
This avoids the scenario where we have four different PreviewUrlResources
configured on a single app, each of which has its own cache and
cache-clearing job.