Our sentry-web containers are extremely slow to start — it can take up to 3 minutes until the health endpoint starts responding. I have no idea what takes so long. Is there anything I can do to get more logging so we can try to chase this down?
This is all I see btw:
*** Operational MODE: preforking+threaded ***
spawned uWSGI master process (pid: 14)
spawned uWSGI worker 1 (pid: 18, cores: 4)
spawned uWSGI worker 2 (pid: 19, cores: 4)
spawned uWSGI worker 3 (pid: 20, cores: 4)
09:45:14 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
09:45:17 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
09:45:18 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
09:47:26 [INFO] sentry.plugins.github: apps-not-configured
WSGI app 0 (mountpoint='') ready in 162 seconds on interpreter 0x55833f7712f0 pid: 19 (default app)
09:47:32 [INFO] sentry.plugins.github: apps-not-configured
WSGI app 0 (mountpoint='') ready in 167 seconds on interpreter 0x55833f7712f0 pid: 20 (default app)
09:47:33 [INFO] sentry.plugins.github: apps-not-configured
WSGI app 0 (mountpoint='') ready in 168 seconds on interpreter 0x55833f7712f0 pid: 18 (default app)
If this is on Kubernetes and you have CPU resource limits in place, raise them.
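For reference, here's a rough sketch of what raising the CPU limit on the sentry-web container might look like (the container name and values are illustrative, adjust to your deployment):

```yaml
# Deployment spec fragment (illustrative values).
# A higher CPU limit lets the uWSGI workers get through Sentry's
# import-heavy startup without being throttled by the kubelet.
resources:
  requests:
    cpu: "1"
  limits:
    cpu: "4"      # startup is CPU-bound; a tight limit stretches it to minutes
    memory: 3Gi
```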
Thanks — it is, and we can see CPU spiking during startup. I'd still like to understand what it's doing though (and whether there's anything we can do to tune it).
It is spawning multiple Python worker processes to handle incoming requests concurrently. You may want to look into uWSGI optimizations.
Spawning a uWSGI worker in itself shouldn't necessarily be heavy — it depends on what executes during startup, of course. I haven't looked at the code, but I'm guessing Sentry does some very heavy lifting during startup (which in practice happens once per worker).
In a CPU-limited environment, we're looking at several minutes. So my question still stands: can anything be done to make Sentry start faster/lighter? Are there absolute minimum recommendations for a containerized, CPU-limited environment?
Turns out Sentry does a lot of imports on startup, which is unfortunately slow in Python. I guess you can decrease the number of workers — if your load is low, even a single worker with 2 threads should be fine.
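If it helps, the worker and thread counts can be tuned through `SENTRY_WEB_OPTIONS` in `sentry.conf.py`; this is just a sketch, and the values are illustrative:

```python
# sentry.conf.py fragment (illustrative values, not a recommendation).
# Fewer workers means fewer copies of the expensive import work at startup;
# threads add request concurrency within a worker at much lower startup cost.
SENTRY_WEB_OPTIONS = {
    "workers": 1,   # a single uWSGI worker instead of the default
    "threads": 2,   # two threads per worker to keep some concurrency
}
```

To see where the startup time actually goes, Python 3.7+ can print per-import timings to stderr when started with `python -X importtime` (or with the `PYTHONPROFILEIMPORTTIME=1` environment variable set), which would answer the "more logging" question above.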
I have a similar problem in our self-hosted Sentry deployment. For our production setup we use the default Sentry configuration, which is meant for a medium-scale deployment, and we have a lot of events during peak hours. I find that on restarting Sentry we lose the Sentry web interface for a really long time, and sometimes I have to restart several times to get it to work.