Sentry 8.12.0, using the official Docker container.
Everything seemed to be working fine, but then all of a sudden Sentry stopped reporting new issues in the dashboard. Clients get a 200 when reporting, and I can see the reports come in, but the worker fails to process them. I’m not sure whether it’s related, but things seem to have gone wrong once I added a custom fingerprint to one of our clients.
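For reference, the fingerprint was set on the client side. Here is a minimal sketch of what that looks like, assuming the raven-python client of that era; the DSN, the failing call, and the grouping value are placeholders:

```python
from raven import Client

# Placeholder DSN; use your project's real one.
client = Client("https://public:secret@sentry.example.com/1")

def connect_to_database():
    raise RuntimeError("connection refused")  # stand-in failure

try:
    connect_to_database()
except Exception:
    # '{{ default }}' keeps Sentry's normal grouping and appends a custom
    # component, so these errors are grouped under their own issue.
    client.captureException(fingerprint=["{{ default }}", "db-connection"])
```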
Thanks for the info. It seems like the main issue was that our server was being overwhelmed by reports, causing it to fall behind, and many of them started to time out. I added a quota, and things seem to be back on track.
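In case it helps anyone else: one knob for this on a self-hosted install is the system-wide rate-limit option. Whether that matches the quota I set is an assumption (per-project quotas can also be set in the project settings), and the value below is made up:

```
# Hypothetical: cap overall intake at 500 events/minute so the workers
# can keep up; tune the value to your worker throughput.
sentry config set system.rate-limit 500
```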
I was wondering: is there a document/article/diagram that shows the lifetime of a report as far as the different system components are concerned? I’ve tried looking for one, but I wasn’t able to find it (besides https://docs.sentry.io/server/queue/, which is pretty light on details).
Here is what I’ve gathered and guessed:

1. The report is sent by the client to the dashboard node.
2. The dashboard node stores the raw event in the cache you mentioned. (Is this cache the same as what is referred to as the queue/broker?)
3. The report is picked up from the cache by the worker node and written into Postgres. (What I’m confused about is why the worker can pick up the cache key but not the item. Are they stored in different systems, and does only one of them have a TTL? See the sketch below.)
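To make that last question concrete, here is a minimal sketch of the pattern the behaviour suggests. This is not Sentry’s actual code, and the key names, TTL, and helper are invented; the idea is that the broker carries only a key, while the event body sits in a separate cache entry with a TTL, so a backlogged worker can dequeue a key whose body has already expired:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

EVENT_TTL = 3600  # hypothetical TTL on the cached event body, in seconds

def enqueue(event_id, event_body):
    # The raw event body goes into a cache entry that expires...
    r.setex("event-body:%s" % event_id, EVENT_TTL, json.dumps(event_body))
    # ...while the work queue only ever carries the key, with no TTL.
    r.lpush("work-queue", event_id)

def process_one():
    event_id = r.rpop("work-queue")
    if event_id is None:
        return  # nothing queued
    body = r.get("event-body:%s" % event_id.decode())
    if body is None:
        # The queue entry outlived the cached body: the worker sees the
        # key but not the item -- the symptom described above.
        print("cache miss for %s" % event_id.decode())
        return
    save_event(json.loads(body))  # stand-in for the real processing step

def save_event(event):
    print("processing", event)
```

If something like this is what Sentry does, a worker that falls behind by more than the cache TTL would explain the key-without-item errors.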
I know that it’s an old post, but I just had the same error and the purge didn’t help me.
So here’s what I did:
I have a Docker on-premise configuration, where I had to add a special network (so that my nginx-proxy could see Sentry).
Using “sentry queues list” on the web instance and on the worker instance, I found out that the two were not linked to the same Redis instance.
So the solution for me was to edit the docker-compose.yml and add links to the default conf, like so:
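Something along these lines. This is a minimal sketch rather than a complete file, and the service names and image tags are assumptions based on the stock on-premise compose file of that era, so adjust them to match yours:

```yaml
web:
  image: sentry:8.12
  links:
    - redis      # web and worker must resolve the SAME redis
    - postgres
worker:
  image: sentry:8.12
  command: run worker
  links:
    - redis      # without this link the worker can end up on a different
    - postgres   # redis instance, splitting the queues as described above
redis:
  image: redis:3
postgres:
  image: postgres:9.5
```

After recreating the containers, running “sentry queues list” from both the web and the worker container should show them pointing at the same queues.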