I triggered tag deletion on on-premise Sentry for some of my projects in order to get rid of some PII data.
Since then (almost 2 days now) the CPU usage of the Postgres backend has been sitting at almost 100%.
I see the following query quite frequently getting scheduled in the DB:
```
delete from sentry_eventmapping
where id = any(array(
  select id
  from sentry_eventmapping
  where ("group_id" = ****)
  limit 100
));
```
I believe this is why the DB is becoming unavailable and the workers are unable to write new incoming events into it. As a result the queue size keeps increasing and events show up delayed in the UI (by around 3 hours in some cases since yesterday). The queue that is constantly growing is
events.preprocess_event
What would be the best way to resolve this issue? Can I somehow stop the deletion of the tags from the worker side? I think the deletion is happening periodically in the DB.
Thanks for the clarification.
I tried to update the column, but within a few minutes the status column gets set back to 2 automatically. I'm not sure what is setting that column.
You might need to stop the workers to prevent a race on it. It's also possible we didn't create a way to stop deletions in the version you're using; you'd have to purge the queues if that's the case.
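If it comes to that, purging would look roughly like the sketch below. This assumes the `sentry queues` CLI commands are available in your version; check before relying on it.

```
# List the queues and their current backlogs, then purge the one that keeps
# growing. Assumes the `sentry queues` CLI group exists in this Sentry version.
sentry queues list
sentry queues purge events.preprocess_event
```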
I had shut the cron process down during that period. Now when I bring the cron process back up, it starts updating the status field to 2 again.
Any ideas on how I can permanently stop this, even with the cron process running?
Is sentry_scheduleddeletion present in your installation? If so, you might need to remove the row from there (if it exists). I didn't think it was in 9, though.
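Something along these lines should show whether the table has a pending entry; the connection details and the `<id>` placeholder are just illustrative, adjust them for your install.

```
# Check for pending scheduled deletions (assumes the sentry_scheduleddeletion
# table exists in this version; adjust database name and user to your setup).
psql -U sentry -d sentry -c "SELECT * FROM sentry_scheduleddeletion;"

# If a matching row shows up, remove it by id (replace <id> with the real value).
psql -U sentry -d sentry -c "DELETE FROM sentry_scheduleddeletion WHERE id = <id>;"
```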
I found out the problem: my worker CPU was always running high. We were not limiting the number of subprocesses for the worker, so they spawned as many as they could, in our case as many as there are cores on the k8s node, which starved the other worker pods of resources.
We solved it by setting the -c parameter to 4, which limits the number of subprocesses allowed.
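For reference, the worker invocation looks roughly like this; the exact command line depends on how the k8s pod is configured.

```
# Cap the worker at 4 subprocesses instead of one per available CPU core.
sentry run worker -c 4
```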