Stop tag deletion process

I triggered tag deletion on on-premise Sentry for some of my projects in order to get rid of some PII data.

Since then (almost 2 days now) I have seen the CPU of the Postgres backend sitting at almost 100%.
I see the following query being scheduled in the DB quite frequently:

```
delete from sentry_eventmapping
where id = any(array(
    select id
    from sentry_eventmapping
    where ("group_id" = ****)
    limit 100
))
```

I believe this is why the DB is becoming unavailable and the workers are unable to write new incoming events into it. As a result the queue size keeps increasing and events show up delayed in the UI (around 3 hours in some cases since yesterday). The queue that is constantly growing is events.preprocess_event.

What would be the best way to resolve this issue? Can I somehow stop the deletion of the tags from the worker side? I think the deletion is happening periodically in the DB.

It's likely that if you change the TagKey row status column back to ObjectStatus.VISIBLE (which is likely a value of 0), it will abort the deletion.

thanks for the quick reply. In which DB table can I do that?

```
class TagKey(Model):
    """
    Stores references to available filters keys.
    """
    __core__ = False

    project_id = BoundedPositiveIntegerField(db_index=True)
    key = models.CharField(max_length=MAX_TAG_KEY_LENGTH)
    values_seen = BoundedPositiveIntegerField(default=0)
    label = models.CharField(max_length=64, null=True)
    status = BoundedPositiveIntegerField(
        choices=(
            (TagKeyStatus.VISIBLE, _('Visible')),
            (TagKeyStatus.PENDING_DELETION, _('Pending Deletion')),
            (TagKeyStatus.DELETION_IN_PROGRESS, _('Deletion in Progress')),
        ),
        default=TagKeyStatus.VISIBLE
    )

    class Meta:
        app_label = 'sentry'
        db_table = 'sentry_filterkey'
        unique_together = (('project_id', 'key'), )
```

The table is sentry_filterkey.
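For reference, a minimal sketch of that update, assuming (per the choices above) that VISIBLE is 0 and PENDING_DELETION / DELETION_IN_PROGRESS are 1 and 2; narrow the WHERE clause if only some of your projects' keys are affected:

```
-- Reset tag keys marked for deletion (assumed statuses 1 and 2) back to visible (0)
update sentry_filterkey
set status = 0
where status in (1, 2);
```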

Thanks for the clarification.
I tried to update the column, but within a few minutes the status column is set back to 2 automatically. Not sure what is setting that column.

You might need to stop workers to prevent a race on it. Also possible we didn’t create a way to stop deletions in the version you’re using. You’d have to purge queues if that’s the case.

I have purged the queues multiple times since this problem started. The queue grows back again.

If we stop the workers for a while and then do the update, do you think the column will stay put once we start the workers back up again?

  1. Stop workers
  2. Do update
  3. Purge queue

Guaranteed this stops the deletion.
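As a sanity check after step 2 (a sketch against the same sentry_filterkey table, with the status values assumed from the model's choices above), you can confirm nothing flips back once the workers are restarted:

```
-- Any rows returned here mean a tag key is still marked for deletion
select project_id, key, status
from sentry_filterkey
where status <> 0;
```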

thanks a lot…that worked out well

I had shut the cron process down during that period. Now when I bring the cron process back up, it starts setting the status field to 2 again.
Any ideas how I can permanently stop this, even with the cron process running?

Are you certain you purged the queue?

Is sentry_scheduleddeletion present in your installation? If so you might need to remove the row from there (if it exists). I didn’t think it was in 9 though.

The queue has been purged 3 or 4 times since then.

Where can I find sentry_scheduleddeletion in the installation? Is that in the DB or a configuration parameter?

It’s a database table.
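If you want to check from psql whether it exists in your installation, a minimal sketch (to_regclass needs PostgreSQL 9.4+):

```
-- Returns the table name if it exists, NULL otherwise
select to_regclass('sentry_scheduleddeletion');

-- If it exists, inspect it for a row that references the tag key deletion
-- select * from sentry_scheduleddeletion;
```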

I found out the problem. My worker CPU was always running high: we were not limiting the number of subprocesses for the worker, so they were spawning as many as they could, in our case as many as the cores available on the k8s node, which starved the other worker pods of resources.

We solved it by setting the -c parameter to 4, which limits the number of subprocesses allowed.