Excessive Redis keys

There is a problem where the number of Redis keys rises rapidly. With 8 GB of memory, we usually see only about 20k keys, but nearly 200k keys were created within 10 minutes, and eventually 99% of the Redis memory was full. Why did so many keys suddenly accumulate? We currently also have problems on the relay and worker side; I think these are related because they all communicate with Redis. I will leave additional questions about the worker and relay in the next two posts.

Here is an additional analysis. The increase in memory coincides with the increase in keys. The keys that increased match the pattern “c:1:e:S:N”, which we believe is generated on the worker side. What does that key mean? It did not accumulate steadily; it jumped by roughly 100k–200k keys within 10 minutes, taking up all of the Redis memory.
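To confirm which key pattern actually dominates the keyspace, one approach is to scan all key names and group them by prefix. Below is a minimal sketch of the grouping logic; the function name `count_key_prefixes` and the sample keys are my own illustration. On a live server you would feed it key names from redis-py's `scan_iter` instead of a hard-coded list.

```python
from collections import Counter

def count_key_prefixes(keys, depth=2):
    """Group Redis key names by their first `depth` colon-separated segments."""
    counts = Counter()
    for key in keys:
        prefix = ":".join(key.split(":")[:depth])
        counts[prefix] += 1
    return counts

# With a live server, replace `sample` with names from redis-py, e.g.:
#   r = redis.Redis()
#   sample = (k.decode() for k in r.scan_iter(count=1000))
sample = ["c:1:e:S:1", "c:1:e:S:2", "celery-task-meta-abc", "c:1:e:S:3"]
print(count_key_prefixes(sample, depth=3).most_common())
# → [('c:1:e', 3), ('celery-task-meta-abc', 1)]
```

Using `SCAN` (via `scan_iter`) rather than `KEYS *` matters here, since `KEYS` blocks the server — a bad idea on an instance that is already at 99% memory.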

Also, as far as I know, Sentry uses Python Celery as an asynchronous handler to do heavy work in the background, and it uses Redis as the message queue. If so, I would expect the keys to be released after event processing is over, but they were not.
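For context on why such keys normally do get released: when Celery uses Redis as its result backend, each finished task leaves a `celery-task-meta-<id>` key, and the `result_expires` setting controls how long those keys live before Redis expires them (Celery's default is one day). A minimal configuration sketch, with assumed URLs and app name — not Sentry's actual settings:

```python
from celery import Celery

# Hypothetical app; broker/backend URLs are placeholders.
app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",
)

# Each finished task stores a celery-task-meta-<id> key in Redis;
# result_expires sets the TTL on those keys (default: 1 day).
app.conf.result_expires = 3600  # expire task results after one hour
```

If tasks keep failing and being retried, intermediate state can pile up faster than expiry removes it, which would match the spike you describe.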

One more thing we know is that Sentry uses Redis for query caching. If so, the keys currently stacked in Redis may be cached queries. Is my understanding correct?

I want to know whether there is a problem with Redis, which is now 100% full of memory, and what that key means.

We normally use RabbitMQ for the worker task queue, but on self-hosted we rely on Redis so as not to introduce another service to deal with (and Redis should be okay for low volume).
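To illustrate the point above: with Celery, swapping the task queue between RabbitMQ and Redis is just a change of broker URL — the worker code itself is unchanged. A sketch with placeholder hostnames (not Sentry's real configuration):

```python
from celery import Celery

# Production-style setup with a dedicated RabbitMQ broker:
app = Celery("workers", broker="amqp://guest@localhost:5672//")

# Self-hosted setup reusing Redis, so no extra service is needed:
# app = Celery("workers", broker="redis://localhost:6379/0")
```

The trade-off is that Redis then serves double duty (broker, caches, and other state), so a spike in queued work competes for the same memory.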

Which version of Sentry are you using?

When you get a spike of events, this kind of behavior is not unusual, as fully processing events requires multiple async processing steps.

I raised this problem in the question “Worker MaxRetryError - Max retries exceeded with url: /api/1/store/ (caused by NewConnectionError) failed to establish a new connection [Errno -3] Temporary failure in name resolution” - #3 by seungjinlee

We could solve this problem by resolving the DNS error on the worker side.