We had a long-running 8.1 install that eventually died due to database corruption, so we set up a brand-new install of 8.19 (we can't go newer because we use database extensions).
Previously we had a cluster of 3 Redis containers with 2 GB of RAM each, which handled all of our traffic with no issues.
With the newer version of Sentry, the Redis containers are using large amounts of memory (we raised the container limit to 10 GB each), and they keep hitting that limit, causing major instability when they crash.
Is there a way we can figure out why they are using so much memory, and are there any suggestions on how to decrease the load?
There's not really a way to decrease usage. We store lots of data in Redis, not just the queue: all the time series data, as well as some other metadata, lives in Redis.
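If you want to see where the memory is actually going, one rough approach is to sample keys and total up MEMORY USAGE by key prefix. Below is a minimal sketch using redis-py, assuming a Redis server >= 4.0 (where MEMORY USAGE was introduced); the hostname and sample size are placeholders, not anything Sentry-specific:

```python
# Rough sketch: sample the keyspace and aggregate memory by key prefix.
# Assumes redis-py and a Redis server >= 4.0 (for MEMORY USAGE).
from collections import Counter

import redis

r = redis.Redis(host="redis-1.internal", port=6379, db=0, decode_responses=True)

usage = Counter()
for i, key in enumerate(r.scan_iter(count=1000)):
    prefix = key.split(":", 1)[0]       # group keys by their leading prefix
    usage[prefix] += r.memory_usage(key) or 0
    if i >= 100_000:                    # sample; don't walk the whole keyspace
        break

for prefix, nbytes in usage.most_common(10):
    print(f"{prefix:20} {nbytes / 1024 / 1024:8.1f} MiB")
```

On Redis versions older than 4.0, `redis-cli --bigkeys` gives a cruder but similar picture of which keys dominate.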
In theory, if you're OK with losing graphs and some other metadata that isn't strictly required, you could run FLUSHDB on them and start over, and just do this periodically.
While I don’t recommend it, there’s nothing that’d technically break from this. It’d just result in a poorer product experience.
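If you do go that route, the flush itself is trivial. Here's a minimal sketch of the periodic wipe, where the hostnames are placeholders for your three Redis nodes (run it from cron or similar):

```python
# Minimal sketch of the periodic FLUSHDB approach; hostnames are placeholders.
# This throws away TSDB graphs and similar non-critical metadata.
import redis

REDIS_NODES = ["redis-1.internal", "redis-2.internal", "redis-3.internal"]

for host in REDIS_NODES:
    r = redis.Redis(host=host, port=6379, db=0)
    r.flushdb()  # drops every key in this database on that node
    print(f"flushed {host}")
```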
But Sentry intentionally stores lots of information in Redis, so there's not really a way to say “nah, don't do that.” It's core to the product.