Redis Error 111 connecting to redis:6379

Hi

I have the following errors:

  1. consumer: Cannot connect to redis://redis:6379/0: BusyLoadingError('Redis is loading
  2. Couldn’t apply scheduled task check-monitors: Redis is loading the dataset in memory
  3. BusyLoadingError on /api/0/organizations/{organization_slug}/issues/: Redis is loading the dataset in memory

I have only extended the root partition on my system.

Before that, I stopped the Docker containers and then restarted Compose with
docker-compose down && docker-compose build && docker-compose up -d

However, it seems like this issue is filling up the disk, and apparently I will soon lose the free space I added.

Also, using the top command on my system I can see that the redis-server process is consuming a lot of RAM; only about 150 MB out of 16 GB is free.
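
For reference, the same thing can be checked from the Redis side; a quick sketch, assuming the stock redis service name from the self-hosted docker-compose file:

  # memory actually held by Redis, as Redis reports it
  docker-compose exec redis redis-cli info memory | grep used_memory_human
  # per-container CPU/memory snapshot, to compare against top
  docker stats --no-stream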

Is there any reason why this is happening?

I did not touch any configs.

Thanks in advance.

This simply indicates you need more resources for Redis: ERROR: LOADING Redis is loading the dataset in memory
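
In case it helps while that is happening, a quick way to see whether Redis is still busy loading its dataset from disk (a sketch, assuming the stock redis service name):

  # loading:1 means Redis is still reading its RDB/AOF file back into memory and will
  # reject commands with LOADING until it finishes; loading_eta_seconds gives an estimate
  docker-compose exec redis redis-cli info persistence | grep loading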

If Redis is consuming lots of memory and disk space, that means it is holding onto a lot of records. Are you ingesting a lot of events? Also, what version are you on?

BYK, as always, thank you for your swift reply.

What I did is exactly what you proposed.

I found that article on the Internet (ERROR: LOADING Redis is loading the dataset in memory), then went into the Redis container and did the following:

  1. Logged into the container that runs Redis, in my case redis:5.0-alpine.
  2. Ran redis-cli and then the info command.
  3. Saw, at the end of that command's output, the keyspace info showing around 73,000 keys:

     Keyspace
     db0:keys=73868

  4. Ran the flushall command (see the command sketch below).
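
Roughly the same steps, as a sketch run from the host (service name redis assumed from the stock docker-compose file); note that flushall is destructive and wipes every key, including any queued Celery jobs:

  # number of keys per database
  docker-compose exec redis redis-cli info keyspace
  # drop everything in all databases (queued jobs are lost too)
  docker-compose exec redis redis-cli flushall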

After that, CPU, RAM, and swap usage went down. BTW, the machine has 16 GB of RAM, 8 CPUs, and 250 GB of disk.

Anyhow, after a while resource usage slowly crept back up and eventually reached the same values as before. Also, disk usage was growing by about 1 GB every 2 minutes.

Then I inspected all the projects in Sentry and saw that one project was flooding Sentry with a huge number of events. It was a Kafka instance in our environment. So I removed those events from Sentry, and the developer stopped them on his side as well. It was around 2-5K events per minute.

In the end, in my / dir I have this situation: the sentry-kafka volume used around 120 GB of logs. One example can be seen below.
1.1G ./var/lib/docker/volumes/sentry-kafka/_data/events-0/00000000000000223601.log
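
For anyone trying to find where the space is going, a quick sketch (paths assume Docker's default data root under /var/lib/docker):

  # largest Docker volumes first
  sudo du -sh /var/lib/docker/volumes/* | sort -rh | head
  # Docker's own breakdown of images, containers and volumes
  docker system df -v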

In talks with the developer team, they told me that on the previous Sentry version (Sentry 8.22.0) they had 3+ million events, but probably not as continuous as is the case now.

I am on Sentry 21.2.0.dev0a01372a.

Ah, this is good. I asked because there was an important improvement regarding Redis usage for self-hosted around 20.11.0 and wanted to make sure you have that patch in.

Yup, Sentry pre-v10 has a quite different architecture and doesn’t need Kafka. It also relies on async jobs a bit less (newer versions rely on them more, which puts more pressure on Redis for self-hosted), so this is not very surprising.

If Redis is becoming an issue here, you may consider adding RabbitMQ as the broker for Celery. You’d still need Redis for other things, but at least the pressure and the load should go down quite a bit.
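
A rough sketch of what that could look like; the rabbitmq service name, image tag, and BROKER_URL value below are assumptions rather than official self-hosted instructions:

  # 1. add a broker service, e.g. in docker-compose.override.yml:
  #      rabbitmq:
  #        image: rabbitmq:3-management
  # 2. point Celery at it, e.g. in sentry/sentry.conf.py:
  #      BROKER_URL = "amqp://guest:guest@rabbitmq:5672//"
  # 3. recreate the services so the workers pick up the new broker
  docker-compose up -d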
