Worker cannot get/release lock using docker setup

I’ve created a setup using docker-compose, and everything is running except the workers.
All I get from them is an error every minute:

worker_1     | 19:36:53 [WARNING] sentry.utils.locking.lock: Failed to release <Lock: 'scheduler.process'> due to error: ResponseError('No lock at key exists at key: l:scheduler.process',)
worker_1     | Traceback (most recent call last):
worker_1     |   File "/usr/local/lib/python2.7/site-packages/sentry/utils/locking/lock.py", line 57, in release
worker_1     |     self.backend.release(self.key, self.routing_key)
worker_1     |   File "/usr/local/lib/python2.7/site-packages/sentry/utils/locking/backends/redis.py", line 55, in release
worker_1     |     delete_lock(client, (self.prefix_key(key), ), (self.uuid, ))
worker_1     |   File "/usr/local/lib/python2.7/site-packages/sentry/utils/redis.py", line 235, in call_script
worker_1     |     return script(keys, args, client)
worker_1     |   File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2694, in __call__
worker_1     |     return client.evalsha(self.sha, len(keys), *args)
worker_1     |   File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 1944, in evalsha
worker_1     |     return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args)
worker_1     |   File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 573, in execute_command
worker_1     |     return self.parse_response(connection, command_name, **options)
worker_1     |   File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 585, in parse_response
worker_1     |     response = connection.read_response()
worker_1     |   File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 582, in read_response
worker_1     |     raise response
worker_1     | ResponseError: No lock at key exists at key: l:scheduler.process

docker-compose.yml

version: '3.4'
# ...
x-defaults: &defaults
  restart: unless-stopped
  build: .
  depends_on:
    - redis
    - postgres
    - memcached
  env_file: .env
  environment:
    SENTRY_MEMCACHED_HOST: memcached
    SENTRY_REDIS_HOST: redis
    SENTRY_POSTGRES_HOST: postgres
  volumes:
    - sentry-data:/var/lib/sentry/files
# ...
  redis:
    restart: unless-stopped
    image: redis:3.2-alpine
  postgres:
    restart: unless-stopped
    image: postgres:9.5
    volumes:
      - sentry-postgres:/var/lib/postgresql/data
  web:
    <<: *defaults
    expose:
      - 9000
# ...
  worker:
    <<: *defaults
    command: run worker
# ...

I’m not sure exactly what you’re doing, but it sounds like your Redis might be evicting keys or otherwise isn’t working correctly.

I’m trying to run Sentry in Docker containers using docker-compose, with the docker-compose.yml shown above.

The Web UI is working well, but it shows this error message:
Background workers haven't checked in recently. It seems that you have a backlog of 1208 tasks. Either your workers aren't running or you need more capacity.

Looking at the logs, everything seems to work, except that the worker is not getting the lock and incoming events are not processed.

How can I check if redis is working correctly?

Maybe the cron container is not working correctly either?

cron_1       | celery beat v3.1.18 (Cipater) is starting.
cron_1       | __    -    ... __   -        _
cron_1       | Configuration ->
cron_1       |     . broker -> redis://redis:6379/0
cron_1       |     . loader -> celery.loaders.app.AppLoader
cron_1       |     . scheduler -> celery.beat.PersistentScheduler
cron_1       |     . db -> /tmp/sentry-celerybeat
cron_1       |     . logfile -> [stderr]@%INFO
cron_1       |     . maxinterval -> now (0s)
sentry_cron_1 exited with code 0
cron_1       | 07:09:14 [INFO] sentry.plugins.github: apps-not-configured

I’m not entirely sure; the docker-compose file provided is meant as an example of a system that you can debug, understand, and run yourself.

I’m not sure how much memory is allocated to things, or whether Redis is crashing or running out of memory. But the fact that it’s losing keys means something is wrong, or something unexpected is happening to your Redis instance, so you’ll have to debug what the issue might be.
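To check Redis from the outside, a few redis-cli commands against the redis service are usually enough to see whether it’s reachable, how much memory it’s using, and whether it has started evicting keys. A rough sketch, assuming the service name redis from your compose file:

# Is Redis reachable at all? Should answer PONG.
docker-compose exec redis redis-cli ping

# Memory usage and any configured maxmemory limit
docker-compose exec redis redis-cli info memory

# An eviction policy such as allkeys-lru could drop lock keys under memory pressure
docker-compose exec redis redis-cli config get maxmemory-policy

# evicted_keys in the stats section shows whether keys have been evicted so far
docker-compose exec redis redis-cli info stats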

It seems to be an issue with my internal network. Removing the special network configuration and running Sentry directly on port 9000 solves the issue, so I’ll need to have a deeper look at it.
I’m using the nginx-proxy Docker container with SSL support.

I used to have the following networks section in my docker-compose.yml

networks:
  default:
    external:
      name: nginx-proxy_default

It was indeed an issue with the network setup.

This one is working now

networks:
  nginx-proxy:
    external:
      name: nginx-proxy_default
  backend:

# ...
  web:
    <<: *defaults
    networks:
      - nginx-proxy
      - backend
    expose:
      - 9000

  cron:
    <<: *defaults
    command: run cron
    networks:
      - backend

  worker:
    <<: *defaults
    command: run worker
    networks:
      - backend
# ...
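
For completeness: since the workers are now only attached to the backend network, the backing services elided with # ... above presumably need to join backend as well so the workers can still reach them. Roughly, with the same service definitions as before and just the networks key added:

  redis:
    restart: unless-stopped
    image: redis:3.2-alpine
    networks:
      - backend
  # postgres and memcached get the same networks entry

That keeps redis, postgres and memcached off the nginx-proxy network, while web, cron and worker can still reach them over backend.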