I get this error while loading Sentry: "Background workers haven't checked in recently. This is likely an issue with your configuration or the workers aren't running."
These three processes are running under supervisor: sentry run web -w 4, sentry run worker -c 4 (2 processes), and sentry run cron.
HTTP response from /api/0/internal/health/:
{"healthy": {"WarningStatusCheck": true, "CeleryAppVersionCheck": true, "CeleryAliveCheck": false}, "problems": [{"url": "http://sentry.mydomain.com/manage/queue/", "message": "Background workers haven't checked in recently. This is likely an issue with your configuration or the workers aren't running.", "id": "e42fb36a2766cde885e217b8e4524e99", "severity": "critical"}]}
Here is my Sentry config:
mail.backend: 'smtp' # Use dummy if you want to disable email entirely
mail.host: 'smtp.gmail.com'
mail.port: 587
mail.username: '***'
mail.password: '***'
mail.use-tls: true
# The email address to send on behalf of
mail.from: 'Sentry <***>'
# If you'd like to configure email replies, enable this.
# mail.enable-replies: false
# When email-replies are enabled, this value is used in the Reply-To header
# mail.reply-hostname: ''
# If you're using mailgun for inbound mail, set your API key and configure a
# route to forward to /api/hooks/mailgun/inbound/
# mail.mailgun-api-key: ''
###################
# System Settings #
###################
# If this file ever becomes compromised, it's important to generate a new key.
# Changing this value will result in all current sessions being invalidated.
# A new key can be generated with `$ sentry config generate-secret-key`
system.secret-key: '***'
# The ``redis.clusters`` setting is used, unsurprisingly, to configure Redis
# clusters. These clusters can be then referred to by name when configuring
# backends such as the cache, digests, or TSDB backend.
redis.clusters:
  default:
    hosts:
      0:
        host: 127.0.0.1
        port: 6379
################
# File storage #
################
# Uploaded media uses these `filestore` settings. The available
# backends are either `filesystem` or `s3`.
filestore.backend: 'filesystem'
filestore.options:
  location: '/opt/sentry/media'
And here is my sentry.conf.py:
# This file is just Python, with a touch of Django, which means
# you can inherit and tweak settings to your heart's content.
from sentry.conf.server import *
import os.path
CONF_ROOT = os.path.dirname(__file__)
DATABASES = {
    'default': {
        'ENGINE': 'sentry.db.postgres',
        'NAME': 'sentry',
        'USER': 'sentry',
        'PASSWORD': '',
        'HOST': '',
        'PORT': '',
        'AUTOCOMMIT': True,
        'ATOMIC_REQUESTS': False,
    }
}
# You should not change this setting after your database has been created
# unless you have altered all schemas first
SENTRY_USE_BIG_INTS = True
# If you're expecting any kind of real traffic on Sentry, we highly recommend
# configuring the CACHES and Redis settings
###########
# General #
###########
# Instruct Sentry that this install intends to be run by a single organization
# and thus various UI optimizations should be enabled.
SENTRY_SINGLE_ORGANIZATION = True
DEBUG = True
#########
# Cache #
#########
# Sentry currently utilizes two separate mechanisms. While CACHES is not a
# requirement, it will optimize several high throughput patterns.
# If you wish to use memcached, install the dependencies and adjust the config
# as shown:
#
# pip install python-memcached
#
# CACHES = {
#     'default': {
#         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
#         'LOCATION': ['127.0.0.1:11211'],
#     }
# }
# A primary cache is required for things such as processing events
SENTRY_CACHE = 'sentry.cache.redis.RedisCache'
#########
# Queue #
#########
# See https://docs.sentry.io/on-premise/server/queue/ for more
# information on configuring your queue broker and workers. Sentry relies
# on a Python framework called Celery to manage queues.
BROKER_URL = 'redis://localhost:6379'
###############
# Rate Limits #
###############
# Rate limits apply to notification handlers and are enforced per-project
# automatically.
SENTRY_RATELIMITER = 'sentry.ratelimits.redis.RedisRateLimiter'
##################
# Update Buffers #
##################
# Buffers (combined with queueing) act as an intermediate layer between the
# database and the storage API. They will greatly improve efficiency on large
# numbers of the same events being sent to the API in a short amount of time.
# (read: if you send any kind of real data to Sentry, you should enable buffers)
SENTRY_BUFFER = 'sentry.buffer.redis.RedisBuffer'
##########
# Quotas #
##########
# Quotas allow you to rate limit individual projects or the Sentry install as
# a whole.
SENTRY_QUOTAS = 'sentry.quotas.redis.RedisQuota'
########
# TSDB #
########
# The TSDB is used for building charts as well as making things like per-rate
# alerts possible.
SENTRY_TSDB = 'sentry.tsdb.redis.RedisTSDB'
###########
# Digests #
###########
# The digest backend powers notification summaries.
SENTRY_DIGESTS = 'sentry.digests.backends.redis.RedisBackend'
##############
# Web Server #
##############
# If you're using a reverse SSL proxy, you should enable the X-Forwarded-Proto
# header and uncomment the following settings
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
# If you're not hosting at the root of your web server,
# you need to uncomment and set it to the path where Sentry is hosted.
# FORCE_SCRIPT_NAME = '/sentry'
SENTRY_WEB_HOST = '127.0.0.1'
SENTRY_WEB_PORT = 9000
SENTRY_WEB_OPTIONS = {
    'workers': 1,  # the number of web workers
    # 'protocol': 'uwsgi',  # Enable uwsgi protocol instead of http
}
SENTRY_BEACON = False
SENTRY_FEATURES['organizations:sso'] = True
# plugins
GITHUB_APP_ID = 'GitHub Application Client ID'
GITHUB_API_SECRET = 'GitHub Application Client Secret'
GITHUB_EXTENDED_PERMISSIONS = ['repo']
Sure. I'm using systemd on Ubuntu 16.04 instead of supervisord; here are the three unit files:
[Unit]
Description=Sentry Background Worker
After=network.target
[Service]
Type=simple
User=sentry
Group=sentry
WorkingDirectory=/opt/sentry
Environment=SENTRY_CONF=/opt/sentry
ExecStart=/opt/virtualenvs/sentry/bin/sentry run worker -l INFO
[Install]
WantedBy=multi-user.target

[Unit]
Description=Sentry Beat Service
After=network.target
[Service]
Type=simple
User=sentry
Group=sentry
WorkingDirectory=/opt/sentry
Environment=SENTRY_CONF=/etc/sentry
ExecStart=/opt/virtualenvs/sentry/bin/sentry run cron
[Install]
WantedBy=multi-user.target

[Unit]
Description=Sentry Main Service
After=network.target
Requires=sentry-worker.service
Requires=sentry-cron.service
[Service]
Type=simple
User=sentry
Group=sentry
WorkingDirectory=/opt/sentry
Environment=SENTRY_CONF=/opt/sentry
ExecStart=/opt/virtualenvs/sentry/bin/sentry run web
[Install]
WantedBy=multi-user.target
If I run them manually, what should I be looking for? Say a task makes it to a worker but doesn't appear in Sentry: what does that mean? Or if it never makes it to a worker, what could that mean?
My Redis is 3.0.6, the latest in Ubuntu 16.04. Could that be a problem? Package versions are the same except redis-py-cluster, which was not even installed. Where is it supposed to come from? Installing it just to try something out…
This is all especially difficult because failures are inconsistent: some events make it, some don't. I was wondering if anybody except me and the thread starter has had the same issue and could point me in the right direction.
Redis itself should be OK, but an upgrade can't hurt.
Running the components manually will tell you whether there is any systemd issue related to the virtualenv.
I also had this issue during the initial setup, but I never encountered it again after getting everything right (and my config is similar to yours).
As a last resort you can try using 127.0.0.1 instead of localhost in sentry.conf.py.
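Concretely, that is the broker line already shown in the config above:

# sentry.conf.py: point the Celery broker at the loopback IP explicitly,
# which rules out any localhost name resolution differences between processes.
BROKER_URL = 'redis://127.0.0.1:6379'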
Running the Celery worker manually didn't get me closer to the truth either. I can see all calls create a bunch of tasks each, but Sentry still claims the workers haven't checked in. Is this an issue with the web app then?
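One more thing I can watch while sending events: whether the queues in Redis are actually being drained (a sketch, assuming the default broker at 127.0.0.1:6379 and Celery's usual behaviour of keeping each queue as a Redis list named after the queue; the event queue names are only examples and may differ between Sentry versions):

import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379)

# If tasks are created but never consumed, these lengths keep growing;
# if the workers are draining them, they stay near zero.
for queue in ('default', 'events.preprocess_event', 'events.save_event'):
    print(queue, r.llen(queue))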
So I just installed it on a server with 2 GB of RAM and that solved it. Or maybe it's just a coincidence, but I can't see any other differences between the installations.
I'm having the same issue. What are the workers checking into? Is this connecting to the web API, Postgres, or Redis? I'm guessing it has something to do with CeleryAliveCheck: false. How can I fix this?
So I have been searching for the same error and I think I found out why.
If you check the code, it checks whether the key "default" is there. When the queue is empty, the key should not exist at all. So by following the code, you see that you fall into the last case, because the key exists and its size is 0.
That is because the queue-length check does not verify that the key exists first; it just takes the length of 0 into account.
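In code, the situation I mean would look roughly like this (a hypothetical sketch of the behaviour described above, not Sentry's actual source):

import redis

def looks_like_the_reported_case(r, queue='default'):
    # An empty list is normally removed by Redis, so the odd state is the
    # 'default' key still existing with a length of 0; that is what seems
    # to trip the worker check.
    return bool(r.exists(queue)) and r.llen(queue) == 0

print(looks_like_the_reported_case(redis.StrictRedis(host='127.0.0.1', port=6379)))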