Issues lost from the Issues panel but still counted normally in Stats

Hi! I’m using Sentry version 10.1.0.dev0, and I noticed that issues have gone “missing” from the Issues panel (note in the screenshot below that the bar chart shows 0 issues for the last few hours)


But in the Stats panel events are still being received normally (screenshot at https://imgur.com/hYFxaYA - I could only upload one image per topic post), and in Admin → System → Overview the event throughput also looks normal.
However, in Admin → Queue → Global Throughput something strange happened (screenshot at https://imgur.com/KQ2zNLx), and it looks like issues stopped being processed from that point on.
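
If container logs would help, I can pull them from the services that handle event processing (service names are taken from my docker-compose.yml below - I’m assuming these are the relevant ones); just let me know what to look for:

docker-compose ps                                       # check whether all containers are still up
docker-compose logs --tail=100 worker                   # Celery worker that drains the queue
docker-compose logs --tail=100 post-process-forwarder   # triggers post-processing of stored events
docker-compose logs --tail=100 snuba-consumer           # writes events into ClickHouse via Snuba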

My setup is an Amazon EC2 t2.medium instance with nothing but Sentry running on it.

What I’ve tried so far is running docker-compose down and then docker-compose up -d, but that didn’t fix it (exact commands below).
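
For completeness, this is exactly what I ran from the onpremise directory:

docker-compose down
docker-compose up -d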

Please help!

Config Files:

config.yml
mail.backend: 'smtp'  # Use dummy if you want to disable email entirely
mail.host: 'email-smtp.us-east-1.amazonaws.com'
mail.port: 587
mail.username: 'OMITTED'
mail.password: 'OMITTED'
mail.use-tls: true
mail.from: 'OMITTED'

system.secret-key: 'OMITTED'

filestore.backend: 'filesystem'
filestore.options:
  location: '/data/files'
dsym.cache-path: '/data/dsym-cache'
releasefile.cache-path: '/data/releasefile-cache'

system.internal-url-prefix: 'http://web:9000'
symbolicator.enabled: true
symbolicator.options:
  url: "http://symbolicator:3021"

transaction-events.force-disable-internal-project: true

github-app.id: OMITTED
github-app.name: 'OMITTED'
github-app.client-id: 'OMITTED'
github-app.client-secret: 'OMITTED'
github-app.private-key: OMITTED

sentry.conf.py
from sentry.conf.server import *  # NOQA

DATABASES = {
    "default": {
        "ENGINE": "sentry.db.postgres",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "",
        "HOST": "postgres",
        "PORT": "",
    }
}

# You should not change this setting after your database has been created
# unless you have altered all schemas first
SENTRY_USE_BIG_INTS = True

# If you're expecting any kind of real traffic on Sentry, we highly recommend
# configuring the CACHES and Redis settings

###########
# General #
###########

# Instruct Sentry that this install intends to be run by a single organization
# and thus various UI optimizations should be enabled.
SENTRY_SINGLE_ORGANIZATION = False

SENTRY_OPTIONS["system.event-retention-days"] = int(env('SENTRY_EVENT_RETENTION_DAYS', '30'))

#########
# Redis #
#########

# Generic Redis configuration used as defaults for various things including:
# Buffers, Quotas, TSDB

SENTRY_OPTIONS["redis.clusters"] = {
    "default": {
        "hosts": {0: {"host": "redis", "password": "", "port": "6379", "db": "0"}}
    }
}

#########
# Queue #
#########

# See https://docs.getsentry.com/on-premise/server/queue/ for more
# information on configuring your queue broker and workers. Sentry relies
# on a Python framework called Celery to manage queues.

rabbitmq_host = None
if rabbitmq_host:
    BROKER_URL = "amqp://{username}:{password}@{host}/{vhost}".format(
        username="guest", password="guest", host=rabbitmq_host, vhost="/"
    )
else:
    BROKER_URL = "redis://:{password}@{host}:{port}/{db}".format(
        **SENTRY_OPTIONS["redis.clusters"]["default"]["hosts"][0]
    )


#########
# Cache #
#########

# Sentry currently utilizes two separate mechanisms. While CACHES is not a
# requirement, it will optimize several high throughput patterns.

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": ["memcached:11211"],
        "TIMEOUT": 3600,
    }
}

# A primary cache is required for things such as processing events
SENTRY_CACHE = "sentry.cache.redis.RedisCache"

DEFAULT_KAFKA_OPTIONS = {
    "bootstrap.servers": "kafka:9092",
    "message.max.bytes": 50000000,
    "socket.timeout.ms": 1000,
}

SENTRY_EVENTSTREAM = "sentry.eventstream.kafka.KafkaEventStream"
SENTRY_EVENTSTREAM_OPTIONS = {"producer_configuration": DEFAULT_KAFKA_OPTIONS}

KAFKA_CLUSTERS["default"] = DEFAULT_KAFKA_OPTIONS

###############
# Rate Limits #
###############

# Rate limits apply to notification handlers and are enforced per-project
# automatically.

SENTRY_RATELIMITER = "sentry.ratelimits.redis.RedisRateLimiter"

##################
# Update Buffers #
##################

# Buffers (combined with queueing) act as an intermediate layer between the
# database and the storage API. They will greatly improve efficiency on large
# numbers of the same events being sent to the API in a short amount of time.
# (read: if you send any kind of real data to Sentry, you should enable buffers)

SENTRY_BUFFER = "sentry.buffer.redis.RedisBuffer"

##########
# Quotas #
##########

# Quotas allow you to rate limit individual projects or the Sentry install as
# a whole.

SENTRY_QUOTAS = "sentry.quotas.redis.RedisQuota"

########
# TSDB #
########

# The TSDB is used for building charts as well as making things like per-rate
# alerts possible.

SENTRY_TSDB = "sentry.tsdb.redissnuba.RedisSnubaTSDB"

#########
# SNUBA #
#########

SENTRY_SEARCH = "sentry.search.snuba.EventsDatasetSnubaSearchBackend"
SENTRY_SEARCH_OPTIONS = {}
SENTRY_TAGSTORE_OPTIONS = {}

###########
# Digests #
###########

# The digest backend powers notification summaries.

SENTRY_DIGESTS = "sentry.digests.backends.redis.RedisBackend"

##############
# Web Server #
##############

SENTRY_WEB_HOST = "0.0.0.0"
SENTRY_WEB_PORT = 9000
SENTRY_WEB_OPTIONS = {
    "http": "%s:%s" % (SENTRY_WEB_HOST, SENTRY_WEB_PORT),
    "protocol": "uwsgi",
    # This is needed to prevent https://git.io/fj7Lw
    "uwsgi-socket": None,
    "http-keepalive": True,
    "http-chunked-input": True,
    "memory-report": False,
    # 'workers': 3,  # the number of web workers
}

###########
# SSL/TLS #
###########

# If you're using a reverse SSL proxy, you should enable the X-Forwarded-Proto
# header and enable the settings below

# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# SESSION_COOKIE_SECURE = True
# CSRF_COOKIE_SECURE = True
# SOCIAL_AUTH_REDIRECT_IS_HTTPS = True

# End of SSL/TLS settings

############
# Features #
############

SENTRY_FEATURES["projects:sample-events"] = False
SENTRY_FEATURES.update(
    {
        feature: True
        for feature in (
            "organizations:discover",
            "organizations:events",
            "organizations:discover-basic",
            "organizations:discover-query",
            "organizations:events-v2",
            "organizations:global-views",
            "organizations:integrations-issue-basic",
            "organizations:integrations-issue-sync",
            "organizations:invite-members",
            "organizations:sso-basic",
            "organizations:sso-rippling",
            "organizations:sso-saml2",
            "projects:custom-inbound-filters",
            "projects:data-forwarding",
            "projects:discard-groups",
            "projects:plugins",
            "projects:rate-limits",
            "projects:servicehooks",
        )
    }
)

######################
# GitHub Integration #
######################

GITHUB_EXTENDED_PERMISSIONS = ['repo']

#########################
# Bitbucket Integration #
#########################

# BITBUCKET_CONSUMER_KEY = 'YOUR_BITBUCKET_CONSUMER_KEY'
# BITBUCKET_CONSUMER_SECRET = 'YOUR_BITBUCKET_CONSUMER_SECRET'

GEOIP_PATH_MMDB = "/etc/sentry/GeoLite2-City.mmdb"

docker-compose.yml
version: '3.4'
x-restart-policy: &restart_policy
  restart: unless-stopped
x-sentry-defaults: &sentry_defaults
  << : *restart_policy
  build:
    context: ./sentry
    args:
      - SENTRY_IMAGE
  image: sentry-onpremise-local
  depends_on:
    - redis
    - postgres
    - memcached
    - smtp
    - snuba-api
    - snuba-consumer
    - snuba-replacer
    - symbolicator
    - kafka
  environment:
    SENTRY_CONF: '/etc/sentry'
    SNUBA: 'http://snuba-api:1218'
  volumes:
    - 'sentry-data:/data'
    - './sentry:/etc/sentry'
x-snuba-defaults: &snuba_defaults
  << : *restart_policy
  depends_on:
    - redis
    - clickhouse
    - kafka
  image: 'getsentry/snuba:latest'
  environment:
    SNUBA_SETTINGS: docker
    CLICKHOUSE_HOST: clickhouse
    DEFAULT_BROKERS: 'kafka:9092'
    REDIS_HOST: redis
    UWSGI_MAX_REQUESTS: '10000'
    UWSGI_DISABLE_LOGGING: 'true'
services:
  smtp:
    << : *restart_policy
    image: tianon/exim4
    volumes:
      - 'sentry-smtp:/var/spool/exim4'
      - 'sentry-smtp-log:/var/log/exim4'
  memcached:
    << : *restart_policy
    image: 'memcached:1.5-alpine'
  redis:
    << : *restart_policy
    image: 'redis:5.0-alpine'
    volumes:
      - 'sentry-redis:/data'
  postgres:
    << : *restart_policy
    image: 'postgres:9.6'
    environment:
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    volumes:
      - 'sentry-postgres:/var/lib/postgresql/data'
  zookeeper:
    << : *restart_policy
    image: 'confluentinc/cp-zookeeper:5.1.2'
    environment:
      ZOOKEEPER_CLIENT_PORT: '2181'
      CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
      ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: 'WARN'
      ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL: 'WARN'
    volumes:
      - 'sentry-zookeeper:/var/lib/zookeeper/data'
      - 'sentry-zookeeper-log:/var/lib/zookeeper/log'
      - 'sentry-secrets:/etc/zookeeper/secrets'
  kafka:
    << : *restart_policy
    depends_on:
      - zookeeper
    image: 'confluentinc/cp-kafka:5.1.2'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://kafka:9092'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
      KAFKA_LOG4J_LOGGERS: 'kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN'
      KAFKA_LOG4J_ROOT_LOGLEVEL: 'WARN'
      KAFKA_TOOLS_LOG4J_LOGLEVEL: 'WARN'
    volumes:
      - 'sentry-kafka:/var/lib/kafka/data'
      - 'sentry-kafka-log:/var/lib/kafka/log'
      - 'sentry-secrets:/etc/kafka/secrets'
  clickhouse:
    << : *restart_policy
    image: 'yandex/clickhouse-server:19.11'
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - 'sentry-clickhouse:/var/lib/clickhouse'
  snuba-api:
    << : *snuba_defaults
  snuba-consumer:
    << : *snuba_defaults
    command: consumer --auto-offset-reset=latest --max-batch-time-ms 750
  snuba-replacer:
    << : *snuba_defaults
    command: replacer --auto-offset-reset=latest --max-batch-size 3
  snuba-cleanup:
    << : *snuba_defaults
    image: snuba-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'getsentry/snuba:latest'
    command: '"*/5 * * * * gosu snuba snuba cleanup --dry-run False"'
  symbolicator:
    << : *restart_policy
    image: 'getsentry/symbolicator:latest'
    volumes:
      - 'sentry-symbolicator:/data'
    command: run
  symbolicator-cleanup:
    << : *restart_policy
    image: symbolicator-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'getsentry/symbolicator:latest'
    command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
    volumes:
      - 'sentry-symbolicator:/data'
  web:
    << : *sentry_defaults
    ports:
      - '9000:9000/tcp'
  cron:
    << : *sentry_defaults
    command: run cron
  worker:
    << : *sentry_defaults
    command: run worker
  post-process-forwarder:
    << : *sentry_defaults
    # Increase `--commit-batch-size 1` below to deal with high-load environments.
    command: run post-process-forwarder --commit-batch-size 1
  sentry-cleanup:
    << : *sentry_defaults
    image: sentry-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'sentry-onpremise-local'
    command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
volumes:
  sentry-data:
    external: true
  sentry-postgres:
    external: true
  sentry-redis:
    external: true
  sentry-zookeeper:
    external: true
  sentry-kafka:
    external: true
  sentry-clickhouse:
    external: true
  sentry-symbolicator:
    external: true
  sentry-secrets:
  sentry-smtp:
  sentry-zookeeper-log:
  sentry-kafka-log:
  sentry-smtp-log:
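
Since rabbitmq_host is None in my sentry.conf.py, BROKER_URL falls back to the redis container, so I assume the Celery queue backlogs live in that Redis instance (db 0, no password). If the backlog sizes would help, I think I could dump them with something like this - the queue key names (e.g. "default") are a guess on my part:

docker-compose exec redis redis-cli -n 0 keys '*'      # list keys to spot the queue names (may be slow if there are many keys)
docker-compose exec redis redis-cli -n 0 llen default  # backlog length of one queue, once I know its key name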