The configured event stream backend does not need a forwarder process to enqueue post-process tasks. Exiting

When I deployed the post-process-forwarder to the Kubernetes cluster, it always exits. The log output is as follows:

```
01:50:53 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
01:51:02 [INFO] sentry.plugins.github: apps-not-configured
01:51:02 [DEBUG] sentry.digests: Validating Redis version…
01:51:02 [DEBUG] sentry.tsdb.redis: Validating Redis version…
The configured event stream backend does not need a forwarder process to enqueue post-process tasks. Exiting…
```

Environment variable information:
```
SNUBA=http://snuba-api:1218
SENTRY_REDIS_HOST=10.138.0.65
SENTRY_REDIS_PORT=6379
SENTRY_MEMCACHED_HOST=10.138.0.65
SENTRY_MEMCACHED_PORT=11211
SENTRY_EMAIL_HOST=10.138.0.65
SENTRY_SECRET_KEY=tLC8fz7W2NF%^M%sPA]5+yXR6I-vb9
SENTRY_DB_NAME=postgres
SENTRY_DB_USER=sentry
SENTRY_DB_PASSWORD=mysecretpassword
SENTRY_POSTGRES_HOST=10.138.0.65
SENTRY_POSTGRES_PORT=5432
DEFAULT_BROKERS=10.138.0.65:9092
C_FORCE_ROOT=true
```

I also modified the config.yml and sentry.conf.py files.

Startup command:

```
sentry run post-process-forwarder --loglevel debug --commit-batch-size 1
```

What did I configure wrong?

You need this in your sentry.conf.py file: https://github.com/getsentry/onpremise/blob/9dfc5c99defd01b7978a5988657581e7c668461b/sentry/sentry.conf.example.py#L90

Also make sure your sentry.conf.py file is shared across all Sentry instances (web, workers, and post-process-forwarder), not just web.

1. The event stream is configured in sentry.conf.py:

```python
DEFAULT_KAFKA_OPTIONS = {
    "bootstrap.servers": "10.138.0.65:9092",
    "message.max.bytes": 50000000,
    "socket.timeout.ms": 1000,
}

SENTRY_EVENTSTREAM = "sentry.eventstream.kafka.KafkaEventStream"
SENTRY_EVENTSTREAM_OPTIONS = {"producer_configuration": DEFAULT_KAFKA_OPTIONS}

KAFKA_CLUSTERS["default"] = DEFAULT_KAFKA_OPTIONS
```

2. The web, workers, and post-process-forwarder use the same image, so each container has the same sentry.conf.py file.

3. Since I am now deploying Sentry to a Kubernetes cluster, I have replaced the previous sentry-data mount volume with Alibaba Cloud NAS.

4. The complete configuration file is shown below:
# This file is just Python, with a touch of Django which means
# you can inherit and tweak settings to your hearts content.

from sentry.conf.server import *  # NOQA

DATABASES = {
    "default": {
        "ENGINE": "sentry.db.postgres",
        "NAME": "postgres",
        "USER": "sentry",
        "PASSWORD": "mysecretpassword",
        "HOST": "10.138.0.65",
        "PORT": "5432",
    }
}

# You should not change this setting after your database has been created
# unless you have altered all schemas first
SENTRY_USE_BIG_INTS = True

# If you're expecting any kind of real traffic on Sentry, we highly recommend
# configuring the CACHES and Redis settings

###########
# General #
###########

# Instruct Sentry that this install intends to be run by a single organization
# and thus various UI optimizations should be enabled.
SENTRY_SINGLE_ORGANIZATION = True

SENTRY_OPTIONS["system.event-retention-days"] = int(env('SENTRY_EVENT_RETENTION_DAYS', '90'))

#########
# Redis #
#########

# Generic Redis configuration used as defaults for various things including:
# Buffers, Quotas, TSDB

SENTRY_OPTIONS["redis.clusters"] = {
    "default": {
        "hosts": {0: {"host": "10.138.0.65", "password": "", "port": "6379", "db": "0"}}
    }
}

#########
# Queue #
#########

# See https://docs.getsentry.com/on-premise/server/queue/ for more
# information on configuring your queue broker and workers. Sentry relies
# on a Python framework called Celery to manage queues.

rabbitmq_host = None
if rabbitmq_host:
    BROKER_URL = "amqp://{username}:{password}@{host}/{vhost}".format(
        username="guest", password="guest", host=rabbitmq_host, vhost="/"
    )
else:
    BROKER_URL = "redis://:{password}@{host}:{port}/{db}".format(
        **SENTRY_OPTIONS["redis.clusters"]["default"]["hosts"][0]
    )


#########
# Cache #
#########

# Sentry currently utilizes two separate mechanisms. While CACHES is not a
# requirement, it will optimize several high throughput patterns.

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": ["10.138.0.65:11211"],
        "TIMEOUT": 3600,
    }
}

# A primary cache is required for things such as processing events
SENTRY_CACHE = "sentry.cache.redis.RedisCache"

DEFAULT_KAFKA_OPTIONS = {
    "bootstrap.servers": "10.138.0.65:9092",
    "message.max.bytes": 50000000,
    "socket.timeout.ms": 1000,
}

SENTRY_EVENTSTREAM = "sentry.eventstream.kafka.KafkaEventStream"
SENTRY_EVENTSTREAM_OPTIONS = {"producer_configuration": DEFAULT_KAFKA_OPTIONS}

KAFKA_CLUSTERS["default"] = DEFAULT_KAFKA_OPTIONS

###############
# Rate Limits #
###############

# Rate limits apply to notification handlers and are enforced per-project
# automatically.

SENTRY_RATELIMITER = "sentry.ratelimits.redis.RedisRateLimiter"

##################
# Update Buffers #
##################

# Buffers (combined with queueing) act as an intermediate layer between the
# database and the storage API. They will greatly improve efficiency on large
# numbers of the same events being sent to the API in a short amount of time.
# (read: if you send any kind of real data to Sentry, you should enable buffers)

SENTRY_BUFFER = "sentry.buffer.redis.RedisBuffer"

##########
# Quotas #
##########

# Quotas allow you to rate limit individual projects or the Sentry install as
# a whole.

SENTRY_QUOTAS = "sentry.quotas.redis.RedisQuota"

########
# TSDB #
########

# The TSDB is used for building charts as well as making things like per-rate
# alerts possible.

SENTRY_TSDB = "sentry.tsdb.redissnuba.RedisSnubaTSDB"

#########
# SNUBA #
#########

SENTRY_SEARCH = "sentry.search.snuba.EventsDatasetSnubaSearchBackend"
SENTRY_SEARCH_OPTIONS = {}
SENTRY_TAGSTORE_OPTIONS = {}

###########
# Digests #
###########

# The digest backend powers notification summaries.

SENTRY_DIGESTS = "sentry.digests.backends.redis.RedisBackend"

##############
# Web Server #
##############

SENTRY_WEB_HOST = "0.0.0.0"
SENTRY_WEB_PORT = 9000
SENTRY_WEB_OPTIONS = {
    "http": "%s:%s" % (SENTRY_WEB_HOST, SENTRY_WEB_PORT),
    "protocol": "uwsgi",
    # This is needed to prevent https://git.io/fj7Lw
    "uwsgi-socket": None,
    "http-keepalive": True,
    "http-chunked-input": True,
    "memory-report": False,
    # 'workers': 3,  # the number of web workers
}

###########
# SSL/TLS #
###########

# If you're using a reverse SSL proxy, you should enable the X-Forwarded-Proto
# header and enable the settings below

# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# SESSION_COOKIE_SECURE = True
# CSRF_COOKIE_SECURE = True
# SOCIAL_AUTH_REDIRECT_IS_HTTPS = True

# End of SSL/TLS settings

############
# Features #
############

SENTRY_FEATURES["projects:sample-events"] = False
SENTRY_FEATURES.update(
    {
        feature: True
        for feature in (
            "organizations:discover",
            "organizations:events",
            "organizations:global-views",
            "organizations:integrations-issue-basic",
            "organizations:integrations-issue-sync",
            "organizations:invite-members",
            "organizations:new-issue-ui",
            "organizations:repos",
            "organizations:require-2fa",
            "organizations:sentry10",
            "organizations:sso-basic",
            "organizations:sso-rippling",
            "organizations:sso-saml2",
            "organizations:suggested-commits",
            "projects:custom-inbound-filters",
            "projects:data-forwarding",
            "projects:discard-groups",
            "projects:plugins",
            "projects:rate-limits",
            "projects:servicehooks",
        )
    }
)

######################
# GitHub Integration #
######################

GITHUB_EXTENDED_PERMISSIONS = ['repo']

#########################
# Bitbucket Integration #
#########################

# BITBUCKET_CONSUMER_KEY = 'YOUR_BITBUCKET_CONSUMER_KEY'
# BITBUCKET_CONSUMER_SECRET = 'YOUR_BITBUCKET_CONSUMER_SECRET'

Your configuration seems correct, but I'm 100% sure that the post-process-forwarder, at least, is not using that configuration. This error is thrown from https://github.com/getsentry/sentry/blob/b99e4a145096857f0f6c136af2e863ebddfa3539/src/sentry/runner/commands/run.py#L308-L325, which can only happen if you are not using the KafkaEventStream backend.
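The linked guard can be paraphrased roughly like this (a hypothetical sketch, not Sentry's actual code; the class names and the `needs_forwarder` attribute are illustrative): the command inspects the configured event stream backend and exits unless that backend actually needs a separate forwarder process, which only the Kafka-style backend does.

```python
# Hypothetical sketch of the check in run.py: only a Kafka-style event
# stream needs a separate process to forward post-processing tasks.
class DummyEventStream:
    needs_forwarder = False  # e.g. an in-process default backend

class KafkaLikeEventStream(DummyEventStream):
    needs_forwarder = True

def run_post_process_forwarder(eventstream):
    if not eventstream.needs_forwarder:
        # This is the message from the log above.
        return ("The configured event stream backend does not need a "
                "forwarder process to enqueue post-process tasks. Exiting...")
    return "forwarder running"

print(run_post_process_forwarder(DummyEventStream()))      # the exit message
print(run_post_process_forwarder(KafkaLikeEventStream()))  # forwarder running
```

So if this message appears even though sentry.conf.py sets `SENTRY_EVENTSTREAM = "sentry.eventstream.kafka.KafkaEventStream"`, the process cannot be reading that file.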

Other people have this problem:
https://github.com/getsentry/sentry/issues/17157

I know how to solve this problem: copy sentry.conf.py to /etc/sentry.

Yes, that means you did not mount the config correctly, as I suggested earlier.

You are right, but the conf file had to be deployed to two locations. At first I thought it only needed to go in the /usr/src/sentry directory; later I found out that it also needs to be in /etc/sentry.

This shouldn't be the case: all config should live under $SENTRY_CONF, which is set to /etc/sentry by default. Do we say anywhere that you should put the config under /usr/src/sentry? If we do, I want to fix it. Sorry for the confusion about all this.
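The lookup described above can be sketched like this (a minimal sketch, assuming the official image sets `SENTRY_CONF=/etc/sentry`; the `sentry_config_path` helper is illustrative, not Sentry's real function):

```python
import os

def sentry_config_path():
    # Sentry looks for its config under $SENTRY_CONF and falls back to
    # ~/.sentry; /usr/src/sentry is just the source checkout, not a
    # config location.
    base = os.environ.get("SENTRY_CONF", os.path.expanduser("~/.sentry"))
    return os.path.join(base, "sentry.conf.py")

os.environ["SENTRY_CONF"] = "/etc/sentry"
print(sentry_config_path())  # /etc/sentry/sentry.conf.py
```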