Batching-kafka-consumer: Flushing items

I am on Sentry on-premise 20.9.0. I can see events being triggered in the logs, but they are not displayed in the UI.

An earlier post showed this error being fixed in version 20.7.0, as it was due to a bug. Can somebody assist with this issue? I am still seeing events being flushed but they are not getting saved or displayed in Sentry.

Logs showing the flush events:

nginx_1                        | 17.222.104.23 - - [03/Feb/2021:00:21:31 +0000] "POST /api/19/store/ HTTP/1.1" 200 41 "-" "sentry.python/0.19.5"
nginx_1                        | 17.222.104.23 - - [03/Feb/2021:00:21:31 +0000] "POST /api/19/store/ HTTP/1.1" 200 41 "-" "sentry.python/0.19.5"
ingest-consumer_1              | 00:21:32 [INFO] batching-kafka-consumer: Flushing 2 items (from {(u'ingest-events', 0): [1395226L, 1395227L]}): forced:False size:False time:True
ingest-consumer_1              | 00:21:32 [INFO] batching-kafka-consumer: Worker flush took 15ms

Can you share other logs please? Relay, post-process-forwarder, and kafka would be useful.

@BYK

Relay logs:

2021-02-03T12:02:38Z [relay_server::endpoints::common] ERROR: error handling request: failed to queue envelope
  caused by: Too many events (event_buffer_size reached)
2021-02-03T12:02:38Z [relay_server::endpoints::common] ERROR: error handling request: failed to queue envelope
  caused by: Too many events (event_buffer_size reached)
2021-02-03T12:02:38Z [relay_server::endpoints::common] ERROR: error handling request: failed to queue envelope
  caused by: Too many events (event_buffer_size reached)
[... the same two lines repeat for the rest of the log ...]

Kafka logs:

===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_SUPPORT_METRICS_ENABLE=false
CONFLUENT_VERSION=5.5.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=0aab88217a2e
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
KAFKA_MAX_REQUEST_SIZE=50000000
KAFKA_MESSAGE_MAX_BYTES=50000000
KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=2
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
KAFKA_VERSION=
KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ...
===> Check if /var/lib/kafka/data is writable ...
===> Check if Zookeeper is healthy ...
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=0aab88217a2e
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-957.10.1.el7.x86_64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=174MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2635MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=178MB
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
[main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
[main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
[main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/192.168.16.6:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/192.168.16.6:2181: Connection refused
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/192.168.16.6:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/192.168.16.6:2181: Connection refused
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/192.168.16.6:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/192.168.16.6:2181: Connection refused
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/192.168.16.6:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /192.168.16.10:37622, server: zookeeper/192.168.16.6:2181
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/192.168.16.6:2181, sessionid = 0x100602ab5bb0000, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x100602ab5bb0000 closed
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x100602ab5bb0000
===> Launching ...
===> Launching kafka ...
[2021-02-03 11:03:21,685] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-02-03 11:03:24,241] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
[2021-02-03 11:03:24,241] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
[2021-02-03 11:03:27,209] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2021-02-03 11:03:27,289] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2021-02-03 11:03:28,052] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2021-02-03 11:03:28,146] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2021-02-03 11:03:28,148] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2021-02-03 11:03:29,326] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-02-03 11:03:29,373] INFO Stat of the created znode at /brokers/ids/1001 is: 2709,2709,1612350209344,1612350209344,1,0,0,72163330591752193,180,0,2709 (kafka.zk.KafkaZkClient)
[2021-02-03 11:03:29,374] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 2709 (kafka.zk.KafkaZkClient)
[2021-02-03 11:03:29,949] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2021-02-03 11:03:30,286] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)

When I tail the logs, I get no errors on the POST requests; it looks something like this:

snuba-outcomes-consumer_1      | 2021-02-03 12:03:23,378 Completed processing <Batch: 21 messages, open for 0.80 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:24,225 Completed processing <Batch: 16 messages, open for 0.85 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:25,025 Completed processing <Batch: 29 messages, open for 0.80 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:25,943 Completed processing <Batch: 15 messages, open for 0.92 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:26,771 Completed processing <Batch: 26 messages, open for 0.83 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:27,637 Completed processing <Batch: 39 messages, open for 0.83 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:28,581 Completed processing <Batch: 10 messages, open for 0.94 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:29,865 Completed processing <Batch: 21 messages, open for 1.28 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:31,172 Completed processing <Batch: 1 message, open for 1.24 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:32,340 Completed processing <Batch: 5 messages, open for 1.17 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:33,132 Completed processing <Batch: 24 messages, open for 0.79 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:33,938 Completed processing <Batch: 26 messages, open for 0.81 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:34,729 Completed processing <Batch: 20 messages, open for 0.79 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:36,429 Completed processing <Batch: 24 messages, open for 1.70 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:37,242 Completed processing <Batch: 9 messages, open for 0.81 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:38,156 Completed processing <Batch: 28 messages, open for 0.91 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:38,982 Completed processing <Batch: 18 messages, open for 0.83 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:39,755 Completed processing <Batch: 31 messages, open for 0.77 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:40,727 Completed processing <Batch: 20 messages, open for 0.97 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:41,627 Completed processing <Batch: 14 messages, open for 0.90 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:42,412 Completed processing <Batch: 28 messages, open for 0.78 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:43,186 Completed processing <Batch: 25 messages, open for 0.77 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:43,973 Completed processing <Batch: 9 messages, open for 0.79 seconds>.
snuba-outcomes-consumer_1      | 2021-02-03 12:03:45,701 Completed processing <Batch: 18 messages, open for 1.73 seconds>.
nginx_1                        | 17.222.104.23 - - [03/Feb/2021:21:08:05 +0000] "POST /api/19/envelope/ HTTP/1.1" 200 41 "-" "sentry.python/0.19.5"
nginx_1                        | 17.222.104.25 - - [03/Feb/2021:21:08:05 +0000] "POST /api/19/envelope/ HTTP/1.1" 200 41 "-" "sentry.python/0.19.5"
nginx_1                        | 17.74.95.70 - - [03/Feb/2021:21:08:05 +0000] "POST /api/19/envelope/ HTTP/1.1" 200 41 "-" "sentry.python/0.19.5"
ingest-consumer_1              | 21:08:06 [INFO] batching-kafka-consumer: Flushing 3 items (from {(u'ingest-transactions', 0): [591368L, 591370L]}): forced:False size:False time:True
ingest-consumer_1              | 21:08:06 [INFO] batching-kafka-consumer: Worker flush took 16ms

Here is my docker-compose.yml:
version: '3.4'
x-restart-policy: &restart_policy
  restart: unless-stopped
x-sentry-defaults: &sentry_defaults
  << : *restart_policy
  build:
    context: ./sentry
    args:
      - SENTRY_IMAGE
  image: sentry-onpremise-local
  depends_on:
    - redis
    - postgres
    - memcached
    - smtp
    - snuba-api
    - snuba-consumer
    - snuba-outcomes-consumer
    - snuba-sessions-consumer
    - snuba-transactions-consumer
    - snuba-replacer
    - symbolicator
    - kafka
  environment:
    SENTRY_CONF: '/etc/sentry'
    SNUBA: 'http://snuba-api:1218'
  volumes:
    - 'sentry-data:/data'
    - './sentry:/etc/sentry'
x-snuba-defaults: &snuba_defaults
  << : *restart_policy
  depends_on:
    - redis
    - clickhouse
    - kafka
  image: '$SNUBA_IMAGE'
  environment:
    SNUBA_SETTINGS: docker
    CLICKHOUSE_HOST: clickhouse
    DEFAULT_BROKERS: 'kafka:9092'
    REDIS_HOST: redis
    UWSGI_MAX_REQUESTS: '10000'
    UWSGI_DISABLE_LOGGING: 'true'
services:
  smtp:
    << : *restart_policy
    image: tianon/exim4
    volumes:
      - 'sentry-smtp:/var/spool/exim4'
      - 'sentry-smtp-log:/var/log/exim4'
  memcached:
    << : *restart_policy
    image: 'memcached:1.5-alpine'
  redis:
    << : *restart_policy
    image: 'redis:5.0-alpine'
    volumes:
      - 'sentry-redis:/data'
  postgres:
    << : *restart_policy
    image: 'postgres:9.6'
    environment:
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    volumes:
      - 'sentry-postgres:/var/lib/postgresql/data'
  zookeeper:
    << : *restart_policy
    image: 'confluentinc/cp-zookeeper:5.5.0'
    environment:
      ZOOKEEPER_CLIENT_PORT: '2181'
      CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
      ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: 'WARN'
      ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL: 'WARN'
    volumes:
      - 'sentry-zookeeper:/var/lib/zookeeper/data'
      - 'sentry-zookeeper-log:/var/lib/zookeeper/log'
      - 'sentry-secrets:/etc/zookeeper/secrets'
  kafka:
    << : *restart_policy
    depends_on:
      - zookeeper
    image: 'confluentinc/cp-kafka:5.5.0'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://kafka:9092'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: '2'
      KAFKA_MESSAGE_MAX_BYTES: '50000000' # 50MB or bust
      KAFKA_MAX_REQUEST_SIZE: '50000000' # 50MB on requests apparently too
      CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
      KAFKA_LOG4J_LOGGERS: 'kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN'
      KAFKA_LOG4J_ROOT_LOGLEVEL: 'WARN'
      KAFKA_TOOLS_LOG4J_LOGLEVEL: 'WARN'
    volumes:
      - 'sentry-kafka:/var/lib/kafka/data'
      - 'sentry-kafka-log:/var/lib/kafka/log'
      - 'sentry-secrets:/etc/kafka/secrets'
  clickhouse:
    << : *restart_policy
    image: 'yandex/clickhouse-server:19.17'
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - 'sentry-clickhouse:/var/lib/clickhouse'
      - 'sentry-clickhouse-log:/var/log/clickhouse-server'
      - type: bind
        read_only: true
        source: ./clickhouse/config.xml
        target: /etc/clickhouse-server/config.d/sentry.xml
    environment:
      # This limits Clickhouse's memory to 70% of the host memory
      # If you have high volume and your search return incomplete results
      # You might want to change this to a higher value (and ensure your host has enough memory)
      MAX_MEMORY_USAGE_RATIO: 0.7
  snuba-api:
    << : *snuba_defaults
  # Kafka consumer responsible for feeding events into Clickhouse
  snuba-consumer:
    << : *snuba_defaults
    command: consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
  # Kafka consumer responsible for feeding outcomes into Clickhouse
  # Use --auto-offset-reset=earliest to recover up to 7 days of TSDB data
  # since we did not do a proper migration
  snuba-outcomes-consumer:
    << : *snuba_defaults
    command: consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
  # Kafka consumer responsible for feeding session data into Clickhouse
  snuba-sessions-consumer:
    << : *snuba_defaults
    command: consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
  # Kafka consumer responsible for feeding transactions data into Clickhouse
  snuba-transactions-consumer:
    << : *snuba_defaults
    command: consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750
  snuba-replacer:
    << : *snuba_defaults
    command: replacer --storage events --auto-offset-reset=latest --max-batch-size 3
  snuba-cleanup:
    << : *snuba_defaults
    image: snuba-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: '$SNUBA_IMAGE'
    command: '"*/5 * * * * gosu snuba snuba cleanup --dry-run False"'
  symbolicator:
    << : *restart_policy
    image: '$SYMBOLICATOR_IMAGE'
    volumes:
      - 'sentry-symbolicator:/data'
      - type: bind
        read_only: true
        source: ./symbolicator
        target: /etc/symbolicator
    command: run -c /etc/symbolicator/config.yml
  symbolicator-cleanup:
    << : *restart_policy
    image: symbolicator-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: '$SYMBOLICATOR_IMAGE'
    command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
    volumes:
      - 'sentry-symbolicator:/data'
  web:
    << : *sentry_defaults
  cron:
    << : *sentry_defaults
    command: run cron
  worker1:
    << : *sentry_defaults
    command: run worker -Q events.process_event
  worker2:
    << : *sentry_defaults
    command: run worker -Q events.reprocessing.process_event
  worker3:
    << : *sentry_defaults
    command: run worker -Q events.reprocess_events
  worker4:
    << : *sentry_defaults
    command: run worker -Q events.save_event
  worker5:
    << : *sentry_defaults
    command: run worker -Q subscriptions
  worker6:
    << : *sentry_defaults
    command: run worker -Q integrations
  worker:
    << : *sentry_defaults
    command: run worker
  ingest-consumer:
    << : *sentry_defaults
    command: run ingest-consumer --all-consumer-types
  post-process-forwarder:
    << : *sentry_defaults
    # Increase --commit-batch-size 1 below to deal with high-load environments.
    command: run post-process-forwarder --commit-batch-size 100
  sentry-cleanup:
    << : *sentry_defaults
    image: sentry-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'sentry-onpremise-local'
    command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
  nginx:
    << : *restart_policy
    ports:
      - '443:443/tcp'
      - '80:80/tcp'
    image: 'nginx:1.16'
    volumes:
      - type: bind
        read_only: true
        source: ./nginx
        target: /etc/nginx
    depends_on:
      - web
      - relay
  relay:
    << : *restart_policy
    image: '$RELAY_IMAGE'
    volumes:
      - type: bind
        read_only: true
        source: ./relay
        target: /work/.relay
    depends_on:
      - kafka
      - redis
volumes:
  sentry-data:
    external: true
  sentry-postgres:
    external: true
  sentry-redis:
    external: true
  sentry-zookeeper:
    external: true
  sentry-kafka:
    external: true
  sentry-clickhouse:
    external: true
  sentry-symbolicator:
    external: true
  sentry-secrets:
  sentry-smtp:
  sentry-zookeeper-log:
  sentry-kafka-log:
  sentry-smtp-log:
  sentry-clickhouse-log:

Looks like Relay cannot accept these events for some reason. @jauer - any ideas?
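For reference, that error points at Relay's internal event buffer filling up. If you want to experiment while the root cause is being investigated, the knob should be event_buffer_size under the cache section of relay/config.yml. The sketch below is an assumption based on Relay's documented config layout, not taken from this thread (the relay: block and the default of 1000 are from memory), so double-check it against your own config:

# relay/config.yml (sketch, values are assumptions)
relay:
  upstream: 'http://web:9000/'
  host: 0.0.0.0
  port: 3000
cache:
  # Number of events Relay buffers before rejecting new envelopes with
  # "Too many events (event_buffer_size reached)"; the assumed default is 1000.
  event_buffer_size: 10000

Bear in mind that a full buffer usually means Relay cannot hand envelopes off to Kafka (or its upstream) fast enough, so raising the limit only buys headroom; if Kafka or the ingest consumers are stuck, the buffer will fill up again.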

Any guidance, @jauer?

Any news on this problem?

Still having this issue on v20.9.0, @luanraithz. The guidance I got was to upgrade the instance, but the upgrade was failing for me. It seems to be blocked until this

is released.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.