No new events after update

I have updated my Sentry to the latest version (Sentry 21.3.0.dev0e213aa5) and it stopped receiving new events.

Logs of the services showing errors:

subscription-consumer-transactions_1        |     return ctx.invoke(f, *args, **kwargs)
subscription-consumer-transactions_1        |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
subscription-consumer-transactions_1        |     return callback(*args, **kwargs)
subscription-consumer-transactions_1        |   File "/usr/local/lib/python3.6/site-packages/click/decorators.py", line 21, in new_func
subscription-consumer-transactions_1        |     return f(get_current_context(), *args, **kwargs)
subscription-consumer-transactions_1        |   File "/usr/local/lib/python3.6/site-packages/sentry/runner/decorators.py", line 28, in inner
subscription-consumer-transactions_1        |     return ctx.invoke(f, *args, **kwargs)
subscription-consumer-transactions_1        |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
subscription-consumer-transactions_1        |     return callback(*args, **kwargs)
subscription-consumer-transactions_1        |   File "/usr/local/lib/python3.6/site-packages/sentry/runner/commands/run.py", line 386, in query_subscription_consumer
subscription-consumer-transactions_1        |     subscriber.run()
subscription-consumer-transactions_1        |   File "/usr/local/lib/python3.6/site-packages/sentry/snuba/query_subscription_consumer.py", line 154, in run
subscription-consumer-transactions_1        |     wait_for_topics(admin_client, [self.topic])
subscription-consumer-transactions_1        |   File "/usr/local/lib/python3.6/site-packages/sentry/utils/batching_kafka_consumer.py", line 38, in wait_for_topics
subscription-consumer-transactions_1        |     f"Timeout when waiting for Kafka topic '{topic}' to become available, last error: {last_error}"
subscription-consumer-transactions_1        | RuntimeError: Timeout when waiting for Kafka topic 'transactions-subscription-results' to become available, last error: KafkaError{code=LEADER_NOT_AVAILABLE,val=5,str="Broker: Leader not available"}
subscription-consumer-transactions_1        | /usr/local/lib/python3.6/site-packages/sentry/runner/initializer.py:185: DeprecatedSettingWarning: The GITHUB_EXTENDED_PERMISSIONS setting is deprecated. Please use SENTRY_OPTIONS['github-login.extended-permissions'] instead.
subscription-consumer-transactions_1        |   warnings.warn(DeprecatedSettingWarning(options_mapper[k], "SENTRY_OPTIONS['%s']" % k))
subscription-consumer-transactions_1        | 17:10:43 [INFO] sentry.plugins.github: apps-not-configured
subscription-consumer-transactions_1        | 17:10:52 [INFO] sentry.snuba.query_subscription_consumer: query-subscription-consumer.on_assign (offsets='{0: None}' partitions='[TopicPartition{topic=transactions-subscription-results,partition=0,offset=-1001,error=None}]')
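For context, the RuntimeError at the top of each traceback comes from a bounded wait-and-retry loop that gives up once the Kafka topic is still unavailable at the deadline. A minimal sketch of that pattern (hypothetical helper names, not Sentry's actual wait_for_topics implementation):

```python
import time

def wait_until(check, timeout=10.0, interval=0.5):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Returns True on success; raises RuntimeError on timeout, carrying
    the last error seen, mirroring the message in the logs above.
    """
    deadline = time.monotonic() + timeout
    last_error = None
    while time.monotonic() < deadline:
        try:
            if check():
                return True
        except Exception as e:  # remember the most recent failure
            last_error = e
        time.sleep(interval)
    raise RuntimeError(
        f"Timeout when waiting for condition, last error: {last_error}"
    )
```

So the consumers aren't crashing on their own; they simply never see the topic become available within the window, which is why the thread later focuses on Kafka/ZooKeeper health.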

subscription-consumer-events_1              |     return ctx.invoke(f, *args, **kwargs)
subscription-consumer-events_1              |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
subscription-consumer-events_1              |     return callback(*args, **kwargs)
subscription-consumer-events_1              |   File "/usr/local/lib/python3.6/site-packages/click/decorators.py", line 21, in new_func
subscription-consumer-events_1              |     return f(get_current_context(), *args, **kwargs)
subscription-consumer-events_1              |   File "/usr/local/lib/python3.6/site-packages/sentry/runner/decorators.py", line 28, in inner
subscription-consumer-events_1              |     return ctx.invoke(f, *args, **kwargs)
subscription-consumer-events_1              |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
subscription-consumer-events_1              |     return callback(*args, **kwargs)
subscription-consumer-events_1              |   File "/usr/local/lib/python3.6/site-packages/sentry/runner/commands/run.py", line 386, in query_subscription_consumer
subscription-consumer-events_1              |     subscriber.run()
subscription-consumer-events_1              |   File "/usr/local/lib/python3.6/site-packages/sentry/snuba/query_subscription_consumer.py", line 154, in run
subscription-consumer-events_1              |     wait_for_topics(admin_client, [self.topic])
subscription-consumer-events_1              |   File "/usr/local/lib/python3.6/site-packages/sentry/utils/batching_kafka_consumer.py", line 38, in wait_for_topics
subscription-consumer-events_1              |     f"Timeout when waiting for Kafka topic '{topic}' to become available, last error: {last_error}"
subscription-consumer-events_1              | RuntimeError: Timeout when waiting for Kafka topic 'events-subscription-results' to become available, last error: KafkaError{code=LEADER_NOT_AVAILABLE,val=5,str="Broker: Leader not available"}
subscription-consumer-events_1              | /usr/local/lib/python3.6/site-packages/sentry/runner/initializer.py:185: DeprecatedSettingWarning: The GITHUB_EXTENDED_PERMISSIONS setting is deprecated. Please use SENTRY_OPTIONS['github-login.extended-permissions'] instead.
subscription-consumer-events_1              |   warnings.warn(DeprecatedSettingWarning(options_mapper[k], "SENTRY_OPTIONS['%s']" % k))
subscription-consumer-events_1              | 17:10:44 [INFO] sentry.plugins.github: apps-not-configured
subscription-consumer-events_1              | 17:10:52 [INFO] sentry.snuba.query_subscription_consumer: query-subscription-consumer.on_assign (offsets='{0: None}' partitions='[TopicPartition{topic=events-subscription-results,partition=0,offset=-1001,error=None}]')

snuba-transactions-consumer_1               |     return ctx.invoke(self.callback, **ctx.params)
snuba-transactions-consumer_1               |   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
snuba-transactions-consumer_1               |     return callback(*args, **kwargs)
snuba-transactions-consumer_1               |   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
snuba-transactions-consumer_1               |     consumer.run()
snuba-transactions-consumer_1               |   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
snuba-transactions-consumer_1               |     self._run_once()
snuba-transactions-consumer_1               |   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
snuba-transactions-consumer_1               |     self.__message = self.__consumer.poll(timeout=1.0)
snuba-transactions-consumer_1               |   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 767, in poll
snuba-transactions-consumer_1               |     return super().poll(timeout)
snuba-transactions-consumer_1               |   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 404, in poll
snuba-transactions-consumer_1               |     raise ConsumerError(str(error))
snuba-transactions-consumer_1               | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"}
snuba-transactions-consumer_1               | + '[' c = - ']'
snuba-transactions-consumer_1               | + snuba consumer --help
snuba-transactions-consumer_1               | + set -- snuba consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --commit-log-topic=snuba-commit-log
snuba-transactions-consumer_1               | + set gosu snuba snuba consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --commit-log-topic=snuba-commit-log
snuba-transactions-consumer_1               | + exec gosu snuba snuba consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --commit-log-topic=snuba-commit-log
snuba-transactions-consumer_1               | 2021-03-03 17:10:30,785 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}


snuba-subscription-consumer-events_1        |     SynchronizedConsumer(
snuba-subscription-consumer-events_1        |   File "/usr/src/snuba/snuba/utils/streams/synchronized.py", line 106, in __init__
snuba-subscription-consumer-events_1        |     self.__commit_log_worker.result()
snuba-subscription-consumer-events_1        |   File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 432, in result
snuba-subscription-consumer-events_1        |     return self.__get_result()
snuba-subscription-consumer-events_1        |   File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
snuba-subscription-consumer-events_1        |     raise self._exception
snuba-subscription-consumer-events_1        |   File "/usr/src/snuba/snuba/utils/concurrent.py", line 33, in run
snuba-subscription-consumer-events_1        |     result = function()
snuba-subscription-consumer-events_1        |   File "/usr/src/snuba/snuba/utils/streams/synchronized.py", line 130, in __run_commit_log_worker
snuba-subscription-consumer-events_1        |     message = self.__commit_log_consumer.poll(0.1)
snuba-subscription-consumer-events_1        |   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 404, in poll
snuba-subscription-consumer-events_1        |     raise ConsumerError(str(error))
snuba-subscription-consumer-events_1        | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"}
snuba-subscription-consumer-events_1        | + '[' s = - ']'
snuba-subscription-consumer-events_1        | + snuba subscriptions --help
snuba-subscription-consumer-events_1        | + set -- snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-events_1        | + set gosu snuba snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-events_1        | + exec gosu snuba snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-events_1        | 2021-03-03 17:10:33,661 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}


snuba-sessions-consumer_1                   |     return _process_result(sub_ctx.command.invoke(sub_ctx))
snuba-sessions-consumer_1                   |   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
snuba-sessions-consumer_1                   |     return ctx.invoke(self.callback, **ctx.params)
snuba-sessions-consumer_1                   |   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
snuba-sessions-consumer_1                   |     return callback(*args, **kwargs)
snuba-sessions-consumer_1                   |   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
snuba-sessions-consumer_1                   |     consumer.run()
snuba-sessions-consumer_1                   |   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
snuba-sessions-consumer_1                   |     self._run_once()
snuba-sessions-consumer_1                   |   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
snuba-sessions-consumer_1                   |     self.__message = self.__consumer.poll(timeout=1.0)
snuba-sessions-consumer_1                   |   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 404, in poll
snuba-sessions-consumer_1                   |     raise ConsumerError(str(error))
snuba-sessions-consumer_1                   | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"}
snuba-sessions-consumer_1                   | + '[' c = - ']'
snuba-sessions-consumer_1                   | + snuba consumer --help
snuba-sessions-consumer_1                   | + set -- snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-sessions-consumer_1                   | + set gosu snuba snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-sessions-consumer_1                   | + exec gosu snuba snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-sessions-consumer_1                   | 2021-03-03 17:10:26,520 New partitions assigned: {Partition(topic=Topic(name='ingest-sessions'), index=0): 0}

snuba-consumer_1                            |     return callback(*args, **kwargs)
snuba-consumer_1                            |   File "/usr/src/snuba/snuba/cli/consumer.py", line 161, in consumer
snuba-consumer_1                            |     consumer.run()
snuba-consumer_1                            |   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
snuba-consumer_1                            |     self._run_once()
snuba-consumer_1                            |   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
snuba-consumer_1                            |     self.__message = self.__consumer.poll(timeout=1.0)
snuba-consumer_1                            |   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 767, in poll
snuba-consumer_1                            |     return super().poll(timeout)
snuba-consumer_1                            |   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 404, in poll
snuba-consumer_1                            |     raise ConsumerError(str(error))
snuba-consumer_1                            | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"}
snuba-consumer_1                            | + '[' c = - ']'
snuba-consumer_1                            | + snuba consumer --help
snuba-consumer_1                            | + set -- snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1                            | + set gosu snuba snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1                            | + exec gosu snuba snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1                            | 2021-03-03 17:10:22,839 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
snuba-consumer_1                            | 2021-03-03 17:10:25,438 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
snuba-consumer_1                            | 2021-03-03 17:10:26,128 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
snuba-replacer_1                            |     return _process_result(sub_ctx.command.invoke(sub_ctx))
snuba-replacer_1                            |   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
snuba-replacer_1                            |     return ctx.invoke(self.callback, **ctx.params)
snuba-replacer_1                            |   File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
snuba-replacer_1                            |     return callback(*args, **kwargs)
snuba-replacer_1                            |   File "/usr/src/snuba/snuba/cli/replacer.py", line 133, in replacer
snuba-replacer_1                            |     replacer.run()
snuba-replacer_1                            |   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 112, in run
snuba-replacer_1                            |     self._run_once()
snuba-replacer_1                            |   File "/usr/src/snuba/snuba/utils/streams/processing/processor.py", line 142, in _run_once
snuba-replacer_1                            |     self.__message = self.__consumer.poll(timeout=1.0)
snuba-replacer_1                            |   File "/usr/src/snuba/snuba/utils/streams/backends/kafka.py", line 404, in poll
snuba-replacer_1                            |     raise ConsumerError(str(error))
snuba-replacer_1                            | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"}
snuba-replacer_1                            | + '[' r = - ']'
snuba-replacer_1                            | + snuba replacer --help
snuba-replacer_1                            | + set -- snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1                            | + set gosu snuba snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1                            | + exec gosu snuba snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1                            | 2021-03-03 17:10:28,441 New partitions assigned: {Partition(topic=Topic(name='event-replacements'), index=0): 0}

relay_1                                     | 2021-03-03T17:23:28Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 60s
relay_1                                     | 2021-03-03T17:23:28Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:28Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:29Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:29Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:29Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:29Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:29Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:30Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:30Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:31Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:31Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:31Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:32Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:32Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:32Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:34Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:34Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:34Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime
relay_1                                     | 2021-03-03T17:23:34Z [relay_server::actors::events] ERROR: error processing event: event exceeded its configured lifetime

I already tried reinstalling multiple times; the last time, I removed all volumes except postgres, sentry-data, and sentry-secrets.

I'm not worried about past events, I can start fresh, but I want to keep all the projects that were created.

Just check what your Kafka and ZooKeeper services are doing, as this seems to be an issue with them.
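One quick way to sanity-check the brokers from the host (or from another container on the compose network) is a plain TCP probe against the advertised listener. This is only a reachability check, not a full health check; the hostname "kafka" and port 9092 are taken from the KAFKA_ADVERTISED_LISTENERS value in the logs below:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect("kafka", 9092) from inside the compose network,
# or can_connect("localhost", 2181) for ZooKeeper's client port.
```

If the port is reachable but consumers still fail with LEADER_NOT_AVAILABLE or COORDINATOR_LOAD_IN_PROGRESS, the broker is up but still recovering its metadata, which points at the ZooKeeper side.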

Kafka looks good

===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_SUPPORT_METRICS_ENABLE=false
CONFLUENT_VERSION=5.5.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=19b1a1a41f26
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
KAFKA_LOG_RETENTION_HOURS=24
KAFKA_MAX_REQUEST_SIZE=50000000
KAFKA_MESSAGE_MAX_BYTES=50000000
KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
KAFKA_VERSION=
KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ... 
===> Check if /var/lib/kafka/data is writable ...
===> Check if Zookeeper is healthy ...
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=19b1a1a41f26
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.18.0-147.3.1.el8_1.x86_64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=472MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=7083MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=477MB
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
[main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
[main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
[main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/192.168.160.7:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /192.168.160.11:49832, server: zookeeper/192.168.160.7:2181
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/192.168.160.7:2181, sessionid = 0x106583fbbaa0000, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x106583fbbaa0000 closed
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x106583fbbaa0000
===> Launching ... 
===> Launching kafka ... 
[2021-03-03 19:30:36,704] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-03-03 19:30:39,244] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
[2021-03-03 19:30:39,244] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
[2021-03-03 19:30:41,765] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2021-03-03 19:30:41,936] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2021-03-03 19:30:43,112] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2021-03-03 19:30:43,203] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2021-03-03 19:30:43,205] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2021-03-03 19:30:43,450] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-03-03 19:30:43,537] INFO Stat of the created znode at /brokers/ids/1001 is: 109,109,1614799843496,1614799843496,1,0,0,73843474652856321,180,0,109
 (kafka.zk.KafkaZkClient)
[2021-03-03 19:30:43,540] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 109 (kafka.zk.KafkaZkClient)
[2021-03-03 19:30:44,757] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2021-03-03 19:30:45,408] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2021-03-03 19:30:46,072] INFO Creating topic events-subscription-results with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
[2021-03-03 19:30:46,080] INFO Creating topic transactions-subscription-results with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)

ZooKeeper, I can't say the same…

===> Configuring ...
===> Running preflight checks ... 
===> Check if /var/lib/zookeeper/data is writable ...
===> Check if /var/lib/zookeeper/log is writable ...
===> Launching ... 
===> Launching zookeeper ... 
[2021-03-03 19:27:16,467] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2021-03-03 19:27:16,737] WARN o.e.j.s.ServletContextHandler@167fdd33{/,null,UNAVAILABLE} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
[2021-03-03 19:27:16,737] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=zookeeper
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_SUPPORT_METRICS_ENABLE=false
CONFLUENT_VERSION=5.5.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=650e8888de5a
KAFKA_VERSION=
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SHLVL=1
ZOOKEEPER_CLIENT_PORT=2181
ZOOKEEPER_LOG4J_ROOT_LOGLEVEL=WARN
ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL=WARN
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ... 
===> Check if /var/lib/zookeeper/data is writable ...
===> Check if /var/lib/zookeeper/log is writable ...
===> Launching ... 
===> Launching zookeeper ... 
[2021-03-03 19:30:22,172] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2021-03-03 19:30:24,163] WARN o.e.j.s.ServletContextHandler@4d95d2a2{/,null,UNAVAILABLE} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
[2021-03-03 19:30:24,164] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)

Now I'm running a fresh install (removed all volumes and did a sentry export/import) and get the same error.

Checking again, these look like transient errors while Kafka takes its time to become available. Do you keep seeing them once you let the system settle, say for about 10-15 minutes?

If the issues persist, have you tried restarting these individual services using docker-compose restart subscription-consumer-transactions etc.?

It has been online since yesterday, but still doesn't receive new events…

My new guess is that the problem is in Relay… I tried a completely new install, and Relay can't connect to the web service:

relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web. type: AAAA class: IN
relay_1                                     | 2021-03-04T18:50:56Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 38.443359375s
relay_1                                     | 2021-03-04T18:51:21Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web. type: AAAA class: IN
relay_1                                     | 2021-03-04T18:51:34Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 57.665039062s

I did a lot of searching and found a topic (that I created - Relay errors in fresh new on premise install), and tried every suggested solution without success :frowning:

This looks like a docker-compose DNS issue, and we started hearing more about these with recent Docker releases. Upgrading or downgrading Docker may help here, since it seems you are already using the latest and supposedly fixed version of Relay.
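To confirm it is a resolver problem rather than a Relay problem, you can check what the container's resolver returns for the compose service name. A small sketch (run inside the relay container; "web" is the service name from the error above):

```python
import socket

def resolve(name: str):
    """Return the unique addresses the local resolver yields for `name`,
    or an empty list if resolution fails entirely (the relay error above
    is a failed lookup of the compose service name 'web')."""
    try:
        infos = socket.getaddrinfo(name, None)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

# Inside the compose network this should list the `web` container's
# address; an empty list reproduces the "no record found" symptom.
# print(resolve("web"))
```

If this returns an empty list inside the container, the container is not consulting Docker's embedded DNS for service names, which matches the docker-compose DNS theory.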


It's sending the request for web:9000 to the Google DNS, so it will not find it.

Where did you see that?

I ran tests with three different Docker versions and environments:

CentOS Linux release 8.1.1911 (Core) - Docker version 19.03.8, build afacb8b
CentOS Linux release 7.7.1908 (Core) - Docker version 20.10.5, build 55c4c88
Ubuntu 18.04.5 LTS - Docker version 20.10.5, build 55c4c88

All fresh installs; the only one that worked was the Ubuntu one.
I'm OK with that, but I guess the problem isn't the Docker version…

We solved that here by setting the machine's DNS to localhost.

