Okay, based on your logs, my guess is that your Kafka instance is struggling to stay up and deal with the load. This will prevent you from receiving any new error reports, but it won't affect account recovery emails or 2FA.
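If you want to check whether the broker is accepting connections again, something along these lines should work (a rough sketch on my side — kafka-topics ships in the Confluent image, but the container name may differ on your setup; yours shows up as sentry_onpremise_kafka_1 in the log below):

# quick check: does the broker answer on 9092 yet?
docker exec sentry_onpremise_kafka_1 kafka-topics --bootstrap-server localhost:9092 --list

If that lists your topics instead of timing out, the broker is back up.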
It may just be a matter of time until it settles and starts accepting connections again, or you may need to scale up that Kafka instance, which I honestly don't know how to do off the top of my head.
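One thing you could try (an untested guess on my part) is giving the broker more memory through a docker-compose override. KAFKA_HEAP_OPTS is the standard Kafka heap setting and the Confluent image passes it through to the broker, but the sizes here are placeholders, so adjust them to whatever your host can spare:

# untested sketch: bump the broker JVM heap via an override file
cat > docker-compose.override.yml <<'EOF'
version: "3.4"
services:
  kafka:
    environment:
      KAFKA_HEAP_OPTS: "-Xms1g -Xmx2g"
EOF
docker-compose up -d kafka

(The version line should match whatever your docker-compose.yml already declares.)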
PS: If you use the code blocks feature on the forum when sharing logs, it would make them a lot more readable. Like this:
[root@jumphost onpremise]# docker logs sentry_onpremise_kafka_1
===> ENV Variables …
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_SUPPORT_METRICS_ENABLE=false
CONFLUENT_VERSION=5.5.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=6eb1a4ea1869
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
KAFKA_LOG_RETENTION_HOURS=24
KAFKA_MAX_REQUEST_SIZE=50000000
KAFKA_MESSAGE_MAX_BYTES=50000000
KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
KAFKA_VERSION=
KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring …
===> Running preflight checks …
===> Check if /var/lib/kafka/data is writable …
===> Check if Zookeeper is healthy …
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=6eb1a4ea1869
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-1127.19.1.el7.x86_64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=144MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2172MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=148MB
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
[main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
[main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
[main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.23.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.23.0.10:40402, server: zookeeper/172.23.0.2:2181
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.23.0.2:2181, sessionid = 0x10002d7c6e80000, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x10002d7c6e80000 closed
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x10002d7c6e80000
===> Launching …
===> Launching kafka …
[2020-10-20 07:25:48,644] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-10-20 07:25:49,489] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
[2020-10-20 07:25:49,489] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
[2020-10-20 07:25:50,817] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2020-10-20 07:25:50,903] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2020-10-20 07:25:51,464] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-10-20 07:25:51,529] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2020-10-20 07:25:51,534] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2020-10-20 07:25:51,683] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-10-20 07:25:51,748] INFO Stat of the created znode at /brokers/ids/1001 is: 413,413,1603178751724,1603178751724,1,0,0,72060719816245249,180,0,413
(kafka.zk.KafkaZkClient)
[2020-10-20 07:25:51,749] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 413 (kafka.zk.KafkaZkClient)
[2020-10-20 07:25:52,184] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-10-20 07:25:52,398] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[root@jumphost onpremise]#