Upgrading from 9.1.2 -> 10.0.0 -> 20.6.0

The 404 issue with ads.js is due to https://github.com/getsentry/sentry/pull/21411 and I submitted a fix: https://github.com/getsentry/sentry/pull/21461

That said, it should not affect 2FA at all. I suspect a clock-skew issue or something similar on the 2FA side, since TOTP codes are only valid within a narrow time window.
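A quick way to check for skew is to compare the host clock with a container clock, both in UTC. The container name below is just a guess based on the naming in your logs; check `docker ps` for the real one:

```
# both should print (nearly) the same time; TOTP only tolerates a small window
date -u
docker exec sentry_onpremise_web_1 date -u
```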

I’m investigating the other logs, but this sounds like a Docker networking configuration issue to me. Maybe there are firewall rules preventing the docker-compose networks from reaching external hosts?
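If you want to rule that out, here are two quick checks. The container name is again an assumption, and 1.1.1.1:443 is just an arbitrary external endpoint:

```
# custom firewall rules that filter Docker traffic usually live in the DOCKER-USER chain
iptables -L DOCKER-USER -n --line-numbers

# can a container on the compose network open an outbound connection at all?
docker exec sentry_onpremise_web_1 \
  python -c 'import socket; socket.create_connection(("1.1.1.1", 443), timeout=5); print("egress ok")'
```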

Okay, based on your logs I’d guess your Kafka instance is struggling to stay up and deal with the load. This will prevent you from receiving any new error reports but won’t affect account recovery emails or 2FA.

It may just be a matter of time until it settles and starts accepting connections, or you may need to scale up that Kafka instance, which I honestly don’t know how to do :frowning:
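A quick way to see whether the broker has settled is to ask it to list topics and consumer groups from inside its own container (the Confluent image ships these CLI tools):

```
# both commands fail or hang while the broker is still recovering
docker exec sentry_onpremise_kafka_1 kafka-topics --bootstrap-server localhost:9092 --list
docker exec sentry_onpremise_kafka_1 kafka-consumer-groups --bootstrap-server localhost:9092 --list
```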

PS: If you use the code blocks feature on the forum when sharing logs, it will make them a lot more readable :wink: Like this:

```
[root@jumphost onpremise]# docker logs sentry_onpremise_kafka_1
===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_SUPPORT_METRICS_ENABLE=false
CONFLUENT_VERSION=5.5.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=6eb1a4ea1869
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
KAFKA_LOG_RETENTION_HOURS=24
KAFKA_MAX_REQUEST_SIZE=50000000
KAFKA_MESSAGE_MAX_BYTES=50000000
KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
KAFKA_VERSION=
KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.12
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.38.0.13
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ...
===> Check if /var/lib/kafka/data is writable ...
===> Check if Zookeeper is healthy ...
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=6eb1a4ea1869
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-1127.19.1.el7.x86_64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=144MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2172MB
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=148MB
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
[main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
[main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
[main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.23.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.23.0.10:40402, server: zookeeper/172.23.0.2:2181
[main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.23.0.2:2181, sessionid = 0x10002d7c6e80000, negotiated timeout = 40000
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x10002d7c6e80000 closed
[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x10002d7c6e80000
===> Launching ...
===> Launching kafka ...
[2020-10-20 07:25:48,644] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-10-20 07:25:49,489] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
[2020-10-20 07:25:49,489] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
[2020-10-20 07:25:50,817] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2020-10-20 07:25:50,903] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2020-10-20 07:25:51,464] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-10-20 07:25:51,529] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2020-10-20 07:25:51,534] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2020-10-20 07:25:51,683] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-10-20 07:25:51,748] INFO Stat of the created znode at /brokers/ids/1001 is: 413,413,1603178751724,1603178751724,1,0,0,72060719816245249,180,0,413
(kafka.zk.KafkaZkClient)
[2020-10-20 07:25:51,749] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 413 (kafka.zk.KafkaZkClient)
[2020-10-20 07:25:52,184] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-10-20 07:25:52,398] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[root@jumphost onpremise]#
```

@BYK

Yes, I think two-factor auth and SMTP are not working because of some networking issue, but it’s weird because before upgrading to 20.10.1 everything worked as expected (two-factor auth as well as SMTP recovery). I double-checked all functionality before I did the upgrade.

Which service/container is responsible for processing two-factor auth?

PS: How can I paste logs in such a nice way as you did in your last message?

Kristaps

PS: This is the migration log; I also see some Kafka errors in it, maybe they will give additional clues:

LAST UPDATE:
After setting the same UTC time on the host machine and the containers, and restarting the whole environment, two-factor auth started to work as expected.
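For anyone else hitting this, a rough sketch of what the fix looks like on a systemd host (adjust for your distro):

```
# keep the host clock correct via NTP; containers share the host kernel clock
timedatectl set-ntp true
timedatectl set-timezone UTC

# restart the stack so every service picks up the corrected time
docker-compose down && docker-compose up -d
```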

Still struggling with SMTP and password recovery.

Hey @BYK

I checked the logs from when I switched to 20.10.1 and ran install.sh, and I see errors regarding Kafka bootstrapping. Maybe you have some ideas:

Sorry for the silence in the past few days. Glad setting the timezone fixed the issue for you. Regarding SMTP, my best guess is a firewall rule preventing the connection. Otherwise, I honestly don’t have many more ideas.
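One quick way to test the firewall theory, if you want to. Replace smtp.example.com:587 with your real SMTP host and port; the hostname and container name here are placeholders:

```
# from the host: does the SMTP port answer, and does STARTTLS negotiate?
openssl s_client -connect smtp.example.com:587 -starttls smtp

# from inside the web container, which is where Sentry actually sends mail from
docker exec sentry_onpremise_web_1 \
  python -c 'import socket; socket.create_connection(("smtp.example.com", 587), timeout=5); print("reachable")'
```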

You wrap them with triple backticks: ``` :wink:
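For example, typing this into the post editor:

````
```
[root@jumphost onpremise]# docker logs sentry_onpremise_kafka_1
...
```
````

renders the contents as a monospaced block like the log above.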

These are expected, transient errors while waiting for Kafka to be up. Snuba starts up faster than Kafka and then keeps retrying until Kafka is available.
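If you want to confirm things recovered, the tail of the logs should show the retries stopping once Kafka starts accepting connections (service names assume the stock docker-compose.yml):

```
docker-compose logs --tail=50 kafka
docker-compose logs --tail=50 snuba-consumer
```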
