Every time I try to upgrade, it fails.
It looks like Kafka never comes online, and then the upgrade just quits.
Any suggestions?
You can start by looking at (and sharing) your Kafka logs.
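If you're running the standard docker-compose setup, something along these lines should grab them (the --tail flag just keeps the output manageable):

docker-compose logs --tail=200 kafka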
Hi BYK,
Thanks for the reply.
I'll go see if I can find where those live.
I hope it's readable…
kafka_1 | ===> ENV Variables …
kafka_1 | ALLOW_UNSIGNED=false
kafka_1 | COMPONENT=kafka
kafka_1 | CONFLUENT_DEB_VERSION=1
kafka_1 | CONFLUENT_PLATFORM_LABEL=
kafka_1 | CONFLUENT_SUPPORT_METRICS_ENABLE=false
kafka_1 | CONFLUENT_VERSION=5.5.0
kafka_1 | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
kafka_1 | HOME=/root
kafka_1 | HOSTNAME=4e72a7d4773e
kafka_1 | KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
kafka_1 | KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
kafka_1 | KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
kafka_1 | KAFKA_LOG_RETENTION_HOURS=24
kafka_1 | KAFKA_MAX_REQUEST_SIZE=50000000
kafka_1 | KAFKA_MESSAGE_MAX_BYTES=50000000
kafka_1 | KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1
kafka_1 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
kafka_1 | KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
kafka_1 | KAFKA_VERSION=
kafka_1 | KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
kafka_1 | LANG=C.UTF-8
kafka_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
kafka_1 | PWD=/
kafka_1 | PYTHON_PIP_VERSION=8.1.2
kafka_1 | PYTHON_VERSION=2.7.9-1
kafka_1 | SCALA_VERSION=2.12
kafka_1 | SHLVL=1
kafka_1 | ZULU_OPENJDK_VERSION=8=8.38.0.13
kafka_1 | _=/usr/bin/env
kafka_1 | ===> User
kafka_1 | uid=0(root) gid=0(root) groups=0(root)
kafka_1 | ===> Configuring …
kafka_1 | ===> Running preflight checks …
kafka_1 | ===> Check if /var/lib/kafka/data is writable …
kafka_1 | ===> Check if Zookeeper is healthy …
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=4e72a7d4773e
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.4.0-201-generic
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=187MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2834MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=192MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
kafka_1 | [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
kafka_1 | [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka_1 | [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.20.0.4:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.20.0.5:36040, server: zookeeper/172.20.0.4:2181
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.20.0.4:2181, sessionid = 0x1000009d5cd0000, negotiated timeout = 40000
kafka_1 | [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000009d5cd0000
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000009d5cd0000 closed
kafka_1 | ===> Launching …
kafka_1 | ===> Launching kafka …
kafka_1 | [2021-02-12 14:43:50,703] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka_1 | [2021-02-12 14:43:51,570] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
kafka_1 | [2021-02-12 14:43:51,571] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
kafka_1 | [2021-02-12 14:43:52,649] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka_1 | org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /var/lib/kafka/data. A Kafka instance in another process or thread is using this directory.
kafka_1 | at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:249)
kafka_1 | at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
kafka_1 | at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
kafka_1 | at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
kafka_1 | at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
kafka_1 | at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
kafka_1 | at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
kafka_1 | at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
kafka_1 | at kafka.log.LogManager.lockLogDirs(LogManager.scala:244)
kafka_1 | at kafka.log.LogManager.<init>(LogManager.scala:105)
kafka_1 | at kafka.log.LogManager$.apply(LogManager.scala:1093)
kafka_1 | at kafka.server.KafkaServer.startup(KafkaServer.scala:274)
kafka_1 | at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:114)
kafka_1 | at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:66)
kafka_1 | ===> ENV Variables …
kafka_1 | ALLOW_UNSIGNED=false
kafka_1 | COMPONENT=kafka
kafka_1 | CONFLUENT_DEB_VERSION=1
kafka_1 | CONFLUENT_PLATFORM_LABEL=
kafka_1 | CONFLUENT_SUPPORT_METRICS_ENABLE=false
kafka_1 | CONFLUENT_VERSION=5.5.0
kafka_1 | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
kafka_1 | HOME=/root
kafka_1 | HOSTNAME=4e72a7d4773e
kafka_1 | KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
kafka_1 | KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
kafka_1 | KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
kafka_1 | KAFKA_LOG_RETENTION_HOURS=24
kafka_1 | KAFKA_MAX_REQUEST_SIZE=50000000
kafka_1 | KAFKA_MESSAGE_MAX_BYTES=50000000
kafka_1 | KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1
kafka_1 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
kafka_1 | KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
kafka_1 | KAFKA_VERSION=
kafka_1 | KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
kafka_1 | LANG=C.UTF-8
kafka_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
kafka_1 | PWD=/
kafka_1 | PYTHON_PIP_VERSION=8.1.2
kafka_1 | PYTHON_VERSION=2.7.9-1
kafka_1 | SCALA_VERSION=2.12
kafka_1 | SHLVL=1
kafka_1 | ZULU_OPENJDK_VERSION=8=8.38.0.13
kafka_1 | _=/usr/bin/env
kafka_1 | ===> User
kafka_1 | uid=0(root) gid=0(root) groups=0(root)
kafka_1 | ===> Configuring …
kafka_1 | ===> Running preflight checks …
kafka_1 | ===> Check if /var/lib/kafka/data is writable …
kafka_1 | ===> Check if Zookeeper is healthy …
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=4e72a7d4773e
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.4.0-201-generic
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=187MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2834MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=192MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
kafka_1 | [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
kafka_1 | [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka_1 | [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.20.0.4:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.20.0.2:48268, server: zookeeper/172.20.0.4:2181
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.20.0.4:2181, sessionid = 0x1000009d5cd0002, negotiated timeout = 40000
kafka_1 | [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000009d5cd0002
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000009d5cd0002 closed
kafka_1 | ===> Launching …
kafka_1 | ===> Launching kafka …
kafka_1 | [2021-02-12 14:44:02,325] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka_1 | [2021-02-12 14:44:03,152] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
kafka_1 | [2021-02-12 14:44:03,152] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
kafka_1 | [2021-02-12 14:44:03,881] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka_1 | org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /var/lib/kafka/data. A Kafka instance in another process or thread is using this directory.
kafka_1 | at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:249)
kafka_1 | at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
kafka_1 | at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
kafka_1 | at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
kafka_1 | at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
kafka_1 | at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
kafka_1 | at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
kafka_1 | at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
kafka_1 | at kafka.log.LogManager.lockLogDirs(LogManager.scala:244)
kafka_1 | at kafka.log.LogManager.<init>(LogManager.scala:105)
kafka_1 | at kafka.log.LogManager$.apply(LogManager.scala:1093)
kafka_1 | at kafka.server.KafkaServer.startup(KafkaServer.scala:274)
kafka_1 | at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:114)
kafka_1 | at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:66)
kafka_1 | ===> ENV Variables …
kafka_1 | ALLOW_UNSIGNED=false
kafka_1 | COMPONENT=kafka
kafka_1 | CONFLUENT_DEB_VERSION=1
kafka_1 | CONFLUENT_PLATFORM_LABEL=
kafka_1 | CONFLUENT_SUPPORT_METRICS_ENABLE=false
kafka_1 | CONFLUENT_VERSION=5.5.0
kafka_1 | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
kafka_1 | HOME=/root
kafka_1 | HOSTNAME=4e72a7d4773e
kafka_1 | KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
kafka_1 | KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
kafka_1 | KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
kafka_1 | KAFKA_LOG_RETENTION_HOURS=24
kafka_1 | KAFKA_MAX_REQUEST_SIZE=50000000
kafka_1 | KAFKA_MESSAGE_MAX_BYTES=50000000
kafka_1 | KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1
kafka_1 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
kafka_1 | KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
kafka_1 | KAFKA_VERSION=
kafka_1 | KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
kafka_1 | LANG=C.UTF-8
kafka_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
kafka_1 | PWD=/
kafka_1 | PYTHON_PIP_VERSION=8.1.2
kafka_1 | PYTHON_VERSION=2.7.9-1
kafka_1 | SCALA_VERSION=2.12
kafka_1 | SHLVL=1
kafka_1 | ZULU_OPENJDK_VERSION=8=8.38.0.13
kafka_1 | _=/usr/bin/env
kafka_1 | ===> User
kafka_1 | uid=0(root) gid=0(root) groups=0(root)
kafka_1 | ===> Configuring …
kafka_1 | ===> Running preflight checks …
kafka_1 | ===> Check if /var/lib/kafka/data is writable …
kafka_1 | ===> Check if Zookeeper is healthy …
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=4e72a7d4773e
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.4.0-201-generic
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=187MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2834MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=192MB
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
kafka_1 | [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
kafka_1 | [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka_1 | [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.20.0.4:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.20.0.2:48296, server: zookeeper/172.20.0.4:2181
kafka_1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.20.0.4:2181, sessionid = 0x1000009d5cd0004, negotiated timeout = 40000
kafka_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000009d5cd0004 closed
kafka_1 | [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000009d5cd0004
kafka_1 | ===> Launching …
kafka_1 | ===> Launching kafka …
kafka_1 | [2021-02-12 14:44:13,434] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka_1 | [2021-02-12 14:44:14,197] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
kafka_1 | [2021-02-12 14:44:14,198] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
kafka_1 | [2021-02-12 14:44:14,918] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka_1 | org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /var/lib/kafka/data. A Kafka instance in another process or thread is using this directory.
kafka_1 | at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:249)
kafka_1 | at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
kafka_1 | at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
kafka_1 | at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
kafka_1 | at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
kafka_1 | at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
kafka_1 | at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
kafka_1 | at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
kafka_1 | at kafka.log.LogManager.lockLogDirs(LogManager.scala:244)
kafka_1 | at kafka.log.LogManager.<init>(LogManager.scala:105)
kafka_1 | at kafka.log.LogManager$.apply(LogManager.scala:1093)
kafka_1 | at kafka.server.KafkaServer.startup(KafkaServer.scala:274)
kafka_1 | at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:114)
kafka_1 | at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:66)
I tried upgrading in a smaller step, from Sentry 10.0.0.dev0 to 10.0.1, using SENTRY_IMAGE=getsentry/sentry:98d6387 ./install.sh
But that errors out with a "manifest not found" error.
Yeah, just go to latest, which has many fixes for migrations from older versions, and it is safe.
Unless you keep restarting things (or specifically, kafka), it seems like your kafka service keeps crashing or getting killed. This might be due to running out of memory. Could that be the case?
I don't think so; I believe the server has 12 GB of memory.
I'll have to check the actual usage on Tuesday.
Stopping the current Docker containers before running install.sh seems to have resolved the Kafka issue.
And it gets further into the process now, but then fails when trying to modify the database, I think.
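For the record, this is roughly what I ran from the onpremise directory, stopping everything before reinstalling:

docker-compose down
./install.sh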
Defining variables and helpers …
Parsing command line …
Setting up error handling …
Checking minimum requirements …
Creating volumes for persistent storage …
Created sentry-data.
Created sentry-postgres.
Created sentry-redis.
Created sentry-zookeeper.
Created sentry-kafka.
Created sentry-clickhouse.
Created sentry-symbolicator.
Ensuring files from examples …
sentry/sentry.conf.py already exists, skipped creation.
sentry/config.yml already exists, skipped creation.
sentry/requirements.txt already exists, skipped creation.
Creating symbolicator/config.yml…
Creating relay/config.yml…
Generating secret key …
Replacing TSDB …
Fetching and updating Docker images …
nightly: Pulling from getsentry/sentry
Digest: sha256:9cf0af4abaa4c48d748f2f019b9e5b0d2aab0a8801346875fe2fa0e2545f7bab
Status: Image is up to date for getsentry/sentry:nightly
docker.io/getsentry/sentry:nightly
Building and tagging Docker images …
smtp uses an image, skipping
memcached uses an image, skipping
redis uses an image, skipping
postgres uses an image, skipping
zookeeper uses an image, skipping
kafka uses an image, skipping
clickhouse uses an image, skipping
geoipupdate uses an image, skipping
snuba-api uses an image, skipping
snuba-consumer uses an image, skipping
snuba-outcomes-consumer uses an image, skipping
snuba-sessions-consumer uses an image, skipping
snuba-transactions-consumer uses an image, skipping
snuba-replacer uses an image, skipping
snuba-subscription-consumer-events uses an image, skipping
snuba-subscription-consumer-transactions uses an image, skipping
symbolicator uses an image, skipping
web uses an image, skipping
cron uses an image, skipping
worker uses an image, skipping
ingest-consumer uses an image, skipping
post-process-forwarder uses an image, skipping
subscription-consumer-events uses an image, skipping
subscription-consumer-transactions uses an image, skipping
relay uses an image, skipping
nginx uses an image, skipping
Building snuba-cleanup
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
---> 9409e72b1828
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron && rm -r /var/lib/apt/lists/*
---> Running in 85a51609037a
Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [267 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [9504 B]
Fetched 8422 kB in 3s (2779 kB/s)
Reading package lists…
Reading package lists…
Building dependency tree…
Reading state information…
The following additional packages will be installed:
lsb-base sensible-utils
Suggested packages:
anacron logrotate checksecurity
Recommended packages:
default-mta | mail-transport-agent
The following NEW packages will be installed:
cron lsb-base sensible-utils
0 upgraded, 3 newly installed, 0 to remove and 3 not upgraded.
Need to get 143 kB of archives.
After this operation, 383 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 cron amd64 3.0pl1-134+deb10u1 [99.0 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 143 kB in 0s (1706 kB/s)
Selecting previously unselected package sensible-utils.
(Reading database …
(Reading database … 5%
(Reading database … 10%
(Reading database … 15%
(Reading database … 20%
(Reading database … 25%
(Reading database … 30%
(Reading database … 35%
(Reading database … 40%
(Reading database … 45%
(Reading database … 50%
(Reading database … 55%
(Reading database … 60%
(Reading database … 65%
(Reading database … 70%
(Reading database … 75%
(Reading database … 80%
(Reading database … 85%
(Reading database … 90%
(Reading database … 95%
(Reading database … 100%
(Reading database … 6840 files and directories currently installed.)
Preparing to unpack …/sensible-utils_0.0.12_all.deb …
Unpacking sensible-utils (0.0.12) …
Selecting previously unselected package lsb-base.
Preparing to unpack …/lsb-base_10.2019051400_all.deb …
Unpacking lsb-base (10.2019051400) …
Selecting previously unselected package cron.
Preparing to unpack …/cron_3.0pl1-134+deb10u1_amd64.deb …
Unpacking cron (3.0pl1-134+deb10u1) …
Setting up lsb-base (10.2019051400) …
Setting up sensible-utils (0.0.12) …
Setting up cron (3.0pl1-134+deb10u1) …
Adding group `crontab' (GID 101) …
Done.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container 85a51609037a
---> a4792c4d8902
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
---> bcabd3dd7b26
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
---> Running in 7b77c5db4b00
Removing intermediate container 7b77c5db4b00
---> b69dd225a179
Successfully built b69dd225a179
Successfully tagged snuba-cleanup-onpremise-local:latest
Building symbolicator-cleanup
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
---> d97b29d85aaa
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron && rm -r /var/lib/apt/lists/*
---> Running in 943d77eb03f9
Ign:1 http://deb.debian.org/debian stretch InRelease
Get:2 http://security.debian.org/debian-security stretch/updates InRelease [53.0 kB]
Get:3 http://deb.debian.org/debian stretch-updates InRelease [93.6 kB]
Get:4 http://deb.debian.org/debian stretch Release [118 kB]
Get:5 http://deb.debian.org/debian stretch Release.gpg [2410 B]
Get:6 http://security.debian.org/debian-security stretch/updates/main amd64 Packages [654 kB]
Get:7 http://deb.debian.org/debian stretch-updates/main amd64 Packages [2596 B]
Get:8 http://deb.debian.org/debian stretch/main amd64 Packages [7080 kB]
Fetched 8003 kB in 2s (3510 kB/s)
Reading package lists…
Reading package lists…
Building dependency tree…
Reading state information…
Suggested packages:
anacron logrotate checksecurity
Recommended packages:
exim4 | postfix | mail-transport-agent
The following NEW packages will be installed:
cron
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
Need to get 95.4 kB of archives.
After this operation, 257 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 cron amd64 3.0pl1-128+deb9u1 [95.4 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 95.4 kB in 0s (1818 kB/s)
Selecting previously unselected package cron.
(Reading database …
(Reading database … 5%
(Reading database … 10%
(Reading database … 15%
(Reading database … 20%
(Reading database … 25%
(Reading database … 30%
(Reading database … 35%
(Reading database … 40%
(Reading database … 45%
(Reading database … 50%
(Reading database … 55%
(Reading database … 60%
(Reading database … 65%
(Reading database … 70%
(Reading database … 75%
(Reading database … 80%
(Reading database … 85%
(Reading database … 90%
(Reading database … 95%
(Reading database … 100%
(Reading database … 6661 files and directories currently installed.)
Preparing to unpack …/cron_3.0pl1-128+deb9u1_amd64.deb …
Unpacking cron (3.0pl1-128+deb9u1) …
Setting up cron (3.0pl1-128+deb9u1) …
Adding group `crontab' (GID 101) …
Done.
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container 943d77eb03f9
---> 23ed6eb113c1
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
---> b2c062ee7db4
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
---> Running in 2fc86cdf1da5
Removing intermediate container 2fc86cdf1da5
---> 9fa52ef6afc6
Successfully built 9fa52ef6afc6
Successfully tagged symbolicator-cleanup-onpremise-local:latest
Building sentry-cleanup
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
---> caae48af1a8a
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron && rm -r /var/lib/apt/lists/*
---> Running in 5a95894b37a4
Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [267 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [9504 B]
Fetched 8422 kB in 2s (3419 kB/s)
Reading package lists…
Reading package lists…
Building dependency tree…
Reading state information…
The following additional packages will be installed:
lsb-base sensible-utils
Suggested packages:
anacron logrotate checksecurity
Recommended packages:
default-mta | mail-transport-agent
The following NEW packages will be installed:
cron lsb-base sensible-utils
0 upgraded, 3 newly installed, 0 to remove and 4 not upgraded.
Need to get 143 kB of archives.
After this operation, 383 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 cron amd64 3.0pl1-134+deb10u1 [99.0 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 143 kB in 0s (2088 kB/s)
Selecting previously unselected package sensible-utils.
(Reading database …
(Reading database … 5%
(Reading database … 10%
(Reading database … 15%
(Reading database … 20%
(Reading database … 25%
(Reading database … 30%
(Reading database … 35%
(Reading database … 40%
(Reading database … 45%
(Reading database … 50%
(Reading database … 55%
(Reading database … 60%
(Reading database … 65%
(Reading database … 70%
(Reading database … 75%
(Reading database … 80%
(Reading database … 85%
(Reading database … 90%
(Reading database … 95%
(Reading database … 100%
(Reading database … 11935 files and directories currently installed.)
Preparing to unpack …/sensible-utils_0.0.12_all.deb …
Unpacking sensible-utils (0.0.12) …
Selecting previously unselected package lsb-base.
Preparing to unpack …/lsb-base_10.2019051400_all.deb …
Unpacking lsb-base (10.2019051400) …
Selecting previously unselected package cron.
Preparing to unpack …/cron_3.0pl1-134+deb10u1_amd64.deb …
Unpacking cron (3.0pl1-134+deb10u1) …
Setting up lsb-base (10.2019051400) …
Setting up sensible-utils (0.0.12) …
Setting up cron (3.0pl1-134+deb10u1) …
Adding group `crontab' (GID 101) …
Done.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container 5a95894b37a4
---> d0ce11be8df0
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
---> 0dd9c3a44d2a
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
---> Running in f1aafc942dee
Removing intermediate container f1aafc942dee
---> d475e22c9ec0
Successfully built d475e22c9ec0
Successfully tagged sentry-cleanup-onpremise-local:latest
Docker images built.
Turning things off …
Removing network onpremise_default
Network onpremise_default not found.
Removing network sentry_onpremise_default
Network sentry_onpremise_default not found.
Setting up Zookeeper …
Creating network "sentry_onpremise_default" with the default driver
Creating volume "sentry_onpremise_sentry-secrets" with default driver
Creating volume "sentry_onpremise_sentry-smtp" with default driver
Creating volume "sentry_onpremise_sentry-zookeeper-log" with default driver
Creating volume "sentry_onpremise_sentry-kafka-log" with default driver
Creating volume "sentry_onpremise_sentry-smtp-log" with default driver
Creating volume "sentry_onpremise_sentry-clickhouse-log" with default driver
Bootstrapping and migrating Snuba …
Creating sentry_onpremise_clickhouse_1 …
Creating sentry_onpremise_redis_1 …
Creating sentry_onpremise_zookeeper_1 …
Creating sentry_onpremise_zookeeper_1 … done
Creating sentry_onpremise_kafka_1 …
Creating sentry_onpremise_redis_1 … done
Creating sentry_onpremise_kafka_1 … done
Creating sentry_onpremise_clickhouse_1 … done
+ '[' b = - ']'
+ snuba bootstrap --help
+ set -- snuba bootstrap --no-migrate --force
+ set -- gosu snuba snuba bootstrap --no-migrate --force
+ exec gosu snuba snuba bootstrap --no-migrate --force
%3|1613725698.522|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.19.0.5:9092 failed: Connection refused (after 11ms in state CONNECT)
2021-02-19 09:08:19,511 Connection to Kafka failed (attempt 0)
Traceback (most recent call last):
File "/usr/src/snuba/snuba/cli/bootstrap.py", line 55, in bootstrap
client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
%3|1613725699.512|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.19.0.5:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
%3|1613725700.514|FAIL|rdkafka#producer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.19.0.5:9092 failed: Connection refused (after 0ms in state CONNECT)
%3|1613725701.515|FAIL|rdkafka#producer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.19.0.5:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
2021-02-19 09:08:21,516 Connection to Kafka failed (attempt 1)
Traceback (most recent call last):
File "/usr/src/snuba/snuba/cli/bootstrap.py", line 55, in bootstrap
client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
%3|1613725702.524|FAIL|rdkafka#producer-3| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.19.0.5:9092 failed: Connection refused (after 0ms in state CONNECT)
%3|1613725703.524|FAIL|rdkafka#producer-3| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.19.0.5:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
2021-02-19 09:08:23,526 Connection to Kafka failed (attempt 2)
Traceback (most recent call last):
File "/usr/src/snuba/snuba/cli/bootstrap.py", line 55, in bootstrap
client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
2021-02-19 09:08:26,293 Topic outcomes created
2021-02-19 09:08:26,293 Topic events created
2021-02-19 09:08:26,294 Topic event-replacements created
2021-02-19 09:08:26,294 Topic snuba-commit-log created
2021-02-19 09:08:26,294 Topic ingest-sessions created
2021-02-19 09:08:26,295 Topic cdc created
Starting sentry_onpremise_redis_1 …
Starting sentry_onpremise_clickhouse_1 …
Starting sentry_onpremise_zookeeper_1 …
Starting sentry_onpremise_zookeeper_1 … done
Starting sentry_onpremise_clickhouse_1 … done
Starting sentry_onpremise_redis_1 … done
Starting sentry_onpremise_kafka_1 …
Starting sentry_onpremise_kafka_1 … done
+ '[' m = - ']'
+ snuba migrations --help
+ set -- snuba migrations migrate --force
+ set -- gosu snuba snuba migrations migrate --force
+ exec gosu snuba snuba migrations migrate --force
Traceback (most recent call last):
File "/usr/src/snuba/snuba/clickhouse/native.py", line 80, in execute
result: Sequence[Any] = conn.execute(
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 214, in execute
rv = self.process_ordinary_query(
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 340, in process_ordinary_query
return self.receive_result(with_column_types=with_column_types,
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 83, in receive_result
return result.get_result()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/result.py", line 48, in get_result
for packet in self.packet_generator:
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 95, in packet_generator
packet = self.receive_packet()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 112, in receive_packet
raise packet.exception
clickhouse_driver.errors.ServerException: Code: 44.
DB::Exception: ALTER of key column transaction_name must be metadata-only. Stack trace:
- Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
- DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
- ? @ 0xd913054 in /usr/bin/clickhouse
- DB::InterpreterAlterQuery::execute() @ 0xd0a6585 in /usr/bin/clickhouse
- ? @ 0xd550808 in /usr/bin/clickhouse
- DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd553441 in /usr/bin/clickhouse
- DB::TCPHandler::runImpl() @ 0x9024489 in /usr/bin/clickhouse
- DB::TCPHandler::run() @ 0x9025470 in /usr/bin/clickhouse
- Poco::Net::TCPServerConnection::start() @ 0xe3ac69b in /usr/bin/clickhouse
- Poco::Net::TCPServerDispatcher::run() @ 0xe3acb1d in /usr/bin/clickhouse
- Poco::PooledThread::run() @ 0x105c3317 in /usr/bin/clickhouse
- Poco::ThreadImpl::runnableEntry(void*) @ 0x105bf11c in /usr/bin/clickhouse
- ? @ 0x105c0abd in /usr/bin/clickhouse
- start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
- clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 33, in <module>
sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/migrations.py", line 52, in migrate
runner.run_all(force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 148, in run_all
self._run_migration_impl(migration_key, force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 208, in _run_migration_impl
migration.forwards(context, dry_run)
File "/usr/src/snuba/snuba/migrations/migration.py", line 100, in forwards
op.execute(local=True)
File "/usr/src/snuba/snuba/migrations/operations.py", line 41, in execute
connection.execute(self.format_sql())
File "/usr/src/snuba/snuba/clickhouse/native.py", line 103, in execute
raise ClickhouseError(e.code, e.message) from e
snuba.clickhouse.errors.ClickhouseError: [44] DB::Exception: ALTER of key column transaction_name must be metadata-only. Stack trace:
- Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
- DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
- ? @ 0xd913054 in /usr/bin/clickhouse
- DB::InterpreterAlterQuery::execute() @ 0xd0a6585 in /usr/bin/clickhouse
- ? @ 0xd550808 in /usr/bin/clickhouse
- DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd553441 in /usr/bin/clickhouse
- DB::TCPHandler::runImpl() @ 0x9024489 in /usr/bin/clickhouse
- DB::TCPHandler::run() @ 0x9025470 in /usr/bin/clickhouse
- Poco::Net::TCPServerConnection::start() @ 0xe3ac69b in /usr/bin/clickhouse
- Poco::Net::TCPServerDispatcher::run() @ 0xe3acb1d in /usr/bin/clickhouse
- Poco::PooledThread::run() @ 0x105c3317 in /usr/bin/clickhouse
- Poco::ThreadImpl::runnableEntry(void*) @ 0x105bf11c in /usr/bin/clickhouse
- ? @ 0x105c0abd in /usr/bin/clickhouse
- start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
- clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
An error occurred, caught SIGERR on line 283
Cleaning up…
Trying to run install.sh again straight afterwards results in:
Starting sentry_onpremise_zookeeper_1 …
Starting sentry_onpremise_clickhouse_1 …
Starting sentry_onpremise_zookeeper_1 … done
Starting sentry_onpremise_clickhouse_1 … done
Starting sentry_onpremise_redis_1 …
Starting sentry_onpremise_kafka_1 …
Starting sentry_onpremise_kafka_1 … done
Starting sentry_onpremise_redis_1 … done
+ '[' m = - ']'
+ snuba migrations --help
+ set -- snuba migrations migrate --force
+ set -- gosu snuba snuba migrations migrate --force
+ exec gosu snuba snuba migrations migrate --force
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 33, in <module>
sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/migrations.py", line 52, in migrate
runner.run_all(force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 136, in run_all
pending_migrations = self._get_pending_migrations()
File "/usr/src/snuba/snuba/migrations/runner.py", line 288, in _get_pending_migrations
raise MigrationInProgress(migration_key)
raise MigrationInProgress(migration_key)
snuba.migrations.errors.MigrationInProgress: transactions: 0003_transactions_onpremise_fix_columns
An error occurred, caught SIGERR on line 283
Cleaning up…
Getting closer…
Listing the migrations with docker-compose --no-ansi run --rm snuba-api migrations list
shows the following output:
system
0001_migrations
events
0001_events_initial
0002_events_onpremise_compatibility
0003_errors
0004_errors_onpremise_compatibility
0005_events_tags_hash_map
0006_errors_tags_hash_map
0007_groupedmessages
0008_groupassignees
0009_errors_add_http_fields
0010_groupedmessages_onpremise_compatibility
0011_rebuild_errors
0012_errors_make_level_nullable
transactions
0001_transactions
0002_transactions_onpremise_fix_orderby_and_partitionby
[-] 0003_transactions_onpremise_fix_columns (IN PROGRESS) (blocking)
0004_transactions_add_tags_hash_map (blocking)
0005_transactions_add_measurements
Then reversing the migration using docker-compose --no-ansi run --rm snuba-api migrations reverse --group transactions --migration-id 0003_transactions_onpremise_fix_columns,
followed by docker-compose --no-ansi run --rm snuba-api migrations migrate --force,
returns us to the previous problem of:
+ exec gosu snuba snuba migrations migrate --force
Traceback (most recent call last):
File "/usr/src/snuba/snuba/clickhouse/native.py", line 80, in execute
result: Sequence[Any] = conn.execute(
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 214, in execute
rv = self.process_ordinary_query(
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 340, in process_ordinary_query
return self.receive_result(with_column_types=with_column_types,
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 83, in receive_result
return result.get_result()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/result.py", line 48, in get_result
for packet in self.packet_generator:
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 95, in packet_generator
packet = self.receive_packet()
File "/usr/local/lib/python3.8/site-packages/clickhouse_driver/client.py", line 112, in receive_packet
raise packet.exception
clickhouse_driver.errors.ServerException: Code: 44.
DB::Exception: ALTER of key column transaction_name must be metadata-only. Stack trace:
- Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
- DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
- ? @ 0xd913054 in /usr/bin/clickhouse
- DB::InterpreterAlterQuery::execute() @ 0xd0a6585 in /usr/bin/clickhouse
- ? @ 0xd550808 in /usr/bin/clickhouse
- DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd553441 in /usr/bin/clickhouse
- DB::TCPHandler::runImpl() @ 0x9024489 in /usr/bin/clickhouse
- DB::TCPHandler::run() @ 0x9025470 in /usr/bin/clickhouse
- Poco::Net::TCPServerConnection::start() @ 0xe3ac69b in /usr/bin/clickhouse
- Poco::Net::TCPServerDispatcher::run() @ 0xe3acb1d in /usr/bin/clickhouse
- Poco::PooledThread::run() @ 0x105c3317 in /usr/bin/clickhouse
- Poco::ThreadImpl::runnableEntry(void*) @ 0x105bf11c in /usr/bin/clickhouse
- ? @ 0x105c0abd in /usr/bin/clickhouse
- start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
- clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/snuba", line 33, in <module>
sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/src/snuba/snuba/cli/migrations.py", line 52, in migrate
runner.run_all(force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 148, in run_all
self._run_migration_impl(migration_key, force=force)
File "/usr/src/snuba/snuba/migrations/runner.py", line 208, in _run_migration_impl
migration.forwards(context, dry_run)
File "/usr/src/snuba/snuba/migrations/migration.py", line 100, in forwards
op.execute(local=True)
File "/usr/src/snuba/snuba/migrations/operations.py", line 41, in execute
connection.execute(self.format_sql())
File "/usr/src/snuba/snuba/clickhouse/native.py", line 103, in execute
raise ClickhouseError(e.code, e.message) from e
snuba.clickhouse.errors.ClickhouseError: [44] DB::Exception: ALTER of key column transaction_name must be metadata-only. Stack trace:
- Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x105351b0 in /usr/bin/clickhouse
- DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x8f4172d in /usr/bin/clickhouse
- ? @ 0xd913054 in /usr/bin/clickhouse
- DB::InterpreterAlterQuery::execute() @ 0xd0a6585 in /usr/bin/clickhouse
- ? @ 0xd550808 in /usr/bin/clickhouse
- DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, bool) @ 0xd553441 in /usr/bin/clickhouse
- DB::TCPHandler::runImpl() @ 0x9024489 in /usr/bin/clickhouse
- DB::TCPHandler::run() @ 0x9025470 in /usr/bin/clickhouse
- Poco::Net::TCPServerConnection::start() @ 0xe3ac69b in /usr/bin/clickhouse
- Poco::Net::TCPServerDispatcher::run() @ 0xe3acb1d in /usr/bin/clickhouse
- Poco::PooledThread::run() @ 0x105c3317 in /usr/bin/clickhouse
- Poco::ThreadImpl::runnableEntry(void*) @ 0x105bf11c in /usr/bin/clickhouse
- ? @ 0x105c0abd in /usr/bin/clickhouse
- start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
- clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
@BYK Any idea how we can resolve this database migration problem?
Should I create an issue on GitHub?
Hey @Scale, were you by chance a very early user of transactions? And can you tell me if you have any data in your transactions_local ClickHouse table right now?
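You should be able to check with something along these lines, assuming the default onpremise service names:

docker-compose exec clickhouse clickhouse-client --query 'SELECT count() FROM transactions_local'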
Hi lynnagara,
To be honest, I don't know when it was introduced.
But the table is indeed empty.
This implies that you are doing something different with the docker-compose network, which may explain the issues you've been getting. We have auto-shutdown during upgrades, but those steps seem to have failed for you.
For this one, I’ll ping @lynnagara for more info.
@Scale I’m not sure why but I think you have an old version of the transactions table that the migration system can’t recover from. Since you have no data in there my suggestion would be to first reverse the transaction migrations 0003, 0002 and 0001. This will drop the table. Then migrate forward to recreate.
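Using the same reverse command you ran earlier, the sequence would look something like this (migration IDs taken from your migrations list output above):

docker-compose --no-ansi run --rm snuba-api migrations reverse --group transactions --migration-id 0003_transactions_onpremise_fix_columns
docker-compose --no-ansi run --rm snuba-api migrations reverse --group transactions --migration-id 0002_transactions_onpremise_fix_orderby_and_partitionby
docker-compose --no-ansi run --rm snuba-api migrations reverse --group transactions --migration-id 0001_transactions
docker-compose --no-ansi run --rm snuba-api migrations migrate --force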
I had resolved the issue just before your reply: I reversed the 3rd migration, dropped the table, and manually recreated it with the changes from the 3rd migration script.
Then the installation continued without a problem.