Sentry not logging events

Hi
Our on-premise Sentry stopped logging events after our server's disk filled up. We increased the disk space and re-ran ./install.sh followed by docker-compose up -d, but events still do not appear.

We can't see any events in Sentry. We also need a solution for the volume issue: we don't need events older than two weeks, so is there any way to purge them? The Postgres volume is consuming most of the disk space.
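(For context, these are roughly the commands we used to conclude that the Postgres volume is the biggest consumer; the /var/lib/docker path assumes Docker's default data root.)

```shell
# Per-volume disk usage as reported by Docker itself
docker system df -v

# Direct measurement of the named Sentry volumes (default data root assumed)
sudo du -sh /var/lib/docker/volumes/sentry-* | sort -h

# Free space remaining on the host
df -h
```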
Below are the errors from the respective containers.

Errors from sentry_onpremise_post-process-forwarder_1:

07:15:30 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.

07:15:41 [INFO] sentry.plugins.github: apps-not-configured
%3|1594624541.563|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.10:9092 failed: Connection refused
%3|1594624541.563|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.10:9092 failed: Connection refused
%3|1594624541.563|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
%3|1594624541.582|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.10:9092 failed: Connection refused
%3|1594624541.583|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.10:9092 failed: Connection refused
%3|1594624541.583|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
%3|1594625066.948|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Disconnected
%3|1594625066.948|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Disconnected
%3|1594625066.948|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
%3|1594625066.950|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Disconnected
%3|1594625066.950|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Disconnected
%3|1594625066.950|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
07:25:27 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
07:25:39 [INFO] sentry.plugins.github: apps-not-configured
%3|1594625140.396|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.8:9092 failed: Connection refused
%3|1594625140.397|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.8:9092 failed: Connection refused
%3|1594625140.397|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
%3|1594625140.401|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.8:9092 failed: Connection refused
%3|1594625140.401|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.8:9092 failed: Connection refused
%3|1594625140.401|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
%3|1594625148.725|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.8:9092 failed: Connection refused
%3|1594625148.725|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.8:9092 failed: Connection refused
%3|1594625148.725|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
%3|1594625148.725|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.8:9092 failed: Connection refused
%3|1594625148.725|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.28.0.8:9092 failed: Connection refused
%3|1594625148.725|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down

I'm not sure whether this is an error or not, in sentry_onpremise_memcached_1:

Signal handled: Terminated.
Signal handled: Terminated.

sentry_onpremise_postgres_1 logs:

PostgreSQL Database directory appears to contain a database; Skipping initialization

LOG: database system was shut down at 2020-07-13 06:37:49 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
ERROR: relation "south_migrationhistory" does not exist at character 15
STATEMENT: SELECT 1 FROM south_migrationhistory LIMIT 1
LOG: received smart shutdown request
LOG: autovacuum launcher shutting down
LOG: shutting down
LOG: database system is shut down

PostgreSQL Database directory appears to contain a database; Skipping initialization

LOG: database system was shut down at 2020-07-13 06:38:43 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
LOG: received smart shutdown request
LOG: autovacuum launcher shutting down
LOG: shutting down
LOG: database system is shut down

PostgreSQL Database directory appears to contain a database; Skipping initialization

LOG: database system was shut down at 2020-07-13 07:24:28 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started

sentry_onpremise_kafka_1 logs:

[2020-07-13 06:38:19,960] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-07-13 06:38:20,447] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
[2020-07-13 06:38:20,447] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
[2020-07-13 06:38:21,004] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2020-07-13 06:38:21,091] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2020-07-13 06:38:21,356] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-07-13 06:38:21,410] INFO [SocketServer brokerId=1002] Started 1 acceptor threads (kafka.network.SocketServer)
[2020-07-13 06:38:21,496] INFO Creating /brokers/ids/1002 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-07-13 06:38:21,499] INFO Result of znode creation at /brokers/ids/1002 is: OK (kafka.zk.KafkaZkClient)
[2020-07-13 06:38:21,500] INFO Registered broker 1002 at path /brokers/ids/1002 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2020-07-13 06:38:21,727] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-07-13 06:38:21,779] INFO [SocketServer brokerId=1002] Started processors for 1 acceptors (kafka.network.SocketServer)
[2020-07-13 06:38:21,801] WARN Attempting to send response via channel for which there is no open connection, connection id 172.28.0.5:9092-172.28.0.6:58662-0 (kafka.network.Processor)
[2020-07-13 06:38:21,888] ERROR [Controller id=1002 epoch=26] Controller 1002 epoch 26 failed to change state for partition __consumer_offsets-22 from OfflinePartition to OnlinePartition (state.change.logger)
kafka.common.StateChangeFailedException: Failed to elect leader for partition __consumer_offsets-22 under strategy OfflinePartitionLeaderElectionStrategy
at kafka.controller.PartitionStateMachine$$anonfun$doElectLeaderForPartitions$3.apply(PartitionStateMachine.scala:366)
at kafka.controller.PartitionStateMachine$$anonfun$doElectLeaderForPartitions$3.apply(PartitionStateMachine.scala:364)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:364)
at kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:292)
at kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:210)
at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:133)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:123)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:109)
at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:66)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:260)
at kafka.controller.KafkaController.kafka$controller$KafkaController$$elect(KafkaController.scala:1221)
at kafka.controller.KafkaController$Startup$.process(KafkaController.scala:1134)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:89)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:89)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:89)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:88)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2020-07-13 06:38:21,890] ERROR [Controller id=1002 epoch=26] Controller 1002 epoch 26 failed to change state for partition __consumer_offsets-30 from OfflinePartition to OnlinePartition (state.change.logger)
kafka.common.StateChangeFailedException: Failed to elect leader for partition __consumer_offsets-30 under strategy OfflinePartitionLeaderElectionStrategy
at kafka.controller.PartitionStateMachine$$anonfun$doElectLeaderForPartitions$3.apply(PartitionStateMachine.scala:366)
at kafka.controller.PartitionStateMachine$$anonfun$doElectLeaderForPartitions$3.apply(PartitionStateMachine.scala:364)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:364)
at kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:292)
at kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:210)
at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:133)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:123)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:109)
at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:66)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:260)
at kafka.controller.KafkaController.kafka$controller$KafkaController$$elect(KafkaController.scala:1221)
at kafka.controller.KafkaController$Startup$.process(KafkaController.scala:1134)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:89)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:89)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:89)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:88)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2020-07-13 06:38:21,891] ERROR [Controller id=1002 epoch=26] Controller 1002 epoch 26 failed to change state for partition __consumer_offsets-8 from OfflinePartition to OnlinePartition (state.change.logger)
kafka.common.StateChangeFailedException: Failed to elect leader for partition __consumer_offsets-8 under strategy OfflinePartitionLeaderElectionStrategy
at kafka.controller.PartitionStateMachine$$anonfun$doElectLeaderForPartitions$3.apply(PartitionStateMachine.scala:366)
at kafka.controller.PartitionStateMachine$$anonfun$doElectLeaderForPartitions$3.apply(PartitionStateMachine.scala:364)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:364)
at kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:292)
at kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:210)
at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:133)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:123)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:109)
at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:66)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:260)
at kafka.controller.KafkaController.kafka$controller$KafkaController$$elect(KafkaController.scala:1221)
at kafka.controller.KafkaController$Startup$.process(KafkaController.scala:1134)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:89)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:89)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:89)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:88)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2020-07-13 06:38:21,892] ERROR [Controller id=1002 epoch=26] Controller 1002 epoch 26 failed to change state for partition __consumer_offsets-21 from OfflinePartition to OnlinePartition (state.change.logger)
kafka.common.StateChangeFailedException: Failed to elect leader for partition __consumer_offsets-21 under strategy OfflinePartitionLeaderElectionStrategy
at kafka.controller.PartitionStateMachine$$anonfun$doElectLeaderForPartitions$3.apply(PartitionStateMachine.scala:366)
at kafka.controller.PartitionStateMachine$$anonfun$doElectLeaderForPartitions$3.apply(PartitionStateMachine.scala:364)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:364)
at kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:292)
at kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:210)
at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:133)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:123)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:109)
at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:66)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:260)
at kafka.controller.KafkaController.kafka$controller$KafkaController$$elect(KafkaController.scala:1221)
at kafka.controller.KafkaController$Startup$.process(KafkaController.scala:1134)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:89)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:89)
at kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:89)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
at kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:88)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2020-07-13 06:38:21,893] ERROR [Controller id=1002 epoch=26] Controller 1002 epoch 26 failed to change state for partition __consumer_offsets-4 from OfflinePartition to OnlinePartition (state.change.logger)
kafka.common.StateChangeFailedException: Failed to elect leader for partition __consumer_offsets-4 under strategy OfflinePartitionLeaderElection

ClickHouse logs:

Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Include not found: networks
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Include not found: networks
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging trace to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Include not found: networks
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression

Below are the logs from the Sentry install:

Creating volumes for persistent storage…
Created sentry-data.
Created sentry-postgres.
Created sentry-redis.
Created sentry-zookeeper.
Created sentry-kafka.
Created sentry-clickhouse.
Created sentry-symbolicator.

sentry/sentry.conf.py already exists, skipped creation.
sentry/config.yml already exists, skipped creation.
sentry/requirements.txt already exists, skipped creation.

Generating secret key…
Secret key written to sentry/config.yml

Building and tagging Docker images…

Some service image(s) must be built from source by running:
docker-compose build snuba-cleanup cron worker symbolicator-cleanup sentry-cleanup web post-process-forwarder
latest: Pulling from getsentry/sentry

Successfully built d5de77feb034
Successfully tagged snuba-cleanup-onpremise-local:latest
Building snuba-cleanup … done

Docker images built.
Bootstrapping Snuba…
Creating network "sentry_onpremise_default" with the default driver
Creating sentry_onpremise_zookeeper_1 …
Creating sentry_onpremise_redis_1 …
Creating sentry_onpremise_clickhouse_1 …
Creating sentry_onpremise_zookeeper_1 … done
Creating sentry_onpremise_kafka_1 …
Creating sentry_onpremise_clickhouse_1 … done
Creating sentry_onpremise_redis_1 … done
Creating sentry_onpremise_kafka_1 … done

  • '[' b = - ']'
  • snuba bootstrap --help
  • set -- snuba bootstrap --force
  • set gosu snuba snuba bootstrap --force
  • exec gosu snuba snuba bootstrap --force
    2020-07-13 06:38:18,397 Connection to Kafka failed (attempt 0)
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 56, in bootstrap
    client.list_topics(timeout=1)
    cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str=“Failed to get metadata: Local: Broker transport failure”}
    2020-07-13 06:38:20,400 Connection to Kafka failed (attempt 1)
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 56, in bootstrap
    client.list_topics(timeout=1)
    cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str=“Failed to get metadata: Local: Broker transport failure”}
    2020-07-13 06:38:22,403 Connection to Kafka failed (attempt 2)
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 56, in bootstrap
    client.list_topics(timeout=1)
    cimpl.KafkaException: KafkaError{code=_TIMED_OUT,val=-185,str=“Failed to get metadata: Local: Timed out”}
    2020-07-13 06:38:23,520 Failed to create topic outcomes
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 92, in bootstrap
    future.result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 435, in result
    return self.__get_result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 384, in __get_result
    raise self._exception
    cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str=“Topic ‘outcomes’ already exists.”}
    2020-07-13 06:38:23,521 Failed to create topic cdc
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 92, in bootstrap
    future.result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 428, in result
    return self.__get_result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 384, in __get_result
    raise self._exception
    cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str=“Topic ‘cdc’ already exists.”}
    2020-07-13 06:38:23,522 Failed to create topic events
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 92, in bootstrap
    future.result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 428, in result
    return self.__get_result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 384, in __get_result
    raise self._exception
    cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str=“Topic ‘events’ already exists.”}
    2020-07-13 06:38:23,522 Failed to create topic event-replacements
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 92, in bootstrap
    future.result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 428, in result
    return self.__get_result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 384, in __get_result
    raise self._exception
    cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str=“Topic ‘event-replacements’ already exists.”}
    2020-07-13 06:38:23,522 Failed to create topic snuba-commit-log
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 92, in bootstrap
    future.result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 428, in result
    return self.__get_result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 384, in __get_result
    raise self._exception
    cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str=“Topic ‘snuba-commit-log’ already exists.”}
    2020-07-13 06:38:23,522 Failed to create topic ingest-sessions
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 92, in bootstrap
    future.result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 428, in result
    return self.__get_result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 384, in __get_result
    raise self._exception
    cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str=“Topic ‘ingest-sessions’ already exists.”}
    2020-07-13 06:38:23,523 Failed to create topic errors-replacements
    Traceback (most recent call last):
    File “/usr/src/snuba/snuba/cli/bootstrap.py”, line 92, in bootstrap
    future.result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 428, in result
    return self.__get_result()
    File “/usr/local/lib/python3.7/concurrent/futures/_base.py”, line 384, in __get_result
    raise self._exception
    cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str=“Topic ‘errors-replacements’ already exists.”}
    2020-07-13 06:38:23,535 Creating tables for storage events
    2020-07-13 06:38:23,539 Migrating storage events
    2020-07-13 06:38:23,569 Creating tables for storage errors
    2020-07-13 06:38:23,572 Migrating storage errors
    2020-07-13 06:38:23,594 Creating tables for storage groupedmessages
    2020-07-13 06:38:23,595 Migrating storage groupedmessages
    2020-07-13 06:38:23,601 Creating tables for storage groupassignees
    2020-07-13 06:38:23,602 Migrating storage groupassignees
    2020-07-13 06:38:23,607 Creating tables for storage outcomes_raw
    2020-07-13 06:38:23,608 Migrating storage outcomes_raw
    2020-07-13 06:38:23,611 Column ‘size’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:23,614 Column ‘size’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:23,614 Creating tables for storage outcomes_hourly
    2020-07-13 06:38:23,619 Migrating storage outcomes_hourly
    2020-07-13 06:38:23,622 Creating tables for storage sessions_raw
    2020-07-13 06:38:23,623 Migrating storage sessions_raw
    2020-07-13 06:38:23,628 Creating tables for storage sessions_hourly
    2020-07-13 06:38:23,647 Migrating storage sessions_hourly
    2020-07-13 06:38:23,654 Creating tables for storage transactions
    2020-07-13 06:38:23,656 Migrating storage transactions
    2020-07-13 06:38:23,662 Column ‘_start_date’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:23,662 Column ‘_finish_date’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:23,668 Column ‘_start_date’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:23,668 Column ‘_finish_date’ exists in local ClickHouse but not in schema!
    Starting sentry_onpremise_redis_1 …
    Starting sentry_onpremise_clickhouse_1 …
    Starting sentry_onpremise_zookeeper_1 …
    Starting sentry_onpremise_redis_1 … done
    Starting sentry_onpremise_clickhouse_1 … done
    Starting sentry_onpremise_zookeeper_1 … done
    Starting sentry_onpremise_kafka_1 …
    Starting sentry_onpremise_kafka_1 … done
  • '[' m = - ']'
  • snuba migrate --help
  • set -- snuba migrate
  • set gosu snuba snuba migrate
  • exec gosu snuba snuba migrate
    2020-07-13 06:38:27,626 Creating tables for storage events
    2020-07-13 06:38:27,641 Migrating storage events
    2020-07-13 06:38:27,672 Creating tables for storage errors
    2020-07-13 06:38:27,675 Migrating storage errors
    2020-07-13 06:38:27,698 Creating tables for storage groupedmessages
    2020-07-13 06:38:27,699 Migrating storage groupedmessages
    2020-07-13 06:38:27,706 Creating tables for storage groupassignees
    2020-07-13 06:38:27,707 Migrating storage groupassignees
    2020-07-13 06:38:27,712 Creating tables for storage outcomes_raw
    2020-07-13 06:38:27,713 Migrating storage outcomes_raw
    2020-07-13 06:38:27,716 Column ‘size’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:27,719 Column ‘size’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:27,719 Creating tables for storage outcomes_hourly
    2020-07-13 06:38:27,721 Migrating storage outcomes_hourly
    2020-07-13 06:38:27,724 Creating tables for storage sessions_raw
    2020-07-13 06:38:27,725 Migrating storage sessions_raw
    2020-07-13 06:38:27,730 Creating tables for storage sessions_hourly
    2020-07-13 06:38:27,735 Migrating storage sessions_hourly
    2020-07-13 06:38:27,741 Creating tables for storage transactions
    2020-07-13 06:38:27,743 Migrating storage transactions
    2020-07-13 06:38:27,749 Column ‘_start_date’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:27,749 Column ‘_finish_date’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:27,755 Column ‘_start_date’ exists in local ClickHouse but not in schema!
    2020-07-13 06:38:27,755 Column ‘_finish_date’ exists in local ClickHouse but not in schema!

Setting up database…
Starting sentry_onpremise_clickhouse_1 …
Creating sentry_onpremise_smtp_1 …
Starting sentry_onpremise_clickhouse_1 … done
Starting sentry_onpremise_redis_1 …
Creating sentry_onpremise_postgres_1 …
Creating sentry_onpremise_symbolicator_1 …
Starting sentry_onpremise_redis_1 … done
Creating sentry_onpremise_memcached_1 …
Starting sentry_onpremise_zookeeper_1 …
Starting sentry_onpremise_zookeeper_1 … done
Starting sentry_onpremise_kafka_1 …
Starting sentry_onpremise_kafka_1 … done
Creating sentry_onpremise_snuba-api_1 …
Creating sentry_onpremise_snuba-replacer_1 …
Creating sentry_onpremise_snuba-consumer_1 …
Creating sentry_onpremise_memcached_1 … done
Creating sentry_onpremise_postgres_1 … done
Creating sentry_onpremise_symbolicator_1 … done
Creating sentry_onpremise_snuba-replacer_1 … done
Creating sentry_onpremise_snuba-consumer_1 … done
Creating sentry_onpremise_smtp_1 … done
Creating sentry_onpremise_snuba-api_1 … done
06:38:36 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
06:38:39 [INFO] sentry.plugins.github: apps-not-configured
Operations to perform:
Apply all migrations: admin, auth, contenttypes, jira_ac, nodestore, sentry, sessions, sites, social_auth
Running migrations:
No migrations to apply.
Creating missing DSNs
Correcting Group.num_comments counter
Cleaning up…


You’re all done!

Please help out!

Thanks

I think you are looking for this: https://github.com/getsentry/onpremise#event-retention

That said, event data is not stored in Postgres; it is stored in ClickHouse, so the retention setting alone probably won't shrink the Postgres volume much.
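Concretely, the on-premise repo reads the retention period from the .env file next to docker-compose.yml, and the *-cleanup containers use it to purge old data. A minimal sketch for two-week retention (the default in the repo is 90 days, if I remember correctly):

```shell
# .env in the onpremise checkout: keep 14 days of data instead of 90
SENTRY_EVENT_RETENTION_DAYS=14
```

After changing it, re-run ./install.sh (or at least docker-compose up -d) so the cleanup containers pick up the new value. For an immediate one-off purge of old Postgres data you can also run the cleanup task by hand, e.g. docker-compose run --rm web cleanup --days 14 — assuming the web image's entrypoint is the sentry CLI, as in recent onpremise versions.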

As for not being able to receive events: it looks like your Kafka service is down, either from network or storage issues, so I'd look there first.
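A few things worth checking, assuming the standard onpremise compose setup (the container and volume names match the ones in your logs):

```shell
# Is the broker actually running, and why did it last exit?
docker-compose ps kafka zookeeper
docker-compose logs --tail=100 kafka zookeeper

# Kafka is very sensitive to a full disk; confirm space really is free now
df -h

# Once space is available, a restart is often enough
docker-compose restart zookeeper kafka
```

If Kafka still crash-loops after the disk is freed (corrupt log segments are common after running out of space), a last-resort fix is to recreate the Kafka and ZooKeeper volumes: docker-compose down, then docker volume rm sentry-kafka sentry-zookeeper, then ./install.sh and docker-compose up -d again. Note that this discards any queued, unprocessed events.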