Sentry no longer catches errors

Since yesterday, our Sentry instance hasn't been working: the web app is up, but no events are captured.
I've run ./install.sh to upgrade to the latest version.
I’ve also run the cleanup command: /usr/bin/docker-compose --file /home/sentry/onpremise/docker-compose.yml exec worker sentry cleanup --days 30

My server has its own nginx instance that listens on 443, uses our SSL certificates, and proxies requests to Sentry on port 9000. Maybe it's now redundant since an nginx container exists (but I don't know how to configure that container to listen on the host's port 443 and use our certificates).
But I don't think this is the origin of the problem.
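
For reference, the host-level nginx proxy described above looks roughly like this (a sketch; the certificate paths are illustrative):

server {
    listen 443 ssl;
    server_name log.tomhealth.fr;

    # illustrative paths; the real certificates live elsewhere on the host
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        # forward to the Sentry stack published on port 9000 (as described above)
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}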

Here are the logs:

sentry@vps560644:~/onpremise$ docker-compose logs -f | grep error -i
clickhouse_1               | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse_1               | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
kafka_1                    | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.24.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1                    | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/172.24.0.2:2181: Connection refused
kafka_1                    | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.24.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1                    | [2020-07-22 14:20:35,165] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-22 from OfflinePartition to OnlinePartition (state.change.logger)
postgres_1                 | ERROR:  relation "south_migrationhistory" does not exist at character 15
kafka_1                    | [2020-07-22 14:20:35,185] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-30 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | [2020-07-22 14:20:35,188] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-8 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | [2020-07-22 14:20:35,189] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-21 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | [2020-07-22 14:20:35,190] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-4 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | [2020-07-22 14:20:35,191] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition outcomes-0 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | [2020-07-22 14:20:35,191] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-27 from OfflinePartition to OnlinePartition (state.change.logger)
relay_1                    | 2020-07-22T14:57:26Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.24.0.10:9092 failed: Connection refused (after 36ms in state CONNECT)
relay_1                    | 2020-07-22T14:57:26Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
relay_1                    | 2020-07-22T14:57:27Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.24.0.10:9092 failed: Connection refused (after 0ms in state CONNECT)
relay_1                    | 2020-07-22T14:57:27Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
relay_1                    | 2020-07-22T14:57:27Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                    | 2020-07-22T14:57:27Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                    | 2020-07-22T14:57:29Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                    | 2020-07-22T14:57:31Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                    |   caused by: Failed to connect to host: No route to host (os error 113)
relay_1                    |   caused by: No route to host (os error 113)
relay_1                    |   caused by: No route to host (os error 113)
relay_1                    | 2020-07-22T14:57:33Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                    |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                    |   caused by: Connection refused (os error 111)
relay_1                    |   caused by: Connection refused (os error 111)
relay_1                    | 2020-07-22T14:57:36Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                    |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                    |   caused by: Connection refused (os error 111)
relay_1                    |   caused by: Connection refused (os error 111)
relay_1                    | 2020-07-22T14:57:41Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                    |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                    |   caused by: Connection refused (os error 111)
relay_1                    |   caused by: Connection refused (os error 111)
kafka_1                    | [2020-07-22 14:20:35,192] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-7 from OfflinePartition to OnlinePartition (state.change.logger)
...
kafka_1                    | [2020-07-22 14:20:35,220] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-2 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | [2020-07-22 14:20:35,221] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition errors-replacements-0 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | kafka.common.StateChangeFailedException: Failed to elect leader for partition errors-replacements-0 under strategy OfflinePartitionLeaderElectionStrategy(false)
kafka_1                    | [2020-07-22 14:20:35,221] ERROR [Controller id=1002 epoch=20] Controller 1002 epoch 20 failed to change state for partition __consumer_offsets-43 from OfflinePartition to OnlinePartition (state.change.logger)
...
relay_1                    | 2020-07-22T14:57:49Z [relay_server::actors::project_upstream] ERROR: error fetching project states: attempted to send request while not yet authenticated
relay_1                    | 2020-07-22T14:57:49Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                    |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                    |   caused by: Connection refused (os error 111)
relay_1                    |   caused by: Connection refused (os error 111)
nginx_1                    | 2020/07/22 14:57:49 [error] 6#6: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /organizations/sentry/projects/ HTTP/1.0", upstream: "http://172.24.0.21:9000/organizations/sentry/projects/", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/issues/?project=10&query=is%3Aunresolved&statsPeriod=14d"
kafka_1                    | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.24.0.6:2181. Will not attempt to authenticate using SASL (unknown error)
nginx_1                    | 2020/07/22 14:57:50 [error] 6#6: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /favicon.ico HTTP/1.0", upstream: "http://172.24.0.21:9000/favicon.ico", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/projects/"
relay_1                    | 2020-07-22T14:57:50Z [relay_server::actors::project_upstream] ERROR: error fetching project states: attempted to send request while not yet authenticated
nginx_1                    | 2020/07/22 14:57:51 [error] 6#6: *9 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /organizations/sentry/projects/ HTTP/1.0", upstream: "http://172.24.0.21:9000/organizations/sentry/projects/", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/issues/?project=10&query=is%3Aunresolved&statsPeriod=14d"
nginx_1                    | 2020/07/22 14:57:51 [error] 6#6: *11 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /favicon.ico HTTP/1.0", upstream: "http://172.24.0.21:9000/favicon.ico", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/projects/"
nginx_1                    | 2020/07/22 14:57:52 [error] 6#6: *15 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /organizations/sentry/projects/ HTTP/1.0", upstream: "http://172.24.0.21:9000/organizations/sentry/projects/", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/issues/?project=10&query=is%3Aunresolved&statsPeriod=14d"
relay_1                    | 2020-07-22T14:57:52Z [relay_server::actors::project_upstream] ERROR: error fetching project states: attempted to send request while not yet authenticated
nginx_1                    | 2020/07/22 14:57:52 [error] 6#6: *17 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /favicon.ico HTTP/1.0", upstream: "http://172.24.0.21:9000/favicon.ico", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/projects/"
nginx_1                    | 2020/07/22 14:57:52 [error] 6#6: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /organizations/sentry/projects/ HTTP/1.0", upstream: "http://172.24.0.21:9000/organizations/sentry/projects/", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/issues/?project=10&query=is%3Aunresolved&statsPeriod=14d"
nginx_1                    | 2020/07/22 14:57:53 [error] 6#6: *21 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /favicon.ico HTTP/1.0", upstream: "http://172.24.0.21:9000/favicon.ico", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/projects/"
nginx_1                    | 2020/07/22 14:57:53 [error] 6#6: *23 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /organizations/sentry/projects/ HTTP/1.0", upstream: "http://172.24.0.21:9000/organizations/sentry/projects/", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/issues/?project=10&query=is%3Aunresolved&statsPeriod=14d"
nginx_1                    | 2020/07/22 14:57:53 [error] 6#6: *25 connect() failed (111: Connection refused) while connecting to upstream, client: 172.24.0.1, server: , request: "GET /favicon.ico HTTP/1.0", upstream: "http://172.24.0.21:9000/favicon.ico", host: "log.tomhealth.fr", referrer: "https://log.tomhealth.fr/organizations/sentry/projects/"
relay_1                    | 2020-07-22T14:57:54Z [relay_server::actors::project_upstream] ERROR: error fetching project states: attempted to send request while not yet authenticated
post-process-forwarder_1   | %3|1595429876.376|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.24.0.10:9092 failed: Connection refused
post-process-forwarder_1   | %3|1595429876.377|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
post-process-forwarder_1   | %3|1595429876.378|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.24.0.10:9092 failed: Connection refused
post-process-forwarder_1   | %3|1595429876.378|ERROR|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
ingest-consumer_1          | %3|1595429876.427|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.24.0.10:9092 failed: Connection refused
ingest-consumer_1          | %3|1595429876.427|ERROR|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: 1/1 brokers are down
relay_1                    | 2020-07-22T14:57:58Z [relay_server::actors::project_upstream] ERROR: error fetching project states: attempted to send request while not yet authenticated
relay_1                    | 2020-07-22T14:58:03Z [relay_server::actors::project_upstream] ERROR: error fetching project states: attempted to send request while not yet authenticated
relay_1                    | 2020-07-22T14:58:05Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
kafka_1                    | [2020-07-22 14:58:06,615] ERROR [Controller id=1002 epoch=21] Controller 1002 epoch 21 failed to change state for partition __consumer_offsets-22 from OfflinePartition to OnlinePartition (state.change.logger)
...
kafka_1                    | [2020-07-22 14:58:06,766] ERROR [Controller id=1002 epoch=21] Controller 1002 epoch 21 failed to change state for partition __consumer_offsets-24 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | [2020-07-22 14:58:06,770] ERROR [Controller id=1002 epoch=21] Controller 1002 epoch 21 failed to change state for partition cdc-0 from OfflinePartition to OnlinePartition (state.change.logger)
...
kafka_1                    | [2020-07-22 14:58:06,832] ERROR [Controller id=1002 epoch=21] Controller 1002 epoch 21 failed to change state for partition errors-replacements-0 from OfflinePartition to OnlinePartition (state.change.logger)
kafka_1                    | kafka.common.StateChangeFailedException: Failed to elect leader for partition errors-replacements-0 under strategy OfflinePartitionLeaderElectionStrategy(false)
kafka_1                    | [2020-07-22 14:58:06,832] ERROR [Controller id=1002 epoch=21] Controller 1002 epoch 21 failed to change state for partition __consumer_offsets-43 from OfflinePartition to OnlinePartition (state.change.logger)
...
relay_1                    | 2020-07-22T14:58:10Z [relay_server::actors::project_upstream] ERROR: error fetching project states: attempted to send request while not yet authenticated
relay_1                    | 2020-07-22T14:58:22Z [relay_server::actors::project_upstream] ERROR: error fetching project states: attempted to send request while not yet authenticated
relay_1                    | 2020-07-22T14:58:22Z [relay_server::actors::events] ERROR: error processing event: failed to resolve project information
...

This looks like the migrations didn't run; try running sentry upgrade or sentry django migrate from one of the Sentry containers.

I tried both commands in a container, but both return the same thing: no migrations to apply.

sentry upgrade
07:13:38 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
07:13:43 [INFO] sentry.plugins.github: apps-not-configured
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, jira_ac, nodestore, sentry, sessions, sites, social_auth
Running migrations:
  No migrations to apply.
Creating missing DSNs
Correcting Group.num_comments counter
root@0b2cfdadfb2b:/# sentry django migrate
07:15:49 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
07:15:52 [INFO] sentry.plugins.github: apps-not-configured
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, jira_ac, nodestore, sentry, sessions, sites, social_auth
Running migrations:
  No migrations to apply.
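
For reference, the same commands can also be run from the host through docker-compose (a sketch; it assumes the Sentry service is named "web", as in the default onpremise docker-compose.yml):

cd /home/sentry/onpremise
# assumes the Sentry service is called "web" in docker-compose.yml
docker-compose run --rm web sentry upgrade
docker-compose run --rm web sentry django migrate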

I've installed a new instance on WSL2 to see what happens there.
That instance works fine (it catches events and displays them in the project), but I get the same kind of errors in the relay container:

2020-07-23T16:38:04Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.9:9092 failed: Connection refused (after 88ms in state CONNECT)
2020-07-23T16:38:04Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
2020-07-23T16:38:04Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.9:9092 failed: Connection refused (after 0ms in state CONNECT)
2020-07-23T16:38:04Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
2020-07-23T16:38:06Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: Timeout while waiting for response
2020-07-23T16:38:07Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: Timeout while waiting for response
2020-07-23T16:38:08Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: Failed to connect to host: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
2020-07-23T16:38:09Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: Failed to connect to host: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
2020-07-23T16:38:11Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: Failed to connect to host: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
2020-07-23T16:38:15Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: Failed to connect to host: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
2020-07-23T16:38:20Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: Failed to connect to host: Connection refused (os error 111)
  caused by: Connection refused (os error 111)
  caused by: Connection refused (os error 111)

Version of this release (the WSL2 test instance): Sentry 20.8.0.dev01957e9e
vs.
Version of the release installed in production: Sentry 20.8.0.dev05f081c2

If I run install.sh on the production server, I get these errors in the logs:

Creating sentry_onpremise_kafka_1      ... done
+ '[' b = - ']'
+ snuba bootstrap --help
+ set -- snuba bootstrap --force
+ set gosu snuba snuba bootstrap --force
+ exec gosu snuba snuba bootstrap --force
2020-07-24 07:07:45,746 Connection to Kafka failed (attempt 0)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 56, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}2020-07-24 07:07:47,751 Connection to Kafka failed (attempt 1)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 56, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}2020-07-24 07:07:49,759 Connection to Kafka failed (attempt 2)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 56, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}2020-07-24 07:07:51,762 Connection to Kafka failed (attempt 3)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 56, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}2020-07-24 07:07:53,766 Connection to Kafka failed (attempt 4)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 56, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}2020-07-24 07:07:55,769 Connection to Kafka failed (attempt 5)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 56, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}2020-07-24 07:07:57,775 Connection to Kafka failed (attempt 6)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 56, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}2020-07-24 07:07:59,779 Connection to Kafka failed (attempt 7)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 56, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}2020-07-24 07:08:00,947 Failed to create topic ingest-sessions
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 92, in bootstrap
    future.result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 435, in result
    return self.__get_result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str="Topic 'ingest-sessions' already exists."}
2020-07-24 07:08:00,948 Failed to create topic events
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 92, in bootstrap
    future.result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
    return self.__get_result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str="Topic 'events' already exists."}
2020-07-24 07:08:00,948 Failed to create topic event-replacements
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 92, in bootstrap
    future.result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
    return self.__get_result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str="Topic 'event-replacements' already exists."}
2020-07-24 07:08:00,949 Failed to create topic snuba-commit-log
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 92, in bootstrap
    future.result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
    return self.__get_result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str="Topic 'snuba-commit-log' already exists."}
2020-07-24 07:08:00,949 Failed to create topic cdc
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 92, in bootstrap
    future.result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
    return self.__get_result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str="Topic 'cdc' already exists."}
2020-07-24 07:08:00,950 Failed to create topic errors-replacements
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 92, in bootstrap
    future.result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
    return self.__get_result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str="Topic 'errors-replacements' already exists."}
2020-07-24 07:08:00,950 Failed to create topic outcomes
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 92, in bootstrap
    future.result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 428, in result
    return self.__get_result()
  File "/usr/local/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
cimpl.KafkaException: KafkaError{code=TOPIC_ALREADY_EXISTS,val=36,str="Topic 'outcomes' already exists."}

One more piece of information: if I call captureMessage, I receive an eventId, so I imagine the event is stored somewhere but then not processed by some component of the Sentry stack.
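
For what it's worth, the same check can be reproduced from a shell with sentry-cli (a sketch; it assumes sentry-cli is installed and the DSN placeholders are filled in for this instance):

# placeholders: use the real public key and project id of this instance
export SENTRY_DSN="https://<public_key>@log.tomhealth.fr/<project_id>"
# the event ID is generated client-side, so getting one back does not by
# itself prove the event was ingested by Kafka/Snuba downstream
sentry-cli send-event -m "test event from the CLI"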

These logs suggest that your Kafka instance is having trouble staying up or being reached. Are you sure you have enough resources on the machine you are running Sentry on?

I thought I had enough resources.
Without any more clues, I reset the server and installed everything again (50 GB HDD, 8 GB RAM).
I hope it will be enough.
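
For reference, a few commands that should show whether the Kafka container is actually up and whether the host is short on memory (a sketch; the container name comes from the install log above):

cd /home/sentry/onpremise
# are kafka and zookeeper running, and how often has kafka been restarted?
docker-compose ps kafka zookeeper
docker inspect --format '{{.RestartCount}} restarts, status: {{.State.Status}}' sentry_onpremise_kafka_1
# current memory/CPU usage of the containers and of the host
docker stats --no-stream
free -h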