Relay cannot connect to web:9000 when self-hosting on Google Cloud

I installed the latest On-Premise version, but Relay cannot connect to web:9000.

relay_1 | 2021-02-24T18:40:18Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1 | caused by: could not send request using reqwest
relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN

How can I set this up correctly?
Thank you so much!


Same issue here, did you find a solution?

Are you using the onpremise repo or a custom setup?

I set up a stand-alone Relay so that I could connect to Sentry.

I cloned the onpremise repo and ran ./install.sh.
I also set up a stand-alone Relay, and I can connect to Sentry through 127.0.0.1:9000.

If you are setting up a stand-alone Relay, you cannot use web:9000 as its target, since the domain name web only resolves inside the docker-compose network. You’ll also need to generate credentials and add the public key of the external Relay to the allow list in your Sentry config. Not sure if this applies to self-hosted, but worth a try: Getting Started | Sentry Documentation
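
For example, here’s a minimal sketch of a stand-alone Relay setup; the upstream URL, config path, and port below are assumptions, so substitute your own values:

# config/config.yml for the external Relay — point it at the publicly
# reachable Sentry URL, not web:9000 (host and port here are hypothetical)
relay:
  upstream: "http://sentry.example.com:9000"
  host: 0.0.0.0
  port: 3000

# Generate a key pair; the public key from the resulting credentials.json
# is what goes into the allow list on the Sentry side
docker run --rm -it -v "$(pwd)/config:/work/.relay" getsentry/relay credentials generate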

Otherwise you’ll need to add the public key to SENTRY_RELAY_WHITELIST_PK in your sentry.conf.py file. This setting takes multiple keys, so it must be a list or a tuple.
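
A minimal sketch of that setting (the key below is a placeholder; use the public key from your Relay’s credentials.json):

# in sentry.conf.py
SENTRY_RELAY_WHITELIST_PK = [
    "SMSesqan65THCV6M4qs4kBzPai60LzuDn-xNsvYpuP8",  # placeholder public key
]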

/cc @jauer

Sorry for the confusion: after I could not connect the Relay in the onpremise package, I installed a stand-alone Relay.
For the Relay in the onpremise package, it seems Docker cannot resolve the web:9000 address.
If I change the address to 127.0.0.1:9000, it shows connection refused.

Can you share your full logs without the external Relay so we can investigate, @starofsky?
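
The web.google.internal suffix in the error suggests the container is falling back to the GCE host’s DNS search domain instead of Docker’s embedded DNS. As a quick check (a sketch, assuming the service names from the onpremise docker-compose.yml), you can inspect what the relay container actually resolves:

# Show the DNS config the relay container sees, and whether "web" resolves
docker-compose run --rm --entrypoint sh relay -c 'cat /etc/resolv.conf; getent hosts web'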

Sorry for the late response.
I don’t know why Docker runs so slowly; I could not get it started with docker-compose up.
Here are the full logs:

root@server:/opt/sentry/onpremise# ./install.sh
▶ Defining variables and helpers ...

▶ Parsing command line ...

▶ Setting up error handling ...

▶ Checking minimum requirements ...

▶ Creating volumes for persistent storage ...
Created sentry-data.
Created sentry-postgres.
Created sentry-redis.
Created sentry-zookeeper.
Created sentry-kafka.
Created sentry-clickhouse.
Created sentry-symbolicator.

▶ Ensuring files from examples ...
Creating sentry/sentry.conf.py...
Creating sentry/config.yml...
Creating sentry/requirements.txt...
Creating symbolicator/config.yml...
Creating relay/config.yml...

▶ Generating secret key ...
Secret key written to sentry/config.yml

▶ Replacing TSDB ...

▶ Fetching and updating Docker images ...
--no-ansi option is deprecated and will be removed in future versions.
Some service image(s) must be built from source by running:
    docker-compose build sentry-cleanup snuba-cleanup symbolicator-cleanup
nightly: Pulling from getsentry/sentry
Digest: sha256:5a5ee326323d46ec730e0739acdb593f7cbb4cb396135e7b7ce6d623c4535ada
Status: Image is up to date for getsentry/sentry:nightly
docker.io/getsentry/sentry:nightly

▶ Building and tagging Docker images ...

--no-ansi option is deprecated and will be removed in future versions.
smtp uses an image, skipping
memcached uses an image, skipping
redis uses an image, skipping
postgres uses an image, skipping
zookeeper uses an image, skipping
kafka uses an image, skipping
clickhouse uses an image, skipping
geoipupdate uses an image, skipping
snuba-api uses an image, skipping
snuba-consumer uses an image, skipping
snuba-outcomes-consumer uses an image, skipping
snuba-sessions-consumer uses an image, skipping
snuba-transactions-consumer uses an image, skipping
snuba-replacer uses an image, skipping
snuba-subscription-consumer-events uses an image, skipping
snuba-subscription-consumer-transactions uses an image, skipping
symbolicator uses an image, skipping
web uses an image, skipping
cron uses an image, skipping
worker uses an image, skipping
ingest-consumer uses an image, skipping
post-process-forwarder uses an image, skipping
subscription-consumer-events uses an image, skipping
subscription-consumer-transactions uses an image, skipping
relay uses an image, skipping
nginx uses an image, skipping
Building snuba-cleanup
Sending build context to Docker daemon  3.072kB

Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> 81e63e225173
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Running in 04f6d17910ed
Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [268 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [9504 B]
Fetched 8423 kB in 1s (5780 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  lsb-base sensible-utils
Suggested packages:
  anacron logrotate checksecurity
Recommended packages:
  default-mta | mail-transport-agent
The following NEW packages will be installed:
  cron lsb-base sensible-utils
0 upgraded, 3 newly installed, 0 to remove and 2 not upgraded.
Need to get 143 kB of archives.
After this operation, 383 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 cron amd64 3.0pl1-134+deb10u1 [99.0 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 143 kB in 0s (0 B/s)
Selecting previously unselected package sensible-utils.
(Reading database ... 6840 files and directories currently installed.)
Preparing to unpack .../sensible-utils_0.0.12_all.deb ...
Unpacking sensible-utils (0.0.12) ...
Selecting previously unselected package lsb-base.
Preparing to unpack .../lsb-base_10.2019051400_all.deb ...
Unpacking lsb-base (10.2019051400) ...
Selecting previously unselected package cron.
Preparing to unpack .../cron_3.0pl1-134+deb10u1_amd64.deb ...
Unpacking cron (3.0pl1-134+deb10u1) ...
Setting up lsb-base (10.2019051400) ...
Setting up sensible-utils (0.0.12) ...
Setting up cron (3.0pl1-134+deb10u1) ...
Adding group `crontab' (GID 101) ...
Done.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container 04f6d17910ed
 ---> 072f6edf9b77
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> 6de60178d8d0
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Running in 1deeb5ce2da1
Removing intermediate container 1deeb5ce2da1
 ---> 17eba8f62477
Successfully built 17eba8f62477
Successfully tagged snuba-cleanup-onpremise-local:latest
Building symbolicator-cleanup
Sending build context to Docker daemon  3.072kB

Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> ee6e43b3fef9
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Running in 771aacb410a9
Get:1 http://security.debian.org/debian-security stretch/updates InRelease [53.0 kB]
Ign:2 http://deb.debian.org/debian stretch InRelease
Get:3 http://deb.debian.org/debian stretch-updates InRelease [93.6 kB]
Get:4 http://deb.debian.org/debian stretch Release [118 kB]
Get:5 http://deb.debian.org/debian stretch Release.gpg [2410 B]
Get:6 http://security.debian.org/debian-security stretch/updates/main amd64 Packages [660 kB]
Get:7 http://deb.debian.org/debian stretch-updates/main amd64 Packages [2596 B]
Get:8 http://deb.debian.org/debian stretch/main amd64 Packages [7080 kB]
Fetched 8009 kB in 1s (5947 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Suggested packages:
  anacron logrotate checksecurity
Recommended packages:
  exim4 | postfix | mail-transport-agent
The following NEW packages will be installed:
  cron
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
Need to get 95.4 kB of archives.
After this operation, 257 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 cron amd64 3.0pl1-128+deb9u1 [95.4 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 95.4 kB in 0s (0 B/s)
Selecting previously unselected package cron.
(Reading database ... 6661 files and directories currently installed.)
Preparing to unpack .../cron_3.0pl1-128+deb9u1_amd64.deb ...
Unpacking cron (3.0pl1-128+deb9u1) ...
Setting up cron (3.0pl1-128+deb9u1) ...
Adding group `crontab' (GID 101) ...
Done.
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container 771aacb410a9
 ---> c4cfa2d2e459
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> f9a3bd806a82
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Running in fc0ebbc9a4c6
Removing intermediate container fc0ebbc9a4c6
 ---> eafaeaca32b6
Successfully built eafaeaca32b6
Successfully tagged symbolicator-cleanup-onpremise-local:latest
Building sentry-cleanup
Sending build context to Docker daemon  3.072kB

Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> 32009fa5d23c
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Running in 58504f63b406
Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [268 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [9504 B]
Fetched 8423 kB in 1s (6004 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  lsb-base sensible-utils
Suggested packages:
  anacron logrotate checksecurity
Recommended packages:
  default-mta | mail-transport-agent
The following NEW packages will be installed:
  cron lsb-base sensible-utils
0 upgraded, 3 newly installed, 0 to remove and 2 not upgraded.
Need to get 143 kB of archives.
After this operation, 383 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 cron amd64 3.0pl1-134+deb10u1 [99.0 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 143 kB in 0s (9634 kB/s)
Selecting previously unselected package sensible-utils.
(Reading database ... 11935 files and directories currently installed.)
Preparing to unpack .../sensible-utils_0.0.12_all.deb ...
Unpacking sensible-utils (0.0.12) ...
Selecting previously unselected package lsb-base.
Preparing to unpack .../lsb-base_10.2019051400_all.deb ...
Unpacking lsb-base (10.2019051400) ...
Selecting previously unselected package cron.
Preparing to unpack .../cron_3.0pl1-134+deb10u1_amd64.deb ...
Unpacking cron (3.0pl1-134+deb10u1) ...
Setting up lsb-base (10.2019051400) ...
Setting up sensible-utils (0.0.12) ...
Setting up cron (3.0pl1-134+deb10u1) ...
Adding group `crontab' (GID 101) ...
Done.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container 58504f63b406
 ---> 2805e84a8883
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> 2ec533ad1a71
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Running in de3898313408
Removing intermediate container de3898313408
 ---> c4b7998ef67a
Successfully built c4b7998ef67a
Successfully tagged sentry-cleanup-onpremise-local:latest

Docker images built.

▶ Turning things off ...
--no-ansi option is deprecated and will be removed in future versions.
Removing network onpremise_default
Network onpremise_default not found.
--no-ansi option is deprecated and will be removed in future versions.
Removing network sentry_onpremise_default
Network sentry_onpremise_default not found.

▶ Setting up Zookeeper ...
--no-ansi option is deprecated and will be removed in future versions.
Creating network "sentry_onpremise_default" with the default driver
Creating volume "sentry_onpremise_sentry-secrets" with default driver
Creating volume "sentry_onpremise_sentry-smtp" with default driver
Creating volume "sentry_onpremise_sentry-zookeeper-log" with default driver
Creating volume "sentry_onpremise_sentry-kafka-log" with default driver
Creating volume "sentry_onpremise_sentry-smtp-log" with default driver
Creating volume "sentry_onpremise_sentry-clickhouse-log" with default driver
Creating sentry_onpremise_zookeeper_run ...
Creating sentry_onpremise_zookeeper_run ... done

▶ Bootstrapping and migrating Snuba ...
--no-ansi option is deprecated and will be removed in future versions.
Creating sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_zookeeper_1  ...
Creating sentry_onpremise_redis_1      ...
Creating sentry_onpremise_clickhouse_1 ... done
Creating sentry_onpremise_zookeeper_1  ... done
Creating sentry_onpremise_kafka_1      ...
Creating sentry_onpremise_redis_1      ... done
Creating sentry_onpremise_kafka_1      ... done
Creating sentry_onpremise_snuba-api_run ...
Creating sentry_onpremise_snuba-api_run ... done
+ '[' b = - ']'
+ snuba bootstrap --help
+ set -- snuba bootstrap --no-migrate --force
+ set gosu snuba snuba bootstrap --no-migrate --force
+ exec gosu snuba snuba bootstrap --no-migrate --force
%3|1615263992.404|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.5:9092 failed: Connection refused (after 11ms in state CONNECT)
2021-03-09 04:26:33,399 Connection to Kafka failed (attempt 0)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 55, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
%3|1615263993.400|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.5:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
%3|1615263994.401|FAIL|rdkafka#producer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.5:9092 failed: Connection refused (after 0ms in state CONNECT)
%3|1615263995.402|FAIL|rdkafka#producer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.5:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
2021-03-09 04:26:35,402 Connection to Kafka failed (attempt 1)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 55, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
%3|1615263996.405|FAIL|rdkafka#producer-3| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.5:9092 failed: Connection refused (after 0ms in state CONNECT)
2021-03-09 04:26:37,406 Connection to Kafka failed (attempt 2)
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/cli/bootstrap.py", line 55, in bootstrap
    client.list_topics(timeout=1)
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
2021-03-09 04:26:39,060 Topic cdc created
2021-03-09 04:26:39,060 Topic events created
2021-03-09 04:26:39,060 Topic snuba-commit-log created
2021-03-09 04:26:39,061 Topic event-replacements created
2021-03-09 04:26:39,061 Topic outcomes created
2021-03-09 04:26:39,061 Topic ingest-sessions created
--no-ansi option is deprecated and will be removed in future versions.
Creating sentry_onpremise_snuba-api_run ...
Creating sentry_onpremise_snuba-api_run ... done
+ '[' m = - ']'
+ snuba migrations --help
+ set -- snuba migrations migrate --force
+ set gosu snuba snuba migrations migrate --force
+ exec gosu snuba snuba migrations migrate --force
Finished running migrations

▶ Creating additional Kafka topics ...
--no-ansi option is deprecated and will be removed in future versions.
Creating sentry_onpremise_kafka_run ...
Creating sentry_onpremise_kafka_run ... done
Created topic ingest-attachments.

--no-ansi option is deprecated and will be removed in future versions.
Creating sentry_onpremise_kafka_run ...
Creating sentry_onpremise_kafka_run ... done
Created topic ingest-transactions.

--no-ansi option is deprecated and will be removed in future versions.
Creating sentry_onpremise_kafka_run ...
Creating sentry_onpremise_kafka_run ... done
Created topic ingest-events.


▶ Ensuring proper PostgreSQL version ...

▶ Setting up database ...
--no-ansi option is deprecated and will be removed in future versions.
Creating sentry_onpremise_smtp_1 ...
Creating sentry_onpremise_postgres_1 ...
Creating sentry_onpremise_symbolicator_1 ...
Creating sentry_onpremise_memcached_1    ...
Creating sentry_onpremise_snuba-transactions-consumer_1 ...
Creating sentry_onpremise_snuba-consumer_1              ...
Creating sentry_onpremise_snuba-sessions-consumer_1     ...
Creating sentry_onpremise_snuba-replacer_1              ...
Creating sentry_onpremise_snuba-subscription-consumer-events_1 ...
Creating sentry_onpremise_snuba-outcomes-consumer_1            ...
Creating sentry_onpremise_snuba-api_1                          ...
Creating sentry_onpremise_snuba-subscription-consumer-transactions_1 ...
Creating sentry_onpremise_postgres_1                                 ... done
Creating sentry_onpremise_symbolicator_1                             ... done
Creating sentry_onpremise_snuba-transactions-consumer_1              ... done
Creating sentry_onpremise_snuba-consumer_1                           ... done
Creating sentry_onpremise_memcached_1                                ... done
Creating sentry_onpremise_snuba-replacer_1                           ... done
Creating sentry_onpremise_smtp_1                                     ... done
Creating sentry_onpremise_snuba-subscription-consumer-events_1       ... done
Creating sentry_onpremise_snuba-sessions-consumer_1                  ... done
Creating sentry_onpremise_snuba-outcomes-consumer_1                  ... done
Creating sentry_onpremise_snuba-api_1                                ... done
Creating sentry_onpremise_snuba-subscription-consumer-transactions_1 ... done
Creating sentry_onpremise_web_run                                    ...
Creating sentry_onpremise_web_run                                    ... done
Installing additional dependencies...

04:27:21 [WARNING] sentry.utils.geo: Error opening GeoIP database: /geoip/GeoLite2-City.mmdb
04:27:21 [WARNING] sentry.utils.geo: Error opening GeoIP database in Rust: /geoip/GeoLite2-City.mmdb
04:27:27 [INFO] sentry.plugins.github: apps-not-configured
* Unknown config option found: 'slack.legacy-app'
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, jira_ac, nodestore, sentry, sessions, sites, social_auth
Running migrations:
  Applying sentry.0001_initial... OK
  Applying contenttypes.0001_initial... OK

 OK
  Applying sentry.0025_organizationaccessrequest_requester... OK
  Applying sentry.0026_delete_event... OK
  Applying sentry.0027_exporteddata... OK
  Applying sentry.0028_user_reports... OK
  Applying sentry.0029_discover_query_upgrade... OK
  Applying sentry.0030_auto_20200201_0039... OK
  Applying sentry.0031_delete_alert_rules_and_incidents... OK
  Applying sentry.0032_delete_alert_email... OK
  Applying sentry.0033_auto_20200210_2137... OK
  Applying sentry.0034_auto_20200210_2311... OK
  Applying sentry.0035_auto_20200127_1711... OK
  Applying sentry.0036_auto_20200213_0106... OK
  Applying sentry.0037_auto_20200213_0140... OK
  Applying sentry.0038_auto_20200213_1904... OK
  Applying sentry.0039_delete_incidentsuspectcommit... OK
  Applying sentry.0040_remove_incidentsuspectcommittable... OK
  Applying sentry.0041_incidenttrigger_date_modified... OK
  Applying sentry.0042_auto_20200214_1607... OK
  Applying sentry.0043_auto_20200218_1903... OK
  Applying sentry.0044_auto_20200219_0018... OK
  Applying sentry.0045_remove_incidentactivity_event_stats_snapshot... OK
  Applying sentry.0046_auto_20200221_1735... OK
  Applying sentry.0047_auto_20200224_2319... OK
  Applying sentry.0048_auto_20200302_1825... OK
  Applying sentry.0049_auto_20200304_0254... OK
  Applying sentry.0050_auto_20200306_2346... OK
  Applying sentry.0051_fix_auditlog_pickled_data... OK
  Applying sentry.0052_organizationonboardingtask_completion_seen... OK
  Applying sentry.0053_migrate_alert_task_onboarding... OK
  Applying sentry.0054_create_key_transaction... OK
  Applying sentry.0055_query_subscription_status... OK
  Applying sentry.0056_remove_old_functions... OK
  Applying sentry.0057_remove_unused_project_flag... OK
  Applying sentry.0058_project_issue_alerts_targeting... OK
  Applying sentry.0059_add_new_sentry_app_features... OK
  Applying sentry.0060_add_file_eventattachment_index... OK
  Applying sentry.0061_alertrule_partial_index... OK
  Applying sentry.0062_key_transactions_unique_with_owner... OK
  Applying sentry.0063_drop_alertrule_constraint... OK
  Applying sentry.0064_project_has_transactions... OK
  Applying sentry.0065_add_incident_status_method... OK
  Applying sentry.0066_alertrule_manager... OK
  Applying sentry.0067_migrate_rules_alert_targeting... OK
  Applying sentry.0068_project_default_flags... OK
  Applying sentry.0069_remove_tracked_superusers... OK
  Applying sentry.0070_incident_snapshot_support... OK
  Applying sentry.0071_add_default_fields_model_subclass... OK
  Applying sentry.0072_alert_rules_query_changes... OK
  Applying sentry.0073_migrate_alert_query_model... OK
  Applying sentry.0074_add_metric_alert_feature... OK
  Applying sentry.0075_metric_alerts_fix_releases... OK
  Applying sentry.0076_alert_rules_disable_constraints... OK
  Applying sentry.0077_alert_query_col_drop_state... OK
  Applying sentry.0078_incident_field_updates... OK
  Applying sentry.0079_incidents_remove_query_field_state... OK
  Applying sentry.0080_alert_rules_drop_unused_tables_cols... OK
  Applying sentry.0081_add_integraiton_upgrade_audit_log... OK
  Applying sentry.0082_alert_rules_threshold_float... OK
  Applying sentry.0083_add_max_length_webhook_url... OK
  Applying sentry.0084_exported_data_blobs... OK
  Applying sentry.0085_fix_error_rate_snuba_query... OK
  Applying sentry.0086_sentry_app_installation_for_provider... OK
  Applying sentry.0087_fix_time_series_data_type... OK
  Applying sentry.0088_rule_level_resolve_threshold_type... OK
  Applying sentry.0089_rule_level_fields_backfill... OK
  Applying sentry.0090_fix_auditlog_pickled_data_take_2... OK
  Applying sentry.0091_alertruleactivity... OK
  Applying sentry.0092_remove_trigger_threshold_type_nullable... OK
  Applying sentry.0093_make_identity_user_id_textfield... OK
  Applying sentry.0094_cleanup_unreferenced_event_files... OK
  Applying sentry.0095_ruleactivity... OK
  Applying sentry.0096_sentry_app_component_skip_load_on_open... OK
  Applying sentry.0097_add_sentry_app_id_to_sentry_alertruletriggeraction... OK
  Applying sentry.0098_add-performance-onboarding... OK
  Applying sentry.0099_fix_project_platforms... OK
  Applying sentry.0100_file_type_on_event_attachment... OK
  Applying sentry.0101_backfill_file_type_on_event_attachment... OK
  Applying sentry.0102_collect_relay_analytics... OK
  Applying sentry.0103_project_has_alert_filters... OK
  Applying sentry.0104_collect_relay_public_key_usage... OK
  Applying sentry.0105_remove_nullability_of_event_attachment_type... OK
  Applying sentry.0106_service_hook_project_id_nullable... OK
  Applying sentry.0107_remove_spaces_from_slugs... OK
  Applying sentry.0108_update_fileblob_action... OK
  Applying sentry.0109_sentry_app_creator... OK
  Applying sentry.0110_sentry_app_creator_backill... OK
  Applying sentry.0111_snuba_query_event_type... OK
  Applying sentry.0112_groupinboxmodel... OK
  Applying sentry.0113_add_repositoryprojectpathconfig... OK
  Applying sentry.0114_add_unhandled_savedsearch... OK
  Applying sentry.0115_add_checksum_to_debug_file... OK
  Applying sentry.0116_backfill_debug_file_checksum... OK
  Applying sentry.0117_dummy-activityupdate... OK
  Applying sentry.0118_backfill_snuba_query_event_types... OK
  Applying sentry.0119_fix_set_none... OK
  Applying sentry.0120_commit_author_charfield... OK
GroupInbox: 100% |#                                             | ETA:  --:--:--
 OK
  Applying sentry.0122_add_release_status... OK
  Applying sentry.0123_groupinbox_addprojandorg... OK
  Applying sentry.0124_add_release_status_model... OK
  Applying sentry.0125_add_platformexternalissue_project_id... OK
  Applying sentry.0158_create_externalteam_table... OK
  Applying sentry.0159_create_externaluser_table... OK
  Applying sentry.0160_create_projectcodeowners_table... OK
  Applying sentry.0161_add_saved_search_sort... OK
  Applying sentry.0162_backfill_saved_search_sort... OK
  Applying sentry.0163_add_organizationmember_and_external_name... OK
  Applying sentry.0164_add_protect_on_delete_codeowners... OK
  Applying sentry.0165_metric_alerts_fix_group_ids... OK
  Applying sentry.0166_create_notificationsetting_table... OK
  Applying sentry.0167_rm_organization_integration_from_projectcodeowners... OK
  Applying sentry.0168_demo_orgs_users... OK
  Applying sentry.0169_delete_organization_integration_from_projectcodeowners... OK
  Applying sentry.0170_actor_introduction... OK
  Applying sentry.0171_backfill_actors... OK
  Applying sessions.0001_initial... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
  Applying social_auth.0001_initial... OK
/usr/local/lib/python3.6/site-packages/sentry/receivers/onboarding.py:69: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
  project.organization_id,
04:28:22 [WARNING] sentry: Cannot initiate onboarding for organization (1) due to missing owners
Created internal Sentry project (slug=internal, id=1)

Would you like to create a user account now? [Y/n]: y
Email: 
Password:
Repeat for confirmation:
Added to organization: sentry
User created: 
Creating missing DSNs
Correcting Group.num_comments counter

▶ Migrating file storage ...
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ba3557a56b15: Already exists
Digest: sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be
Status: Downloaded newer image for alpine:latest

▶ Generating Relay credentials ...
--no-ansi option is deprecated and will be removed in future versions.
Creating sentry_onpremise_relay_run ...
Creating sentry_onpremise_relay_run ... done
Relay credentials written to relay/credentials.json

▶ Setting up GeoIP integration ...
Setting up IP address geolocation ...
Installing (empty) IP address geolocation database ... done.
IP address geolocation is not configured for updates.
See https://develop.sentry.dev/self-hosted/geolocation/ for instructions.
Error setting up IP address geolocation.


-----------------------------------------------------------------

You're all done! Run the following command to get Sentry running:

  docker-compose up -d

-----------------------------------------------------------------
root@server:/opt/sentry/onpremise# docker-compose up
Starting sentry_onpremise_redis_1                ... done
Starting sentry_onpremise_symbolicator_1         ... done
Starting sentry_onpremise_memcached_1            ... done
Starting sentry_onpremise_zookeeper_1            ... done
Starting sentry_onpremise_clickhouse_1           ... done
Starting sentry_onpremise_smtp_1                 ... done
Starting sentry_onpremise_postgres_1             ... done
Creating sentry_onpremise_symbolicator-cleanup_1 ... done
Creating sentry_onpremise_geoipupdate_1          ... done
Starting sentry_onpremise_kafka_1                ... done
Starting sentry_onpremise_snuba-subscription-consumer-events_1       ... done
Starting sentry_onpremise_snuba-transactions-consumer_1              ... done
Starting sentry_onpremise_snuba-outcomes-consumer_1                  ... done
Creating sentry_onpremise_relay_1                                    ... done
Starting sentry_onpremise_snuba-api_1                                ... done
Starting sentry_onpremise_snuba-replacer_1                           ... done
Starting sentry_onpremise_snuba-subscription-consumer-transactions_1 ... done
Starting sentry_onpremise_snuba-consumer_1                           ... done
Starting sentry_onpremise_snuba-sessions-consumer_1                  ... done
Creating sentry_onpremise_snuba-cleanup_1                            ... done
Creating sentry_onpremise_cron_1                                     ... done
Creating sentry_onpremise_subscription-consumer-transactions_1       ... done
Creating sentry_onpremise_post-process-forwarder_1                   ... done
Creating sentry_onpremise_ingest-consumer_1                          ... done
Creating sentry_onpremise_subscription-consumer-events_1             ... done
Creating sentry_onpremise_worker_1                                   ... done
Creating sentry_onpremise_web_1                                      ... done
Creating sentry_onpremise_sentry-cleanup_1                           ... done
Creating sentry_onpremise_nginx_1                                    ... done
Attaching to sentry_onpremise_memcached_1, sentry_onpremise_redis_1, sentry_onpremise_symbolicator_1, sentry_onpremise_clickhouse_1, sentry_onpremise_symbolicator-cleanup_1, sentry_onpremise_zookeeper_1, sentry_onpremise_postgres_1, sentry_onpremise_smtp_1, sentry_onpremise_geoipupdate_1, sentry_onpremise_kafka_1, sentry_onpremise_snuba-subscription-consumer-events_1, sentry_onpremise_snuba-transactions-consumer_1, sentry_onpremise_snuba-outcomes-consumer_1, sentry_onpremise_snuba-consumer_1, sentry_onpremise_snuba-cleanup_1, sentry_onpremise_snuba-sessions-consumer_1, sentry_onpremise_snuba-api_1, sentry_onpremise_snuba-subscription-consumer-transactions_1, sentry_onpremise_relay_1, sentry_onpremise_snuba-replacer_1, sentry_onpremise_subscription-consumer-transactions_1, sentry_onpremise_post-process-forwarder_1, sentry_onpremise_worker_1, sentry_onpremise_cron_1, sentry_onpremise_web_1, sentry_onpremise_sentry-cleanup_1, sentry_onpremise_subscription-consumer-events_1, sentry_onpremise_ingest-consumer_1, sentry_onpremise_nginx_1
clickhouse_1                                | Processing configuration file '/etc/clickhouse-server/config.xml'.
clickhouse_1                                | Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
clickhouse_1                                | Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'.
clickhouse_1                                | Include not found: clickhouse_remote_servers
clickhouse_1                                | Include not found: clickhouse_compression
clickhouse_1                                | Logging information to /var/log/clickhouse-server/clickhouse-server.log
clickhouse_1                                | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse_1                                | Logging information to console
clickhouse_1                                | 2021.03.09 05:41:57.413208 [ 1 ] {} <Information> : Starting ClickHouse 20.3.9.70 with revision 54433
clickhouse_1                                | 2021.03.09 05:41:57.426941 [ 1 ] {} <Information> Application: starting up
clickhouse_1                                | Include not found: networks
clickhouse_1                                | 2021.03.09 05:41:57.473065 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 7.82 GiB because the system has low amount of memory
clickhouse_1                                | 2021.03.09 05:41:57.473315 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
clickhouse_1                                | 2021.03.09 05:41:57.489723 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 1 tables and 0 dictionaries.
clickhouse_1                                | 2021.03.09 05:41:57.513336 [ 44 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
clickhouse_1                                | 2021.03.09 05:41:57.632238 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
clickhouse_1                                | 2021.03.09 05:41:57.753388 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 13 tables and 0 dictionaries.
clickhouse_1                                | 2021.03.09 05:41:57.843073 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
clickhouse_1                                | 2021.03.09 05:41:57.844386 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
clickhouse_1                                | 2021.03.09 05:41:57.845596 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
clickhouse_1                                | 2021.03.09 05:41:57.845634 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
clickhouse_1                                | 2021.03.09 05:41:57.847402 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse_1                                | 2021.03.09 05:41:57.848102 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse_1                                | 2021.03.09 05:41:57.848315 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse_1                                | 2021.03.09 05:41:57.848527 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse_1                                | 2021.03.09 05:41:57.849663 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
clickhouse_1                                | 2021.03.09 05:41:57.849714 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
clickhouse_1                                | 2021.03.09 05:41:57.849752 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
clickhouse_1                                | 2021.03.09 05:41:58.519559 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
clickhouse_1                                | 2021.03.09 05:41:58.520373 [ 1 ] {} <Information> Application: Available RAM: 15.64 GiB; physical cores: 1; logical cores: 2.
clickhouse_1                                | 2021.03.09 05:41:58.520563 [ 1 ] {} <Information> Application: Ready for connections.
clickhouse_1                                | Include not found: clickhouse_remote_servers
clickhouse_1                                | Include not found: clickhouse_compression
geoipupdate_1                               | error loading configuration file /sentry/GeoIP.conf: error opening file: open /sentry/GeoIP.conf: no such file or directory
sentry_onpremise_geoipupdate_1 exited with code 1
kafka_1                                     | ===> ENV Variables ...
kafka_1                                     | ALLOW_UNSIGNED=false
kafka_1                                     | COMPONENT=kafka
kafka_1                                     | CONFLUENT_DEB_VERSION=1
kafka_1                                     | CONFLUENT_PLATFORM_LABEL=
kafka_1                                     | CONFLUENT_SUPPORT_METRICS_ENABLE=false
kafka_1                                     | CONFLUENT_VERSION=5.5.0
kafka_1                                     | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
kafka_1                                     | HOME=/root
kafka_1                                     | HOSTNAME=2a49effa25f6
kafka_1                                     | KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
kafka_1                                     | KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
kafka_1                                     | KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
kafka_1                                     | KAFKA_LOG_RETENTION_HOURS=24
kafka_1                                     | KAFKA_MAX_REQUEST_SIZE=50000000
kafka_1                                     | KAFKA_MESSAGE_MAX_BYTES=50000000
kafka_1                                     | KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1
kafka_1                                     | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
kafka_1                                     | KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
kafka_1                                     | KAFKA_VERSION=
kafka_1                                     | KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
kafka_1                                     | LANG=C.UTF-8
kafka_1                                     | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
kafka_1                                     | PWD=/
kafka_1                                     | PYTHON_PIP_VERSION=8.1.2
kafka_1                                     | PYTHON_VERSION=2.7.9-1
kafka_1                                     | SCALA_VERSION=2.12
kafka_1                                     | SHLVL=1
kafka_1                                     | ZULU_OPENJDK_VERSION=8=8.38.0.13
kafka_1                                     | _=/usr/bin/env
kafka_1                                     | ===> User
kafka_1                                     | uid=0(root) gid=0(root) groups=0(root)
kafka_1                                     | ===> Configuring ...
postgres_1                                  |
postgres_1                                  | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1                                  |
postgres_1                                  | LOG:  database system was shut down at 2021-03-09 04:29:31 UTC
postgres_1                                  | LOG:  MultiXact member wraparound protections are now enabled
postgres_1                                  | LOG:  database system is ready to accept connections
postgres_1                                  | LOG:  autovacuum launcher started
redis_1                                     | 1:C 09 Mar 2021 05:41:56.020 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1                                     | 1:C 09 Mar 2021 05:41:56.020 # Redis version=5.0.12, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1                                     | 1:C 09 Mar 2021 05:41:56.020 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1                                     | 1:M 09 Mar 2021 05:41:56.021 * Running mode=standalone, port=6379.
redis_1                                     | 1:M 09 Mar 2021 05:41:56.021 # Server initialized
redis_1                                     | 1:M 09 Mar 2021 05:41:56.022 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1                                     | 1:M 09 Mar 2021 05:41:56.022 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1                                     | 1:M 09 Mar 2021 05:41:56.030 * DB loaded from disk: 0.008 seconds
redis_1                                     | 1:M 09 Mar 2021 05:41:56.030 * Ready to accept connections
relay_1                                     | 2021-03-09T05:42:06Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 14ms in state CONNECT)
relay_1                                     | 2021-03-09T05:42:06Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 14ms in state CONNECT)
relay_1                                     | 2021-03-09T05:42:06Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
relay_1                                     | 2021-03-09T05:42:06Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT)
relay_1                                     | 2021-03-09T05:42:06Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT)
relay_1                                     | 2021-03-09T05:42:06Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
relay_1                                     | 2021-03-09T05:42:07Z [actix::actors::resolver] WARN: Can not create system dns resolver: io error
relay_1                                     | 2021-03-09T05:42:07Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
relay_1                                     | 2021-03-09T05:42:07Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
relay_1                                     | 2021-03-09T05:42:07Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 1ms in state CONNECT, 1 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:07Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 1ms in state CONNECT, 1 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:07Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:07Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:08Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
relay_1                                     | 2021-03-09T05:42:09Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
relay_1                                     | 2021-03-09T05:42:12Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
sentry-cleanup_1                            | SHELL=/bin/bash
sentry-cleanup_1                            | BASH_ENV=/container.env
sentry-cleanup_1                            | 0 0 * * * gosu sentry sentry cleanup --days 90 > /proc/1/fd/1 2>/proc/1/fd/2
smtp_1                                      | + sed -ri '
smtp_1                                      |   s/^#?(dc_local_interfaces)=.*/\1='\''0.0.0.0 ; ::0'\''/;
smtp_1                                      |   s/^#?(dc_other_hostnames)=.*/\1='\'''\''/;
smtp_1                                      |   s/^#?(dc_relay_nets)=.*/\1='\''0.0.0.0\/0'\''/;
smtp_1                                      |   s/^#?(dc_eximconfig_configtype)=.*/\1='\''internet'\''/;
smtp_1                                      | ' /etc/exim4/update-exim4.conf.conf
smtp_1                                      | + update-exim4.conf -v
smtp_1                                      | using non-split configuration scheme from /etc/exim4/exim4.conf.template
smtp_1                                      |   272 LOG: MAIN
smtp_1                                      |   272   exim 4.92 daemon started: pid=272, no queue runs, listening for SMTP on port 25 (IPv6 and IPv4)
snuba-api_1                                 | + '[' a = - ']'
snuba-api_1                                 | + snuba api --help
snuba-cleanup_1                             | SHELL=/bin/bash
snuba-cleanup_1                             | BASH_ENV=/container.env
snuba-cleanup_1                             | */5 * * * * gosu snuba snuba cleanup --dry-run False > /proc/1/fd/1 2>/proc/1/fd/2
snuba-consumer_1                            | + '[' c = - ']'
snuba-consumer_1                            | + snuba consumer --help
snuba-outcomes-consumer_1                   | + '[' c = - ']'
snuba-outcomes-consumer_1                   | + snuba consumer --help
snuba-replacer_1                            | + '[' r = - ']'
snuba-replacer_1                            | + snuba replacer --help
snuba-sessions-consumer_1                   | + '[' c = - ']'
snuba-sessions-consumer_1                   | + snuba consumer --help
snuba-subscription-consumer-events_1        | + '[' s = - ']'
snuba-subscription-consumer-events_1        | + snuba subscriptions --help
snuba-subscription-consumer-transactions_1  | + '[' s = - ']'
snuba-subscription-consumer-transactions_1  | + snuba subscriptions --help
snuba-transactions-consumer_1               | + '[' c = - ']'
snuba-transactions-consumer_1               | + snuba consumer --help
symbolicator-cleanup_1                      | SHELL=/bin/bash
symbolicator-cleanup_1                      | BASH_ENV=/container.env
symbolicator-cleanup_1                      | 55 23 * * * gosu symbolicator symbolicator cleanup > /proc/1/fd/1 2>/proc/1/fd/2
zookeeper_1                                 | ===> ENV Variables ...
zookeeper_1                                 | ALLOW_UNSIGNED=false
zookeeper_1                                 | COMPONENT=zookeeper
zookeeper_1                                 | CONFLUENT_DEB_VERSION=1
zookeeper_1                                 | CONFLUENT_PLATFORM_LABEL=
zookeeper_1                                 | CONFLUENT_SUPPORT_METRICS_ENABLE=false
zookeeper_1                                 | CONFLUENT_VERSION=5.5.0
zookeeper_1                                 | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
zookeeper_1                                 | HOME=/root
zookeeper_1                                 | HOSTNAME=b8d177c4e99b
zookeeper_1                                 | KAFKA_VERSION=
zookeeper_1                                 | LANG=C.UTF-8
zookeeper_1                                 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
zookeeper_1                                 | PWD=/
zookeeper_1                                 | PYTHON_PIP_VERSION=8.1.2
zookeeper_1                                 | PYTHON_VERSION=2.7.9-1
zookeeper_1                                 | SCALA_VERSION=2.12
zookeeper_1                                 | SHLVL=1
zookeeper_1                                 | ZOOKEEPER_CLIENT_PORT=2181
zookeeper_1                                 | ZOOKEEPER_LOG4J_ROOT_LOGLEVEL=WARN
zookeeper_1                                 | ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL=WARN
zookeeper_1                                 | ZULU_OPENJDK_VERSION=8=8.38.0.13
zookeeper_1                                 | _=/usr/bin/env
zookeeper_1                                 | ===> User
zookeeper_1                                 | uid=0(root) gid=0(root) groups=0(root)
zookeeper_1                                 | ===> Configuring ...
zookeeper_1                                 | ===> Running preflight checks ...
zookeeper_1                                 | ===> Check if /var/lib/zookeeper/data is writable ...
relay_1                                     | 2021-03-09T05:42:15Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
zookeeper_1                                 | ===> Check if /var/lib/zookeeper/log is writable ...
snuba-subscription-consumer-events_1        | + set -- snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-events_1        | + set gosu snuba snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-events_1        | + exec gosu snuba snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
relay_1                                     | 2021-03-09T05:42:20Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 0ns
relay_1                                     | 2021-03-09T05:42:20Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
relay_1                                     | 2021-03-09T05:42:20Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 1s
snuba-transactions-consumer_1               | + set -- snuba consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --commit-log-topic=snuba-commit-log
snuba-transactions-consumer_1               | + set gosu snuba snuba consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --commit-log-topic=snuba-commit-log
snuba-transactions-consumer_1               | + exec gosu snuba snuba consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --commit-log-topic=snuba-commit-log
relay_1                                     | 2021-03-09T05:42:21Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 1.5s
zookeeper_1                                 | ===> Launching ...
zookeeper_1                                 | ===> Launching zookeeper ...
snuba-outcomes-consumer_1                   | + set -- snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
snuba-outcomes-consumer_1                   | + set gosu snuba snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
snuba-outcomes-consumer_1                   | + exec gosu snuba snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
snuba-consumer_1                            | + set -- snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1                            | + set gosu snuba snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1                            | + exec gosu snuba snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
relay_1                                     | 2021-03-09T05:42:22Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 2.25s
relay_1                                     | 2021-03-09T05:42:25Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 3.375s
snuba-replacer_1                            | + set -- snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1                            | + set gosu snuba snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1                            | + exec gosu snuba snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-sessions-consumer_1                   | + set -- snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-sessions-consumer_1                   | + set gosu snuba snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-sessions-consumer_1                   | + exec gosu snuba snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-api_1                                 | + set -- snuba api
snuba-api_1                                 | + set gosu snuba snuba api
snuba-api_1                                 | + exec gosu snuba snuba api
snuba-subscription-consumer-transactions_1  | + set -- snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-transactions-subscriptions-consumers --topic=events --result-topic=transactions-subscription-results --dataset=transactions --commit-log-topic=snuba-commit-log --commit-log-group=transactions_group --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-transactions_1  | + set gosu snuba snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-transactions-subscriptions-consumers --topic=events --result-topic=transactions-subscription-results --dataset=transactions --commit-log-topic=snuba-commit-log --commit-log-group=transactions_group --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-transactions_1  | + exec gosu snuba snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-transactions-subscriptions-consumers --topic=events --result-topic=transactions-subscription-results --dataset=transactions --commit-log-topic=snuba-commit-log --commit-log-group=transactions_group --delay-seconds=60 --schedule-ttl=60
kafka_1                                     | ===> Running preflight checks ...
kafka_1                                     | ===> Check if /var/lib/kafka/data is writable ...
relay_1                                     | 2021-03-09T05:42:28Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
relay_1                                     | 2021-03-09T05:42:28Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 5.0625s
zookeeper_1                                 | [2021-03-09 05:42:29,280] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
kafka_1                                     | ===> Check if Zookeeper is healthy ...
zookeeper_1                                 | [2021-03-09 05:42:33,252] WARN o.e.j.s.ServletContextHandler@4d95d2a2{/,null,UNAVAILABLE} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1                                 | [2021-03-09 05:42:33,282] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
relay_1                                     | 2021-03-09T05:42:33Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 7.59375s
snuba-subscription-consumer-events_1        | %3|1615268555.276|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 2ms in state CONNECT)
snuba-subscription-consumer-events_1        | %3|1615268556.266|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT)
snuba-subscription-consumer-events_1        | %3|1615268556.266|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-subscription-consumer-events_1        | %3|1615268557.265|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-transactions-consumer_1               | %3|1615268557.633|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 1ms in state CONNECT)
snuba-transactions-consumer_1               | %3|1615268557.659|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 8ms in state CONNECT)
relay_1                                     | 2021-03-09T05:42:37Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 30 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:37Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 30 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:37Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 30 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:37Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 30 identical error(s) suppressed)
snuba-consumer_1                            | %3|1615268558.251|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 2ms in state CONNECT)
snuba-consumer_1                            | %3|1615268558.255|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 23ms in state CONNECT)
snuba-outcomes-consumer_1                   | %3|1615268558.265|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 7ms in state CONNECT)
snuba-outcomes-consumer_1                   | %3|1615268558.266|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 19ms in state CONNECT)
snuba-transactions-consumer_1               | %3|1615268558.631|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-transactions-consumer_1               | %3|1615268558.634|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=2a49effa25f6
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=5.4.0-1037-gcp
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=237MB
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=3559MB
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=241MB
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
snuba-consumer_1                            | %3|1615268559.248|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-outcomes-consumer_1                   | %3|1615268559.249|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-outcomes-consumer_1                   | %3|1615268559.249|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-consumer_1                            | %3|1615268559.250|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
kafka_1                                     | [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
kafka_1                                     | [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka_1                                     | [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
relay_1                                     | 2021-03-09T05:42:39Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
kafka_1                                     | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.18.0.6:2181. Will not attempt to authenticate using SASL (unknown error)
snuba-replacer_1                            | %3|1615268560.226|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 7ms in state CONNECT)
kafka_1                                     | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.18.0.11:49766, server: zookeeper/172.18.0.6:2181
kafka_1                                     | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.18.0.6:2181, sessionid = 0x100049a0a470000, negotiated timeout = 40000
snuba-api_1                                 | *** Starting uWSGI 2.0.18 (64bit) on [Tue Mar  9 05:42:40 2021] ***
snuba-api_1                                 | compiled with version: 8.3.0 on 23 February 2021 15:53:08
snuba-api_1                                 | os: Linux-5.4.0-1037-gcp #40-Ubuntu SMP Fri Feb 5 11:57:53 UTC 2021
snuba-api_1                                 | nodename: 39c09f6125f6
snuba-api_1                                 | machine: x86_64
snuba-api_1                                 | clock source: unix
snuba-api_1                                 | pcre jit disabled
snuba-api_1                                 | detected number of CPU cores: 2
snuba-api_1                                 | current working directory: /usr/src/snuba
snuba-api_1                                 | detected binary path: /usr/local/bin/uwsgi
snuba-api_1                                 | your memory page size is 4096 bytes
snuba-api_1                                 | detected max file descriptor number: 1048576
snuba-api_1                                 | lock engine: pthread robust mutexes
snuba-api_1                                 | thunder lock: enabled
snuba-api_1                                 | uwsgi socket 0 bound to TCP address 0.0.0.0:1218 fd 3
snuba-api_1                                 | Python version: 3.8.8 (default, Feb 19 2021, 18:07:06)  [GCC 8.3.0]
snuba-api_1                                 | Set PythonHome to /usr/local
snuba-api_1                                 | Python main interpreter initialized at 0x557a1d981be0
snuba-sessions-consumer_1                   | %3|1615268560.672|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 8ms in state CONNECT)
snuba-sessions-consumer_1                   | %3|1615268560.673|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 2ms in state CONNECT)
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x100049a0a470000 closed
kafka_1                                     | [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x100049a0a470000
snuba-api_1                                 | python threads support enabled
snuba-api_1                                 | your server socket listen backlog is limited to 100 connections
snuba-api_1                                 | your mercy for graceful operations on workers is 60 seconds
snuba-api_1                                 | mapped 145808 bytes (142 KB) for 1 cores
snuba-api_1                                 | *** Operational MODE: single process ***
snuba-api_1                                 | initialized 38 metrics
snuba-api_1                                 | spawned uWSGI master process (pid: 1)
snuba-api_1                                 | spawned uWSGI worker 1 (pid: 14, cores: 1)
snuba-api_1                                 | metrics collector thread started
kafka_1                                     | ===> Launching ...
kafka_1                                     | ===> Launching kafka ...
snuba-replacer_1                            | %3|1615268561.208|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:41Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 11.390625s
snuba-sessions-consumer_1                   | %3|1615268561.666|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-sessions-consumer_1                   | %3|1615268561.669|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-subscription-consumer-transactions_1  | %3|1615268561.701|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 13ms in state CONNECT)
snuba-subscription-consumer-transactions_1  | %3|1615268562.687|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT)
snuba-subscription-consumer-transactions_1  | %3|1615268562.687|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
snuba-subscription-consumer-transactions_1  | %3|1615268563.687|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
kafka_1                                     | [2021-03-09 05:42:45,249] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
snuba-api_1                                 | WSGI app 0 (mountpoint='') ready in 6 seconds on interpreter 0x557a1d981be0 pid: 14 (default app)
kafka_1                                     | [2021-03-09 05:42:49,960] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
kafka_1                                     | [2021-03-09 05:42:49,967] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
subscription-consumer-transactions_1        | 05:42:52 [INFO] sentry.plugins.github: apps-not-configured
post-process-forwarder_1                    | 05:42:52 [INFO] sentry.plugins.github: apps-not-configured
web_1                                       | 05:42:52 [INFO] sentry.plugins.github: apps-not-configured
subscription-consumer-events_1              | 05:42:52 [INFO] sentry.plugins.github: apps-not-configured
worker_1                                    | 05:42:52 [INFO] sentry.plugins.github: apps-not-configured
ingest-consumer_1                           | 05:42:52 [INFO] sentry.plugins.github: apps-not-configured
cron_1                                      | 05:42:52 [INFO] sentry.plugins.github: apps-not-configured
relay_1                                     | 2021-03-09T05:42:52Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 17.0859375s
subscription-consumer-transactions_1        | %3|1615268574.107|FAIL|rdkafka#producer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 4ms in state CONNECT)
post-process-forwarder_1                    | %3|1615268574.184|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT)
post-process-forwarder_1                    | %3|1615268574.186|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 2ms in state CONNECT)
subscription-consumer-events_1              | %3|1615268574.293|FAIL|rdkafka#producer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT)
worker_1                                    | 05:42:54 [INFO] sentry.bgtasks: bgtask.spawn (task_name='sentry.bgtasks.clean_dsymcache:clean_dsymcache')
web_1                                       | *** Starting uWSGI 2.0.18 (64bit) on [Tue Mar  9 05:42:54 2021] ***
web_1                                       | compiled with version: 8.3.0 on 08 March 2021 18:55:51
web_1                                       | os: Linux-5.4.0-1037-gcp #40-Ubuntu SMP Fri Feb 5 11:57:53 UTC 2021
web_1                                       | nodename: 27393629cb97
web_1                                       | machine: x86_64
web_1                                       | clock source: unix
web_1                                       | detected number of CPU cores: 2
web_1                                       | current working directory: /
web_1                                       | detected binary path: /usr/local/bin/uwsgi
web_1                                       | !!! no internal routing support, rebuild with pcre support !!!
web_1                                       | your memory page size is 4096 bytes
web_1                                       | detected max file descriptor number: 1048576
web_1                                       | lock engine: pthread robust mutexes
web_1                                       | thunder lock: enabled
web_1                                       | uWSGI http bound on 0.0.0.0:9000 fd 4
web_1                                       | uwsgi socket 0 bound to TCP address 127.0.0.1:41627 (port auto-assigned) fd 3
web_1                                       | Python version: 3.6.13 (default, Feb 16 2021, 20:33:02)  [GCC 8.3.0]
web_1                                       | Set PythonHome to /usr/local
worker_1                                    | 05:42:54 [INFO] sentry.bgtasks: bgtask.spawn (task_name='sentry.bgtasks.clean_releasefilecache:clean_releasefilecache')
worker_1                                    | * Unknown config option found: 'slack.legacy-app'
web_1                                       | Python main interpreter initialized at 0x5640c2025ef0
web_1                                       | python threads support enabled
web_1                                       | your server socket listen backlog is limited to 100 connections
web_1                                       | your mercy for graceful operations on workers is 60 seconds
web_1                                       | setting request body buffering size to 65536 bytes
web_1                                       | mapped 1924224 bytes (1879 KB) for 12 cores
web_1                                       | *** Operational MODE: preforking+threaded ***
web_1                                       | spawned uWSGI master process (pid: 19)
web_1                                       | spawned uWSGI worker 1 (pid: 23, cores: 4)
web_1                                       | spawned uWSGI worker 2 (pid: 24, cores: 4)
web_1                                       | spawned uWSGI worker 3 (pid: 25, cores: 4)
web_1                                       | spawned uWSGI http 1 (pid: 26)
cron_1                                      | * Unknown config option found: 'slack.legacy-app'
ingest-consumer_1                           | %3|1615268574.643|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 3ms in state CONNECT)
subscription-consumer-transactions_1        | %3|1615268575.103|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT)
subscription-consumer-transactions_1        | %3|1615268575.108|FAIL|rdkafka#producer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
post-process-forwarder_1                    | %3|1615268575.182|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
post-process-forwarder_1                    | %3|1615268575.184|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
subscription-consumer-events_1              | %3|1615268575.293|FAIL|rdkafka#producer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
subscription-consumer-events_1              | %3|1615268575.293|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT)
kafka_1                                     | [2021-03-09 05:42:55,405] INFO Starting the log cleaner (kafka.log.LogCleaner)
ingest-consumer_1                           | %3|1615268575.639|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
kafka_1                                     | [2021-03-09 05:42:55,735] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
subscription-consumer-transactions_1        | %3|1615268576.104|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
subscription-consumer-events_1              | %3|1615268576.293|FAIL|rdkafka#consumer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
relay_1                                     | 2021-03-09T05:42:56Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                                     |   caused by: could not send request using reqwest
relay_1                                     |   caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
kafka_1                                     | [2021-03-09 05:42:57,283] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1                                     | [2021-03-09 05:42:57,436] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
kafka_1                                     | [2021-03-09 05:42:57,491] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
kafka_1                                     | [2021-03-09 05:42:57,881] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_1                                     | [2021-03-09 05:42:57,930] INFO Stat of the created znode at /brokers/ids/1001 is: 109,109,1615268577915,1615268577915,1,0,0,72062653681827841,180,0,109
kafka_1                                     |  (kafka.zk.KafkaZkClient)
kafka_1                                     | [2021-03-09 05:42:57,931] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 109 (kafka.zk.KafkaZkClient)
worker_1                                    |
worker_1                                    |  -------------- celery@7afd6977fee3 v4.4.7 (cliffs)
worker_1                                    | --- ***** -----
worker_1                                    | -- ******* ---- Linux-5.4.0-1037-gcp-x86_64-with-debian-10.8 2021-03-09 05:42:57
worker_1                                    | - *** --- * ---
worker_1                                    | - ** ---------- [config]
worker_1                                    | - ** ---------- .> app:         sentry:0x7f4ebf9c8b38
worker_1                                    | - ** ---------- .> transport:   redis://redis:6379/0
worker_1                                    | - ** ---------- .> results:     disabled://
worker_1                                    | - *** --- * --- .> concurrency: 2 (prefork)
worker_1                                    | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
worker_1                                    | --- ***** -----
worker_1                                    |  -------------- [queues]
worker_1                                    |                 .> activity.notify  exchange=default(direct) key=activity.notify
worker_1                                    |                 .> alerts           exchange=default(direct) key=alerts
worker_1                                    |                 .> app_platform     exchange=default(direct) key=app_platform
worker_1                                    |                 .> assemble         exchange=default(direct) key=assemble
worker_1                                    |                 .> auth             exchange=default(direct) key=auth
worker_1                                    |                 .> buffers.process_pending exchange=default(direct) key=buffers.process_pending
worker_1                                    |                 .> cleanup          exchange=default(direct) key=cleanup
worker_1                                    |                 .> commits          exchange=default(direct) key=commits
worker_1                                    |                 .> counters-0       exchange=counters(direct) key=default
worker_1                                    |                 .> data_export      exchange=default(direct) key=data_export
worker_1                                    |                 .> default          exchange=default(direct) key=default
worker_1                                    |                 .> digests.delivery exchange=default(direct) key=digests.delivery
worker_1                                    |                 .> digests.scheduling exchange=default(direct) key=digests.scheduling
worker_1                                    |                 .> email            exchange=default(direct) key=email
worker_1                                    |                 .> events.preprocess_event exchange=default(direct) key=events.preprocess_event
worker_1                                    |                 .> events.process_event exchange=default(direct) key=events.process_event
worker_1                                    |                 .> events.reprocess_events exchange=default(direct) key=events.reprocess_events
worker_1                                    |                 .> events.reprocessing.preprocess_event exchange=default(direct) key=events.reprocessing.preprocess_event
worker_1                                    |                 .> events.reprocessing.process_event exchange=default(direct) key=events.reprocessing.process_event
worker_1                                    |                 .> events.reprocessing.symbolicate_event exchange=default(direct) key=events.reprocessing.symbolicate_event
worker_1                                    |                 .> events.save_event exchange=default(direct) key=events.save_event
worker_1                                    |                 .> events.symbolicate_event exchange=default(direct) key=events.symbolicate_event
worker_1                                    |                 .> files.delete     exchange=default(direct) key=files.delete
worker_1                                    |                 .> group_owners.process_suspect_commits exchange=default(direct) key=group_owners.process_suspect_commits
worker_1                                    |                 .> incident_snapshots exchange=default(direct) key=incident_snapshots
worker_1                                    |                 .> incidents        exchange=default(direct) key=incidents
worker_1                                    |                 .> integrations     exchange=default(direct) key=integrations
worker_1                                    |                 .> merge            exchange=default(direct) key=merge
worker_1                                    |                 .> options          exchange=default(direct) key=options
worker_1                                    |                 .> relay_config     exchange=default(direct) key=relay_config
worker_1                                    |                 .> reports.deliver  exchange=default(direct) key=reports.deliver
worker_1                                    |                 .> reports.prepare  exchange=default(direct) key=reports.prepare
worker_1                                    |                 .> search           exchange=default(direct) key=search
worker_1                                    |                 .> sleep            exchange=default(direct) key=sleep
worker_1                                    |                 .> stats            exchange=default(direct) key=stats
worker_1                                    |                 .> subscriptions    exchange=default(direct) key=subscriptions
worker_1                                    |                 .> triggers-0       exchange=triggers(direct) key=default
worker_1                                    |                 .> unmerge          exchange=default(direct) key=unmerge
worker_1                                    |                 .> update           exchange=default(direct) key=update
worker_1                                    |
kafka_1 | [2021-03-09 05:42:59,164] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_1 | [2021-03-09 05:42:59,329] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
kafka_1 | [2021-03-09 05:42:59,714] INFO Creating topic transactions-subscription-results with configuration {} and initial partition assignment Map(0 → ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
ingest-consumer_1 | 05:42:59 [WARNING] batching-kafka-consumer: Topic ‘ingest-events’ or its partitions are not ready, retrying…
subscription-consumer-transactions_1 | 05:42:59 [WARNING] batching-kafka-consumer: Topic ‘transactions-subscription-results’ or its partitions are not ready, retrying…
subscription-consumer-events_1 | 05:42:59 [WARNING] batching-kafka-consumer: Topic ‘events-subscription-results’ or its partitions are not ready, retrying…
subscription-consumer-transactions_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘transactions-subscription-results’ or its partitions are not ready, retrying…
ingest-consumer_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘ingest-events’ or its partitions are not ready, retrying…
subscription-consumer-transactions_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘transactions-subscription-results’ or its partitions are not ready, retrying…
ingest-consumer_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘ingest-events’ or its partitions are not ready, retrying…
subscription-consumer-events_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘events-subscription-results’ or its partitions are not ready, retrying…
subscription-consumer-transactions_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘transactions-subscription-results’ or its partitions are not ready, retrying…
subscription-consumer-events_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘events-subscription-results’ or its partitions are not ready, retrying…
snuba-replacer_1 | 2021-03-09 05:43:00,314 Caught ConsumerError(‘KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str=“JoinGroup failed: Broker: Coordinator load in progress”}’), shutting down…
post-process-forwarder_1 | * Unknown config option found: ‘slack.legacy-app’
snuba-consumer_1 | 2021-03-09 05:43:00,391 Caught ConsumerError(‘KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str=“JoinGroup failed: Broker: Coordinator load in progress”}’), shutting down…
snuba-replacer_1 | Traceback (most recent call last):
snuba-replacer_1 | File “/usr/local/bin/snuba”, line 33, in
snuba-replacer_1 | sys.exit(load_entry_point(‘snuba’, ‘console_scripts’, ‘snuba’)())
snuba-replacer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 829, in call
snuba-replacer_1 | return self.main(*args, **kwargs)
snuba-replacer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 782, in main
snuba-replacer_1 | rv = self.invoke(ctx)
snuba-replacer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 1259, in invoke
snuba-replacer_1 | return _process_result(sub_ctx.command.invoke(sub_ctx))
snuba-replacer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 1066, in invoke
snuba-replacer_1 | return ctx.invoke(self.callback, **ctx.params)
snuba-replacer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 610, in invoke
snuba-replacer_1 | return callback(*args, **kwargs)
snuba-replacer_1 | File “/usr/src/snuba/snuba/cli/replacer.py”, line 133, in replacer
snuba-replacer_1 | replacer.run()
snuba-replacer_1 | File “/usr/src/snuba/snuba/utils/streams/processing/processor.py”, line 112, in run
snuba-replacer_1 | self._run_once()
snuba-replacer_1 | File “/usr/src/snuba/snuba/utils/streams/processing/processor.py”, line 142, in _run_once
snuba-replacer_1 | self.__message = self.__consumer.poll(timeout=1.0)
snuba-replacer_1 | File “/usr/src/snuba/snuba/utils/streams/backends/kafka.py”, line 404, in poll
snuba-replacer_1 | raise ConsumerError(str(error))
snuba-replacer_1 | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str=“JoinGroup failed: Broker: Coordinator load in progress”}
snuba-consumer_1 | Traceback (most recent call last):
snuba-consumer_1 | File “/usr/local/bin/snuba”, line 33, in
snuba-consumer_1 | sys.exit(load_entry_point(‘snuba’, ‘console_scripts’, ‘snuba’)())
snuba-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 829, in call
snuba-consumer_1 | return self.main(*args, **kwargs)
snuba-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 782, in main
snuba-consumer_1 | rv = self.invoke(ctx)
snuba-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 1259, in invoke
snuba-consumer_1 | return _process_result(sub_ctx.command.invoke(sub_ctx))
snuba-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 1066, in invoke
snuba-consumer_1 | return ctx.invoke(self.callback, **ctx.params)
snuba-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 610, in invoke
snuba-consumer_1 | return callback(*args, **kwargs)
snuba-consumer_1 | File “/usr/src/snuba/snuba/cli/consumer.py”, line 161, in consumer
snuba-consumer_1 | consumer.run()
snuba-consumer_1 | File “/usr/src/snuba/snuba/utils/streams/processing/processor.py”, line 112, in run
snuba-consumer_1 | self._run_once()
snuba-consumer_1 | File “/usr/src/snuba/snuba/utils/streams/processing/processor.py”, line 142, in _run_once
snuba-consumer_1 | self.__message = self.__consumer.poll(timeout=1.0)
snuba-consumer_1 | File “/usr/src/snuba/snuba/utils/streams/backends/kafka.py”, line 767, in poll
snuba-consumer_1 | return super().poll(timeout)
snuba-consumer_1 | File “/usr/src/snuba/snuba/utils/streams/backends/kafka.py”, line 404, in poll
snuba-consumer_1 | raise ConsumerError(str(error))
snuba-consumer_1 | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str=“JoinGroup failed: Broker: Coordinator load in progress”}
post-process-forwarder_1 | Traceback (most recent call last):
post-process-forwarder_1 | File “/usr/local/bin/sentry”, line 8, in
post-process-forwarder_1 | sys.exit(main())
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/sentry/runner/init.py”, line 164, in main
post-process-forwarder_1 | cli(prog_name=get_prog(), obj={}, max_content_width=100)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/core.py”, line 829, in call
post-process-forwarder_1 | return self.main(*args, **kwargs)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/core.py”, line 782, in main
post-process-forwarder_1 | rv = self.invoke(ctx)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/core.py”, line 1259, in invoke
post-process-forwarder_1 | return _process_result(sub_ctx.command.invoke(sub_ctx))
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/core.py”, line 1259, in invoke
post-process-forwarder_1 | return _process_result(sub_ctx.command.invoke(sub_ctx))
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/core.py”, line 1066, in invoke
post-process-forwarder_1 | return ctx.invoke(self.callback, **ctx.params)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/core.py”, line 610, in invoke
post-process-forwarder_1 | return callback(*args, **kwargs)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/decorators.py”, line 21, in new_func
post-process-forwarder_1 | return f(get_current_context(), *args, **kwargs)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/sentry/runner/decorators.py”, line 66, in inner
post-process-forwarder_1 | return ctx.invoke(f, *args, **kwargs)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/core.py”, line 610, in invoke
post-process-forwarder_1 | return callback(*args, **kwargs)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/decorators.py”, line 21, in new_func
post-process-forwarder_1 | return f(get_current_context(), *args, **kwargs)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/sentry/runner/decorators.py”, line 28, in inner
post-process-forwarder_1 | return ctx.invoke(f, *args, **kwargs)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/click/core.py”, line 610, in invoke
post-process-forwarder_1 | return callback(*args, **kwargs)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/sentry/runner/commands/run.py”, line 333, in post_process_forwarder
post-process-forwarder_1 | initial_offset_reset=options[“initial_offset_reset”],
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/sentry/utils/services.py”, line 102, in
post-process-forwarder_1 | context[key] = (lambda f: lambda *a, **k: getattr(self, f)(*a, **k))(key)
post-process-forwarder_1 | File “/usr/local/lib/python3.6/site-packages/sentry/eventstream/kafka/backend.py”, line 204, in run_post_process_forwarder
post-process-forwarder_1 | raise Exception(error)
post-process-forwarder_1 | Exception: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str=“JoinGroup failed: Broker: Coordinator load in progress”}
snuba-outcomes-consumer_1 | 2021-03-09 05:43:00,443 Caught ConsumerError(‘KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str=“JoinGroup failed: Broker: Coordinator load in progress”}’), shutting down…
snuba-outcomes-consumer_1 | Traceback (most recent call last):
snuba-outcomes-consumer_1 | File “/usr/local/bin/snuba”, line 33, in
snuba-outcomes-consumer_1 | sys.exit(load_entry_point(‘snuba’, ‘console_scripts’, ‘snuba’)())
snuba-outcomes-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 829, in call
snuba-outcomes-consumer_1 | return self.main(*args, **kwargs)
snuba-outcomes-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 782, in main
snuba-outcomes-consumer_1 | rv = self.invoke(ctx)
snuba-outcomes-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 1259, in invoke
snuba-outcomes-consumer_1 | return _process_result(sub_ctx.command.invoke(sub_ctx))
snuba-outcomes-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 1066, in invoke
snuba-outcomes-consumer_1 | return ctx.invoke(self.callback, **ctx.params)
snuba-outcomes-consumer_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 610, in invoke
snuba-outcomes-consumer_1 | return callback(*args, **kwargs)
snuba-outcomes-consumer_1 | File “/usr/src/snuba/snuba/cli/consumer.py”, line 161, in consumer
snuba-outcomes-consumer_1 | consumer.run()
snuba-outcomes-consumer_1 | File “/usr/src/snuba/snuba/utils/streams/processing/processor.py”, line 112, in run
snuba-outcomes-consumer_1 | self._run_once()
snuba-outcomes-consumer_1 | File “/usr/src/snuba/snuba/utils/streams/processing/processor.py”, line 142, in _run_once
snuba-outcomes-consumer_1 | self.__message = self.__consumer.poll(timeout=1.0)
snuba-outcomes-consumer_1 | File “/usr/src/snuba/snuba/utils/streams/backends/kafka.py”, line 404, in poll
snuba-outcomes-consumer_1 | raise ConsumerError(str(error))
snuba-outcomes-consumer_1 | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str=“JoinGroup failed: Broker: Coordinator load in progress”}
subscription-consumer-transactions_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘transactions-subscription-results’ or its partitions are not ready, retrying…
subscription-consumer-events_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic ‘events-subscription-results’ or its partitions are not ready, retrying…
ingest-consumer_1 | * Unknown config option found: ‘slack.legacy-app’
ingest-consumer_1 | /usr/local/lib/python3.6/site-packages/sentry/utils/batching_kafka_consumer.py:51: DeprecationWarning: The ‘warn’ method is deprecated, use ‘warning’ instead
ingest-consumer_1 | logger.warn(“Topic ‘%s’ or its partitions are not ready, retrying…”, topic)
snuba-subscription-consumer-events_1 | Traceback (most recent call last):
snuba-subscription-consumer-events_1 | File “/usr/local/bin/snuba”, line 33, in
snuba-subscription-consumer-events_1 | sys.exit(load_entry_point(‘snuba’, ‘console_scripts’, ‘snuba’)())
snuba-subscription-consumer-events_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 829, in call
snuba-subscription-consumer-events_1 | return self.main(*args, **kwargs)
snuba-subscription-consumer-events_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 782, in main
snuba-subscription-consumer-events_1 | rv = self.invoke(ctx)
snuba-subscription-consumer-events_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 1259, in invoke
snuba-subscription-consumer-events_1 | return _process_result(sub_ctx.command.invoke(sub_ctx))
snuba-subscription-consumer-events_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 1066, in invoke
snuba-subscription-consumer-events_1 | return ctx.invoke(self.callback, **ctx.params)
snuba-subscription-consumer-events_1 | File “/usr/local/lib/python3.8/site-packages/click/core.py”, line 610, in invoke
snuba-subscription-consumer-events_1 | return callback(*args, **kwargs)
snuba-subscription-consumer-events_1 | File “/usr/src/snuba/snuba/cli/subscriptions.py”, line 138, in subscriptions
snuba-subscription-consumer-events_1 | SynchronizedConsumer(
snuba-subscription-consumer-events_1 | File “/usr/src/snuba/snuba/utils/streams/synchronized.py”, line 106, in init
snuba-subscription-consumer-events_1 | self.__commit_log_worker.result()
snuba-subscription-consumer-events_1 | File “/usr/local/lib/python3.8/concurrent/futures/_base.py”, line 432, in result
snuba-subscription-consumer-events_1 | return self.__get_result()
snuba-subscription-consumer-events_1 | File “/usr/local/lib/python3.8/concurrent/futures/_base.py”, line 388, in __get_result
snuba-subscription-consumer-events_1 | raise self._exception
snuba-subscription-consumer-events_1 | File “/usr/src/snuba/snuba/utils/concurrent.py”, line 33, in run
snuba-subscription-consumer-events_1 | result = function()
snuba-subscription-consumer-events_1 | File “/usr/src/snuba/snuba/utils/streams/synchronized.py”, line 130, in __run_commit_log_worker
snuba-subscription-consumer-events_1 | message = self.__commit_log_consumer.poll(0.1)
snuba-subscription-consumer-events_1 | File “/usr/src/snuba/snuba/utils/streams/backends/kafka.py”, line 404, in poll
snuba-subscription-consumer-events_1 | raise ConsumerError(str(error))
snuba-subscription-consumer-events_1 | snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str=“JoinGroup failed: Broker: Coordinator load in progress”}
subscription-consumer-events_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic 'events-subscription-results' or its partitions are not ready, retrying...
subscription-consumer-transactions_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic 'transactions-subscription-results' or its partitions are not ready, retrying...
ingest-consumer_1 | Traceback (most recent call last):
ingest-consumer_1 |   File "/usr/local/bin/sentry", line 8, in <module>
ingest-consumer_1 |     sys.exit(main())
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/sentry/runner/__init__.py", line 164, in main
ingest-consumer_1 |     cli(prog_name=get_prog(), obj={}, max_content_width=100)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 829, in __call__
ingest-consumer_1 |     return self.main(*args, **kwargs)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 782, in main
ingest-consumer_1 |     rv = self.invoke(ctx)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
ingest-consumer_1 |     return _process_result(sub_ctx.command.invoke(sub_ctx))
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
ingest-consumer_1 |     return _process_result(sub_ctx.command.invoke(sub_ctx))
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
ingest-consumer_1 |     return ctx.invoke(self.callback, **ctx.params)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
ingest-consumer_1 |     return callback(*args, **kwargs)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/decorators.py", line 21, in new_func
ingest-consumer_1 |     return f(get_current_context(), *args, **kwargs)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/sentry/runner/decorators.py", line 66, in inner
ingest-consumer_1 |     return ctx.invoke(f, *args, **kwargs)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
ingest-consumer_1 |     return callback(*args, **kwargs)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/decorators.py", line 21, in new_func
ingest-consumer_1 |     return f(get_current_context(), *args, **kwargs)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/sentry/runner/decorators.py", line 28, in inner
ingest-consumer_1 |     return ctx.invoke(f, *args, **kwargs)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
ingest-consumer_1 |     return callback(*args, **kwargs)
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/sentry/runner/commands/run.py", line 486, in ingest_consumer
ingest-consumer_1 |     get_ingest_consumer(consumer_types=consumer_types, **options).run()
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/sentry/utils/batching_kafka_consumer.py", line 257, in run
ingest-consumer_1 |     self._run_once()
ingest-consumer_1 |   File "/usr/local/lib/python3.6/site-packages/sentry/utils/batching_kafka_consumer.py", line 275, in _run_once
ingest-consumer_1 |     raise Exception(msg.error())
ingest-consumer_1 | Exception: KafkaError{code=COORDINATOR_LOAD_IN_PROGRESS,val=14,str="JoinGroup failed: Broker: Coordinator load in progress"}
subscription-consumer-transactions_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic 'transactions-subscription-results' or its partitions are not ready, retrying...
subscription-consumer-events_1 | 05:43:00 [WARNING] batching-kafka-consumer: Topic 'events-subscription-results' or its partitions are not ready, retrying...
subscription-consumer-transactions_1 | 05:43:01 [WARNING] batching-kafka-consumer: Topic 'transactions-subscription-results' or its partitions are not ready, retrying...
subscription-consumer-transactions_1 | 05:43:01 [WARNING] batching-kafka-consumer: Topic 'transactions-subscription-results' or its partitions are not ready, retrying...
snuba-transactions-consumer_1 | 2021-03-09 05:43:05,039 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
snuba-sessions-consumer_1 | 2021-03-09 05:43:05,041 New partitions assigned: {Partition(topic=Topic(name='ingest-sessions'), index=0): 0}
subscription-consumer-events_1 | 05:43:07 [INFO] sentry.snuba.query_subscription_consumer: query-subscription-consumer.on_assign (offsets='{0: None}' partitions='[TopicPartition{topic=events-subscription-results,partition=0,offset=-1001,error=None}]')
subscription-consumer-transactions_1 | 05:43:07 [INFO] sentry.snuba.query_subscription_consumer: query-subscription-consumer.on_assign (offsets='{0: None}' partitions='[TopicPartition{topic=transactions-subscription-results,partition=0,offset=-1001,error=None}]')
snuba-subscription-consumer-transactions_1 | 2021-03-09 05:43:08,239 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
relay_1 | 2021-03-09T05:43:09Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 25.62890625s
snuba-outcomes-consumer_1 | + set -- snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
snuba-outcomes-consumer_1 | + set gosu snuba snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
snuba-outcomes-consumer_1 | + exec gosu snuba snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
snuba-replacer_1 | + set -- snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1 | + set gosu snuba snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1 | + exec gosu snuba snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-subscription-consumer-events_1 | + set -- snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-events_1 | + set gosu snuba snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
snuba-subscription-consumer-events_1 | + exec gosu snuba snuba subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
snuba-consumer_1 | + set -- snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1 | + set gosu snuba snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1 | + exec gosu snuba snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
relay_1 | 2021-03-09T05:43:22Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1 | caused by: could not send request using reqwest
relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
snuba-sessions-consumer_1 | 2021-03-09 05:43:23,049 Partitions revoked: [Partition(topic=Topic(name='ingest-sessions'), index=0)]
snuba-outcomes-consumer_1 | 2021-03-09 05:43:23,081 New partitions assigned: {Partition(topic=Topic(name='outcomes'), index=0): 0}
snuba-consumer_1 | 2021-03-09 05:43:23,083 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
snuba-sessions-consumer_1 | 2021-03-09 05:43:23,479 New partitions assigned: {Partition(topic=Topic(name='ingest-sessions'), index=0): 0}
snuba-replacer_1 | 2021-03-09 05:43:23,990 New partitions assigned: {Partition(topic=Topic(name='event-replacements'), index=0): 0}
post-process-forwarder_1 | 05:43:24 [INFO] sentry.plugins.github: apps-not-configured
ingest-consumer_1 | 05:43:24 [INFO] sentry.plugins.github: apps-not-configured
web_1 | 05:43:27 [INFO] sentry.plugins.github: apps-not-configured
web_1 | 05:43:27 [INFO] sentry.plugins.github: apps-not-configured
snuba-subscription-consumer-events_1 | 2021-03-09 05:43:27,499 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 0}
web_1 | 05:43:27 [INFO] sentry.plugins.github: apps-not-configured
web_1 | * Unknown config option found: 'slack.legacy-app'
web_1 | * Unknown config option found: 'slack.legacy-app'
web_1 | * Unknown config option found: 'slack.legacy-app'
web_1 | WSGI app 0 (mountpoint='') ready in 34 seconds on interpreter 0x5640c2025ef0 pid: 25 (default app)
web_1 | WSGI app 0 (mountpoint='') ready in 34 seconds on interpreter 0x5640c2025ef0 pid: 24 (default app)
web_1 | WSGI app 0 (mountpoint='') ready in 34 seconds on interpreter 0x5640c2025ef0 pid: 23 (default app)
post-process-forwarder_1 | 05:43:28 [INFO] sentry.eventstream.kafka.backend: Received partition assignment: [TopicPartition{topic=events,partition=0,offset=-1001,error=None}]
ingest-consumer_1 | 05:43:28 [INFO] batching-kafka-consumer: New partitions assigned: [TopicPartition{topic=ingest-attachments,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-events,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-transactions,partition=0,offset=-1001,error=None}]
relay_1 | 2021-03-09T05:43:35Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 38.443359375s
relay_1 | 2021-03-09T05:44:00Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1 | caused by: could not send request using reqwest
relay_1 | caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: dns error: no record found for name: web.google.internal. type: AAAA class: IN
relay_1 | 2021-03-09T05:44:13Z [relay_server::actors::upstream] WARN: Network outage, scheduling another check in 57.665039062s

Looks like you have a networking-related issue there. My bet is some weird problem with IPv6, so I'd try disabling it, at least for the internal network.
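
For reference, a minimal sketch of what disabling IPv6 host-wide could look like, assuming a Debian-style host with sysctl (the config file name below is arbitrary, and you'd want to recreate the containers afterwards so they pick up the change):

# Sketch only: disable IPv6 host-wide via sysctl (Debian-style host assumed).
cat <<'EOF' | sudo tee /etc/sysctl.d/99-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sudo sysctl --system

# Recreate the compose services so the containers see the new network settings.
docker-compose down
docker-compose up -d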

Also, if you could use a gist or something similar to share logs this large, it would make reading through them much easier. Wrapping them in a blockquote makes them very hard to read; use a code block if you want to keep pasting into forum posts.

Sorry for the inconvenience; I added the logs as you suggested.
I will try disabling internal IPv6. Thank you so much.
How can I disable the relay in the onpremise package?

You cannot. Why do you want to do this?

While looking for a solution for the built-in relay connecting to Sentry, I ran a standalone relay as a replacement. So I want to turn off the relay in onpremise.

To be able to do this, you'd need to remove the nginx instance inside the package too and then add your own routing. This would also require adding your external relay as a trusted relay in your Sentry settings. I'd highly recommend against this approach.
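
If you do go the standalone-relay route anyway, a rough sketch of what it could look like (this is not something the onpremise repo ships; <host-ip>, the relay-config path, and the port are placeholders): point the external Relay at the host-published port instead of the compose-internal name, then register its public key with Sentry.

# Sketch only: standalone Relay config pointing at the host-mapped web port.
mkdir -p relay-config
cat <<'EOF' > relay-config/config.yml
relay:
  upstream: http://<host-ip>:9000/
  host: 0.0.0.0
  port: 3000
EOF

# Generate credentials; the resulting public key (stored in credentials.json)
# is what Sentry has to trust, e.g. via SENTRY_RELAY_WHITELIST_PK in
# sentry.conf.py, which must be a list or tuple.
docker run --rm -it -v "$(pwd)/relay-config:/work/.relay" getsentry/relay credentials generate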

The same problem here.
VDS, Debian 10
4 CPU, 8 GB RAM
Docker; repo just updated from the master branch