Timeouts when all Docker services are up

The on-premise install is not working when all services are up … It only works on port 8000 after restarting Docker, and then requests time out with a 504 error …

docker-compose version 1.29.2, build 5becea4c
docker-py version: 5.0.0
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019

Client: Docker Engine - Community
Version: 20.10.9
API version: 1.41
Go version: go1.16.8
Git commit: c2ea9bc
Built: Mon Oct 4 16:08:29 2021
OS/Arch: linux/amd64
Context: default
Experimental: true

Server: Docker Engine - Community
Engine:
Version: 20.10.9
API version: 1.41 (minimum version 1.12)
Go version: go1.16.8
Git commit: 79ea9d3
Built: Mon Oct 4 16:06:37 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.11
GitCommit: 5b46e404f6b9f661a205e28d59c982d3634148f8
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0

We need a lot more detail than this, such as the logs. See https://develop.sentry.dev/self-hosted/troubleshooting/#general
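For reference, the kind of logs the troubleshooting guide asks for can be collected with standard `docker-compose` commands. A quick sketch (the service names are the ones that appear later in this thread's compose output, e.g. `web`, `relay`, `nginx`):

```shell
# Tail the most recent log lines for the services most likely
# involved in request timeouts.
docker-compose logs --tail=100 web
docker-compose logs --tail=100 relay
docker-compose logs --tail=100 nginx

# Or capture everything from all services into one file to attach.
docker-compose logs > sentry-logs.txt 2>&1
```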

I did a fresh install and can’t get install.sh to finish:

docker: 'compose' is not a docker command.
See 'docker --help'
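The `'compose' is not a docker command` message means the Docker Compose v2 CLI plugin isn't installed; since the install run continues past it, the script is presumably falling back to the standalone `docker-compose` binary, so this message by itself is likely just noise. A quick way to check both variants (the apt package name assumes a Debian/Ubuntu host using Docker's official apt repository):

```shell
# Check which Compose variants are available on this host.
docker compose version    # Compose v2 (CLI plugin) -- missing here
docker-compose --version  # standalone v1 binary -- present here

# Optionally install the v2 plugin (Debian/Ubuntu with Docker's apt repo):
sudo apt-get update && sudo apt-get install -y docker-compose-plugin
```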
▶ Parsing command line ...

▶ Setting up error handling ...

▶ Checking minimum requirements ...
docker: 'compose' is not a docker command.
See 'docker --help'

▶ Creating volumes for persistent storage ...
Created sentry-clickhouse.
Created sentry-data.
Created sentry-kafka.
Created sentry-postgres.
Created sentry-redis.
Created sentry-symbolicator.
Created sentry-zookeeper.

▶ Ensuring files from examples ...
../sentry/sentry.conf.py already exists, skipped creation.
../sentry/config.yml already exists, skipped creation.
../symbolicator/config.yml already exists, skipped creation.
../sentry/requirements.txt already exists, skipped creation.

▶ Generating secret key ...

▶ Replacing TSDB ...

▶ Fetching and updating Docker images ...
Some service image(s) must be built from source by running:
    docker-compose build snuba-transactions-cleanup snuba-cleanup symbolicator-cleanup sentry-cleanup
nightly: Pulling from getsentry/sentry
b4d181a07f80: Already exists
de8ecf497b75: Already exists
5dac6597e743: Already exists
84b2aa8486b9: Already exists
efc9186eea59: Already exists
a34fe8ed021a: Already exists
c1ce90cc8e0b: Already exists
a4dab07f0fea: Pulling fs layer
151af99f0d5a: Pulling fs layer
936910ec1afd: Pulling fs layer
3ffe6e722bb8: Pulling fs layer
cc7338984edb: Pulling fs layer
5445450f4a7e: Pulling fs layer
c0882734ff5e: Pulling fs layer
ce5f71c2372b: Pulling fs layer
cc7338984edb: Waiting
5445450f4a7e: Waiting
c0882734ff5e: Waiting
ce5f71c2372b: Waiting
3ffe6e722bb8: Waiting
a4dab07f0fea: Download complete
a4dab07f0fea: Pull complete
3ffe6e722bb8: Verifying Checksum
3ffe6e722bb8: Download complete
cc7338984edb: Verifying Checksum
cc7338984edb: Download complete
5445450f4a7e: Verifying Checksum
5445450f4a7e: Download complete
936910ec1afd: Verifying Checksum
936910ec1afd: Download complete
c0882734ff5e: Verifying Checksum
c0882734ff5e: Download complete
ce5f71c2372b: Download complete
151af99f0d5a: Download complete
151af99f0d5a: Pull complete
936910ec1afd: Pull complete
3ffe6e722bb8: Pull complete
cc7338984edb: Pull complete
5445450f4a7e: Pull complete
c0882734ff5e: Pull complete
ce5f71c2372b: Pull complete
Digest: sha256:aae49ed0f13d0f84e29c1c7169634c1dd55e845a63319a532318b25286fd4724
Status: Downloaded newer image for getsentry/sentry:nightly
docker.io/getsentry/sentry:nightly

▶ Building and tagging Docker images ...

smtp uses an image, skipping
memcached uses an image, skipping
redis uses an image, skipping
postgres uses an image, skipping
zookeeper uses an image, skipping
kafka uses an image, skipping
clickhouse uses an image, skipping
geoipupdate uses an image, skipping
snuba-api uses an image, skipping
snuba-consumer uses an image, skipping
snuba-outcomes-consumer uses an image, skipping
snuba-sessions-consumer uses an image, skipping
snuba-transactions-consumer uses an image, skipping
snuba-replacer uses an image, skipping
snuba-subscription-consumer-events uses an image, skipping
snuba-subscription-consumer-transactions uses an image, skipping
symbolicator uses an image, skipping
web uses an image, skipping
cron uses an image, skipping
worker uses an image, skipping
ingest-consumer uses an image, skipping
post-process-forwarder uses an image, skipping
subscription-consumer-events uses an image, skipping
subscription-consumer-transactions uses an image, skipping
relay uses an image, skipping
nginx uses an image, skipping
Building snuba-cleanup
Sending build context to Docker daemon  3.584kB
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> 421782132550
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Using cache
 ---> 22cee1e3580d
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 1a2ea491e209
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Using cache
 ---> f8b0cd75ddd0
Successfully built f8b0cd75ddd0
Successfully tagged snuba-cleanup-onpremise-local:latest

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building snuba-transactions-cleanup
Sending build context to Docker daemon  3.584kB
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> 421782132550
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Using cache
 ---> 22cee1e3580d
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 1a2ea491e209
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Using cache
 ---> f8b0cd75ddd0
Successfully built f8b0cd75ddd0
Successfully tagged snuba-cleanup-onpremise-local:latest

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building symbolicator-cleanup
Sending build context to Docker daemon  3.584kB
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> d818fe95aa5e
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Using cache
 ---> 077b8ef9e9b8
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 6e8109ef1d96
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Using cache
 ---> efc7894ff2d3
Successfully built efc7894ff2d3
Successfully tagged symbolicator-cleanup-onpremise-local:latest

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building sentry-cleanup
Sending build context to Docker daemon  3.584kB
Step 1/5 : ARG BASE_IMAGE
Step 2/5 : FROM ${BASE_IMAGE}
 ---> 54741f4e0089
Step 3/5 : RUN apt-get update && apt-get install -y --no-install-recommends cron &&     rm -r /var/lib/apt/lists/*
 ---> Running in a8a2f5b6a0d9
Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [309 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 Packages [7906 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [15.2 kB]
Fetched 8469 kB in 4s (1883 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  lsb-base sensible-utils
Suggested packages:
  anacron logrotate checksecurity
Recommended packages:
  default-mta | mail-transport-agent
The following NEW packages will be installed:
  cron lsb-base sensible-utils
0 upgraded, 3 newly installed, 0 to remove and 6 not upgraded.
Need to get 143 kB of archives.
After this operation, 383 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 cron amd64 3.0pl1-134+deb10u1 [99.0 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 143 kB in 1s (139 kB/s)
Selecting previously unselected package sensible-utils.
(Reading database ... 11935 files and directories currently installed.)
Preparing to unpack .../sensible-utils_0.0.12_all.deb ...
Unpacking sensible-utils (0.0.12) ...
Selecting previously unselected package lsb-base.
Preparing to unpack .../lsb-base_10.2019051400_all.deb ...
Unpacking lsb-base (10.2019051400) ...
Selecting previously unselected package cron.
Preparing to unpack .../cron_3.0pl1-134+deb10u1_amd64.deb ...
Unpacking cron (3.0pl1-134+deb10u1) ...
Setting up lsb-base (10.2019051400) ...
Setting up sensible-utils (0.0.12) ...
Setting up cron (3.0pl1-134+deb10u1) ...
Adding group `crontab' (GID 101) ...
Done.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Removing intermediate container a8a2f5b6a0d9
 ---> 9106a9f65b31
Step 4/5 : COPY entrypoint.sh /entrypoint.sh
 ---> 405c2017ba39
Step 5/5 : ENTRYPOINT ["/entrypoint.sh"]
 ---> Running in 0f2b567cc222
Removing intermediate container 0f2b567cc222
 ---> 153f7ac5b32b
Successfully built 153f7ac5b32b
Successfully tagged sentry-cleanup-onpremise-local:latest

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Docker images built.

▶ Turning things off ...
Removing network onpremise_default
Network onpremise_default not found.
Removing sentry_onpremise_snuba-subscription-consumer-events_1       ...
Removing sentry_onpremise_snuba-consumer_1                           ...
Removing sentry_onpremise_snuba-sessions-consumer_1                  ...
Removing sentry_onpremise_snuba-transactions-consumer_1              ...
Removing sentry_onpremise_snuba-subscription-consumer-transactions_1 ...
Removing sentry_onpremise_snuba-replacer_1                           ...
Removing sentry_onpremise_snuba-outcomes-consumer_1                  ...
Removing sentry_onpremise_snuba-api_1                                ...
Removing sentry_onpremise_smtp_1                                     ...
Removing sentry_onpremise_memcached_1                                ...
Removing sentry_onpremise_postgres_1                                 ...
Removing sentry_onpremise_symbolicator_1                             ...
Removing sentry_onpremise_kafka_1                                    ...
Removing sentry_onpremise_clickhouse_1                               ...
Removing sentry_onpremise_zookeeper_1                                ...
Removing sentry_onpremise_redis_1                                    ...
Removing sentry_onpremise_snuba-sessions-consumer_1                  ... done
Removing sentry_onpremise_snuba-transactions-consumer_1              ... done
Removing sentry_onpremise_snuba-replacer_1                           ... done
Removing sentry_onpremise_snuba-subscription-consumer-events_1       ... done
Removing sentry_onpremise_snuba-consumer_1                           ... done
Removing sentry_onpremise_snuba-api_1                                ... done
Removing sentry_onpremise_snuba-subscription-consumer-transactions_1 ... done
Removing sentry_onpremise_snuba-outcomes-consumer_1                  ... done
Removing sentry_onpremise_clickhouse_1                               ... done
Removing sentry_onpremise_kafka_1                                    ... done
Removing sentry_onpremise_smtp_1                                     ... done
Removing sentry_onpremise_symbolicator_1                             ... done
Removing sentry_onpremise_zookeeper_1                                ... done
Removing sentry_onpremise_memcached_1                                ... done
Removing sentry_onpremise_redis_1                                    ... done
Removing sentry_onpremise_postgres_1                                 ... done
Removing network sentry_onpremise_default

▶ Setting up Zookeeper ...
Creating network "sentry_onpremise_default" with the default driver
Creating sentry_onpremise_zookeeper_run ...
Creating sentry_onpremise_zookeeper_run ... done
Creating sentry_onpremise_zookeeper_run ...
Creating sentry_onpremise_zookeeper_run ... done
Creating sentry_onpremise_zookeeper_run ...
Creating sentry_onpremise_zookeeper_run ... done

▶ Downloading and installing wal2json ...

▶ Bootstrapping and migrating Snuba ...
Creating sentry_onpremise_redis_1 ...
Creating sentry_onpremise_zookeeper_1 ...
Creating sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_clickhouse_1 ... done
Creating sentry_onpremise_redis_1      ... done
Creating sentry_onpremise_zookeeper_1  ... done
Creating sentry_onpremise_kafka_1      ...
Creating sentry_onpremise_kafka_1      ... done
Creating sentry_onpremise_snuba-api_run ...
Creating sentry_onpremise_snuba-api_run ... done
2021-10-20 19:24:41,036 Attempting to connect to Kafka (attempt 0)...
2021-10-20 19:24:41,145 Connected to Kafka on attempt 0
2021-10-20 19:24:41,146 Creating Kafka topics...
Creating sentry_onpremise_snuba-api_run ...
Creating sentry_onpremise_snuba-api_run ... done
Finished running migrations

▶ Creating additional Kafka topics ...

▶ Ensuring proper PostgreSQL version ...

▶ Setting up / migrating database ...
Creating sentry_onpremise_postgres_1 ...
Creating sentry_onpremise_smtp_1     ...
Creating sentry_onpremise_symbolicator_1 ...
Creating sentry_onpremise_memcached_1    ...
Creating sentry_onpremise_snuba-replacer_1 ...
Creating sentry_onpremise_snuba-outcomes-consumer_1 ...
Creating sentry_onpremise_snuba-api_1               ...
Creating sentry_onpremise_snuba-consumer_1          ...
Creating sentry_onpremise_snuba-subscription-consumer-transactions_1 ...
Creating sentry_onpremise_snuba-sessions-consumer_1                  ...
Creating sentry_onpremise_snuba-transactions-consumer_1              ...
Creating sentry_onpremise_snuba-subscription-consumer-events_1       ...
Creating sentry_onpremise_smtp_1                                     ... done
Creating sentry_onpremise_snuba-api_1                                ... done
Creating sentry_onpremise_memcached_1                                ... done
Creating sentry_onpremise_symbolicator_1                             ... done
Creating sentry_onpremise_snuba-sessions-consumer_1                  ... done
Creating sentry_onpremise_snuba-replacer_1                           ... done
Creating sentry_onpremise_postgres_1                                 ... done
Creating sentry_onpremise_snuba-outcomes-consumer_1                  ... done
Creating sentry_onpremise_snuba-subscription-consumer-transactions_1 ... done
Creating sentry_onpremise_snuba-transactions-consumer_1              ... done
Creating sentry_onpremise_snuba-consumer_1                           ... done
Creating sentry_onpremise_snuba-subscription-consumer-events_1       ... done
Creating sentry_onpremise_web_run                                    ...
Creating sentry_onpremise_web_run                                    ... done
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Installing additional dependencies...
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

19:25:38 [WARNING] sentry.utils.geo: Error opening GeoIP database: /geoip/GeoLite2-City.mmdb
19:25:38 [WARNING] sentry.utils.geo: Error opening GeoIP database in Rust: /geoip/GeoLite2-City.mmdb
19:32:20 [INFO] sentry.plugins.github: apps-not-configured





Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 552, in connect
    sock = self._connect()
  File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 609, in _connect
    raise err
  File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 597, in _connect
    sock.connect(socket_address)
TimeoutError: [Errno 110] Connection timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/sentry/buffer/redis.py", line 64, in validate
    client.ping()
  File "/usr/local/lib/python3.6/site-packages/redis/client.py", line 1351, in ping
    return self.execute_command('PING')
  File "/usr/local/lib/python3.6/site-packages/sentry_sdk/integrations/redis.py", line 101, in sentry_patched_execute_command
    return old_execute_command(self, name, *args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/rb/clients.py", line 488, in execute_command
    buf = self._get_command_buffer(host_id, args[0])
  File "/usr/local/lib/python3.6/site-packages/rb/clients.py", line 355, in _get_command_buffer
    buf = CommandBuffer(host_id, connect, self.auto_batch)
  File "/usr/local/lib/python3.6/site-packages/rb/clients.py", line 91, in __init__
    self.connect()
  File "/usr/local/lib/python3.6/site-packages/rb/clients.py", line 107, in connect
    self.connection = self._connect_func()
  File "/usr/local/lib/python3.6/site-packages/rb/clients.py", line 353, in connect
    return self.connection_pool.get_connection(command_name, shard_hint=host_id)
  File "/usr/local/lib/python3.6/site-packages/rb/clients.py", line 254, in get_connection
    con = real_pool.get_connection(command_name)
  File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 1185, in get_connection
    connection.connect()
  File "/usr/local/lib/python3.6/site-packages/redis/connection.py", line 557, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 110 connecting to redis:6379. Connection timed out.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/sentry", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/site-packages/sentry/runner/__init__.py", line 188, in main
    func(**kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/sentry/runner/decorators.py", line 28, in inner
    configure()
  File "/usr/local/lib/python3.6/site-packages/sentry/runner/__init__.py", line 124, in configure
    configure(ctx, py, yaml, skip_service_validation)
  File "/usr/local/lib/python3.6/site-packages/sentry/runner/settings.py", line 155, in configure
    skip_service_validation=skip_service_validation,
  File "/usr/local/lib/python3.6/site-packages/sentry/runner/initializer.py", line 371, in initialize_app
    setup_services(validate=not skip_service_validation)
  File "/usr/local/lib/python3.6/site-packages/sentry/runner/initializer.py", line 415, in setup_services
    service.validate()
  File "/usr/local/lib/python3.6/site-packages/sentry/utils/services.py", line 105, in <lambda>
    context[key] = (lambda f: lambda *a, **k: getattr(self, f)(*a, **k))(key)
  File "/usr/local/lib/python3.6/site-packages/sentry/buffer/redis.py", line 66, in validate
    raise InvalidConfiguration(str(e))
sentry.exceptions.InvalidConfiguration: Error 110 connecting to redis:6379. Connection timed out.
1
An error occurred, caught SIGERR on line 12
Cleaning up...
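The install aborts because the web container cannot reach `redis:6379` (`Error 110 ... Connection timed out`). Before re-running, it may be worth confirming that Redis answers at all, both inside its own container and across the compose network. A sketch, assuming the network name `sentry_onpremise_default` from the output above and the `redis:6.2.4` image version shown in the Redis log:

```shell
# From inside the redis container itself (redis-cli ships in the
# official redis image); a healthy server replies PONG.
docker-compose exec redis redis-cli ping

# Across the compose network, from a throwaway container, to test
# DNS resolution of "redis" and network reachability on port 6379.
docker run --rm --network sentry_onpremise_default redis:6.2.4 \
    redis-cli -h redis -p 6379 ping
```

If the first command succeeds but the second times out, the problem is the Docker network rather than Redis itself.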

Looks like your Redis instance is having trouble. Can you share its logs?

Here you go:

Attaching to sentry_onpremise_redis_1
redis_1                                     | 1:C 20 Oct 2021 19:53:48.530 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1                                     | 1:C 20 Oct 2021 19:53:48.530 # Redis version=6.2.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1                                     | 1:C 20 Oct 2021 19:53:48.530 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1                                     | 1:M 20 Oct 2021 19:53:48.531 * monotonic clock: POSIX clock_gettime
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * Running mode=standalone, port=6379.
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 # Server initialized
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * Loading RDB produced by version 6.2.4
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * RDB age 430 seconds
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * RDB memory usage when created 0.77 Mb
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * DB loaded from disk: 0.000 seconds
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * Ready to accept connections
redis_1                                     | 1:signal-handler (1634759765) Received SIGTERM scheduling shutdown...
redis_1                                     | 1:M 20 Oct 2021 19:56:05.260 # User requested shutdown...
redis_1                                     | 1:M 20 Oct 2021 19:56:05.260 * Saving the final RDB snapshot before exiting.
redis_1                                     | 1:M 20 Oct 2021 19:56:05.262 * DB saved on disk
redis_1                                     | 1:M 20 Oct 2021 19:56:05.262 # Redis is now ready to exit, bye bye...
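One concrete item in that log is the `overcommit_memory` warning. Applying the fix Redis itself suggests won't explain the SIGTERM shutdown, but it removes one variable. This needs root on the Docker host, not inside the container:

```shell
# Apply immediately, exactly as the Redis log line suggests:
sudo sysctl vm.overcommit_memory=1

# Persist the setting across reboots:
echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf
```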

Here are the other logs from the install.sh run:

Attaching to sentry_onpremise_kafka_1, sentry_onpremise_clickhouse_1, sentry_onpremise_redis_1, sentry_onpremise_zookeeper_1
kafka_1                                     | ===> ENV Variables ...
kafka_1                                     | ALLOW_UNSIGNED=false
kafka_1                                     | COMPONENT=kafka
kafka_1                                     | CONFLUENT_DEB_VERSION=1
kafka_1                                     | CONFLUENT_PLATFORM_LABEL=
kafka_1                                     | CONFLUENT_SUPPORT_METRICS_ENABLE=false
kafka_1                                     | CONFLUENT_VERSION=5.5.0
kafka_1                                     | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
kafka_1                                     | HOME=/root
kafka_1                                     | HOSTNAME=5b30abd73537
kafka_1                                     | KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
kafka_1                                     | KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
kafka_1                                     | KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
kafka_1                                     | KAFKA_LOG_RETENTION_HOURS=24
kafka_1                                     | KAFKA_MAX_REQUEST_SIZE=50000000
kafka_1                                     | KAFKA_MESSAGE_MAX_BYTES=50000000
kafka_1                                     | KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1
kafka_1                                     | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
kafka_1                                     | KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
kafka_1                                     | KAFKA_VERSION=
kafka_1                                     | KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
kafka_1                                     | LANG=C.UTF-8
kafka_1                                     | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
kafka_1                                     | PWD=/
kafka_1                                     | PYTHON_PIP_VERSION=8.1.2
kafka_1                                     | PYTHON_VERSION=2.7.9-1
kafka_1                                     | SCALA_VERSION=2.12
kafka_1                                     | SHLVL=1
kafka_1                                     | ZULU_OPENJDK_VERSION=8=8.38.0.13
kafka_1                                     | _=/usr/bin/env
kafka_1                                     | ===> User
kafka_1                                     | uid=0(root) gid=0(root) groups=0(root)
kafka_1                                     | ===> Configuring ...
kafka_1                                     | ===> Running preflight checks ...
kafka_1                                     | ===> Check if /var/lib/kafka/data is writable ...
kafka_1                                     | ===> Check if Zookeeper is healthy ...
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=5b30abd73537
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=5.4.0-88-generic
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=146MB
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=2215MB
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=149MB
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
kafka_1                                     | [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
kafka_1                                     | [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka_1                                     | [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
kafka_1                                     | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.22.0.4:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1                                     | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.22.0.5:50296, server: zookeeper/172.22.0.4:2181
kafka_1                                     | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.22.0.4:2181, sessionid = 0x10021a7f3a20000, negotiated timeout = 40000
kafka_1                                     | [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x10021a7f3a20000 closed
kafka_1                                     | [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x10021a7f3a20000
kafka_1                                     | ===> Launching ...
kafka_1                                     | ===> Launching kafka ...
kafka_1                                     | [2021-10-20 19:54:26,456] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka_1                                     | [2021-10-20 19:54:27,089] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
kafka_1                                     | [2021-10-20 19:54:27,089] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
kafka_1                                     | [2021-10-20 19:54:28,350] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka_1                                     | [2021-10-20 19:54:28,446] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka_1                                     | [2021-10-20 19:54:28,763] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1                                     | [2021-10-20 19:54:28,795] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
kafka_1                                     | [2021-10-20 19:54:28,797] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
kafka_1                                     | [2021-10-20 19:54:28,885] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_1                                     | [2021-10-20 19:54:28,916] INFO Stat of the created znode at /brokers/ids/1001 is: 278,278,1634759668905,1634759668905,1,0,0,72094599268663297,180,0,278
kafka_1                                     |  (kafka.zk.KafkaZkClient)
kafka_1                                     | [2021-10-20 19:54:28,917] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 278 (kafka.zk.KafkaZkClient)
kafka_1                                     | [2021-10-20 19:54:29,137] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_1                                     | [2021-10-20 19:54:29,256] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
kafka_1                                     | [2021-10-20 19:55:12,009] WARN Client session timed out, have not heard from server in 12003ms for sessionid 0x10021a7f3a20001 (org.apache.zookeeper.ClientCnxn)
kafka_1                                     | [2021-10-20 19:55:31,633] WARN Client session timed out, have not heard from server in 19523ms for sessionid 0x10021a7f3a20001 (org.apache.zookeeper.ClientCnxn)
kafka_1                                     | [2021-10-20 19:55:51,636] WARN Client session timed out, have not heard from server in 19902ms for sessionid 0x10021a7f3a20001 (org.apache.zookeeper.ClientCnxn)
kafka_1                                     | [2021-10-20 19:56:09,742] WARN Client session timed out, have not heard from server in 18004ms for sessionid 0x10021a7f3a20001 (org.apache.zookeeper.ClientCnxn)
kafka_1                                     | [2021-10-20 19:56:29,390] WARN Client session timed out, have not heard from server in 19547ms for sessionid 0x10021a7f3a20001 (org.apache.zookeeper.ClientCnxn)
kafka_1                                     | [2021-10-20 19:56:49,175] WARN Client session timed out, have not heard from server in 19684ms for sessionid 0x10021a7f3a20001 (org.apache.zookeeper.ClientCnxn)
redis_1                                     | 1:C 20 Oct 2021 19:53:48.530 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1                                     | 1:C 20 Oct 2021 19:53:48.530 # Redis version=6.2.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1                                     | 1:C 20 Oct 2021 19:53:48.530 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1                                     | 1:M 20 Oct 2021 19:53:48.531 * monotonic clock: POSIX clock_gettime
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * Running mode=standalone, port=6379.
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 # Server initialized
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * Loading RDB produced by version 6.2.4
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * RDB age 430 seconds
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * RDB memory usage when created 0.77 Mb
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * DB loaded from disk: 0.000 seconds
redis_1                                     | 1:M 20 Oct 2021 19:53:48.532 * Ready to accept connections
redis_1                                     | 1:signal-handler (1634759765) Received SIGTERM scheduling shutdown...
redis_1                                     | 1:M 20 Oct 2021 19:56:05.260 # User requested shutdown...
redis_1                                     | 1:M 20 Oct 2021 19:56:05.260 * Saving the final RDB snapshot before exiting.
redis_1                                     | 1:M 20 Oct 2021 19:56:05.262 * DB saved on disk
redis_1                                     | 1:M 20 Oct 2021 19:56:05.262 # Redis is now ready to exit, bye bye...
zookeeper_1                                 | ===> ENV Variables ...
zookeeper_1                                 | ALLOW_UNSIGNED=false
zookeeper_1                                 | COMPONENT=zookeeper
zookeeper_1                                 | CONFLUENT_DEB_VERSION=1
zookeeper_1                                 | CONFLUENT_PLATFORM_LABEL=
zookeeper_1                                 | CONFLUENT_SUPPORT_METRICS_ENABLE=false
zookeeper_1                                 | CONFLUENT_VERSION=5.5.0
zookeeper_1                                 | CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
zookeeper_1                                 | HOME=/root
zookeeper_1                                 | HOSTNAME=166463af7b68
zookeeper_1                                 | KAFKA_OPTS=-Dzookeeper.4lw.commands.whitelist=ruok
zookeeper_1                                 | KAFKA_VERSION=
zookeeper_1                                 | LANG=C.UTF-8
zookeeper_1                                 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
zookeeper_1                                 | PWD=/
zookeeper_1                                 | PYTHON_PIP_VERSION=8.1.2
zookeeper_1                                 | PYTHON_VERSION=2.7.9-1
zookeeper_1                                 | SCALA_VERSION=2.12
zookeeper_1                                 | SHLVL=1
zookeeper_1                                 | ZOOKEEPER_CLIENT_PORT=2181
zookeeper_1                                 | ZOOKEEPER_LOG4J_ROOT_LOGLEVEL=WARN
zookeeper_1                                 | ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL=WARN
zookeeper_1                                 | ZULU_OPENJDK_VERSION=8=8.38.0.13
zookeeper_1                                 | _=/usr/bin/env
zookeeper_1                                 | ===> User
zookeeper_1                                 | uid=0(root) gid=0(root) groups=0(root)
zookeeper_1                                 | ===> Configuring ...
zookeeper_1                                 | ===> Running preflight checks ...
zookeeper_1                                 | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper_1                                 | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper_1                                 | ===> Launching ...
zookeeper_1                                 | ===> Launching zookeeper ...
zookeeper_1                                 | [2021-10-20 19:53:53,987] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1                                 | [2021-10-20 19:53:54,160] WARN o.e.j.s.ServletContextHandler@4d95d2a2{/,null,UNAVAILABLE} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1                                 | [2021-10-20 19:53:54,161] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
clickhouse_1                                | Processing configuration file '/etc/clickhouse-server/config.xml'.
clickhouse_1                                | Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
clickhouse_1                                | Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'.
clickhouse_1                                | Include not found: clickhouse_remote_servers
clickhouse_1                                | Include not found: clickhouse_compression
clickhouse_1                                | Logging information to /var/log/clickhouse-server/clickhouse-server.log
clickhouse_1                                | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse_1                                | Logging information to console
clickhouse_1                                | 2021.10.20 19:53:49.110303 [ 1 ] {} <Information> : Starting ClickHouse 20.3.9.70 with revision 54433
clickhouse_1                                | 2021.10.20 19:53:49.113928 [ 1 ] {} <Information> Application: starting up
clickhouse_1                                | Include not found: networks
clickhouse_1                                | 2021.10.20 19:53:49.233123 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 4.87 GiB because the system has low amount of memory
clickhouse_1                                | 2021.10.20 19:53:49.265991 [ 1 ] {} <Information> Application: Mark cache size was lowered to 4.87 GiB because the system has low amount of memory
clickhouse_1                                | 2021.10.20 19:53:49.266264 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
clickhouse_1                                | 2021.10.20 19:53:49.280163 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries.
clickhouse_1                                | 2021.10.20 19:53:49.288135 [ 49 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
clickhouse_1                                | 2021.10.20 19:53:49.335401 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
clickhouse_1                                | 2021.10.20 19:53:49.418844 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 13 tables and 0 dictionaries.
clickhouse_1                                | 2021.10.20 19:53:49.446344 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
clickhouse_1                                | 2021.10.20 19:53:49.448383 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
clickhouse_1                                | 2021.10.20 19:53:49.448955 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
clickhouse_1                                | 2021.10.20 19:53:49.448992 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
clickhouse_1                                | 2021.10.20 19:53:49.466305 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse_1                                | 2021.10.20 19:53:49.476809 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse_1                                | 2021.10.20 19:53:49.477329 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse_1                                | 2021.10.20 19:53:49.477771 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse_1                                | 2021.10.20 19:53:49.478235 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
clickhouse_1                                | 2021.10.20 19:53:49.478346 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
clickhouse_1                                | 2021.10.20 19:53:49.478424 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
clickhouse_1                                | 2021.10.20 19:53:49.633429 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
clickhouse_1                                | 2021.10.20 19:53:49.633999 [ 1 ] {} <Information> Application: Available RAM: 9.73 GiB; physical cores: 8; logical cores: 8.
clickhouse_1                                | 2021.10.20 19:53:49.634019 [ 1 ] {} <Information> Application: Ready for connections.
clickhouse_1                                | Include not found: clickhouse_remote_servers
clickhouse_1                                | Include not found: clickhouse_compression
clickhouse_1                                | 2021.10.20 19:54:59.561693 [ 89 ] {d8ce926a-149b-4a86-b159-286d0c4386b8} <Information> executeQuery: Read 1 rows, 1.00 B in 0.001 sec., 791 rows/sec., 791.36 B/sec.
clickhouse_1                                | 2021.10.20 19:54:59.561869 [ 89 ] {} <Information> TCPHandler: Processed in 0.002 sec.
clickhouse_1                                | 2021.10.20 19:54:59.596472 [ 89 ] {cc2a47bf-579c-4244-ae49-cd3bcb5a8d27} <Information> executeQuery: Read 42 rows, 2.64 KiB in 0.034 sec., 1251 rows/sec., 78.57 KiB/sec.
clickhouse_1                                | 2021.10.20 19:54:59.596744 [ 89 ] {} <Information> TCPHandler: Processed in 0.034 sec.
clickhouse_1                                | 2021.10.20 19:54:59.632525 [ 89 ] {} <Information> TCPHandler: Done processing connection.
clickhouse_1                                | 2021.10.20 19:56:05.239203 [ 47 ] {} <Information> Application: Received termination signal (Terminated)
clickhouse_1                                | 2021.10.20 19:56:05.834917 [ 1 ] {} <Information> Application: Closed all listening sockets.
clickhouse_1                                | 2021.10.20 19:56:05.834985 [ 1 ] {} <Information> Application: Closed connections.
clickhouse_1                                | 2021.10.20 19:56:05.836047 [ 1 ] {} <Information> Application: Shutting down storages.
clickhouse_1                                | 2021.10.20 19:56:06.555334 [ 1 ] {} <Information> Application: shutting down
clickhouse_1                                | 2021.10.20 19:56:06.555411 [ 47 ] {} <Information> BaseDaemon: Stop SignalListener thread

These logs look fine, so what's left is a networking issue. Do you have any firewall rules or proxy settings that might interfere with docker-compose's internal networking?

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (2 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.18.0.0/24        tcp dpt:http

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Could it be that your docker-compose network was created on a different subnet than the 172.18.0.0/24 range your DOCKER chain accepts, causing packet drops?
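If so, the drop is mechanical: with the FORWARD chain's policy at DROP, container traffic that matches no ACCEPT rule never gets forwarded. A quick sketch with Python's `ipaddress` module, using the subnet from the DOCKER chain above and the one from the `docker network inspect` output below (swap in your own values):

```python
import ipaddress

# Subnet the DOCKER chain ACCEPTs vs. the subnet the compose network actually got.
# With the FORWARD policy at DROP, packets matching no ACCEPT rule are dropped.
allowed = ipaddress.ip_network("172.18.0.0/24")  # from the iptables DOCKER chain
actual = ipaddress.ip_network("172.26.0.0/16")   # from `docker network inspect`

print(actual.subnet_of(allowed))  # False
print(actual.overlaps(allowed))   # False -> no ACCEPT rule ever matches this traffic
```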

What could cause docker-compose to use a different subnet then? … Because it's strange that sudo install.sh requires you to upgrade to 21.10.0, released 4 days ago, and then also update compose?

[
    {
        "Name": "sentry_onpremise_default",
        "Id": "5070bdb5e4146384bdacf7ade3bcdfb1c3479adca76372377f8320105c8e8f51",
        "Created": "2021-10-21T09:13:36.975175665Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.26.0.0/16",
                    "Gateway": "172.26.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "sentry_onpremise",
            "com.docker.compose.version": "1.29.2"
        }
    }
]
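As for why the network landed on 172.26.0.0/16: on common setups (an assumption about this host, not something confirmed by the output above), the engine allocates each new bridge network the next free /16 from its default local address pools, starting around 172.17.0.0/16, so leftover networks from other projects or repeated down/up cycles push a fresh compose network further up the range. A sketch of that allocation order:

```python
import ipaddress

# Rough model of Docker's default local address pools: the 172.16.0.0/12 block
# carved into /16s, with docker0 on 172.17.0.0/16 and each new user-defined
# bridge network taking the next free block.
pools = list(ipaddress.ip_network("172.16.0.0/12").subnets(new_prefix=16))

print(pools[1])   # 172.17.0.0/16 -> docker0
print(pools[2])   # 172.18.0.0/16 -> first user-defined network
print(pools[10])  # 172.26.0.0/16 -> where this compose network ended up
```

If the firewall rules can't be broadened, the subnet can be pinned instead so it stays inside the range the rules accept: either an `ipam` config under `networks:` in the compose file, or `default-address-pools` in `/etc/docker/daemon.json`.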