Doesn't show any issues

Hello there,
I recently installed getsentry/onpremise via install.sh.
Everything went fine and no errors appeared in the Docker logs, but no issues show up in Sentry, even though I have sent events from both my dev and prod environments.
I can even see the events counted as accepted in the Stats dashboard!

What did I miss?

Can you share your service logs with us? You can get them with docker-compose logs.

Also, which version are you running?
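For example, something like this should work from the directory where you ran install.sh (sentry_logs.txt is just an example file name, and web is the service name used by the onpremise docker-compose.yml, which shows up as web_1 in the logs):

docker-compose logs > sentry_logs.txt          # dump the logs from all services into one file
docker-compose exec web sentry --version       # print the Sentry version from the running web container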

smtp_1                          + sed -ri '
smtp_1                          	s/^#?(dc_local_interfaces)=.*/\1='\''0.0.0.0 ; ::0'\''/;
smtp_1                          	s/^#?(dc_other_hostnames)=.*/\1='\'''\''/;
smtp_1                          	s/^#?(dc_relay_nets)=.*/\1='\''0.0.0.0\/0'\''/;
smtp_1                          	s/^#?(dc_eximconfig_configtype)=.*/\1='\''internet'\''/;
smtp_1                          ' /etc/exim4/update-exim4.conf.conf
smtp_1                          + update-exim4.conf -v
smtp_1                          using non-split configuration scheme from /etc/exim4/exim4.conf.template
smtp_1                            269 LOG: MAIN
smtp_1                            269   exim 4.92 daemon started: pid=269, no queue runs, listening for SMTP on port 25 (IPv6 and IPv4)
clickhouse_1                    Include not found: clickhouse_remote_servers
clickhouse_1                    Include not found: clickhouse_compression
clickhouse_1                    Logging trace to /var/log/clickhouse-server/clickhouse-server.log
clickhouse_1                    Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse_1                    Include not found: networks
clickhouse_1                    Include not found: clickhouse_remote_servers
clickhouse_1                    Include not found: clickhouse_compression
redis_1                         1:C 20 Jul 2020 08:06:09.910 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1                         1:C 20 Jul 2020 08:06:09.910 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1                         1:C 20 Jul 2020 08:06:09.910 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1                         1:M 20 Jul 2020 08:06:09.911 * Running mode=standalone, port=6379.
redis_1                         1:M 20 Jul 2020 08:06:09.911 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1                         1:M 20 Jul 2020 08:06:09.911 # Server initialized
redis_1                         1:M 20 Jul 2020 08:06:09.911 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1                         1:M 20 Jul 2020 08:06:09.911 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1                         1:M 20 Jul 2020 08:06:09.919 * DB loaded from disk: 0.007 seconds
redis_1                         1:M 20 Jul 2020 08:06:09.919 * Ready to accept connections
zookeeper_1                     ===> ENV Variables ...
zookeeper_1                     ALLOW_UNSIGNED=false
zookeeper_1                     COMPONENT=zookeeper
zookeeper_1                     CONFLUENT_DEB_VERSION=1
zookeeper_1                     CONFLUENT_PLATFORM_LABEL=
zookeeper_1                     CONFLUENT_SUPPORT_METRICS_ENABLE=false
zookeeper_1                     CONFLUENT_VERSION=5.5.0
zookeeper_1                     CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
zookeeper_1                     HOME=/root
zookeeper_1                     HOSTNAME=d5c8242eb7a6
zookeeper_1                     KAFKA_VERSION=
zookeeper_1                     LANG=C.UTF-8
zookeeper_1                     PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
zookeeper_1                     PWD=/
zookeeper_1                     PYTHON_PIP_VERSION=8.1.2
zookeeper_1                     PYTHON_VERSION=2.7.9-1
zookeeper_1                     SCALA_VERSION=2.12
zookeeper_1                     SHLVL=1
zookeeper_1                     ZOOKEEPER_CLIENT_PORT=2181
zookeeper_1                     ZOOKEEPER_LOG4J_ROOT_LOGLEVEL=WARN
zookeeper_1                     ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL=WARN
zookeeper_1                     ZULU_OPENJDK_VERSION=8=8.38.0.13
zookeeper_1                     _=/usr/bin/env
zookeeper_1                     ===> User
zookeeper_1                     uid=0(root) gid=0(root) groups=0(root)
zookeeper_1                     ===> Configuring ...
symbolicator-cleanup_1          SHELL=/bin/bash
symbolicator-cleanup_1          BASH_ENV=/container.env
symbolicator-cleanup_1          55 23 * * * gosu symbolicator symbolicator cleanup > /proc/1/fd/1 2>/proc/1/fd/2
kafka_1                         ===> ENV Variables ...
kafka_1                         ALLOW_UNSIGNED=false
kafka_1                         COMPONENT=kafka
kafka_1                         CONFLUENT_DEB_VERSION=1
kafka_1                         CONFLUENT_PLATFORM_LABEL=
kafka_1                         CONFLUENT_SUPPORT_METRICS_ENABLE=false
kafka_1                         CONFLUENT_VERSION=5.5.0
kafka_1                         CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
kafka_1                         HOME=/root
kafka_1                         HOSTNAME=958375364b4d
kafka_1                         KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
kafka_1                         KAFKA_LOG4J_LOGGERS=kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN
kafka_1                         KAFKA_LOG4J_ROOT_LOGLEVEL=WARN
kafka_1                         KAFKA_MAX_REQUEST_SIZE=50000000
kafka_1                         KAFKA_MESSAGE_MAX_BYTES=50000000
kafka_1                         KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
kafka_1                         KAFKA_TOOLS_LOG4J_LOGLEVEL=WARN
kafka_1                         KAFKA_VERSION=
kafka_1                         KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
kafka_1                         LANG=C.UTF-8
kafka_1                         PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
kafka_1                         PWD=/
kafka_1                         PYTHON_PIP_VERSION=8.1.2
kafka_1                         PYTHON_VERSION=2.7.9-1
kafka_1                         SCALA_VERSION=2.12
kafka_1                         SHLVL=1
kafka_1                         ZULU_OPENJDK_VERSION=8=8.38.0.13
kafka_1                         _=/usr/bin/env
relay_1                         2020-07-20T08:06:12Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.21.0.10:9092 failed: Connection refused (after 30ms in state CONNECT)
relay_1                         2020-07-20T08:06:12Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
postgres_1                      
postgres_1                      PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1                      
postgres_1                      LOG:  database system was shut down at 2020-07-20 07:40:22 UTC
postgres_1                      LOG:  MultiXact member wraparound protections are now enabled
snuba-cleanup_1                 SHELL=/bin/bash
snuba-cleanup_1                 BASH_ENV=/container.env
snuba-cleanup_1                 */5 * * * * gosu snuba snuba cleanup --dry-run False > /proc/1/fd/1 2>/proc/1/fd/2
snuba-outcomes-consumer_1       + '[' c = - ']'
snuba-outcomes-consumer_1       + snuba consumer --help
snuba-outcomes-consumer_1       + set -- snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
snuba-outcomes-consumer_1       + set gosu snuba snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
snuba-outcomes-consumer_1       + exec gosu snuba snuba consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
postgres_1                      LOG:  autovacuum launcher started
postgres_1                      LOG:  database system is ready to accept connections
relay_1                         2020-07-20T08:06:12Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.21.0.10:9092 failed: Connection refused (after 0ms in state CONNECT)
relay_1                         2020-07-20T08:06:12Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
kafka_1                         ===> User
relay_1                         2020-07-20T08:06:12Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Failed to connect to host: Failed resolving hostname: no record found for name: web type: AAAA class: IN
relay_1                           caused by: Failed resolving hostname: no record found for name: web type: AAAA class: IN
relay_1                           caused by: Failed resolving hostname: no record found for name: web type: AAAA class: IN
kafka_1                         uid=0(root) gid=0(root) groups=0(root)
kafka_1                         ===> Configuring ...
relay_1                         2020-07-20T08:06:12Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Failed to connect to host: Failed resolving hostname: no record found for name: web type: AAAA class: IN
relay_1                           caused by: Failed resolving hostname: no record found for name: web type: AAAA class: IN
relay_1                           caused by: Failed resolving hostname: no record found for name: web type: AAAA class: IN
relay_1                         2020-07-20T08:06:14Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Timeout while waiting for response
snuba-consumer_1                + '[' c = - ']'
snuba-consumer_1                + snuba consumer --help
snuba-consumer_1                + set -- snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1                + set gosu snuba snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1                + exec gosu snuba snuba consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
snuba-replacer_1                + '[' r = - ']'
snuba-replacer_1                + snuba replacer --help
snuba-replacer_1                + set -- snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1                + set gosu snuba snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1                + exec gosu snuba snuba replacer --storage events --auto-offset-reset=latest --max-batch-size 3
snuba-api_1                     + '[' a = - ']'
snuba-api_1                     + snuba api --help
snuba-api_1                     + set -- snuba api
snuba-api_1                     + set gosu snuba snuba api
snuba-api_1                     + exec gosu snuba snuba api
snuba-api_1                     *** Starting uWSGI 2.0.18 (64bit) on [Mon Jul 20 08:06:15 2020] ***
snuba-api_1                     compiled with version: 8.3.0 on 18 July 2020 03:46:50
snuba-api_1                     os: Linux-3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020
snuba-api_1                     nodename: d8c438edd454
snuba-api_1                     machine: x86_64
snuba-api_1                     clock source: unix
snuba-api_1                     pcre jit disabled
snuba-api_1                     detected number of CPU cores: 8
snuba-api_1                     current working directory: /usr/src/snuba
snuba-api_1                     detected binary path: /usr/local/bin/uwsgi
snuba-api_1                     your memory page size is 4096 bytes
snuba-api_1                     detected max file descriptor number: 1048576
snuba-api_1                     lock engine: pthread robust mutexes
snuba-api_1                     thunder lock: enabled
snuba-api_1                     uwsgi socket 0 bound to TCP address 0.0.0.0:1218 fd 3
snuba-api_1                     Python version: 3.7.8 (default, Jun 30 2020, 18:36:05)  [GCC 8.3.0]
snuba-api_1                     Set PythonHome to /usr/local
snuba-api_1                     Python main interpreter initialized at 0x55dc31462660
snuba-api_1                     python threads support enabled
snuba-api_1                     your server socket listen backlog is limited to 100 connections
snuba-api_1                     your mercy for graceful operations on workers is 60 seconds
snuba-api_1                     mapped 145808 bytes (142 KB) for 1 cores
snuba-api_1                     *** Operational MODE: single process ***
snuba-api_1                     initialized 38 metrics
snuba-api_1                     spawned uWSGI master process (pid: 1)
snuba-api_1                     spawned uWSGI worker 1 (pid: 17, cores: 1)
snuba-api_1                     metrics collector thread started
snuba-sessions-consumer_1       + '[' c = - ']'
snuba-sessions-consumer_1       + snuba consumer --help
snuba-sessions-consumer_1       + set -- snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-sessions-consumer_1       + set gosu snuba snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-sessions-consumer_1       + exec gosu snuba snuba consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
snuba-transactions-consumer_1   + '[' c = - ']'
snuba-transactions-consumer_1   + snuba consumer --help
snuba-transactions-consumer_1   + set -- snuba consumer --storage transactions --auto-offset-reset=latest --max-batch-time-ms 750
snuba-transactions-consumer_1   + set gosu snuba snuba consumer --storage transactions --auto-offset-reset=latest --max-batch-time-ms 750
snuba-transactions-consumer_1   + exec gosu snuba snuba consumer --storage transactions --auto-offset-reset=latest --max-batch-time-ms 750
sentry-cleanup_1                SHELL=/bin/bash
sentry-cleanup_1                BASH_ENV=/container.env
sentry-cleanup_1                0 0 * * * gosu sentry sentry cleanup --days 90 > /proc/1/fd/1 2>/proc/1/fd/2
relay_1                         2020-07-20T08:06:16Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
snuba-api_1                     WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x55dc31462660 pid: 17 (default app)
zookeeper_1                     ===> Running preflight checks ... 
zookeeper_1                     ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper_1                     ===> Check if /var/lib/zookeeper/log is writable ...
relay_1                         2020-07-20T08:06:18Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
kafka_1                         ===> Running preflight checks ... 
kafka_1                         ===> Check if /var/lib/kafka/data is writable ...
zookeeper_1                     ===> Launching ... 
zookeeper_1                     ===> Launching zookeeper ... 
kafka_1                         ===> Check if Zookeeper is healthy ...
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=958375364b4d
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.10.0-1127.13.1.el7.x86_64
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=55MB
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=843MB
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=57MB
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
kafka_1                         [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
kafka_1                         [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka_1                         [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.21.0.5:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/172.21.0.5:2181: Connection refused
relay_1                         2020-07-20T08:06:21Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.21.0.5:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/172.21.0.5:2181: Connection refused
zookeeper_1                     [2020-07-20 08:06:23,089] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.21.0.5:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/172.21.0.5:2181: Connection refused
zookeeper_1                     [2020-07-20 08:06:23,347] WARN o.e.j.s.ServletContextHandler@4d95d2a2{/,null,UNAVAILABLE} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1                     [2020-07-20 08:06:23,348] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.21.0.5:2181. Will not attempt to authenticate using SASL (unknown error)
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /172.21.0.10:58202, server: zookeeper/172.21.0.5:2181
kafka_1                         [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server zookeeper/172.21.0.5:2181, sessionid = 0x100044cd7540000, negotiated timeout = 40000
cron_1                          08:06:24 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
web_1                           08:06:24 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
ingest-consumer_1               08:06:24 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
post-process-forwarder_1        08:06:24 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
worker_1                        08:06:24 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
kafka_1                         [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x100044cd7540000 closed
kafka_1                         [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x100044cd7540000
kafka_1                         ===> Launching ... 
kafka_1                         ===> Launching kafka ... 
kafka_1                         [2020-07-20 08:06:26,532] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
relay_1                         2020-07-20T08:06:26Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
kafka_1                         [2020-07-20 08:06:27,475] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
kafka_1                         [2020-07-20 08:06:27,476] WARN The support metrics collection feature ("Metrics") of Proactive Support is disabled. (io.confluent.support.metrics.SupportedServerStartable)
kafka_1                         [2020-07-20 08:06:31,198] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka_1                         [2020-07-20 08:06:31,283] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka_1                         [2020-07-20 08:06:31,643] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1                         [2020-07-20 08:06:31,683] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(0.0.0.0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
kafka_1                         [2020-07-20 08:06:31,684] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
kafka_1                         [2020-07-20 08:06:31,780] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka_1                         [2020-07-20 08:06:31,820] INFO Stat of the created znode at /brokers/ids/1001 is: 336,336,1595232391800,1595232391800,1,0,0,72062322114560001,180,0,336
kafka_1                          (kafka.zk.KafkaZkClient)
kafka_1                         [2020-07-20 08:06:31,821] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 336 (kafka.zk.KafkaZkClient)
kafka_1                         [2020-07-20 08:06:32,085] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka_1                         [2020-07-20 08:06:32,184] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
relay_1                         2020-07-20T08:06:34Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
snuba-consumer_1                2020-07-20 08:06:38,661 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 60}
snuba-sessions-consumer_1       2020-07-20 08:06:38,677 New partitions assigned: {Partition(topic=Topic(name='ingest-sessions'), index=0): 0}
snuba-consumer_1                2020-07-20 08:06:41,663 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
snuba-sessions-consumer_1       2020-07-20 08:06:41,678 Partitions revoked: [Partition(topic=Topic(name='ingest-sessions'), index=0)]
snuba-consumer_1                2020-07-20 08:06:41,691 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 60}
snuba-outcomes-consumer_1       2020-07-20 08:06:41,692 New partitions assigned: {Partition(topic=Topic(name='outcomes'), index=0): 60}
snuba-sessions-consumer_1       2020-07-20 08:06:41,720 New partitions assigned: {Partition(topic=Topic(name='ingest-sessions'), index=0): 0}
post-process-forwarder_1        08:06:43 [INFO] sentry.plugins.github: apps-not-configured
ingest-consumer_1               08:06:43 [INFO] sentry.plugins.github: apps-not-configured
web_1                           08:06:43 [INFO] sentry.plugins.github: apps-not-configured
cron_1                          08:06:43 [INFO] sentry.plugins.github: apps-not-configured
worker_1                        08:06:43 [INFO] sentry.plugins.github: apps-not-configured
snuba-consumer_1                2020-07-20 08:06:44,694 Partitions revoked: [Partition(topic=Topic(name='events'), index=0)]
snuba-outcomes-consumer_1       2020-07-20 08:06:44,694 Partitions revoked: [Partition(topic=Topic(name='outcomes'), index=0)]
snuba-sessions-consumer_1       2020-07-20 08:06:44,722 Partitions revoked: [Partition(topic=Topic(name='ingest-sessions'), index=0)]
snuba-transactions-consumer_1   2020-07-20 08:06:44,735 New partitions assigned: {}
snuba-consumer_1                2020-07-20 08:06:44,736 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 60}
snuba-outcomes-consumer_1       2020-07-20 08:06:44,736 New partitions assigned: {Partition(topic=Topic(name='outcomes'), index=0): 60}
snuba-sessions-consumer_1       2020-07-20 08:06:44,800 New partitions assigned: {Partition(topic=Topic(name='ingest-sessions'), index=0): 0}
relay_1                         2020-07-20T08:06:45Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                           caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
relay_1                           caused by: Connection refused (os error 111)
snuba-replacer_1                2020-07-20 08:06:46,661 New partitions assigned: {Partition(topic=Topic(name='event-replacements'), index=0): 0}
worker_1                        08:06:46 [INFO] sentry.bgtasks: bgtask.spawn (task_name=u'sentry.bgtasks.clean_dsymcache:clean_dsymcache')
worker_1                        08:06:46 [INFO] sentry.bgtasks: bgtask.spawn (task_name=u'sentry.bgtasks.clean_releasefilecache:clean_releasefilecache')
web_1                           *** Starting uWSGI 2.0.19.1 (64bit) on [Mon Jul 20 08:06:46 2020] ***
web_1                           compiled with version: 8.3.0 on 15 July 2020 00:38:16
web_1                           os: Linux-3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020
web_1                           nodename: 2ebfa378d83d
web_1                           machine: x86_64
web_1                           clock source: unix
web_1                           detected number of CPU cores: 8
web_1                           current working directory: /
web_1                           detected binary path: /usr/local/bin/uwsgi
web_1                           !!! no internal routing support, rebuild with pcre support !!!
web_1                           your memory page size is 4096 bytes
web_1                           detected max file descriptor number: 1048576
web_1                           lock engine: pthread robust mutexes
web_1                           thunder lock: enabled
web_1                           uWSGI http bound on 0.0.0.0:9000 fd 6
web_1                           uwsgi socket 0 bound to TCP address 127.0.0.1:36311 (port auto-assigned) fd 3
web_1                           Python version: 2.7.16 (default, Oct 17 2019, 07:39:30)  [GCC 8.3.0]
web_1                           Set PythonHome to /usr/local
web_1                           Python main interpreter initialized at 0x55f86135bfd0
web_1                           python threads support enabled
web_1                           your server socket listen backlog is limited to 100 connections
web_1                           your mercy for graceful operations on workers is 60 seconds
web_1                           setting request body buffering size to 65536 bytes
web_1                           mapped 1924224 bytes (1879 KB) for 12 cores
web_1                           *** Operational MODE: preforking+threaded ***
web_1                           spawned uWSGI master process (pid: 18)
web_1                           spawned uWSGI worker 1 (pid: 23, cores: 4)
web_1                           spawned uWSGI worker 2 (pid: 24, cores: 4)
web_1                           spawned uWSGI worker 3 (pid: 25, cores: 4)
web_1                           spawned uWSGI http 1 (pid: 26)
web_1                           08:06:47 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
web_1                           08:06:47 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
web_1                           08:06:47 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
web_1                           08:06:51 [INFO] sentry.plugins.github: apps-not-configured
web_1                           08:06:51 [INFO] sentry.plugins.github: apps-not-configured
web_1                           08:06:51 [INFO] sentry.plugins.github: apps-not-configured
web_1                           WSGI app 0 (mountpoint='') ready in 5 seconds on interpreter 0x55f86135bfd0 pid: 25 (default app)
web_1                           WSGI app 0 (mountpoint='') ready in 5 seconds on interpreter 0x55f86135bfd0 pid: 23 (default app)
web_1                           WSGI app 0 (mountpoint='') ready in 5 seconds on interpreter 0x55f86135bfd0 pid: 24 (default app)
worker_1                         
worker_1                         -------------- celery@5439d678e82e v3.1.26.post2 (Cipater)
worker_1                        ---- **** ----- 
worker_1                        --- * ***  * -- Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-debian-10.1
worker_1                        -- * - **** --- 
worker_1                        - ** ---------- [config]
worker_1                        - ** ---------- .> app:         sentry:0x7ff8e73b1250
worker_1                        - ** ---------- .> transport:   redis://redis:6379/0
worker_1                        - ** ---------- .> results:     disabled://
worker_1                        - *** --- * --- .> concurrency: 8 (prefork)
worker_1                        -- ******* ---- 
worker_1                        --- ***** ----- [queues]
worker_1                         -------------- .> activity.notify  exchange=default(direct) key=activity.notify
worker_1                                        .> alerts           exchange=default(direct) key=alerts
worker_1                                        .> app_platform     exchange=default(direct) key=app_platform
worker_1                                        .> assemble         exchange=default(direct) key=assemble
worker_1                                        .> auth             exchange=default(direct) key=auth
worker_1                                        .> buffers.process_pending exchange=default(direct) key=buffers.process_pending
worker_1                                        .> cleanup          exchange=default(direct) key=cleanup
worker_1                                        .> commits          exchange=default(direct) key=commits
worker_1                                        .> counters-0       exchange=counters(direct) key=
worker_1                                        .> data_export      exchange=default(direct) key=data_export
worker_1                                        .> default          exchange=default(direct) key=default
worker_1                                        .> digests.delivery exchange=default(direct) key=digests.delivery
worker_1                                        .> digests.scheduling exchange=default(direct) key=digests.scheduling
worker_1                                        .> email            exchange=default(direct) key=email
worker_1                                        .> events.preprocess_event exchange=default(direct) key=events.preprocess_event
worker_1                                        .> events.process_event exchange=default(direct) key=events.process_event
worker_1                                        .> events.reprocess_events exchange=default(direct) key=events.reprocess_events
worker_1                                        .> events.reprocessing.preprocess_event exchange=default(direct) key=events.reprocessing.preprocess_event
worker_1                                        .> events.reprocessing.process_event exchange=default(direct) key=events.reprocessing.process_event
worker_1                                        .> events.reprocessing.symbolicate_event exchange=default(direct) key=events.reprocessing.symbolicate_event
worker_1                                        .> events.save_event exchange=default(direct) key=events.save_event
worker_1                                        .> events.symbolicate_event exchange=default(direct) key=events.symbolicate_event
worker_1                                        .> files.delete     exchange=default(direct) key=files.delete
worker_1                                        .> incident_snapshots exchange=default(direct) key=incident_snapshots
worker_1                                        .> incidents        exchange=default(direct) key=incidents
worker_1                                        .> integrations     exchange=default(direct) key=integrations
worker_1                                        .> merge            exchange=default(direct) key=merge
worker_1                                        .> options          exchange=default(direct) key=options
worker_1                                        .> relay_config     exchange=default(direct) key=relay_config
worker_1                                        .> reports.deliver  exchange=default(direct) key=reports.deliver
worker_1                                        .> reports.prepare  exchange=default(direct) key=reports.prepare
worker_1                                        .> search           exchange=default(direct) key=search
worker_1                                        .> sleep            exchange=default(direct) key=sleep
worker_1                                        .> stats            exchange=default(direct) key=stats
worker_1                                        .> subscriptions    exchange=default(direct) key=subscriptions
worker_1                                        .> triggers-0       exchange=triggers(direct) key=
worker_1                                        .> unmerge          exchange=default(direct) key=unmerge
worker_1                                        .> update           exchange=default(direct) key=update
worker_1                        
worker_1                        08:06:55 [INFO] sentry.tasks.update_user_reports: update_user_reports.records_updated (reports_with_event=0 updated_reports=0 reports_to_update=0)
ingest-consumer_1               08:07:03 [INFO] batching-kafka-consumer: New partitions assigned: [TopicPartition{topic=ingest-attachments,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-events,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-transactions,partition=0,offset=-1001,error=None}]
snuba-cleanup_1                 2020-07-20 08:10:02,981 Dropped 0 partitions on None
redis_1                         1:M 20 Jul 2020 08:11:10.014 * 100 changes in 300 seconds. Saving...
redis_1                         1:M 20 Jul 2020 08:11:10.020 * Background saving started by pid 12
redis_1                         12:C 20 Jul 2020 08:11:10.064 * DB saved on disk
redis_1                         12:C 20 Jul 2020 08:11:10.064 * RDB: 0 MB of memory used by copy-on-write
redis_1                         1:M 20 Jul 2020 08:11:10.121 * Background saving terminated with success
snuba-cleanup_1                 2020-07-20 08:15:02,384 Dropped 0 partitions on None
redis_1                         1:M 20 Jul 2020 08:16:11.056 * 100 changes in 300 seconds. Saving...
redis_1                         1:M 20 Jul 2020 08:16:11.057 * Background saving started by pid 13
redis_1                         13:C 20 Jul 2020 08:16:11.129 * DB saved on disk
redis_1                         13:C 20 Jul 2020 08:16:11.130 * RDB: 0 MB of memory used by copy-on-write
redis_1                         1:M 20 Jul 2020 08:16:11.157 * Background saving terminated with success
snuba-cleanup_1                 2020-07-20 08:20:02,865 Dropped 0 partitions on None
redis_1                         1:M 20 Jul 2020 08:21:12.012 * 100 changes in 300 seconds. Saving...
redis_1                         1:M 20 Jul 2020 08:21:12.012 * Background saving started by pid 14
redis_1                         14:C 20 Jul 2020 08:21:12.079 * DB saved on disk
redis_1                         14:C 20 Jul 2020 08:21:12.080 * RDB: 0 MB of memory used by copy-on-write
redis_1                         1:M 20 Jul 2020 08:21:12.113 * Background saving terminated with success
worker_1                        08:21:52 [INFO] sentry.tasks.update_user_reports: update_user_reports.records_updated (reports_with_event=0 updated_reports=0 reports_to_update=0)

sentry --version output -> Version : 20.8.0.dev0

Resolved! I don't know how, but it's resolved now.
