My company has a specific rule that servers' public IP addresses are not reachable from inside the local network.
Sentry 20.8.0.dev0992c6f6
My on-premise Sentry on Ubuntu 18 has the local IP address yyy.yyy.yyy.yyy and is also reachable from the internet at the public IP address xxx.xxx.xxx.xxx (port 9000 on both the local and public addresses).
I can work with the Sentry web interface without any problem,
but the Android SDK cannot send anything, and this error is logged in the nginx Docker container:
nginx_1 | 2020/10/10 07:42:55 [error] 6#6: *71 connect() failed (113: No route to host) while connecting to upstream, client: ccc.ccc.ccc.ccc, server: , request: "POST /api/2/envelope/ HTTP/1.1", upstream: "http://172.19.0.25:3000/api/2/envelope/", host: "xxx.xxx.xxx.xxx:9000"
nginx_1 | ccc.ccc.ccc.ccc - - [10/Oct/2020:07:42:55 +0000] "POST /api/2/envelope/ HTTP/1.1" 502 150 "-" "sentry.java.android/3.0.0"
Public IP: xxx.xxx.xxx.xxx
Local IP: yyy.yyy.yyy.yyy
Client IP: ccc.ccc.ccc.ccc
system.url-prefix: 'http://xxx.xxx.xxx.xxx:9000'
I have tried to forward outgoing traffic for xxx.xxx.xxx.xxx:9000 to yyy.yyy.yyy.yyy:9000 using iptables, with no success.
Please help.
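For reference, the rules I tried were roughly like this (a sketch only; the real rules use my actual interfaces and addresses):

# Redirect traffic aimed at the public address to the local address instead
sudo iptables -t nat -A PREROUTING -d xxx.xxx.xxx.xxx -p tcp --dport 9000 -j DNAT --to-destination yyy.yyy.yyy.yyy:9000
# (for traffic generated on the machine itself, the same rule would go in the OUTPUT chain)
sudo iptables -t nat -A POSTROUTING -d yyy.yyy.yyy.yyy -p tcp --dport 9000 -j MASQUERADE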
BYK
October 12, 2020, 9:55am
Seems like you have a network configuration issue with Docker Compose's internal network, created only for Sentry, or you have DNS records colliding with the services we define in the docker-compose.yml file. http://172.19.0.25:3000 should be the relay service and Nginx should be able to reach it.
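A quick way to check both would be something like this (a sketch; the network name assumes the default sentry_onpremise project, and getent assumes it is available in the image):

# See which containers are attached to the Compose network
docker network inspect sentry_onpremise_default
# See how the relay hostname resolves from inside another service container
docker-compose exec web getent hosts relay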
Ubuntu 18 is a clean install used only for Sentry, with no other services, but there are proxy configurations everywhere (apt, Docker daemon, docker-compose, shell).
BYK
October 12, 2020, 10:38am
Yeah, then I'd look there.
I have disabled all proxy configuration in these locations (see Cannot install sentry on-premise on ubuntu 18.04) and rebooted the server; examples of the removed entries are sketched after the list:
sudo nano /etc/environment
sudo nano /etc/apt/apt.conf.d/proxy.conf
sudo nano /etc/systemd/system/docker.service.d/proxy.conf
sudo nano ~/.docker/config.json
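For reference, the removed entries looked roughly like this (a sketch with a placeholder proxy address):

# /etc/environment (removed)
http_proxy="http://proxy.example.com:8080"
https_proxy="http://proxy.example.com:8080"

# /etc/systemd/system/docker.service.d/proxy.conf (removed)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"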
but the error is the same:
nginx_1 | 2020/10/12 11:15:19 [error] 6#6: *21 connect() failed (113: No route to host) while connecting to upstream, client: ccc.ccc.ccc.ccc, server: , request: "POST /api/2/envelope/ HTTP/1.1", upstream: "http://172.19.0.22:3000/api/2/envelope/", host: "xxx.xxx.xxx.xxx:9000"
nginx_1 | ccc.ccc.ccc.ccc - - [12/Oct/2020:11:15:19 +0000] "POST /api/2/envelope/ HTTP/1.1" 502 150 "-" "sentry.java.android/3.0.0"
nginx_1 | 2020/10/12 11:15:22 [error] 6#6: *23 connect() failed (113: No route to host) while connecting to upstream, client: ccc.ccc.ccc.ccc, server: , request: "POST /api/2/envelope/ HTTP/1.1", upstream: "http://172.19.0.22:3000/api/2/envelope/", host: "xxx.xxx.xxx.xxx:9000"
nginx_1 | ccc.ccc.ccc.ccc - - [12/Oct/2020:11:15:22 +0000] "POST /api/2/envelope/ HTTP/1.1" 502 150 "-" "sentry.java.android/3.0.0"
nginx_1 | 2020/10/12 11:15:25 [error] 6#6: *25 connect() failed (113: No route to host) while connecting to upstream, client: ccc.ccc.ccc.ccc, server: , request: "POST /api/2/envelope/ HTTP/1.1", upstream: "http://172.19.0.22:3000/api/2/envelope/", host: "xxx.xxx.xxx.xxx:9000"
nginx_1 | ccc.ccc.ccc.ccc - - [12/Oct/2020:11:15:25 +0000] "POST /api/2/envelope/ HTTP/1.1" 502 150 "-" "sentry.java.android/3.0.0"
nginx_1 | 2020/10/12 11:15:28 [error] 6#6: *27 connect() failed (113: No route to host) while connecting to upstream, client: ccc.ccc.ccc.ccc, server: , request: "POST /api/2/envelope/ HTTP/1.1", upstream: "http://172.19.0.22:3000/api/2/envelope/", host: "xxx.xxx.xxx.xxx:9000"
nginx_1 | ccc.ccc.ccc.ccc - - [12/Oct/2020:11:15:28 +0000] "POST /api/2/envelope/ HTTP/1.1" 502 150 "-" "sentry.java.android/3.0.0"
nginx_1 | 2020/10/12 11:15:31 [error] 6#6: *29 connect() failed (113: No route to host) while connecting to upstream, client: ccc.ccc.ccc.ccc, server: , request: "POST /api/2/envelope/ HTTP/1.1", upstream: "http://172.19.0.22:3000/api/2/envelope/", host: "xxx.xxx.xxx.xxx:9000"
nginx_1 | ccc.ccc.ccc.ccc - - [12/Oct/2020:11:15:31 +0000] "POST /api/2/envelope/ HTTP/1.1" 502 150 "-" "sentry.java.android/3.0.0"
BYK
October 12, 2020, 3:46pm
Again, this is clearly a network configuration issue specific to your system. You need to debug this yourself; we cannot provide help from here. This means the Nginx container cannot reach that IP address when it should be able to. "No route to host" means they probably are on separate networks for some reason (maybe something you did while trying to expose Nginx publicly took it out of the internal Docker network)?
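One quick way to confirm whether they are on the same network (a sketch; the container names assume the default sentry_onpremise project):

# List the networks each container is attached to
docker inspect -f '{{.Name}}: {{json .NetworkSettings.Networks}}' sentry_onpremise_nginx_1 sentry_onpremise_relay_1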
I believe there is a problem with the relay container:
As you can see, nginx tries to connect to the relay server at 172.19.0.25:3000, but the relay container has no IP address assigned!
nginx_1 | 2020/10/13 09:41:48 [error] 6#6: *5 connect() failed (113: No route to host) while connecting to upstream, client: 172.19.0.1, server: , request: "POST /api/2/envelope/ HTTP/1.1", upstream: "http://172.19.0.25:3000/api/2/envelope/", host: "yyy.yyy.yyy.yyy:9000"
nginx_1 | 172.19.0.1 - - [13/Oct/2020:09:41:48 +0000] "POST /api/2/envelope/ HTTP/1.1" 502 150 "-" "sentry.java.android/3.0.0"
^CERROR: Aborting.
emdad@sanaproxy:~/onpremise$ wget http://172.19.0.25:3000/api/2/envelope/
--2020-10-13 09:42:13-- http://172.19.0.25:3000/api/2/envelope/
Connecting to 172.19.0.25:3000... failed: No route to host.
emdad@sanaproxy:~/onpremise$ docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
WARNING: Error loading config file: /home/emdad/.docker/config.json: EOF
WARNING: Error loading config file: /home/emdad/.docker/config.json: EOF
/sentry_onpremise_nginx_1 - 172.19.0.10
/sentry_onpremise_sentry-cleanup_1 - 172.19.0.16
/sentry_onpremise_worker_1 - 172.19.0.23
/sentry_onpremise_web_1 - 172.19.0.24
/sentry_onpremise_post-process-forwarder_1 - 172.19.0.7
/sentry_onpremise_ingest-consumer_1 - 172.19.0.22
/sentry_onpremise_cron_1 - 172.19.0.17
/sentry_onpremise_snuba-cleanup_1 - 172.19.0.6
/sentry_onpremise_relay_1 -
/sentry_onpremise_symbolicator-cleanup_1 - 172.19.0.3
/sentry_onpremise_snuba-replacer_1 - 172.19.0.18
/sentry_onpremise_snuba-outcomes-consumer_1 - 172.19.0.13
/sentry_onpremise_snuba-transactions-consumer_1 - 172.19.0.14
/sentry_onpremise_snuba-consumer_1 - 172.19.0.21
/sentry_onpremise_snuba-sessions-consumer_1 - 172.19.0.15
/sentry_onpremise_snuba-api_1 - 172.19.0.12
/sentry_onpremise_smtp_1 - 172.19.0.2
/sentry_onpremise_symbolicator_1 - 172.19.0.8
/sentry_onpremise_memcached_1 - 172.19.0.5
/sentry_onpremise_postgres_1 - 172.19.0.19
/sentry_onpremise_kafka_1 - 172.19.0.20
/sentry_onpremise_zookeeper_1 - 172.19.0.9
/sentry_onpremise_redis_1 - 172.19.0.4
/sentry_onpremise_clickhouse_1 - 172.19.0.11
I have tested this many times: when I run
sudo docker-compose restart
the "sentry_onpremise_relay_1" container has an IP address for a moment and everything is OK,
but shortly after that it suddenly has no IP address again and I get the "No route to host" error.
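To see what the container is doing right after the restart, its state can be checked with something like this (a sketch using my container name):

docker-compose ps relay
docker inspect -f 'status: {{.State.Status}}, exit code: {{.State.ExitCode}}' sentry_onpremise_relay_1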
BYK
October 13, 2020, 12:25pm
This suggests you have a custom, invalid Docker configuration that is likely interfering. Moreover, the network created by Docker Compose should not be directly reachable from the host machine, so your wget command failing is expected.
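If you want to test reachability by hand, it is more meaningful to do it from inside a container on the same network, for example (a sketch, assuming Python is available in the web image):

docker-compose exec web python -c "import socket; socket.create_connection(('relay', 3000), timeout=5); print('relay reachable')"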
Relay not having an IP address might be because of the service not starting up and staying up correctly. I recommend looking at its logs to see what is going wrong.
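For example, something like this (the relay service name comes from docker-compose.yml):

docker-compose logs --tail=50 relay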
relay log:
ERROR: relay has no credentials, which are required in managed mode. Generate some with "relay credentials generate" first.
Is this related to the IP address?
BYK
October 14, 2020, 9:14am
That error means Relay cannot start, so it makes sense that it doesn't have an IP address (a non-existent process cannot have an IP address). You need to fix that issue. It seems like this step is not running, or is failing, when you install Sentry:
if [[ ! -f "$RELAY_CREDENTIALS_JSON" ]]; then
echo ""
echo "Generating Relay credentials..."
# We need the ugly hack below as `relay generate credentials` tries to read the config and the credentials
# even with the `--stdout` and `--overwrite` flags and then errors out when the credentials file exists but
# not valid JSON. We hit this case as we redirect output to the same config folder, creating an empty
# credentials file before relay runs.
$dcr --no-deps -v $(pwd)/$RELAY_CONFIG_YML:/tmp/config.yml relay --config /tmp credentials generate --stdout > "$RELAY_CREDENTIALS_JSON"
echo "Relay credentials written to $RELAY_CREDENTIALS_JSON"
fi
I cannot fix the issue. Where should I look and what should I fix?
How can I manually generate the credentials?
BYK
October 28, 2020, 9:53am
You can run that failing step by hand to generate the credentials manually:
docker-compose run --no-deps -v $(pwd)/relay/config.yml:/tmp/config.yml relay --config /tmp credentials generate --stdout
This will print the newly generated credentials to the console. You can then copy/paste them into relay/credentials.json.
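If you prefer not to copy/paste by hand, redirecting the output straight into the file, the way the install script does, and then restarting Relay should also work (paths assume you run this from the onpremise directory):

docker-compose run --no-deps -v $(pwd)/relay/config.yml:/tmp/config.yml relay --config /tmp credentials generate --stdout > relay/credentials.json
docker-compose restart relay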