Can't access Sentry UI after installation

Hi,

I am having trouble reaching the Sentry UI after installing it by following the setup guide at https://github.com/getsentry/onpremise. I’m running Sentry on an EC2 instance.

I have left all config files at their defaults. Running lsof shows:

docker-pr 27242     root    4u  IPv6 1846704      0t0  TCP *:9000 (LISTEN)

I ran docker-compose logs -f web and the output seems normal; however, when I run docker-compose logs -f relay, I get the following logs:

Attaching to sentry_onpremise_relay_1
relay_1                        |   caused by: Failed to connect to host: Failed resolving hostname: no record found for name: web.eu-west-2.compute.internal. type: AAAA class: IN
relay_1                        |   caused by: Failed resolving hostname: no record found for name: web.eu-west-2.compute.internal. type: AAAA class: IN
relay_1                        |   caused by: Failed resolving hostname: no record found for name: web.eu-west-2.compute.internal. type: AAAA class: IN
relay_1                        | 2020-09-09T11:49:55Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                        |   caused by: Failed to connect to host: Failed resolving hostname: no record found for name: web.eu-west-2.compute.internal. type: AAAA class: IN
relay_1                        |   caused by: Failed resolving hostname: no record found for name: web.eu-west-2.compute.internal. type: AAAA class: IN
relay_1                        |   caused by: Failed resolving hostname: no record found for name: web.eu-west-2.compute.internal. type: AAAA class: IN
relay_1                        | 2020-09-09T11:49:58Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/1001: Disconnected (after 6418ms in state UP)
relay_1                        | 2020-09-09T11:49:58Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
relay_1                        | 2020-09-09T11:49:58Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/1001: Disconnected (after 3091ms in state UP)
relay_1                        | 2020-09-09T11:49:58Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
relay_1                        | 2020-09-09T11:49:58Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/1001: Connect to ipv4#172.18.0.18:9092 failed: Connection refused (after 6ms in state CONNECT)
relay_1                        | 2020-09-09T11:49:59Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                        |   caused by: Timeout while waiting for response
relay_1                        | 2020-09-09T11:49:59Z [rdkafka::client] ERROR: librdkafka: Global error: Resolve (Local: Host resolution failure): kafka:9092/1001: Failed to resolve 'kafka:9092': Name or service not known (after 59ms in state CONNECT)
relay_1                        | 2020-09-09T11:50:00Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                        |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        | 2020-09-09T11:50:01Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/1001: Connect to ipv4#172.18.0.18:9092 failed: Connection refused (after 11ms in state CONNECT)
relay_1                        | 2020-09-09T11:50:02Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/1001: Connect to ipv4#172.18.0.18:9092 failed: Connection refused (after 3171ms in state CONNECT)
relay_1                        | 2020-09-09T11:50:02Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                        |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        | 2020-09-09T11:50:06Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                        |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        | 2020-09-09T11:50:11Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                        |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        | 2020-09-09T11:50:18Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                        |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        | 2020-09-09T11:50:30Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1                        |   caused by: Failed to connect to host: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)
relay_1                        |   caused by: Connection refused (os error 111)

Not quite sure what’s going on here but any help would be appreciated! Thanks.

This issue should still not prevent you from seeing the UI. What do your nginx logs say?

There are no nginx logs being printed. I just get the following:

Attaching to sentry_onpremise_nginx_1

All Docker containers are running:

CONTAINER ID        IMAGE                                                             COMMAND                  CREATED             STATUS                    PORTS                          NAMES
aeff74630a07        nginx:1.16                                                        "nginx -g 'daemon of…"   45 minutes ago      Up 31 minutes             0.0.0.0:9000->80/tcp           sentry_onpremise_nginx_1
c6a093cb1973        sentry-onpremise-local                                            "/bin/sh -c 'exec /d…"   45 minutes ago      Up 31 minutes             9000/tcp                       sentry_onpremise_post-process-forwarder_1
8756a45e00e9        sentry-onpremise-local                                            "/bin/sh -c 'exec /d…"   45 minutes ago      Up 31 minutes             9000/tcp                       sentry_onpremise_ingest-consumer_1
8b1fc4b1b91d        sentry-onpremise-local                                            "/bin/sh -c 'exec /d…"   45 minutes ago      Up 31 minutes             9000/tcp                       sentry_onpremise_worker_1
10c87c303a83        sentry-onpremise-local                                            "/bin/sh -c 'exec /d…"   45 minutes ago      Up 31 minutes             9000/tcp                       sentry_onpremise_web_1
a97c7bf40e40        sentry-onpremise-local                                            "/bin/sh -c 'exec /d…"   45 minutes ago      Up 31 minutes             9000/tcp                       sentry_onpremise_cron_1
f7a64f1941f3        sentry-cleanup-onpremise-local                                    "/entrypoint.sh '0 0…"   45 minutes ago      Up 31 minutes             9000/tcp                       sentry_onpremise_sentry-cleanup_1
ddc264fb0ccd        getsentry/relay:latest                                            "/bin/bash /docker-e…"   45 minutes ago      Up 31 minutes             3000/tcp                       sentry_onpremise_relay_1
c25c125bfa20        getsentry/snuba:latest                                            "./docker_entrypoint…"   45 minutes ago      Up 31 minutes             1218/tcp                       sentry_onpremise_snuba-sessions-consumer_1
9547ea1c4924        getsentry/snuba:latest                                            "./docker_entrypoint…"   45 minutes ago      Up 31 minutes             1218/tcp                       sentry_onpremise_snuba-api_1
2acb730e2fa5        getsentry/snuba:latest                                            "./docker_entrypoint…"   45 minutes ago      Up 30 minutes             1218/tcp                       sentry_onpremise_snuba-replacer_1
a38defb2f328        snuba-cleanup-onpremise-local                                     "/entrypoint.sh '*/5…"   45 minutes ago      Up 31 minutes             1218/tcp                       sentry_onpremise_snuba-cleanup_1
9f7de1994196        getsentry/snuba:latest                                            "./docker_entrypoint…"   45 minutes ago      Up 30 minutes             1218/tcp                       sentry_onpremise_snuba-consumer_1
e0c30ed1d806        getsentry/snuba:latest                                            "./docker_entrypoint…"   45 minutes ago      Up 31 minutes             1218/tcp                       sentry_onpremise_snuba-transactions-consumer_1
7b32c40a81c9        getsentry/snuba:latest                                            "./docker_entrypoint…"   45 minutes ago      Up 30 minutes             1218/tcp                       sentry_onpremise_snuba-outcomes-consumer_1
259cced9a2a4        confluentinc/cp-kafka:5.5.0                                       "/etc/confluent/dock…"   45 minutes ago      Up 31 minutes             9092/tcp                       sentry_onpremise_kafka_1
015b5a59accf        getsentry/symbolicator:eac35a6058c7749bdf20ed219a377e49e02d0b76   "/bin/bash /docker-e…"   45 minutes ago      Up 31 minutes             3021/tcp                       sentry_onpremise_symbolicator_1
515d12b6b803        confluentinc/cp-zookeeper:5.5.0                                   "/etc/confluent/dock…"   45 minutes ago      Up 31 minutes             2181/tcp, 2888/tcp, 3888/tcp   sentry_onpremise_zookeeper_1
634ca62c3d1e        symbolicator-cleanup-onpremise-local                              "/entrypoint.sh '55 …"   45 minutes ago      Up 31 minutes             3021/tcp                       sentry_onpremise_symbolicator-cleanup_1
5644766141c7        memcached:1.5-alpine                                              "docker-entrypoint.s…"   45 minutes ago      Up 31 minutes             11211/tcp                      sentry_onpremise_memcached_1
236020455e52        postgres:9.6                                                      "docker-entrypoint.s…"   45 minutes ago      Up 31 minutes             5432/tcp                       sentry_onpremise_postgres_1
0b12700991b5        yandex/clickhouse-server:20.3.9.70                                "/entrypoint.sh"         45 minutes ago      Up 31 minutes             8123/tcp, 9000/tcp, 9009/tcp   sentry_onpremise_clickhouse_1
2f24e60e0407        tianon/exim4                                                      "docker-entrypoint.s…"   45 minutes ago      Up 31 minutes             25/tcp                         sentry_onpremise_smtp_1
e2ba83f9b7d6        redis:5.0-alpine                                                  "docker-entrypoint.s…"   45 minutes ago      Up 31 minutes             6379/tcp                       sentry_onpremise_redis_1

Your log shows that port 9000 is occupied; please check what is occupying port 80.

Port 80 should be handled by nginx, which then forwards to port 9000. In any case, port 9000 should not be accessed by users directly.

I don’t think port 80 is being used.

netstat -tulpn | grep :80

tcp        0      0 ***.**.*.***:8081       0.0.0.0:*               LISTEN      3146/freeswitch
tcp        0      0 ***.**.*.***:8082       0.0.0.0:*               LISTEN      3146/freeswitch
tcp6       0      0 ::1:8081                :::*                    LISTEN      3146/freeswitch
tcp6       0      0 ::1:8082                :::*                    LISTEN      3146/freeswitch
tcp6       0      0 :::8021                 :::*                    LISTEN      3146/freeswitch

And this is for port 9000:

tcp6       0      0 :::9000                 :::*                    LISTEN      18857/docker-proxy

Please check the nginx section of your docker-compose.yml file: which port is it using?

I don’t think there’s a port clash or anything as otherwise we’d see a binding error.

@stef_van - do you get an http response back or do you just get a connection refused or something like that?
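One way to tell those cases apart (besides the browser error) is a quick TCP probe. This is just an illustrative Python sketch, not part of the Sentry setup; the host and port below are placeholders for wherever your instance should be listening:

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Return 'open', 'refused', or 'timeout' for a TCP connect attempt."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"      # something is listening (e.g. nginx / docker-proxy)
    except ConnectionRefusedError:
        return "refused"   # host reachable, but nothing bound to that port
    except socket.timeout:
        return "timeout"   # packets dropped, e.g. a firewall / security group
    finally:
        s.close()

print(probe("127.0.0.1", 9000))
```

"refused" usually means a binding/port problem on the host itself, while "timeout" points at something in between (security group, firewall, load balancer) silently dropping the traffic.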

The port mapping in the nginx section specifies how Sentry is accessed.

So if it is such as

  nginx:
    << : *restart_policy
    ports:
      - '9000:80/tcp'
    image: 'nginx:1.16'

then it should be accessible at http(s)://your_url:9000

If it is such as

  nginx:
    << : *restart_policy
    ports:
      - '80:80/tcp'
    image: 'nginx:1.16'

then Sentry is accessed at http://your_url/

So please check the port mapping and the way you are accessing it.
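The rule above can be sketched as a tiny helper (a hypothetical function, just to show how the host side of the mapping determines the URL):

```python
def access_url(host: str, mapping: str, scheme: str = "http") -> str:
    """Derive the browser URL from a docker-compose port mapping
    string such as '9000:80/tcp' (host_port:container_port/protocol)."""
    host_port = mapping.split(":", 1)[0]
    # Port 80 is the default for http, so the URL can omit it.
    if scheme == "http" and host_port == "80":
        return f"{scheme}://{host}/"
    return f"{scheme}://{host}:{host_port}/"

print(access_url("your_url", "9000:80/tcp"))  # http://your_url:9000/
print(access_url("your_url", "80:80/tcp"))    # http://your_url/
```

The container side of the mapping (80, where nginx listens inside the container) never appears in the URL; only the host side matters to the browser.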

Sorry guys, I was unable to remotely log in to my EC2 instance for the past week and am finally able to today.

Great news: I realised that the instance may not have been accepting inbound requests, so I added a new inbound rule to the instance's security group to accept all requests from my IP, and I am now able to access the Sentry UI through its public IP and port! Thanks @liangrong and @BYK for your help! :grin:

I am now trying to change the URL that Sentry is accessed at. I have changed it through the UI and also tried setting system.url-prefix in config.yml, but the new URL is not accessible. I even restarted the Docker services after the change. Any ideas on what I could try?
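For reference, the change I made in config.yml looks like this (the hostname is just an example):

```yaml
# config.yml
system.url-prefix: 'http://sentry.example.com'
```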


When you say it is not accessible, what do you mean? If you don’t get any response, I’d say it is a configuration issue with your network or load balancer as Sentry would respond even with a wrong URL. That setting is to have correct redirects and links in various places (like emails or DSNs).

I don’t get any response, just ERR_CONNECTION_REFUSED.

So if I was to change the system.url-prefix in config.yml to http://sentry.example.com, I would be able to access the Sentry UI through that URL without any other changes?

Then I’m guessing Sentry is not bound to the correct port (or your load balancer is not working).

As long as something is listening on port 80 on that domain/IP and forwarding requests to Sentry, or that something is Sentry itself, yes.
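As a sketch, that "something" could be a reverse proxy on the host in front of Sentry's own nginx container, which the default docker-compose.yml publishes on port 9000. This is a minimal, hedged example; the server_name is a placeholder and the upstream port assumes the default mapping:

```nginx
server {
    listen 80;
    server_name sentry.example.com;

    location / {
        # Forward to the host port published by Sentry's docker-compose nginx
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With a proxy like this listening on port 80, http://sentry.example.com resolves without a port in the URL, and system.url-prefix can be set to match.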