Many 502 HTTP errors when accessing Sentry web pages

Hi Folks,

Recently we installed Sentry On-Premise version 21.9.0. The installation was successful and we can access the web page. We have also enabled SSO and can log in to the web UI with it.

We found several errors when accessing certain menus/pages in the Sentry web UI.
Mostly they are 502 errors, and inspecting the browser console shows that the URLs returning 502 are under /api/, for example:

It seems our Sentry is not functioning properly. Some pages also prompt a crash report when accessed.

We tried to find a workaround or solution. Some suggested restarting nginx, but in our environment the error persists.

Has anyone experienced the same issue before, and how do you troubleshoot it?

Thank You

These requests should not be going to Sentry but to Relay; that's why you are having issues. It looks like you are not using our nginx config from the onpremise repo, as it does this routing out of the box.

Hi @BYK ,

Actually, our nginx.conf is the same as the nginx.conf in the repo:

[ssm-user@ip-10-1-6-126 nginx]$ cat nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;


events {
        worker_connections 1024;
}


http {
        default_type application/octet-stream;

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';

        access_log /var/log/nginx/access.log main;

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        reset_timedout_connection on;

        keepalive_timeout 75s;

        gzip off;
        server_tokens off;

        server_names_hash_bucket_size 64;
        types_hash_max_size 2048;
        types_hash_bucket_size 64;
        client_max_body_size 100m;

        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_buffering off;
        proxy_next_upstream error timeout invalid_header http_502 http_503 non_idempotent;
        proxy_next_upstream_tries 2;

        # Remove the Connection header if the client sends it,
        # it could be "close" to close a keepalive connection
        proxy_set_header Connection '';
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-Id $request_id;
        proxy_read_timeout 30s;
        proxy_send_timeout 5s;

        upstream relay {
                server relay:3000;
        }

        upstream sentry {
                server web:9000;
        }

        server {
                listen 80;

                location /api/store/ {
                        proxy_pass http://relay;
                }
                location ~ ^/api/[1-9]\d*/ {
                        proxy_pass http://relay;
                }
                location / {
                        proxy_pass http://sentry;
                }
        }
}
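For what it's worth, the three `location` rules above can be mirrored in a short script to check which upstream a given request path is routed to (a sketch; `pick_upstream` is a hypothetical helper, and nginx itself remains authoritative):

```python
import re

def pick_upstream(path: str) -> str:
    """Mirror the nginx location rules: the store endpoint and
    numeric project API endpoints go to Relay, everything else
    to the Sentry web service."""
    if path.startswith("/api/store/"):
        return "relay"
    # nginx: location ~ ^/api/[1-9]\d*/
    if re.match(r"^/api/[1-9]\d*/", path):
        return "relay"
    return "sentry"

print(pick_upstream("/api/1/events/x/attachments/"))      # relay
print(pick_upstream("/api/0/relays/register/challenge/")) # sentry
```

The failing `POST /api/1/events/…/attachments/` request from the error log is therefore routed to the `relay` upstream as intended, so the 502 comes from Relay refusing the connection, not from a routing mistake.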

From the Docker logs of sentry_onpremise_nginx_1, the error is quite similar to the one in the browser console:

2021/10/04 08:20:53 [error] 22#22: *1439 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.19.213, server: , request: "POST /api/1/events/27fb97ed0ea34ed08f3178f2de8a8e48/attachments/?sentry_key=key&sentry_version=7&sentry_client=rrweb HTTP/1.1", upstream: "http://172.18.0.27:3000/api/1/events/27fb97ed0ea34ed08f3178f2de8a8e48/attachments/?sentry_key=key&sentry_version=7&sentry_client=rrweb", host: "sentry.company.net", referrer: "https://sentry.company.net/organizations/happyfresh/projects/name/?project=2"
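`connect() failed (111: Connection refused)` means nginx reached the upstream's IP but nothing was listening on the port, i.e. the Relay process was down or still restarting. A minimal TCP probe (a sketch; the service names and ports are the onpremise defaults) can confirm which upstreams actually accept connections when run from a container on the same Docker network:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Run from a container on the same Docker network, e.g. via
# `docker-compose exec nginx python3 ...`, so the service names resolve.
for host, port in [("relay", 3000), ("web", 9000), ("kafka", 9092)]:
    print(f"{host}:{port} ->", "up" if is_listening(host, port) else "DOWN")
```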

So basically, during the initial installation we only changed some parameters in config.yml and sentry.conf.py to use an external Redis cluster and an external Postgres DB, plus some changes to the mail configuration. The rest should be the same.

Maybe there is something missing on our side?

Just to update: we also faced errors like this:

This error appears when we access some pages, for example Organizations > Projects.

This looks like a connection/DNS issue with Relay to me. Can you share your Relay logs for further inspection?

Hi @BYK,

Below are the latest Docker logs from Relay:

2021-10-06T04:02:31Z [r2d2] ERROR: failed to lookup address information: Name or service not known
2021-10-06T04:02:36Z [relay_log::utils] ERROR: could not initialize redis cluster client
  caused by: failed to pool redis connection
  caused by: timed out waiting for connection: failed to lookup address information: Name or service not known
2021-10-06T04:02:58Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111)
2021-10-06T04:02:58Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.27:9092 failed: Connection refused (after 1ms in state CONNECT)
2021-10-06T04:02:58Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111)
2021-10-06T04:02:58Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.27:9092 failed: Connection refused (after 1ms in state CONNECT)
2021-10-06T04:02:58Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
2021-10-06T04:02:58Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.27:9092 failed: Connection refused (after 7ms in state CONNECT)
2021-10-06T04:02:58Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.27:9092 failed: Connection refused (after 7ms in state CONNECT)
2021-10-06T04:02:58Z [rdkafka::client] ERROR: librdkafka: Global error: AllBrokersDown (Local: All broker connections are down): 1/1 brokers are down
2021-10-06T04:02:59Z [rdkafka::client] ERROR: librdkafka: FAIL [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#172.18.0.27:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
2021-10-06T04:02:59Z [rdkafka::client] ERROR: librdkafka: Global error: BrokerTransportFailure (Local: Broker transport failure): kafka:9092/bootstrap: Connect to ipv4#172.18.0.27:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
2021-10-06T04:02:59Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
  caused by: error sending request for url (http://web:9000/api/0/relays/register/challenge/): error trying to connect: tcp connect error: Connection refused (os error 111)
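The `failed to lookup address information: Name or service not known` lines above are DNS failures: the Redis hostname Relay was given cannot be resolved from inside its container. The lookup can be reproduced with a few lines (a sketch; `your-redis-host` is a placeholder for the hostname configured in `config.yml`):

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname resolves to at least one address."""
    try:
        return len(socket.getaddrinfo(hostname, None)) > 0
    except socket.gaierror:
        return False

# Run inside the relay container (e.g. `docker-compose exec relay ...`)
# so that Docker's embedded DNS and the container's network are what
# actually get tested; the names below are placeholders/defaults.
for name in ["your-redis-host", "web", "kafka"]:
    print(name, "->", "resolves" if resolves(name) else "does NOT resolve")
```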

We changed the Redis configuration to use an external Redis cluster, but for Kafka we still use the internal Kafka that runs from Sentry's docker-compose.yml. We have also changed the Redis-related configuration in config.yml and sentry.conf.py.

So there seem to be multiple issues here:

  1. Relay cannot resolve the address of Redis, hence it cannot connect to it.
  2. Relay cannot authenticate with the Sentry web service, getting a “Connection refused” error, which indicates that the service is not healthy or not listening on the port it should be.
  3. Relay cannot connect to Kafka, with the same “Connection refused” error.

Adding these up, this definitely looks like a network and routing issue in your setup.
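If it helps, the Redis section of `config.yml` usually looks like the sketch below (`redis.example.internal` and the port are placeholders): whatever `host` is set there must resolve and be reachable from inside the Docker network, since Relay and the web containers connect to it, not the host machine.

```yaml
# Example only -- "redis.example.internal" is a placeholder hostname.
redis.clusters:
  default:
    hosts:
      0:
        host: redis.example.internal  # must resolve from inside the containers
        port: 6379
```

A quick check from inside a container, e.g. `docker-compose exec nginx getent hosts redis.example.internal` (assuming the image ships `getent`), tells you whether Docker's embedded DNS can resolve the name.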