HTTPS problem: bad handshake, ROOT_CA

After enabling HTTPS I noticed the following error in the worker container:
MaxRetryError: HTTPSConnectionPool(host='mydomain.int.com', port=443): Max retries exceeded with url: /api/1/store/ (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))
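For context, the "certificate verify failed" part means Python's TLS layer could not chain the server certificate to any CA it trusts, and a default context refuses the handshake in that case. A minimal stdlib illustration (no Sentry specifics assumed; the cafile path in the comment is hypothetical):

```python
import ssl

# A default SSL context always verifies the peer certificate and the
# hostname; if the chain cannot be built up to a trusted root (e.g. a
# private ROOT_CA missing from the bundle), the handshake is aborted
# with "certificate verify failed", as in the worker log above.
ctx = ssl.create_default_context()
print(ctx.verify_mode)     # VerifyMode.CERT_REQUIRED
print(ctx.check_hostname)  # True

# Trusting an internal CA means loading it explicitly, for example:
# ctx.load_verify_locations(cafile="/path/to/root_ca.crt")  # hypothetical path
```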

My understanding is that the Sentry containers (sentry_onpremise_*) do not know my company's ROOT_CA. One of the solutions I found is to modify sentry/Dockerfile so that the ROOT_CA is appended to the CA bundle certifi is using: /usr/local/lib/python3.7/site-packages/certifi/cacert.pem.
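A minimal sketch of that Dockerfile change, assuming your setup: the base image tag, the root_ca.crt filename, and the certifi path are placeholders to adjust, not verified values.

```dockerfile
# Sketch only: extend the stock image so Python clients trust the internal ROOT_CA.
FROM getsentry/sentry:latest
COPY root_ca.crt /usr/local/share/ca-certificates/root_ca.crt
# Register the CA with the OS trust store, then append it to the certifi
# bundle that requests/urllib3 use inside the container.
RUN update-ca-certificates && \
    cat /usr/local/share/ca-certificates/root_ca.crt >> \
        /usr/local/lib/python3.7/site-packages/certifi/cacert.pem
```

Note the certifi path is version-specific; it will move whenever the image's Python or certifi version changes, which is why appending to the OS bundle alone is more robust where the client honors it.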

Is this the way to go? I still don't know why the Celery worker uses the public address. I was expecting Celery to resolve the container name via Docker DNS, as one line in the nginx config states:
#use the docker DNS server to resolve ips for relay and sentry containers
resolver 127.0.0.11 ipv6=off;

Here is config of nginx running in front of sentry:

server {
    # SSL configuration
    listen 443 ssl default_server;
    listen 80;
    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl_session_cache builtin:1000  shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    gzip off;

    # use the docker DNS server to resolve ips for relay and sentry containers
    resolver 127.0.0.11 ipv6=off;
    client_max_body_size 100M;

    proxy_redirect off;

    location /api/store/ {
        proxy_pass http://relay;
    }
    location ~ ^/api/[1-9]\d*/ {
        proxy_pass http://relay;
    }

    location / {
       proxy_pass http://sentry;
       proxy_read_timeout      90;
       proxy_set_header        Host $host;
       proxy_set_header        X-Real-IP $remote_addr;
       proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header        X-Forwarded-Host $host:$server_port;
       proxy_set_header        X-Forwarded-Proto $scheme;
    }
}

Can you share your config file too?

Here it is:
# While a lot of configuration in Sentry can be changed via the UI, for all
# new-style config (as of 8.0) you can also declare values here in this file
# to enforce defaults or to ensure they cannot be changed via the UI. For more
# information see the Sentry documentation.

###############
# Mail Server #
###############

# mail.backend: 'smtp'  # Use dummy if you want to disable email entirely
# mail.backend: 'smtp'
mail.host: 'mail.mydomain.int.com'
mail.port: 25
mail.username: ''
mail.password: ''
mail.use-tls: false
# The email address to send on behalf of
mail.from: 'sentry@mydomain.int.com'

# If you'd like to configure email replies, enable this.
# mail.enable-replies: true

# When email-replies are enabled, this value is used in the Reply-To header
# mail.reply-hostname: 'sentry.mydomain.int.com'

# If you're using mailgun for inbound mail, set your API key and configure a
# route to forward to /api/hooks/mailgun/inbound/
# Also don't forget to set `mail.enable-replies: true` above.
# mail.mailgun-api-key: ''

###################
# System Settings #
###################

# If this file ever becomes compromised, it's important to generate a new key.
# Changing this value will result in all current sessions being invalidated.
# A new key can be generated with `$ sentry config generate-secret-key`
system.secret-key: 'SecretKey'

# The ``redis.clusters`` setting is used, unsurprisingly, to configure Redis
# clusters. These clusters can be then referred to by name when configuring
# backends such as the cache, digests, or TSDB backend.
# redis.clusters:
#   default:
#     hosts:
#       0:
#         host: 127.0.0.1
#         port: 6379

################
# File storage #
################

# Uploaded media uses these `filestore` settings. The available
# backends are either `filesystem` or `s3`.

filestore.backend: 'filesystem'
filestore.options:
  location: '/data/files'
dsym.cache-path: '/data/dsym-cache'
releasefile.cache-path: '/data/releasefile-cache'

# filestore.backend: 's3'
# filestore.options:
#   access_key: 'AKIXXXXXX'
#   secret_key: 'XXXXXXX'
#   bucket_name: 's3-bucket-name'

system.url-prefix: 'https://sentry.mydomain.int.com'
system.internal-url-prefix: 'http://web:9000'
symbolicator.enabled: true
symbolicator.options:
  url: "http://symbolicator:3021"

transaction-events.force-disable-internal-project: true

######################
# GitHub Integration #
######################

# github-app.id: GITHUB_APP_ID
# github-app.name: 'GITHUB_APP_NAME'
# github-app.webhook-secret: 'GITHUB_WEBHOOK_SECRET' # Use only if configured in GitHub
# github-app.client-id: 'GITHUB_CLIENT_ID'
# github-app.client-secret: 'GITHUB_CLIENT_SECRET'
# github-app.private-key: |
#   -----BEGIN RSA PRIVATE KEY-----
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   -----END RSA PRIVATE KEY-----

Hmm, your system.internal-url-prefix setting seems fine unless you have an override for it in your sentry.conf.py file. What stands out to me is the lack of port numbers in the proxy_pass directives in your nginx config. /api/1/store/ should be going to Relay, so maybe that's the issue here?
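To make that concrete, here is a sketch of those proxy_pass lines with explicit ports. This assumes the stock self-hosted docker-compose service names and ports (relay on 3000, web on 9000); verify both against your docker-compose.yml before applying.

```nginx
# Assumptions: nginx runs on the same Docker network, so service names
# resolve via the 127.0.0.11 resolver already in your config.
location /api/store/ {
    proxy_pass http://relay:3000;
}
location ~ ^/api/[1-9]\d*/ {
    proxy_pass http://relay:3000;
}
location / {
    proxy_pass http://web:9000;
}
```

Without a port, nginx proxies to port 80 of those hosts, which nothing in the stock compose setup listens on.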