Using standalone nginx with Sentry 10

Hi,

I’ve been using Sentry 9 on Docker with an external nginx balancer (external meaning it was installed on the server as a service, not in a Docker container). I was using uwsgi_pass to proxy requests to Sentry.

But in Sentry 10 this approach no longer seems to work. The only viable option I’ve found is to use sentry-onpremise’s built-in nginx container, but that’s not flexible enough for me.

Is there any way to use standalone nginx?

Hi @grapes,

We introduced the built-in Nginx container in Sentry 20.7.0, I think (two versions after 10), but I get what you are saying. You should be able to modify the built-in nginx.conf file to suit your needs, or just pass the requests on to this Nginx container.

Do any of these work for you? If not, can you provide a bit more info on why not?

I have some other web services running on my server, so I cannot expose the nginx container to the external network: ports 80 and 443 are already in use by the standalone nginx.

pass the requests to this Nginx container

This is what I ended up doing. But this approach (nginx -> nginx -> sentry) looks overcomplicated. Keeping two nginx configs in sync is not a good thing: it’s easy to forget or miss something important.
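For reference, the host-level side of that setup can be a single proxied server block. A minimal sketch, assuming the built-in Nginx container is published on localhost port 9000 and `sentry.example.com` plus the certificate paths are hypothetical placeholders:

```nginx
# Host nginx: terminate TLS here, then pass everything to the
# built-in Nginx container published on 127.0.0.1:9000.
server {
    listen 443 ssl;
    server_name sentry.example.com;            # hypothetical hostname
    ssl_certificate     /etc/ssl/sentry.crt;   # hypothetical paths
    ssl_certificate_key /etc/ssl/sentry.key;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 100m;  # event payloads can be large
    }
}
```

This keeps all routing logic in the container’s nginx.conf; the host config only terminates TLS and forwards.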

I also tried disabling the nginx container and using its config in my standalone nginx, along with mapping the containers’ ports 9000 (web) and 3000 (relay) to localhost.

# docker-compose.yml
...
  web:
    ports:
      - '9000:9000'
...
  relay:
    ports:
      - '3000:3000'

# my standalone nginx conf
location /api/store/ {
	proxy_pass http://127.0.0.1:3000;
}
location ~ ^/api/[1-9]\d*/ {
	proxy_pass http://127.0.0.1:3000;
}
location / {
	proxy_pass http://127.0.0.1:9000;
}

It worked for web, but not for relay (I’m a Docker noob, so most likely I did something wrong).
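Incidentally, instead of editing docker-compose.yml directly, the same port mappings can live in a docker-compose.override.yml, which Compose merges automatically and which survives upgrades of the onpremise repo. A sketch, assuming the default service names `web` and `relay` and their in-container ports 9000 and 3000:

```yaml
# docker-compose.override.yml, next to the repo's docker-compose.yml.
# Binding to 127.0.0.1 keeps the ports reachable only from the host,
# so only the standalone nginx can talk to them.
services:
  web:
    ports:
      - '127.0.0.1:9000:9000'
  relay:
    ports:
      - '127.0.0.1:3000:3000'
```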


I don’t know what your other services are, of course, but if they are all in Docker containers of their own, you might consider using Traefik instead of Nginx. (Disclaimer: I am not affiliated with Traefik, just a happy user.)

I tried to do the same thing as you did with Nginx, but got frustrated by the configuration headaches.

With Traefik, you configure it using Docker labels.

Here is how I have modified the docker-compose.yml to do this:

  nginx:
    << : *restart_policy
    # Don't expose port since we are using Traefik
    #ports:
    #  - '9000:80/tcp'
    image: 'nginx:1.16'
    volumes:
      - type: bind
        read_only: true
        source: ./nginx
        target: /etc/nginx
    depends_on:
      - web
      - relay
    labels:
      - traefik.enable=true
      - traefik.http.routers.sentry.entrypoints=https
      - traefik.http.routers.sentry.rule=Host(`sentry.example.com`)
      - traefik.http.routers.sentry.tls=true
      - traefik.http.routers.sentry.tls.certresolver=letsencrypt
      - traefik.http.services.sentry.loadbalancer.server.port=80

And here is the configuration for Traefik itself:

docker-compose.yml

services:
  traefik:
    image: traefik:latest
    container_name: traefik
    restart: always
    ports:
    - "80:80"
    - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"  # So that Traefik can listen to the Docker events
      - "./traefik/traefik.toml:/traefik.toml"
      - "./traefik/acme.json:/acme.json" # Traefik uses this to store LetsEncrypt certificates
      - "./traefik/conf:/conf" # This allows setting other configuration, including Basic Auth user credentials for the Traefik monitoring dashboard
    labels:
      - traefik.enable=true

      # These allow you to see the Traefik monitoring dashboard. You can remove them if you don't need it.
      - traefik.http.routers.traefik.entrypoints=https
      - traefik.http.routers.traefik.rule=Host(`traefik-dashboard.example.com`)
      - traefik.http.routers.traefik.tls=true
      - traefik.http.routers.traefik.tls.certresolver=letsencrypt
      - traefik.http.routers.traefik.service=api@internal
      - traefik.http.routers.traefik.middlewares=traefik-dashboard-auth@file

traefik.toml

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.http.redirections.entryPoint]
      to = "https"
      scheme = "https"

  [entryPoints.https]
    address = ":443"

[providers]
  providersThrottleDuration = "2s"

  [providers.file]
    directory = "/conf"

  [providers.docker]
    watch = true
    endpoint = "unix:///var/run/docker.sock"
    swarmModeRefreshSeconds = "15s"
    exposedbydefault = false

[certificatesResolvers.letsencrypt.acme]
  email = "support@example.com"
  storage = "acme.json"
  [certificatesResolvers.letsencrypt.acme.httpChallenge]
    entryPoint = "http"

[api]
  dashboard = true

[log]
  level = "ERROR"

conf/traefik-dashboard-auth.toml

[http.middlewares]
  [http.middlewares.traefik-dashboard-auth.basicAuth]
  users = [
    "my_user_name:$apr1$my_htpasswd_encoded_password"
  ]
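For completeness, one way to generate that `$apr1$` hash is with openssl (assuming openssl is available; `htpasswd` from apache2-utils works too; the user name and password here are placeholders):

```shell
# Generate an apr1 (htpasswd-style) entry for the basicAuth users list.
hash=$(openssl passwd -apr1 my_password)
echo "my_user_name:$hash"
```

Note that if the credentials go into a docker-compose label instead of this file, every `$` in the hash must be doubled (`$$`) so Compose doesn’t treat it as variable interpolation.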

Are you able to share your nginx and relay logs for investigation?

@grapes
Hi, did you manage to get it working with nginx > nginx > uwsgi?

Hi! I have the same case (updating from dev 10.1, which didn’t have the internal Nginx service), trying to configure an external web server with minimal changes to the repo files.
I’m confused why port 80 is exposed from Docker; usually that’s for the external web server. And then why not 443?
We could replace nginx.conf, and it could work without changes to the docker-compose.yml files.
The basic use case is one server for everything, including the database, web server, etc.
What is the best way to configure Nginx for a single machine?

  • Two nginx instances. One external as an OS service (listens on 80 and 443, handles SSL), redirecting to port 80 inside Docker’s Nginx
  • Reconfigure the internal Docker Nginx via hacks in configuration management (replacing strings, pinning to a version, etc.) and add SSL certs to the Docker-mounted space
  • Use an external (own or SaaS) load balancer on a closed/internal network. But not everyone needs infrastructure like this; in my case one machine is better.
  • Use the Sentry SaaS solution instead of on-premise.

I think I was wrong: 80 is the internal port, and I can use the Nginx service as usual, right?
ports:
- '$SENTRY_BIND:80/tcp'

That’s right. The Nginx image exposes port 80 by default and we map it to 9000 by default, but that can be changed.
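As a sketch, that mapping comes from `SENTRY_BIND` in the repo’s .env file (assuming your version uses the `'$SENTRY_BIND:80/tcp'` mapping quoted above), so restricting the built-in Nginx to localhost, for a host nginx in front, could look like:

```shell
# .env in the sentry-onpremise checkout
SENTRY_BIND=127.0.0.1:9000
```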

Ignore the Nginx instance that we ship and add another layer in front of it as a load balancer. It can be another Nginx or anything else. Also do the TLS termination there.

This is obviously what we recommend when things get complicated :smiley: That said, if you can sustain Sentry on one machine without tuning, you probably don’t need SaaS, unless you just don’t want to deal with all this stuff.
