CSRF Verification Failed since Sentry upgrade

My on-premise Sentry Docker install no longer works for me since the upgrade. I took my KEMP SSL proxy out of the path, but plain HTTP directly to the host still gives me CSRF Verification Failed errors. The logs show:

```
[WARNING] django.security.csrf: Forbidden (CSRF cookie not set.): /api/2/store/ (status_code=403 request=<WSGIRequest: POST u'/api/2/store/?sentry_key=key&sentry_version=7'>)
```
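For context on that log line: the `/api/2/store/` path is the event-ingest endpoint that SDKs derive from the project DSN. This is not Sentry's actual SDK code, just a minimal sketch of that mapping with a placeholder DSN, to show where the URL in the 403 comes from:

```python
from urllib.parse import urlsplit

def store_url(dsn: str) -> str:
    """Derive the /api/<project>/store/ URL that SDKs post events to
    from a DSN of the form scheme://key@host:port/project."""
    parts = urlsplit(dsn)
    project_id = parts.path.rstrip("/").rsplit("/", 1)[-1]
    host = parts.hostname + (f":{parts.port}" if parts.port else "")
    return (f"{parts.scheme}://{host}/api/{project_id}/store/"
            f"?sentry_key={parts.username}&sentry_version=7")

print(store_url("http://key@sentry.example.com:9000/2"))
# -> http://sentry.example.com:9000/api/2/store/?sentry_key=key&sentry_version=7
```

So the request in the log is a normal SDK event submission; the 403 means it reached Django's CSRF-protected stack instead of the intended ingest path.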

I see references to this error being caused by a misconfigured nginx.conf, but when I poke around the sentry-web container on my machine there is nothing related to nginx in it at all. I don't quite understand why there are no references to nginx in my environment when there are in the official source code, especially since I have just upgraded.

Nginx is set up in a different container. You need to expose the nginx container, not the web container; right now your events are being sent to the wrong container's listening port.
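If it helps, the change can be sketched as a compose override. This is an illustration, not the official file: the `nginx` service name and its internal port 80 are assumptions based on the current onpremise layout, so check your own compose file before using it.

```yaml
# docker-compose.override.yml (sketch)
version: '3.4'
services:
  nginx:
    ports:
      - '9000:80/tcp'   # expose nginx to the outside, not the web container
```

With this in place, nothing should publish the web container's port 9000 directly; all traffic goes through nginx.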

That's the part I'm not getting: my docker-compose.yml doesn't match the source code, and there is no nginx in it. I thought install.sh was supposed to take care of all of that?

```yaml
version: '3.4'
x-restart-policy: &restart_policy
  restart: unless-stopped
x-sentry-defaults: &sentry_defaults
  << : *restart_policy
  build:
    context: ./sentry
    args:
      - SENTRY_IMAGE
  image: sentry-onpremise-local
  depends_on:
    - redis
    - postgres
    - memcached
    - smtp
    - snuba-api
    - snuba-consumer
    - snuba-replacer
    - symbolicator
    - kafka
  environment:
    SNUBA: 'http://snuba-api:1218'
  volumes:
    - 'sentry-data:/data'
x-snuba-defaults: &snuba_defaults
  << : *restart_policy
  depends_on:
    - redis
    - clickhouse
    - kafka
  image: 'getsentry/snuba:latest'
  environment:
    SNUBA_SETTINGS: docker
    CLICKHOUSE_HOST: clickhouse
    DEFAULT_BROKERS: 'kafka:9092'
    REDIS_HOST: redis
    # TODO: Remove these after getsentry/snuba#353
    UWSGI_MAX_REQUESTS: '10000'
    UWSGI_DISABLE_LOGGING: 'true'
    UWSGI_ENABLE_THREADS: 'true'
    UWSGI_DIE_ON_TERM: 'true'
    UWSGI_NEED_APP: 'true'
    UWSGI_IGNORE_SIGPIPE: 'true'
    UWSGI_IGNORE_WRITE_ERRORS: 'true'
    UWSGI_DISABLE_WRITE_EXCEPTION: 'true'
services:
  smtp:
    << : *restart_policy
    image: tianon/exim4
    volumes:
      - 'sentry-smtp:/var/spool/exim4'
      - 'sentry-smtp-log:/var/log/exim4'
  memcached:
    << : *restart_policy
    image: 'memcached:1.5-alpine'
  redis:
    << : *restart_policy
    image: 'redis:5.0-alpine'
    volumes:
      - 'sentry-redis:/data'
  postgres:
    << : *restart_policy
    image: 'postgres:9.6'
    volumes:
      - 'sentry-postgres:/var/lib/postgresql/data'
  zookeeper:
    << : *restart_policy
    image: 'confluentinc/cp-zookeeper:5.1.2'
    environment:
      ZOOKEEPER_CLIENT_PORT: '2181'
      CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
      ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: 'WARN'
      ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL: 'WARN'
    volumes:
      - 'sentry-zookeeper:/var/lib/zookeeper/data'
      - 'sentry-zookeeper-log:/var/lib/zookeeper/log'
      - 'sentry-secrets:/etc/zookeeper/secrets'
  kafka:
    << : *restart_policy
    depends_on:
      - zookeeper
    image: 'confluentinc/cp-kafka:5.1.2'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://kafka:9092'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
      KAFKA_LOG4J_LOGGERS: 'kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN'
      KAFKA_LOG4J_ROOT_LOGLEVEL: 'WARN'
      KAFKA_TOOLS_LOG4J_LOGLEVEL: 'WARN'
    volumes:
      - 'sentry-kafka:/var/lib/kafka/data'
      - 'sentry-kafka-log:/var/lib/kafka/log'
      - 'sentry-secrets:/etc/kafka/secrets'
  clickhouse:
    << : *restart_policy
    image: 'yandex/clickhouse-server:19.11'
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - 'sentry-clickhouse:/var/lib/clickhouse'
  snuba-api:
    << : *snuba_defaults
  snuba-consumer:
    << : *snuba_defaults
    command: consumer --auto-offset-reset=latest --max-batch-time-ms 750
  snuba-replacer:
    << : *snuba_defaults
    command: replacer --auto-offset-reset=latest --max-batch-size 3
  snuba-cleanup:
    << : *snuba_defaults
    image: snuba-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'getsentry/snuba:latest'
    command: '"*/5 * * * * gosu snuba snuba cleanup --dry-run False"'
  symbolicator:
    << : *restart_policy
    image: 'getsentry/symbolicator:latest'
    volumes:
      - 'sentry-symbolicator:/data'
    command: run
  symbolicator-cleanup:
    image: symbolicator-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'getsentry/symbolicator:latest'
    command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
    volumes:
      - 'sentry-symbolicator:/data'
  web:
    << : *sentry_defaults
    ports:
      - '9000:9000/tcp'
  cron:
    << : *sentry_defaults
    command: run cron
  worker:
    << : *sentry_defaults
    command: run worker
  post-process-forwarder:
    << : *sentry_defaults
    # Increase --commit-batch-size 1 below to deal with high-load environments.
    command: run post-process-forwarder --commit-batch-size 1
  sentry-cleanup:
    << : *sentry_defaults
    image: sentry-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'sentry-onpremise-local'
    command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
volumes:
  sentry-data:
    external: true
  sentry-postgres:
    external: true
  sentry-redis:
    external: true
  sentry-zookeeper:
    external: true
  sentry-kafka:
    external: true
  sentry-clickhouse:
    external: true
  sentry-symbolicator:
    external: true
  sentry-secrets:
  sentry-smtp:
  sentry-zookeeper-log:
  sentry-kafka-log:
  sentry-smtp-log:
```

Please pull your onpremise repo then 🙂

I'm back up and working. I pulled the latest, then still had trouble with the API calls returning 502 gateway errors, which traced down to relay on TCP port 3000. Relay was reporting a permissions problem on relay/credentials.json, so I chmod'ed it. That was still a no-go, so I deleted the file, and that finally got me going.
What a ride!

Thanks for the suggestions

Are you sure you ran install.sh again after pulling? That should have taken care of everything.

It happened again. Using the latest onpremise branch right now, exposing the nginx container (rather than web) fixed the issue for us.