I’ve just configured an AWS cluster to run a production-scale self-hosted Sentry application.
The configuration is:
HAProxy for load balancing and TLS termination
Shared Redis instance
Shared Postgres database
Two Sentry application instances (both share sentry.conf.py and config.yml)
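For context, the load-balancing piece of this layout can be sketched as an haproxy.cfg fragment; the hostnames, ports, and certificate path below are placeholders, not taken from my actual config (Sentry web listens on 9000 by default):

```
frontend sentry_https
    bind *:443 ssl crt /etc/haproxy/certs/sentry.pem   # TLS termination
    default_backend sentry_web

backend sentry_web
    balance roundrobin                  # alternates requests between instances
    server sentry1 10.0.1.10:9000 check
    server sentry2 10.0.1.11:9000 check
```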
The two instances are configured identically, so they should share data. However, when I send a test event from a simple Python application, the event is not shared: it arrives at one instance but not the other. Which instance receives it depends only on the load balancer distributing requests round-robin.
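The test sender is roughly the following standard-library-only sketch, which builds a POST against Sentry's legacy `/api/<project_id>/store/` endpoint with an `X-Sentry-Auth` header; the DSN and message are placeholders:

```python
import json
import time
from urllib.parse import urlsplit
from urllib.request import Request  # urlopen intentionally not called below

def build_store_request(dsn: str, payload: dict) -> Request:
    """Build the HTTP request a minimal client would POST to the
    legacy Sentry store endpoint: /api/<project_id>/store/."""
    parts = urlsplit(dsn)  # DSN shape: https://<key>@<host>/<project_id>
    project_id = parts.path.rsplit("/", 1)[-1]
    url = f"{parts.scheme}://{parts.hostname}/api/{project_id}/store/"
    headers = {
        "Content-Type": "application/json",
        "X-Sentry-Auth": (
            "Sentry sentry_version=7, "
            f"sentry_key={parts.username}, "
            f"sentry_timestamp={int(time.time())}, "
            "sentry_client=raw-urllib/0.1"
        ),
    }
    return Request(url, data=json.dumps(payload).encode(), headers=headers)

# Placeholder DSN: the hostname resolves to HAProxy, which then
# round-robins the request to one of the two Sentry instances.
req = build_store_request(
    "https://examplekey@sentry.example.com/1",
    {"message": "test event", "level": "info"},
)
print(req.full_url)  # https://sentry.example.com/api/1/store/
# urllib.request.urlopen(req) would actually deliver the event.
```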
Any ideas for why this might be?
Running the latest stable of everything: Sentry On-Premise 20.10.1, Ubuntu 20.04, Redis 6.0.8, Docker Compose 1.27.4, HAProxy 2.2, Postgres 12.4
I don’t understand why you have 2 instances that are supposed to share their data, so it’s hard to make recommendations. If it is for redundancy then yeah, I’d go with one instance for each service, but ClickHouse is the event datastore, so it should be common/shared.
Relay and Snuba are stateless services, but they do need to know their backends and Kafka instances, so I’d go with single instances of these for now.
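To illustrate: in the self-hosted docker-compose setup, the Snuba containers learn their backends purely through environment variables, so pointing every Snuba service (on both hosts) at one shared ClickHouse/Kafka/Redis would look roughly like this sketch — the `shared-*.internal` hostnames are placeholders:

```yaml
# Hypothetical docker-compose override: every Snuba container on both
# hosts is pointed at the SAME ClickHouse, Kafka, and Redis.
x-snuba-env: &snuba_env
  SNUBA_SETTINGS: docker
  CLICKHOUSE_HOST: shared-clickhouse.internal
  DEFAULT_BROKERS: "shared-kafka.internal:9092"
  REDIS_HOST: shared-redis.internal

services:
  snuba-api:
    environment: *snuba_env
```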