Sentry AWS cluster doesn't share events

Hello!

I’ve just configured an AWS cluster for running a production-scale self-hosted Sentry application.

The configuration is:

  • HAProxy for load balancing and TLS termination
  • Shared Redis instance
  • Shared Postgres database
  • Two Sentry application instances (both share sentry.conf.py and config.yml)
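For reference, the load-balancing layer looks roughly like this (a minimal HAProxy sketch; the server IPs, cert path, and ports are placeholders, and `/_health/` is Sentry's health-check endpoint):

```
frontend sentry_https
    bind :443 ssl crt /etc/haproxy/certs/sentry.pem   # TLS termination
    default_backend sentry_web

backend sentry_web
    balance roundrobin                # alternate between the two instances
    option httpchk GET /_health/
    server sentry1 10.0.1.10:9000 check
    server sentry2 10.0.1.11:9000 check
```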

Since both instances use identical settings, I expected them to share data. However, when I send a test event from a simple Python application, the event is not shared: it arrives at one instance but not the other. Which instance receives it simply depends on which one the load balancer picks in its round-robin rotation.
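For context, the "simple Python application" just sends one event. At the protocol level, a store request is derived from the DSN like this (a stdlib-only sketch; the DSN key, host, and project id below are made-up placeholders):

```python
from urllib.parse import urlsplit

# Placeholder DSN -- the key, host, and project id are made up.
DSN = "https://abc123@sentry.example.com/42"

def build_store_request(dsn, message):
    """Derive the Sentry 'store' endpoint and auth header from a DSN.

    DSN format: <scheme>://<public_key>@<host>/<project_id>
    """
    parts = urlsplit(dsn)
    host = parts.hostname + (f":{parts.port}" if parts.port else "")
    project_id = parts.path.rsplit("/", 1)[-1]
    url = f"{parts.scheme}://{host}/api/{project_id}/store/"
    auth = f"Sentry sentry_version=7, sentry_key={parts.username}"
    body = {"message": message, "level": "info", "platform": "python"}
    return url, auth, body

url, auth, body = build_store_request(DSN, "load balancer test")
print(url)  # https://sentry.example.com/api/42/store/
```

In practice the test app just calls `sentry_sdk.init(dsn=...)` followed by `sentry_sdk.capture_message("test")`, which does roughly the same thing under the hood; the point is that each event is a single HTTP request, so it lands on whichever instance HAProxy picks.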

Any ideas for why this might be?

Running the latest stable versions: Sentry On-Premise 20.10.1, Ubuntu 20.04, Redis 6.0.8, Docker 1.27.4, HAProxy 2.2, Postgres 12.4.

Update: I’m finding this post on data persistence helpful – Data volumes persistence

It seems that Clickhouse is an essential component

It sure is :smiley: So are all the workers, Relay, and Snuba

Would you recommend a single shared instance each of Clickhouse, Relay, and Snuba?

(Thank you!)

I don’t understand why you have 2 instances that share their data, so it’s hard to make recommendations. If it’s for redundancy, then yes, I’d go with one of each, but Clickhouse is the datastore, so it should be common/shared.

Relay and Snuba are stateless services, but they do need to know their backends and Kafka instances, so I’d go with single instances of these for now.
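Concretely, "knowing their backends" means every Snuba service must point at the same shared Kafka, ClickHouse, and Redis. A hedged sketch, assuming the default service names and environment variables from the 20.x onpremise docker-compose (verify against your own install):

```yaml
# Illustrative fragment only -- service names and variables follow the
# onpremise defaults and may differ in your setup.
snuba-api:
  image: getsentry/snuba:20.10.1
  environment:
    SNUBA_SETTINGS: docker
    CLICKHOUSE_HOST: clickhouse   # the single shared ClickHouse
    DEFAULT_BROKERS: kafka:9092   # the single shared Kafka
    REDIS_HOST: redis             # the shared Redis mentioned above
```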

The two instances are for load balancing :slightly_smiling_face: Ideally we would like to scale the number of instances and accept requests round-robin.

Appreciate the input!

This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.