What I want to do is run sentry web in multiple regions, each with a local RabbitMQ broker for celery, and use the shovel plugin between them. One of my regions is the primary, and that is where the actual data processing, all the sentry worker and sentry cron processes, and the main UI live. The shovel will take all my data and funnel it into my primary region, where it is processed.
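Concretely, I'm picturing something like this on each secondary region's broker: a dynamic shovel created through the RabbitMQ management API. This is only a sketch; the hostnames, credentials, and queue name are placeholders, and I haven't verified which celery queues sentry actually uses.

```python
# Sketch: create a dynamic shovel on a secondary region's broker that
# forwards queued celery messages to the primary region's broker.
# All hostnames, credentials, and queue names below are placeholders.
import requests

REGIONAL_API = "http://rabbitmq.eu-west.internal:15672"
PRIMARY_AMQP = "amqp://sentry:secret@rabbitmq.us-east.internal:5672"

shovel = {
    "value": {
        "src-uri": "amqp://",        # the broker local to this region
        "src-queue": "default",      # placeholder celery queue name
        "dest-uri": PRIMARY_AMQP,    # the primary region's broker
        "dest-queue": "default",
        "ack-mode": "on-confirm",    # ack locally only once the primary confirms
    }
}

# PUT /api/parameters/shovel/{vhost}/{name}; %2f is the default "/" vhost.
resp = requests.put(
    f"{REGIONAL_API}/api/parameters/shovel/%2f/to-primary",
    json=shovel,
    auth=("guest", "guest"),
)
resp.raise_for_status()
```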
As far as I understand from the code, the sentry web instances in non-primary regions only need to serve the /api/ endpoint in order to collect error traces and breadcrumbs.
The /api endpoint just dumps incoming events into the celery broker, leaving them for data processing.
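In other words, each secondary region's web instance would point at its own local broker, roughly like this in sentry.conf.py (the hostname is a placeholder, and I'm assuming BROKER_URL is the right knob):

```python
# sentry.conf.py in a secondary region; the hostname is a placeholder.
# The web instance enqueues into its region-local broker, and the shovel
# forwards those messages on to the primary region's broker.
BROKER_URL = "amqp://sentry:secret@rabbitmq.eu-west.internal:5672//"
```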
The sentry UI frontend just reads things from the database and displays them, so it does not depend on any data from the secondary sentry web instances.
I am going to ignore frontend log collection for now, and pipe my frontend logs into my primary directly.
Does this seem like it would work? Or should I go back to the drawing board?
So, yeah, you can sorta do this. But it’s not that trivial.
The sentry web worker that accepts data does need access to shared state: it has to query the database, as well as redis, before events are shoved into the broker.
So you’re going to incur latency on those datastores. My hunch is that you’ll have a bad time here, especially if these regions aren’t actually close together.
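If you want to put numbers on it before committing, a quick probe along these lines will show what each cross-region round trip costs, and keep in mind the ingest path does several of them per event. The hostnames and credentials are placeholders, and it assumes redis-py and psycopg2 are installed.

```python
# Sketch: measure round-trip latency from a secondary region to the
# primary's redis and postgres. Hostnames/credentials are placeholders.
import time

import psycopg2
import redis

def timed(label, fn, n=20):
    # Run fn() n times and report the average round trip in milliseconds.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    avg_ms = (time.perf_counter() - start) / n * 1000
    print(f"{label}: {avg_ms:.1f} ms avg over {n} round trips")

r = redis.StrictRedis(host="redis.us-east.internal", port=6379)
timed("redis PING", r.ping)

conn = psycopg2.connect(host="postgres.us-east.internal", dbname="sentry",
                        user="sentry", password="secret")
cur = conn.cursor()
timed("postgres SELECT 1", lambda: cur.execute("SELECT 1"))
```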
You might wanna keep an eye out for a new project we’re working on that will help here.
If this is something that will ship soon, I’d like to contribute to it too. Seems cool.
The reason I was looking into scaling it across regions is that latencies within a region are pretty reasonable, but horrendous across regions and over IPsec tunnels on the public internet.
So… I’m really looking at either having a full-fledged isolated cluster per region, or having one master region and relying on a persistent queue to absorb the cross-region latency.
I don’t want to push people to adopt sentry only for it to suffer the same bad latencies and packet loss that the existing solution does.