How many consumer containers should I run for Sentry?

I run Sentry 21.3.0 in production and follow onpremise/docker-compose.yml at 48c855aa3def4557ef799d878c75832662b5c67d · getsentry/onpremise · GitHub to deploy the different containers for Sentry.
In this file, every container is unique and runs as a single copy.
I see events updating slowly in Sentry, and the Redis cache is growing very fast. While trying to tune it, I ran into this question:
how many copies of each container can I run?
For example, I run 3 snuba-transactions-consumer containers and see that only one of them is doing work (I suppose the other "consumer" containers also work only in single-copy mode?).
sentry-web and worker are clear: as many as possible :slight_smile:
What about symbolicator? Snuba? Relay? The Sentry consumers?

Thanks for any support. It would be great to find this info in the docs :slight_smile:


I think this may help you:

This link is very interesting; I will try running that type of worker tomorrow.
But my question is about the other components.
How many consumer containers can I run? How many Snuba containers?

For example, I have 3 containers running snuba-transactions-consumer, but only one of them writes processing logs.
Like this:

Sep 21 23:22:14 snuba-transactions-consumer03 [792]: 2021-09-21 20:22:14,178 Completed processing <Batch: 20 messages, open for 2.38 seconds>.

I don’t know why you are focusing on Snuba consumers. They already have built-in multi-process support and are very unlikely to be your bottleneck. If you are having issues with a large Redis, that is a clear indication of the need for more, dedicated workers, as Redis is used as the job pool for them.

I focused on these consumers because my first effort was to scale up all components to speed up Sentry. I added more sentry-workers, consumers, etc. and found that adding consumers had no influence on the queue. I re-checked the logs and found that only one consumer was working as expected.
Now my Redis is fine, but I still want to know: how many copies of each Sentry component can I run?
A long time ago Sentry had only worker, web and cron.
Now it has 18 different containers (relay, sentry-ingest-consumer, sentry-post-process-forwarder, sentry-subscription-consumer-events, sentry-subscription-consumer-transactions, sentry-web, sentry_cron, sentry_worker, snuba-consumer, snuba-outcomes-consumer, snuba-replacer, snuba-sessions-consumer, snuba-subscription-consumer-events, snuba-subscription-consumer-transactions, snuba-transactions-consumer, snuba, symbolicator, etc.)

As I see it, I can only run the consumers (all containers that communicate with Kafka) as a single copy. Two containers don't work any faster? Is the same true for snuba-replacer and sentry-post-process-forwarder?

You can scale many components, including Snuba. You just need to adjust some settings accordingly, but I don't know enough about those. I think @fpacifici can help.

Most Snuba consumers (transactions-consumer, errors-consumer, outcomes-consumer, sessions-consumer) can be scaled out either by adding more containers or by adding processes to a single container.

Adding containers
First, you need to increase the number of partitions in Kafka for the topic relevant to them. Each Kafka partition can only be consumed by one consumer at a time in order to preserve Kafka's in-order delivery, so you need to increase the partitions first and then increase the number of consumers; otherwise only one will do the work.
Errors and transactions are semantically partitioned by project id, so a single Sentry project id will always land in the same partition. Scaling out by adding consumers helps when you have a lot of projects; it does not help for a single project (outcomes and sessions do not have this constraint).
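The two steps above can be sketched roughly as follows. This is an assumption-laden sketch for the onpremise docker-compose setup: it assumes the Kafka service is named `kafka` (Confluent image with `kafka-topics` on the PATH), that the broker listens on `localhost:9092` inside that container, and that the transactions consumer reads from the `events` topic; check the topic your consumer is actually configured with before running anything.

```shell
# 1. Increase the partition count for the relevant topic.
#    Note: Kafka partition counts can be increased but never decreased.
docker-compose exec kafka kafka-topics \
    --bootstrap-server localhost:9092 \
    --alter --topic events --partitions 3

# 2. Scale the consumer so each partition gets its own instance.
#    With fewer partitions than consumers, the extra consumers sit idle.
docker-compose up -d --scale snuba-transactions-consumer=3
```

This matches the behavior described in the thread: with one partition, the second and third consumer containers join the group but receive no partition assignment, which is why only one of them logged any processing.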

Adding processes
In order to increase capacity for a single partition (this also works, to an extent, when you have many projects), you can increase the number of processes consuming messages by setting these three parameters in the consumer CLI:


The block sizes are in bytes and set the size of the shared-memory area the consumer will use. The right size depends on your load and event size.
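A sketch of what such an invocation could look like. To the best of my knowledge the three parameters referred to are `--processes`, `--input-block-size` and `--output-block-size`, but the post above does not name them, so treat the flags and values here as assumptions and verify against `snuba consumer --help` for your version:

```shell
# Hedged sketch: multi-process Snuba consumer for the transactions storage.
# --processes spawns parallel message-processing workers for one partition;
# the block sizes (in bytes) size the shared-memory buffers between them.
snuba consumer \
    --storage transactions \
    --auto-offset-reset=latest \
    --processes 4 \
    --input-block-size 16000000 \
    --output-block-size 16000000
```

In the onpremise docker-compose file, this would mean appending those flags to the `command:` of the relevant consumer service rather than scaling the service itself.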

Though I would start scaling the consumer only when you experience a backlog in Kafka, which means the consumer cannot keep up. Redis, as mentioned above, is unrelated.


