Hi, I’m using Sentry open source, version 9.1.2. Recently I’ve been getting a lot of 429 errors in Sentry. I have raised the limit to 3000 events per minute by running two Sentry web services (with 10 workers each) to handle the traffic. Can anyone help me here?
Instead of adding new web instances, adding more dedicated workers would be more effective, I think. You can find more information about this over at How to clear backlog and monitor it
Hi, thanks for the reply. I have increased the number of workers, but I’m facing an issue with the “events.preprocess_event” queue: its count keeps increasing. I have four workers running with average CPU below 60%, and the AWS ElastiCache Redis and RDS instances also have sufficient CPU and memory to handle the load, but the count of the preprocess queue keeps increasing, making it hard for us to check issues without delay. Does Sentry set any default TTL on keys, or do I need to take care of that myself?
@senduri - are you using dedicated workers per queue, or did you just increase the number of workers? If it is the latter, I think this result is not surprising. I recommend running the extra workers with `-q events.preprocess_event` so they are dedicated to this queue and nothing else.
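As a rough sketch, that would look something like the following (assuming the standard `sentry run worker` entry point from on-premise 9.x; adjust concurrency to your hardware):

```shell
# General-purpose workers keep draining all queues
sentry run worker --concurrency 4

# Extra workers pinned to the backlogged queue and nothing else
sentry run worker --concurrency 4 -q events.preprocess_event
```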
Thanks for the quick reply. I have assigned one worker to this specific queue, along with increasing the worker count as per your earlier comments, and just like you mentioned, the count is decreasing. But at the same time the “events.save_event” queue count is building up. Can you confirm whether that is expected?
Yup, you are just uncovering the nested worker chain Sentry has, and how the pressure/load moves across its pipes. Now you probably need more dedicated workers for that queue.
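Concretely, you can repeat the same trick for the next stage of the chain, and keep an eye on the queue depths directly on the broker. This is a sketch assuming the default Redis broker, where each Celery queue is a Redis list named after the queue (the ElastiCache hostname is a placeholder):

```shell
# Dedicated worker for the next stage in the pipeline
sentry run worker -q events.save_event

# Watch queue depths on the Redis broker
redis-cli -h <elasticache-host> llen events.preprocess_event
redis-cli -h <elasticache-host> llen events.save_event
```

If `llen` keeps climbing for a queue even with dedicated workers attached, that queue is your next bottleneck.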
Thanks for your reply. I will add more workers to ease the pressure (more money to Bezos). I hope I won’t need to trouble you after this. Thanks once again.
We are happy to take that money instead of Bezos, just sayin’
This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.