I am trying to prevent Sentry from grouping my errors. Specifically, I log to Sentry (captureMessage()) when a task is taking too long. The code is as simple as:
from celery.utils.log import get_task_logger

task_logger = get_task_logger(__name__)

....

if task_has_exceeded_time_limit:
    task_logger.warning(f'Hanging job: Job processing time has exceeded - {restart_threshold}. {log_data}')
    client.captureMessage(f'Hanging job: Job processing time has exceeded: {job_id}', extra={
        'restart_threshold': f'{restart_threshold} seconds',
        'log_data': log_data
    })
A daemon checks this task on a regular interval. As long as the job is still "taking too long", I want it to alert me. As of now, I have one Sentry alert with 10k events grouped under it. Instead, I want each of these events to be an individual Sentry alert (as if they were all different errors/alerts). I cannot find anything that would work. The closest thing I can find is https://docs.sentry.io/data-management/event-grouping/sdk-fingerprinting/?platform=python#group-errors-more-granularly. However, this does not help because I am not "splitting" these errors into further subcategories; I simply do not want the grouping algorithm to be enabled for this one particular section of my code.
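For context, my understanding of the fingerprint API from those docs is that grouping could in principle be defeated by giving every event a unique fingerprint. Here is a sketch of what I mean (the make_fingerprint helper and the choice of job_id plus a random UUID are my own illustration, not something from the docs):

```python
import uuid

def make_fingerprint(job_id):
    # Including a random UUID makes every fingerprint unique, so Sentry
    # should put each event in its own group; job_id alone would still
    # group repeated alerts for the same job together.
    return ['hanging-job', str(job_id), uuid.uuid4().hex]

# With the unified sentry_sdk, I believe the fingerprint would be set
# on a scope around the capture call, roughly:
#
#   with sentry_sdk.push_scope() as scope:
#       scope.fingerprint = make_fingerprint(job_id)
#       sentry_sdk.capture_message(f'Hanging job: {job_id}')
#
# (With the older raven client I am using, I think captureMessage
# accepts a fingerprint argument directly.)

fp_a = make_fingerprint(42)
fp_b = make_fingerprint(42)
```

But this feels like abusing fingerprints to defeat grouping rather than actually disabling it, which is what I am asking about.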
Is this possible to do in Sentry?