Slack integration not working on Sentry On-premise v10 dev

The global integration doesn't seem to work, and the legacy integration only works for the test plugin. No event-related alerts are coming through :frowning: Has anyone seen a similar problem?

Update: mail doesn't work either (the test mail does work). So the integrations are set up for both Slack and mail, but somehow event notifications are never sent? We use Mailgun for our mail service, and when I checked, only the test mail showed up in its logs.

@cwang - If you share more details about your setup and configuration, that may help other people identify the issue.

We followed the GitHub guide, with the only modification being the use of Mailgun. Here's what we did (a rough command sketch follows the list):

  1. Git clone the on-premise repo
  2. Modify config to use mailgun
  3. Run ./install.sh
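
Roughly, the commands were the following - a sketch from memory, with the repo URL and paths as they are in the stock on-premise repo:

git clone https://github.com/getsentry/onpremise.git
cd onpremise
# point the mail.* settings in sentry/config.yml at Mailgun (host, port, username, password)
./install.sh
docker-compose up -d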

We also run Sentry behind nginx with SSL, but I doubt it's a network-related issue. We can send the test mail and test notification just fine. The only thing that isn't working is that events don't trigger alerts (we don't see them in the audits or logs).

Instance: Single GCP instance (standard n1 with 4 cores and 16 GB RAM)
Sentry Version: v10 dev

Update: it looks like it's not just a Slack integration issue. We just found out that the Organization Stats page lists 0 events… @BYK, have you seen this issue before? Any suggestions for further debugging?

@cwang - thanks for sharing all this info!

Regarding alerts, maybe you need to configure actions on your alert rules? See https://docs.sentry.io/workflow/notifications/alerts/#action-types

Zero events in the Organization Stats looks weird. Can you get the logs for your workers and Snuba to make sure they are running correctly and ingesting events?
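
Something along these lines should pull the relevant logs (service names as defined in the stock onpremise docker-compose.yml):

docker-compose logs --tail=200 worker
docker-compose logs --tail=200 snuba-api snuba-consumer snuba-replacer
docker-compose logs --tail=200 kafka clickhouse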

@BYK Logs for the worker - nothing looks suspicious, only some source.disabled warnings.

Attaching to onpremise_worker_1
worker_1 | 08:13:45 [WARNING] sentry.utils.geo: settings.GEOIP_PATH_MMDB not configured.
worker_1 | 08:13:54 [INFO] sentry.plugins.github: apps-not-configured
worker_1 | 08:13:55 [INFO] sentry.bgtasks: bgtask.spawn (task_name=u'sentry.bgtasks.clean_dsymcache:clean_dsymcache')
worker_1 | System check identified some issues:
worker_1 |
worker_1 | WARNINGS:
worker_1 | ?: (urls.W002) Your URL pattern '/$' has a regex beginning with a '/'. Remove this slash as it is unnecessary.
worker_1 |
worker_1 | -------------- celery@212ff886a109 v3.1.18 (Cipater)
worker_1 | ---- **** -----
worker_1 | --- * ***  * -- Linux-5.0.0-1025-gcp-x86_64-with-debian-10.1
worker_1 | -- * - **** ---
worker_1 | - ** ---------- [config]
worker_1 | - ** ---------- .> app: sentry:0x7fbe9031c610
worker_1 | - ** ---------- .> transport: redis://redis:6379/0
worker_1 | - ** ---------- .> results: disabled
worker_1 | - *** --- * --- .> concurrency: 4 (prefork)
worker_1 | -- ******* ----
worker_1 | --- ***** ----- [queues]
worker_1 | -------------- .> activity.notify exchange=default(direct) key=activity.notify
worker_1 | .> alerts exchange=default(direct) key=alerts
worker_1 | .> app_platform exchange=default(direct) key=app_platform
worker_1 | .> assemble exchange=default(direct) key=assemble
worker_1 | .> auth exchange=default(direct) key=auth
worker_1 | .> buffers.process_pending exchange=default(direct) key=buffers.process_pending
worker_1 | .> cleanup exchange=default(direct) key=cleanup
worker_1 | .> commits exchange=default(direct) key=commits
worker_1 | .> counters-0 exchange=counters(direct) key=
worker_1 | .> default exchange=default(direct) key=default
worker_1 | .> digests.delivery exchange=default(direct) key=digests.delivery
worker_1 | .> digests.scheduling exchange=default(direct) key=digests.scheduling
worker_1 | .> email exchange=default(direct) key=email
worker_1 | .> events.preprocess_event exchange=default(direct) key=events.preprocess_event
worker_1 | .> events.process_event exchange=default(direct) key=events.process_event
worker_1 | .> events.reprocess_events exchange=default(direct) key=events.reprocess_events
worker_1 | .> events.reprocessing.preprocess_event exchange=default(direct) key=events.reprocessing.preprocess_event
worker_1 | .> events.reprocessing.process_event exchange=default(direct) key=events.reprocessing.process_event
worker_1 | .> events.save_event exchange=default(direct) key=events.save_event
worker_1 | .> files.delete exchange=default(direct) key=files.delete
worker_1 | .> incidents exchange=default(direct) key=incidents
worker_1 | .> integrations exchange=default(direct) key=integrations
worker_1 | .> merge exchange=default(direct) key=merge
worker_1 | .> options exchange=default(direct) key=options
worker_1 | .> reports.deliver exchange=default(direct) key=reports.deliver
worker_1 | .> reports.prepare exchange=default(direct) key=reports.prepare
worker_1 | .> search exchange=default(direct) key=search
worker_1 | .> sleep exchange=default(direct) key=sleep
worker_1 | .> stats exchange=default(direct) key=stats
worker_1 | .> triggers-0 exchange=triggers(direct) key=
worker_1 | .> unmerge exchange=default(direct) key=unmerge
worker_1 | .> update exchange=default(direct) key=update
worker_1 |

Partial logs for snuba-api - again, nothing looks suspicious…

Attaching to onpremise_snuba-api_1
snuba-api_1 | Running Snuba API server with default arguments: --socket /tmp/snuba.sock --http 0.0.0.0:1218 --http-keepalive
snuba-api_1 | + '[' api = bash ']'
snuba-api_1 | + '[' a = - ']'
snuba-api_1 | + '[' api = api ']'
snuba-api_1 | + '[' 1 -gt 1 ']'
snuba-api_1 | + _default_args='--socket /tmp/snuba.sock --http 0.0.0.0:1218 --http-keepalive'
snuba-api_1 | + echo 'Running Snuba API server with default arguments: --socket /tmp/snuba.sock --http 0.0.0.0:1218 --http-keepalive'
snuba-api_1 | + set -- uwsgi --master --manage-script-name --wsgi-file snuba/views.py --die-on-term --socket /tmp/snuba.sock --http 0.0.0.0:1218 --http-keepalive
snuba-api_1 | + set -- uwsgi --master --manage-script-name --wsgi-file snuba/views.py --die-on-term --socket /tmp/snuba.sock --http 0.0.0.0:1218 --http-keepalive
snuba-api_1 | + snuba uwsgi --help
snuba-api_1 | + exec gosu snuba uwsgi --master --manage-script-name --wsgi-file snuba/views.py --die-on-term --socket /tmp/snuba.sock --http 0.0.0.0:1218 --http-keepalive
snuba-api_1 | *** Starting uWSGI 2.0.17 (64bit) on [Fri Nov 22 08:13:41 2019] ***
snuba-api_1 | compiled with version: 8.3.0 on 22 November 2019 01:30:22
snuba-api_1 | os: Linux-5.0.0-1025-gcp #26~18.04.1-Ubuntu SMP Mon Nov 11 13:09:18 UTC 2019
snuba-api_1 | nodename: cccdeebed746
snuba-api_1 | machine: x86_64
snuba-api_1 | clock source: unix
snuba-api_1 | pcre jit disabled
snuba-api_1 | detected number of CPU cores: 4
snuba-api_1 | current working directory: /usr/src/snuba
snuba-api_1 | detected binary path: /usr/local/bin/uwsgi
snuba-api_1 | your memory page size is 4096 bytes
snuba-api_1 | detected max file descriptor number: 1048576
snuba-api_1 | lock engine: pthread robust mutexes
snuba-api_1 | thunder lock: disabled (you can enable it with --thunder-lock)
snuba-api_1 | uWSGI http bound on 0.0.0.0:1218 fd 3
snuba-api_1 | uwsgi socket 0 bound to UNIX address /tmp/snuba.sock fd 6
snuba-api_1 | Python version: 3.7.5 (default, Nov 15 2019, 02:40:28) [GCC 8.3.0]
snuba-api_1 | Python main interpreter initialized at 0x55838134fe80
snuba-api_1 | python threads support enabled
snuba-api_1 | your server socket listen backlog is limited to 100 connections
snuba-api_1 | your mercy for graceful operations on workers is 60 seconds
snuba-api_1 | mapped 145808 bytes (142 KB) for 1 cores
snuba-api_1 | *** Operational MODE: single process ***
snuba-api_1 | initialized 38 metrics
snuba-api_1 | WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x55838134fe80 pid: 1 (default app)
snuba-api_1 | *** uWSGI is running in multiple interpreter mode ***
snuba-api_1 | spawned uWSGI master process (pid: 1)
snuba-api_1 | spawned uWSGI worker 1 (pid: 14, cores: 1)
snuba-api_1 | metrics collector thread started
snuba-api_1 | spawned uWSGI http 1 (pid: 17)
snuba-api_1 | …The work of process 14 is done. Seeya!
snuba-api_1 | worker 1 killed successfully (pid: 14)
snuba-api_1 | Respawned uWSGI worker 1 (new pid: 18)
snuba-api_1 | …The work of process 18 is done. Seeya!
snuba-api_1 | worker 1 killed successfully (pid: 18)
snuba-api_1 | Respawned uWSGI worker 1 (new pid: 19)

Partial logs for snuba-consumer - this one might be it: there are a lot of "Error submitting packet, dropping the packet and closing the socket" messages, alongside lines like "Flushing 1 items (from {TopicPartition(topic='events', partition=0): Offsets(lo=1, hi=1)}): forced:False size:False time:True". Could you check, please?

Attaching to onpremise_snuba-consumer_1
snuba-consumer_1 | + '[' consumer = bash ']'
snuba-consumer_1 | + '[' c = - ']'
snuba-consumer_1 | + '[' consumer = api ']'
snuba-consumer_1 | + snuba consumer --help
snuba-consumer_1 | + set -- snuba consumer --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1 | + exec gosu snuba snuba consumer --auto-offset-reset=latest --max-batch-time-ms 750
snuba-consumer_1 | 2019-11-22 08:14:02,840 New streams assigned: {TopicPartition(topic='events', partition=0): 0}
snuba-consumer_1 | 2019-11-22 08:14:08,465 Flushing 1 items (from {TopicPartition(topic='events', partition=0): Offsets(lo=0, hi=0)}): forced:False size:False time:True
snuba-consumer_1 | 2019-11-22 08:14:08,465 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:14:08,539 Worker flush took 72ms
snuba-consumer_1 | 2019-11-22 08:14:08,539 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:14:19,424 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:14:20,425 Flushing 1 items (from {TopicPartition(topic='events', partition=0): Offsets(lo=1, hi=1)}): forced:False size:False time:True
snuba-consumer_1 | 2019-11-22 08:14:20,445 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:14:20,445 Worker flush took 20ms
snuba-consumer_1 | 2019-11-22 08:14:20,446 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:14:22,440 Flushing 1 items (from {TopicPartition(topic='events', partition=0): Offsets(lo=2, hi=2)}): forced:False size:False time:True
snuba-consumer_1 | 2019-11-22 08:14:22,441 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:14:22,459 Worker flush took 17ms
snuba-consumer_1 | 2019-11-22 08:14:22,459 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:17:26,591 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:17:27,591 Flushing 1 items (from {TopicPartition(topic='events', partition=0): Offsets(lo=3, hi=3)}): forced:False size:False time:True
snuba-consumer_1 | 2019-11-22 08:17:27,614 Error submitting packet, dropping the packet and closing the socket
snuba-consumer_1 | 2019-11-22 08:17:27,615 Worker flush took 22ms

Partial logs for snuba-replacer - similar to the above.

Attaching to onpremise_snuba-replacer_1
snuba-replacer_1 | + '[' replacer = bash ']'
snuba-replacer_1 | + '[' r = - ']'
snuba-replacer_1 | + '[' replacer = api ']'
snuba-replacer_1 | + snuba replacer --help
snuba-replacer_1 | + set -- snuba replacer --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1 | + exec gosu snuba snuba replacer --auto-offset-reset=latest --max-batch-size 3
snuba-replacer_1 | 2019-11-22 08:14:02,766 New streams assigned: {TopicPartition(topic='event-replacements', partition=0): 0}
snuba-replacer_1 | 2019-11-22 08:51:11,008 Flushing 0 items (from {TopicPartition(topic='event-replacements', partition=0): Offsets(lo=0, hi=0)}): forced:False size:False time:True
snuba-replacer_1 | 2019-11-22 08:51:11,010 Error submitting packet, dropping the packet and closing the socket
snuba-replacer_1 | 2019-11-22 09:51:26,114 Flushing 1 items (from {TopicPartition(topic='event-replacements', partition=0): Offsets(lo=1, hi=1)}): forced:False size:False time:True
snuba-replacer_1 | 2019-11-22 09:51:26,116 Error submitting packet, dropping the packet and closing the socket
snuba-replacer_1 | 2019-11-22 09:51:26,185 Replacing 4 rows took 32ms
snuba-replacer_1 | 2019-11-22 09:51:26,185 Error submitting packet, dropping the packet and closing the socket
snuba-replacer_1 | 2019-11-22 09:51:26,186 Worker flush took 69ms
snuba-replacer_1 | 2019-11-22 09:51:26,186 Error submitting packet, dropping the packet and closing the socket

Partial logs for snuba-cleanup

Attaching to onpremise_snuba-cleanup_1
snuba-cleanup_1 | SHELL=/bin/bash
snuba-cleanup_1 | BASH_ENV=/container.env
snuba-cleanup_1 | */5 * * * * gosu snuba snuba cleanup --dry-run False > /proc/1/fd/1 2>/proc/1/fd/2
snuba-cleanup_1 | 2019-11-22 08:15:03,081 Dropped 0 partitions on clickhouse
snuba-cleanup_1 | 2019-11-22 08:20:02,914 Dropped 0 partitions on clickhouse
snuba-cleanup_1 | 2019-11-22 08:25:02,706 Dropped 0 partitions on clickhouse
snuba-cleanup_1 | 2019-11-22 08:30:03,517 Dropped 0 partitions on clickhouse
snuba-cleanup_1 | 2019-11-22 08:35:03,339 Dropped 0 partitions on clickhouse
snuba-cleanup_1 | 2019-11-22 08:40:03,117 Dropped 0 partitions on clickhouse

Some additional info - events are showing up fine, but the stats are all zeros. Please see the screenshots below.

That is harmless. Those "Error submitting packet" lines come from the internal metrics system, which assumes a metrics daemon is in place; there isn't one for on-premise. We'll turn it off in a future version.

So if it is only the stats, I think those are stored in Redis. Is everything good on that front? Maybe it keeps restarting or cannot persist its data?
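
A quick way to check that (assuming the stock redis service name from the onpremise docker-compose.yml):

# is Redis up, and does it keep restarting?
docker-compose ps redis
docker-compose logs --tail=50 redis
# does it respond and hold any data?
docker-compose exec redis redis-cli ping
docker-compose exec redis redis-cli info keyspace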

@BYK We've re-deployed Sentry and the stats are working now. However, the email and Slack notifications are still not working. We use Mailgun, and in the Sentry worker's logs I can see mails being sent, but Mailgun has no record of those deliveries.

As for Slack, we've set up the global integration but had no luck with it. So we tried the legacy integration - the test plugin succeeded (messages were received in Slack), but events are not triggering the alerts :frowning:

For debugging the mail issue, we need the mail configuration settings from your end. Regarding Slack, I honestly have no idea. I’ll ping @scefali in case he knows something.

We use Mailgun with the following settings:

mail.backend: 'smtp'
mail.host: 'smtp.mailgun.org'
mail.port: 465
mail.username: 'username'
mail.password: 'password'
mail.use-tls: true

When I run
docker-compose logs worker

I get

worker_1 | 14:25:55 [INFO] sentry.mail: mail.sent (size=9903 message_id=u'20191201142551.300.80006@somewhere.com')
worker_1 | 14:25:57 [INFO] sentry.mail: mail.sent (size=9860 message_id=u'20191201142550.246.9203@somewhere.com')
worker_1 | 14:25:58 [INFO] sentry.mail: mail.sent (size=9899 message_id=u'20191201142551.246.49981@somewhere.com')
worker_1 | 14:26:06 [INFO] sentry.mail: mail.sent (size=9865 message_id=u'20191201142550.300.78570@somewhere.com')
worker_1 | 14:26:12 [INFO] sentry.mail: mail.queued (message_to=(u'someone@somewhere.com',) project_id=2L user_id=2 group_id=553L message_type=u'notify.activity.assigned' message_id=u'20191201142612.239.26134@somewhere.com')
worker_1 | 14:26:13 [INFO] sentry.mail: mail.queued (message_to=(u'someone@somewhere.com',) project_id=2L user_id=2 group_id=553L message_type=u'notify.activity.assigned' message_id=u'20191201142612.239.13058@somewhere.com')
worker_1 | 14:26:13 [INFO] sentry.mail: mail.queued (message_to=(u'someone@somewhere.com',) project_id=2L user_id=2 group_id=553L message_type=u'notify.activity.assigned' message_id=u'20191201142613.239.24007@somewhere.com')
worker_1 | 14:26:17 [INFO] sentry.mail: mail.sent (size=9814 message_id=u'20191201142613.239.24007@somewhere.com')
worker_1 | 14:26:19 [INFO] sentry.mail: mail.sent (size=9813 message_id=u'20191201142612.239.26134@somewhere.com')
worker_1 | 14:26:20 [INFO] sentry.mail: mail.sent (size=9760 message_id=u'20191201142612.239.13058@somewhere.com')

However, when I checked the Mailgun sending logs, I didn't see any of those messages :frowning:

Quick question: have you rebuilt your worker images with the new config? Maybe they are still using the default config, which would explain why you don't see the mails in Mailgun - they are never actually routed through it.
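
Rebuilding is roughly this (re-running ./install.sh also rebuilds the images):

docker-compose build
docker-compose up -d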

We did - the test email sent successfully. It's just the event-triggered alerts that are not sent :frowning:

How about the mail.from: '??' setting? Maybe it is still left at the default, root@localhost, which would cause Mailgun to reject the messages?
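
Worth a quick check - sentry/config.yml as in the onpremise repo, and the address below is just a placeholder:

grep 'mail.from' sentry/config.yml
# if it is missing or still root@localhost, set something Mailgun will accept, e.g.
#   mail.from: 'sentry@yourdomain.com'
# then rebuild the images and restart so the workers pick it up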

We've done some further investigation - the email alerts are triggered when an issue's status changes, but not when it first appears. I strongly suspect it might be an event-handling issue…


Is there anything we can help with here? Do you have any further logs? You may switch to debug-level logging by following this: https://docs.sentry.io/server/config/#logging

I’ll try debug-level logging and report back :slight_smile:

Updates:

For emails:
We've pretty much narrowed the issue down to new events. Status changes on existing issues trigger emails just fine, but for new events the worker shows no log at all (see the quick grep below).

For slack:
Pretty much not working at all…
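
For reference, this is how we looked for new-event activity in the worker - the task name is taken from the worker queue list above:

# status-change mails show up, but nothing at all for brand-new events
docker-compose logs worker | grep -E 'mail|events.save_event'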


@cwang - can you look at this: https://github.com/getsentry/onpremise/issues/287

Could it be that the users somehow got disabled for alerts?

@cwang - just merged a fix that should address both of these issues for now: https://github.com/getsentry/onpremise/pull/309

Slack may still have some issues, but at least it should work now. Emails should also work, without any issues.

Can confirm it's working now.

Thanks! Is it just the e-mails or Slack too?

Yes, it works too :slight_smile:
