Events seem to be processed, but not showing up on the web

It seems I have an issue with issues not showing up in my Sentry 21.5.0.dev0 on-premise installation.

To start with, performance monitoring is working fine and shows all the telemetry. Other events, however, seem to be processed (e.g. I receive notifications from integrations), but they are nowhere to be found in the UI.

There is either nothing worrying in the logs, or I don’t know what to look for. The only thing that seems slightly suspicious to me is this message:

ingest-consumer_1                           | 09:15:16 [INFO] batching-kafka-consumer: Flushing 4 items (from {('ingest-events', 0): [1122, 1124], ('ingest-transactions', 0): [1183, 1183]}): forced:False size:False time:True
ingest-consumer_1                           | 09:15:16 [INFO] batching-kafka-consumer: Worker flush took 123ms

I don’t remember whether that message has always appeared there.

What I’ve tried so far:

  1. restarting the whole setup (docker-compose down && docker-compose up)

  2. upgrading via install.sh (currently I’m on Sentry 21.5.0.dev0)

  3. removing the following volumes and upgrading/restarting

    docker volume rm sentry_onpremise_sentry-zookeeper-log
    docker volume rm sentry_onpremise_sentry-clickhouse-log
    docker volume rm sentry_onpremise_sentry-kafka-log
    docker volume rm sentry_onpremise_sentry-secrets
    docker volume rm sentry_onpremise_sentry-smtp
    docker volume rm sentry_onpremise_sentry-smtp-log
    docker volume rm sentry-clickhouse
    docker volume rm sentry-redis
    docker volume rm sentry-kafka
    docker volume rm sentry-symbolicator
    docker volume rm sentry-zookeeper

The third step obviously left me with no past issues; from then on, only performance events have been showing up.
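For reference, this is roughly how I send the test event from Python (a minimal sketch; the DSN is a placeholder for my real, redacted one):

    import sentry_sdk

    # Placeholder DSN for my on-premise instance (project 15, matching the
    # /api/15/store/ calls in the nginx log below); the real value is redacted.
    sentry_sdk.init(dsn="https://<key>@sentry.example.com/15")

    sentry_sdk.capture_message("manual test message")

    # Make sure the event is actually sent before the script exits.
    sentry_sdk.flush()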

The logs from trying to capture a message manually (sentry_sdk.capture_message()) look like this:

nginx_1                                     | 10.13.1.1 - - [28/Apr/2021:09:21:36 +0000] "POST /api/15/store/ HTTP/1.0" 200 41 "-" "-" "REDACTED, REDACTED"
nginx_1                                     | 10.13.1.1 - - [28/Apr/2021:09:21:36 +0000] "POST /api/15/store/ HTTP/1.0" 200 41 "-" "-" "REDACTED, REDACTED"
ingest-consumer_1                           | 09:21:37 [INFO] batching-kafka-consumer: Flushing 2 items (from {('ingest-events', 0): [1251, 1252]}): forced:False size:False time:True
ingest-consumer_1                           | 09:21:37 [INFO] batching-kafka-consumer: Worker flush took 46ms
snuba-transactions-consumer_1               | 2021-04-28 09:21:39,052 Completed processing <Batch: 2 messages, open for 1.01 seconds>.
clickhouse_1                                | 2021.04.28 09:21:39.063485 [ 91 ] {e870dbc7-e788-4f9e-977e-174ecc18c9e5} <Information> executeQuery: Read 2 rows, 124.00 B in 0.015 sec., 134 rows/sec., 8.16 KiB/sec.
clickhouse_1                                | 2021.04.28 09:21:39.066544 [ 91 ] {} <Information> HTTPHandler: Done processing query
snuba-outcomes-consumer_1                   | 2021-04-28 09:21:39,073 Completed processing <Batch: 2 messages, open for 1.04 seconds>.
snuba-consumer_1                            | 2021-04-28 09:21:39,103 Completed processing <Batch: 2 messages, open for 1.07 seconds>.
clickhouse_1                                | 2021.04.28 09:21:39.098959 [ 89 ] {8a65cae4-3cfb-4adf-a381-713ca264c520} <Information> executeQuery: Read 2 rows, 7.49 KiB in 0.053 sec., 37 rows/sec., 140.15 KiB/sec.
clickhouse_1                                | 2021.04.28 09:21:39.099720 [ 89 ] {} <Information> HTTPHandler: Done processing query
clickhouse_1                                | 2021.04.28 09:21:39.967588 [ 101 ] {} <Information> TCPHandler: Processed in 0.010 sec.
nginx_1                                     | 10.13.1.1 - - [28/Apr/2021:09:21:40 +0000] "POST /api/15/store/ HTTP/1.0" 200 41 "-" "-" "REDACTED, REDACTED"
ingest-consumer_1                           | 09:21:41 [INFO] batching-kafka-consumer: Flushing 1 items (from {('ingest-events', 0): [1253, 1253]}): forced:False size:False time:True
ingest-consumer_1                           | 09:21:41 [INFO] batching-kafka-consumer: Worker flush took 25ms
snuba-subscription-consumer-events_1        | 2021-04-28 09:21:42,652 Flushing 2 items (from {Partition(topic=Topic(name='events'), index=0): Offsets(lo=2472, hi=2474)})
snuba-transactions-consumer_1               | 2021-04-28 09:21:42,656 Completed processing <Batch: 1 message, open for 1.01 seconds>.
snuba-subscription-consumer-events_1        | 2021-04-28 09:21:42,653 Worker flush took 0ms
clickhouse_1                                | 2021.04.28 09:21:42.693515 [ 90 ] {086a0dc1-cdcb-4853-b089-072d51b557ba} <Information> executeQuery: Read 1 rows, 71.00 B in 0.032 sec., 31 rows/sec., 2.19 KiB/sec.
clickhouse_1                                | 2021.04.28 09:21:42.697205 [ 90 ] {} <Information> HTTPHandler: Done processing query
snuba-outcomes-consumer_1                   | 2021-04-28 09:21:42,702 Completed processing <Batch: 1 message, open for 1.05 seconds>.
clickhouse_1                                | 2021.04.28 09:21:42.728634 [ 91 ] {718666db-eac4-4f1e-b33f-a3bb2b336f47} <Information> executeQuery: Read 1 rows, 3.91 KiB in 0.063 sec., 15 rows/sec., 62.33 KiB/sec.
clickhouse_1                                | 2021.04.28 09:21:42.730243 [ 91 ] {} <Information> HTTPHandler: Done processing query
snuba-consumer_1                            | 2021-04-28 09:21:42,739 Completed processing <Batch: 1 message, open for 1.09 seconds>.
clickhouse_1                                | 2021.04.28 09:21:42.949681 [ 101 ] {} <Information> TCPHandler: Processed in 0.010 sec.
snuba-subscription-consumer-transactions_1  | 2021-04-28 09:21:43,651 Flushing 2 items (from {Partition(topic=Topic(name='events'), index=0): Offsets(lo=2472, hi=2474)})
snuba-subscription-consumer-transactions_1  | 2021-04-28 09:21:43,652 Worker flush took 0ms

Any help with this would be appreciated.


We had a similar problem upgrading from 21.2 to 21.3. The issue for us was that, after switching to 21.3, Sentry started querying events from errors_local even though our consumers were not writing to that ClickHouse table. We fixed this by:

  1. Upgrading to 21.4 from 21.3, which includes a migration to backfill errors_local from sentry_local
  2. Switching snuba-consumer and snuba-replacer to write to errors rather than events (see the sketch after this list)
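The relevant part of docker-compose.yml ends up looking roughly like this (a sketch only; the flags other than --storage follow the stock onpremise file and may differ in your version):

    snuba-consumer:
      <<: *snuba_defaults
      # The important change is --storage errors instead of --storage events
      command: consumer --storage errors --auto-offset-reset=latest --max-batch-time-ms 750
    snuba-replacer:
      <<: *snuba_defaults
      command: replacer --storage errors --auto-offset-reset=latest --max-batch-size 3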

@rma-stripe, thank you very much, that is exactly the solution to my problem.

That was exactly my case. It turns out I had been shuffling the images back and forth but kept the old docker-compose.yml, where the storage for snuba-consumer and snuba-replacer still pointed to --storage events, just like in your case.
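A quick way to spot this, in case anyone else is in the same situation (just a sketch, assuming you run it from the onpremise checkout):

    # Any hits mean the compose file still uses the old events storage:
    grep -n "storage events" docker-compose.yml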

So, tl;dr for anyone: if you are upgrading Sentry, please make sure you use the latest version from the repo, and don’t just swap the images, as I did :wink:
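In practice that means something like this, rather than only pulling new images (a rough sketch, assuming the stock getsentry/onpremise checkout):

    cd onpremise
    git pull              # picks up the updated docker-compose.yml, not just new images
    ./install.sh          # re-runs migrations and rebuilds the services
    docker-compose up -d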

