No data in Performance Monitoring

Hi everyone,

I’ve added Sentry’s PHP SDK for Error Tracking and the Sentry CDN bundle for Performance Monitoring. Although I attach the following code in my footer (after the CDN is loaded), I am only getting JavaScript errors and no performance-related data. I tried to find more information in the docs about how to enable performance monitoring, but it seemed unclear, and maybe I’m doing something wrong? Any feedback?

<script>
  Sentry.init({
    dsn: '[REDACTED]',
    integrations: [
      new Sentry.Integrations.Tracing(),
    ],
    tracesSampleRate: 1.0,
  });
</script>

Thank you,
Brandin.

I can’t comment on if and how you enable this in self-hosted, but as a first step you can confirm that it’s sending an event at all by looking in your browser’s network log.

As far as I can tell, it is sending an event; the API is returning a 200 status code:

[screenshot: browser network log showing the request returning a 200 response]

Which version of Sentry on-premise are you using? v20.7.0 had a bug related to performance, but we quickly resolved it in v20.7.1, so that or something more recent should get you the performance information.

Also make sure you updated your config file from https://github.com/getsentry/onpremise/blob/4dbfcbcebe9d7ea54bd009bb85a331dc6ef51295/sentry/sentry.conf.example.py as it has some changes to enable performance.
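
For reference, the performance-related change in that example config is in the SENTRY_FEATURES block. Roughly, it looks like the sketch below; treat the exact feature names in the linked file as authoritative rather than this excerpt, which I’m writing from memory:

```python
# sentry.conf.py -- rough excerpt; copy the exact feature list from the
# linked sentry.conf.example.py rather than from this sketch.
SENTRY_FEATURES.update(
    {
        feature: True
        for feature in (
            "organizations:discover-basic",
            "organizations:performance-view",  # exposes the Performance tab
        )
    }
)
```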

Hi BYK!

I am on 20.7.0, so that makes sense. I’ll complete this upgrade and report back.

Brandin.


Hi again team - reporting back:

I did the upgrade to 20.8.0 (dev) and ensured my config file was updated, but unfortunately no data (old or new) seems to be available. I confirmed after the change that the data is being sent to Sentry as expected and that it returns a 200 code with an ID.

I’d appreciate any other thoughts on what I could look into!

Brandin.

Hi @brandinarsenault - since I was able to get this to work locally, I don’t know what else may be going on. What do you see on the Performance tab?

Hello again. Here is all I see when I open the performance screen:

Transactions seem to be submitted to the API, and console errors are being logged to Issues properly. Here is the code I am using:

Thank you,
Brandin.

Can you share your relay, nginx and sentry-web logs to verify this?

I actually have the exact same issue as @brandinarsenault. I’m on 20.9.0.

I created a simple test route in PHP (Laravel):

use Sentry\Tracing\SpanContext;
use Sentry\Tracing\TransactionContext;

Route::get('/test1', function () {
    // Start a transaction for this request
    $transactionContext = new TransactionContext();
    $transactionContext->setName('External Call');
    $transactionContext->setOp('http.caller');
    $transaction = \Sentry\startTransaction($transactionContext);

    // Start a child span for "functionX"
    $spanContext = new SpanContext();
    $spanContext->setOp('functionX');
    $span1 = $transaction->startChild($spanContext);

    // Calling functionX
    echo 'Fun';
    $span1->finish();

    // Finish the transaction so it gets sent to Sentry
    $transaction->finish();

    return 'Hello!';
});

I’ve also run the CLI command to test

root@6bf8ed752d57:/usr/local/app# php artisan sentry:test --transaction
[Sentry] DSN discovered!
[Sentry] Generating test Event
[Sentry] Sending test Event
[Sentry] Sending test Transaction
[Sentry] Event sent with ID: 62563ad6ffb24a8e8ed4eb1177e3a6a3

I see the event coming into nginx

10.255.0.13 - - [30/Sep/2020:20:51:53 +0000] "POST /api/37/envelope/ HTTP/1.1" 200 41 "-" "sentry.php.laravel/2.0.0" "<my ip address>"
10.255.0.4 - - [30/Sep/2020:20:51:53 +0000] "POST /api/37/envelope/ HTTP/1.1" 200 41 "-" "sentry.php.laravel/2.0.0" "<my ip address>"

There’s nothing in the relay log (it’s empty).

The sentry-web log just shows:

20:54:13 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/organizations/<org>/performance/?project=37' method=u'GET' ip_address=u'10.255.0.4')
20:54:14 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/assistant/?v2' method=u'GET' ip_address=u'10.255.0.4')
20:54:14 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/?detailed=0' method=u'GET' ip_address=u'10.255.0.2')
20:54:14 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/internal/health/' method=u'GET' ip_address=u'10.255.0.13')
20:54:14 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/?member=1' method=u'GET' ip_address=u'10.255.0.2')
20:54:14 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/projects/?all_projects=1' method=u'GET' ip_address=u'10.255.0.13')
20:54:14 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/teams/' method=u'GET' ip_address=u'10.255.0.4')
20:54:15 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/broadcasts/' method=u'GET' ip_address=u'10.255.0.2')
20:54:15 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/projects/?per_page=50' method=u'GET' ip_address=u'10.255.0.13')
20:54:15 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/tags/?statsPeriod=14d&use_cache=1' method=u'GET' ip_address=u'10.255.0.4')
20:54:16 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/events-stats/?interval=5m&project=37&query=event.type%3Atransaction&yAxis=apdex(300)&yAxis=epm()&statsPeriod=24h' method=u'GET' ip_address=u'10.255.0.2')
20:54:16 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/events-meta/?statsPeriod=24h&project=37&query=event.type%3Atransaction' method=u'GET' ip_address=u'10.255.0.13')
20:54:16 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/eventsv2/?statsPeriod=24h&project=37&field=transaction&field=project&field=epm()&field=p50()&field=p95()&field=failure_rate()&field=apdex(300)&field=count_unique(user)&field=user_misery(300)&sort=-epm&per_page=50&query=event.type%3Atransaction' method=u'GET' ip_address=u'10.255.0.4')
20:54:16 [INFO] sentry.superuser: superuser.request (user_id=3 url=u'http://<server>/api/0/organizations/<org>/tags/?statsPeriod=24h&use_cache=1&project=37' method=u'GET' ip_address=u'10.255.0.2')

This is from ingest-consumer:

20:54:16 [INFO] batching-kafka-consumer: Flushing 2 items (from {(u'ingest-transactions', 0): [42L, 43L]}): forced:False size:False time:True
20:54:16 [INFO] batching-kafka-consumer: Worker flush took 31ms

When I run the following URL manually:

http://<server>/api/0/organizations/<org>/eventsv2/?statsPeriod=24h&project=37&field=transaction&field=project&field=epm()&field=p50()&field=p95()&field=failure_rate()&field=apdex(300)&field=count_unique(user)&field=user_misery(300)&sort=-epm&per_page=50&query=event.type%3Atransaction

all I get is:

{"meta":{},"data":[]}

I can confirm this problem still exists for me, but I haven’t had the time to check my logs yet.

I was able to figure it out. I’m running this in Docker Swarm, so my setup doesn’t go through install.sh, which is the officially supported method from Sentry. Basically, I cross-compare what changed between the last tag and the latest tag and slowly update everything in Swarm (I can document this for others).

I went to the base sentry github and found this issue: https://github.com/getsentry/sentry/issues/20435

It made me realize I was missing both of these entries in my stack file:

  # Kafka consumer responsible for feeding session data into Clickhouse
  snuba-sessions-consumer:
    <<: *snuba_defaults
    command: consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
  # Kafka consumer responsible for feeding transactions data into Clickhouse
  snuba-transactions-consumer:
    <<: *snuba_defaults
    command: consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750

Once I added them I monitored snuba-transactions-consumer and saw:

2020-09-30 22:18:40,616 New partitions assigned: {Partition(topic=Topic(name='events'), index=0): 106213}
2020-09-30 22:18:46,167 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:18:51,321 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:18:52,933 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:19:25,151 Completed processing <Batch: 2 messages, open for 1.15 seconds>.
2020-09-30 22:19:46,260 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:19:48,863 Completed processing <Batch: 2 messages, open for 1.01 seconds>.
2020-09-30 22:20:25,886 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:20:29,612 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:20:32,689 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:21:09,783 Completed processing <Batch: 2 messages, open for 1.00 seconds>.
2020-09-30 22:21:13,149 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:21:46,319 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:21:53,003 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:21:59,397 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:22:31,465 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:22:54,054 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:23:04,539 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:23:34,027 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:23:50,282 Completed processing <Batch: 1 message, open for 1.00 seconds>.
2020-09-30 22:24:09,427 Completed processing <Batch: 2 messages, open for 1.01 seconds>.

I then waited a bit, generated another transaction, and now I have performance data.

Note that the old performance data never came back. It seems that without the Snuba consumers the data is trashed, which is fine.


I’d guess this is Kafka not storing the messages long enough: Relay publishes these events and they just stay there until they expire or a consumer processes them.
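
If you want to check how long Kafka keeps messages around, something like this rough sketch should work (it uses the confluent-kafka Python client; the broker address and topic name are assumptions about your deployment):

```python
# Rough sketch: inspect the retention settings of a Kafka topic.
# Broker address and topic name are assumptions about the setup.
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
resource = ConfigResource(ConfigResource.Type.TOPIC, "events")

# describe_configs() returns a future per resource.
for res, future in admin.describe_configs([resource]).items():
    configs = future.result()
    print("retention.ms:", configs["retention.ms"].value)
    print("retention.bytes:", configs["retention.bytes"].value)
```

If the retention window is shorter than the time the consumer was down, the unconsumed transactions would simply have expired.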
