Potential negative impact on capturing during HTTP requests


First of all, I really like Sentry, but there is one architectural fact I can't really wrap my head around. I'm sure I lack some understanding of the topic, so perhaps someone can explain it to me properly.

We're using PHP, and I really dislike the fact that errors are captured and sent to a different host (where Sentry will be running) during the HTTP request a customer is performing. At the moment we log to files with specific context and use the ELK stack with Filebeat to index and search them. If the Sentry host is unavailable or slow to respond, that will directly impact the HTTP requests our customers are performing. Am I missing something that would negate this issue? Is there a way to send traces to Sentry without blocking our own HTTP requests?

Thanks in advance.


I think these are quite fair concerns, and they are already addressed in the SDK. The default HTTP transport registers a shutdown function to send the payloads, ensuring this happens at the very end of the request (https://github.com/getsentry/sentry-php/blob/e85c7480e41e6f417d4207a4f68dd5a09b429a63/src/Transport/HttpTransport.php#L57). It also makes these requests asynchronously (https://github.com/getsentry/sentry-php/blob/master/src/Transport/HttpTransport.php#L81), so this should not affect any user-facing metric.
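To illustrate the idea (this is a simplified sketch, not the SDK's actual code; `queueEvent` and the queue variable are hypothetical names), the transport's trick is PHP's `register_shutdown_function`: the callback runs after the response has been generated, so any network I/O to Sentry happens after the customer's request is effectively done.

```php
<?php
// Sketch of the "defer sending to shutdown" pattern, assuming a simple
// in-memory queue. Names here are illustrative, not the Sentry SDK API.

$queuedEvents = [];

// Hypothetical helper standing in for the transport's send() method:
// it only queues the payload instead of performing network I/O now.
function queueEvent(array &$queue, string $payload): void
{
    $queue[] = $payload;
}

// Registered once: PHP invokes this callback at the end of script
// execution, after the response to the customer has been produced.
register_shutdown_function(function () use (&$queuedEvents) {
    foreach ($queuedEvents as $payload) {
        // The actual HTTP call to the Sentry host would happen here,
        // post-response, so it cannot slow down the user-facing request.
        // sendToSentry($payload);
    }
});

// During the request, capturing an error is just an array append.
queueEvent($queuedEvents, '{"message":"example error"}');
```

Note that "after the response" depends on the SAPI: under PHP-FPM, pairing this with `fastcgi_finish_request()` guarantees the client connection is closed before the shutdown callbacks run.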

There’s also a spool interface with a MemorySpool implementation if you want to queue the events in memory and send them later on: https://github.com/getsentry/sentry-php/blob/e7fb87f840ab026e9ee9444831b6c7c571f06ed0/src/Spool/MemorySpool.php
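The spool concept boils down to queue-now, flush-later. A minimal sketch of that idea (the class and method names below are illustrative, not the SDK's exact interface):

```php
<?php
// Sketch of an in-memory spool: hold events during the request and
// flush them in one batch later. Not the Sentry SDK's actual classes.

final class InMemorySpool
{
    /** @var string[] Queued event payloads awaiting delivery. */
    private array $events = [];

    // Called during the request: cheap, no network I/O.
    public function queue(string $event): void
    {
        $this->events[] = $event;
    }

    // Called later (e.g. at shutdown): hands each queued event to a
    // sender callback and empties the spool.
    public function flush(callable $sender): void
    {
        while ($this->events !== []) {
            $sender(array_shift($this->events));
        }
    }
}

$spool = new InMemorySpool();
$spool->queue('event-1');
$spool->queue('event-2');

// In real use the sender would POST to the Sentry host; here we just
// collect the flushed events to show the flow.
$sent = [];
$spool->flush(function (string $e) use (&$sent) { $sent[] = $e; });
```

The trade-off is the usual one for in-memory spools: if the process dies before the flush, the queued events are lost.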

@BYK Thanks for the explanation! This addresses my concerns perfectly.