Almost: the Sentry server crashing and not accepting events in a timely manner is certainly what triggered your app going down.
But the more direct cause is likely that all available request handlers (php-fpm workers or php-cgi threads) were saturated by hanging requests waiting for Sentry to either time out or accept the event, which caused the server to stop accepting new requests (or take a very long time accepting them). I doubt much I/O is involved when sending Sentry events, since the SDK doesn’t use the disk to write or send them.
No. But it is bad if warnings occur in a “normal” request. My advice is that warnings should not happen in production (the same goes for notices and any other error level). I would say your app should not generate any Sentry events under normal circumstances.
Correct, it shouldn’t, but if that is the “normal” for your app, you could consider either not logging warnings until they’re resolved, or applying sampling so that not all events are sent to Sentry.
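A minimal sketch of both options, using the `error_types` and `sample_rate` options of the PHP SDK (the DSN below is a placeholder):

```php
<?php

declare(strict_types=1);

// Placeholder DSN; replace with your project's DSN.
\Sentry\init([
    'dsn' => 'https://examplePublicKey@o0.ingest.sentry.io/0',

    // Option 1: stop turning warnings and notices into Sentry events
    // until they are resolved.
    'error_types' => E_ALL & ~E_WARNING & ~E_NOTICE & ~E_DEPRECATED,

    // Option 2: sample error events, sending only ~10% of them to Sentry.
    'sample_rate' => 0.1,
]);
```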
I doubt it, since the HttpTransport already waits until the end of the request to send events, and it would not send fewer events or batch them, so I don’t think it would have helped.
It can, but it requires you to create your own transport when you initialise the client; there are no options you can set for this.
We did recently merge a PR that sets more sensible timeouts by default, but it looks like that has not been released yet, so keep an eye out for that, or replace the transport so you can set your own timeouts.
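As a rough sketch of setting your own timeouts by swapping in a custom HTTP client (this assumes the Guzzle 6 Httplug adapter and the 2.x `ClientBuilder` API; method names may differ in your SDK version, and the DSN is a placeholder):

```php
<?php

declare(strict_types=1);

use GuzzleHttp\Client as GuzzleClient;
use Http\Adapter\Guzzle6\Client as GuzzleAdapter;
use Sentry\ClientBuilder;
use Sentry\SentrySdk;

// Guzzle client with strict timeouts so a slow or unreachable Sentry
// server cannot keep php-fpm workers hanging for long.
$httpClient = new GuzzleAdapter(new GuzzleClient([
    'connect_timeout' => 2, // seconds to establish the connection
    'timeout'         => 2, // seconds for the whole request
]));

$builder = ClientBuilder::create([
    'dsn' => 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
]);
$builder->setHttpClient($httpClient);

// Bind the custom client to the current hub so captured events use it.
SentrySdk::getCurrentHub()->bindClient($builder->getClient());
```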
Sending many events from the PHP client is usually not good for performance. We can’t really get around that: PHP is a single-threaded language, so there is only so much we can do to keep requests fast while still transmitting all events, short of using something like an external queue (which we don’t want to require). Sentry should let you know when something is wrong, so those warnings are probably a good thing to receive, but they are also a good thing to fix; if that firehose of warnings is resolved, Sentry will likely work much better for you.
I hope my answers help a bit; let me know if you have follow-up questions.