Getting 400s from other containers

Hello,

I use hosted Sentry in production. I’m currently working on dockerizing our development environments, including running Sentry’s official Docker images (getsentry/onpremise), to get as close to production as possible.

On one side I have the composed getsentry/onpremise stack with all defaults.
The web interface works great, and submitting events to it through sentry-cli from my host with the DSN works as expected.

On the other side I have a composed stack including PHP 5.6.x and Node 6.x LTS. As these are old versions, they need to use sentry/sentry ^1 (for PHP, rather than the current 2.x) and raven ^2 (for Node, rather than the current unified JavaScript SDK).

These submit events properly to the hosted Sentry instance.

They are connected to the Sentry network using docker network connect onpremise_default exampleapp (where exampleapp is my app container).
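
For reference, the PHP side is just the stock sentry/sentry 1.x (Raven) setup, roughly like this (a sketch; the DSN and message are placeholders):

  <?php
  // Minimal Raven 1.x setup as used in the PHP 5.6 container.
  // The DSN below is a placeholder pointing at hosted Sentry, where this works fine.
  require 'vendor/autoload.php';

  $client = new Raven_Client('https://foo:bar@sentry.io/1234');
  $client->install(); // registers the error, exception and shutdown handlers

  // Manual test event, same path the app uses for real errors.
  $client->captureMessage('Test event from the dockerized app');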

But when reporting to the local Docker DSN, the requests 400 and the events never get saved in Sentry. All that shows up in the Sentry container’s logs is:

  • For the PHP client: "POST /api/2/store/ HTTP/1.1" 400 233 "-" "sentry-php/1.10.0"
  • For the Node client: "POST /api/2/store/ HTTP/1.1" 400 233 "-" "-"

For comparison, when sending from sentry-cli send-event to the same DSN, the logs show:
"POST /api/2/store/ HTTP/1.1" 200 366 "-" "-".

I thought this could be some config/version thing because they are the old (deprecated) clients, even though those old clients work fine on the hosted DSN…

Would anyone have pointers as to what could be the issue?

Thanks for your help!

A 400 usually indicates an incorrect DSN or key. The older clients used to need private keys instead of the public ones; maybe that’s your issue?

The API returns an X-Sentry-Error response header if the event is rejected by the store endpoint. Take a look at the message there.
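
For example, a quick way to surface that header from inside the app container is a bare curl request to the store endpoint. This is only a debug sketch: the host, keys, and project id are placeholders, and the payload is deliberately minimal since the point is to read the response headers, not to get a 200.

  <?php
  // Debug sketch: POST a minimal payload straight to the store endpoint and
  // dump the response headers so any X-Sentry-Error message becomes visible.
  $storeUrl  = 'http://localhost:9000/api/2/store/';  // placeholder host/project
  $publicKey = 'foo';                                 // placeholder keys
  $secretKey = 'bar';

  // Auth header per the Sentry client protocol.
  $auth = sprintf(
      'X-Sentry-Auth: Sentry sentry_version=7, sentry_client=debug/0.0, sentry_key=%s, sentry_secret=%s',
      $publicKey,
      $secretKey
  );

  $ch = curl_init($storeUrl);
  curl_setopt_array($ch, array(
      CURLOPT_POST           => true,
      CURLOPT_POSTFIELDS     => json_encode(array('message' => 'header debug')),
      CURLOPT_HTTPHEADER     => array($auth, 'Content-Type: application/json'),
      CURLOPT_HEADER         => true,  // include response headers in the output
      CURLOPT_RETURNTRANSFER => true,
  ));

  echo curl_exec($ch), PHP_EOL;
  echo 'HTTP status: ', curl_getinfo($ch, CURLINFO_HTTP_CODE), PHP_EOL;
  curl_close($ch);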

That’s a good point: comparing with the DSN for hosted Sentry (which receives events as expected), that one is of the format https://foo:bar@sentry.io/1234, whereas I was using the one provided in the setup wizard, which is https://bar@sentry.io/1234 only.

So I updated the DSN with the foo:bar one from http://localhost:9000/settings/sentry/projects/exampleproject/keys/ under “DSN (Deprecated)”.
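
For anyone comparing, the difference between the two formats looks like this (keys and project id are placeholders):

  <?php
  // Illustration of the two DSN formats for this project.
  require 'vendor/autoload.php';

  // New-style DSN from the setup wizard: public key only (what the 2.x SDKs use).
  $newDsn = 'http://bar@localhost:9000/123';

  // "DSN (Deprecated)" from Project Settings -> Client Keys (DSN):
  // public and secret key, which the old 1.x clients expect.
  $deprecatedDsn = 'http://foo:bar@localhost:9000/123';

  $client = new Raven_Client($deprecatedDsn);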

Unfortunately the same still happens :confused:

Note for anyone following along that I needed to change the DSN from the provided http://foo:bar@localhost:9000/123 to http://foo:bar@onpremise_web_1:9000/123 (where onpremise_web_1 is the container name).

Thanks, I’m looking for that header now, currently stepping through server/vendor/sentry/sentry/lib/Raven/Client.php to catch the response and see the extra headers. Unless I’m misunderstanding your suggestion?

Catching the response with all headers, I’m getting this:

HTTP/1.1 400 BAD REQUEST
Content-Length: 26
X-XSS-Protection: 1; mode=block
Content-Language: en
X-Content-Type-Options: nosniff
Vary: Accept-Language
X-Frame-Options: deny
Content-Type: text/html

<h1>Bad Request (400)</h1>

I believe that version of the sdk has something like “getLastError” exposed which should automatically pick that up. I might be wrong about the exact function call.

If you’re seeing a response without that header, then it’s likely the request isn’t getting to Sentry, or is somehow getting routed incorrectly. Skimming through the original post, I don’t know why either of those would be true, though.

I believe that version of the sdk has something like “getLastError” exposed which should automatically pick that up. I might be wrong about the exact function call.

It does! But it’s not helpful :grimacing: $client->getLastError() returns an empty string and $client->getLastSentryError() returns null.

I was able to get the response with headers by adding $options[CURLOPT_HEADER] = true; in \Raven_Client::send_http_synchronous. Otherwise just the <h1>Bad Request (400)</h1> body was provided.

Actually, I think the routing idea might be right.

This is the logged request sent from PHP (Node is identical); it fails.

172.25.0.10 - - [15/Aug/2019:19:12:48 +0000] "POST /api/2/store/ HTTP/1.1" 400 233 "-" "sentry-php/1.10.0"

This is sent from sentry-cli. It works.

172.25.0.1 - - [15/Aug/2019:19:12:54 +0000] "POST /api/2/store/ HTTP/1.1" 200 366 "-" "-"

Notice that the two requests come from different source IPs.

We’ve already established that http://foo:bar@localhost:9000/123 doesn’t work container-to-container, since localhost there refers to the app container itself. http://foo:bar@onpremise_web_1:9000/123 is what we’ve been trying (and failing) with.

But comparing the two log lines above and using the source IP from the successful request, http://foo:bar@172.25.0.1:9000/123… Tadaa, it works!

So:

  • Fails: the DSN pointing at onpremise_web_1 (that request shows up in the log from 172.25.0.10).
  • Works: the DSN pointing at 172.25.0.1, which is the Docker network gateway.

We don’t want to hardcode IPs though, so I just have to figure out which name I should be using…
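
As a sanity check, a hypothetical snippet like this run inside the app container shows what each candidate name resolves to:

  <?php
  // gethostbyname() returns the resolved IP, or the hostname unchanged if it
  // cannot be resolved. Add whichever candidate names you want to test.
  foreach (array('localhost', 'onpremise_web_1') as $host) {
      printf("%-20s -> %s\n", $host, gethostbyname($host));
  }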

If this is from a container, I think http://foo:bar@web:9000/123 is what you need?


It is. I should have known that :man_facepalming: Thank you!


So, to recap for anyone Googling and ending up here, there were two issues:

  • Old SDKs need to use the deprecated DSN format that includes the secret key, e.g. http://foo:bar@localhost:9000/123. This is available in Project Settings -> Client Keys (DSN) -> DSN (Deprecated).

  • To send events from another Docker container:

    1. Connect them on the same network with
      docker network connect <sentry-network> <container-name>.
      By default, <sentry-network> is onpremise_default, and <container-name> is whatever your other container is named.
    2. In the DSN that Sentry provides, replace localhost with the Sentry web service name from docker-compose.yml, which is probably web.
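
Putting the two together, a minimal working sketch from the app container looks like this (keys and project id are placeholders; web is the service name from onpremise’s docker-compose.yml):

  <?php
  // Send a test event to the dockerized Sentry from another container that has
  // been connected to the onpremise_default network.
  require 'vendor/autoload.php';

  $client = new Raven_Client('http://foo:bar@web:9000/123');
  $client->install();

  $client->captureMessage('Hello from the exampleapp container');

  // If anything still fails, these expose the last transport/Sentry error.
  var_dump($client->getLastError());
  var_dump($client->getLastSentryError());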

Thanks to @BYK and @zeeg for guiding me towards the solution :slight_smile:
