Max retries exceeded with url /tests/events/eventstream

I have Sentry running in three separate Docker containers and it’s been nothing but a headache. Inexplicable, undescriptive errors. Which is ironic for an application that keeps track of errors.

Eventually, after four hours of tinkering, I finally got it to show an HTML page, and would you know it: “API Error, the team has been informed”. The upgrade script took far longer than necessary (partly because the memory management is so awful it was probably swapping half of what it needed).

Error on my part: I forgot to configure the port of the SMTP server. But Sentry nevertheless happily reported that emails were being sent. So good job there, that really didn’t take long to figure out or anything.

Now the server has been running for about three hours and it has managed to generate nearly 90,000 errors in “internal”. No idea what “internal” is; the tutorial is literally not present.

Wanting to look into what these errors might be, I got another “internal server error”. Nice!

Resorting to a sane way of checking errors, I looked at the Docker logs of the worker container. It took so long that I had a coffee, played a game of pool, and went to a meeting before Docker finally gave me anything.

Overall a really horrendous experience.

Hi there. I’m sorry you were having issues running our free software. We do our best.

I could spend my time addressing some of the points, but a lot of it can be summarized as “writing software is hard.”

Handling errors in an error-handling product is even harder. Yeah, it kinda is ironic. But it tends to become an endless loop sometimes, so it’s a bit tricky to do better. We can’t just, like, hook Sentry up to itself. We try to log to ourselves, but when things are misconfigured and broken, that spirals out into other unforeseen issues.

My bad.

@tvanrielwendrich sorry you are having trouble running Sentry. We do provide a full-fledged docker-compose-based setup over at so maybe that can help you figure out some of the things that are missing.

Sentry is a large application with a lot of moving parts, so it is not easy to set up and maintain a local instance; hence our SaaS offering, if maintaining your own instance is not your cup of tea.

Finally, I’d like to remind you that this is a community forum for free and open-source software, hosted as a courtesy so that people can help each other, not a venting channel. As much as we’d like to hear about any troubles you have and try to address them, we need them in concrete and clear form, and we won’t be tolerating abusive language or behavior.

docker-compose won’t work for me, as docker-compose incorrectly assumes that you always need a virtual network, making it impossible to access containers created by Compose from another container that was not created by Compose. I run multiple applications on the same server and I only have one port 443 to open. Instead, I use dnsdock, which assigns a DNS zone to Docker containers and then lets them communicate via a clear DNS address. All applications on the machine sit behind an nginx reverse proxy, which itself runs in Docker too.
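For what it’s worth, this scenario is usually handled with Compose’s external networks feature: services are attached to a network created outside of Compose, so a standalone container (such as an nginx reverse proxy) on the same network can reach them by name through Docker’s embedded DNS. A minimal sketch, assuming a network created beforehand with `docker network create shared-web` (the service and network names here are placeholders, not anything from the thread):

```yaml
# Sketch only: attach a Compose service to a pre-existing network
# so containers started outside of Compose can reach it by name.
# Assumes: docker network create shared-web
version: "3"
services:
  web:
    image: getsentry/sentry:9.1.2   # example tag; adjust as needed
    networks:
      - shared-web
networks:
  shared-web:
    external: true
```

With this, a non-Compose container joined to `shared-web` can address the service simply as `web`.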

we won’t be tolerating abusive language or behavior.

Maybe this is the “Blunt Dutchmen” problem, but at no point did I swear or call anyone out personally. I tried to come across as reasonable, but at the time of writing the post the software only gave me errors upon errors upon errors, and that got me frustrated.

As a software programmer I know writing software is not easy. And after writing my post I realised that the version in the Docker library marked as “stable” is not the stable version, but rather the development version.

My sincere apologies if any of this is offensive to you. But it is usual in Docker for the :latest tag to contain the latest stable version (in git terms, the master branch). I might be too accustomed to Docker to immediately assume that :latest is stable, but it sure seemed to me that the stable version was horribly broken, which, after 7 hours of seeing errors, made me write this post.

After my post I saw the version stamp in the bottom-left corner and noticed the “dev” tag in there. That seemed off to me, since I never run development versions of something that I require to be stable.

I reverted to the :9.2.1 tag on the Docker container and got most of it working. No more recursive error reporting clogging the whole server, although the server still has most of its RAM sitting idle.

I’m no Python expert, but I think it might be possible to split the migration script into subtasks and fork them off to separate processes, forcing the RAM to be released every once in a while. Otherwise it might be wise to take the sentry-cli approach and write it in Rust, to lower the hardware requirements of this software.
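The fork-per-subtask idea can be sketched with Python’s standard-library multiprocessing module: with `maxtasksperchild=1`, each worker process is replaced after a single task, so whatever memory a task allocated is returned to the OS when the child exits. All names below are hypothetical; this is not Sentry’s actual migration code, just an illustration of the technique.

```python
import multiprocessing as mp

def run_migration_chunk(chunk_id):
    # Hypothetical stand-in for one batch of migration work.
    # Memory allocated here dies with the child process.
    scratch = [bytearray(1024) for _ in range(1000)]
    return (chunk_id, len(scratch))

def run_all(chunks):
    # maxtasksperchild=1: each worker handles one chunk, then exits,
    # so peak memory stays bounded by a single chunk's working set
    # instead of accumulating across the whole migration.
    with mp.Pool(processes=2, maxtasksperchild=1) as pool:
        return pool.map(run_migration_chunk, chunks)

if __name__ == "__main__":
    print(run_all(range(4)))  # → [(0, 1000), (1, 1000), (2, 1000), (3, 1000)]
```

`pool.map` preserves input order, so the results come back in chunk order even though the workers are recycled between tasks.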

I didn’t mean that you should use that version, but at least look at that setup to see how we configure the multiple services needed to get Sentry up; that should remove the guesswork. You should be able to use that configuration with Docker Swarm, or even port it to Kubernetes if you want to. Or you can keep doing what you are doing now, in all-manual mode :slight_smile:

Calling people out or swearing are some of the most extreme forms of offensive or abusive speech, and if you had done any of those, you would have gotten a ban from the forums instead of an answer. If you keep in mind that Sentry works properly on and for many other people using the open-source on-premise version, you can arrive at the conclusion that something is wrong with your specific setup. Coming into the forums and ranting about it is not the best way to seek help in remedying or understanding those issues.

This part is definitely not offensive. The advertised and official Docker image is and the :latest tag does point to 9.1.2 (along with 9.1 and 9) and is stable. I’m guessing you ended up with our experimental images, which are built from the latest master as nightlies. They are also stable, but since there are significant infrastructure changes after version 9, they need a different, more involved setup (and new docs). The new setup is tracked at and the documentation updates will be in place once this becomes the default Sentry version.

This was due to the initial migrations requiring a lot of memory, as you pointed out, and we have just fixed that issue:

I’ll be updating the 3GB RAM recommendation in the docs, and its enforcement in the install script, soon, once we have better data around the new migrations system. I appreciate the feedback around this.