Best way to manage storage for AWS deployment

I have just started using on-premise Sentry to monitor my applications, and I am worried about the AWS EC2 instance running out of disk space.

What is the best way to manage the Docker volumes for Sentry, which seem to grow without bound? I cannot find any option to archive and export data to S3 on a schedule.
I have come across a few potential solutions, like attaching an EFS mount to the instance (since EFS grows elastically), or using a third-party tool (or potentially AWS Storage Gateway) to mount an S3 bucket in the same way.
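For reference, the S3-mount route with a tool like s3fs would look roughly like this (the bucket name and mount point are placeholders, and it assumes the instance has an IAM role with access to the bucket):

```sh
# Mount an S3 bucket as a local filesystem with s3fs
# (bucket name and mount point are placeholders)
sudo mkdir -p /mnt/sentry-archive
sudo s3fs my-sentry-archive /mnt/sentry-archive -o iam_role=auto -o allow_other
```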

This seems like a basic issue that every self-hosted Sentry user must run into, but I cannot find any agreed-upon way of handling it.

You can reduce Sentry's event retention period or tune Kafka's retention settings in the docker-compose.yml file.
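A minimal sketch of the Kafka change, assuming the stock layout where the variable sits on the kafka service's environment block:

```sh
# 1. Locate the setting under the kafka service's environment block
grep -n "KAFKA_LOG_RETENTION_HOURS" docker-compose.yml

# 2. Edit the value by hand, e.g. from the default 24 down to 6 hours

# 3. Recreate the kafka container so the new environment takes effect
docker-compose up -d kafka
```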

Also, make sure the cleanup crons are running properly and on schedule.
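A quick way to sanity-check them, assuming the stock setup where a dedicated cron container runs `sentry cleanup` (service names may differ in your docker-compose.yml):

```sh
# Confirm the cleanup cron container is up
docker-compose ps sentry-cleanup

# Tail its logs to verify the cron job actually fires
docker-compose logs --tail=50 sentry-cleanup

# Cleanup can also be triggered by hand; the web service's entrypoint
# forwards its arguments to the `sentry` CLI
docker-compose run --rm web cleanup --days 90
```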

If I am reading the docker-compose.yml file correctly, the variables you are talking about are:

  • KAFKA_LOG_RETENTION_HOURS, which is set to 24 hours by default in docker-compose.yml
  • SENTRY_EVENT_RETENTION_DAYS=90, which lives in the .env file

Are these correct? Would just updating these variables to another value and restarting the containers apply the changes?
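For concreteness, this is roughly what I was planning to run (the 30-day value is just an example):

```sh
# Shrink event retention from the default 90 days down to 30 in .env
sed -i 's/^SENTRY_EVENT_RETENTION_DAYS=90$/SENTRY_EVENT_RETENTION_DAYS=30/' .env

# Recreate the containers so they pick up the new values
docker-compose down
docker-compose up -d
```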

Also, what will happen if the server runs out of disk space? Will it have any impact on the clients that are configured to send events to this now-full server? I am hoping those clients continue running as is and do not fill up their own disks with unsent Sentry events.

Yup. Reducing the Kafka log retention hours will save you disk space, but in case of a Kafka failure you'll lose any data older than the X hours you've set there.

Clients will keep working without issues, but you'll very likely lose any new events sent while the server is full.
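Since a full disk means dropped events, one cheap safeguard is a cron script that warns you before you get there; a minimal sketch (the 80% threshold is arbitrary, and the echo stands in for whatever alerting you already use):

```sh
#!/bin/sh
# Alert when the root filesystem passes 80% usage
USAGE=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$USAGE" -gt 80 ]; then
  echo "Disk usage at ${USAGE}% on $(hostname)"  # replace with a real alert
fi
```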
