Persisting Volume Data on AWS

Hello Sentry community!

I’ve been setting up an on-premise Sentry and mostly having a good ol’ time doing so. I’m doing this via two CloudFormation stacks - a base storage stack (EFS) and a serving stack (an EC2 instance plus a bunch of other resources to support it).

So I’ve been mucking about trying to think of a way to persist storage should the instance need redeploying, and my first thought was EFS. Unlike EBS it isn’t limited to a single AZ, and my instance is managed by an auto scaling group spanning 3 AZs.
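For context, mounting the EFS filesystem from the instance looks roughly like this; the filesystem ID, region, and mount point are placeholders, and the NFS options are the ones AWS generally recommends for EFS:

```shell
# Mount an EFS filesystem over NFSv4.1 (fs-12345678 and us-east-1 are placeholders).
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```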

I attempted to just mount an EFS resource under /data and point the Docker storage at it, but it didn’t work well (lots of errors and Sentry refused to start), so I opted to pre-create Sentry’s volumes as individual NFS mounts to the EFS resource instead, each volume pointing to a different subdirectory of the EFS (/data, /kafka, /postgres, etc.). This has been pretty successful so far, but I’m worried that network mounts may throttle the app’s throughput - the startup script already fills with timeout errors until the containers reach a ready state.
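The pre-created-volumes approach can be sketched like this, using Docker’s built-in NFS support for local volumes. The EFS DNS name and the volume names are placeholders (adjust them to match the volume names your docker-compose setup expects), and the subdirectories are assumed to already exist on the EFS:

```shell
# Pre-create Docker volumes backed by subdirectories of the EFS filesystem,
# so a redeployed instance reattaches to the same data.
EFS_DNS=fs-12345678.efs.us-east-1.amazonaws.com

for vol in sentry-data sentry-kafka sentry-postgres; do
  docker volume create \
    --driver local \
    --opt type=nfs \
    --opt o=addr=${EFS_DNS},nfsvers=4.1,rw,hard,timeo=600,retrans=2,noresvport \
    --opt device=:/${vol} \
    "${vol}"
done
```

Since the compose file will try to create missing volumes itself, the pre-created ones typically need to be declared `external` there so they aren’t shadowed by fresh local volumes.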

Sentry does start (it takes about 5 minutes after the template finishes before it accepts traffic), and the web interface isn’t slow or buggy.

What I was wondering is whether someone has a better way of setting this up, and whether my throughput concerns are unfounded. (This setup limits me to a single EC2 instance. I would have preferred to use ECS, but didn’t know if it would work given the need for a pre-run installation phase.)
