Changing SENTRY_EVENT_RETENTION_DAYS does not work

I updated the retention days to 30 (`SENTRY_EVENT_RETENTION_DAYS`) in both `.env` and … However, when I restart the `sentry-cleanup` service I still see that it is using the default 90 days.

```
sentry-cleanup_1 | 2021-10-14T10:45:02.499252630Z SHELL=/bin/bash
sentry-cleanup_1 | 2021-10-14T10:45:02.499283539Z BASH_ENV=/container.env
sentry-cleanup_1 | 2021-10-14T10:45:02.499289045Z 0 0 * * * gosu sentry sentry cleanup --days 90 > /proc/1/fd/1 2>/proc/1/fd/2
```

The docker compose file uses a variable that is not set in the yml itself. I assume it is read from the environment.

```yaml
<< : *sentry_defaults
image: sentry-cleanup-onpremise-local
build:
  context: ./cron
  args:
    BASE_IMAGE: 'sentry-onpremise-local'
command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
```
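For that substitution to happen, the variable has to be visible to compose, either exported in the shell or defined in a `.env` file next to `docker-compose.yml`. A minimal sketch (the value 30 is the one from this thread):

```
# .env — read automatically by docker-compose from the project directory
SENTRY_EVENT_RETENTION_DAYS=30
```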

Wow, this looks like a legit oversight, thanks for reporting!

Would you like to submit a PR to fix it yourself here: onpremise/docker-compose.yml at f2e2dc2bb3c0c2505d0a6044989bdd29c7905fef · getsentry/onpremise · GitHub

Found the solution. After updating `SENTRY_EVENT_RETENTION_DAYS` in the `.env` file, I have to run docker compose again so the change is applied by stopping and recreating the container:

```shell
docker-compose up -d sentry-cleanup
```
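A restart alone reuses the container's existing command line; the variable is only substituted when the container is created. A small shell sketch of that mechanism (a stand-in for what compose effectively does, not Sentry code; file and variable names are the ones from this thread):

```shell
# Write the new value to .env, as in the fix above.
printf 'SENTRY_EVENT_RETENTION_DAYS=30\n' > .env

# Load it the way compose effectively does before substituting
# the command string for the (re)created container.
set -a
. ./.env
set +a

# The cron line that ends up in the new container now carries 30, not 90.
echo "0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"
# prints: 0 0 * * * gosu sentry sentry cleanup --days 30
```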

In any case, I don’t understand why we have `SENTRY_EVENT_RETENTION_DAYS` if the change is not applied by that script.


The cleanup script does not read the configuration variable; instead it accepts a command-line argument, because you can run cleanup with many different parameters. That is why we unified them under the env variable, which we can share across services.

That said, if you think cleanup should default to that value, you can submit a PR to Sentry first and then one to the on-premise repo to remove the env variable.

The cleanup script ran successfully last night; however, I noticed that the size on disk never goes down. Looking at the tables in the database, the `nodestore_node` table keeps growing: in the past two days it has grown from 575 GB to 590 GB, and it does not shrink even after cleanup. Does one of the other options help clean this table? We do not have infinite storage and would like to see this table stabilise, but for the past several months it has kept growing.

The cleanup job does nothing about the node store. This doc should help you: