Changing SENTRY_EVENT_RETENTION_DAYS does not work

I updated the retention period to 30 days by setting SENTRY_EVENT_RETENTION_DAYS in .env. However, when I restart the sentry-cleanup service, I still see that it is using the default 90 days.

sentry-cleanup_1 | 2021-10-14T10:45:02.499252630Z SHELL=/bin/bash
sentry-cleanup_1 | 2021-10-14T10:45:02.499283539Z BASH_ENV=/container.env
sentry-cleanup_1 | 2021-10-14T10:45:02.499289045Z 0 0 * * * gosu sentry sentry cleanup --days 90 > /proc/1/fd/1 2>/proc/1/fd/2

The docker compose file uses a variable which is not set in the yml, so I assume it is read from the environment.

<< : *sentry_defaults
image: sentry-cleanup-onpremise-local
build:
  context: ./cron
  args:
    BASE_IMAGE: sentry-onpremise-local
command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
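For anyone wondering how the value gets in there: docker-compose interpolates $SENTRY_EVENT_RETENTION_DAYS from the shell environment (or the .env file) when the container is created, not on restart. A minimal shell sketch of that substitution, using the cron line from the compose file above:

```shell
#!/bin/sh
# Simulate the substitution docker-compose performs at container
# *creation* time: the variable is expanded into the cron command.
SENTRY_EVENT_RETENTION_DAYS=30

# Template taken from the compose file above; double quotes let the
# shell expand the variable, just as compose interpolates it.
CRON_LINE="0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"

echo "$CRON_LINE"
# -> 0 0 * * * gosu sentry sentry cleanup --days 30
```

Because the expansion happens once at creation, changing .env afterwards has no effect until the container is recreated.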

Wow, this looks like a legit oversight. Thanks for reporting!

Would you like to submit a PR to fix it yourself here: onpremise/docker-compose.yml at f2e2dc2bb3c0c2505d0a6044989bdd29c7905fef · getsentry/onpremise · GitHub

Found the solution. After updating SENTRY_EVENT_RETENTION_DAYS in the .env file, I have to run docker-compose again so that the container is stopped and recreated with the new value:

docker-compose up -d sentry-cleanup
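One way to confirm the new value took effect is to grep the service logs, e.g. the output of docker-compose logs sentry-cleanup (the cron image echoes its crontab to the logs on startup, as in the output quoted above). A self-contained sketch of the check against a sample log line:

```shell
#!/bin/sh
# Sample line as it would appear in `docker-compose logs sentry-cleanup`
# after the container has been recreated with the new value.
LOG_LINE='0 0 * * * gosu sentry sentry cleanup --days 30 > /proc/1/fd/1 2>/proc/1/fd/2'

# Extract the retention flag to verify the setting was picked up.
echo "$LOG_LINE" | grep -o -- '--days [0-9]*'
# -> --days 30
```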

In any case, I don’t understand why we have SENTRY_EVENT_RETENTION_DAYS in .env if the change is not applied by that script.


The script does not use the configuration variable; instead it accepts a command line argument, because you can run cleanup with many different parameters. That is why we unified them under the env variable, which we can share across services.

That said, if you think it should default to that value, you can submit a PR to Sentry first and then to the on-premise repo to remove the env variable.

The cleanup script ran successfully last night; however, I noticed that the size on disk never goes down. Looking at the tables in the database, the nodestore_node table keeps growing: in the past two days it has grown from 575 GB to 590 GB, and it does not shrink even after cleanup. Does one of the other options help clean this table? We do not have infinite storage and would like to see this table stabilise, but for the past several months it has kept growing.

The cleanup job does nothing about node store. This doc should help you:

It seems the solution provided in the troubleshooting guide needs to be updated, as it does not work as expected. The pg_repack command fails with an error:

$ docker-compose run -T postgres bash -c "apt update && apt install -y --no-install-recommends postgresql-9.6-repack && su postgres -c 'pg_repack -E info -t nodestore_node'"

ERROR: pg_repack failed with error: pg_repack 1.3.4 is not installed in the database

To fix this issue, I added the extension to the database:

$ docker-compose exec postgres bash -c "psql -U postgres -c 'CREATE EXTENSION pg_repack' -d postgres"

Then I reran pg_repack for nodestore and got a different error:

$ docker-compose exec postgres bash -c "su postgres -c 'pg_repack -E info -t nodestore_node'"
INFO: repacking table "nodestore_node"
ERROR: query failed: ERROR:  unexpected index definition: CREATE INDEX nodestore_node_timestamp_a6fca047 ON public.nodestore_node USING btree ("timestamp") TABLESPACE pg_default
DETAIL: query was: SELECT indexrelid, repack.repack_indexdef(indexrelid, indrelid, $2, FALSE)  FROM pg_index WHERE indrelid = $1 AND indisvalid
ERROR: 254

It seems that this version of pg_repack is not able to parse schema-qualified index definitions. See the link below for details.

The solution was to install a newer version of pg_repack; for me, postgresql-9.6-repack_1.4.7-1 worked. After installing this version I was able to run repack to free up space:

docker-compose exec postgres bash -c "su postgres -c 'pg_repack -E info -t nodestore_node'"

The above command runs in the existing postgres service. I think the document should be updated in line with my findings. We are on version 20.11.1 (4468076) of self-hosted Sentry.

Ah, thanks for the heads up @mjaferDo. Would you like to get this change implemented yourself here: Sign in to GitHub · GitHub

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.