Postgres logs are filled with errors

2016-11-27T11:27:28.452733-05:00 sentry7-pg-prod-7 postgres[24261]: [11-3] 2016-11-27 11:27:28.452 EST 3 sentry sentry 10.0.9.89(47653) STATEMENT:  INSERT INTO "sentry_organizationonboardingtask" ("organization_id", "user_id", "task", "status", "date_completed", "project_id", "data") VALUES (1, NULL, 5, 1, '2016-11-27 16:27:28.455387+00:00', 8, '{}') RETURNING "sentry_organizationonboardingtask"."id"
2016-11-27T11:27:28.591231-05:00 sentry7-pg-prod-7 postgres[24284]: [11-2] 2016-11-27 11:27:28.591 EST 2 sentry sentry 10.0.9.89(47663) DETAIL:  Key (project_id, hash)=(8, f528764d624db129b32c21fbca0cb8d6) already exists.
2016-11-27T11:27:28.593347-05:00 sentry7-pg-prod-7 postgres[24279]: [11-2] 2016-11-27 11:27:28.593 EST 2 sentry sentry 10.0.9.89(47660) DETAIL:  Key (project_id, hash)=(8, f528764d624db129b32c21fbca0cb8d6) already exists.
2016-11-27T11:27:28.621323-05:00 sentry7-pg-prod-7 postgres[24295]: [11-1] 2016-11-27 11:27:28.621 EST 1 sentry sentry 10.0.9.89(47664) ERROR:  duplicate key value violates unique constraint "sentry_organizationonboar_organization_id_47e98e05cae29cf3_uniq"
2016-11-27T11:27:28.621341-05:00 sentry7-pg-prod-7 postgres[24295]: [11-2] 2016-11-27 11:27:28.621 EST 2 sentry sentry 10.0.9.89(47664) DETAIL:  Key (organization_id, task)=(1, 5) already exists.
2016-11-27T11:27:28.621345-05:00 sentry7-pg-prod-7 postgres[24295]: [11-3] 2016-11-27 11:27:28.621 EST 3 sentry sentry 10.0.9.89(47664) STATEMENT:  INSERT INTO "sentry_organizationonboardingtask" ("organization_id", "user_id", "task", "status", "date_completed", "project_id", "data") VALUES (1, NULL, 5, 1, '2016-11-27 16:27:28.621303+00:00', 8, '{}') RETURNING "sentry_organizationonboardingtask"."id"
2016-11-27T11:27:28.680546-05:00 sentry7-pg-prod-7 postgres[24305]: [11-2] 2016-11-27 11:27:28.677 EST 2 sentry sentry 10.0.9.89(47667) DETAIL:  Key (project_id, hash)=(8, f528764d624db129b32c21fbca0cb8d6) already exists.

I tried to work around this by deleting the rows in the table indicated in the error, but it appears that new rows keep coming in. Here is one attempt:

delete from sentry_organizationonboardingtask where task=5 and organization_id=1;

@kzwin the application handles these gracefully using a standard try/rollback pattern. I believe the next release (or last release?) of Sentry also reduces the number of failures for some of these.
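For illustration, the try/rollback pattern described above can be sketched like this. This is not Sentry's actual code; it uses sqlite3 in place of Postgres so the example is self-contained, and the table and function names are made up:

```python
# Illustrative sketch of the try/rollback pattern: attempt the INSERT,
# and on a unique-constraint violation roll back and carry on.
# Postgres still logs the failed statement as an ERROR, but the
# application treats it as "row already exists" and moves on.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE onboarding_task (
        organization_id INTEGER,
        task INTEGER,
        UNIQUE (organization_id, task)
    )
""")

def record_task(organization_id, task):
    """Insert the row; treat a duplicate key as already-recorded."""
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute(
                "INSERT INTO onboarding_task (organization_id, task)"
                " VALUES (?, ?)",
                (organization_id, task),
            )
        return True
    except sqlite3.IntegrityError:
        # Duplicate key: another worker inserted it first.
        return False

print(record_task(1, 5))  # True  -- first insert succeeds
print(record_task(1, 5))  # False -- duplicate, rolled back gracefully
```

So the ERROR/DETAIL lines in the Postgres log are expected noise from concurrent workers racing to insert the same row, not a sign of data corruption.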

It’ll be in 8.11 released this week.

Thank you for the replies. I had to look into these logs because sentry cleanup was taking much, much longer than usual. This started after I updated to a newer version of Sentry and forgot to update the Sentry path in the cronjob, so I ended up missing a few weeks of cleanup. I normally run with --days 50, and now I am trying --days 150; this cleanup job, which normally takes a few hours at most, is still not finished after 2 days. I see the DELETE command running in Postgres from this worker. I even upgraded the Postgres hardware (2X memory, disk, CPU) to make it faster. The cleanup job output is:

sentry cleanup --days 150
Removing expired values for LostPasswordHash
Removing old NodeStore values
Removing GroupRuleStatus for days=150 project=*
Removing GroupTagValue for days=150 project=*
Removing TagValue for days=150 project=*
Removing GroupEmailThread for days=150 project=*
Removing expired values for EventMapping

Should I create a new thread for this issue?

How large is your EventMapping table? If it is massive, this may take a while. EventMapping is a special case and doesn't respect the --days argument. On top of that, we don't ship the index it would need to make this really fast. You can manually add one yourself if you want to speed it up:

CREATE INDEX CONCURRENTLY
sentry_eventmapping_date_added
ON sentry_eventmapping (date_added);

That should do the trick and speed up cleanup.

Wow. Thanks.

sentry=# select count(1) from sentry_eventmapping;
  count
----------
 19441715
(1 row)

That index did the trick. The cleanup for 150 days finished almost immediately after I restarted it.