The response is huge, and most of the data is filled with entries like {"status": "complete", "dateCompleted": "2017-07-31T14:09:27.267Z", "task": 6, "data": {}, "user": null}
Are you able to share the full payload? As far as we know, there hasn't been any issue with this on sentry.io and nobody has reported one there, so we're not sure what the cause would be.
Also, based on your screenshot, if that URL is accurate, that request should be a 404.
There are two endpoints here related to organizations. There's the organization list endpoint, which is roughly what you have here, but it's missing a trailing slash; without that trailing slash, it'll be a 404. The organization list endpoint also wouldn't return the onboarding tasks you've pasted here.
The organization detail endpoint does include this information, but a single organization should only ever have a maximum of 9 values here. So it seems unlikely that 9 items could somehow expand into 20MB of data.
So getting access to the full payload would be useful. Looking at a few samples from production data confirms my assumptions. I'm not sure if something else is at play that is breaking this behavior for on-premise users.
Ouch. So it sounds like something is wrong with your database. There’s no reason for there to be all of those duplicate things.
Can you paste the schema of your sentry_organizationonboardingtask table? Something like \d sentry_organizationonboardingtask. It sounds like you're missing the index that prevents these rows from growing without bound. It should have come in with migration 0236.
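To confirm the diagnosis before touching anything, a quick check might look like this (a sketch, assuming Postgres and the stock Sentry table/column names):

```sql
-- Count (organization_id, task) pairs that appear more than once;
-- any rows returned here mean the unique index is missing or never applied.
SELECT organization_id, task, COUNT(*) AS copies
FROM sentry_organizationonboardingtask
GROUP BY organization_id, task
HAVING COUNT(*) > 1
ORDER BY copies DESC;

-- List the indexes currently on the table; a UNIQUE index covering
-- (organization_id, task) should appear here if the migration applied.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'sentry_organizationonboardingtask';
```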
I think I remember having trouble with the migrations in March. I was fiddling with the database after a failed upgrade. It looks like this could have been the source of the trouble. Could you please help me add the unique index?
What version of Postgres is this? This is definitely missing the needed UNIQUE index. But since you have the table, I'm really curious how this happened. We can truncate the data and add the index manually if we need to.
To create it manually, you'll first need to truncate all the data in this table, since you already have rows that violate the constraint, so creating the UNIQUE index would fail.
TRUNCATE sentry_organizationonboardingtask;
CREATE UNIQUE INDEX sentry_organizationonboar_organization_id_47e98e05cae29cf3_uniq ON sentry_organizationonboardingtask (organization_id, task);
This data is safe to delete, so there's no real concern about data loss or anything.
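After running the two statements above, you can sanity-check that the index took effect (a sketch; the index name is the one from the CREATE statement):

```sql
-- The definition should start with "CREATE UNIQUE INDEX"; if this returns
-- no rows, the index was not created.
SELECT indexdef
FROM pg_indexes
WHERE indexname = 'sentry_organizationonboar_organization_id_47e98e05cae29cf3_uniq';
```

From then on, a second insert of the same (organization_id, task) pair will be rejected with a unique-violation error instead of piling up duplicate rows.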