First-time 'sentry upgrade' using > 1GB of RAM


I’m having trouble installing Sentry on smaller VMs (dev environment). The "Create DB schema" step crashes with an out-of-memory error; evidently the `sentry upgrade` command uses more than my 1 GB of free RAM.

It fails only on the first run. On the second run, when some DB objects already exist, the resource usage is acceptable.

I would consider this a bug, as I see no logical reason why a script creating a DB schema on a blank database should consume so much RAM.

This is output from a different machine, where memory usage reached 1.3 GB!

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                                    
 2250 sentry    20   0 1628356 1.316g  19488 R 88.0 67.4   1:04.63 /www/sentry/bin/python2 /www/sentry/bin/sentry upgrade --noinput                           
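For anyone who wants to reproduce the numbers above without staring at `top`, here is a minimal sketch (Linux-only, an assumption on my part, not anything Sentry ships) that reads a process's resident set size from `/proc` and tracks its peak until the process exits:

```python
import os
import time


def rss_kib(pid):
    """Resident set size (VmRSS) of a process in KiB, read from /proc (Linux)."""
    try:
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # second field is the size in kB
    except IOError:
        pass  # process may have exited between checks
    return 0


def watch_peak(pid, interval=1.0):
    """Poll a process until it exits and return the peak VmRSS seen, in KiB."""
    peak = 0
    while os.path.exists("/proc/%d" % pid):
        peak = max(peak, rss_kib(pid))
        time.sleep(interval)
    return peak
```

Start `sentry upgrade --noinput`, note its PID, and call `watch_peak(pid)` from another shell to get a concrete peak figure to attach to a ticket.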

Backend is Postgres 9.6
psycopg2 (2.6.2)
libpq-dev 9.6.8

Should I create a ticket on GitHub for it?



You can create a ticket, but I’ll be straight with you: the Sentry team is unlikely to investigate. Someone from the community might, though.

(This doesn’t affect us, and while I agree it seems like a bug, there are far bigger fish to fry.)

Have you looked at the Django issue tracker, or searched in a Django context? This is possibly a general problem with Django migrations (just speculating, but you might even find a solution that way).