Event 9.1.2 => 20.11.0 migration

Hello Sentry Team, thank you for making a great product. I am learning a lot by digging into the code piece by piece. While analyzing the infrastructure on my own, a question came up, so I am reaching out. What exactly happens to the accumulated events during an upgrade? As far as I can tell, most of the error-event data that was stored in Postgres under 9.1.2 moves to ClickHouse through Snuba after migrating to 20.11.0 via install.sh. Is that correct? I also assume that during the migration in install.sh, “docker-compose run --rm web upgrade” keeps some of the existing error-event data in Postgres and moves most of it to ClickHouse. Is that correct too?
Finally, ClickHouse seems to hold session-related information in addition to error events. Do you plan to actively use ClickHouse in the future for data that does not need to live in a relational database and needs to be processed quickly?

I’m really curious how events move when upgrading, and how events are accumulated and consumed in the 20.11.0 architecture in general. There is so much to learn about the architecture. Please explain in detail!

Yes, this is correct. All previous event data is migrated to ClickHouse via Snuba, though some related data still remains in Postgres. Also, by default we cap the migration to events from the last 90 days unless you explicitly override this setting.

This is the same operation as above. We implemented the move of events to ClickHouse/Snuba as a database migration, so it is performed when you run the upgrade command (which install.sh does for you).
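To make the shape of that migration concrete, here is a minimal Python sketch of the two ideas described above: filtering events to the 90-day retention window, and forwarding them in batches rather than all at once. This is not Sentry's actual code (the real migration is linked later in this thread); all names here are illustrative stand-ins for Django querysets and Snuba's ingestion path.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical default matching the 90-day cap described above.
RETENTION_DAYS = 90

def select_events_to_migrate(events, now=None, retention_days=RETENTION_DAYS):
    """Return only events newer than the retention cutoff, oldest first.

    `events` is an iterable of dicts with a timezone-aware "timestamp" key;
    in the real migration this would be a Django queryset filter.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return sorted(
        (e for e in events if e["timestamp"] >= cutoff),
        key=lambda e: e["timestamp"],
    )

def migrate_in_batches(events, send_batch, batch_size=1000):
    """Forward events to the new store in fixed-size batches, the way a
    data migration would, to avoid holding everything in memory at once.

    `send_batch` stands in for whatever writes a chunk to Snuba/ClickHouse.
    """
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) >= batch_size:
            send_batch(batch)
            batch = []
    if batch:
        send_batch(batch)
```

The real migration has to cope with much more (pagination over large tables, retries, node data stored outside Postgres), but the cutoff-then-batch structure is the core of it.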

Yes. ClickHouse now powers some time-series data used for analytics and rate limiting, performance transactions, and release health (via sessions). We are planning to utilize it even more in the future.
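For a feel of why this kind of data suits ClickHouse, here is a small Python sketch of the sort of scan-and-group aggregation release health runs over session rows: bucketing sessions by hour and computing a crash-free rate per bucket. The field names ("started", "status") are illustrative, not Sentry's actual schema; in production this would be a single GROUP BY query over a columnar table rather than a Python loop.

```python
from collections import defaultdict

def hourly_crash_free_rate(sessions):
    """Bucket session rows by hour and compute the crash-free rate
    (fraction of sessions that did not crash) for each bucket.

    `sessions` is an iterable of dicts with a timezone-aware datetime
    under "started" and a "status" string ("exited", "crashed", ...).
    """
    buckets = defaultdict(lambda: {"total": 0, "crashed": 0})
    for s in sessions:
        hour = s["started"].replace(minute=0, second=0, microsecond=0)
        b = buckets[hour]
        b["total"] += 1
        if s["status"] == "crashed":
            b["crashed"] += 1
    return {
        hour: 1 - b["crashed"] / b["total"]
        for hour, b in sorted(buckets.items())
    }
```

Aggregations like this touch only a couple of columns across millions of rows, which is exactly the access pattern a columnar store handles far better than a row-oriented relational database.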

Here’s a simplified architecture diagram:

and here is the event migration code: https://github.com/getsentry/sentry/blob/b8bf85948d4c4832c60ab31e9f1a5a3c39fd53cf/src/sentry/migrations/0024_auto_20191230_2052.py

Hope these help.

Thanks for the detailed explanation, BYK. The architecture diagram attached to your explanation made the structure much clearer! If I understand correctly, relational and persistent data goes to Postgres, while event-related and limited-retention statistical data goes to ClickHouse. Thank you!


This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.