Some questions. My understanding of the event flow:
event → relay → kafka → ingest-consumer → kafka → snuba consumer → clickhouse
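As a quick sanity check on that flow, you can list the topics your Kafka broker actually has. This is a minimal sketch, assuming the kafka-python client and a placeholder bootstrap address; in a default self-hosted install you would expect to see topics such as `ingest-events` (relay → ingest-consumer) and `events` (consumed by snuba).

```python
from kafka import KafkaConsumer

# Placeholder bootstrap address; use the broker from your docker-compose setup.
consumer = KafkaConsumer(bootstrap_servers="localhost:9092")

# topics() fetches cluster metadata and returns the set of topic names.
for topic in sorted(consumer.topics()):
    print(topic)

consumer.close()
```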
- 1. So it looks like events are stored in ClickHouse, right? Then what is stored in PostgreSQL? Project configs or something like that? We observe growth in PostgreSQL storage and the cleanup cronjob doesn't help; it's kind of weird that PostgreSQL takes up that much space if events aren't stored there. (See the table-size sketch after this list.)
- 2. What is stored in Redis? I know there are counters and maybe Snuba query caches in Redis, but is anything valuable kept there? Is it safe to purge the dump if needed? We have had incidents where Kafka or the ingest consumers went down and Redis became overloaded, so I guess Redis plays some role in the ingest workflow, but how? (See the Redis inspection sketch after this list.)
- 3. Similar to the above, I opened another topic about this before:
Sentry worker stop working (rabbitmq connection issue?)
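Re question 1: rather than guessing, you can ask PostgreSQL directly which tables are growing. A minimal sketch, assuming psycopg2 and placeholder connection values; worth noting that in many self-hosted installs the largest table turns out to be `nodestore_node`, where Sentry's default nodestore keeps event payload blobs, so events can consume Postgres space even though the searchable event data lives in ClickHouse.

```python
import psycopg2

# Placeholder connection values; point them at the Postgres instance Sentry uses.
conn = psycopg2.connect(host="localhost", dbname="sentry", user="postgres")

with conn, conn.cursor() as cur:
    # Total on-disk size (table + indexes + TOAST) per table, largest first.
    cur.execute("""
        SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
        FROM pg_catalog.pg_statio_user_tables
        ORDER BY pg_total_relation_size(relid) DESC
        LIMIT 15
    """)
    for table, size in cur.fetchall():
        print(f"{table}: {size}")

conn.close()
```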
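Re question 2: before purging anything, it helps to see what Redis is actually holding. A minimal sketch, assuming redis-py and a placeholder address; it reports overall memory and samples key prefixes. Be careful with flushing: besides caches, Sentry uses Redis for things like write buffers and rate-limit counters, so a purge can drop data that has not yet been flushed to Postgres.

```python
from collections import Counter
import redis

# Placeholder address; point it at the Redis instance Sentry uses.
r = redis.Redis(host="localhost", port=6379)

# Overall memory usage, useful to watch during an ingest backlog.
print("used_memory_human:", r.info("memory")["used_memory_human"])

# Sample key prefixes to see what is accumulating (capped so it stays cheap).
prefixes = Counter()
for i, key in enumerate(r.scan_iter(count=1000)):
    prefixes[key.split(b":")[0]] += 1
    if i >= 100_000:
        break

for prefix, n in prefixes.most_common(10):
    print(prefix.decode(errors="replace"), n)
```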