Hello,
I’m trying to deploy the Sentry Helm chart v21.8.0 in our k8s cluster using our external PostgreSQL DB (already upgraded to 21.8.0, with all DB migrations executed). For some strange reason, the old events are not showing up in the Sentry UI. I was able to deploy the chart successfully once, and the old events showed up. I remember that in that case, after deploying the chart, I got this message in the UI: “Background workers haven’t checked in recently. This is likely an issue with your configuration or the workers aren’t running.” That prompted me to execute the `sentry run cron` and `sentry run web` processes in the sentry-web pod.
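For reference, this is roughly how I ran those two processes (a sketch; the namespace and pod name are placeholders for my deployment, not chart defaults):

```shell
# Substitute your own namespace and sentry-web pod name.
kubectl -n sentry exec -it <sentry-web-pod> -- sentry run cron
kubectl -n sentry exec -it <sentry-web-pod> -- sentry run web
```

And for completeness, the chart is pointed at the external database along these lines (assuming the chart repo is added as `sentry`; the `externalPostgresql.*` key names are as I understand them from the chart’s values.yaml, and credentials are redacted):

```shell
# Illustrative: pointing the chart at our external PostgreSQL.
helm upgrade --install sentry sentry/sentry \
  --namespace sentry \
  --set postgresql.enabled=false \
  --set externalPostgresql.host=<db-host> \
  --set externalPostgresql.port=5432 \
  --set externalPostgresql.username=sentry \
  --set externalPostgresql.password=<redacted> \
  --set externalPostgresql.database=sentry
```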
Now, to verify that what I did the first time also works for us the second time, I am trying to deploy Sentry again, but the old events fail to show up. I executed `snuba bootstrap --force` inside the snuba-api pod, followed by `snuba migrations migrate` (not sure if the migrations step is needed). The output of `snuba bootstrap --force` shows that the Kafka topics were created and some migrations were run, and I can see a few events in the Sentry UI from the Internal project, but not our old events.
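In case it matters, the literal invocations were along these lines (same placeholder convention as above):

```shell
# Substitute your namespace and snuba-api pod name.
kubectl -n sentry exec -it <snuba-api-pod> -- snuba bootstrap --force
kubectl -n sentry exec -it <snuba-api-pod> -- snuba migrations migrate
```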
Three worker pods were created. Logs from the worker-3 pod:
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 384, in _make_request
six.raise_from(e, None)
File "<string>", line 2, in raise_from
File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 380, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.6/site-packages/sentry_sdk/integrations/stdlib.py", line 102, in getresponse
rv = real_getresponse(self, *args, **kwargs)
File "/usr/local/lib/python3.6/http/client.py", line 1379, in getresponse
response.begin()
File "/usr/local/lib/python3.6/http/client.py", line 311, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.6/http/client.py", line 272, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/local/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
File "/usr/local/lib/python3.6/ssl.py", line 1012, in recv_into
return self.read(nbytes, buffer)
File "/usr/local/lib/python3.6/ssl.py", line 874, in read
return self._sslobj.read(len, buffer)
File "/usr/local/lib/python3.6/ssl.py", line 631, in read
v = self._sslobj.read(len, buffer)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
16:25:40 [ERROR] sentry_sdk.errors: Internal error in sentry_sdk
16:29:33 [WARNING] sentry.tasks.release_registry: Release registry URL is not specified, skipping the task.
16:29:33 [INFO] sentry.tasks.update_user_reports: update_user_reports.records_updated (reports_to_update=0 reports_with_event=0 updated_reports=0)
16:30:40 [WARNING] sentry.tasks.release_registry: Release registry URL is not specified, skipping the task.
Logs from one of the Kafka pods:
[2022-01-21 14:15:20,939] INFO [Log partition=__consumer_offsets-11, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2022-01-21 14:15:20,940] INFO Created log for partition __consumer_offsets-11 in /bitnami/kafka/data/__consumer_offsets-11 with properties {compression.type -> producer, min.insync.replicas -> 1, message.downconversion.enable -> true, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 1000, retention.ms -> 604800000, segment.bytes -> 104857600, flush.messages -> 10000, message.format.version -> 2.6-IV0, max.compaction.lag.ms -> 9223372036854775807, file.delete.delay.ms -> 60000, max.message.bytes -> 50000000, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, index.interval.bytes -> 4096, min.cleanable.dirty.ratio -> 0.5, unclean.leader.election.enable -> false, retention.bytes -> 1073741824, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2022-01-21 14:15:20,940] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
[2022-01-21 14:15:20,940] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
[... the same four-message sequence (Loading producer state / Created log for partition / No checkpointed highwatermark / Log loaded) repeats for __consumer_offsets-30, -27, -8, -24, -5, and so on ...]
There is one ingest-consumer pod; its logs:
14:15:22 [WARNING] batching-kafka-consumer: Topic 'ingest-transactions' or its partitions are not ready, retrying...
14:15:22 [WARNING] batching-kafka-consumer: Topic 'ingest-events' or its partitions are not ready, retrying...
14:15:22 [INFO] batching-kafka-consumer: New partitions assigned: [TopicPartition{topic=ingest-attachments,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-events,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-transactions,partition=0,offset=-1001,error=None}]
14:18:59 [INFO] batching-kafka-consumer: Flushing 1 items (from {('ingest-transactions', 0): [0, 0]}): forced:False size:False time:True
14:18:59 [INFO] batching-kafka-consumer: Worker flush took 64ms
14:19:54 [INFO] batching-kafka-consumer: Flushing 1 items (from {('ingest-events', 0): [0, 0]}): forced:False size:False time:True
14:19:54 [INFO] batching-kafka-consumer: Worker flush took 10ms
14:19:59 [INFO] batching-kafka-consumer: Flushing 1 items (from {('ingest-transactions', 0): [1, 1]}): forced:False size:False time:True
14:19:59 [INFO] batching-kafka-consumer: Worker flush took 8ms
[... the same one-item flush of ingest-transactions repeats every minute, with the offset advancing by one each time ([2, 2], [3, 3], [4, 4], [5, 5]) ...]
It would be really helpful if you could guide me in the right direction.
NOTE - I’m unable to change the category of this post to On-Premise; not sure why.