Sentry stopped working

2021-06-03T00:48:56.688092739Z 00:48:56 [WARNING] sentry.tasks.process_buffer: process_pending.fail (error=UnableToAcquireLock(u"Unable to acquire <Lock: 'buffer:process_pending'> due to error: Could not set key: u'l:buffer:process_pending'",))

We've noticed several of these errors recently, and every time we end up nuking Redis to get the service back. How does this happen, and is there a better workaround?

Looks like something else might be stuck, or you just need more memory for Redis.
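
If it is a stuck lock, one thing to try before nuking Redis entirely is inspecting the lock key from the warning. A rough sketch, using the l:buffer:process_pending key name from your log:

redis-cli ttl l:buffer:process_pending
redis-cli del l:buffer:process_pending

If ttl returns (integer) -2 the key does not even exist, which means the SET itself is failing and points at memory pressure rather than a stale lock; if it returns -1 or a positive number, deleting the stale lock should let the next process_pending run acquire it.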

CPU

used_cpu_sys:81.387712
used_cpu_user:126.668356
used_cpu_sys_children:30.898539
used_cpu_user_children:139.905098

Cluster

cluster_enabled:0

Keyspace

db0:keys=1622014,expires=1622012,avg_ttl=0
db1:keys=15,expires=0,avg_ttl=0
127.0.0.1:6379> info memory

Memory

used_memory:14904515288
used_memory_human:13.88G
used_memory_rss:15697424384
used_memory_rss_human:14.62G
used_memory_peak:15431956792
used_memory_peak_human:14.37G
used_memory_peak_perc:96.58%
used_memory_overhead:138910078
used_memory_startup:791728
used_memory_dataset:14765605210
used_memory_dataset_perc:99.07%
allocator_allocated:14904971936
allocator_active:15890120704
allocator_resident:16038047744
total_system_memory:32654979072
total_system_memory_human:30.41G
used_memory_lua:77824
used_memory_lua_human:76.00K
used_memory_scripts:19712
used_memory_scripts_human:19.25K
number_of_cached_scripts:4
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction

I think 14 GB of memory used is too much for ~1.6M keys, no?
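
(14,904,515,288 bytes of used_memory across 1,622,014 keys in db0 works out to roughly 9 KB per key on average, so the space is going to large values rather than sheer key count.)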

Redis is also used as a task queue. If you are using an older version of Sentry, we do store the whole event payload in Redis, which would explain this.
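
If you want to see where the space is actually going, a sampled scan like this is usually cheap enough to run against a live instance and will surface the biggest keys per type:

redis-cli --bigkeys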

What version of Sentry are you using?

Sentry 20.10.178833d9

And I also found something interesting :slight_smile:
[00.00%] Biggest string found so far 'c:1:e:1888dd214eda48e89a0c4f00b91f795b:2' with 4929 bytes
[00.00%] Biggest string found so far 'c:1:e:c497ca832c604361abbf780b75908ec3:2' with 6889 bytes
[00.00%] Biggest string found so far 'c:1:e:3153570a07ec433eaabf43ae4e177d9c:4:u' with 8858 bytes
[00.00%] Biggest string found so far 'c:1:e:5426c9c4e31f48b6b6e7cb143ad9eb21:4' with 16452 bytes
[00.00%] Biggest string found so far 'c:1:e:b497d80bc6834c669b34e6ea884aa67d:9' with 24101 bytes
[00.00%] Biggest hash found so far 'ts:0:450748:42' with 1 fields
[00.00%] Biggest string found so far 'c:1:e:dbb387d9d7704f41abbdb032a4a48b34:2' with 93139 bytes
[00.02%] Biggest string found so far 'c:1:e:f33b85934c7a4504893abb3d6b7b661f:3:u' with 125998 bytes
[00.05%] Biggest string found so far 'c:1:e:2fccfc23c7f847ef92ca5004dd852fd0:3' with 181382 bytes
[00.05%] Biggest hash found so far 'ts:0:162271008:56' with 2 fields
[00.07%] Biggest hash found so far 'ts:0:18780:21' with 5 fields
[00.12%] Biggest string found so far 'c:1:e:4f61b737f8b3466eab584efd64c66d30:3' with 182627 bytes
[00.19%] Biggest string found so far 'c:1:e:b21ee85a347342dfbbd52930e3aaadcc:3' with 182753 bytes
[00.20%] Biggest string found so far 'c:1:e:ab476c13d2e9464c908c54aaa59f5804:3' with 258843 bytes
[01.09%] Biggest string found so far 'c:1:e:4cad0b74e84c4dae8d59b505c70aa2fb:3' with 259299 bytes
[01.77%] Biggest string found so far 'c:1:e:3da30f464538432bb897ab333e8354f0:3' with 269841 bytes
[34.97%] Biggest hash found so far 'b:k:sentry.group:4f6217d711bb772d8db49d8c8ff14500' with 6 fields
[53.13%] Biggest set found so far 'organization-feature-adoption:1' with 1 members
[62.33%] Sampled 1000000 keys so far
[73.19%] Biggest zset found so far 'b:p' with 9 members
[77.40%] Biggest hash found so far 'b:k:sentry.group:308c6795e1a9cc214e46f1c4b119fa8c' with 8 fields

redis-cli ttl organization-feature-adoption:1
(integer) -1
redis-cli ttl b:k:sentry.group:308c6795e1a9cc214e46f1c4b119fa8c
(integer) -2
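
(For context: ttl returns -1 when the key exists but has no expiry set, and -2 when the key no longer exists at all, so the second one is already gone.)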

So it looks like some of the big keys are safe to expire, right?

I don't know about expiration, but those big keys and your Sentry version tell me that you'd benefit greatly from an upgrade where we optimized this stuff :slight_smile:

Well, thanks for that, but I do have some concerns about the upgrade since we are way behind the current release. I guess we can't just jump to the latest, right?

You sure can :slight_smile:. Just expect some downtime during the upgrade due to migrations, and all should be fine.
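
If it helps, the rough procedure is below: a sketch, assuming you're running the Docker Compose onpremise setup, and you should back up your Postgres and Redis volumes first.

# assuming the getsentry/onpremise Docker Compose setup
git fetch --tags
git checkout <release tag you want>
./install.sh
docker-compose up -d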

Thanks, I'll bring it up and get it scheduled.
