iOS stack traces are symbolicated incorrectly

Hi there,

  • We have an Objective-C iOS app without bitcode, using the latest sentry-cocoa (5.1.6).
  • We have an on-premise installation (Sentry 20.7.0.dev000d9f3a) done with Docker, where hopefully everything (Redis, memcached, Symbolicator, etc.) is set up properly.
  • The CI that builds the app uploads the dSYMs, and I can see them under /settings//projects//debug-symbols/ (see the upload sketch right after this list).
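
For reference, CI uploads the debug files with something along these lines (a minimal sketch; the URL, org/project slugs, and dSYM path are placeholders, and authentication comes from the SENTRY_AUTH_TOKEN environment variable):

sentry-cli --url https://sentry.example.com upload-dif --org my-org --project my-ios-app path/to/dSYMs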

Issues appear to be symbolicated, but incorrectly: I am raising unhandled NSExceptions in specific places, yet the issues show completely unrelated method names. This happens everywhere.

The event messages contain the correct path to the failure, e.g. methodName:methodName:methodName:thingThatCrashes, but the symbolicated crash below it shows completely unrelated frames in all threads.

The “images loaded” section displays dSYMs with green checkmarks for the main app and for the frameworks we include.

The only “weird” thing I have seen in the server logs is many instances of:

symbolicator_1             | 2020-07-07T09:55:12Z [goblin::mach::segment] WARN: section #1 size 45119288 out of bounds
symbolicator_1             | 2020-07-07T09:55:12Z [goblin::mach::segment] WARN: section #1 size 45119288 out of bounds
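
In case it is relevant: the warning comes from Symbolicator’s Mach-O parser (goblin) and apparently means a section size does not fit inside the file. A quick local sanity check of the uploaded dSYM would be something like this (the path is a placeholder; the UUIDs should match the ones shown in the “images loaded” section):

sentry-cli difutil check MyApp.app.dSYM
dwarfdump --uuid MyApp.app.dSYM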

I was also getting the following every time I clicked “raw | unsymbolicated” to try to download and manually symbolicate the crashes:

web_1                      | Traceback (most recent call last):
web_1                      |   File "/usr/local/lib/python2.7/site-packages/sentry/api/base.py", line 90, in handle_exception
web_1                      |     response = super(Endpoint, self).handle_exception(exc)
web_1                      |   File "/usr/local/lib/python2.7/site-packages/rest_framework/views.py", line 449, in handle_exception
web_1                      |     self.raise_uncaught_exception(exc)
web_1                      |   File "/usr/local/lib/python2.7/site-packages/sentry/api/base.py", line 207, in dispatch
web_1                      |     response = handler(request, *args, **kwargs)
web_1                      |   File "/usr/local/lib/python2.7/site-packages/sentry/api/endpoints/event_apple_crash_report.py", line 40, in get
web_1                      |     symbolicated=symbolicated,
web_1                      |   File "/usr/local/lib/python2.7/site-packages/sentry/lang/native/applecrashreport.py", line 33, in __str__
web_1                      |     rv.append(self.get_binary_images_apple_string())
web_1                      |   File "/usr/local/lib/python2.7/site-packages/sentry/lang/native/applecrashreport.py", line 167, in get_binary_images_apple_string
web_1                      |     sorted(self.debug_images, key=lambda i: parse_addr(i["image_addr"])),
web_1                      |   File "/usr/local/lib/python2.7/site-packages/sentry/utils/compat/__init__.py", line 22, in map
web_1                      |     return list(_map(a, b, *c))
web_1                      |   File "/usr/local/lib/python2.7/site-packages/sentry/lang/native/applecrashreport.py", line 166, in <lambda>
web_1                      |     lambda i: self._convert_debug_meta_to_binary_image_row(debug_image=i),
web_1                      |   File "/usr/local/lib/python2.7/site-packages/sentry/lang/native/applecrashreport.py", line 179, in _convert_debug_meta_to_binary_image_row
web_1                      |     debug_image.get("debug_id").replace("-", "").lower(),
web_1                      | AttributeError: 'NoneType' object has no attribute 'replace'
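
If I read the traceback correctly, one of the entries in the event’s debug_meta images has no debug_id. A quick way to find the offending image from the raw event JSON would be something like this (the URL, slugs, token, and event id are placeholders; assumes jq is installed):

curl -s -H "Authorization: Bearer <token>" \
  "https://sentry.example.com/api/0/projects/<org>/<project>/events/<event_id>/json/" \
  | jq '.debug_meta.images[] | select(.debug_id == null)'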

That error looks like it was solved by repairing the permissions, but maybe there is something else to it:

sudo docker-compose run --rm --entrypoint /bin/bash web -c "chmod -R 0770 /data"
sudo docker-compose run --rm --entrypoint /bin/bash web -c "chown -R sentry:sentry /data"

I’m out of ideas on how to solve this. What else could I look into? Is there anything else I could provide that would help analyze this?

config.yml
# While a lot of configuration in Sentry can be changed via the UI, for all
# new-style config (as of 8.0) you can also declare values here in this file
# to enforce defaults or to ensure they cannot be changed via the UI. For more
# information see the Sentry documentation.

###############
# Mail Server #
###############

# mail.backend: 'smtp'  # Use dummy if you want to disable email entirely
mail.host: 'redacted'
mail.port: 587
mail.username: 'redacted'
mail.password: 'redacted'
mail.use-tls: true
# The email address to send on behalf of
mail.from: 'redacted'

# If you'd like to configure email replies, enable this.
# mail.enable-replies: true

# When email-replies are enabled, this value is used in the Reply-To header
# mail.reply-hostname: ''

# If you're using mailgun for inbound mail, set your API key and configure a
# route to forward to /api/hooks/mailgun/inbound/
# Also don't forget to set `mail.enable-replies: true` above.
# mail.mailgun-api-key: ''

###################
# System Settings #
###################

# If this file ever becomes compromised, it's important to regenerate a new key
# Changing this value will result in all current sessions being invalidated.
# A new key can be generated with `$ sentry config generate-secret-key`
system.secret-key: 'redacted'

# The ``redis.clusters`` setting is used, unsurprisingly, to configure Redis
# clusters. These clusters can be then referred to by name when configuring
# backends such as the cache, digests, or TSDB backend.
# redis.clusters:
#   default:
#     hosts:
#       0:
#         host: 127.0.0.1
#         port: 6379

################
# File storage #
################

# Uploaded media uses these `filestore` settings. The available
# backends are either `filesystem` or `s3`.

filestore.backend: 'filesystem'
filestore.options:
  location: '/data/files'
dsym.cache-path: '/data/dsym-cache'
releasefile.cache-path: '/data/releasefile-cache'

# filestore.backend: 's3'
# filestore.options:
#   access_key: 'AKIXXXXXX'
#   secret_key: 'XXXXXXX'
#   bucket_name: 's3-bucket-name'

system.internal-url-prefix: 'http://web:9000'
symbolicator.enabled: true
symbolicator.options:
  url: "http://symbolicator:3021"

transaction-events.force-disable-internal-project: true

######################
# GitHub Integration #
######################

# github-app.id: GITHUB_APP_ID
# github-app.name: 'GITHUB_APP_NAME'
# github-app.webhook-secret: 'GITHUB_WEBHOOK_SECRET' # Use only if configured in GitHub
# github-app.client-id: 'GITHUB_CLIENT_ID'
# github-app.client-secret: 'GITHUB_CLIENT_SECRET'
# github-app.private-key: |
#   -----BEGIN RSA PRIVATE KEY-----
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   -----END RSA PRIVATE KEY-----
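
If it is useful: the symbolicator URL from the config above can be sanity-checked from inside the web container with something like the following (assuming curl is available in the image; Symbolicator exposes a healthcheck endpoint):

docker-compose exec web curl -s http://symbolicator:3021/healthcheck
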
sentry.conf.py
# This file is just Python, with a touch of Django which means
# you can inherit and tweak settings to your hearts content.

from sentry.conf.server import *  # NOQA

DATABASES = {
    "default": {
        "ENGINE": "sentry.db.postgres",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "",
        "HOST": "postgres",
        "PORT": "",
    }
}

# You should not change this setting after your database has been created
# unless you have altered all schemas first
SENTRY_USE_BIG_INTS = True

# Controls whether DEVSERVICES will spin up a Relay and direct store traffic through Relay or not.
# If Relay is used, a reverse proxy server will be run on port 8000 (the port formerly used by Sentry) that
# will split the requests between Relay and Sentry (all store requests will be passed to Relay, and the
# rest will be forwarded to Sentry)
#SENTRY_USE_RELAY = True
#SENTRY_RELAY_PORT = 3000
#SENTRY_REVERSE_PROXY_PORT = 8000


# If you're expecting any kind of real traffic on Sentry, we highly recommend
# configuring the CACHES and Redis settings

###########
# General #
###########

# Instruct Sentry that this install intends to be run by a single organization
# and thus various UI optimizations should be enabled.
SENTRY_SINGLE_ORGANIZATION = True

SENTRY_OPTIONS["system.event-retention-days"] = int(env('SENTRY_EVENT_RETENTION_DAYS', '90'))

#########
# Redis #
#########

# Generic Redis configuration used as defaults for various things including:
# Buffers, Quotas, TSDB

SENTRY_OPTIONS["redis.clusters"] = {
    "default": {
        "hosts": {0: {"host": "redis", "password": "", "port": "6379", "db": "0"}}
    }
}

#########
# Queue #
#########

# See https://docs.getsentry.com/on-premise/server/queue/ for more
# information on configuring your queue broker and workers. Sentry relies
# on a Python framework called Celery to manage queues.

rabbitmq_host = None
if rabbitmq_host:
    BROKER_URL = "amqp://{username}:{password}@{host}/{vhost}".format(
        username="guest", password="guest", host=rabbitmq_host, vhost="/"
    )
else:
    BROKER_URL = "redis://:{password}@{host}:{port}/{db}".format(
        **SENTRY_OPTIONS["redis.clusters"]["default"]["hosts"][0]
    )


#########
# Cache #
#########

# Sentry currently utilizes two separate mechanisms. While CACHES is not a
# requirement, it will optimize several high throughput patterns.

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": ["memcached:11211"],
        "TIMEOUT": 3600,
    }
}

# A primary cache is required for things such as processing events
SENTRY_CACHE = "sentry.cache.redis.RedisCache"

DEFAULT_KAFKA_OPTIONS = {
    "bootstrap.servers": "kafka:9092",
    "message.max.bytes": 50000000,
    "socket.timeout.ms": 1000,
}

SENTRY_EVENTSTREAM = "sentry.eventstream.kafka.KafkaEventStream"
SENTRY_EVENTSTREAM_OPTIONS = {"producer_configuration": DEFAULT_KAFKA_OPTIONS}

KAFKA_CLUSTERS["default"] = DEFAULT_KAFKA_OPTIONS

###############
# Rate Limits #
###############

# Rate limits apply to notification handlers and are enforced per-project
# automatically.

SENTRY_RATELIMITER = "sentry.ratelimits.redis.RedisRateLimiter"

##################
# Update Buffers #
##################

# Buffers (combined with queueing) act as an intermediate layer between the
# database and the storage API. They will greatly improve efficiency on large
# numbers of the same events being sent to the API in a short amount of time.
# (read: if you send any kind of real data to Sentry, you should enable buffers)

SENTRY_BUFFER = "sentry.buffer.redis.RedisBuffer"

##########
# Quotas #
##########

# Quotas allow you to rate limit individual projects or the Sentry install as
# a whole.

SENTRY_QUOTAS = "sentry.quotas.redis.RedisQuota"

########
# TSDB #
########

# The TSDB is used for building charts as well as making things like per-rate
# alerts possible.

SENTRY_TSDB = "sentry.tsdb.redissnuba.RedisSnubaTSDB"

#########
# SNUBA #
#########

SENTRY_SEARCH = "sentry.search.snuba.EventsDatasetSnubaSearchBackend"
SENTRY_SEARCH_OPTIONS = {}
SENTRY_TAGSTORE_OPTIONS = {}

###########
# Digests #
###########

# The digest backend powers notification summaries.

SENTRY_DIGESTS = "sentry.digests.backends.redis.RedisBackend"

##############
# Web Server #
##############

SENTRY_WEB_HOST = "0.0.0.0"
SENTRY_WEB_PORT = 9000
SENTRY_WEB_OPTIONS = {
    # These are for proper HTTP/1.1 support from uWSGI
    # Without these it doesn't do keep-alives, causing
    # issues with Relay's direct requests.
    "http-keepalive": True,
    "http-chunked-input": True,
    # the number of web workers
    'workers': 3,
    # Turn off memory reporting
    "memory-report": False,
    # Some stuff so uwsgi will cycle workers sensibly
    'max-requests': 100000,
    'max-requests-delta': 500,
    'max-worker-lifetime': 86400,
    # Duplicate options from sentry default just so we don't get
    # bit by sentry changing a default value that we depend on.
    'thunder-lock': True,
    'log-x-forwarded-for': False,
    'buffer-size': 32768,
    'limit-post': 209715200,
    'disable-logging': True,
    'reload-on-rss': 600,
    'ignore-sigpipe': True,
    'ignore-write-errors': True,
    'disable-write-exception': True,
}

###########
# SSL/TLS #
###########

# If you're using a reverse SSL proxy, you should enable the X-Forwarded-Proto
# header and enable the settings below

#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
#SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = False
#SOCIAL_AUTH_REDIRECT_IS_HTTPS = True

# End of SSL/TLS settings

############
# Features #
############

SENTRY_FEATURES["projects:sample-events"] = False
SENTRY_FEATURES.update(
    {
        feature: True
        for feature in (
            "organizations:discover",
            "organizations:events",
            "organizations:global-views",
            "organizations:integrations-issue-basic",
            "organizations:integrations-issue-sync",
            "organizations:invite-members",
            "organizations:new-issue-ui",
            "organizations:repos",
            "organizations:require-2fa",
            "organizations:sentry10",
            "organizations:sso-basic",
            "organizations:sso-rippling",
            "organizations:sso-saml2",
            "organizations:suggested-commits",
            "projects:custom-inbound-filters",
            "projects:data-forwarding",
            "projects:discard-groups",
            "projects:plugins",
            "projects:rate-limits",
            "projects:servicehooks",
        )
    }
)

######################
# GitHub Integration #
######################

GITHUB_EXTENDED_PERMISSIONS = ['repo']

#########################
# Bitbucket Integration #
#########################

# BITBUCKET_CONSUMER_KEY = 'YOUR_BITBUCKET_CONSUMER_KEY'
# BITBUCKET_CONSUMER_SECRET = 'YOUR_BITBUCKET_CONSUMER_SECRET'
SENTRY_RELAY_WHITELIST_PK = (SENTRY_RELAY_WHITELIST_PK or []) + (["redacted"])
docker-compose.yml
version: '3.4'
x-restart-policy: &restart_policy
  restart: unless-stopped
x-sentry-defaults: &sentry_defaults
  << : *restart_policy
  build:
    context: ./sentry
    args:
      - SENTRY_IMAGE
      - SENTRY_VERSION
  image: sentry-onpremise-local
  depends_on:
    - redis
    - postgres
    - memcached
    - smtp
    - snuba-api
    - snuba-consumer
    - snuba-outcomes-consumer
    - snuba-sessions-consumer
    - snuba-replacer
    - symbolicator
    - kafka
  environment:
    SENTRY_CONF: '/etc/sentry'
    SNUBA: 'http://snuba-api:1218'
  volumes:
    - 'sentry-data:/data'
    - './sentry:/etc/sentry'
x-snuba-defaults: &snuba_defaults
  << : *restart_policy
  depends_on:
    - redis
    - clickhouse
    - kafka
  image: 'getsentry/snuba:$SENTRY_VERSION'
  environment:
    SNUBA_SETTINGS: docker
    CLICKHOUSE_HOST: clickhouse
    DEFAULT_BROKERS: 'kafka:9092'
    REDIS_HOST: redis
    UWSGI_MAX_REQUESTS: '10000'
    UWSGI_DISABLE_LOGGING: 'true'
services:
  smtp:
    << : *restart_policy
    image: tianon/exim4
    volumes:
      - 'sentry-smtp:/var/spool/exim4'
      - 'sentry-smtp-log:/var/log/exim4'
  memcached:
    << : *restart_policy
    image: 'memcached:1.5-alpine'
  redis:
    << : *restart_policy
    image: 'redis:5.0-alpine'
    volumes:
      - 'sentry-redis:/data'
  postgres:
    << : *restart_policy
    image: 'postgres:9.6'
    environment:
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    volumes:
      - 'sentry-postgres:/var/lib/postgresql/data'
  zookeeper:
    << : *restart_policy
    image: 'confluentinc/cp-zookeeper:5.5.0'
    environment:
      ZOOKEEPER_CLIENT_PORT: '2181'
      CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
      ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: 'WARN'
      ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL: 'WARN'
    volumes:
      - 'sentry-zookeeper:/var/lib/zookeeper/data'
      - 'sentry-zookeeper-log:/var/lib/zookeeper/log'
      - 'sentry-secrets:/etc/zookeeper/secrets'
  kafka:
    << : *restart_policy
    depends_on:
      - zookeeper
    image: 'confluentinc/cp-kafka:5.5.0'
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://kafka:9092'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_MESSAGE_MAX_BYTES: '50000000' #50MB or bust
      KAFKA_MAX_REQUEST_SIZE: '50000000' #50MB on requests apparently too
      CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
      KAFKA_LOG4J_LOGGERS: 'kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN'
      KAFKA_LOG4J_ROOT_LOGLEVEL: 'WARN'
      KAFKA_TOOLS_LOG4J_LOGLEVEL: 'WARN'
    volumes:
      - 'sentry-kafka:/var/lib/kafka/data'
      - 'sentry-kafka-log:/var/lib/kafka/log'
      - 'sentry-secrets:/etc/kafka/secrets'
  clickhouse:
    << : *restart_policy
    image: 'yandex/clickhouse-server:19.17'
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - 'sentry-clickhouse:/var/lib/clickhouse'
      - 'sentry-clickhouse-log:/var/log/clickhouse-server'
  snuba-api:
    << : *snuba_defaults
  # Kafka consumer responsible for feeding events into Clickhouse
  snuba-consumer:
    << : *snuba_defaults
    command: consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
  # Kafka consumer responsible for feeding outcomes into Clickhouse
  # Use --auto-offset-reset=earliest to recover up to 7 days of TSDB data
  # since we did not do a proper migration
  snuba-outcomes-consumer:
    << : *snuba_defaults
    command: consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
  # Kafka consumer responsible for feeding session data into Clickhouse
  snuba-sessions-consumer:
    << : *snuba_defaults
    command: consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
  snuba-replacer:
    << : *snuba_defaults
    command: replacer --storage events --auto-offset-reset=latest --max-batch-size 3
  snuba-cleanup:
    << : *snuba_defaults
    image: snuba-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'getsentry/snuba:$SENTRY_VERSION'
    command: '"*/5 * * * * gosu snuba snuba cleanup --dry-run False"'
  symbolicator:
    << : *restart_policy
    image: 'getsentry/symbolicator:$SYMBOLICATOR_VERSION'
    volumes:
      - 'sentry-symbolicator:/data'
    command: run
  symbolicator-cleanup:
    << : *restart_policy
    image: symbolicator-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'getsentry/symbolicator:$SYMBOLICATOR_VERSION'
    command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
    volumes:
      - 'sentry-symbolicator:/data'
  web:
    << : *sentry_defaults
  cron:
    << : *sentry_defaults
    command: run cron
  worker:
    << : *sentry_defaults
    command: run worker
  ingest-consumer:
    << : *sentry_defaults
    command: run ingest-consumer --all-consumer-types
  post-process-forwarder:
    << : *sentry_defaults
    # Increase `--commit-batch-size 1` below to deal with high-load environments.
    command: run post-process-forwarder --commit-batch-size 1
  sentry-cleanup:
    << : *sentry_defaults
    image: sentry-cleanup-onpremise-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: 'sentry-onpremise-local'
    command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
  nginx:
    << : *restart_policy
    ports:
      - '9000:80/tcp'
    image: 'nginx:1.16'
    volumes:
      - type: bind
        read_only: true
        source: ./nginx
        target: /etc/nginx
    depends_on:
      - web
      - relay
  relay:
    << : *restart_policy
    image: 'getsentry/relay:$SENTRY_VERSION'
    volumes:
      - type: bind
        read_only: true
        source: ./relay
        target: /work/.relay
    depends_on:
      - kafka
      - redis
volumes:
  sentry-data:
    external: true
  sentry-postgres:
    external: true
  sentry-redis:
    external: true
  sentry-zookeeper:
    external: true
  sentry-kafka:
    external: true
  sentry-clickhouse:
    external: true
  sentry-symbolicator:
    external: true
  sentry-secrets:
  sentry-smtp:
  sentry-zookeeper-log:
  sentry-kafka-log:
  sentry-smtp-log:
  sentry-clickhouse-log:

Update: I tried manually symbolicating (atos, etc.) the raw crash file that can be downloaded from the issue, and I indeed get the same result as the server… but it’s still wrong; it doesn’t match the place where the crash happens at all.
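
For completeness, the manual symbolication was along these lines (the binary name and both addresses are placeholder examples; the load address comes from the binary images section of the raw report):

atos -o MyApp.app.dSYM/Contents/Resources/DWARF/MyApp -arch arm64 -l 0x104ab8000 0x104ac21f4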