Sentry On-Premise: Internal Error on Projects and Issues

Hi team,

We use the Docker-based Sentry On-Premise installation, version 10.1.0.
It had been working fine for a year, but we have recently started to face issues.
When opening the Issues page for a project, it displays “Internal Error”.

And when checking the Projects page itself, I receive the error below:

When checking the containers themselves, they all seem to be up and running. We have enough disk space on the Linux server that runs the service. I would be happy to provide the logs for any specific service; kindly let me know which service I should fetch the logs from.

Any help would be appreciated.
Thanks in advance.

Best Regards
Naveen

We’d need the logs for all the services (faster than going one by one).
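
For example, from the directory where the onpremise repo is checked out (the path below is just a placeholder), something like this should collect recent logs from every service in one go:

cd /path/to/onpremise                                  # wherever your onpremise checkout lives (example path)
docker-compose logs --no-color --tail=200 > sentry-logs.txt   # last 200 lines from every service
docker-compose logs --no-color --tail=200 web worker          # or limit it to specific services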

Here is the output of the "docker ps -a" command:

CONTAINER ID        IMAGE                                                                    COMMAND                  CREATED             STATUS                     PORTS                                                NAMES
ecc81534ffc3        sentry-cleanup-onpremise-local                                           "/entrypoint.sh '0..."   14 months ago       Up 5 months                9000/tcp                                             sentry_onpremise_sentry-cleanup_1
b4def5f3e245        sentry-onpremise-local                                                   "/bin/sh -c 'exec ..."   14 months ago       Up 5 months                9000/tcp                                             sentry_onpremise_worker_1
1782c7d15126        sentry-onpremise-local                                                   "/bin/sh -c 'exec ..."   14 months ago       Up 5 months                0.0.0.0:9000->9000/tcp                               sentry_onpremise_web_1
5c8b7140a4de        sentry-onpremise-local                                                   "/bin/sh -c 'exec ..."   14 months ago       Up 5 months                9000/tcp                                             sentry_onpremise_cron_1
5e5b282f58da        sentry-onpremise-local                                                   "/bin/sh -c 'exec ..."   14 months ago       Up 4 weeks                 9000/tcp                                             sentry_onpremise_post-process-forwarder_1
976e7ef3d061        getsentry/snuba:latest                                                   "./docker_entrypoi..."   16 months ago       Up 5 months                1218/tcp                                             sentry_onpremise_snuba-api_1
c9e2765f9386        getsentry/snuba:latest                                                   "./docker_entrypoi..."   16 months ago       Up 5 months                1218/tcp                                             sentry_onpremise_snuba-replacer_1
1d93fda22b16        snuba-cleanup-onpremise-local                                            "/entrypoint.sh '*..."   16 months ago       Up 5 months                1218/tcp                                             sentry_onpremise_snuba-cleanup_1
259918caed86        getsentry/snuba:latest                                                   "./docker_entrypoi..."   16 months ago       Exited (1) 4 weeks ago                                                          sentry_onpremise_snuba-consumer_1
0a0493a68d88        confluentinc/cp-kafka:5.1.2                                              "/etc/confluent/do..."   16 months ago       Up 5 months                9092/tcp                                             sentry_onpremise_kafka_1
dd06a55f40d7        confluentinc/cp-zookeeper:5.1.2                                          "/etc/confluent/do..."   16 months ago       Up 5 months                2181/tcp, 2888/tcp, 3888/tcp                         sentry_onpremise_zookeeper_1
04adcc1e1259        memcached:1.5-alpine                                                     "docker-entrypoint..."   16 months ago       Up 5 months                11211/tcp                                            sentry_onpremise_memcached_1
e3f24e4e20d0        tianon/exim4                                                             "docker-entrypoint..."   16 months ago       Up 5 months                25/tcp                                               sentry_onpremise_smtp_1
c6abe2f3192f        getsentry/symbolicator:latest                                            "/bin/bash /docker..."   16 months ago       Up 5 months                3021/tcp                                             sentry_onpremise_symbolicator_1
d6ac14856b00        yandex/clickhouse-server:19.11                                           "/entrypoint.sh"         16 months ago       Exited (137) 4 weeks ago                                                        sentry_onpremise_clickhouse_1
04eaba20ccfc        redis:5.0-alpine                                                         "docker-entrypoint..."   16 months ago       Up 5 months                6379/tcp                                             sentry_onpremise_redis_1
3273d2ef9c99        postgres:9.6                                                             "docker-entrypoint..."   16 months ago       Up 5 months                5432/tcp                                             sentry_onpremise_postgres_1
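
To pick out the failed containers from that listing more quickly, the same command can be filtered (plain docker, nothing Sentry-specific):

docker ps -a --filter "status=exited"                  # only containers that are not running

In this case that shows sentry_onpremise_snuba-consumer_1 (exit code 1) and sentry_onpremise_clickhouse_1 (exit code 137).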

Hi @BYK,

As you may have seen from my comment above, the “clickhouse” and “snuba-consumer” services are down.

How do we resolve this issue? Would “docker-compose up -d” help in this case?
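
I.e. something like this, run from the onpremise directory (service names taken from docker-compose.yml, so this is only a sketch):

cd /path/to/onpremise                                  # example path to the onpremise checkout
docker-compose up -d clickhouse snuba-consumer         # recreate just the two exited services
docker-compose ps                                      # confirm they are running again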

We receive the error log below from the “web” service:

web_1                     | Traceback (most recent call last):
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/api/base.py", line 89, in handle_exception
web_1                     |     response = super(Endpoint, self).handle_exception(exc)
web_1                     |   File "/usr/local/lib/python2.7/site-packages/rest_framework/views.py", line 449, in handle_exception
web_1                     |     self.raise_uncaught_exception(exc)
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/api/base.py", line 196, in dispatch
web_1                     |     response = handler(request, *args, **kwargs)
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/api/endpoints/organization_group_index.py", line 179, in get
web_1                     |     {"count_hits": True, "date_to": end, "date_from": start},
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/api/endpoints/organization_group_index.py", line 48, in _search
web_1                     |     result = search.query(**query_kwargs)
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/search/snuba/backend.py", line 181, in query
web_1                     |     date_to=date_to,
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/search/snuba/executors.py", line 408, in query
web_1                     |     search_filters=search_filters,
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/search/snuba/executors.py", line 194, in snuba_search
web_1                     |     condition_resolver=snuba.get_snuba_column_name,
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/utils/snuba.py", line 808, in aliased_query
web_1                     |     **kwargs
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/utils/snuba.py", line 532, in raw_query
web_1                     |     return bulk_raw_query([snuba_params], referrer=referrer)[0]
web_1                     |   File "/usr/local/lib/python2.7/site-packages/sentry/utils/snuba.py", line 584, in bulk_raw_query
web_1                     |     error["message"]
web_1                     | QueryExecutionError: [210] Temporary failure in name resolution (clickhouse:9000)
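
The “Temporary failure in name resolution (clickhouse:9000)” part means the web container cannot resolve the clickhouse hostname, which is expected while the clickhouse container is down. This can be verified from inside the web container, for example (assuming getent and Python are available in the image):

docker-compose exec web getent hosts clickhouse        # resolves only while clickhouse is up
docker-compose exec web python -c "import socket; print(socket.gethostbyname('clickhouse'))"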

Logs from “clickhouse” service:

Attaching to sentry_onpremise_clickhouse_1
clickhouse_1              | Include not found: clickhouse_remote_servers
clickhouse_1              | Include not found: clickhouse_compression
clickhouse_1              | Logging trace to /var/log/clickhouse-server/clickhouse-server.log
clickhouse_1              | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse_1              | Include not found: networks
clickhouse_1              | Include not found: clickhouse_remote_servers
clickhouse_1              | Include not found: clickhouse_compression
clickhouse_1              | Include not found: clickhouse_remote_servers
clickhouse_1              | Include not found: clickhouse_compression
clickhouse_1              | Logging trace to /var/log/clickhouse-server/clickhouse-server.log
clickhouse_1              | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse_1              | Include not found: networks
clickhouse_1              | Include not found: clickhouse_remote_servers
clickhouse_1              | Include not found: clickhouse_compression
clickhouse_1              | Include not found: clickhouse_remote_servers
clickhouse_1              | Include not found: clickhouse_compression
clickhouse_1              | Logging trace to /var/log/clickhouse-server/clickhouse-server.log
clickhouse_1              | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse_1              | Include not found: networks
clickhouse_1              | Include not found: clickhouse_remote_servers
clickhouse_1              | Include not found: clickhouse_compression
sentry_onpremise_clickhouse_1 exited with code 137

Well, those services are core to processing and gathering events, so it’s no wonder you are getting these errors. The clickhouse logs don’t suggest anything out of the ordinary, so my suspicion is insufficient resources on the system, especially memory or disk space. Exit code 137 usually means the container was killed, often by the kernel OOM killer, which would fit a memory shortage.
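
A few quick checks would confirm that (a sketch only, with the container name taken from your docker ps output above):

free -h                                                # memory and swap on the host
df -h                                                  # disk space on the host
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' sentry_onpremise_clickhouse_1
dmesg | grep -iE "killed process|out of memory"        # kernel OOM killer messages, if any

Exit code 137 together with OOMKilled=true would point squarely at memory. Once enough memory or disk is freed up (or added), docker-compose up -d should bring the exited clickhouse and snuba-consumer containers back.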

