Been running an on-premise install of Sentry for about a year now. I recently upgraded to the latest version and just noticed we stopped getting events from our client applications. Checking the proxy logs, I can see the clients trying to send events in, but they're getting a 502 error back. I can log into the Sentry app fine and navigate around without issue. In the nginx logs, I see it complaining that it's failing to connect to the upstream host 172.18.0.29:3000. But when I do a docker network inspect on the onpremise network to see what .29 is, I don't find a .29 address listed at all. I tried restarting, and that did not help; everything appeared to come up clean. I've checked everything I can find to check, but I'm not really sure what to do next.
Any help or suggestions on how to troubleshoot this?
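For reference, this is roughly how I was checking which container owns that address (I'm guessing at the network name from the default compose project naming, so yours may differ), and it turned up nothing for .29:

docker network inspect sentry_onpremise_default | grep -B 3 '172.18.0.29'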
That must be relay. Can you check if your relay instance is healthy and up?
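Something like this from the onpremise directory should tell you (assuming the service is named relay in your docker-compose.yml):

docker-compose ps relay
docker-compose logs --tail=50 relay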
You appear to be correct.
Name Command State Ports
------------------------------------------------------------------------------------------------------------------------------------------
sentry_onpremise_clickhouse_1 /entrypoint.sh Up (healthy) 8123/tcp, 9000/tcp, 9009/tcp
sentry_onpremise_cron_1 /etc/sentry/entrypoint.sh ... Up 9000/tcp
sentry_onpremise_geoipupdate_1 /usr/bin/geoipupdate -d /s ... Exit 1
sentry_onpremise_ingest-consumer_1 /etc/sentry/entrypoint.sh ... Up 9000/tcp
sentry_onpremise_kafka_1 /etc/confluent/docker/run Up (healthy) 9092/tcp
sentry_onpremise_memcached_1 docker-entrypoint.sh memcached Up (healthy) 11211/tcp
sentry_onpremise_nginx_1 /docker-entrypoint.sh ngin ... Up 0.0.0.0:9000->80/tcp
sentry_onpremise_post-process-forwarder_1 /etc/sentry/entrypoint.sh ... Up 9000/tcp
sentry_onpremise_postgres_1 /opt/sentry/postgres-entry ... Up (healthy) 5432/tcp
sentry_onpremise_redis_1 docker-entrypoint.sh redis ... Up (healthy) 6379/tcp
sentry_onpremise_relay_1 /bin/bash /docker-entrypoi ... Restarting
sentry_onpremise_sentry-cleanup_1 /entrypoint.sh 0 0 * * * g ... Up 9000/tcp
sentry_onpremise_smtp_1 docker-entrypoint.sh exim ... Up 25/tcp
sentry_onpremise_snuba-api_1 ./docker_entrypoint.sh api Up 1218/tcp
sentry_onpremise_snuba-cleanup_1 /entrypoint.sh */5 * * * * ... Up 1218/tcp
sentry_onpremise_snuba-consumer_1 ./docker_entrypoint.sh con ... Up 1218/tcp
sentry_onpremise_snuba-outcomes-consumer_1 ./docker_entrypoint.sh con ... Up 1218/tcp
sentry_onpremise_snuba-replacer_1 ./docker_entrypoint.sh rep ... Up 1218/tcp
sentry_onpremise_snuba-sessions-consumer_1 ./docker_entrypoint.sh con ... Up 1218/tcp
sentry_onpremise_snuba-subscription-consumer-events_1 ./docker_entrypoint.sh sub ... Up 1218/tcp
sentry_onpremise_snuba-subscription-consumer-transactions_1 ./docker_entrypoint.sh sub ... Up 1218/tcp
sentry_onpremise_snuba-transactions-cleanup_1 /entrypoint.sh */5 * * * * ... Up 1218/tcp
sentry_onpremise_snuba-transactions-consumer_1 ./docker_entrypoint.sh con ... Up 1218/tcp
sentry_onpremise_subscription-consumer-events_1 /etc/sentry/entrypoint.sh ... Up 9000/tcp
sentry_onpremise_subscription-consumer-transactions_1 /etc/sentry/entrypoint.sh ... Up 9000/tcp
sentry_onpremise_symbolicator-cleanup_1 /entrypoint.sh 55 23 * * * ... Up 3021/tcp
sentry_onpremise_symbolicator_1 /bin/bash /docker-entrypoi ... Up 3021/tcp
sentry_onpremise_web_1 /etc/sentry/entrypoint.sh ... Up (healthy) 9000/tcp
sentry_onpremise_worker_1 /etc/sentry/entrypoint.sh ... Up 9000/tcp
sentry_onpremise_zookeeper_1 /etc/confluent/docker/run Up (healthy) 2181/tcp, 2888/tcp, 3888/tcp
Checking the relay logs, I just see this repeated over and over:
relay_1 | error: could not parse json config file (file /work/.relay/credentials.json)
relay_1 | caused by: expected value at line 1 column 1
OK, from what you helped me find, I see that the credentials.json file was not valid. It actually contained:
cat credentials.json
error: could not open config file (file /tmp/config.yml)
caused by: Permission denied (os error 13)
This confused me at first. Then I realized that that error message is the actual content of the file.
So I removed the file and ran install.sh again. This appears to have created a proper credentials.json file, with a secret_key, public_key, and id. But when I tried to start everything back up, relay was still restarting.
I see this in its logs now.
relay_1 | 2021-10-07T21:17:34Z [relay::setup] INFO: launching relay without config folder
relay_1 | 2021-10-07T21:17:34Z [relay::setup] INFO: relay mode: managed
relay_1 | 2021-10-07T21:17:34Z [relay::setup] INFO: relay id: -
relay_1 | 2021-10-07T21:17:34Z [relay::setup] INFO: public key: -
relay_1 | 2021-10-07T21:17:34Z [relay::setup] INFO: log level: INFO
relay_1 | 2021-10-07T21:17:34Z [relay_log::utils] ERROR: relay has no credentials, which are required in managed mode. Generate some with "relay credentials generate" first.
Do I have to copy that JSON file somewhere else, since the original was messed up? Not sure what to do now.
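For reference, the regenerated file now looks like this (values redacted). It sits in the relay/ directory of the onpremise checkout, which I assume is what gets mounted to /work/.relay inside the container:

cat relay/credentials.json
{"secret_key":"<redacted>","public_key":"<redacted>","id":"<redacted>"}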
Just reporting back in case anyone else runs into this. My final problem turned out to be a permissions issue: a chown and a chmod to open up the permissions on the relay config files fixed it for me.
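Roughly what I ran, in case it helps (from the onpremise directory; treat this as a sketch, since the ownership you need may differ on your install):

sudo chown -R $USER:$USER relay/
sudo chmod -R a+rX relay/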
Thanks @BYK for pointing me in the correct direction!