Hi there. I recently set up a server to run Sentry. I wrote all the docker-compose files and configs myself, and it works for some projects, but for one specific project I get the error below from the Relay server:
```
relay | 2021-02-23T07:18:24Z [relay_server::actors::events] ERROR: project state for key ########### is missing key id
relay | 2021-02-23T07:18:25Z [relay_server::actors::events] ERROR: project state for key ########### is missing key id
```
Can anyone help me work around this problem?
BYK
March 4, 2021, 6:31pm (#2)
Hi there! Sorry, but it is very hard for us to support custom setups, as it is impossible to know all the details. Is there any reason you are not using our supported onpremise repo?
Yes, indeed there is.
Your repo only works for those with full internet access, and our servers are offline. I would need unrestricted internet access to Docker registries and the other sites the on-premise repo contacts at startup, but here we are blocked from reaching Docker registries and those other sites.
BYK
May 3, 2021, 8:54am (#4)
It should be possible to download and ship those docker images to your offline network just like you do for our other code?
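For what it's worth, a common pattern for air-gapped setups is to pull the images on a machine that does have internet access, export them to a tarball, and import them on the offline host. A rough sketch (the image names below are just examples; substitute whichever images your compose file references):

```shell
# On a machine WITH internet access: pull the images and export them to a tarball
docker pull getsentry/relay:21.1.0
docker pull getsentry/snuba:21.1.0
docker save getsentry/relay:21.1.0 getsentry/snuba:21.1.0 -o sentry-images.tar

# Copy sentry-images.tar to the offline host (USB drive, internal file share, ...),
# then import the images there so docker-compose can use them without pulling:
docker load -i sentry-images.tar
```

`docker save`/`docker load` preserve the image tags, so the offline host's compose file needs no changes.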
Besides, in the other questions I have seen here on the forum, even when someone asked about a problem with your repo, all the answers they got were "we are thinking about it" or "update the repo and the problem will be solved!".
That is not the support people need.
I rarely saw a problem actually get solved!
You can find some here:
> Following up here - any thoughts on what might be happening?
On the other pages I don't see any helpful answers from you either, even when the requester is using the repository you mentioned.
BYK
June 28, 2021, 8:15am (#8)
It is not possible to help you when you just share a single line of generic error message, sorry.
If you can provide more details, we'd try to help. I don't think digging through the forum for posts to show us we are not being helpful is a good use of either your time or ours.
How can I share more info with you?
I have these lines related to the Relay server:
```
relay | ERROR relay_server::actors::events > project state for key f2fc60f1d7d04ade83d98d38fe988429 is missing key id
relay | ERROR relay_server::actors::events > project state for key f2fc60f1d7d04ade83d98d38fe988429 is missing key id
relay | DEBUG relay_server::actors::project > project 48459058a971423a9e59584b05348cfd state requested
relay | DEBUG relay_server::actors::project > project state 48459058a971423a9e59584b05348cfd updated
relay | ERROR relay_server::actors::events > project state for key 48459058a971423a9e59584b05348cfd is missing key id
```
And my Relay service settings:
```yaml
---
relay:
  mode: proxy
  #mode: static
  upstream: "http://sentry:9000/"
  tls_port: ~
  tls_identity_path: ~
  tls_identity_password: ~
logging:
  level: debug
  format: pretty
  enable_backtraces: true
processing:
  enabled: true
  kafka_config:
    - {name: "bootstrap.servers", value: "kafka:9092"}
    - {name: "message.max.bytes", value: 50000000} # 50MB or bust
  redis: redis://sentry-redis:6379
  #geoip_path: "/geoip/GeoLite2-City.mmdb"
http:
  _client: "reqwest"
```
And my docker-compose.yml file:
```yaml
version: '3'
#--------------------------------------------------------------
volumes:
  nginx_dir:
  nginx_logs:
  sentry_data:
  sentry_setting:
  sentry_config:
  pgdb_data:
  redis_data:
  relay_config:
    driver: "local"
  relay_geoip:
    driver: "local"
  clickhouse_config:
  clickhouse_log:
  clickhouse:
  sentry_symbolicator:
  zookeeper:
  zookeeper_log:
  zookeeper_secrets:
  kafka:
  kafka_log:
  kafka_settings:
#--------------------------------------------------------------
services:
  nginx:
    image: nginx:1.19.6
    container_name: nginx
    restart: always
    environment:
      TZ: "Asia/Tehran"
    depends_on:
      - "relay"
    volumes:
      - "nginx_dir:/etc/nginx"
      - "nginx_logs:/var/log/nginx"
    ports:
      - "80:80"
      - "443:443"
  #--------------------------------------------------------------
  sentry-redis:
    image: redis:5
    container_name: sentry-redis
    restart: always
    privileged: true
    # environment:
    #   TZ: "Asia/Tehran"
    volumes:
      - "redis_data:/data"
  #--------------------------------------------------------------
  sentry-postgres:
    image: postgres:9.6.20
    container_name: sentry-postgres
    restart: always
    environment:
      POSTGRES_USER: "sentry"
      POSTGRES_PASSWORD: "sentry"
      POSTGRES_DB: "sentry"
      # TZ: "Asia/Tehran"
    volumes:
      - "pgdb_data:/var/lib/postgresql/data"
    depends_on:
      - "sentry-redis"
  #--------------------------------------------------------------
  # generate secret key:
  #   docker run --rm sentry:9.1.2 config generate-secret-key
  # one-off command for initializing the databases and Sentry:
  #   docker run -it --rm -e SENTRY_SECRET_KEY='######' --link sentry-postgres:postgres --link sentry-redis:redis sentry:9.1.2 upgrade
  sentry-run:
    image: sentry:9.1.2
    container_name: sentry-run
    stdin_open: true
    tty: true
    links:
      - "sentry-redis"
      - "sentry-postgres"
    environment:
      SENTRY_SECRET_KEY: "#######"
      SENTRY_POSTGRES_HOST: "sentry-postgres"
      SENTRY_DB_USER: "sentry"
      SENTRY_DB_PASSWORD: "sentry"
      SENTRY_REDIS_HOST: "sentry-redis"
      SENTRY_ADMIN_EMAIL: "root@localhost"
    depends_on:
      - "sentry-redis"
      - "sentry-postgres"
    command:
      - "upgrade"
  #--------------------------------------------------------------
  # user: root@localhost
  # pass: ######
  sentry:
    image: sentry:9.1.2
    container_name: sentry
    restart: always
    links:
      - "sentry-redis"
      - "sentry-postgres"
    environment:
      # TZ: "Asia/Tehran"
      SENTRY_SECRET_KEY: "######"
      SENTRY_POSTGRES_HOST: "sentry-postgres"
      SENTRY_DB_USER: "sentry"
      SENTRY_DB_PASSWORD: "sentry"
      SENTRY_REDIS_HOST: "sentry-redis"
      SENTRY_ADMIN_EMAIL: "root@localhost"
      SENTRY_ALLOW_REGISTRATION: "false"
      SENTRY_USE_REMOTE_USER: "true"
      SNUBA: "http://snuba-api:1218"
      SENTRY_SAMPLE_DATA: "false"
    depends_on:
      - "sentry-redis"
      - "sentry-postgres"
    volumes:
      - "sentry_data:/var/lib/sentry/files"
      - "sentry_config:/data"
      - "sentry_setting:/etc/sentry/"
  #--------------------------------------------------------------
  cron:
    image: sentry:9.1.2
    container_name: cron
    restart: always
    links:
      - "sentry-redis"
      - "sentry-postgres"
    command: "sentry run cron"
    environment:
      SENTRY_SECRET_KEY: "######"
      SENTRY_POSTGRES_HOST: "sentry-postgres"
      SENTRY_DB_USER: "sentry"
      SENTRY_DB_PASSWORD: "sentry"
      SENTRY_REDIS_HOST: "sentry-redis"
      # TZ: "Asia/Tehran"
    depends_on:
      - "sentry"
  #--------------------------------------------------------------
  worker:
    image: sentry:9.1.2
    container_name: worker
    restart: always
    links:
      - "sentry-redis"
      - "sentry-postgres"
    command: "sentry run worker"
    environment:
      SENTRY_SECRET_KEY: "######"
      SENTRY_POSTGRES_HOST: "sentry-postgres"
      SENTRY_DB_USER: "sentry"
      SENTRY_DB_PASSWORD: "sentry"
      SENTRY_REDIS_HOST: "sentry-redis"
      # TZ: "Asia/Tehran"
    depends_on:
      - "sentry"
  #--------------------------------------------------------------
  # FYI: refer to https://docs.sentry.io/product/relay/getting-started/
  #
  # docker run --rm -it -v /mnt/storage/root-dir-docker/volumes/docker-services_relay_config/_data/:/work/.relay/ --entrypoint bash getsentry/relay:21.1.0 -c 'chown -R relay:relay /work/.relay'
  # docker run --rm -it -v /mnt/storage/root-dir-docker/volumes/docker-services_relay_config/_data/:/work/.relay/ getsentry/relay:21.1.0 config init
  relay:
    image: getsentry/relay:21.1.0
    container_name: relay
    restart: always
    depends_on:
      - "sentry-redis"
    # environment:
    #   TZ: "Asia/Tehran"
    volumes:
      - "relay_config:/work/.relay:rw"
      - "relay_geoip:/geoip"
  #--------------------------------------------------------------
  clickhouse:
    image: yandex/clickhouse-server:21.1.2.15
    container_name: clickhouse
    restart: always
    environment:
      # TZ: "Asia/Tehran"
      MAX_MEMORY_USAGE_RATIO: 0.3
    volumes:
      - "clickhouse:/var/lib/clickhouse"
      - "clickhouse_log:/var/log/clickhouse-server"
      - "clickhouse_config:/etc/clickhouse-server/config.d"
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
  #--------------------------------------------------------------
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0
    container_name: zookeeper
    restart: always
    environment:
      # TZ: "Asia/Tehran"
      ZOOKEEPER_CLIENT_PORT: "2181"
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: "WARN"
      ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL: "WARN"
    volumes:
      - "zookeeper:/var/lib/zookeeper/data"
      - "zookeeper_log:/var/lib/zookeeper/log"
      - "zookeeper_secrets:/etc/zookeeper/secrets"
  #--------------------------------------------------------------
  kafka:
    image: confluentinc/cp-kafka:5.5.0
    container_name: kafka
    restart: always
    depends_on:
      - "zookeeper"
    environment:
      # TZ: "Asia/Tehran"
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
      KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
      KAFKA_LOG_RETENTION_HOURS: "24"
      KAFKA_MESSAGE_MAX_BYTES: "50000000" # 50MB or bust
      KAFKA_MAX_REQUEST_SIZE: "50000000" # 50MB on requests apparently too
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_LOG4J_LOGGERS: "kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN"
      KAFKA_LOG4J_ROOT_LOGLEVEL: "WARN"
      KAFKA_TOOLS_LOG4J_LOGLEVEL: "WARN"
    volumes:
      - "kafka:/var/lib/kafka/data"
      - "kafka_log:/var/lib/kafka/log"
      - "kafka_settings:/etc/kafka"
  #--------------------------------------------------------------
  snuba-api:
    image: getsentry/snuba:21.1.0
    container_name: snuba-api
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      # SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      # UWSGI_MAX_REQUESTS: "10000"
      # UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
  #--------------------------------------------------------------
  # docker-compose exec -- kafka kafka-topics --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic events
  snuba-consumer:
    image: getsentry/snuba:21.1.0
    container_name: snuba-consumer
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
    command: consumer --storage events --auto-offset-reset=latest --max-batch-time-ms 750
  #--------------------------------------------------------------
  snuba-outcomes-consumer:
    image: getsentry/snuba:21.1.0
    container_name: snuba-outcomes-consumer
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
    command: consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750
  #--------------------------------------------------------------
  snuba-sessions-consumer:
    image: getsentry/snuba:21.1.0
    container_name: snuba-sessions-consumer
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
    command: consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750
  #--------------------------------------------------------------
  snuba-transactions-consumer:
    image: getsentry/snuba:21.1.0
    container_name: snuba-transactions-consumer
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
    command: consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --commit-log-topic=snuba-commit-log
  #--------------------------------------------------------------
  snuba-replacer:
    image: getsentry/snuba:21.1.0
    container_name: snuba-replacer
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
    command: replacer --storage events --auto-offset-reset=latest --max-batch-size 3
  #--------------------------------------------------------------
  snuba-subscription-consumer-events:
    image: getsentry/snuba:21.1.0
    container_name: snuba-subscription-consumer-events
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
    command: subscriptions --auto-offset-reset=latest --consumer-group=snuba-events-subscriptions-consumers --topic=events --result-topic=events-subscription-results --dataset=events --commit-log-topic=snuba-commit-log --commit-log-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60
  #--------------------------------------------------------------
  snuba-subscription-consumer-transactions:
    image: getsentry/snuba:21.1.0
    container_name: snuba-subscription-consumer-transactions
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
    command: subscriptions --auto-offset-reset=latest --consumer-group=snuba-transactions-subscriptions-consumers --topic=events --result-topic=transactions-subscription-results --dataset=transactions --commit-log-topic=snuba-commit-log --commit-log-group=transactions_group --delay-seconds=60 --schedule-ttl=60
  #--------------------------------------------------------------
  snuba-cleanup:
    image: getsentry/snuba:21.1.0
    container_name: snuba-cleanup
    restart: always
    depends_on:
      - "sentry-redis"
      - "clickhouse"
      - "kafka"
    environment:
      # TZ: "Asia/Tehran"
      SNUBA_SETTINGS: docker
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: "sentry-redis"
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS: 30
    command: '"*/5 * * * * gosu snuba snuba cleanup --dry-run False"'
  #--------------------------------------------------------------
  symbolicator:
    image: getsentry/symbolicator:nightly
    container_name: symbolicator
    restart: always
    volumes:
      - "sentry_symbolicator:/etc/symbolicator"
    command: run -c /etc/symbolicator/config.yml
  #--------------------------------------------------------------
  symbolicator-cleanup:
    image: getsentry/snuba:21.1.0
    container_name: symbolicator-cleanup
    restart: always
    command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
    volumes:
      - "sentry_symbolicator:/data"
  #--------------------------------------------------------------
```
That is how my services are running, but not properly. My question is: how exactly have you helped the other requesters who provided you with more info about their problems?
All I saw was a single line from you asking "any update?", or just saying "update the repo and run docker again".
That is not support.
BYK
July 6, 2021, 9:51am (#11)
Mind you, Sentry self-hosted is offered as-is, without any support or guarantees. If you don't want to deal with these cases or need dedicated support, you can always stop using self-hosted and/or start using our cloud-hosted service. You can read more about self-hosted support here: https://develop.sentry.dev/self-hosted/support/
Now, with that out of the way, it looks like you are changing your Relay's mode. It should run in `managed` mode, which is the default, and I'm quite sure that's the reason for your issues. Can you try removing that `mode: proxy` from your `relay` config and restarting it?
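For reference, a minimal sketch of what the Relay config posted earlier might look like with the `mode:` line removed, so Relay falls back to its default `managed` mode (values other than the removed `mode` keys are taken from the original config; this is an illustration, not a tested setup):

```yaml
---
relay:
  # No "mode:" key here: Relay then defaults to "managed" mode,
  # in which it fetches full project configs (including key ids)
  # from the upstream instead of inferring them locally.
  upstream: "http://sentry:9000/"
  tls_port: ~
  tls_identity_path: ~
  tls_identity_password: ~
logging:
  level: debug
  format: pretty
  enable_backtraces: true
processing:
  enabled: true
  kafka_config:
    - {name: "bootstrap.servers", value: "kafka:9092"}
    - {name: "message.max.bytes", value: 50000000}
  redis: redis://sentry-redis:6379
```

After editing the file, restarting the container (e.g. `docker-compose restart relay`) and watching the logs should show whether the `missing key id` errors stop.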
system (Closed)
October 4, 2021, 9:52am (#12)
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.