I extracted the archive and ran install.sh.
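For reference, this is roughly what I ran; the archive and directory names are placeholders, not the exact ones from my download:

```bash
# Unpack the self-hosted Sentry release and run the installer.
# <version> is a placeholder; use the archive you actually downloaded.
tar -xzf onpremise-<version>.tar.gz
cd onpremise-<version>
./install.sh
```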
After the installation completed, I get the following error in the UI:
Background workers haven’t checked in recently. This is likely an issue with your configuration or the workers aren’t running
The following error also shows up in the Docker logs:
%3|1609765774.558|FAIL|rdkafka#consumer-2| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Connect to ipv4#10.100.12.11:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
Here's the installation log:
And the result of 'api/0/internal/health/' is:
{"problems":[{"id":"0becb7eeaef19cd7e28c66e13dbaa819","message":"There is 1 issue with your system configuration.","severity":"warning","url":null}],"healthy":{"CeleryAliveCheck":true,"CeleryAppVersionCheck":true,"WarningStatusCheck":false}}
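For reference, this is the path I queried; a minimal example, assuming Sentry is reachable at localhost:9000 (the endpoint may require an authenticated superuser session, so I actually viewed it in the browser while logged in):

```bash
# Query Sentry's internal health endpoint; replace the base URL with your own instance.
# Note: this may return 401/403 without a logged-in superuser session.
curl -s http://localhost:9000/api/0/internal/health/
```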
This is normal for very fresh installs. Just give it several minutes and this should go away.
Again, give it some time. If this persists, it seems like your Kafka instance is having issues. Otherwise, it is okay to see some transient errors like this while all the services get up and running.
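While you wait, you can confirm that the relevant containers are actually up; something like this, assuming the default service names from the onpremise docker-compose.yml:

```bash
# See whether the containers are running or stuck restarting (run from the onpremise directory).
docker-compose ps

# Follow the logs of the services involved in these two errors.
docker-compose logs -f kafka worker
```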
Thank you for the fast reply.
For the Kafka error (the connection refused one), I stopped all services, started zookeeper, kafka, and clickhouse first, and then started the other services. The error is gone. But I will bring up all services and wait a few minutes, as you said, to check it out.
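Roughly, this is what I did; the service names are the defaults from the onpremise docker-compose.yml:

```bash
# Stop everything, bring up the broker/storage services first, then the rest.
docker-compose stop
docker-compose up -d zookeeper kafka clickhouse
# ...wait until they are healthy, then:
docker-compose up -d
```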
I also fixed another error: Application: Listen [::]: 0: DNS error: EAI: -9 If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
with the help of: https://github.com/ClickHouse/ClickHouse/issues/4406
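For reference, this is roughly how I applied it; the config file location is an assumption (the ClickHouse config mounted into the container in my checkout), so adjust it to your setup:

```bash
# Assumption: the ClickHouse config mounted into the container is clickhouse/config.xml.
# My host has IPv6 disabled, so I added the following inside the existing <yandex> element:
#     <listen_host>0.0.0.0</listen_host>
# and then restarted the container so ClickHouse rebinds:
docker-compose restart clickhouse
```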
And one more question: does Sentry really need GeoIP if it is going to be used in just one country, more precisely in one small company?
I changed the volumes from external to local volumes, because I want to store them on a different partition than /var. I started zookeeper, kafka, clickhouse, redis, postgres, memcached, and snuba-api successfully, but when I start snuba-consumer I get the following error: snuba.utils.streams.backends.abstract.ConsumerError: KafkaError{code=UNKNOWN_TOPIC_OR_PART,val=3,str="Subscribed topic not available: events: Broker: Unknown topic or partition"}
I also enabled auto.create.topics.enable=true in the Kafka server.properties, but nothing changed!
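In case it helps someone else, the topics the broker actually knows about can be listed from inside the Kafka container; a sketch, assuming the Confluent image from the default compose file (older setups may need kafka-topics.sh with --zookeeper zookeeper:2181 instead):

```bash
# List existing topics; "kafka" is the default compose service name,
# and the broker listens on kafka:9092 as in the error above.
docker-compose exec kafka kafka-topics --bootstrap-server kafka:9092 --list
```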
I think you may have permission issues after moving the volumes, and Kafka or ZooKeeper lost its data. This topic should already exist after a regular install.
I think this option stopped working in recent versions of Kafka. Even if it still works, it may still fail if you have the permission issues I suspect above.
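To confirm, check the ownership of the moved volume directories, and if the events topic really is missing, re-running the Snuba bootstrap step that install.sh performs should recreate the topics. A rough sketch, assuming the default compose service names and that /data/sentry is where you moved the volumes (both are assumptions; the exact bootstrap flags can differ between versions):

```bash
# If the container user cannot read/write the relocated data, Kafka/ZooKeeper come up "empty"
# and previously created topics appear to be gone.
ls -ld /data/sentry/sentry-kafka /data/sentry/sentry-zookeeper

# Recreate the Kafka topics / Snuba setup that install.sh normally performs.
docker-compose run --rm snuba-api bootstrap --force
```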