Hi,
I am running a fresh Sentry on-premise install, and the Kafka volume uses quite a lot of disk space; it fluctuates between 30 and 50 GB.
[root@sentry _data]# du -a ./ | sort -n -r | head -n 20
31366904 ./
19287144 ./events-0
11669384 ./ingest-events-0
1457600 ./events-0/00000000000003234392.log
1048576 ./events-0/00000000000003050169.log
1048572 ./ingest-events-0/00000000000003877121.log
1048568 ./ingest-events-0/00000000000003973500.log
1048568 ./events-0/00000000000003197501.log
1048564 ./ingest-events-0/00000000000003997306.log
1048560 ./ingest-events-0/00000000000003781716.log
1048560 ./ingest-events-0/00000000000003757783.log
1048560 ./events-0/00000000000003160921.log
1048556 ./events-0/00000000000003112972.log
1048556 ./events-0/00000000000003088069.log
1048552 ./events-0/00000000000003062748.log
1048548 ./ingest-events-0/00000000000003925157.log
1048544 ./events-0/00000000000003172902.log
1048544 ./events-0/00000000000003025446.log
1048540 ./events-0/00000000000003136947.log
1048536 ./events-0/00000000000003149016.log
docker-compose has these parameters, which are the defaults:
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
KAFKA_LOG_RETENTION_HOURS: "24"
KAFKA_MESSAGE_MAX_BYTES: "50000000" # 50MB or bust
KAFKA_MAX_REQUEST_SIZE: "50000000" # 50MB on requests apparently too
CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
KAFKA_LOG4J_LOGGERS: "kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN"
KAFKA_LOG4J_ROOT_LOGLEVEL: "WARN"
KAFKA_TOOLS_LOG4J_LOGLEVEL: "WARN"
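For context, I was considering adding size-based retention alongside the time-based one, since the cp-kafka image maps `KAFKA_*` env vars onto broker settings (`log.retention.bytes`, `log.segment.bytes`). The values below are just examples I picked, not recommendations, and I'm not sure these are the right knobs:

```yaml
# Sketch only: per-partition size cap plus smaller segments,
# so old segments become eligible for deletion sooner.
KAFKA_LOG_RETENTION_BYTES: "5368709120"   # ~5GB per partition (example value)
KAFKA_LOG_SEGMENT_BYTES: "268435456"      # 256MB segments (default is 1GB)
KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: "300000"  # check every 5 min
```

Note that `log.retention.bytes` applies per partition, and a segment is only deleted once it is closed, which is why smaller `log.segment.bytes` might matter here.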
Is there some elegant way to tell Kafka to delete those logs regularly and consume less space?