To clean up logs, I truncated some log segment files in Kafka:
```
root@871d7856246d:/var/lib/kafka/data/events-0# pwd
/var/lib/kafka/data/events-0
-rw-r--r-- 1 root root 0 Apr 5 13:42 00000000000018096585.log
-rw-r--r-- 1 root root 0 Apr 5 13:42 00000000000017849768.log
-rw-r--r-- 1 root root 0 Apr 5 13:42 00000000000017574826.log
-rw-r--r-- 1 root root 0 Apr 5 13:42 00000000000017307174.log
-rw-r--r-- 1 root root 10 Apr 5 10:27 00000000000018352981.snapshot
-rw-r--r-- 1 root root 1486168 Apr 5 10:27 00000000000018096585.index
```
After doing that, I see the following errors on startup. Does Kafka need a preallocated log size on startup? Can this be fixed somehow? @BYK
**ingest-consumer_1 | 14:26:48 [INFO] batching-kafka-consumer: New partitions assigned: [TopicPartition{topic=ingest-attachments,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-events,partition=0,offset=-1001,error=None}, TopicPartition{topic=ingest-transactions,partition=0,offset=-1001,error=None}]**
**kafka_1 | [2021-04-05 14:26:48,226] ERROR [ReplicaManager broker=1001] Error processing fetch with max size 1048576 from consumer on partition ingest-transactions-0: (fetchOffset=11632139, logStartOffset=-1, maxBytes=1048576, currentLeaderEpoch=Optional.empty) (kafka.server.ReplicaManager)**
**kafka_1 | org.apache.kafka.common.errors.CorruptRecordException: Found record size 0 smaller than minimum record overhead (14) in file /var/lib/kafka/data/ingest-transactions-0/00000000000011627018.log.**
**ingest-consumer_1 | * Unknown config option found: 'slack.legacy-app'**
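For the record, the cleanup I was after can apparently be done through Kafka's own retention settings instead of zero-truncating segment files by hand (which seems to be what corrupted the partition). A hedged sketch, assuming a broker reachable at `localhost:9092` and the `ingest-transactions` topic; the retention values are placeholders:

```shell
# Temporarily lower the topic's retention so the log cleaner deletes old
# segments itself, keeping offsets and segment metadata consistent.
# (broker address, topic name, and retention window are assumptions)
kafka-configs --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name ingest-transactions \
  --add-config retention.ms=3600000

# After the cleaner has removed the old segments, drop the override so the
# topic falls back to the broker's default retention policy.
kafka-configs --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name ingest-transactions \
  --delete-config retention.ms
```

Depending on the image, the tool may be installed as `kafka-configs.sh`. This is a sketch of the retention-based approach, not something I have verified against this broker.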