Hi all, we are seeing errors like the one below when many jobs publish to our Kafka topics during peak times. We want to confirm that Flink is not silently skipping the expired records and that we are not losing data. Our Kafka producer has retries set to Integer.MAX_VALUE, and we have noticed that these errors cause the job to restart. Does Kafka actually retry publishing the expired records before the exception is thrown, or does this exception cause Flink to restart the job as soon as it is raised?
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 55 record(s) for topic-1:120000 ms has passed since batch creation
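For context, the 120000 ms in that error corresponds to the producer's delivery.timeout.ms setting (whose Kafka default is 120000 ms): retries only happen within that window, and once a batch has sat longer than that since creation it is expired and the TimeoutException is raised. A minimal sketch of the producer properties involved (values shown are the Kafka defaults, not necessarily our actual config; property names are the standard Kafka producer config keys):

```java
import java.util.Properties;

public class ProducerTimeoutConfig {
    // Sketch of the producer settings that govern the "batch creation" timeout.
    public static Properties producerProps() {
        Properties props = new Properties();
        // retries: how many times a failed send is retried. With
        // Integer.MAX_VALUE, the effective limit is delivery.timeout.ms below.
        props.put("retries", String.valueOf(Integer.MAX_VALUE));
        // delivery.timeout.ms: upper bound on the time from when a record is
        // batched until it either succeeds or fails. The "120000 ms has passed
        // since batch creation" error is this timeout expiring.
        props.put("delivery.timeout.ms", "120000");
        // request.timeout.ms: how long a single produce request may wait for a
        // broker response; individual retries fit inside delivery.timeout.ms.
        props.put("request.timeout.ms", "30000");
        return props;
    }
}
```

So the producer does retry, but only until delivery.timeout.ms elapses; raising that value (or relieving the broker-side bottleneck) gives the retries more room before the batch expires.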