Tom Schamberger created FLINK-11186:
---------------------------------------
Summary: Support for event-time balancing across multiple Kafka consumer partitions
Key: FLINK-11186
URL: https://issues.apache.org/jira/browse/FLINK-11186
Project: Flink
Issue Type: New Feature
Components: DataStream API, Kafka Connector
Reporter: Tom Schamberger
Currently, Flink cannot back-pressure individual Kafka partitions that are ahead in terms of event-time. This causes unnecessary memory consumption and can result in deadlocks once back-pressure sets in.
When multiple Kafka partitions are consumed, downstream event-time window operators have to wait until the slowest partition has produced a sufficiently high watermark before they can be triggered. If partitions differ in read performance, or event-time does not progress uniformly within partitions, 'fast' partitions (those whose event-time advances quickly) outrun slower ones. The resulting buffering eventually triggers back-pressure that stops all partitions from being consumed, which deadlocks the application.
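For illustration, a minimal sketch (Flink 1.7-era DataStream API, universal Kafka connector) of the setup that runs into this: per-partition watermarks inside the consumer plus an event-time window downstream. The topic name, bootstrap servers, and the "timestampMillis,payload" record format are assumptions made only for this example.

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class EventTimeSkewDemo {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "skew-demo");

        // Records are assumed to be "timestampMillis,payload" strings.
        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);

        // Per-partition watermarking: one assigner instance runs for each Kafka
        // partition inside the consumer; the watermark emitted by a subtask is the
        // minimum over all of its partitions.
        consumer.assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<String>(Time.seconds(10)) {
                    @Override
                    public long extractTimestamp(String record) {
                        return Long.parseLong(record.split(",", 2)[0]);
                    }
                });

        env.addSource(consumer)
                // The window fires only once the combined watermark (i.e. the slowest
                // partition) passes the window end; fast partitions keep being read
                // and buffered in window state in the meantime.
                .windowAll(TumblingEventTimeWindows.of(Time.minutes(1)))
                .reduce((a, b) -> a + "|" + b)
                .print();

        env.execute("Event-time skew across Kafka partitions");
    }
}
{code}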
I suggest that windows should be able to back-pressure individual partitions that progress faster in terms of event-time, so that slow partitions can keep up.
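As a rough sketch of what such a knob could look like on the consumer side; {{setMaxEventTimeSkew}} is a purely hypothetical method used for illustration and is not part of the current Kafka connector API:

{code:java}
// Hypothetical API, for illustration only: bound the event-time skew the consumer
// tolerates between its partitions. Partitions whose local watermark is more than
// the configured skew ahead of the slowest partition would be paused until the
// slower ones catch up.
FlinkKafkaConsumer<String> consumer =
        new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);
consumer.setMaxEventTimeSkew(Time.minutes(5)); // hypothetical, does not exist today
{code}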
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)