Leonard Xu created FLINK-16070:
----------------------------------
Summary: Blink planner can not extract correct unique key for UpsertStreamTableSink
Key: FLINK-16070
URL: https://issues.apache.org/jira/browse/FLINK-16070
Project: Flink
Issue Type: Bug
Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Leonard Xu
Fix For: 1.11.0
I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on the mailing list [1]: the Blink planner can not extract the correct unique key for the following query, while the legacy planner works well.
{code:sql}
-- user query
INSERT INTO ES6_ZHANGLE_OUTPUT
SELECT aggId, pageId, ts_min as ts,
       count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
       count(case when eventId = 'click' then 1 else null end) as clkCnt
FROM (
    SELECT 'ZL_001' as aggId, pageId, eventId, recvTime, ts2Date(recvTime) as ts_min
    FROM kafka_zl_etrack_event_stream
    WHERE eventId in ('exposure', 'click')
) as t1
GROUP BY aggId, pageId, ts_min
{code}
I found that the Blink planner can not extract the unique key in {{FlinkRelMetadataQuery.getUniqueKeys(relNode)}}, while the legacy planner works well in {{org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)}}. A simple ETL job that reproduces this issue can be found at [2].
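For reference, below is a minimal sketch of how the same query can be run against both planners to compare their behaviour. The table names, column names and the {{ts2Date}} UDF come from the query above; the class name, the planner flag and the placeholder registration comment are my assumptions and simplified, see [2] for the full reproduction job.
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Hypothetical repro harness: run the same INSERT with the Blink planner
// and the legacy planner to compare how the unique key is derived for the
// UpsertStreamTableSink.
public class UpsertKeyRepro {

    public static void main(String[] args) throws Exception {
        // Flip this flag to switch between the Blink and the legacy planner.
        boolean useBlink = true;

        EnvironmentSettings settings = useBlink
                ? EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
                : EnvironmentSettings.newInstance().useOldPlanner().inStreamingMode().build();

        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Register the kafka_zl_etrack_event_stream source, the ES6_ZHANGLE_OUTPUT
        // upsert sink and the ts2Date UDF here; see [2] for the actual definitions.

        tEnv.sqlUpdate(
                "INSERT INTO ES6_ZHANGLE_OUTPUT "
                        + "SELECT aggId, pageId, ts_min as ts, "
                        + "  count(case when eventId = 'exposure' then 1 else null end) as expoCnt, "
                        + "  count(case when eventId = 'click' then 1 else null end) as clkCnt "
                        + "FROM ( "
                        + "  SELECT 'ZL_001' as aggId, pageId, eventId, recvTime, ts2Date(recvTime) as ts_min "
                        + "  FROM kafka_zl_etrack_event_stream "
                        + "  WHERE eventId in ('exposure', 'click') "
                        + ") as t1 "
                        + "GROUP BY aggId, pageId, ts_min");

        // With the legacy planner the expected unique key (aggId, pageId, ts) is derived;
        // with the Blink planner it is not, which is the bug reported here.
        tEnv.execute("upsert-key-repro");
    }
}
{code}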
[1] http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html
[2] https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java
--
This message was sent by Atlassian Jira
(v8.3.4#803005)