yuemeng created FLINK-14666:
-------------------------------
Summary: Support multiple catalogs in Flink Table SQL
Key: FLINK-14666
URL: https://issues.apache.org/jira/browse/FLINK-14666
Project: Flink
Issue Type: Bug
Components: Table SQL / Planner
Affects Versions: 1.9.1, 1.9.0, 1.8.2, 1.8.0
Reporter: yuemeng
Currently, Calcite uses only the current catalog as the schema path when validating a SQL node, which may not be reasonable:
{code}
tableEnvironment.useCatalog("user_catalog");
tableEnvironment.useDatabase("user_db");
Table table = tableEnvironment.sqlQuery("SELECT action, os, count(*) as cnt from music_queue_3 group by action, os, tumble(proctime, INTERVAL '10' SECOND)");
tableEnvironment.registerTable("v1", table);
Table t2 = tableEnvironment.sqlQuery("select action, os, 1 as cnt from v1");
tableEnvironment.registerTable("v2", t2);
tableEnvironment.sqlUpdate("INSERT INTO database2.kafka_table_test1 SELECT action, os, cast(cnt as BIGINT) as cnt from v2");
{code}
Suppose the source table music_queue_3 and the sink table kafka_table_test1 are both in user_catalog,
but temporary tables or views such as v1 and v2 are registered in the default catalog.
When we select from the temporary table v2 and insert into our own catalog's table database2.kafka_table_test1,
SQL node validation always fails, because the schema path in the
catalog reader contains only the current catalog and not the default catalog, so the temporary tables or views can never be identified.
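To make the failure mode concrete, the following is a toy model of schema-path resolution. It is not Flink or Calcite code; the catalog names mirror Flink's defaults, but the resolver itself is an illustrative sketch of the behavior described above.

{code}
import java.util.*;

// Toy model: a catalog reader resolves a table name by scanning the
// catalogs on its schema path in order. This sketch only illustrates
// the lookup described in this issue; it is not the real Flink API.
public class CatalogPathDemo {
    // catalog name -> table names registered in that catalog
    static Map<String, Set<String>> catalogs = new HashMap<>();

    static boolean resolve(List<String> schemaPath, String table) {
        for (String catalog : schemaPath) {
            if (catalogs.getOrDefault(catalog, Set.of()).contains(table)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Source/sink tables live in the user catalog ...
        catalogs.put("user_catalog", Set.of("music_queue_3", "kafka_table_test1"));
        // ... but registerTable() put the temporary views in the default catalog.
        catalogs.put("default_catalog", Set.of("v1", "v2"));

        // Schema path = current catalog only: v2 is invisible, validation fails.
        System.out.println(resolve(List.of("user_catalog"), "v2"));                    // false
        // If the default catalog were also on the path, v2 would resolve.
        System.out.println(resolve(List.of("user_catalog", "default_catalog"), "v2")); // true
    }
}
{code}

Under this model, appending the default catalog to the schema path (in addition to the current catalog) would let temporary tables and views resolve during validation.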
--
This message was sent by Atlassian Jira
(v8.3.4#803005)