hehuiyuan created FLINK-15326:
---------------------------------
Summary: Add document description for DECIMAL(38, 18) when the sink table uses json schema
Key: FLINK-15326
URL: https://issues.apache.org/jira/browse/FLINK-15326
Project: Flink
Issue Type: Wish
Reporter: hehuiyuan
Env:
Flink 1.9.1
table-planner-blink
Question:
If I have a Kafka sink table defined with the following JSON schema:
{code:java}
String jsonSchema =
    "{"
    + "  \"type\": \"object\","
    + "  \"properties\": {"
    + "    \"name\": { \"type\": \"string\" },"
    + "    \"age\":  { \"type\": \"integer\" },"
    + "    \"sex\":  { \"type\": \"string\" }"
    + "  }"
    + "}";

JsonRowDeserializationSchema deserializationSchema = new JsonRowDeserializationSchema(jsonSchema);
TypeInformation<Row> fieldTypes = deserializationSchema.getProducedType();

Kafka kafka = new Kafka()...     // connector properties omitted
Schema schema = new Schema()...  // sink field declarations omitted

tableEnvironment.connect(kafka)
    .withFormat(new Json().jsonSchema(jsonSchema))
    .withSchema(schema)
    .inAppendMode()
    .registerTableSink("sink_example2");

String sinksql = "insert into sink_example2 select * from table2";
tableEnvironment.sqlUpdate(sinksql);
{code}
Error:
{code:java}
Query result schema: [name: String, age: BigDecimal, sex: String]
TableSink schema: [name: String, age: BigDecimal, sex: String]
{code}
Both lines above print as BigDecimal, so the mismatch is not obvious; it comes from the decimal precision. The queried table `table2` has this schema:
{code:java}
[2019-12-19 18:10:16,937] INFO t2: root
|-- name: STRING
|-- age: DECIMAL(10, 0)
|-- sex: STRING
{code}
When I use Kafka to read data with this JSON schema, I see that the `integer` type in the JSON schema is mapped to DECIMAL(38, 18) in the Flink table:
{code:java}
|-- name: STRING
|-- age: DECIMAL(38, 18)
|-- sex: STRING
{code}
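To show where the DECIMAL(38, 18) comes from, here is a minimal, self-contained sketch; the class name `JsonIntegerMappingCheck` is hypothetical, and it only reuses the `JsonRowDeserializationSchema(String)` call from the snippet above to print the type derived for a JSON-schema `integer` property:
{code:java}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.json.JsonRowDeserializationSchema;
import org.apache.flink.types.Row;

public class JsonIntegerMappingCheck {
    public static void main(String[] args) {
        // Minimal JSON schema with a single 'integer' property.
        String jsonSchema =
            "{"
            + "  \"type\": \"object\","
            + "  \"properties\": {"
            + "    \"age\": { \"type\": \"integer\" }"
            + "  }"
            + "}";

        // Same call as in the snippet above: ask the JSON format which types it derives.
        TypeInformation<Row> producedType =
            new JsonRowDeserializationSchema(jsonSchema).getProducedType();

        // Expected to print Row(age: BigDecimal); the blink planner treats this legacy
        // BigDecimal field as DECIMAL(38, 18), which is the table schema shown above.
        System.out.println(producedType);
    }
}
{code}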
That is why the decimal precision has to be set explicitly with a cast:
{code:java}
String sinksql = "insert into sink_example2 select name CAST(age as decimal(38,18) ) as age, sex from table2"
{code}
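For completeness, the sink schema could also state the expected precision explicitly. The following is only a sketch under the assumption that the Flink 1.9 `Schema` descriptor accepts `DataTypes` via `Schema#field(String, DataType)`; the class name `SinkSchemaSketch` is hypothetical, and the CAST above is still required because `table2.age` stays DECIMAL(10, 0):
{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.descriptors.Schema;

public class SinkSchemaSketch {

    // Hypothetical helper: declare the sink fields with the precision the JSON
    // format derives, so the DECIMAL(38, 18) requirement is visible in the program.
    public static Schema sinkSchema() {
        return new Schema()
            .field("name", DataTypes.STRING())
            .field("age", DataTypes.DECIMAL(38, 18))
            .field("sex", DataTypes.STRING());
    }
}
{code}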
The JSON format documentation should describe this DECIMAL(38, 18) mapping so that users know they need to match the precision on the sink side.