谢波 created FLINK-21370:
--------------------------
Summary: flink-1.12.1 - JDBCExecutionOptions default config and JDBCDynamicTableFactory default config are not consistent
Key: FLINK-21370
URL: https://issues.apache.org/jira/browse/FLINK-21370
Project: Flink
Issue Type: Bug
Components: Connectors / JDBC
Affects Versions: 1.12.1
Reporter: 谢波
When I tested the JDBC sink with a Kafka source, the data was not flushed to the DB. I then found that the JDBCExecutionOptions default config and the JDBCDynamicTableFactory default config are not consistent.
JDBCExecutionOptions
    public static final int DEFAULT_MAX_RETRY_TIMES = 3;
    private static final int DEFAULT_INTERVAL_MILLIS = 0;
    public static final int DEFAULT_SIZE = 5000;
JDBCDynamicTableFactory
    // write config options
    private static final ConfigOption<Integer> SINK_BUFFER_FLUSH_MAX_ROWS =
            ConfigOptions.key("sink.buffer-flush.max-rows")
                    .intType()
                    .defaultValue(100)
                    .withDescription(
                            "the flush max size (includes all append, upsert and delete records), over this number"
                                    + " of records, will flush data. The default value is 100.");
    private static final ConfigOption<Duration> SINK_BUFFER_FLUSH_INTERVAL =
            ConfigOptions.key("sink.buffer-flush.interval")
                    .durationType()
                    .defaultValue(Duration.ofSeconds(1))
                    .withDescription(
                            "the flush interval mills, over this time, asynchronous threads will flush data. The "
                                    + "default value is 1s.");
    private static final ConfigOption<Integer> SINK_MAX_RETRIES =
            ConfigOptions.key("sink.max-retries")
                    .intType()
                    .defaultValue(3)
                    .withDescription("the max retry times if writing records to database failed.");
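Until the two sets of defaults are aligned, a workaround is to set the flush options explicitly in the table DDL so the sink does not depend on either default (in particular, a small max-rows/interval avoids data appearing "stuck" in the buffer during testing). A minimal sketch; the table name, columns, and JDBC URL below are hypothetical:

```sql
CREATE TABLE jdbc_sink (
  id INT,
  name STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/mydb',   -- hypothetical target DB
  'table-name' = 'users',                       -- hypothetical target table
  -- set explicit values instead of relying on inconsistent defaults:
  'sink.buffer-flush.max-rows' = '100',
  'sink.buffer-flush.interval' = '1s',
  'sink.max-retries' = '3'
);
```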
--
This message was sent by Atlassian Jira
(v8.3.4#803005)