[jira] [Created] (FLINK-12153) Error when submitting a Flink job to the Flink environment


[jira] [Created] (FLINK-12153) Error when submitting a Flink job to the Flink environment

Shang Yuanchun (Jira)
gaojunjie created FLINK-12153:
---------------------------------

             Summary: Error when submitting a Flink job to the Flink environment
                 Key: FLINK-12153
                 URL: https://issues.apache.org/jira/browse/FLINK-12153
             Project: Flink
          Issue Type: Bug
    Affects Versions: 1.7.2
         Environment: flink maven

<dependency>
 <groupId>org.apache.flink</groupId>
 <artifactId>flink-streaming-java_2.12</artifactId>
 <version>1.7.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka-0.11 -->
<dependency>
 <groupId>org.apache.flink</groupId>
 <artifactId>flink-connector-kafka-0.11_2.12</artifactId>
 <version>1.7.1</version>
</dependency>


<dependency>
 <groupId>org.apache.hadoop</groupId>
 <artifactId>hadoop-hdfs</artifactId>
 <version>2.7.2</version>
 <exclusions>
 <exclusion>
 <artifactId>xml-apis</artifactId>
 <groupId>xml-apis</groupId>
 </exclusion>
 </exclusions>
</dependency>


<dependency>
 <groupId>org.apache.hadoop</groupId>
 <artifactId>hadoop-common</artifactId>
 <version>2.7.2</version>
</dependency>


<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-hadoop-compatibility -->
<dependency>
 <groupId>org.apache.flink</groupId>
 <artifactId>flink-hadoop-compatibility_2.12</artifactId>
 <version>1.7.1</version>
</dependency>


<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-filesystem -->
<dependency>
 <groupId>org.apache.flink</groupId>
 <artifactId>flink-connector-filesystem_2.12</artifactId>
 <version>1.7.1</version>
</dependency>

<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-elasticsearch5 -->
<dependency>
 <groupId>org.apache.flink</groupId>
 <artifactId>flink-connector-elasticsearch5_2.12</artifactId>
 <version>1.7.1</version>
</dependency>

 

 

Hadoop environment version: 2.7.7

 
            Reporter: gaojunjie


java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only supported for HDFS and for Hadoop version 2.7 or newer
        at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.<init>(HadoopRecoverableWriter.java:57)
        at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
        at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
        at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.<init>(Buckets.java:112)
        at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:242)
        at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:327)
        at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
        at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
        at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
        at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:278)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
        at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
Re: [jira] [Created] (FLINK-12153) Error when submitting a Flink job to the Flink environment

Biao Liu
Hi gaojunjie,
1. Please use English to describe your JIRA issue, although I think this is
more of a question than a bug report.
2. You can send your question to the flink-user-zh mailing list, where
Chinese is supported.
3. I think the exception is clear enough: this feature is not supported
by your Hadoop version.
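The exception in the stack trace is thrown from the `HadoopRecoverableWriter` constructor. As a rough, illustrative sketch (not Flink's actual code), the guard behaves roughly like the following: it rejects any non-HDFS filesystem scheme and any Hadoop version below 2.7, where the version checked is the one reported by the Hadoop jars on the job's classpath. That is why bundling older Hadoop dependencies can trigger the error even when the cluster itself runs 2.7.7.

```java
// Illustrative sketch of the kind of guard that produces the exception above.
// Class and method names are hypothetical; only the exception message matches Flink's.
public class RecoverableWriterGuard {

    // Parse a "major.minor[.patch]" version string and check it is >= major.minor.
    static boolean isMinVersion(String version, int major, int minor) {
        String[] parts = version.split("\\.");
        int maj = Integer.parseInt(parts[0]);
        int min = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        return maj > major || (maj == major && min >= minor);
    }

    // Throws if the sink's filesystem scheme is not HDFS or the Hadoop version is too old.
    static void checkSupported(String scheme, String hadoopVersion) {
        if (!"hdfs".equalsIgnoreCase(scheme) || !isMinVersion(hadoopVersion, 2, 7)) {
            throw new UnsupportedOperationException(
                "Recoverable writers on Hadoop are only supported for HDFS "
                + "and for Hadoop version 2.7 or newer");
        }
    }

    public static void main(String[] args) {
        // Version as reported by the Hadoop jars on the classpath, not the cluster.
        checkSupported("hdfs", "2.7.7"); // passes
        System.out.println("guard passed");
    }
}
```

A practical consequence: check which Hadoop version actually ends up in the submitted jar (e.g. via `mvn dependency:tree`), since a shaded or transitive older Hadoop can fail this check at runtime.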

gaojunjie (JIRA) <[hidden email]> wrote on Wednesday, April 10, 2019, at 17:06:
