Error due to Hadoop version mismatch

Kashmar, Ali
Hello,

I’m trying to use HDFS as a store for Flink checkpoints, so I downloaded the Hadoop 2.6.0/Scala 2.10 version of Flink and installed it. I also downloaded Hadoop 2.6.0 separately from the Hadoop website and set up HDFS on a separate machine. When I start Flink I get the following error:

17:34:13,047 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Status of job 9ba32a08bc0ec02810bf5d2710842f72 (Protocol Event Processing) changed to FAILED.
java.lang.Exception: Call to registerInputOutput() of invokable failed
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:529)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: The given file URI (hdfs://10.13.182.171:9000/user/flink/checkpoints) points to the HDFS NameNode at 10.13.182.171:9000, but the File System could not be initialized with that address: Server IPC version 9 cannot communicate with client version 4
        at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:337)
        at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:253)
        at org.apache.flink.runtime.state.filesystem.FsStateBackend.<init>(FsStateBackend.java:142)
        at org.apache.flink.runtime.state.filesystem.FsStateBackend.<init>(FsStateBackend.java:101)
        at org.apache.flink.runtime.state.filesystem.FsStateBackendFactory.createFromConfig(FsStateBackendFactory.java:48)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.createStateBackend(StreamTask.java:517)
        at org.apache.flink.streaming.runtime.tasks.StreamTask.registerInputOutput(StreamTask.java:171)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:526)
        ... 1 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RPC$VersionMismatch): Server IPC version 9 cannot communicate with client version 4
        at org.apache.hadoop.ipc.Client.call(Client.java:1113)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
        at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
        at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:321)
        ... 8 more
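
For reference, the checkpoint backend is configured in conf/flink-conf.yaml roughly like this (a sketch; the exact key names may differ between Flink versions):

state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://10.13.182.171:9000/user/flink/checkpoints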

I searched for this error online and it indicates that the client (Flink, in this case) is at a much lower version. Is there a way to check the version of Hadoop packaged with my Flink installation?

Thanks,
Ali

Re: Error due to Hadoop version mismatch

Robert Metzger
Hi Ali,

The TaskManagers and the JobManager log the Hadoop version on startup.
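
For example, grepping the Flink logs should find that line (the log file names here are only illustrative and depend on your setup):

grep "Hadoop version" log/flink-*-jobmanager-*.log
grep "Hadoop version" log/flink-*-taskmanager-*.log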


Re: Error due to Hadoop version mismatch

Kashmar, Ali
Hi Robert,

I found the version in the job manager log file:

17:33:49,636 INFO  org.apache.flink.runtime.jobmanager.JobManager  -  Hadoop version: 2.6.0

But the Hadoop installation I have is saying this:

ubuntu@ubuntu-171:~/Documents/hadoop-2.6.0$ bin/hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /home/ubuntu/Documents/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar


So one of them is lying to me? :)

Ali


Re: Error due to Hadoop version mismatch

mxm
Hi Ali,

Could you please also post the Hadoop version output of the task
manager log files? It looks like the task managers are running a
different Hadoop version.

Thanks,
Max


Re: Error due to Hadoop version mismatch

Kashmar, Ali
Hi Max,

I have the same output for the Task Manager:

11:25:04,274 INFO  org.apache.flink.runtime.taskmanager.TaskManager  -  Hadoop version: 2.6.0

I do get this line at the beginning of both job and task manager log files:

11:25:04,100 WARN  org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Do you think that warning has anything to do with it?

Thanks,
Ali


Re: Error due to Hadoop version mismatch

mxm
Hi Ali,

The warning about the native Hadoop libraries is nothing to worry
about. The native libraries are platform-optimized modules that can
improve performance; they are not necessary for Hadoop to function
correctly.

The exception message implies that the Hadoop client you are using is a
very old version. Do you have other Hadoop versions installed on the same
machine? We have had people using Flink 0.10.0 with Hadoop 2.6.0
without any problems.

On the cluster machines, what is the output of these commands?

echo $HADOOP_CLASSPATH
echo $HADOOP_CONF_DIR


Thanks,
Max


Re: Error due to Hadoop version mismatch

Kashmar, Ali
Hi Max,

Both commands return nothing. Those variables aren’t set.

The only software I installed on these machines is Flink and Java.

-Ali


Re: Error due to Hadoop version mismatch

Robert Metzger
Your Flink installation has Hadoop 2.6.0 included; on the other machine
there is a Hadoop version installed which is most likely a 1.x or even a
0.x version.
Are you sure that the host "ubuntu-171" has the IP 10.13.182.171, and that
the Hadoop installation in the "/home/ubuntu/Documents/hadoop-2.6.0/"
directory is the one listening on port 9000?
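
For example, running something like this on ubuntu-171 would show it (just a sketch):

hostname -I                      # does this host really have 10.13.182.171?
netstat -tlnp | grep 9000        # which process is listening on port 9000?
ps -fp <pid-from-netstat>        # and from which Hadoop installation was it started?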


Re: Error due to Hadoop version mismatch

Kashmar, Ali
Hi Robert,

On the Ubuntu host, port 9000 is up:

ubuntu@ubuntu-171:~$ netstat -tulpn
....
tcp        0      0 10.13.182.171:9000      0.0.0.0:*               LISTEN      8849/java
....

And the process using this port is:

ubuntu@ubuntu-171:~$ ps -ef | grep 8849
ubuntu    8849     1  0  2015 ?        00:25:00
/usr/lib/jvm/java-8-oracle/bin/java -Dproc_namenode -Xmx1000m
-Djava.net.preferIPv4Stack=true
-Dhadoop.log.dir=/home/ubuntu/Documents/hadoop-2.6.0/logs
-Dhadoop.log.file=hadoop.log
-Dhadoop.home.dir=/home/ubuntu/Documents/hadoop-2.6.0
-Dhadoop.id.str=ubuntu -Dhadoop.root.logger=INFO,console
-Djava.library.path=/home/ubuntu/Documents/hadoop-2.6.0/lib/native
-Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true
-Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true
-Dhadoop.log.dir=/home/ubuntu/Documents/hadoop-2.6.0/logs
-Dhadoop.log.file=hadoop-ubuntu-namenode-ubuntu-171.log
-Dhadoop.home.dir=/home/ubuntu/Documents/hadoop-2.6.0
-Dhadoop.id.str=ubuntu -Dhadoop.root.logger=INFO,RFA
-Djava.library.path=/home/ubuntu/Documents/hadoop-2.6.0/lib/native
-Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true
-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
-Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
-Dhadoop.security.logger=INFO,RFAS
org.apache.hadoop.hdfs.server.namenode.NameNode

And the Hadoop version is:

ubuntu@ubuntu-171:~$ Documents/hadoop-2.6.0/bin/hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /home/ubuntu/Documents/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar



-Ali




Re: Error due to Hadoop version mismatch

Robert Metzger
Okay, then my previous statement is false. From the stack trace, it seems
that Flink is using an older Hadoop version: the DFSClient and RPC classes
look different in Hadoop 2.6.0.
Max already checked some environment variables. Is $CLASSPATH set? Did you
install Hadoop only by downloading the binaries, or did you use something
like the Ubuntu package manager? (Those packages sometimes place jar files
in lib/ folders.)
Did you put anything additional into the lib/ folder of Flink?

I think Flink also logs the classpath of the JVM when it starts up. Can you
post it here? (You may need to start with DEBUG log level.)
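
For example, something like this would show what is on the classpath and whether an old Hadoop jar is lying around (a sketch; the paths are only illustrative):

echo $CLASSPATH
ls <flink-install-dir>/lib/
find /home/ubuntu /usr/lib /usr/share -name 'hadoop-core-*.jar' 2>/dev/null   # hadoop-core-*.jar is the old Hadoop 1.x/0.x client jar

For DEBUG output, the root logger in Flink's conf/log4j.properties can be switched to something like log4j.rootLogger=DEBUG, file before restarting.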


On Mon, Jan 4, 2016 at 8:46 PM, Kashmar, Ali <[hidden email]> wrote:

> Hi Robert,
>
> On the ubuntu host, the port 9000 is up:
>
> ubuntu@ubuntu-171:~$ netstat -tulpn
> ....
> tcp        0      0 10.13.182.171:9000      0.0.0.0:*               LISTEN
>      8849/java
> ....
>
> And the process using this port is:
>
> ubuntu@ubuntu-171:~$ ps -ef | grep 8849
> ubuntu    8849     1  0  2015 ?        00:25:00
> /usr/lib/jvm/java-8-oracle/bin/java -Dproc_namenode -Xmx1000m
> -Djava.net.preferIPv4Stack=true
> -Dhadoop.log.dir=/home/ubuntu/Documents/hadoop-2.6.0/logs
> -Dhadoop.log.file=hadoop.log
> -Dhadoop.home.dir=/home/ubuntu/Documents/hadoop-2.6.0
> -Dhadoop.id.str=ubuntu -Dhadoop.root.logger=INFO,console
> -Djava.library.path=/home/ubuntu/Documents/hadoop-2.6.0/lib/native
> -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true
> -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true
> -Dhadoop.log.dir=/home/ubuntu/Documents/hadoop-2.6.0/logs
> -Dhadoop.log.file=hadoop-ubuntu-namenode-ubuntu-171.log
> -Dhadoop.home.dir=/home/ubuntu/Documents/hadoop-2.6.0
> -Dhadoop.id.str=ubuntu -Dhadoop.root.logger=INFO,RFA
> -Djava.library.path=/home/ubuntu/Documents/hadoop-2.6.0/lib/native
> -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true
> -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
> -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
> -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
> -Dhadoop.security.logger=INFO,RFAS
> org.apache.hadoop.hdfs.server.namenode.NameNode
>
> And the hadoop version is:
>
> ubuntu@ubuntu-171:~$ Documents/hadoop-2.6.0/bin/hadoop version
> Hadoop 2.6.0
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
> e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
> Compiled by jenkins on 2014-11-13T21:10Z
> Compiled with protoc 2.5.0
> From source with checksum 18e43357c8f927c0695f1e9522859d6a
> This command was run using
> /home/ubuntu/Documents/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
>
>
>
> -Ali
Reply | Threaded
Open this post in threaded view
|

Re: Error due to Hadoop version mismatch

Kashmar, Ali
Looking at the lib folder revealed the problem. The lib folder on one of
the nodes had libraries for both Hadoop 1 and Hadoop 2. I’m not sure how I
ended up with that, but it must have happened while I was copying the
dependency jars to each node. I removed all the jars, started over with a
fresh copy, and reran the test. It worked this time.
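
In case anyone else runs into this, a quick way to spot such a mix on each
node (the path below is only illustrative):

  ls <flink-dir>/lib/ | grep -i hadoop

Hadoop 1.x ships a single hadoop-core-*.jar while Hadoop 2.x ships
hadoop-common-*.jar, hadoop-hdfs-*.jar and friends, so seeing both kinds
side by side in lib/ can be enough to get the old client classes loaded
first and produce the "Server IPC version 9 cannot communicate with client
version 4" error.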

Thank you so much for being patient with me and sorry if I wasted your
time.

-Ali
