Hi,

I am aware of at least two Flink users who were facing various issues with HDFS when using Flink.

*Issues observed:*

- The HDFS client trying to connect to the standby NameNode:
  "org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby"

- java.io.IOException: Bad response ERROR for block BP-1335380477-172.22.5.37-1424696786673:blk_1107843111_34301064 from datanode 172.22.5.81:50010
  at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:732)

- Caused by: org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
  at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:478)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6039)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6002)

I've added the exceptions to this email so that users facing these issues can find a solution for them. I suspect that all of these issues are caused by the Hadoop 2.2.0 client we are packaging into the binary releases. Upgrading the HDFS client to the same version as the HDFS installation (for example, 2.4.1) resolved all issues.

Therefore, I propose to provide Hadoop 2.4.0 and Hadoop 2.6.0 binaries on the Flink download page. For the 0.9.0 release, I would do another VOTE on providing these two binaries.

I've also filed a JIRA to provide a Flink build which doesn't include Hadoop at all (relying on the version provided by the user through the classpath): https://issues.apache.org/jira/browse/FLINK-2268

Let me know what you think!

Robert
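Until such binaries are provided, the fix described above (matching the bundled HDFS client to the cluster version) means rebuilding Flink from source. A minimal sketch, assuming a Flink source checkout and that the build exposes the Hadoop dependency version via a Maven `hadoop.version` property (the exact property name may vary between Flink versions):

```shell
# First find out which Hadoop version the cluster is running;
# `hadoop version` prints a line such as "Hadoop 2.4.1".
hadoop version

# Then build Flink against that exact client version.
# -Dhadoop.version overrides the bundled Hadoop client dependency;
# -DskipTests shortens the build.
mvn clean install -DskipTests -Dhadoop.version=2.4.1
```

The resulting binaries then ship an HDFS client matching the NameNode and DataNodes, which avoids the version-mismatch exceptions listed above.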
I think this is a very good idea and very urgent (because of the issues you outlined and for the user experience of *not* having to compile your own version). Big +1.
On 24 Jun 2015, at 11:45, Robert Metzger <[hidden email]> wrote:
big +1 from me as well!
On Wed, Jun 24, 2015 at 12:05 PM, Ufuk Celebi <[hidden email]> wrote:
+1. If users require it, it is serious.
On Wed, 24 Jun 2015 at 12:14 Stephan Ewen <[hidden email]> wrote:
In reply to this post by Stephan Ewen
+1 for the different Hadoop versions.
For the version without Hadoop binaries, we should check whether the Hadoop interfaces are compatible across the different versions. If they are, then +1 for that as well.

On Wed, Jun 24, 2015 at 12:14 PM, Stephan Ewen <[hidden email]> wrote:
In reply to this post by Robert Metzger
We were experiencing different kinds of issues with Flink on Hadoop 2.4. When we rebuilt Flink with Hadoop 2.4 dependencies, all of the issues went away. It would be great if one could download binaries for different Hadoop versions.
+1 for different Hadoop bundles. Other projects do it as well.
On Wed, Jun 24, 2015 at 2:25 PM, Vyacheslav Zholudev <[hidden email]> wrote:
@Robert, Ma: can one of you start the vote today?
Anyone who is against this can give a -1 in the vote thread. ;)

– Ufuk

On 25 Jun 2015, at 10:24, Maximilian Michels <[hidden email]> wrote:
I'll create the binaries and start the vote.
On Thu, Jun 25, 2015 at 10:33 AM, Ufuk Celebi <[hidden email]> wrote: