Hi All,
this is the first bugfix release for the 0.10 series of Flink. I've CC'ed the user@ list in case users are interested in helping to verify the release.

It contains fixes for critical issues, in particular:
- FLINK-3021 Fix class loading issue for streaming sources
- FLINK-2974 Add periodic offset committer for Kafka
- FLINK-2977 Using reflection to load HBase Kerberos tokens
- FLINK-3024 Fix TimestampExtractor.getCurrentWatermark() behaviour
- FLINK-2967 Increase timeout for LOCAL_HOST address detection strategy
- FLINK-3025 [kafka consumer] Bump transitive ZkClient dependency
- FLINK-2989 Job cancel button doesn't work on YARN
- FLINK-3032 Flink does not start on Hadoop 2.7.1 (HDP) due to a class conflict
- FLINK-3011, 3019, 3028 Cancel jobs in RESTARTING state

This is the guide on how to verify a release:
https://cwiki.apache.org/confluence/display/FLINK/Releasing

During the testing, please focus on trying out Flink on different Hadoop platforms: we changed how Hadoop's Maven dependencies are packaged, so there may be issues with particular Hadoop distributions. The Kafka consumer also changed a bit, so it would be good to test it on a cluster.

-------------------------------------------------------------

Please vote on releasing the following candidate as Apache Flink version 0.10.1:

The commit to be voted on:
http://git-wip-us.apache.org/repos/asf/flink/commit/2e9b2316

Branch:
release-0.10.1-rc1 (see https://git1-us-west.apache.org/repos/asf/flink/?p=flink.git)

The release artifacts to be voted on can be found at:
http://people.apache.org/~rmetzger/flink-0.10.1-rc1/

The release artifacts are signed with the key with fingerprint D9839159:
http://www.apache.org/dist/flink/KEYS

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapacheflink-1058

-------------------------------------------------------------

The vote is open for the next 72 hours and passes if a majority of at least three +1 PMC votes are cast.

The vote ends on Wednesday, November 25.

[ ] +1 Release this package as Apache Flink 0.10.1
[ ] -1 Do not release this package because ...

===================================
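A minimal sketch of the basic verification steps from the guide above, assuming standard GnuPG, md5sum and Maven tooling (the artifact name and the Hadoop version below are only illustrative):

  # Verify the signature and checksum of a downloaded artifact
  wget http://www.apache.org/dist/flink/KEYS
  gpg --import KEYS
  gpg --verify flink-0.10.1-src.tgz.asc flink-0.10.1-src.tgz
  md5sum flink-0.10.1-src.tgz    # compare against the published .md5 file

  # Build the source release against a specific Hadoop version
  tar xzf flink-0.10.1-src.tgz && cd flink-0.10.1
  mvn clean install -DskipTests -Dhadoop.version=2.7.1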
I'm having trouble building release-0.10.1-rc1 with parameters:
mvn clean install -Dhadoop.version=2.6.0.2.2.6.0-2800 -Pvendor-repos

Env: Maven 3, JDK 7, Mac OS X 10.10.5

Attached is the Maven log from the point where it started producing failing tests.

P.S. I had to kill the build process since it got stuck (probably due to some long waiting interval).

mvn.log
Hi Slava!
I think the problem with your build is the open file handles. It shows up at various points:

Exception in thread "main" java.lang.InternalError: java.io.FileNotFoundException: /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/jre/lib/ext/localedata.jar (Too many open files in system)
Caused by: java.io.IOException: Too many open files in system
    at sun.nio.ch.KQueueArrayWrapper.init(Native Method)
    at sun.nio.ch.KQueueArrayWrapper.<init>(KQueueArrayWrapper.java:98)
    at sun.nio.ch.KQueueSelectorImpl.<init>(KQueueSelectorImpl.java:87)
    at sun.nio.ch.KQueueSelectorProvider.openSelector(KQueueSelectorProvider.java:42)
    at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)

Can you check this on your system? I'll try to compile with the same flags on my system as well...

On Tue, Nov 24, 2015 at 11:07 AM, Vyacheslav Zholudev <[hidden email]> wrote:
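If it is indeed the open-file limit, a rough way to check and raise it on OS X might look like the following (exact commands and effective limits vary by OS X version, so treat this as a sketch):

  # Check the current per-shell and system-wide limits
  ulimit -n
  launchctl limit maxfiles
  sysctl kern.maxfiles kern.maxfilesperproc

  # Raise the per-shell limit before running the build
  # (may require raising the hard limit first)
  ulimit -n 65536
  mvn clean install -Dhadoop.version=2.6.0.2.2.6.0-2800 -Pvendor-repos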
In reply to this post by Vyacheslav Zholudev
Hi,
I vote -1 on this RC because the ZooKeeper deadlock issue was not completely solved. Robert tracked the problem down to the dependency management plugin and has opened a PR:

[FLINK-3067] Enforce zkclient 0.7 for Kafka
https://github.com/apache/flink/pull/1399

Cheers,
Gyula

Vyacheslav Zholudev <[hidden email]> wrote on Tue, Nov 24, 2015 at 11:07:
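A quick way to double-check which ZkClient version actually ends up on the classpath, assuming the usual com.101tec:zkclient coordinates (the module path below is an assumption for the 0.10 source layout):

  # Inspect the ZkClient version resolved for the Kafka connector
  cd flink-streaming-connectors/flink-connector-kafka
  mvn dependency:tree -Dincludes=com.101tec:zkclient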
In reply to this post by Stephan Ewen
Sorry, I should have paid attention to the stack traces. And actually I see that Java 8 got picked up.
Will try to fix the issue and rerun.
Hi,
Regarding my previous comment about the Kafka/ZooKeeper issue: let's discuss whether it is critical enough to include in this release, or whether it can wait for the next bugfix release.

I will try to investigate further why the job failed in the first place (we suspect a broker failure).

Cheers,
Gyula

Vyacheslav Zholudev <[hidden email]> wrote on Tue, Nov 24, 2015 at 11:17:
Hi Gyula,
I'm undecided whether we should cancel the release over this or not. I understand that the issue is affecting you a lot, but I'm a bit hesitant to delay the release on this one because we are not 100% sure the change really fixes the issue.

You are running on a custom Flink branch anyway, right? Maybe you can include the fix in your build and see whether the issue is really fixed by the change. If so, we can do a 0.10.2 next week. Maybe you are lucky and we find more issues in 0.10.1 and the fix is verified by then ;)

On Tue, Nov 24, 2015 at 11:34 AM, Gyula Fóra <[hidden email]> wrote:
In reply to this post by Gyula Fóra
I would rather not block the minor release on this issue. We don't know yet whether we have a valid fix for it. Let's get the minor release out first and do another one when we have the fix.

On Tue, Nov 24, 2015 at 11:34 AM, Gyula Fóra <[hidden email]> wrote:
Thanks Robert,
This issue is not really a blocker for me because we are using the snapshot versions anyway. If you think this won't affect other production users too much at this stage, let's go ahead.

Gyula

Maximilian Michels <[hidden email]> wrote on Tue, Nov 24, 2015 at 12:08:
In reply to this post by Stephan Ewen
I can confirm that the build works fine after increasing the maximum number of open files. Sorry for the confusion.
@Gyula: I think it affects users, so it should definitely be fixed very soon (either in 0.10.1 or 0.10.2).

Still checking whether Robert's current version fix solves it now or not...

On Tue, Nov 24, 2015 at 1:46 PM, Vyacheslav Zholudev <[hidden email]> wrote:
+1
- Built a Maven project against the staging repository
- Started Flink on YARN on a CDH 5.4.5 / Hadoop 2.6.0-cdh5.4.5 cluster with YARN and HDFS HA
- Ran some Kafka (0.8.2.0) read/write experiments
- Job cancellation with YARN is working ;)

I found the following issue while testing: https://issues.apache.org/jira/browse/FLINK-3078, but it was already present in 0.10.0 and it is not super critical because the JobManager container will be killed by YARN after a few minutes.

I'll extend the vote until tomorrow, Thursday, November 26.

On Tue, Nov 24, 2015 at 1:54 PM, Stephan Ewen <[hidden email]> wrote:
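For anyone repeating the YARN checks, a rough sketch of submitting an example job to YARN and cancelling it from the CLI (the example jar path, container count and Hadoop config location are illustrative):

  export HADOOP_CONF_DIR=/etc/hadoop/conf
  ./bin/flink run -m yarn-cluster -yn 4 ./examples/WordCount.jar
  ./bin/flink list
  ./bin/flink cancel <jobID>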
Checked checksums for the src release and the Hadoop 2.7 / Scala 2.10 release

Checked binaries in the source release
- contains ./flink-staging/flink-avro/src/test/resources/testdata.avro

License
- no new files added which are relevant for licensing

Built Flink and ran tests from the source release for Hadoop 2.5.1

Checked that log files don't contain exceptions and that out files are empty

Ran all examples with the Hadoop 2.7 / Scala 2.10 binaries via the FliRTT tool on a 4-node standalone cluster and a YARN cluster

Tested the planVisualizer

Tested the Flink command line client
- tested the info command
- tested the -p option

Tested cluster HA in standalone mode => working

Tested cluster HA on YARN (2.7.1) => working

Except for the avro testdata file which is contained in the source release, I didn't find anything.

+1 for releasing and removing the testdata file for the next release.

On Wed, Nov 25, 2015 at 2:33 PM, Robert Metzger <[hidden email]> wrote:
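A minimal sketch of the log and CLI spot checks from this list, assuming an extracted binary distribution (paths and the example jar are illustrative):

  # Logs should contain no exceptions; .out files should be empty
  grep -Ri exception log/ || echo "no exceptions found"
  for f in log/*.out; do [ -s "$f" ] && echo "non-empty out file: $f"; done

  # CLI: show the execution plan of a job and run it with an explicit parallelism
  ./bin/flink info ./examples/WordCount.jar
  ./bin/flink run -p 4 ./examples/WordCount.jar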
In reply to this post by Stephan Ewen
+1
- License and Notice are good
- Ran all tests (including manual tests) for Hadoop 2.3.0 / Scala 2.10
- Ran all tests for Hadoop 2.7.0 / Scala 2.11
- Ran all examples, several on larger external data
- Checked the web frontend
- Checked the quickstart archetypes

On Tue, Nov 24, 2015 at 1:54 PM, Stephan Ewen <[hidden email]> wrote:
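Checking the quickstart archetype against this candidate could look roughly like this; it assumes the orgapacheflink-1058 staging repository from the vote mail has been added to the local Maven settings, and the project coordinates are placeholders:

  mvn archetype:generate \
    -DarchetypeGroupId=org.apache.flink \
    -DarchetypeArtifactId=flink-quickstart-java \
    -DarchetypeVersion=0.10.1 \
    -DgroupId=org.example -DartifactId=quickstart-test \
    -Dversion=0.1 -Dpackage=org.example -DinteractiveMode=false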
In reply to this post by till.rohrmann
@Till: I think the avro test data file is okay; the "no binaries" policy refers to binary executables, as far as I know.

On Wed, Nov 25, 2015 at 2:54 PM, Till Rohrmann <[hidden email]> wrote:
Alright, then I withdraw my remark concerning testdata.avro.
On Wed, Nov 25, 2015 at 2:56 PM, Stephan Ewen <[hidden email]> wrote:
+1
I ran an example with a custom operator that processes high-volume Kafka input/output and has a large state size. I ran this on 10 GCE nodes.

On 25 Nov 2015, at 14:58, Till Rohrmann <[hidden email]> wrote:
+1
- Verified hashes and signatures
- Ran example jobs on YARN with vanilla Hadoop versions (on 4 GCE nodes):
  * 2.7.1 with the Flink Hadoop 2.7 binary, Scala 2.10 and 2.11
  * 2.6.2 with the Flink Hadoop 2.6 binary, Scala 2.10
  * 2.4.1 with the Flink Hadoop 2.4 binary, Scala 2.10
  * 2.3.0 with the Flink Hadoop 2 binary, Scala 2.10
- Cancelled a restarting job via the CLI and the web interface
- Ran a simple Kafka read-write pipeline
- Ran the manual tests
- Ran the examples on a local cluster

On 25 Nov 2015, at 17:45, Aljoscha Krettek <[hidden email]> wrote:
In reply to this post by Robert Metzger
+1
LICENSE file looks good in the source artifact
NOTICE file looks good in the source artifact
Signature file looks good in the source artifact
Hash files look good in the source artifact
No 3rd-party executables in the source artifact
Source compiled
All tests passed
Ran a standalone-mode test app

- Henry

On Mon, Nov 23, 2015 at 4:45 AM, Robert Metzger <[hidden email]> wrote:
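A quick standalone smoke test along these lines might look like the following, assuming an extracted binary distribution (the example jar path is illustrative):

  ./bin/start-cluster.sh
  ./bin/flink run ./examples/WordCount.jar
  ./bin/stop-cluster.sh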