Hi everyone,
it took some time to compile the next release candidate but here we are: Please review and vote on release candidate #2 for version 1.5.0, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be deployed to dist.apache.org [2], which are signed with the key with fingerprint 1F302569A96CFFD5 [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.0-rc2" [5].

Please use this document for coordinating testing efforts: [6]

The vote will be open for at least 72 hours. It is adopted by majority approval, with at least 3 PMC affirmative votes.

Thanks,
Your friendly Release Manager

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12341764
[2] http://people.apache.org/~trohrmann/flink-1.5.0-rc2/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1155
[5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=commit;h=37af4d7e7072958a6d8bdfc49de2ed3a5f66c889
[6] https://docs.google.com/document/d/1rJe_6yDPBurnhipmcSeCnpYFnr1SAuHyOQN2-08mJYc/edit?usp=sharing

Pro-tip: you can create a settings.xml file with these contents:

  <settings>
    <activeProfiles>
      <activeProfile>flink-1.5.0</activeProfile>
    </activeProfiles>
    <profiles>
      <profile>
        <id>flink-1.5.0</id>
        <repositories>
          <repository>
            <id>flink-1.5.0</id>
            <url>https://repository.apache.org/content/repositories/orgapacheflink-1155/</url>
          </repository>
          <repository>
            <id>archetype</id>
            <url>https://repository.apache.org/content/repositories/orgapacheflink-1155/</url>
          </repository>
        </repositories>
      </profile>
    </profiles>
  </settings>

and reference it in your Maven commands via --settings path/to/settings.xml. This is useful for creating a quickstart based on the staged release and for building against the staged jars.
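For example, verifying the signatures and spinning up a quickstart against the staged repository could look roughly like this (a sketch only; the source tarball name flink-1.5.0-src.tgz and the flink-quickstart-java archetype coordinates are assumptions, so adjust them to whatever is actually staged under [2] and [4]):

  # import the release KEYS and check the source artifact signature
  wget https://dist.apache.org/repos/dist/release/flink/KEYS
  gpg --import KEYS
  wget http://people.apache.org/~trohrmann/flink-1.5.0-rc2/flink-1.5.0-src.tgz
  wget http://people.apache.org/~trohrmann/flink-1.5.0-rc2/flink-1.5.0-src.tgz.asc
  gpg --verify flink-1.5.0-src.tgz.asc flink-1.5.0-src.tgz

  # generate a quickstart project that resolves against the staged repository
  mvn archetype:generate \
    --settings path/to/settings.xml \
    -DarchetypeGroupId=org.apache.flink \
    -DarchetypeArtifactId=flink-quickstart-java \
    -DarchetypeVersion=1.5.0 \
    -DgroupId=org.example -DartifactId=rc2-smoke-test -Dversion=0.1 \
    -DinteractiveMode=false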
I ran the test suite twice and both failed with:
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.784 sec <<< FAILURE! - in org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest
org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest  Time elapsed: 9.784 sec  <<< ERROR!
java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
    at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1081)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487)
    at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:904)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
    at org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
    at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:748)

The test passes when run alone.
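For reproducing this outside the full build, something along these lines should re-run just the failing test class (a sketch; it assumes the test lives in the flink-runtime module of the extracted sources, and the module name or flags may need adjusting):

  # install the modules flink-runtime depends on, then run only the failing test class
  mvn install -DskipTests -pl flink-runtime -am
  mvn test -pl flink-runtime -Dtest=ScheduleOrUpdateConsumersTest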
Hi Till,
I found that only the file and Kafka connectors are tested in the plan.

@Gordon, shall we test the Kinesis connector? AFAIK, there are some major changes and AWS library upgrades in Flink 1.5. I would have tested it myself, but I don't use Kinesis anymore.

Thanks,
Bowen
Hi Bowen,
Thanks for bringing this up! Yes, I think we should definitely always test the Kinesis connector for releases. FYI, I think you can also add modification suggestions to the test plan so that the release manager is aware of them.

Some of the more major Kinesis connector changes that I know of in 1.5.0:
[FLINK-8484] Fix Kinesis consumer re-reading closed shards on restart
[FLINK-8648] Customizable shard-to-subtask assignment

There are also some other, more minor changes, such as adding metrics and exposing access to some internal methods/classes for more flexibility.

As you mentioned, also taking into account that we had some AWS library upgrades, we should definitely include the Kinesis connector in the test plan.

Cheers,
Gordon
@TedYu - looks like a port collision in the testing setup.
Will look into that, but I would not consider that a release blocker.
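For anyone who wants to chase the collision locally, a quick way to see which process already holds a port is something like the following (hedged: tool availability differs per OS, and the actual port number has to come from the surefire/test logs since the stack trace above does not show it):

  # list listening TCP sockets and their owning processes (Linux/macOS)
  lsof -nP -iTCP -sTCP:LISTEN
  # or, once the port number is known from the test logs:
  lsof -nP -iTCP:<port>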
Building from source fails with the following error:
Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce (dependency-convergence) on project flink-bucketing-sink-test

MVN version: 3.0.5

Command: mvn clean install -DskipTests -Dscala.version=2.11.7 -Pvendor-repos -Dhadoop.version=2.7.3.2.6.2.0-205 > build-output_2.log
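If it helps with narrowing this down, the dependency-convergence rule usually points at conflicting versions that a verbose dependency tree makes visible. A possible check (a sketch; it assumes the reported module id flink-bucketing-sink-test and that the sibling modules have been installed once with mvn clean install -DskipTests):

  # show the dependency tree of the failing module, including omitted conflicting versions
  mvn dependency:tree -Dverbose -pl :flink-bucketing-sink-test \
    -Pvendor-repos -Dhadoop.version=2.7.3.2.6.2.0-205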
Can you try out mvn 3.5.2?

I don't get the error when running the command line you gave.

BTW, 2.7.3.2.6.2.0-205 is quite an old release.

Cheers
Thanks for the feedback. I actually have to cancel this RC because of the
following issues [1, 2, 3, 4]. The issues are already fixed and I will create a new RC asap.

@Ted, thanks for reporting the problem with the BindException. We should look into why this happens but as Stephan said it should not be a release blocker because we couldn't see it so far on Travis.

@Bowen, you're right that we should definitely test the Kinesis connector since we don't have an end-to-end test for this connector. Maybe the community could help here once I've created the next RC.

@Shashank, I created the release artifacts also with Maven 3.0.5. Thus, it should actually work. I'll try it with the same command you've posted to the ML in order to see whether I can reproduce it.

[1] https://issues.apache.org/jira/browse/FLINK-9358
[2] https://issues.apache.org/jira/browse/FLINK-9336
[3] https://issues.apache.org/jira/browse/FLINK-9246
[4] https://issues.apache.org/jira/browse/FLINK-9201
[5] https://issues.apache.org/jira/browse/FLINK-9234

Cheers,
Till
We've had the enforcer-plugin issue before and neither Timo nor I could reproduce it locally. See FLINK-9091 <https://issues.apache.org/jira/browse/FLINK-9091> and FLINK-9145 <https://issues.apache.org/jira/browse/FLINK-9145>.
Hi,
Regarding the Kinesis consumer, FWIW we backported most of the changes to 1.4.x in our fork and are running our pipelines with those.

Thanks,
Thomas
Same error on Maven 3.5.2. Let me check RC3 as well.