Hi everyone,
Please review and vote on the release candidate #1 for the version 1.10.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be deployed to dist.apache.org [2], which are signed with the key with fingerprint BB137807CEFBE7DD2616556710B12A1F89C115E8 [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.10.0-rc1" [5].

The announcement blog post is in the works. I will update this voting thread with a link to the pull request soon.

The vote will be open for at least 72 hours. It is adopted by majority approval, with at least 3 PMC affirmative votes.

Thanks,
Yu & Gary

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345845
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.10.0-rc1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1325
[5] https://github.com/apache/flink/releases/tag/release-1.10.0-rc1
Hi,
Thanks for creating this RC, Gary & Yu.

+1 (non-binding) from my side

Because of instabilities during the testing period, I’ve manually tested some jobs (and streaming examples) on an EMR cluster, writing to S3 using the newly unshaded/not relocated S3 plugin. Everything seems to work fine. Also, I’m not aware of any blocking issues for this RC.

Piotrek
+1 (binding)
- I verified the checksum
- I verified the signatures
- I eyeballed the diff in pom files between 1.9 and 1.10 and checked any newly added dependencies. They are ok. If the 1.9 licensing was correct, the licensing on this should also be correct.
- I manually installed Flink Python and ran the WordCount.py example on a (local) cluster and standalone

Aljoscha
Hi,
+1 (non-binding)

Thanks for driving this, Gary & Yu.

- Verified signatures and checksums
- Maven build from source with tests skipped
- Started a local cluster; the web UI is accessible
- Submitted both batch and streaming examples; they run well and log no exceptions
- Verified that the pom files point to the 1.10.0 version
- Took a quick look at the release notes
- Ran the flink-example-table examples in the IDE

Retested Table TPC-DS 10T batch performance:
- The performance has not regressed.
- Some improvements, because we now put the Hive-related jars into flink/lib.

Hive integration:
- Tested Hive-related common features and type support on the cluster. No problems.

SQL client:
- Common features work well.

Best,
Jingsong Lee
+1 (non-binding)

No blockers, but I did run into a couple of things.
I upgraded flink-training-exercises to 1.10; after minor adjustments, all the tests pass.

With Java 11 I ran into

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.flink.core.memory.MemoryUtils
WARNING: Please consider reporting this to the maintainers of org.apache.flink.core.memory.MemoryUtils

which is the subject of https://issues.apache.org/jira/browse/FLINK-15094. I suggest we add some mention of this to the release notes.

I have been using the now removed ExternalCatalog interface. I found the documentation for the new Catalog API insufficient for figuring out how to adapt.

David
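For reference, a minimal sketch of the Catalog API that replaced ExternalCatalog, assuming the Blink planner; the catalog name below is made up for illustration, and this is only a sketch, not the documented migration path:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;

public class CatalogApiSketch {
    public static void main(String[] args) {
        // Sketch only: Blink planner TableEnvironment plus an in-memory catalog.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .useBlinkPlanner()
            .inStreamingMode()
            .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // "my_catalog" is a made-up name; GenericInMemoryCatalog keeps all metadata in memory.
        tEnv.registerCatalog("my_catalog", new GenericInMemoryCatalog("my_catalog"));

        // Make the registered catalog (and its default database) the default
        // for unqualified table names.
        tEnv.useCatalog("my_catalog");
        tEnv.useDatabase("default");

        System.out.println(String.join(", ", tEnv.listCatalogs()));
    }
}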
Thanks Gary & Yu,
+1 (non-binding) from my side.

- I'm not aware of any blockers (except unfinished documents)
- Verified building from source (tests not skipped)
- Verified running nightly e2e tests
- Verified running example jobs in local/standalone/yarn setups. Everything seems to work fine.
- Played around with memory configurations on local/standalone/yarn setups. Everything works as expected (a small configuration sketch follows after this message).
- Checked on the release notes, particularly the memory management part.

Thank you~

Xintong Song
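As context for the memory settings mentioned above, a minimal sketch that sets two of the new 1.10 memory options programmatically; the keys come from the 1.10 memory configuration, while the values are arbitrary examples rather than recommendations (in a real setup they would normally live in flink-conf.yaml):

import org.apache.flink.configuration.Configuration;

public class MemoryConfigSketch {
    public static void main(String[] args) {
        Configuration config = new Configuration();
        // Total memory of the TaskManager process (unified option introduced in 1.10).
        config.setString("taskmanager.memory.process.size", "1728m");
        // Fraction of Flink memory to reserve as managed memory.
        config.setString("taskmanager.memory.managed.fraction", "0.4");
        System.out.println(config);
    }
}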
Hi everyone,
-1, because flink-kubernetes does not have the correct NOTICE file.

Here is the issue to track the problem [1].

[1] https://issues.apache.org/jira/browse/FLINK-15837

Cheers,
Till
Hi folks,
I found another issue related to the Blink planner: a ClassCastException is thrown when a ConnectorDescriptor is used to register the source. I am not sure whether it is a blocker. The issue can be found in [1]; in any case, it would be better to fix it in a new RC.

Best,
Jincheng

[1] https://issues.apache.org/jira/browse/FLINK-15840
Hi all,
I also have an issue [1] which I think would be great to include in the 1.10 release. The PR is already under review.

[1] https://issues.apache.org/jira/projects/FLINK/issues/FLINK-15494

--
Benchao Li
School of Electronics Engineering and Computer Science, Peking University
Tel: +86-15650713730
Email: [hidden email]; [hidden email]
Thanks Jincheng,
FLINK-15840 [1] should be a blocker: it means "TableEnvironment.from/scan(String path)" cannot be used for any temporary table or catalog table (other than DataStream-backed tables). It can be bypassed with "TableEnvironment.sqlQuery("select * from t")", but "from/scan" are very important TableEnvironment APIs, and without them the pure Table API can't be used seriously.

[1] https://issues.apache.org/jira/browse/FLINK-15840

Best,
Jingsong Lee
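A minimal sketch of the two entry points discussed above; it assumes a temporary or catalog table named "t" was registered with the environment earlier (for example via tEnv.connect(...) and a connector descriptor), which is the part omitted here:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class FromVersusSqlQuerySketch {
    public static void main(String[] args) {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .useBlinkPlanner()
            .inStreamingMode()
            .build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Assumed: a table named "t" has already been registered with tEnv.

        // Pure Table API entry point that FLINK-15840 affects in this RC:
        Table fromApi = tEnv.from("t");

        // Workaround mentioned above: go through SQL instead of from()/scan().
        Table fromSql = tEnv.sqlQuery("SELECT * FROM t");

        fromApi.printSchema();
        fromSql.printSchema();
    }
}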
As part of testing the RC, I ran into the following issue with a test case
that runs a job from a packaged jar on a MiniCluster. This test had to be modified due to the client-side API changes in 1.10.

The issue is that the jar file that also contains the entry point isn't part of the user classpath on the task manager. The entry point is executed successfully; when removing all user code from the job graph, the test passes.

If the jar isn't shipped automatically to the task manager, what do I need to set for it to occur?

Thanks,
Thomas

@ClassRule
public static final MiniClusterResource MINI_CLUSTER_RESOURCE = new MiniClusterResource(
    new MiniClusterResourceConfiguration.Builder()
        .build());

@Test(timeout = 30000)
public void test() throws Exception {
    final URI restAddress = MINI_CLUSTER_RESOURCE.getMiniCluster().getRestAddress().get();
    Configuration config = new Configuration();
    config.setString(JobManagerOptions.ADDRESS, restAddress.getHost());
    config.setString(RestOptions.ADDRESS, restAddress.getHost());
    config.setInteger(RestOptions.PORT, restAddress.getPort());
    config.set(CoreOptions.DEFAULT_PARALLELISM, 1);
    config.setString(DeploymentOptions.TARGET, RemoteExecutor.NAME);

    String entryPoint = "my.TestFlinkJob";

    PackagedProgram.Builder program = PackagedProgram.newBuilder()
        .setJarFile(new File(JAR_PATH))
        .setEntryPointClassName(entryPoint);

    ClientUtils.executeProgram(DefaultExecutorServiceLoader.INSTANCE, config, program.build());
}

The user function deserialization error:

org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot instantiate user function.
    at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:269)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:115)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:433)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.StreamCorruptedException: unexpected block data
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1581)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2158)

With 1.9, the driver code would be:

PackagedProgram program = new PackagedProgram(new File(JAR_PATH), entryPoint, new String[]{});
RestClusterClient client = new RestClusterClient(config, "RemoteExecutor");
client.run(program, 1);
I filed a small issue regarding readability for memory configurations. It
is not a blocking issue. I already attached a PR.

https://issues.apache.org/jira/browse/FLINK-15846
Hi Steven,
I think the changes you proposed in FLINK-15846 might cause problems. I commented on the JIRA ticket, and -1 for including this in the new RC.

Thank you~
Xintong Song

On Mon, Feb 3, 2020 at 6:17 AM Steven Wu <[hidden email]> wrote:
> I filed a small issue regarding readability for memory configurations. It
> is not a blocking issue. I already attached a PR.
> https://issues.apache.org/jira/browse/FLINK-15846 |
The above issue was resolved by adding
RemoteEnvironmentConfigUtils.setJarURLsToConfig(new String[] {JAR_PATH}, config); It might be helpful to provide migration instructions to users as part of this release. On Fri, Jan 31, 2020 at 9:20 PM Thomas Weise <[hidden email]> wrote: > As part of testing the RC, I run into the following issue with a test case > that runs a job from a packaged jar on a MiniCluster. This test had to be > modified due to the client-side API changes in 1.10. > > The issue is that the jar file that also contains the entry point isn't > part of the user classpath on the task manager. The entry point is executed > successfully; when removing all user code from the job graph, the test > passes. > > If the jar isn't shipped automatically to the task manager, what do I need > to set for it to occur? > > Thanks, > Thomas > > > @ClassRule > public static final MiniClusterResource MINI_CLUSTER_RESOURCE = new > MiniClusterResource( > new MiniClusterResourceConfiguration.Builder() > .build()); > > @Test(timeout = 30000) > public void test() throws Exception { > final URI restAddress = > MINI_CLUSTER_RESOURCE.getMiniCluster().getRestAddress().get(); > Configuration config = new Configuration(); > config.setString(JobManagerOptions.ADDRESS, restAddress.getHost()); > config.setString(RestOptions.ADDRESS, restAddress.getHost()); > config.setInteger(RestOptions.PORT, restAddress.getPort()); > config.set(CoreOptions.DEFAULT_PARALLELISM, 1); > config.setString(DeploymentOptions.TARGET, RemoteExecutor.NAME); > > String entryPoint = "my.TestFlinkJob"; > > PackagedProgram.Builder program = PackagedProgram.newBuilder() > .setJarFile(new File(JAR_PATH)) > .setEntryPointClassName(entryPoint); > > ClientUtils.executeProgram(DefaultExecutorServiceLoader.INSTANCE, > config, program.build()); > } > > The user function deserialization error: > > org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot > instantiate user function. > at > org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:269) > at > org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:115) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:433) > at > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.StreamCorruptedException: unexpected block data > at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1581) > at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278) > at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2158) > > With 1.9, the driver code would be: > > PackagedProgram program = new PackagedProgram(new File(JAR_PATH), > entryPoint, new String[]{}); > RestClusterClient client = new RestClusterClient(config, > "RemoteExecutor"); > client.run(program, 1); > > On Fri, Jan 31, 2020 at 9:16 PM Jingsong Li <[hidden email]> > wrote: > >> Thanks Jincheng, >> >> FLINK-15840 [1] should be a blocker, lead to >> "TableEnvironment.from/scan(string path)" cannot be used for all >> temporaryTable and catalogTable (not DataStreamTable). Of course, it can >> be >> bypassed by "TableEnvironment.sqlQuery("select * from t")", but >> "from/scan" >> are very important api of TableEnvironment and pure TableApi can't be used >> seriously. 
>> >> [1] https://issues.apache.org/jira/browse/FLINK-15840 >> >> Best, >> Jingsong Lee |
+1 (binding)
- Built Flink locally
- Tested quickstart by writing simple, WordCount-like jobs
- Submitted them to Yarn in both "per-job" and "session" mode

Regarding Thomas' comment, I agree that in this release we changed how some of the execution options are propagated through the stack. This was done as part of the "Executors" effort, which required all parameters to be passed to an executor through a Configuration object. That said, the classes and methods mentioned in the test are Internal and were not meant to be used directly by users. This is also the reason their behaviour was not described in the 1.9 documentation either. But if this is something worth documenting, we could in the future create a page in the "Internals" section of the documentation where we describe the "Executors" and how to use them "at your own risk". What do you think, @Thomas Weise?

Cheers,
Kostas

On Mon, Feb 3, 2020 at 5:44 AM Thomas Weise <[hidden email]> wrote:
>
> The above issue was resolved by adding
>
> RemoteEnvironmentConfigUtils.setJarURLsToConfig(new String[] {JAR_PATH},
> config);
>
> It might be helpful to provide migration instructions to users as part of
> this release. |
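To illustrate the point Kostas makes (everything the executor needs now travels in a single Configuration object), a minimal sketch using only the option classes that already appear in the test code quoted earlier in this thread. The literal "remote" stands in for RemoteExecutor.NAME from that snippet; this is an illustration of the idea, not a supported public API.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.CoreOptions;
import org.apache.flink.configuration.DeploymentOptions;
import org.apache.flink.configuration.JobManagerOptions;
import org.apache.flink.configuration.RestOptions;

public class ExecutorConfigSketch {

    // In 1.10 the executor to use and all of its parameters are described by one
    // Configuration, instead of being passed to client classes such as
    // RestClusterClient as in the 1.9 snippet quoted in this thread.
    static Configuration remoteExecutionConfig(String host, int restPort) {
        Configuration config = new Configuration();
        config.setString(DeploymentOptions.TARGET, "remote"); // selects the executor
        config.setString(JobManagerOptions.ADDRESS, host);
        config.setString(RestOptions.ADDRESS, host);
        config.setInteger(RestOptions.PORT, restPort);
        config.set(CoreOptions.DEFAULT_PARALLELISM, 1);
        return config;
    }
}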
Hi everyone,
I am hereby canceling the vote due to:

FLINK-15837
FLINK-15840

Another RC will be created later today.

Best,
Gary |
I found another issue with the Kinesis connector:
https://issues.apache.org/jira/browse/FLINK-15868 |
I opened a PR for FLINK-15868
<https://issues.apache.org/jira/browse/FLINK-15868>: https://github.com/apache/flink/pull/11006

With that change, I was able to run an application that consumes from Kinesis. I should have data tomorrow regarding the performance.

Two questions/observations:

1) Is the low watermark display in the UI still broken?
2) Was there a change in how job recovery reflects in the uptime metric? Didn't uptime previously reset to 0 on recovery (now it just keeps increasing)?

Thanks,
Thomas |
Another critical issue is FLINK-15858[1].
It is indeed a regression, but we don't want it to block the release. We will try our best to fix it.

[1] https://issues.apache.org/jira/browse/FLINK-15858

Best,
Jingsong Lee |
Hi Thomas,
> 2) Was there a change in how job recovery reflects in the uptime metric?
> Didn't uptime previously reset to 0 on recovery (now it just keeps
> increasing)?

The uptime is the difference between the current time and the time when the job transitioned to the RUNNING state. By default we no longer transition the job out of the RUNNING state when restarting. This is related to the new scheduler, which enables pipelined region failover by default [1]. Actually, we enabled pipelined region failover already in the binary distribution of Flink 1.9 by setting

jobmanager.execution.failover-strategy: region

in the default flink-conf.yaml. Unless you have removed this config option or you are using a custom yaml, you should already be seeing this behavior in Flink 1.9. If you do not want region failover, set

jobmanager.execution.failover-strategy: full

> 1) Is the low watermark display in the UI still broken?

I was not aware that this is broken. Is there an issue tracking this bug?

Best,
Gary

[1] https://issues.apache.org/jira/browse/FLINK-14651 |
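For anyone reproducing the uptime observation, the two settings Gary refers to are shown below as a flink-conf.yaml sketch. The keys and values are the ones from his mail; the comments only summarize his explanation.

# Shipped in the default flink-conf.yaml since 1.9: only the affected pipelined region
# is restarted, the job stays in RUNNING state, and the uptime metric keeps increasing.
jobmanager.execution.failover-strategy: region

# Alternative if you do not want region failover: restart the whole job on failure.
# jobmanager.execution.failover-strategy: full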