Dear community!
After careful testing and three release candidates, I am taking the chance and hereby starting a vote. Please vote on releasing the following candidate as Apache Flink version 0.9.0.

-------------------------------------------------------------
The commit to be voted on:
e4643d248350c5c2727d93aafd227e544f7d5690

Branch:
release-0.9.0-rc3 (https://git1-us-west.apache.org/repos/asf/flink/?p=flink.git;a=shortlog;h=refs/heads/release-0.9.0-rc3)

The release artifacts to be voted on can be found at:
http://people.apache.org/~mxm/flink-0.9.0-rc3/

Release artifacts are signed with the key with fingerprint C2909CBF:
http://www.apache.org/dist/flink/KEYS

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapacheflink-1041
-------------------------------------------------------------

Please vote on releasing this package as Apache Flink 0.9.0.

The vote is open for the next 72 hours and passes if a majority of at least three +1 PMC votes are cast. The vote ends on Saturday (June 20, 2015).

[ ] +1 Release this package as Apache Flink 0.9.0
[ ] -1 Do not release this package because ...
The following commits have been added since the last release candidate (release-0.9.0-rc2):

0914a7d [FLINK-2210] Table API support for aggregation on columns with null values
adc3e0e [FLINK-2203] handling null values for RowSerializer
fa02e5f [FLINK-2174] allow comments in 'slaves' file
44b969e [FLINK-2226][YARN] fail application on failed single-job cluster job
57810b5 [build] merge transitive notice files to shaded notices
57955d9 [FLINK-2120][runtime] rename AbstractJobVertex to JobVertex
6a9782a [legal] Updates LICENSE/NOTICE file of source and binary distribution
d690633 [FLINK-2225] [scheduler] Excludes static code paths from co-location constraint to avoid scheduling problems
0a8142a [hotfix] Removed execute() that followed print() in quickstart wordcount jobs
2359b49 [docs] Update obsolate cluster execution guide
c2b1e12 [FLINK-2209] [docs] Document linking with jars not in the binary dist
ccaa0b0 [FLINK-2221] [docs] Docs for not using local filesystem on the cluster as state backup
d147c71 [hotfix] Some small fixups in README.md
a300eec [FLINK-2219] [webfrontend] Fix IllegalArgumentException and decrease log level
c4f3f48 [FLINK-2224] Log error cause in JobStatusChange
8232809 [FLINK-2216] Exclude javadoc jar from examples
b342cf7 [readme] Synchronize tagline with intro, fix typos
+1
Verified checksums and signatures.
Built from source.
Ran the bundled batch examples on a local setup.
Ran a streaming example with checkpointed operator state on a cluster, killing TaskManagers underneath.

On Wed, Jun 17, 2015 at 2:35 AM, Maximilian Michels <[hidden email]> wrote:
> Please vote on releasing the following candidate as Apache Flink version 0.9.0.
-1
There is a bug in the newly introduced null-value support in RowSerializer: the serializer was changed to write booleans that signify whether a field is null. Comparison, however, still goes through TupleComparatorBase (via CaseClassComparator), which is not aware of these changes.

The reason no unit test found this problem is that it only occurs when very long keys are used that exceed the normalized-key length. Only then do we actually have to compare the binary data.

I see three options:
- Revert the relevant Table API changes
- Create a new RowComparator that does not derive from CaseClassComparator but basically copies almost all the code
- Add support for null values in Tuples and case classes as well, thereby bringing all composite types in sync regarding null values.

On Wed, 17 Jun 2015 at 16:30 Márton Balassi <[hidden email]> wrote:
> +1
> Verified checksums and signatures
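To make the serializer/comparator mismatch concrete, here is a minimal, hypothetical sketch (the class and method names are illustrative, not Flink's actual code): a serializer that now prefixes each field with a one-byte null marker, and an old-format read that interprets the first four bytes as the int payload, as a comparator unaware of the marker would.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class NullMarkerMismatch {

    // New format: a 1-byte null marker, then the 4-byte int payload.
    static byte[] serializeField(Integer value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeBoolean(value == null); // marker added by the null-value change
        out.writeInt(value == null ? 0 : value);
        return bos.toByteArray();
    }

    // Old-format comparison path: read the int directly at offset 0,
    // unaware that a marker byte now precedes it.
    static int oldFormatReadField(byte[] serialized) {
        return ByteBuffer.wrap(serialized).getInt(0);
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = serializeField(42);
        // The marker shifts the payload: the old-style read consumes the
        // marker byte plus only three bytes of the actual value.
        System.out.println("stored: 42, old-format read: " + oldFormatReadField(bytes));
    }
}
```

Run as-is, the old-format read yields 0 instead of 42 because the marker byte shifts the payload by one. This also illustrates why short-key unit tests pass: as long as keys fit in the normalized-key prefix, the byte-wise comparison path is never exercised.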
On 17 Jun 2015, at 18:05, Aljoscha Krettek <[hidden email]> wrote:
> I see three options:
> - Revert the relevant Table API changes
> - Create a new RowComparator that does not derive from CaseClassComparator but basically copies almost all the code
> - Add support for null-values in Tuples and Case classes as well, thereby bringing all composite types in sync regarding null-values.

I vote for option 1 for now.
I also vote for reverting the Table API changes.
On Wed, Jun 17, 2015 at 6:16 PM, Ufuk Celebi <[hidden email]> wrote:
> I vote for option 1 for now.
+1 I also think it's the cleanest solution for now. The Table API still works, just without support for null values.

On Thu, 18 Jun 2015 at 10:08 Maximilian Michels <[hidden email]> wrote:
> I also vote for reverting the Table API changes.
+1 for reverting.
On Thu, Jun 18, 2015 at 10:11 AM Aljoscha Krettek <[hidden email]> wrote:
> +1 I also think it's the cleanest solution for now. The table API still
> works, just without support for null values.