[DISCUSS] Cross-version compatibility guarantees of Flink Modules/Jars

Aljoscha Krettek
Hi,

This has come up a few times now, and I think we need to discuss the
guarantees that we want to officially give for this. What I mean by
cross-version compatibility is using, say, a Flink 1.10 Kafka connector
dependency/jar with Flink 1.11, or a Flink 1.10.0 connector with Flink
1.10.1. In the past, this has mostly worked. I think this was largely
by accident, though.

The problem is that connectors, which might be annotated as @Public, can
internally use classes which are annotated as @Internal or
@PublicEvolving. If those internal dependencies change, then the
connector jar cannot be used across different versions. This has happened
at least in [1] and [2], where [2] was caused by the interplay of [3]
and [4]. The initial release note on [4] said that the Kafka 0.9
connector can be used with Flink 1.11, but this was rendered incorrect by
[3]. (Also, sorry for all the []s ...)
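
As a rough sketch of the failure mode (the class names here are made up
for illustration, not actual connector code):

import org.apache.flink.annotation.Internal;
import org.apache.flink.annotation.Public;

// Lives in a Flink runtime module; free to change between releases.
@Internal
class OffsetCommitter {
    void commit(long offset) { /* ... */ }   // 1.10 signature
    // In 1.11 this might become: void commit(long offset, boolean sync)
}

// Shipped in the connector jar, compiled against 1.10.
@Public
public class MyKafkaSource {
    private final OffsetCommitter committer = new OffsetCommitter();

    public void snapshot(long offset) {
        // The connector jar carries a binary reference to the 1.10
        // signature; running it against 1.11 fails with NoSuchMethodError,
        // even though MyKafkaSource itself is annotated @Public.
        committer.commit(offset);
    }
}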

What should we do about it? So far, our strategy for ensuring that
jars/dependencies are compatible between versions has been "hope". If we
really want to verify this compatibility, we would have to ensure that
"public" code does not transitively use any "non-public" dependencies.

An alternative would be to say that we don't support any cross-version
compatibility between Flink versions. If users want to use an older
connector, they would have to copy the code, make sure it compiles
against a newer Flink version, and then manage that themselves.

What do you think?

Best,
Aljoscha

[1] https://issues.apache.org/jira/browse/FLINK-13586
[2] https://github.com/apache/flink/pull/12699#discussion_r442013514
[3] https://issues.apache.org/jira/browse/FLINK-17376
[4] https://issues.apache.org/jira/browse/FLINK-15115

Re: [DISCUSS] Cross-version compatibility guarantees of Flink Modules/Jars

Konstantin Knauf
Hi Aljoscha,

Thank you for bringing this up. IMHO the situation is different for minor &
patch version upgrades.

1) I don't think we need to provide any guarantees across Flink minor
versions (e.g. 1.10.1 -> 1.11.0). It seems reasonable to expect users to
recompile their user JARs when upgrading the cluster to the next minor
version of Apache Flink.

2) We should aim for compatibility across patch versions (1.10.0 ->
1.10.1). Being able to apply a patch to the Flink runtime/framework without
updating the dependencies of each user jar running on the cluster seems
really useful to me.

According to the discussion in [1] around stability guarantees for
@PublicEvolving, this would mean that connectors can use @Public
and @PublicEvolving classes, but not @Internal classes, right? This
generally seems reasonable to me. If we want to make it easy for our users,
we need to aim for stable interfaces anyway.

Cheers,

Konstantin

[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Stability-guarantees-for-PublicEvolving-classes-tp41459.html

--

Konstantin Knauf

https://twitter.com/snntrable

https://github.com/knaufk

Re: [DISCUSS] Cross-version compatibility guarantees of Flink Modules/Jars

Aljoscha Krettek
Yes, I would say the situation is different for minor vs. patch.

Side note: in a version like 1.x.y, most people in the Flink community
see x as the major version and y as the minor version. I know that this
is not proper semver, and I could be wrong about how people see it. It
doesn't change anything about the discussion, though.

Another side note: I said earlier that we would have to check the whole
transitive closure for usage of "internal" classes. This is wrong: we
only need to check the first level, i.e. code that is directly used by a
connector (or another jar that we want to use across versions). This is
still something that we don't do, so the fact that it works can be
somewhat attributed to luck and to the right people looking at the right
thing at the right time.

Aljoscha


Re: [DISCUSS] Cross-version compatibility guarantees of Flink Modules/Jars

Chesnay Schepler
Given that there are no compatibility guarantees for @PublicEvolving
classes, I'm not sure how much value there is in basing this discussion
on their current state.

Here's the thing: if we want connectors to be compatible across
versions, then this should also work for user-defined connectors (e.g.,
ones distributed via flink-packages.org).
Because if this is not the case, it just reinforces the need to have
all connectors maintained by us.

This implies that compatibility-testing _our connectors_ is not sufficient;
it will require us to either redefine our guarantees for
@PublicEvolving classes or define a @Public API for connectors to use.

And this doesn't just apply to connectors, but also to other APIs.
Take the ProcessFunction, for example, which we recommend people use in
the docs and on the mailing lists. No guarantees whatsoever.

We have to rethink our compatibility story, because it just isn't sound.
Yes, we got burned in the past by having too much be @Public, which forced
us to keep things around for longer than we wanted,
but then we did a full 180 and stopped giving _any_ guarantees for
new APIs.

It is about time we change this.


Re: [DISCUSS] Cross-version compatibility guarantees of Flink Modules/Jars

Aljoscha Krettek
I did recently change the attitude there; see, for example, the new
WatermarkStrategy and WatermarkGenerator and the code around them. [1] [2]

I think this is up to the judgement of the folks who add new APIs;
sometimes we don't know very well how things will shake out, but I agree
with Chesnay that we have been too liberal with @PublicEvolving.

Coming back to cross-version compatibility for jars: I think any code
that uses @PublicEvolving APIs is not guaranteed to be compatible across
versions. I didn't check manually, but I would say this probably affects
all connectors, and we shouldn't recommend that people use them across
different (minor, that is 1.x) versions. For patch versions it should be
fine, given our recent discussion about not breaking @PublicEvolving for
patch (1.x.y) releases.

Aljoscha

[1]
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkStrategy.java
[2]
https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkGenerator.java
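
For reference, using the new API looks roughly like this (the event type
and its timestamp field are made up for illustration):

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class WatermarkExample {

    // Hypothetical event type, only for illustration.
    public static class ClickEvent {
        public long timestamp;
    }

    public static void main(String[] args) {
        // Bounded-out-of-orderness watermarks with a 5 second bound, plus a
        // timestamp assigner; both built purely on the new API in
        // org.apache.flink.api.common.eventtime.
        WatermarkStrategy<ClickEvent> strategy = WatermarkStrategy
                .<ClickEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, recordTimestamp) -> event.timestamp);

        // strategy is then passed to
        // DataStream#assignTimestampsAndWatermarks(strategy).
    }
}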


Re: [DISCUSS] Cross-version compatibility guarantees of Flink Modules/Jars

Chesnay Schepler
> I didn't check manually, but I would say this probably affects
> all connectors, and we shouldn't recommend that people use them across
> different (minor, that is 1.x) versions. For patch versions it should be
> fine, given our recent discussion about not breaking @PublicEvolving for
> patch (1.x.y) releases.
Well, yes, but this does effectively mean that all user jars should be
rebuilt against the new Flink version.
I don't think there even is a connector that is really @Public, and
there won't be many jobs getting away with not using any connector.
