Stephan Ewen wrote on 14/02/2020 18:28:

Hi all!

I propose to remove the log upload via transfer.sh and rely on the S3 upload instead.

The reason is that transfer.sh seems to be very unreliable (it has timed out in many profiles recently), and we also often cannot access the logs that were uploaded (the transfer.sh website errors out).

Best,
Stephan
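
For context, the two upload paths being compared look roughly like this in a CI log-collection script. This is only a sketch; the archive name, bucket, and build-id variable are placeholders, not the actual Flink setup:

    # Collect the logs produced by the build (path is illustrative).
    tar czf logs.tgz ./flink-logs/

    # Current path: push the archive to the public transfer.sh service.
    # This is the step that has been timing out.
    curl --upload-file logs.tgz https://transfer.sh/logs.tgz

    # Proposed path: copy the archive to an S3 bucket instead. The bucket
    # name and credentials are made up here; as noted in the reply below,
    # the real bucket is only available to builds in the apache repo.
    aws s3 cp logs.tgz "s3://example-flink-ci-logs/$BUILD_ID/logs.tgz"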

Chesnay Schepler wrote on Fri, Feb 14, 2020 at 8:44 PM:

The S3 setup only works in the apache repo, though; not on contributor branches or PR builds.

We can tighten the timeouts (I already talked to Robert about that), at which point keeping the transfer.sh upload doesn't hurt.
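
A minimal sketch of what tightening the timeouts on that step could look like, so a hanging transfer.sh upload cannot stall a build (the flag values are illustrative):

    # Bound both the connection attempt and the whole transfer, and never
    # fail the build just because the best-effort log upload did not work.
    curl --connect-timeout 10 --max-time 60 \
         --upload-file logs.tgz https://transfer.sh/logs.tgz || true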

Robert Metzger wrote on Sat, Feb 15, 2020 at 8:12 PM:

I agree that we need to fix this.

We could either misuse the "build artifact" feature of Azure Pipelines to publish the logs, or set up something simple for Flink ourselves (like running an instance of https://github.com/lachs0r/0x0 or https://github.com/dutchcoders/transfer.sh :) ).
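
Azure Pipelines can attach files to a build as artifacts either through the PublishBuildArtifacts task in the pipeline YAML or, from inside a build script, through a logging command. A sketch of the script variant, with made-up folder and artifact names:

    # Ask the Azure Pipelines agent to attach the log archive to the build
    # as a downloadable artifact. This needs no external credentials, so it
    # also works for PR builds and contributor branches.
    echo "##vso[artifact.upload containerfolder=logs;artifactname=e2e-logs]$(pwd)/logs.tgz"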

Tracking this here: https://issues.apache.org/jira/browse/FLINK-16122
