[jira] [Commented] (FLINK-941) Possible deadlock after increasing my data set size


Shang Yuanchun (Jira)

    [ https://issues.apache.org/jira/browse/FLINK-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14035670#comment-14035670 ]

Stephan Ewen commented on FLINK-941:
------------------------------------

I was able to reproduce the problem, thanks to [~uce].

The issue is that Broadcast variables are not

I think the correct way to solve that is to make sure that we never allow backpressure on intermediate results that are consumed by multiple targets. That does depend on the redesigned handling of intermediate result partitions (which [~uce] has drafted).
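The failure mode described above can be sketched outside of Flink. The following is a minimal model, not Flink's actual runtime code: a producer writes a pipelined intermediate result into a bounded per-consumer channel, but one consumer stalls (for example, because it must first fully materialize its broadcast input) and never drains its channel. Once the channel's buffers fill up, backpressure blocks the producer forever. The class name, buffer capacity, and record count here are all illustrative assumptions.

```java
import java.util.ArrayDeque;

// Minimal sketch (not Flink's runtime code): models why pipelining one
// intermediate result to two consumers can deadlock when one consumer
// does not drain its channel until the other side has finished.
public class BackpressureSketch {

    // Simulates a producer writing into a stalled consumer's bounded
    // channel. Returns how many records could be emitted before the
    // producer would block on backpressure.
    static int produceUntilBlocked(int records, int bufferCapacity) {
        ArrayDeque<Integer> stalledChannel = new ArrayDeque<>();
        int produced = 0;
        while (produced < records) {
            if (stalledChannel.size() == bufferCapacity) {
                // With blocking network buffers, a real producer waits
                // here forever: the stalled consumer never frees a buffer.
                break;
            }
            stalledChannel.add(produced++);
        }
        return produced;
    }

    public static void main(String[] args) {
        // 10 records, but only 4 buffers: the producer gets stuck early.
        int produced = produceUntilBlocked(10, 4);
        System.out.println("produced " + produced + " of 10 records before blocking");
    }
}
```

This also shows why the data set size matters: as long as the whole result fits into the available buffers (records <= bufferCapacity), the producer finishes and no deadlock occurs, which matches the bug only appearing after the data set grew.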

> Possible deadlock after increasing my data set size
> ---------------------------------------------------
>
>                 Key: FLINK-941
>                 URL: https://issues.apache.org/jira/browse/FLINK-941
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: pre-apache-0.5.1
>            Reporter: Bastian Köcher
>            Assignee: Stephan Ewen
>         Attachments: IMPRO-3.SS14.G03.zip
>
>
> If I increase my data set, my algorithm stops at some point and doesn't continue anymore. I have already waited quite a while, but nothing happens. The Linux process explorer also shows that the process is sleeping and waiting for something to happen, so it could be a deadlock.
> I attached the source of my program; the class HAC_2 is the actual algorithm.
> Changing line 271 from "if(Integer.parseInt(tokens[0]) > 282)" to "if(Integer.parseInt(tokens[0]) > 283)" on my PC "enables" the bug. The numbers 282 and 283 are the numbers of documents in my test data, and this line skips all documents with an id greater than that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)