[jira] [Commented] (FLINK-941) Possible deadlock after increasing my data set size



Shang Yuanchun (Jira)

    [ https://issues.apache.org/jira/browse/FLINK-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14035689#comment-14035689 ]

Stephan Ewen commented on FLINK-941:
------------------------------------

Right ;-) The issue is that broadcast variables are not taken into account when deciding where to place pipeline breakers.

This needs a fix in Flink. I am working on it now.
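For illustration only, here is a minimal sketch (not taken from the attached HAC_2 code, and written against the later org.apache.flink DataSet API) of the kind of plan this refers to: the same intermediate data set is consumed by an operator both as a broadcast variable and as regular input, producing a "diamond" in the plan where the optimizer has to break the pipeline to avoid a deadlock. Class and variable names are made up for the example.

{code:java}
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

import java.util.List;

public class BroadcastDiamondSketch {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Integer> documents = env.fromElements(1, 2, 3, 4, 5);

        // The same data set feeds the mapper as regular input AND as a
        // broadcast variable, so the plan contains a diamond. Without a
        // pipeline breaker on one of the two paths, both sides can block
        // waiting for each other.
        DataSet<Integer> result = documents
                .map(new RichMapFunction<Integer, Integer>() {
                    private List<Integer> allDocs;

                    @Override
                    public void open(Configuration parameters) {
                        // Read the broadcast variable registered below.
                        this.allDocs = getRuntimeContext().getBroadcastVariable("docs");
                    }

                    @Override
                    public Integer map(Integer value) {
                        // Placeholder computation that needs the full set.
                        return value + allDocs.size();
                    }
                })
                .withBroadcastSet(documents, "docs");

        result.print();
    }
}
{code}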

> Possible deadlock after increasing my data set size
> ---------------------------------------------------
>
>                 Key: FLINK-941
>                 URL: https://issues.apache.org/jira/browse/FLINK-941
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: pre-apache-0.5.1
>            Reporter: Bastian Köcher
>            Assignee: Stephan Ewen
>         Attachments: IMPRO-3.SS14.G03.zip
>
>
> If I increase my data set, my algorithm stops at some point and does not continue. I have already waited quite a while, but nothing happens. The Linux process explorer also shows that the process is sleeping and waiting for something to happen, so it could be a deadlock.
> I attached the source of my program; the class HAC_2 is the actual algorithm.
> Changing line 271 from "if(Integer.parseInt(tokens[0]) > 282)" to "if(Integer.parseInt(tokens[0]) > 283)" on my PC "enables" the bug. The numbers 282 and 283 are the numbers of documents in my test data, and this line skips all documents with an id greater than that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)