[jira] [Commented] (FLINK-958) Sorting a group when key is a Writable gives a NonSerializable Error


Shang Yuanchun (Jira)

    [ https://issues.apache.org/jira/browse/FLINK-958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037964#comment-14037964 ]

Artem Tsikiridis commented on FLINK-958:
----------------------------------------

Hello,

I use it in a different context in my application (a Hadoop abstraction layer) with a custom key selector. :) The problem is not related to sorting on the same key that I group on, though: it also occurs when I sort on a different field (e.g. the value).

Well, it is definitely Hadoop's serialization inside the comparators that are created for the sorting. I am currently trying to understand what the proper way would be to make the ComparatorFactory aware of Hadoop's serialization.
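
For now, a possible workaround on my side seems to be to keep the grouping/sorting key as a plain Java type and only wrap it into the Hadoop Writables afterwards, so the runtime comparator never has to handle the Hadoop types at all. A rough, untested sketch (same {{myData}} of type {{DataSet<Tuple2<String, Long>>}} as in the issue):

{code}
// Untested workaround sketch: group and sort while the key is still a plain
// String/Long, so the generated comparator only deals with Java types. The
// wrapping into Text/LongWritable can then happen inside whatever group-wise
// operator (e.g. a group reduce) follows the sort.
Grouping<Tuple2<String, Long>> sorted = myData   // DataSet<Tuple2<String, Long>>
        .groupBy(0)
        .sortGroup(0, Order.ASCENDING);
{code}

A proper fix would presumably mean that the comparator generated for Writable keys delegates to Hadoop's own machinery (e.g. {{WritableComparator.get(Text.class)}}) rather than being shipped via plain Java serialization, but I have not yet found the right place in {{RuntimeComparatorFactory}} to hook that in.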


> Sorting a group when key is a Writable gives a NonSerializable Error
> --------------------------------------------------------------------
>
>                 Key: FLINK-958
>                 URL: https://issues.apache.org/jira/browse/FLINK-958
>             Project: Flink
>          Issue Type: Bug
>          Components: Java API
>            Reporter: Artem Tsikiridis
>
> I cannot sort a group by key when the key is a Writable. Is this a bug?
> Consider:
> {code}
> Grouping<Tuple2<Text, LongWritable>> res = myData
>     .map(new MapFunction<Tuple2<String, Long>, Tuple2<Text, LongWritable>>() {
>         @Override
>         public Tuple2<Text, LongWritable> map(Tuple2<String, Long> value) throws Exception {
>             return new Tuple2<Text, LongWritable>(new Text(value.f0), new LongWritable(value.f1));
>         }
>     })
>     .groupBy(0).sortGroup(0, Order.ASCENDING);
> {code}
> This gives the following stacktrace:
> {code}
> Caused by: java.io.NotSerializableException: sun.misc.Launcher$AppClassLoader
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
> at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
> at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
> at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
> at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1377)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1173)
> at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
> at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
> at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
> at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
> at eu.stratosphere.util.InstantiationUtil.serializeObject(InstantiationUtil.java:250)
> at eu.stratosphere.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:231)
> at eu.stratosphere.api.java.typeutils.runtime.RuntimeComparatorFactory.writeParametersToConfig(RuntimeComparatorFactory.java:40)
> ... 14 more
> {code}
> Is sorting groupings with Writables as keys not supported? I understand this is a Hadoop serialization error, but so far I have not been able to fix it.
> Any ideas?



--
This message was sent by Atlassian JIRA
(v6.2#6252)