Spark Dataset aggregation similar to RDD aggregate

Posted 2020-02-15 02:23

RDD has a very useful method, aggregate, that allows you to accumulate with some zero value and combine the results across partitions. Is there any way to do that with Dataset[T]? As far as I can see in the Scaladoc, there is nothing capable of doing that. Even the reduce method only supports binary operations with T as both arguments. Any reason why? And is there anything capable of doing the same?
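For illustration, this is the kind of call I mean on the RDD side (a minimal sketch with toy data; the names are just examples):

```scala
import org.apache.spark.SparkContext

// aggregate(zeroValue)(seqOp, combOp):
//   seqOp folds elements into the accumulator within each partition,
//   combOp merges accumulators across partitions.
// Note that the accumulator type U (Int) differs from the element type T (String).
def totalLength(sc: SparkContext): Int =
  sc.parallelize(Seq("a", "bb", "ccc")).aggregate(0)(
    (acc, s) => acc + s.length, // seqOp: (U, T) => U
    (a, b) => a + b             // combOp: (U, U) => U
  )
```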

Thanks a lot!

VK

1 Answer
家丑人穷心不美
#2 · 2020-02-15 03:17

There are two different classes which can be used to achieve aggregate-like behavior in the Dataset API:

- UserDefinedAggregateFunction, which uses SQL types and takes Columns as input.
- Aggregator, which uses the strongly typed API and takes objects as input.

Both provide an additional finalization method (evaluate and finish, respectively) which is used to generate the final result, and both can be used for global and by-key aggregations.
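For example, here is a minimal sketch of an Aggregator that mirrors aggregate's zero/seqOp/combOp shape (the SumLength object and the data are illustrative, not part of the Spark API):

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
import org.apache.spark.sql.expressions.Aggregator

// Typed Aggregator mirroring RDD.aggregate:
//   zero ~ zeroValue, reduce ~ seqOp, merge ~ combOp,
//   finish ~ the extra finalization step mentioned above.
object SumLength extends Aggregator[String, Long, Long] {
  def zero: Long = 0L
  def reduce(acc: Long, s: String): Long = acc + s.length
  def merge(a: Long, b: Long): Long = a + b
  def finish(acc: Long): Long = acc
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}

val spark = SparkSession.builder.master("local[*]").getOrCreate()
import spark.implicits._

val ds = Seq("spark", "dataset", "aggregate").toDS()
ds.select(SumLength.toColumn).show()                     // global aggregation
ds.groupByKey(_.take(1)).agg(SumLength.toColumn).show()  // by-key aggregation
```

Unlike reduce, the buffer type (Long here) does not have to match the element type (String), which is exactly what RDD.aggregate gives you.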
