How to flatten nested lists in PySpark?

Posted 2019-02-17 05:56

I have an RDD structure like:

rdd = [[[1],[2],[3]], [[4],[5]], [[6]], [[7],[8],[9],[10]]]

and I want it to become:

rdd = [1,2,3,4,5,6,7,8,9,10]

How do I write a map or reduce function to make it work?

1 Answer

We Are One · 2019-02-17 06:17

You can, for example, use flatMap with a list comprehension:

rdd.flatMap(lambda xs: [x[0] for x in xs])

or, to make it a bit more general (the first version assumes each inner list is a singleton like [2]; this one handles inner lists of any length):

from itertools import chain

rdd.flatMap(lambda xs: chain(*xs)).collect()
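Both snippets rely on the same flattening logic, which can be checked without a Spark cluster. A minimal sketch in plain Python, where `flat_map` is a hypothetical helper (not a Spark API) that mimics `RDD.flatMap` semantics: apply a function to each element and concatenate the resulting iterables:

```python
from itertools import chain

def flat_map(f, seq):
    # Mimics RDD.flatMap: apply f to each element, then
    # concatenate the resulting iterables into one flat list.
    return list(chain.from_iterable(f(x) for x in seq))

data = [[[1], [2], [3]], [[4], [5]], [[6]], [[7], [8], [9], [10]]]

# Approach 1: assumes each inner list is a singleton like [2]
out1 = flat_map(lambda xs: [x[0] for x in xs], data)

# Approach 2: works for inner lists of any length
out2 = flat_map(lambda xs: chain(*xs), data)

print(out1)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(out2)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

On a real RDD you would call `rdd.flatMap(...)` directly; the helper above only illustrates what the transformation does to each partition's elements.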