Split large array columns into multiple columns

Published 2020-07-16 09:24

Question:

I have:

+---+-------+-------+
| id|   var1|   var2|
+---+-------+-------+
|  a|[1,2,3]|[1,2,3]|
|  b|[2,3,4]|[2,3,4]|
+---+-------+-------+

I want:

+---+-------+-------+-------+-------+-------+-------+
| id|var1[0]|var1[1]|var1[2]|var2[0]|var2[1]|var2[2]|
+---+-------+-------+-------+-------+-------+-------+
|  a|      1|      2|      3|      1|      2|      3|
|  b|      2|      3|      4|      2|      3|      4|
+---+-------+-------+-------+-------+-------+-------+

The solution provided by "How to split a list to multiple columns in Pyspark?"

df1.select('id', df1.var1[0], df1.var1[1], ...).show()

works, but some of my arrays are very long (up to 332 elements), so listing every index by hand is impractical.

How can I write this so that it handles arrays of any length?
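For a fixed, known maximum length, the indexed columns can at least be generated instead of typed out by hand (a minimal sketch, assuming df1 from above and using the 332 mentioned as the upper bound):

# Minimal sketch: generate the indexed columns for a known maximum length.
# Positions past the end of a shorter array come back as null in Spark.
max_len = 332
df1.select('id',
           *[df1.var1[i] for i in range(max_len)],
           *[df1.var2[i] for i in range(max_len)]).show()

But this still hard-codes 332 and produces hundreds of mostly-null columns when the arrays are shorter.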

Answer 1:

This solution will work for your problem, no matter the number of initial columns or the size of your arrays. Moreover, if a column contains arrays of different sizes (e.g. [1,2] and [3,4,5]), the result will contain the maximum number of columns for that array, with null values filling the gaps.

from pyspark.sql import functions as F

# Sample data matching the question
df = spark.createDataFrame(
    sc.parallelize([['a', [1, 2, 3], [1, 2, 3]], ['b', [2, 3, 4], [2, 3, 4]]]),
    ["id", "var1", "var2"])

# Every column except the id holds an array
columns = df.drop('id').columns

# Find the maximum array length of each column
df_sizes = df.select(*[F.size(col).alias(col) for col in columns])
df_max = df_sizes.agg(*[F.max(col).alias(col) for col in columns])
max_dict = df_max.collect()[0].asDict()

# Build one indexed column per position, up to each column's maximum length;
# out-of-range positions are returned as null
df_result = df.select('id', *[df[col][i] for col in columns for i in range(max_dict[col])])
df_result.show()
>>>
+---+-------+-------+-------+-------+-------+-------+
| id|var1[0]|var1[1]|var1[2]|var2[0]|var2[1]|var2[2]|
+---+-------+-------+-------+-------+-------+-------+
|  a|      1|      2|      3|      1|      2|      3|
|  b|      2|      3|      4|      2|      3|      4|
+---+-------+-------+-------+-------+-------+-------+
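The same pattern handles ragged arrays; a quick check with a small hypothetical DataFrame (reusing spark and F from above) shows the shorter row padded with null:

# Quick check of the ragged case: var1 has lengths 2 and 3, so three
# var1[i] columns are produced and the shorter row gets a null in var1[2].
df_ragged = spark.createDataFrame(
    [['a', [1, 2], [1, 2, 3]], ['b', [3, 4, 5], [2, 3, 4]]],
    ["id", "var1", "var2"])

columns = df_ragged.drop('id').columns
max_dict = df_ragged.agg(
    *[F.max(F.size(col)).alias(col) for col in columns]).collect()[0].asDict()

df_ragged.select('id', *[df_ragged[col][i]
                         for col in columns
                         for i in range(max_dict[col])]).show()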


Tags: pyspark