How to flatmap a nested Dataframe in Spark

Published 2020-01-24 03:41

Question:

I have a nested string, as shown below. I want to flat-map it to produce unique rows in Spark.

My DataFrame contains:

A,B,"x,y,z",D

I want to convert it to produce output like:

A,B,x,D
A,B,y,D
A,B,z,D

How can I do that? Basically, how can I apply flatMap, or any other function, inside a DataFrame?

Thanks

Answer 1:

Spark 2.0+

Dataset.flatMap:

// Requires the implicit encoders for tuple types:
import spark.implicits._

val ds = df.as[(String, String, String, String)]
ds.flatMap {
  // Split the third field on commas and emit one tuple per piece
  case (x1, x2, x3, x4) => x3.split(",").map((x1, x2, _, x4))
}.toDF

Spark 1.3+

Use split and explode functions:

import org.apache.spark.sql.functions.{explode, split}
import spark.implicits._

val df = Seq(("A", "B", "x,y,z", "D")).toDF("x1", "x2", "x3", "x4")
// split turns x3 into an array; explode generates one row per element
df.withColumn("x3", explode(split($"x3", ",")))
</df.withColumn>

Spark 1.x

DataFrame.explode (deprecated in Spark 2.x)

// The Row-based variant: read the column value, split it, and wrap
// each piece in a Tuple1 so it becomes a new column
df.explode($"x3")(_.getAs[String](0).split(",").map(Tuple1(_)))
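To see the shape of the transformation without a Spark session, here is a minimal sketch of the same row expansion using plain Scala collections (local sample data only, no Spark APIs involved):

```scala
// One local "row" mirroring the question's data
val rows = Seq(("A", "B", "x,y,z", "D"))

// flatMap splits the third field on commas and emits one tuple per piece,
// exactly the expansion Dataset.flatMap performs on each record
val expanded = rows.flatMap {
  case (x1, x2, x3, x4) => x3.split(",").map(p => (x1, x2, p, x4))
}

// expanded == Seq(("A","B","x","D"), ("A","B","y","D"), ("A","B","z","D"))
```

The Spark version is the same logic, just applied per-partition across the cluster and wrapped back into a DataFrame with `.toDF`.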