How to concatenate multiple columns into a single column

Published 2019-04-07 19:38

Question:

Let's say I have the following dataframe:

+----------+-----------+---------+-------+----+
| agentName|original_dt|parsed_dt|   user|text|
+----------+-----------+---------+-------+----+
|qwertyuiop|          0|        0|16102.0|   0|
+----------+-----------+---------+-------+----+

I wish to create a new dataframe with one additional column containing the concatenation of all the elements of the row:

+----------+-----------+---------+-------+----+----------------------------+
| agentName|original_dt|parsed_dt|   user|text|newCol                      |
+----------+-----------+---------+-------+----+----------------------------+
|qwertyuiop|          0|        0|16102.0|   0|[qwertyuiop, 0, 0, 16102, 0]|
+----------+-----------+---------+-------+----+----------------------------+

Note: this is just an example. The number of columns and their names is not known in advance; it is dynamic.

Answer 1:

I think this works perfectly for your case. Here is an example:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, concat_ws, split}

val spark =
  SparkSession.builder().master("local").appName("test").getOrCreate()
import spark.implicits._

val data = spark.sparkContext.parallelize(
  Seq(("qwertyuiop", 0, 0, 16102.0, 0))
).toDF("agentName", "original_dt", "parsed_dt", "user", "text")

// Concatenate every column (taken dynamically from the schema) with a
// separator, then split the result back into an array column.
val result = data.withColumn(
  "newCol",
  split(concat_ws(";", data.schema.fieldNames.map(col): _*), ";")
)
result.show(false)

+----------+-----------+---------+-------+----+------------------------------+
|agentName |original_dt|parsed_dt|user   |text|newCol                        |
+----------+-----------+---------+-------+----+------------------------------+
|qwertyuiop|0          |0        |16102.0|0   |[qwertyuiop, 0, 0, 16102.0, 0]|
+----------+-----------+---------+-------+----+------------------------------+

Hope this helped!



Answer 2:

TL;DR Use struct function with Dataset.columns operator.

Quoting the scaladoc of struct function:

struct(colName: String, colNames: String*): Column
Creates a new struct column that composes multiple input columns.

There are two variants: a string-based one for column names, and one using Column expressions (which gives you more flexibility over the calculation you want to apply to the concatenated columns).

From Dataset.columns:

columns: Array[String] Returns all column names as an array.


Your case would then look as follows:

scala> df.withColumn("newCol",
  struct(df.columns.head, df.columns.tail: _*)).
  show(false)
+----------+-----------+---------+-------+----+--------------------------+
|agentName |original_dt|parsed_dt|user   |text|newCol                    |
+----------+-----------+---------+-------+----+--------------------------+
|qwertyuiop|0          |0        |16102.0|0   |[qwertyuiop,0,0,16102.0,0]|
+----------+-----------+---------+-------+----+--------------------------+
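The Column-based variant of struct mentioned above can be sketched as follows. This is a minimal sketch, not from the original answer: the upper call on agentName is purely an illustrative choice to show where a per-column transformation would go.

```scala
import org.apache.spark.sql.functions.{col, struct, upper}

// Column-based variant: map every column name to a Column expression,
// optionally transforming some of them along the way.
// upper() on agentName is illustrative only.
df.withColumn(
  "newCol",
  struct(df.columns.map {
    case "agentName" => upper(col("agentName"))
    case c           => col(c)
  }: _*)
).show(false)
```

The pattern match replaces a single column's expression while leaving the rest untouched, which is the kind of flexibility the string-based variant cannot give you.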


Answer 3:

In general, you can merge multiple dataframe columns into one using array.

df.select($"*", array($"col1", $"col2").as("newCol")) // $"*" captures all existing columns

Here is a one-line solution for your case:

df.select($"*", array($"agentName", $"original_dt", $"parsed_dt", $"user", $"text").as("newCol"))


Answer 4:

You can use a udf function to concatenate all the columns into one. All you have to do is define a udf, pass it all the columns you want to concatenate, and call it via the dataframe's .withColumn function.

Or

You can use concat_ws(java.lang.String sep, Column... exprs) function available for dataframe.

import org.apache.spark.sql.functions.concat_ws
// assumes spark.implicits._ is in scope for .toDF and the $-syntax

var df = Seq(("qwertyuiop", 0, 0, 16102.0, 0))
  .toDF("agentName", "original_dt", "parsed_dt", "user", "text")
// withColumn returns a new dataframe, so reassign it before calling show
df = df.withColumn("newCol", concat_ws(",", $"agentName", $"original_dt", $"parsed_dt", $"user", $"text"))
df.show(false)

This will give you the following output:

+----------+-----------+---------+-------+----+------------------------+
|agentName |original_dt|parsed_dt|user   |text|newCol                  |
+----------+-----------+---------+-------+----+------------------------+
|qwertyuiop|0          |0        |16102.0|0   |qwertyuiop,0,0,16102.0,0|
+----------+-----------+---------+-------+----+------------------------+

That will get you the result you want.