How to add a constant column in a Spark DataFrame?

Published: 2019-01-01 06:07

Question:

I want to add a column in a DataFrame with some arbitrary value (that is the same for each row). I get an error when I use withColumn as follows:

dt.withColumn('new_column', 10).head(5)

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-50-a6d0257ca2be> in <module>()
      1 dt = (messages
      2     .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt")))
----> 3 dt.withColumn('new_column', 10).head(5)

/Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col)
   1166         [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)]
   1167         """
-> 1168         return self.select('*', col.alias(colName))
   1169 
   1170     @ignore_unicode_prefix

AttributeError: 'int' object has no attribute 'alias'

It seems that I can trick the function into working as I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case):

dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5)

[Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10),
 Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10),
 Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10),
 Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10),
 Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)]

This is supremely hacky, right? I assume there is a more legit way to do this?

Answer 1:

Spark 2.2+

Spark 2.2 introduces typedLit to support Seq, Map, and Tuples (SPARK-19254), so the following calls are supported (Scala):

import org.apache.spark.sql.functions.typedLit

df.withColumn(\"some_array\", typedLit(Seq(1, 2, 3)))
df.withColumn(\"some_struct\", typedLit((\"foo\", 1, .0.3)))
df.withColumn(\"some_map\", typedLit(Map(\"key1\" -> 1, \"key2\" -> 2)))

Spark 1.3+ (lit), 1.4+ (array, struct), 2.0+ (map):

The second argument for DataFrame.withColumn should be a Column, so you have to use a literal:

from pyspark.sql.functions import lit

df.withColumn('new_column', lit(10))

If you need complex columns, you can build these using blocks like array:

from pyspark.sql.functions import array, create_map, struct

df.withColumn(\"some_array\", array(lit(1), lit(2), lit(3)))
df.withColumn(\"some_struct\", struct(lit(\"foo\"), lit(1), lit(.3)))
df.withColumn(\"some_map\", create_map(lit(\"key1\"), lit(1), lit(\"key2\"), lit(2)))

Exactly the same methods can be used in Scala.

import org.apache.spark.sql.functions.{array, lit, map, struct}

df.withColumn(\"new_column\", lit(10))
df.withColumn(\"map\", map(lit(\"key1\"), lit(1), lit(\"key2\"), lit(2)))

To provide names for structs, use either alias on each field:

df.withColumn(
    "some_struct",
    struct(lit("foo").alias("x"), lit(1).alias("y"), lit(0.3).alias("z"))
)

or cast on the whole object

df.withColumn(
    "some_struct",
    struct(lit("foo"), lit(1), lit(0.3)).cast("struct<x: string, y: integer, z: double>")
)

It is also possible, although slower, to use a UDF.
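
A minimal sketch of that approach (the UDF name is just a placeholder):

import org.apache.spark.sql.functions.udf

// A zero-argument UDF that always returns 10; invoking it yields a
// Column, which satisfies withColumn's signature
val tenUdf = udf(() => 10)
df.withColumn("new_column", tenUdf())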

Note:

The same constructs can be used to pass constant arguments to UDFs or SQL functions.
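
For example, a constant can be passed into a UDF by wrapping it in lit (a sketch; addN and the id column are hypothetical):

import org.apache.spark.sql.functions.{col, lit, udf}

// lit(10) turns the constant into a Column, which is what the UDF expects
val addN = udf((x: Int, n: Int) => x + n)
df.withColumn("plus_ten", addN(col("id"), lit(10)))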



Answer 2:

In Spark 2.2 there are two ways to add a constant value to a column in a DataFrame:

1) Using lit

2) Using typedLit

The difference between the two is that typedLit can also handle parameterized Scala types, e.g. List, Seq, and Map.
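
For example (a sketch; the column names are illustrative):

import org.apache.spark.sql.functions.typedLit

df.withColumn("some_list", typedLit(List(1, 2, 3)))
df.withColumn("some_map", typedLit(Map("a" -> 1, "b" -> 2)))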

Sample DataFrame:

val df = spark.createDataFrame(Seq((0, "a"), (1, "b"), (2, "c"))).toDF("id", "col1")

+---+----+
| id|col1|
+---+----+
|  0|   a|
|  1|   b|
|  2|   c|
+---+----+

1) Using lit: adding a constant string value in a new column named newcol:

import org.apache.spark.sql.functions.lit
val newdf = df.withColumn("newcol", lit("myval"))

Result:

+---+----+------+
| id|col1|newcol|
+---+----+------+
|  0|   a| myval|
|  1|   b| myval|
|  2|   c| myval|
+---+----+------+

2) Using typedLit:

import org.apache.spark.sql.functions.typedLit
df.withColumn(\"newcol\", typedLit((\"sample\", 10, .044)))

Result:

+---+----+-----------------+
| id|col1|           newcol|
+---+----+-----------------+
|  0|   a|[sample,10,0.044]|
|  1|   b|[sample,10,0.044]|
|  2|   c|[sample,10,0.044]|
+---+----+-----------------+