Spark DataFrame reduceByKey-like operation

Published 2019-03-29 19:07

Question:

I have a Spark dataframe with the following data (I use spark-csv to load the data in):

key,value
1,10
2,12
3,0
1,20

Is there anything similar to the Spark RDD reduceByKey that can return a Spark DataFrame like the following (i.e., summing the values for each key)?

key,value
1,30
2,12
3,0

(I could convert the data to an RDD and do a reduceByKey operation, but is there a more idiomatic Spark DataFrame API way to do this?)
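For intuition, the aggregation being asked for is just a per-key sum. A minimal sketch in plain Python (no Spark involved; the data is the sample from the question):

```python
# Sum values per key, mimicking the desired DataFrame aggregation.
# Plain-Python sketch only -- not Spark code.
from collections import defaultdict

rows = [(1, 10), (2, 12), (3, 0), (1, 20)]

totals = defaultdict(int)
for key, value in rows:
    totals[key] += value

print(sorted(totals.items()))  # [(1, 30), (2, 12), (3, 0)]
```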

Answer 1:

If you don't care about column names you can use groupBy followed by sum:

df.groupBy($"key").sum("value")

Otherwise it is better to replace sum with agg, which lets you name the result column:

df.groupBy($"key").agg(sum($"value").alias("value"))

Finally, you can use raw SQL:

df.registerTempTable("df")
sqlContext.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")

(Note: in Spark 2.0+, registerTempTable is deprecated in favor of createOrReplaceTempView, and spark.sql replaces sqlContext.sql.)

See also DataFrame / Dataset groupBy behaviour/optimization



Answer 2:

How about this? I admit it still converts to an RDD and then back to a DataFrame.

df.rdd.map(lambda row: (row['key'], row['value'])).reduceByKey(lambda a, b: a + b).toDF(['key', 'value'])
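For reference, reduceByKey combines the values of each key pairwise using the supplied function. A plain-Python sketch of that semantics (no Spark required; the variable names are illustrative):

```python
# Pairwise per-key reduction, mimicking RDD.reduceByKey(lambda a, b: a + b).
# Plain-Python sketch only -- not Spark code.
from functools import reduce
from itertools import groupby
from operator import itemgetter

pairs = [(1, 10), (2, 12), (3, 0), (1, 20)]

# groupby requires equal keys to be adjacent, so sort by key first,
# then fold each group's values with the combine function.
reduced = [
    (key, reduce(lambda a, b: a + b, (v for _, v in group)))
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))
]

print(reduced)  # [(1, 30), (2, 12), (3, 0)]
```

Spark does the same thing in a distributed way, combining values within each partition before shuffling, which is why reduceByKey is usually preferred over groupByKey at the RDD level.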