spark - scala: value _2 is not a member of org.apache.spark.sql.Row

Posted 2019-06-19 14:01

I am trying to convert a DataFrame to an RDD and then perform the operations below to return tuples:

df.rdd.map { t=>
 (t._2 + "_" + t._3 , t)
}.take(5)

Then I got the error below. Anyone have any ideas? Thanks!

<console>:37: error: value _2 is not a member of org.apache.spark.sql.Row
               (t._2 + "_" + t._3 , t)
                  ^

2 Answers
叛逆 · 2019-06-19 14:42

When you convert a DataFrame to an RDD you get an RDD[Row], so when you use map your function receives a Row as its parameter. Therefore, you must use the Row methods to access its members (note that indices start at 0):

import org.apache.spark.sql.Row

df.rdd.map {
  row: Row => (row.getString(1) + "_" + row.getString(2), row)
}.take(5)

You can view more examples and check all methods available for Row objects in the Spark scaladoc.
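For reference, Row also supports typed access by field name via getAs[T]; here is a minimal sketch of the same operation (the column names col2 and col3 are assumptions for illustration, not taken from the original question):

df.rdd.map { row =>
  // getAs[T] looks up the field by name and casts it to the requested type;
  // it throws if the name does not exist or the value cannot be cast
  (row.getAs[String]("col2") + "_" + row.getAs[String]("col3"), row)
}.take(5)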

Edit: I don't know why you need this operation, but for concatenating String columns of a DataFrame you may consider the following option:

import org.apache.spark.sql.functions._
val newDF = df.withColumn("concat", concat(df("col2"), lit("_"), df("col3")))
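To see this end to end, here is a small self-contained example (the SparkSession named spark and the sample data are assumptions for illustration):

import org.apache.spark.sql.functions._
import spark.implicits._

// Toy DataFrame standing in for the original data
val df = Seq(("a", "x"), ("b", "y")).toDF("col2", "col3")
val newDF = df.withColumn("concat", concat(df("col2"), lit("_"), df("col3")))
newDF.show()
// +----+----+------+
// |col2|col3|concat|
// +----+----+------+
// |   a|   x|   a_x|
// |   b|   y|   b_y|
// +----+----+------+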
我想做一个坏孩纸 · 2019-06-19 14:52

You can access every element of a Row as if it were a List or an Array, that is, with (index); alternatively, you can use the get method.

For example:

df.rdd.map { t =>
  (t(1).toString + "_" + t(2).toString, t)
}.take(5)
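One caveat: t(index) returns Any, so calling .toString on it throws a NullPointerException when the column is null. A defensive variant (a sketch, under the same column-position assumption):

df.rdd.map { t =>
  // String.valueOf returns the literal string "null" for null values instead of throwing
  (String.valueOf(t(1)) + "_" + String.valueOf(t(2)), t)
}.take(5)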