get min and max from a specific column scala spark

Posted 2020-05-19 03:55

I would like to access the min and max of a specific column in my dataframe, but I don't have the column's header, just its index number. How should I do this using Scala?

Maybe something like this:

val q = scala.util.Random.nextInt(ncol) // pick a random column number
val col = df(q)                         // select the column by its index -- is this possible?
val minimum = col.min()

Sorry if this sounds like a silly question, but I couldn't find any info about it on SO :/

7 Answers
戒情不戒烟 · 2020-05-19 04:12

How about getting the column name from the metadata:

val selectedColumnName = df.columns(q) //pull the (q + 1)th column from the columns array
df.agg(min(selectedColumnName), max(selectedColumnName))
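
Putting the pieces together with the random index from the question, a minimal sketch (assuming df is your DataFrame and the chosen column is numeric):

import org.apache.spark.sql.functions.{min, max}
import scala.util.Random

val q = Random.nextInt(df.columns.length)  // random column index, as in the question
val selectedColumnName = df.columns(q)     // resolve the index to a column name

// agg returns a one-row DataFrame; head() gives that Row
val row = df.agg(min(selectedColumnName), max(selectedColumnName)).head()
val colMin = row.get(0)                    // use getInt/getDouble etc. if you know the type
val colMax = row.get(1)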
够拽才男人 · 2020-05-19 04:15

Here is a direct way to get the min and max from a dataframe with column names:

val df = Seq((1, 2), (3, 4), (5, 6)).toDF("A", "B")

df.show()
/*
+---+---+
|  A|  B|
+---+---+
|  1|  2|
|  3|  4|
|  5|  6|
+---+---+
*/

df.agg(min("A"), max("A")).show()
/*
+------+------+
|min(A)|max(A)|
+------+------+
|     1|     5|
+------+------+
*/

If you want the min and max values as separate variables, take the first Row of the agg() result above with head() and use Row.getInt(index) to read the values out of that Row.

val min_max = df.agg(min("A"), max("A")).head()
// min_max: org.apache.spark.sql.Row = [1,5]

val col_min = min_max.getInt(0)
// col_min: Int = 1

val col_max = min_max.getInt(1)
// col_max: Int = 5
我想做一个坏孩纸 · 2020-05-19 04:20

In Java, you have to reference org.apache.spark.sql.functions explicitly, since that class provides the min and max implementations:

import org.apache.spark.sql.functions;

datasetFreq.agg(functions.min("Frequency"), functions.max("Frequency")).show();
虎瘦雄心在 · 2020-05-19 04:20

Hope this helps.

import spark.implicits._                              // already in scope in spark-shell
import org.apache.spark.sql.functions.{min, max, sum}

val sales = sc.parallelize(List(
   ("West",  "Apple",  2.0, 10),
   ("West",  "Apple",  3.0, 15),
   ("West",  "Orange", 5.0, 15),
   ("South", "Orange", 3.0, 9),
   ("South", "Orange", 6.0, 18),
   ("East",  "Milk",   5.0, 5)))

val salesDf = sales.toDF("store", "product", "amount", "quantity")

salesDf.createOrReplaceTempView("sales") // registerTempTable is deprecated since Spark 2.0

val result = spark.sql(
  "SELECT store, product, SUM(amount), MIN(amount), MAX(amount), SUM(quantity) FROM sales GROUP BY store, product")

// OR

salesDf.groupBy("store", "product")
  .agg(min("amount"), max("amount"), sum("amount"), sum("quantity"))
  .show()


//output
    +-----+-------+-----------+-----------+-----------+-------------+
    |store|product|min(amount)|max(amount)|sum(amount)|sum(quantity)|
    +-----+-------+-----------+-----------+-----------+-------------+
    |South| Orange|        3.0|        6.0|        9.0|           27|
    | West| Orange|        5.0|        5.0|        5.0|           15|
    | East|   Milk|        5.0|        5.0|        5.0|            5|
    | West|  Apple|        2.0|        3.0|        5.0|           25|
    +-----+-------+-----------+-----------+-----------+-------------+
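
If you also need one group's aggregates as plain Scala values, here is a small sketch reusing salesDf from above (the min_amount/max_amount aliases are just illustrative names):

import org.apache.spark.sql.functions.{col, min, max}

val grouped = salesDf.groupBy("store", "product")
  .agg(min("amount").as("min_amount"), max("amount").as("max_amount"))

// Pick out a single group and read the aggregates off its Row
val westApple = grouped.filter(col("store") === "West" && col("product") === "Apple").head()
val minAmount = westApple.getAs[Double]("min_amount") // 2.0
val maxAmount = westApple.getAs[Double]("max_amount") // 3.0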
对你真心纯属浪费 · 2020-05-19 04:22

Using the Spark functions min and max, you can find the minimum or maximum value of any column in a DataFrame.

import org.apache.spark.sql.functions.{min, max}

val df = Seq((5, 2), (10, 1)).toDF("A", "B")

df.agg(max($"A"), min($"B")).show()
/*
+------+------+
|max(A)|min(B)|
+------+------+
|    10|     1|
+------+------+
*/
你好瞎i · 2020-05-19 04:29

You can use pattern matching when assigning the variables:

import org.apache.spark.sql.functions.{min, max}
import org.apache.spark.sql.Row

val Row(minValue: Double, maxValue: Double) = df.agg(min(q), max(q)).head

Here q is either a Column or a column name (String), and the pattern assumes the column's data type is Double.
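
For example, with a small one-column DataFrame (the column name "value" is just for illustration, and spark.implicits._ is assumed to be in scope for toDF):

// min, max and Row are imported above
val df = Seq(1.5, 3.0, 0.5).toDF("value")

// The Row extractor binds both aggregates in a single assignment
val Row(minValue: Double, maxValue: Double) = df.agg(min("value"), max("value")).head
// minValue: Double = 0.5
// maxValue: Double = 3.0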
