Spark: get top N highest score results for each item_id1

Posted 2019-06-12 07:26

I have a DataFrame of the following format:

item_id1: Long, item_id2: Long, similarity_score: Double

What I'm trying to do is to get top N highest similarity_score records for each item_id1. So, for example:

1 2 0.5
1 3 0.4
1 4 0.3
2 1 0.5
2 3 0.4
2 4 0.3

With top 2 similar items would give:

1 2 0.5
1 3 0.4
2 1 0.5
2 3 0.4

My rough idea is to group the records by item_id1, sort each group by score in descending order, and then limit each group to N results. But I'm stuck on how to implement this in Spark Scala.

Thank you.

1 Answer
看我几分像从前
#2 · Answered 2019-06-12 08:20

I would suggest using window functions for this:

 import org.apache.spark.sql.expressions.Window
 import org.apache.spark.sql.functions.row_number

 df
  .withColumn("rnk", row_number().over(
    Window.partitionBy($"item_id1").orderBy($"similarity_score".desc)))
  .where($"rnk" <= 2)

Alternatively, you could use dense_rank or rank instead of row_number, depending on how you want to handle ties in similarity_score: row_number returns at most N rows per group, while rank and dense_rank keep every row that ties for a kept position.
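As a minimal end-to-end sketch using the sample data from the question (assuming a local SparkSession; the DataFrame and column names follow the question, the window alias `w` is my own):

 import org.apache.spark.sql.SparkSession
 import org.apache.spark.sql.expressions.Window
 import org.apache.spark.sql.functions.row_number

 val spark = SparkSession.builder().master("local[*]").appName("topN").getOrCreate()
 import spark.implicits._

 val df = Seq(
   (1L, 2L, 0.5), (1L, 3L, 0.4), (1L, 4L, 0.3),
   (2L, 1L, 0.5), (2L, 3L, 0.4), (2L, 4L, 0.3)
 ).toDF("item_id1", "item_id2", "similarity_score")

 // Number rows within each item_id1 group by descending score,
 // then keep the top 2 per group and drop the helper column.
 val w = Window.partitionBy($"item_id1").orderBy($"similarity_score".desc)

 df.withColumn("rnk", row_number().over(w))
   .where($"rnk" <= 2)
   .drop("rnk")
   .show()

On this data the output matches the expected result in the question. Note that with rank() in place of row_number(), tied scores would all be kept, so a group could return more than 2 rows.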
