Why does the Spark ML ALS algorithm print RMSE = NaN?

Posted 2019-04-12 09:10

Question:

I use ALS to predict ratings; this is my code:

import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.ml.evaluation.RegressionEvaluator

// Train an ALS model on the training split
val als = new ALS()
  .setMaxIter(5)
  .setRegParam(0.01)
  .setUserCol("user_id")
  .setItemCol("business_id")
  .setRatingCol("stars")
val model = als.fit(training)

// Evaluate the model by computing the RMSE on the test data
val predictions = model.transform(testing)
predictions.sort("user_id").show(1000)
val evaluator = new RegressionEvaluator()
  .setMetricName("rmse")
  .setLabelCol("stars")
  .setPredictionCol("prediction")
val rmse = evaluator.evaluate(predictions)
println(s"Root-mean-square error = $rmse")

But I get some negative scores and the RMSE is NaN:

+-------+-----------+---------+------------+
|user_id|business_id|    stars|  prediction|
+-------+-----------+---------+------------+
|      0|       2175|      4.0|   4.0388923|
|      0|       5753|      3.0|   2.6875196|
|      0|       9199|      4.0|   4.1753435|
|      0|      16416|      2.0|   -2.710618|
|      0|       6063|      3.0|         NaN|
|      0|      23076|      2.0|  -0.8930751|

Root-mean-square error = NaN

How can I get a good result?

Answer 1:

Negative values don't matter, since RMSE squares the values first. More likely, some of your prediction values are NaN. You could drop those rows:

predictions.na.drop(Seq("prediction"))

That can be a bit misleading, though; alternatively, you could fill those values with your lowest/highest/average rating.
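
For example, a minimal sketch of the fill approach, assuming the mean of the training set's stars column is used as the neutral fill value:

import org.apache.spark.sql.functions.avg

// Sketch: fill NaN predictions with the mean training rating
// (assumption: the average of "stars" is an acceptable neutral value)
val meanRating = training.agg(avg("stars")).first().getDouble(0)
val filledPredictions = predictions.na.fill(meanRating, Seq("prediction"))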

I'd also recommend clamping out-of-range predictions: round any x < min_rating up to the lowest rating and any x > max_rating down to the highest rating, which would improve your RMSE; see the sketch below.
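
A minimal sketch of that clamping, assuming a 1.0-5.0 star scale (adjust the bounds to your data), applied after NaN predictions have been dropped or filled:

import org.apache.spark.sql.functions.{col, greatest, least, lit}

// Sketch: clamp predictions into the assumed [1.0, 5.0] rating range
val clampedPredictions = predictions.withColumn(
  "prediction",
  least(greatest(col("prediction"), lit(1.0)), lit(5.0))
)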

EDIT:

Some extra info here: https://issues.apache.org/jira/browse/SPARK-14489



Answer 2:

Since Spark version 2.2.0 you can set the coldStartStrategy parameter to drop in order to drop any rows in the DataFrame of predictions that contain NaN values. The evaluation metric will then be computed over the non-NaN data and will be valid.

model.setColdStartStrategy("drop")
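
Equivalently, a sketch of setting it on the ALS estimator from the question before fitting, so the fitted model already drops NaN predictions at transform time:

// Sketch: configure the cold-start strategy on the estimator itself
val als = new ALS()
  .setMaxIter(5)
  .setRegParam(0.01)
  .setUserCol("user_id")
  .setItemCol("business_id")
  .setRatingCol("stars")
  .setColdStartStrategy("drop")
val model = als.fit(training)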


Answer 3:

A small correction will solve this issue:

predictions.na.drop()
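
For completeness, a sketch of feeding the dropped DataFrame into the evaluator from the question:

// Compute RMSE only over rows that actually have a prediction
val rmse = evaluator.evaluate(predictions.na.drop())
println(s"Root-mean-square error = $rmse")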