Apache Spark ALS collaborative filtering results.

Posted 2019-03-20 20:07

I wanted to try out Spark for collaborative filtering using MLlib, as explained in this tutorial: https://databricks-training.s3.amazonaws.com/movie-recommendation-with-mllib.html The algorithm, which does matrix factorization, is based on the paper "Collaborative Filtering for Implicit Feedback Datasets".

Everything is up and running using the 10 million MovieLens data set. The data set is split into 80% training, 10% test, and 10% validation.

  • RMSE Baseline: 1.060505464225402
  • RMSE (train) = 0.7697248827452756
  • RMSE (validation) = 0.8057135933012889 for the model trained with rank = 24, lambda = 0.1, and Iterations = 10.
  • The best model improves the baseline by 23.94%.

These values are similar to the tutorial's, although obtained with different training parameters.
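For context, here is a minimal sketch of the explicit ALS call and RMSE computation behind these numbers, assuming the ratings have already been parsed into RDD[Rating] splits named training and validation (those names and the computeRmse helper are illustrative, not the tutorial's exact code):

    import org.apache.spark.mllib.recommendation.{ALS, MatrixFactorizationModel, Rating}
    import org.apache.spark.rdd.RDD

    // Root-mean-square error of the model's predictions on a held-out rating set
    def computeRmse(model: MatrixFactorizationModel, data: RDD[Rating]): Double = {
      val predictions = model.predict(data.map(r => (r.user, r.product)))
      val predsAndRatings = predictions
        .map(p => ((p.user, p.product), p.rating))
        .join(data.map(r => ((r.user, r.product), r.rating)))
        .values
      math.sqrt(predsAndRatings.map { case (pred, actual) =>
        (pred - actual) * (pred - actual)
      }.mean())
    }

    // Parameters from the run above: rank = 24, 10 iterations, lambda = 0.1
    val model = ALS.train(training, 24, 10, 0.1)
    println("RMSE (validation) = " + computeRmse(model, validation))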

I tried running the algorithm several times and always got recommendations that don't make any sense to me. Even when rating only kids' movies, I get the following results:

For ratings:

  • personal rating: Toy Story (1995) rating: 4.0
  • personal rating: Jungle Book, The (1994) rating: 5.0
  • personal rating: Lion King, The (1994) rating: 5.0
  • personal rating: Mary Poppins (1964) rating: 4.0
  • personal rating: Alice in Wonderland (1951) rating: 5.0

Results:

Movies recommended for you:

  1. Life of Oharu, The (Saikaku ichidai onna) (1952)
  2. More (1998)
  3. Who's Singin' Over There? (a.k.a. Who Sings Over There) (Ko to tamo peva) (1980)
  4. Sundays and Cybele (Dimanches de Ville d'Avray, Les) (1962)
  5. Blue Light, The (Das Blaue Licht) (1932)
  6. Times of Harvey Milk, The (1984)
  7. Please Vote for Me (2007)
  8. Man Who Planted Trees, The (Homme qui plantait des arbres, L') (1987)
  9. Shawshank Redemption, The (1994)
  10. Only Yesterday (Omohide poro poro) (1991)

Except for Only Yesterday, these don't seem to make any sense.

If anyone out there knows how to interpret those results or how to get better ones, I would really appreciate you sharing your knowledge.

Best regards

EDIT:

As suggested, I trained another model with more factors:

  • Baseline error: 1.0587417035872992
  • RMSE (train) = 0.7679883378412548
  • RMSE (validation) = 0.8070339258049574 for the model trained with rank = 100, lambda = 0.1, and numIter = 10.

And different personal ratings:

  • personal rating: Star Wars: Episode VI - Return of the Jedi (1983) rating: 5.0
  • personal rating: Mission: Impossible (1996) rating: 4.0
  • personal rating: Die Hard: With a Vengeance (1995) rating: 4.0
  • personal rating: Batman Forever (1995) rating: 5.0
  • personal rating: Men in Black (1997) rating: 4.0
  • personal rating: Terminator 2: Judgment Day (1991) rating: 4.0
  • personal rating: Top Gun (1986) rating: 4.0
  • personal rating: Star Wars: Episode V - The Empire Strikes Back (1980) rating: 3.0
  • personal rating: Alien (1979) rating: 4.0

The recommended movies are:

Movies recommended for you:

  1. Carmen (1983)
  2. Silent Light (Stellet licht) (2007)
  3. Jesus (1979)
  4. Life of Oharu, The (Saikaku ichidai onna) (1952)
  5. Heart of America (2003)
  6. For the Bible Tells Me So (2007)
  7. More (1998)
  8. Legend of Leigh Bowery, The (2002)
  9. Funeral, The (Ososhiki) (1984)
  10. Longshots, The (2008)

Not one useful result.

EDIT2: Using the implicit feedback method, I get much better results! With the same action movies as above, the recommendations are:

Movies recommended for you:

  1. Star Wars: Episode IV - A New Hope (a.k.a. Star Wars) (1977)
  2. Terminator, The (1984)
  3. Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)
  4. Die Hard (1988)
  5. Godfather, The (1972)
  6. Aliens (1986)
  7. Rock, The (1996)
  8. Independence Day (a.k.a. ID4) (1996)
  9. Star Trek II: The Wrath of Khan (1982)
  10. GoldenEye (1995)

That's more like what I expected! The question is why the explicit version is so bad.
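For reference, the implicit-feedback switch described in EDIT2 amounts to calling ALS.trainImplicit instead of ALS.train. A minimal sketch, again assuming a training RDD[Rating] and using an illustrative user id and alpha:

    import org.apache.spark.mllib.recommendation.ALS

    val rank = 24
    val numIterations = 10
    val lambda = 0.1
    val alpha = 1.0  // confidence weight on observed interactions (implicit feedback only)

    // trainImplicit treats the rating field as a strength-of-preference signal, not a score
    val implicitModel = ALS.trainImplicit(training, rank, numIterations, lambda, alpha)

    // Top-10 recommendations for a (hypothetical) user id 0
    implicitModel.recommendProducts(0, 10)
      .foreach(r => println(r.product + " score: " + r.rating))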

4 Answers
等我变得足够好 · answered 2019-03-20 20:19

I have tried the same dataset, following this Spark tutorial, and I get the same (subjectively bad) results.

However, using a simpler method instead of matrix factorization - for instance one based on Pearson correlation as a similarity measure - I get much, much better results. With your input preferences and the same input ratings file, I would mostly get kids' movies.

Unless you really need the factorization (which does have a lot of advantages), I would suggest using another recommendation method.
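As an illustration of such a similarity measure (not the answerer's actual code), here is a Pearson correlation between two items computed over the users who rated both, with ratingsA and ratingsB as assumed maps from user id to rating:

    // Pearson correlation between two items over the users who rated both
    def pearson(ratingsA: Map[Int, Double], ratingsB: Map[Int, Double]): Double = {
      val common = ratingsA.keySet.intersect(ratingsB.keySet).toSeq
      if (common.size < 2) 0.0
      else {
        val a = common.map(ratingsA)
        val b = common.map(ratingsB)
        val meanA = a.sum / a.size
        val meanB = b.sum / b.size
        val cov  = a.zip(b).map { case (x, y) => (x - meanA) * (y - meanB) }.sum
        val varA = a.map(x => (x - meanA) * (x - meanA)).sum
        val varB = b.map(y => (y - meanB) * (y - meanB)).sum
        if (varA == 0.0 || varB == 0.0) 0.0 else cov / math.sqrt(varA * varB)
      }
    }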

做自己的国王 · answered 2019-03-20 20:24

I second what Vlad said: try correlation or Jaccard. That is, ignore the rating numbers and just look at the binary question of whether two movies appear together in a user's preference list or not. This was a game-changer for me when I was building my first recommender: http://tdunning.blogspot.com/2008/03/surprise-and-coincidence.html
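A minimal sketch of that binary view, comparing two movies by the Jaccard similarity of the sets of users who rated them (the set arguments are assumed inputs):

    // Jaccard similarity: |A intersect B| / |A union B| over the users who rated each movie
    def jaccard(usersA: Set[Int], usersB: Set[Int]): Double = {
      val union = usersA.union(usersB).size
      if (union == 0) 0.0
      else usersA.intersect(usersB).size.toDouble / union
    }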

Good luck

Summer. ? 凉城 · answered 2019-03-20 20:29

Collaborative filtering just gives you items that people with the same taste as you really like. If you rate only kids' movies, it doesn't mean that you will get only kids' movies recommended. It just means that people who rated Toy Story, Jungle Book, Lion King, etc. the way you did also like Life of Oharu, More, Who's Singin' Over There?, etc. There is a good animation of this on the Wikipedia page for collaborative filtering (CF).

I didn't read the link you gave, but if you want to stay with collaborative filtering, one thing you can change is the similarity measure you are using.

If you want recommendations based on your own taste, you might try a latent factor model like matrix factorization. There, the latent factors may discover that a movie can be described by features characterizing the rated objects: a movie might be comedy, children's, horror, and so on (you never really know what the latent factors are, by the way). If you only rate kids' movies, you might then get other kids' movies as recommendations.
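In the Spark model from the question, those latent factors are the learned item-factor vectors. A minimal sketch of peeking at them, assuming model is the trained MatrixFactorizationModel and someMovieId is a MovieLens movie id (illustrative name):

    // One factor vector per movie, with one entry per latent dimension (the model's rank).
    // The dimensions are not named, but movies the model considers similar get similar vectors.
    val factors: Array[Double] = model.productFeatures.lookup(someMovieId).head

    // Cosine similarity between two factor vectors, e.g. to compare two movies
    def cosine(a: Array[Double], b: Array[Double]): Double = {
      val dot = a.zip(b).map { case (x, y) => x * y }.sum
      val na  = math.sqrt(a.map(x => x * x).sum)
      val nb  = math.sqrt(b.map(y => y * y).sum)
      dot / (na * nb)
    }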

Hope it helps.

乱世女痞 · answered 2019-03-20 20:32

Note that the code you are running does not use implicit feedback, so it is not quite the algorithm from the paper you refer to. Just make sure you are not using ALS.trainImplicit. You may need different lambda and rank values. An RMSE of 0.88 is "OK" for this data set; I am not sure whether the example's values are optimal or just what the toy test happened to produce. You are using yet another value here. Maybe it's just not optimal yet.
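A minimal sketch of that kind of tuning, reusing the training/validation splits and the computeRmse helper assumed in the sketch near the top, with an illustrative parameter grid:

    // Try several (rank, lambda) pairs and keep the model with the lowest validation RMSE
    val ranks   = Seq(8, 12, 24, 50)
    val lambdas = Seq(0.01, 0.1, 1.0)
    val numIter = 10

    val (bestModel, bestRank, bestLambda, bestRmse) =
      (for (rank <- ranks; lambda <- lambdas) yield {
        val m = ALS.train(training, rank, numIter, lambda)
        (m, rank, lambda, computeRmse(m, validation))
      }).minBy(_._4)

    println("Best model: rank=" + bestRank + ", lambda=" + bestLambda +
            ", validation RMSE = " + bestRmse)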

It could even be something like bugs in the ALS implementation that have since been fixed. Try comparing with another implementation of ALS if you can.

I always try to resist rationalizing recommendations, since our brains inevitably find some explanation even for random ones. But, hey, I can say that you did not get action, horror, crime drama, or thrillers here. I find that kids' movies go hand in hand with a taste for arty movies: the kind of person who filled out their tastes on MovieLens back in the day and rated kids' movies was not actually a kid but a parent, and software-engineer types old enough to have kids do tend to watch these sorts of foreign films.
