I have a dataset of (user, product, review) tuples, and want to feed it into MLlib's ALS algorithm.
The algorithm needs users and products to be numbers, while mine are String usernames and String SKUs.
Right now, I get the distinct users and SKUs, then assign numeric IDs to them outside of Spark.
I was wondering whether there was a better way of doing this. The one approach I've thought of is to write a custom RDD that essentially enumerates 1 through n, then call zip on the two RDDs.
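The enumerate-and-zip idea can be sketched without Spark at all; this is a plain-Python illustration (the data and names are made up) of pairing the distinct string keys with a numeric range:

```python
# Sketch (plain Python, no Spark): build user -> Int and SKU -> Int
# mappings by enumerating the distinct string keys, mimicking what
# zipping an RDD of distinct keys against a range of IDs produces.
reviews = [("alice", "SKU-1", 5.0), ("bob", "SKU-2", 3.0), ("alice", "SKU-2", 4.0)]

user_ids = {user: i for i, user in enumerate(sorted({r[0] for r in reviews}))}
sku_ids = {sku: i for i, sku in enumerate(sorted({r[1] for r in reviews}))}

# Rewrite the reviews with numeric IDs, as ALS expects.
numeric = [(user_ids[u], sku_ids[s], rating) for u, s, rating in reviews]
```

Sorting before enumerating is optional; it just makes the assignment deterministic across runs.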
Another easy option, if you are using DataFrames and are just concerned about uniqueness, is the function MonotonicallyIncreasingID.
Edit: MonotonicallyIncreasingID was deprecated and removed as of Spark 2.0; it is now known as monotonically_increasing_id.

Starting with Spark 1.0 there are two methods you can use to solve this easily:
RDD.zipWithIndex is just like Seq.zipWithIndex, it adds contiguous (Long) numbers. This needs to count the elements in each partition first, so your input will be evaluated twice. Cache your input RDD if you want to use this.

RDD.zipWithUniqueId also gives you unique Long IDs, but they are not guaranteed to be contiguous. (They will only be contiguous if each partition has the same number of elements.) The upside is that this does not need to know anything about the input, so it will not cause double-evaluation.

For a similar example use case, I just hashed the string values. See http://blog.cloudera.com/blog/2014/03/why-apache-spark-is-a-crossover-hit-for-data-scientists/
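The difference between the two numbering schemes can be shown without a cluster. This plain-Python sketch treats a list of lists as an RDD's partitions and reproduces each method's ID assignment (zipWithUniqueId gives item i of partition k the ID i * n + k, where n is the number of partitions):

```python
# A pretend RDD split into three partitions of unequal size.
partitions = [["a", "b", "c"], ["d", "e"], ["f", "g", "h"]]
n = len(partitions)

# zipWithIndex: one contiguous global sequence. Spark needs the
# partition sizes up front, hence the extra pass over the input.
flat = [x for part in partitions for x in part]
zip_with_index = list(zip(flat, range(len(flat))))

# zipWithUniqueId: item i of partition k gets ID i * n + k.
# Unique with no counting pass, but gappy unless all partitions
# have the same number of elements.
zip_with_unique_id = [
    (x, i * n + k)
    for k, part in enumerate(partitions)
    for i, x in enumerate(part)
]
```

With these unequal partitions, zipWithUniqueId produces IDs like 0, 3, 6 in the first partition and skips 7 entirely, while zipWithIndex yields 0 through 7 with no gaps.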
It sounds like you're already doing something like this, although hashing can be easier to manage.
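A minimal sketch of the hashing approach, in plain Python. Python's built-in hash() is salted per process, so a digest-based hash is used here to keep IDs stable across runs; the function name is made up, and collisions are possible (though rare for modest catalogs):

```python
import hashlib

def string_to_id(s: str) -> int:
    """Map a string key to a stable, non-negative 31-bit int."""
    digest = hashlib.md5(s.encode("utf-8")).digest()
    # Take the first 4 bytes and clear the sign bit so the result
    # fits in a signed 32-bit Int, as ALS requires.
    return int.from_bytes(digest[:4], "big") & 0x7FFFFFFF

uid = string_to_id("alice")
```

The appeal is that no lookup table has to be built or joined; the cost is that the mapping is not reversible, so you need to keep the original strings around if you want to translate recommendations back.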
Matei suggested here an approach to emulating zipWithIndex on an RDD, which amounts to assigning IDs within each partition that are going to be globally unique: https://groups.google.com/forum/#!topic/spark-users/WxXvcn2gl1E

monotonically_increasing_id() appears to be the answer, but unfortunately it won't work for ALS since it produces 64-bit numbers and ALS expects 32-bit ones (see my comment below radek1st's answer for details).
The solution I found is to use zipWithIndex(), as mentioned in Darabos' answer. Here's how to implement it:

If you already have a single-column DataFrame with your distinct users called userids, you can create a lookup table (LUT) from it with zipWithIndex(), then join that LUT back onto your ratings to replace each username with its numeric ID.
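The original code snippets for this answer were lost; as a rough sketch of the idea in plain Python (the real answer used Spark, and the data here is invented): build the LUT from the distinct userids, then apply it to the ratings as a join would.

```python
# The distinct-users "DataFrame" and the LUT built from it.
userids = ["alice", "bob", "carol"]
user_lut = {name: idx for idx, name in enumerate(userids)}

# "Joining" the LUT onto the ratings replaces each username
# with its numeric ID, which is what ALS needs.
reviews = [("bob", "SKU-9", 4.0), ("alice", "SKU-7", 2.0)]
with_ids = [(user_lut[u], sku, r) for u, sku, r in reviews]
```

In actual Spark the dict lookup would be a join on the username column, so the LUT never has to fit on one machine.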
Do the same for items, obviously.
People have already recommended monotonically_increasing_id(), and mentioned the problem that it creates Longs, not Ints.
However, in my experience (caveat: Spark 1.6), if you use it on a single executor (repartition to 1 beforehand), there is no executor prefix used, and the number can be safely cast to Int. Obviously, you need to have fewer than Integer.MAX_VALUE rows.
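Why the single-partition trick works can be shown by modelling the ID layout: monotonically_increasing_id packs the partition ID into the upper 31 bits and a per-partition counter into the lower 33 bits. This sketch (the helper function is made up for illustration) shows that with partition 0 the prefix is zero, so values stay Int-sized, while any later partition immediately exceeds the 32-bit range:

```python
def monotonic_id(partition_id: int, row_number: int) -> int:
    """Model of monotonically_increasing_id's bit layout:
    upper 31 bits = partition ID, lower 33 bits = row counter."""
    return (partition_id << 33) | row_number

# Single partition (ID 0): plain 0, 1, 2, ... -- safe to cast to Int
# as long as there are fewer than 2**31 rows.
single_partition = [monotonic_id(0, i) for i in range(5)]

# First row of partition 1 already starts at 2**33, far past Int range.
multi_partition = monotonic_id(1, 0)
```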