I am new to Apache Spark, and I know that the core data structure is the RDD. Now I am writing some applications that require element positional information. For example, after converting an ArrayList into a (Java)RDD, for each integer in the RDD I need to know its (global) array subscript. Is it possible to do this?
As far as I know, there is a take(int) function for RDD, so I believe the positional information is still maintained in the RDD.
Essentially, RDD's zipWithIndex() method seems to do this, but it won't necessarily preserve the ordering of the original data the RDD was created from. At the very least you will get a stable ordering.
val orig: RDD[String] = ...
val indexed: RDD[(String, Long)] = orig.zipWithIndex()
The reason you're unlikely to find something that preserves the order of the original data is buried in the API doc for zipWithIndex():
"Zips this RDD with its element indices. The ordering is first based
on the partition index and then the ordering of items within each
partition. So the first item in the first partition gets index 0, and
the last item in the last partition receives the largest index. This
is similar to Scala's zipWithIndex but it uses Long instead of Int as
the index type. This method needs to trigger a spark job when this RDD
contains more than one partitions."
So it looks like the original order is discarded. If preserving the original order is important to you, you need to attach the index before you create the RDD.
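For example, here is a minimal sketch of indexing up front, assuming the source data fits in a driver-side collection (the names data and indexed are just placeholders):
import org.apache.spark.rdd.RDD
// Zip with the original subscripts on the driver, then parallelize the
// already-indexed pairs; the index travels with each element from then on.
val data: Seq[String] = Seq("a", "b", "c", "d")
val indexed: RDD[(String, Long)] =
  sc.parallelize(data.zipWithIndex.map { case (v, i) => (v, i.toLong) })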
I believe that in most cases zipWithIndex() will do the trick, and it will preserve the order. Read the documentation again: my understanding is that it means exactly that the order within the RDD is kept.
scala> val r1 = sc.parallelize(List("a", "b", "c", "d", "e", "f", "g"), 3)
scala> val r2 = r1.zipWithIndex
scala> r2.foreach(println)
(c,2)
(d,3)
(e,4)
(f,5)
(g,6)
(a,0)
(b,1)
The example above confirms it. The RDD has 3 partitions; a gets index 0, b gets index 1, and so on. (The printed lines appear out of order because foreach(println) runs per partition, but the indices themselves follow the original element order.)
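As a follow-up sketch (not part of the original example): once the indices are attached, you can fetch an element by its global subscript by keying the pairs on the index; lookup() is a standard PairRDD operation.
scala> val byIndex = r2.map(_.swap)   // flip to (index, value) pairs
scala> byIndex.lookup(3L)             // returns Seq("d"), the element at global subscript 3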