This question already has an answer here:
Concatenating datasets of different RDDs in Apache spark using scala (2 answers)
Help, I have two RDDs and I want to merge them into one RDD. This is my code:
val us1 = sc.parallelize(Array(("3L"), ("7L"),("5L"),("2L")))
val us2 = sc.parallelize(Array(("432L"), ("7123L"),("513L"),("1312L")))
Just use union:
val merged = us1.union(us2)
Documentation is in the RDD API scaladoc (the relevant entries are quoted at the bottom of this page).
A shortcut in Scala is:
val merged = us1 ++ us2
You need RDD.union.
These don't join on a key; union doesn't really do anything itself beyond concatenating the two RDDs, so it is low overhead. Note that the combined RDD will have all the partitions of the original RDDs, so you may want to coalesce after the union (see the sketch after the example below).
val x = sc.parallelize(Seq( (1, 3), (2, 4) ))
val y = sc.parallelize(Seq( (3, 5), (4, 7) ))
val z = x.union(y)
z.collect
res0: Array[(Int, Int)] = Array((1,3), (2,4), (3,5), (4,7))
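For example, here is a minimal spark-shell sketch of that point; the explicit slice counts are my own illustration, not from the original answer:
val x = sc.parallelize(Seq((1, 3), (2, 4)), numSlices = 4)
val y = sc.parallelize(Seq((3, 5), (4, 7)), numSlices = 4)
val z = x.union(y)
z.getNumPartitions     // 8 -- the 4 partitions of x plus the 4 of y
val w = z.coalesce(4)  // shrink back to 4 partitions without a shuffle
w.getNumPartitions     // 4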
From the RDD API:

def ++(other: RDD[T]): RDD[T]
    Return the union of this RDD and another one.

def union(other: RDD[T]): RDD[T]
    Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).
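To illustrate that last note, a small sketch of my own (not from the scaladoc); the element order after distinct() is not guaranteed:
val a = sc.parallelize(Seq(1, 2, 3))
val b = sc.parallelize(Seq(3, 4))
a.union(b).collect             // Array(1, 2, 3, 3, 4) -- the duplicate 3 is kept
a.union(b).distinct().collect  // Array(1, 2, 3, 4), order may vary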