Is there any way I can convert a pair RDD back to a regular RDD?
Suppose I have a local CSV file, and I first load it as a regular RDD:
rdd = sc.textFile("$path/$csv")
Then I create a pair RDD (i.e. the key is the string before the "," and the value is the string after it):
pairRDD = rdd.map(lambda x: (x.split(",")[0], x.split(",")[1]))
I store the pair RDD using saveAsTextFile():
pairRDD.saveAsTextFile("$savePath")
However, as I found, the stored file contains some unnecessary characters, such as "u'", "(" and ")", because PySpark simply writes the string representation of each key-value tuple, so each saved line looks something like (u'key', u'value') instead of key,value. I was wondering if I can convert the pair RDD back to a regular RDD, so that the saved file won't contain "u'", "(" or ")". Or is there any other storage method I can use to get rid of the unnecessary characters?
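For what it's worth, this is the kind of thing I had in mind by "converting back" (just a sketch, assuming the values never contain a comma themselves; kv and plainRDD are names I made up for illustration):

# Join each (key, value) tuple back into a single "key,value" string,
# so saveAsTextFile writes plain text instead of the tuple representation.
plainRDD = pairRDD.map(lambda kv: kv[0] + "," + kv[1])
plainRDD.saveAsTextFile("$savePath")

Is mapping back to strings like this the recommended way, or is there something built in?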