Consider the following example:
JavaPairRDD<String, Row> R = input.textFile("test")
    .mapToPair(new PairFunction<String, String, Row>() {
        public Tuple2<String, Row> call(String arg0) throws Exception {
            String[] parts = arg0.split(" ");
            Row r = RowFactory.create(parts[0], parts[1]);
            return new Tuple2<String, Row>(r.get(0).toString(), r);
        }
    })
    .partitionBy(new HashPartitioner(20));
The code above creates an RDD named R, partitioned into 20 parts by hashing on the first column of a text file named "test".
Consider that the test.txt
file is of the following form:
...
valueA1 valueB1
valueA1 valueB2
valueA1 valueB3
valueA1 valueB4
...
In my context, I have a known value, e.g. valueA1, and I want to retrieve all the other values. It is trivial to do this by using the existing filter operation with the specified value. However, I would like to avoid this, since the filter operation will essentially be performed on the whole RDD.
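For reference, the trivial approach I want to avoid looks like this (a sketch against the R defined above; the variable name viaFilter is illustrative). It touches every one of the 20 partitions even though only one can contain the key:

```java
// Full-scan approach: filter evaluates the predicate on every
// element of every partition, not just the one partition that
// can actually hold "valueA1".
JavaPairRDD<String, Row> viaFilter = R.filter(
    new Function<Tuple2<String, Row>, Boolean>() {
        public Boolean call(Tuple2<String, Row> t) throws Exception {
            return t._1().equals("valueA1");
        }
    });
```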
Assuming that hash(valueA1) = 3, I would like to perform a given operation only on partition 3. More generally, I am interested in dropping/selecting specific partitions of an RDD and performing operations on them.
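As an aside, the partition index for a known key can be computed up front: HashPartitioner assigns a key to a non-negative modulo of its hashCode. A plain-Java sketch of that rule (the method name partitionFor is illustrative, not a Spark API):

```java
// Mirrors HashPartitioner.getPartition: key.hashCode() modulo
// numPartitions, shifted to be non-negative, so the result is
// always in [0, numPartitions).
static int partitionFor(Object key, int numPartitions) {
    int mod = key.hashCode() % numPartitions;
    return mod < 0 ? mod + numPartitions : mod;
}

// e.g. partitionFor("valueA1", 20) is the index of the partition
// that holds every ("valueA1", row) pair in R.
```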
From the Spark API it seems that this is not possible directly; is there a workaround to achieve the same thing?
For single keys you can use the lookup method. For an efficient lookup you'll need an RDD which is partitioned, for example using a HashPartitioner as in the question's code.
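A minimal sketch of the lookup call against the question's R (the variable name valuesForKey is illustrative):

```java
import java.util.List;

// Because R was built with partitionBy(new HashPartitioner(20)),
// lookup(key) can route straight to the single partition the key
// hashes to instead of scanning all 20, and returns the matching
// values to the driver as a List.
List<Row> valuesForKey = R.lookup("valueA1");
```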
If you want to simply filter the partitions containing specific keys, it can be done with mapPartitionsWithIndex:
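A sketch in the question's Java style, assuming R exposes its HashPartitioner via partitioner() (the variable names target and selected are illustrative):

```java
import java.util.Collections;
import java.util.Iterator;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function2;
import scala.Tuple2;

// Ask R's partitioner which partition the known key hashes to.
final int target = R.partitioner().get().getPartition("valueA1");

// Keep the iterator of the matching partition and return an empty
// iterator for every other partition, so only one partition's data
// flows into downstream operations.
JavaRDD<Tuple2<String, Row>> selected = R.mapPartitionsWithIndex(
    new Function2<Integer, Iterator<Tuple2<String, Row>>, Iterator<Tuple2<String, Row>>>() {
        public Iterator<Tuple2<String, Row>> call(Integer idx, Iterator<Tuple2<String, Row>> it) {
            return idx == target ? it : Collections.<Tuple2<String, Row>>emptyIterator();
        }
    },
    true); // preservesPartitioning = true, since no keys move
```

Passing true for preservesPartitioning matters here: the surviving elements stay in place, so Spark can keep using the existing HashPartitioner for later co-partitioned operations.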