Using Custom Hadoop input format for processing binary file in Spark

Posted 2019-09-11 08:42

Question:

I have developed a Hadoop-based solution that processes a binary file using the classic Hadoop MR technique. The binary file is about 10 GB and is divided into 73 HDFS blocks, and the business logic, written as a map process, operates on each of these 73 blocks. We have developed a custom InputFormat and a custom RecordReader in Hadoop that return a key (IntWritable) and a value (BytesWritable) to the map function. The value is nothing but the contents of an HDFS block (binary data). The business logic knows how to read this data.
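For reference, a minimal sketch of what such a custom input format could look like (the one-record-per-split behaviour and the key scheme are assumptions based on the description above, not the actual implementation):

import org.apache.hadoop.io.{BytesWritable, IntWritable}
import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat, FileSplit}
import org.apache.hadoop.mapreduce.{InputSplit, RecordReader, TaskAttemptContext}

// Hypothetical sketch: each split (roughly one HDFS block) is exposed as a
// single (IntWritable, BytesWritable) record holding the raw bytes of that split.
class RandomAccessInputFormat extends FileInputFormat[IntWritable, BytesWritable] {

  override def createRecordReader(split: InputSplit, context: TaskAttemptContext)
      : RecordReader[IntWritable, BytesWritable] =
    new RecordReader[IntWritable, BytesWritable] {

      private var done = false
      private val key = new IntWritable()
      private val value = new BytesWritable()

      override def initialize(split: InputSplit, context: TaskAttemptContext): Unit = {
        val fileSplit = split.asInstanceOf[FileSplit]
        val fs = fileSplit.getPath.getFileSystem(context.getConfiguration)
        val in = fs.open(fileSplit.getPath)
        try {
          val buf = new Array[Byte](fileSplit.getLength.toInt)
          in.readFully(fileSplit.getStart, buf)                     // read only this split's bytes
          key.set((fileSplit.getStart / fileSplit.getLength).toInt) // split index (assumed key scheme)
          value.set(buf, 0, buf.length)
        } finally in.close()
      }

      override def nextKeyValue(): Boolean = { val more = !done; done = true; more }
      override def getCurrentKey: IntWritable = key
      override def getCurrentValue: BytesWritable = value
      override def getProgress: Float = if (done) 1.0f else 0.0f
      override def close(): Unit = ()
    }
}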

Now I would like to port this code to Spark. I am new to Spark and could run the simple examples (word count, the pi example), but I could not find a straightforward example for processing binary files in Spark. I see two possible solutions for this use case. The first is to avoid the custom input format and record reader entirely: find a method in Spark that creates an RDD from those HDFS blocks, then use a map-like method that feeds the block contents to the business logic. If that is not possible, I would like to reuse the custom input format and custom reader through APIs such as newAPIHadoopFile, HadoopRDD, etc. My problem: I do not know whether the first approach is possible or not. If it is, can anyone provide some pointers with examples? I tried the second approach but was highly unsuccessful.
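For the first approach, one candidate is Spark's built-in binary readers. The snippet below is only a sketch, assuming Spark 1.2 or later; the record length is a placeholder, and note that sc.binaryFiles does not split a single file into one partition per HDFS block:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("BinaryPort"))

// Whole-file variant: each file becomes one (path, PortableDataStream) pair.
val files = sc.binaryFiles("hdfs:///user/hadoop/myBin.dat")
val sizes = files.map { case (path, stream) =>
  val bytes = stream.toArray()   // materialise the file contents as a byte array
  // feed `bytes` to the business logic here
  bytes.length
}.collect()

// Fixed-length-record variant: only useful if the binary format consists of
// records of a known constant size (1024 here is a placeholder).
val records = sc.binaryRecords("hdfs:///user/hadoop/myBin.dat", 1024)

Because binaryFiles keeps each file in a single stream rather than giving one partition per block, the custom input format route is usually the better fit when the per-block processing must be preserved.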

Here is the code snippet I used for the second approach:

package org {

  import org.apache.hadoop.io.{BytesWritable, IntWritable}
  import org.apache.spark.{SparkConf, SparkContext}

  object Driver {

    def myFunc(key: IntWritable, content: BytesWritable): Int = {
      // Runs on the executors, so these printlns end up in the executor logs,
      // not in the driver's console output.
      println(key.get())
      println(content.getLength())
      1
    }

    def main(args: Array[String]): Unit = {
      // create a Spark context
      val conf = new SparkConf().setAppName("Dummy").setMaster("spark://<host>:7077")
      val sc = new SparkContext(conf)
      println(sc)

      // read the file through the custom input format
      val rd = sc.newAPIHadoopFile("hdfs:///user/hadoop/myBin.dat",
        classOf[RandomAccessInputFormat], classOf[IntWritable], classOf[BytesWritable])

      val count = rd.map(x => myFunc(x._1, x._2)).reduce(_ + _)
      println("The count is *****************************" + count)
    }
  }
}

Please note that the print statement in the main method prints 73, which is the number of blocks, whereas the print statements inside the map function print 0.

Can someone tell me where I am going wrong here? I think I am not using the API the right way, but I failed to find any documentation/usage examples.

Answer 1:

A couple of problems at a glance: you define myFunc but call func, and your myFunc has no return type, so you can't call collect(). If your myFunc truly doesn't have a return value, you can use foreach instead of map.

collect() pulls the data in an RDD back to the driver so that you can work with it locally (on the driver).
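A minimal, self-contained sketch of that distinction (the toy RDD below is illustrative, not taken from the question):

import org.apache.spark.{SparkConf, SparkContext}

object ForeachVsMap {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ForeachVsMap"))
    val rdd = sc.parallelize(1 to 10)

    // Side effects only: runs on the executors, nothing comes back to the driver.
    rdd.foreach(x => println(x))

    // Returning results: map yields one value per record, which can be
    // aggregated on the cluster (reduce) or pulled back to the driver (collect).
    val doubled = rdd.map(_ * 2)
    val total   = doubled.reduce(_ + _)   // computed on the cluster
    val local   = doubled.collect()       // Array[Int] on the driver
    println(s"total=$total, local size=${local.length}")

    sc.stop()
  }
}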



Answer 2:

I have made some progress on this issue. I am now using the function below, which does the job:

// `job` is a Hadoop Job whose configuration carries the input path, and
// myfuncPart processes one input split at a time (both defined elsewhere).
val hRDD = new NewHadoopRDD(sc,
  classOf[RandomAccessInputFormat],
  classOf[IntWritable],
  classOf[BytesWritable],
  job.getConfiguration()
)

val count = hRDD.mapPartitionsWithInputSplit { (split, iter) =>
  myfuncPart(split, iter)
}.collect()
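For completeness, a sketch of how job and myfuncPart could be wired up (the body of myfuncPart is an assumption; only its signature is constrained by mapPartitionsWithInputSplit):

import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{BytesWritable, IntWritable}
import org.apache.hadoop.mapreduce.{InputSplit, Job}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

// Hadoop Job used only to build the Configuration handed to NewHadoopRDD.
val job = Job.getInstance()
FileInputFormat.addInputPath(job, new Path("hdfs:///user/hadoop/myBin.dat"))

// Processes every record of one split and must return an Iterator.
def myfuncPart(split: InputSplit,
               iter: Iterator[(IntWritable, BytesWritable)]): Iterator[Int] =
  iter.map { case (key, value) =>
    // business logic on the raw block bytes goes here
    value.getLength
  }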

However, I landed up with another error, the details of which I have posted here: Issue in accessing HDFS file inside spark map function

15/10/30 11:11:39 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 40.221.94.235): java.io.IOException: No FileSystem for scheme: spark
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)