How to read multi-line elements in Spark?

Posted 2019-02-24 00:50

Question:

When you read a file in Spark with sc.textFile, you get an RDD whose elements are individual lines. However, I want each element to consist of N lines. I can't use a custom delimiter either, because the file has none. So how can I make Spark give me multi-line elements?

I'm interested in doing this with the NLineInputFormat class. Is that possible in Spark? I can find examples of it for MapReduce, but I have no idea how that translates to Spark.

Answer 1:

Yes, if you are reading the files through Hadoop's input formats. You should be able to do it like this:

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat

val records = sc.newAPIHadoopRDD(
  hadoopConf,
  classOf[NLineInputFormat],
  classOf[LongWritable],
  classOf[Text])

Here's the API doc.
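One caveat worth knowing: with the new-API NLineInputFormat, each *split* (and hence each Spark partition) contains N lines, but each record within a split is still a single line. To end up with N-line elements, you can join the lines of each partition. A minimal sketch, assuming a Spark context `sc` is available and that the input path and N=3 are illustrative values:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("nline-example"))

// Configure the input path and how many lines each split should hold.
val hadoopConf = new Configuration(sc.hadoopConfiguration)
hadoopConf.set("mapreduce.input.fileinputformat.inputdir",
  "hdfs:///path/to/input")  // illustrative path
hadoopConf.setInt("mapreduce.input.lineinputformat.linespermap", 3)  // N

val records = sc.newAPIHadoopRDD(
  hadoopConf,
  classOf[NLineInputFormat],
  classOf[LongWritable],
  classOf[Text])

// Keys are byte offsets; values are the lines. Text objects are reused by
// Hadoop, so convert them to Strings before collecting or caching.
// Joining all lines of a partition yields one N-line element per split.
val blocks = records.mapPartitions { iter =>
  Iterator(iter.map { case (_, text) => text.toString }.mkString("\n"))
}
```

The per-partition `mkString` step is what actually produces the multi-line elements the question asks for; if you only need N lines per task (not per element), the `newAPIHadoopRDD` call alone is enough.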