Parsing PDF files in Hadoop MapReduce

Asked 2020-07-14 09:38

I have to parse PDF files that are in HDFS in a MapReduce program in Hadoop. So I get the PDF file from HDFS as input splits, and it has to be parsed and sent to the Mapper class. For implementing this InputFormat I had gone through this link. How can these input splits be parsed and converted into text format?

2 Answers
对你真心纯属浪费
Answered 2020-07-14 10:26

It depends on your splits. I think (could be wrong) that you'll need each PDF as a whole in order to parse it. There are Java libraries to do this, and Google knows where they are.

Given that, you'll need an approach where you have the file as a whole when you're ready to parse it. Assuming you want to do that in the mapper, you'd need a reader that hands whole files to the mapper. You could write your own reader to do this, or perhaps one already exists. Such a reader would scan the directory of PDFs and pass the name of each file into the mapper as the key, with the file's contents as the value.
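For the parsing step itself, here is a minimal sketch of such a mapper. It assumes a whole-file reader (one is sketched in the answer below) that delivers the file name as a Text key and the raw PDF bytes as a BytesWritable value, and it uses Apache PDFBox 2.x as one example of the Java libraries mentioned above:

```java
import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

// Assumes an upstream whole-file reader that emits
// (file name, raw PDF bytes) pairs, as described above.
public class PdfTextMapper extends Mapper<Text, BytesWritable, Text, Text> {

    @Override
    protected void map(Text fileName, BytesWritable pdfBytes, Context context)
            throws IOException, InterruptedException {
        // PDFBox needs the complete document, which is why the file
        // must arrive whole rather than chopped into block-sized splits.
        try (PDDocument document = PDDocument.load(pdfBytes.copyBytes())) {
            String text = new PDFTextStripper().getText(document);
            context.write(fileName, new Text(text));
        }
    }
}
```

PDDocument.load() is the PDFBox 2.x entry point; in PDFBox 3.x the equivalent is Loader.loadPDF().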

我欲成王,谁敢阻挡
Answered 2020-07-14 10:29

Processing PDF files in Hadoop can be done by extending the FileInputFormat class; call the subclass WholeFileInputFormat. In WholeFileInputFormat you override the getRecordReader() method (in the newer org.apache.hadoop.mapreduce API the equivalent hook is createRecordReader()) and make isSplitable() return false, so that each PDF is received as a single, unsplit input split. Each of these whole-file splits can then be parsed to extract the text. This link gives a clear example of how to extend FileInputFormat.
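Here is a minimal sketch of that idea against the newer org.apache.hadoop.mapreduce API; the class layout follows the common whole-file pattern rather than the linked example verbatim, and it emits the file name as the key and the raw bytes as the value, matching the mapper sketched in the other answer:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class WholeFileInputFormat extends FileInputFormat<Text, BytesWritable> {

    // Never split a PDF: each file becomes exactly one input split.
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }

    @Override
    public RecordReader<Text, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new WholeFileRecordReader();
    }

    // Reads the entire file into one (file name, raw bytes) record.
    public static class WholeFileRecordReader
            extends RecordReader<Text, BytesWritable> {

        private FileSplit split;
        private TaskAttemptContext context;
        private final Text key = new Text();
        private final BytesWritable value = new BytesWritable();
        private boolean processed = false;

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context) {
            this.split = (FileSplit) split;
            this.context = context;
        }

        @Override
        public boolean nextKeyValue() throws IOException {
            if (processed) {
                return false;
            }
            Path path = split.getPath();
            // The whole file fits in one buffer (int cast limits this
            // to files under 2 GB, which is fine for typical PDFs).
            byte[] contents = new byte[(int) split.getLength()];
            try (FSDataInputStream in =
                    path.getFileSystem(context.getConfiguration()).open(path)) {
                IOUtils.readFully(in, contents, 0, contents.length);
            }
            key.set(path.getName());
            value.set(contents, 0, contents.length);
            processed = true;
            return true;
        }

        @Override
        public Text getCurrentKey() { return key; }

        @Override
        public BytesWritable getCurrentValue() { return value; }

        @Override
        public float getProgress() { return processed ? 1.0f : 0.0f; }

        @Override
        public void close() { }
    }
}
```

Marking the format non-splitable is what guarantees the mapper's PDF parser sees a complete document rather than an arbitrary byte range from the middle of a file.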
