I have to parse PDF files that are in HDFS in a MapReduce program in Hadoop. So I get the PDF file from HDFS as input splits, and it has to be parsed and sent to the Mapper class. For implementing this InputFormat I went through this link. How can these input splits be parsed and converted into text format?
It depends on your splits. I think (could be wrong) that you'll need each PDF as a whole in order to parse it. There are Java libraries to do this, and Google knows where they are.
Given that, you'll need to use an approach where you have the file as a whole when you're ready to parse it. Assuming you'd want to do that in the mapper, you'd need a reader that would hand whole files to the mapper. You could write your own reader to do this, or perhaps there's one already out there. You could possibly build a reader that scans the directory of PDFs and passes the name of each file as the key into the mapper and the contents as the value.
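As a rough sketch of that idea, assuming a whole-file input format (such as the one described in the next answer) that hands the mapper the file name as the key and the raw PDF bytes as the value, and assuming Apache PDFBox as the PDF-parsing library; the class name PdfTextMapper is just illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

// Receives one whole PDF per call: the file name as the key and the raw bytes
// as the value, and emits (file name, extracted text).
public class PdfTextMapper extends Mapper<Text, BytesWritable, Text, Text> {

    @Override
    protected void map(Text fileName, BytesWritable pdfBytes, Context context)
            throws IOException, InterruptedException {
        // Load the PDF from the in-memory byte array and strip out its text.
        try (PDDocument document = PDDocument.load(pdfBytes.copyBytes())) {
            String text = new PDFTextStripper().getText(document);
            context.write(fileName, new Text(text));
        }
    }
}
```

Any other Java PDF library would work the same way; the only requirement is that the mapper sees the complete file, not an arbitrary split of it.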
Processing PDF files in Hadoop can be done by extending the FileInputFormat class. Let the class extending it be WholeFileInputFormat. In the WholeFileInputFormat class you override the getRecordReader() method. Now each PDF will be received as an individual input split. These individual splits can then be parsed to extract the text. This link gives a clear example of how to extend FileInputFormat.
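Along those lines, here is a rough sketch of such a WholeFileInputFormat, written against the newer org.apache.hadoop.mapreduce API (where the method to override is createRecordReader() rather than getRecordReader()). It marks files as non-splittable and hands each whole PDF to the mapper as a single record, with the file name as the key:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Delivers one record per PDF: the file name as the key, the raw bytes as the value.
public class WholeFileInputFormat extends FileInputFormat<Text, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // never split a PDF; each file must be read as a unit
    }

    @Override
    public RecordReader<Text, BytesWritable> createRecordReader(InputSplit split,
                                                                TaskAttemptContext context) {
        return new WholeFileRecordReader();
    }

    public static class WholeFileRecordReader extends RecordReader<Text, BytesWritable> {
        private FileSplit fileSplit;
        private TaskAttemptContext context;
        private final Text key = new Text();
        private final BytesWritable value = new BytesWritable();
        private boolean processed = false;

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context) {
            this.fileSplit = (FileSplit) split;
            this.context = context;
        }

        @Override
        public boolean nextKeyValue() throws IOException {
            if (processed) {
                return false;
            }
            // Read the entire PDF into memory in one go.
            Path file = fileSplit.getPath();
            byte[] contents = new byte[(int) fileSplit.getLength()];
            FileSystem fs = file.getFileSystem(context.getConfiguration());
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                IOUtils.readFully(in, contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            key.set(file.getName());
            value.set(contents, 0, contents.length);
            processed = true;
            return true;
        }

        @Override
        public Text getCurrentKey() { return key; }

        @Override
        public BytesWritable getCurrentValue() { return value; }

        @Override
        public float getProgress() { return processed ? 1.0f : 0.0f; }

        @Override
        public void close() {
            // nothing to close; the stream is closed in nextKeyValue()
        }
    }
}
```

The job would then be configured with job.setInputFormatClass(WholeFileInputFormat.class), and the mapper's input key/value types would be Text and BytesWritable, as in the mapper sketch above. Keep in mind this reads each PDF fully into memory, so it suits many small-to-medium files rather than very large ones.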