How do I Index PDF files and search for keywords?

Published 2019-01-23 07:52

Question:

What I have is a bunch of PDFs (a few hundred). They have no particular structure or fields; all they contain is a lot of text.

What I am trying to do :

Index the PDFs and search the index for keywords. I want to know whether a given keyword occurs in a PDF document and, if it does, get the line where it is found. For example, if I searched for 'Google' in a PDF that contains the line 'Google is a great search engine', I would like that line returned.

How I decided to do it:

Either use Solr or Whoosh, but Solr looks better because of its built-in PDF support. I prefer to code in Python, and sunburnt is a Python wrapper for Solr that I like. Solr's sample/example project ships a price-comparison schema file, so now I am not sure whether I can use Solr to solve my problem.

What do you guys suggest? Any input is much appreciated.

Answer 1:

I think Solr fits your needs.

The "Highlighting" feature is what you are looking for. For that, you have to both index and store the documents in the Lucene index.

The highlighting feature returns a snippet in which the searched text is marked.

Look at this: http://wiki.apache.org/solr/HighlightingParameters
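To make this concrete, here is a minimal sketch of querying Solr's highlighter over HTTP. The `hl`, `hl.fl`, and `hl.snippets` parameters are standard Solr highlighting parameters; the core name (`pdfs`) and field name (`content`) are assumptions you would replace with your own schema's values.

```python
from urllib.parse import urlencode

def highlight_query_url(base_url, keyword, field="content"):
    """Build a Solr /select URL that asks for highlighted snippets.

    base_url is something like http://localhost:8983/solr/pdfs
    (the core name "pdfs" is hypothetical -- use your own core).
    """
    params = {
        "q": f"{field}:{keyword}",  # search the text field
        "hl": "true",               # turn highlighting on
        "hl.fl": field,             # field(s) to return snippets from
        "hl.snippets": 3,           # up to 3 snippets per document
        "wt": "json",               # JSON response
    }
    return f"{base_url}/select?{urlencode(params)}"

def extract_snippets(response, field="content"):
    """Pull (doc_id, snippet) pairs out of a parsed JSON response.

    Solr returns a "highlighting" map of document id -> field -> list
    of snippet strings, with matched terms wrapped in <em> by default.
    """
    out = []
    for doc_id, fields in response.get("highlighting", {}).items():
        for snippet in fields.get(field, []):
            out.append((doc_id, snippet))
    return out
```

You would fetch the URL with `urllib.request` or `requests`, parse the JSON body, and feed it to `extract_snippets`; each snippet is roughly the "line" you asked for, with the keyword marked up.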



Answer 2:

Another offline/standalone solution:

  • https://github.com/WolfgangFahl/pdfindexer It uses PDFBox and Apache Lucene and creates an HTML index file with links to the pages in the PDF where each keyword was found.


Answer 3:

I once solved this by converting the PDF files to text with a utility such as pdftotext (pdftohtml would probably also work), generating a 'cache' of sorts. Then I searched the text-file cache for keywords with grep.

This is slightly different from your proposed solution, but I imagine you could call this from Python as well.
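The convert-then-grep approach above can be sketched in Python like this. It assumes the `pdftotext` binary (from poppler-utils) is on your PATH for the conversion step; the search step is plain Python standing in for grep, so it works on any existing text cache.

```python
import shutil
import subprocess
from pathlib import Path

def build_text_cache(pdf_dir, cache_dir):
    """Convert every PDF under pdf_dir into a .txt file under cache_dir
    by shelling out to the pdftotext utility (poppler-utils)."""
    if shutil.which("pdftotext") is None:
        raise RuntimeError("pdftotext not found on PATH")
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    for pdf in Path(pdf_dir).glob("*.pdf"):
        txt = cache / (pdf.stem + ".txt")
        subprocess.run(["pdftotext", str(pdf), str(txt)], check=True)

def grep_cache(cache_dir, keyword):
    """Return (filename, line_number, line) for every cached line that
    contains the keyword, case-insensitively -- a stand-in for grep."""
    hits = []
    for txt in sorted(Path(cache_dir).glob("*.txt")):
        text = txt.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if keyword.lower() in line.lower():
                hits.append((txt.name, lineno, line))
    return hits
```

Rebuilding the cache only for PDFs newer than their cached .txt (compare mtimes) would keep repeated searches over a few hundred files fast.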