
How to process/extract .pst using Hadoop MapReduce

Posted 2019-05-28 21:41

Question:

I am using MAPI tools (a Microsoft library for .NET) and then the Apache Tika libraries to process and extract PST files from an Exchange server, but this approach is not scalable.

How can I process/extract PST files the MapReduce way? Is there a tool or library available in Java that I can use in my MR jobs? Any help would be greatly appreciated.

The JPST library internally uses: PstFile pstFile = new PstFile(java.io.File)

And the problem is that the Hadoop APIs don't give us anything close to java.io.File.
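For context, a minimal sketch of what the Hadoop FileSystem API does give you: a stream, not a local file, which is why a constructor that takes java.io.File cannot read from HDFS directly (the class name and argument handling here are just illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HdfsStreamExample {
      public static void main(String[] args) throws Exception {
          FileSystem fs = FileSystem.get(new Configuration());
          // fs.open() returns an FSDataInputStream (a stream), not a java.io.File,
          // so there is nothing to pass to new PstFile(java.io.File) without a local copy.
          try (FSDataInputStream in = fs.open(new Path(args[0]))) {
              System.out.println("First byte: " + in.read());
          }
      }
  }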

The following option is always available, but it is not efficient:

  // Copy the PST out of HDFS onto the local file system, then open it with JPST
  File tempFile = File.createTempFile("myfile", ".tmp");
  fs.moveToLocalFile(new Path(<HDFS pst path>), new Path(tempFile.getAbsolutePath()));
  PstFile pstFile = new PstFile(tempFile);

Answer 1:

Take a look at Behemoth (http://digitalpebble.blogspot.com/2011/05/processing-enron-dataset-using-behemoth.html). It combines Tika and Hadoop.

I've also written my own Hadoop + Tika jobs. The pattern is:

  1. Wrap all the PST files into SequenceFile or Avro container files.
  2. Write a map-only job that reads the PST files from those container files and writes them to local disk (see the sketch after this list).
  3. Run Tika across the files.
  4. Write the output of Tika back into a SequenceFile.
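A minimal sketch of steps 2 and 3, assuming step 1 produced a SequenceFile of (file name, raw PST bytes) records; the class name and output types are illustrative, not from the original answer:

  import org.apache.hadoop.io.BytesWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.tika.Tika;

  import java.io.File;
  import java.io.IOException;
  import java.nio.file.Files;

  // Map-only step: each input record is (PST file name, PST bytes) from the SequenceFile built in step 1.
  public class PstExtractMapper extends Mapper<Text, BytesWritable, Text, Text> {

      private final Tika tika = new Tika();

      @Override
      protected void map(Text fileName, BytesWritable pstBytes, Context context)
              throws IOException, InterruptedException {
          // Spill the PST bytes to the task's local disk so File-based libraries (JPST, Tika) can open them.
          File localPst = File.createTempFile("pst-", ".pst");
          try {
              Files.write(localPst.toPath(), pstBytes.copyBytes());
              // Step 3: run Tika over the local file and emit the extracted text.
              context.write(fileName, new Text(tika.parseToString(localPst)));
          } catch (Exception e) {
              throw new IOException("Failed to extract " + fileName, e);
          } finally {
              localPst.delete();
          }
      }
  }

The driver would use SequenceFileInputFormat for the input and SequenceFileOutputFormat for the output, which covers step 4. Note that BytesWritable keeps each PST in memory, so very large PST files may need a different packaging scheme.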

Hope that helps.



Answer 2:

It is not possible to process a PST file directly in the mapper. After long analysis and debugging, it turned out that the API is not exposed properly, and those APIs need the local file system to store the extracted PST contents; they cannot write directly to HDFS, which is the bottleneck. On top of that, the APIs (the libraries that extract and process PST files) are not free.

What we can do is extract the PST contents outside HDFS and then process the extracted data in MR jobs.
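For example, a rough sketch of that approach, assuming Apache Tika with its PST parser on the classpath (the class name and paths are illustrative): extract the PST to plain text on the local file system, copy that text into HDFS, and point the MR jobs at the extracted text instead of the PST.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.tika.Tika;

  import java.io.File;
  import java.io.PrintWriter;

  // Runs outside Hadoop: extract locally, then upload the text for MR processing.
  public class ExtractPstThenUpload {
      public static void main(String[] args) throws Exception {
          File pst = new File(args[0]);                    // local .pst file
          File text = new File(pst.getName() + ".txt");    // extracted text, written locally

          Tika tika = new Tika();
          tika.setMaxStringLength(-1);                     // do not truncate large mailboxes
          try (PrintWriter out = new PrintWriter(text, "UTF-8")) {
              out.write(tika.parseToString(pst));          // extraction happens on the local file system
          }

          // Copy the extracted text into HDFS; the MR jobs then work on plain text, not the PST.
          FileSystem fs = FileSystem.get(new Configuration());
          fs.copyFromLocalFile(new Path(text.getAbsolutePath()), new Path(args[1]));
      }
  }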