What are the key differences between doing map/reduce work on MongoDB data with Hadoop map/reduce versus MongoDB's built-in map/reduce?
When should I pick one engine over the other? What are the pros and cons of each engine for working on data stored in MongoDB?
My answer is based on knowledge and experience of Hadoop MR and on what I have learned about MongoDB MR. Let's look at the major differences and then try to define criteria for selection. The differences are:
From the above I can suggest the following criteria for selection:
Select MongoDB MR if you need simple group-by and filtering and do not expect heavy shuffling between map and reduce. In other words: something simple.
Select Hadoop MR if you're going to run complicated, computationally intensive MR jobs (for example, regression calculations). A large or unpredictable volume of data between map and reduce also points to Hadoop MR.
Java is a stronger language with more libraries, especially statistical ones. That should also be taken into account.
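To make the "simple group by" case concrete, here is a minimal sketch of map and reduce functions in the style you would pass to `db.collection.mapReduce()` in the mongo shell, plus a tiny in-process driver that mimics the emit/shuffle/reduce cycle so the data flow between the two phases is visible. The `orders` documents and field names are hypothetical, and `emit` is passed explicitly here (in the real shell it is a global, and the map function reads fields via `this`):

```javascript
// Hypothetical sample documents standing in for an "orders" collection.
const orders = [
  { cust_id: "A1", amount: 50, status: "A" },
  { cust_id: "A1", amount: 25, status: "A" },
  { cust_id: "B2", amount: 30, status: "A" },
];

// map: emit one (key, value) pair per document.
function mapFn(doc, emit) {
  emit(doc.cust_id, doc.amount);
}

// reduce: fold all values emitted for one key into a single value.
function reduceFn(key, values) {
  return values.reduce((a, b) => a + b, 0);
}

// shuffle: group emitted values by key (the engine does this internally);
// this intermediate grouped data is what "shuffling" refers to above.
function runMapReduce(docs) {
  const grouped = new Map();
  for (const doc of docs) {
    mapFn(doc, (key, value) => {
      if (!grouped.has(key)) grouped.set(key, []);
      grouped.get(key).push(value);
    });
  }
  const out = {};
  for (const [key, values] of grouped) out[key] = reduceFn(key, values);
  return out;
}

console.log(runMapReduce(orders)); // { A1: 75, B2: 30 }
```

The larger that intermediate grouped data is relative to the input, the more the choice of engine matters.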
As of MongoDB 2.4 MapReduce jobs are no longer single threaded.
Also, see the Aggregation Framework for a higher-performance, declarative way to perform aggregations and other analytical workloads in MongoDB.
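For comparison, a per-key sum like the typical map/reduce group-by can be expressed as an aggregation pipeline and run with `db.collection.aggregate(pipeline)`; no user-supplied JavaScript functions are involved. The collection and field names (`orders`, `cust_id`, `amount`, `status`) are hypothetical:

```javascript
// A declarative pipeline: filter, then group and sum server-side.
const pipeline = [
  { $match: { status: "A" } },            // filter first, like mapReduce's query option
  { $group: {
      _id: "$cust_id",                    // group key
      total: { $sum: "$amount" },         // per-key sum, evaluated by the server
  } },
];
// Run against a live deployment with: db.orders.aggregate(pipeline)
```

Because the pipeline stages are declarative operators rather than JavaScript, the server can evaluate them in native code, which is where the performance advantage comes from.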
I don't have a lot of experience with Hadoop MR, but my impression is that it only works on HDFS, so you would have to duplicate all of your Mongo data in HDFS. If you are willing to duplicate all of your data, I would guess Hadoop MR is much faster and more robust than Mongo MR.
Item 3 is certainly incorrect when it comes to Hadoop. Colocating processing with the data is part of the foundation of Hadoop.