Google's Dremel is described here. What's the difference between Dremel and MapReduce?
MapReduce is an abstract algorithm for how to split a problem up, distribute it, and combine the results. Dremel appears to be a specific tool for querying and analyzing datasets.
Dremel and MapReduce are not directly comparable, but rather they are complementary technologies.
MapReduce is not specifically designed for analyzing data; rather, it's a software framework that allows a collection of nodes to tackle distributed computational problems over large datasets.
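To make the programming model concrete, here is the canonical word-count job written against Hadoop's Java MapReduce API. This is a minimal sketch of the map/shuffle/reduce split the framework provides; input and output paths come from the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: split each input line into words and emit (word, 1) pairs.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each distinct word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Note that even this trivial job runs as a batch: it is submitted to the cluster and you wait for completion, which is exactly the latency point raised further down.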
Dremel is a data analysis tool designed to quickly run queries over massive, structured datasets (such as log or event files). It supports a SQL-like syntax, but apart from table appends it is read-only: it doesn't support update or create operations, nor does it feature table indexes. Data is organized in a "columnar" format, which is what makes its queries so fast. Google's BigQuery product is an implementation of Dremel accessible via a RESTful API.
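As an illustration of what that looks like from client code, here is a minimal sketch using Google's `google-cloud-bigquery` Java client (which wraps the REST API). The project, dataset, and `requests` table names are assumptions for the example.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class BigQueryExample {
  public static void main(String[] args) throws InterruptedException {
    // The client picks up credentials and the default project from the environment.
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // A read-only aggregate query over a hypothetical log table.
    String sql =
        "SELECT status_code, COUNT(*) AS hits "
      + "FROM `my_project.logs.requests` "
      + "GROUP BY status_code "
      + "ORDER BY hits DESC";

    // Runs interactively: results typically come back in seconds,
    // because the columnar storage only scans the referenced columns.
    TableResult result =
        bigquery.query(QueryJobConfiguration.newBuilder(sql).build());

    for (FieldValueList row : result.iterateAll()) {
      System.out.printf("%s: %d%n",
          row.get("status_code").getStringValue(),
          row.get("hits").getLongValue());
    }
  }
}
```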
Hadoop (an open source implementation of MapReduce), in conjunction with the "Hive" data warehouse software, also allows analysis of massive datasets using a SQL-style syntax. Hive essentially turns queries into MapReduce jobs. In contrast to Dremel's columnar ColumnIO format, Hive attempts to make queries quick by using techniques such as table indexing.
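For comparison, here is a sketch of the same kind of aggregate query issued to Hive over JDBC. The HiveServer2 endpoint and the `requests` table are hypothetical; the query reads like SQL, but Hive compiles it into MapReduce jobs underneath, which is where the latency comes from.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryExample {
  public static void main(String[] args) throws Exception {
    // Register the HiveServer2 JDBC driver (the hive-jdbc jar must be on the classpath).
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // Hypothetical HiveServer2 endpoint and database.
    String url = "jdbc:hive2://localhost:10000/default";

    try (Connection conn = DriverManager.getConnection(url, "", "");
         Statement stmt = conn.createStatement();
         // Hive compiles this SQL-like query into one or more MapReduce jobs,
         // so the result arrives with batch-job latency, not interactively.
         ResultSet rs = stmt.executeQuery(
             "SELECT status_code, COUNT(*) AS hits "
           + "FROM requests GROUP BY status_code")) {
      while (rs.next()) {
        System.out.printf("%s: %d%n",
            rs.getString("status_code"), rs.getLong("hits"));
      }
    }
  }
}
```

The contrast with the BigQuery sketch above is the point: the client code is nearly identical in shape, but the execution models (columnar scan vs. MapReduce jobs) differ completely.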
Check this article out. Dremel is what the future of Hive should (and will) be.
The major issue with MapReduce, and with the solutions built on top of it like Pig and Hive, is the inherent latency between submitting a job and getting the answer. Dremel takes a totally novel approach (published by Google in its 2010 paper) that lets it run near-realtime queries that are both interactive and ad-hoc, neither of which MapReduce can do. Pig and Hive aren't realtime either.
You should keep an eye on projects coming out of this. It is pretty new to me too... so any other expert comments are welcome!
Edit: Dremel is what the future of Hive (and not MapReduce, as I mentioned before) should be. Hive right now provides a SQL-like interface for running MapReduce jobs. Hive has very high latency, and so is not practical for ad-hoc data analysis. Dremel provides a very fast SQL-like interface to the data by using a technique entirely different from MapReduce.