Hive count(*) query is not invoking mapreduce

Published 2019-08-13 06:16

I have external tables in Hive. When I run select count(*) from table_name, the query returns instantaneously with a result that appears to come from already-stored statistics rather than an actual scan, and the count is not correct. Is there a way to force a MapReduce job so the query executes each time?

Note: this behavior occurs only for some of the external tables, not all of them.

Versions used : Hive 0.14.0.2.2.6.0-2800, Hadoop 2.6.0.2.2.6.0-2800 (Hortonworks)

Tags: hadoop hive
4 answers
对你真心纯属浪费
Answer 2 · 2019-08-13 06:57

Please try the following: set hive.fetch.task.conversion=none in your Hive session, then run select count(*) again. This disables the fetch-task shortcut and forces the query through MapReduce.
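As a sketch of the session described above (table_name is a placeholder for your external table):

```sql
-- Disable the fetch-task conversion for this session so simple queries
-- are no longer answered from metadata or a direct file fetch:
SET hive.fetch.task.conversion=none;

-- This count now runs as a MapReduce job and scans the data:
SELECT COUNT(*) FROM table_name;
```

The setting only affects the current session; it can also be set globally in hive-site.xml if this behavior should apply everywhere.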

forever°为你锁心
Answer 3 · 2019-08-13 07:10

After some digging I found a method that kicks off a MapReduce job to count the records in an ORC table.

ANALYZE TABLE table_name PARTITION(partition_columns) COMPUTE STATISTICS; -- or, for an unpartitioned table: ANALYZE TABLE table_name COMPUTE STATISTICS;

This is not a direct substitute for count(*), but it refreshes the table statistics so they reflect the latest record count.
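To read the refreshed count back without another full scan, the statistics can be inspected afterwards. A sketch, assuming an unpartitioned table named table_name:

```sql
-- Launches an MR job that scans the data and updates table statistics:
ANALYZE TABLE table_name COMPUTE STATISTICS;

-- The updated row count then appears as the numRows entry under
-- "Table Parameters" in the describe output:
DESCRIBE FORMATTED table_name;
```

For a partitioned table, statistics are kept per partition, so the ANALYZE statement needs the PARTITION clause shown above.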

太酷不给撩
Answer 4 · 2019-08-13 07:17

From personal experience, COUNT(*) on an ORC table often returns wrong figures -- for example, the number of rows in the first data file only. If the table was fed by multiple INSERTs, then you are stuck.

With V0.13 you could fool the optimizer into running a dummy MapReduce job by adding a dummy "where 1=1" clause -- it takes much longer, but actually counts the rows.

With 0.14 the optimizer got smarter, so you must add a clause it cannot fold away, e.g. "where MYKEY is null". This assumes that MYKEY is a String; otherwise the "is null" clause may crash your query -- another ugly ORC bug.
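The workarounds above can be sketched as follows. This is a hedged illustration, not the answerer's exact queries: my_orc_table and my_key are placeholder names, and the 0.14 variant uses a tautology instead of the null-only predicate so that every row is still counted:

```sql
-- Hive 0.13: a trivial predicate was enough to defeat the
-- metadata-only shortcut and force a real scan:
SELECT COUNT(*) FROM my_orc_table WHERE 1 = 1;

-- Hive 0.14: the optimizer folds "1=1" away, so use a predicate it
-- does not simplify but that is still true for every row:
SELECT COUNT(*) FROM my_orc_table
WHERE my_key IS NULL OR my_key IS NOT NULL;
```

Setting hive.fetch.task.conversion=none (see the answer above) achieves the same effect without rewriting the query.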

By the way, a SELECT DISTINCT on partition key(s) will also return wrong results -- all declared partitions are shown, even the empty ones. This one is not specific to ORC.

beautiful°
Answer 5 · 2019-08-13 07:19

Doing a wc -l on ORC data files won't give you an accurate result, since the data is encoded. That approach would only work if the data were stored in a plain text format with one row per line.

Hive does not need to launch a MapReduce job for count(*) on an ORC table, since it can use the ORC file metadata to determine the total count.

Use the orcfiledump command to analyse ORC data from the command line:

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-ORCFileDumpUtility
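A sketch of invoking the dump utility against a cluster (the HDFS path is a placeholder; the exact flags vary by Hive version, so check the linked page for yours):

```shell
# Dump ORC file metadata, including the per-stripe and total row counts,
# for one data file of the table:
hive --orcfiledump /apps/hive/warehouse/table_name/000000_0
```

The "Rows:" figure in the output comes from the same metadata Hive consults when it answers count(*) without a scan, which makes it useful for checking whether that metadata matches reality.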
