How to use Hive without Hadoop

Posted 2020-03-02 05:08

Question:

I am new to NoSQL solutions and want to play with Hive. But installing HDFS/Hadoop takes a lot of resources and time (maybe that is just my inexperience, but I don't have time for it).

Are there ways to install and use Hive on a local machine without HDFS/Hadoop?

Answer 1:

Yes, you can run Hive without a Hadoop cluster:
1. Create your warehouse on your local file system.
2. Set the default filesystem to file:///.
Then you can run Hive in local mode without a Hadoop installation.

In hive-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
   <property>
      <name>hive.metastore.schema.verification</name>
      <value>false</value>
   </property>
   <property>
      <!-- this should eventually be deprecated since the metastore should supply this -->
      <name>hive.metastore.warehouse.dir</name>
      <value>file:///tmp</value>
      <description></description>
   </property>
   <property>
      <name>fs.default.name</name>
      <value>file:///tmp</value>
   </property>
</configuration>
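
With that configuration in place, a quick local-mode smoke test could look like the following (a rough sketch; the install paths are illustrative, and the Hadoop libraries must still be present on disk even though no cluster is running):

export HADOOP_HOME=/opt/hadoop        # Hive needs the Hadoop jars, not a running cluster
export HIVE_HOME=/opt/apache-hive
$HIVE_HOME/bin/schematool -dbType derby -initSchema   # one-time init of the embedded metastore
$HIVE_HOME/bin/hive -e "SHOW DATABASES;"              # should print: default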


Answer 2:

If you just want to try out Hive before making a decision, you can use a preconfigured VM, as @Maltram suggested (Hortonworks, Cloudera, IBM, and others all offer such VMs).

Keep in mind that you will not be able to use Hive in production without Hadoop and HDFS, so if that is a problem for you, you should consider alternatives to Hive.



Answer 3:

I would recommend using something like this:

http://hortonworks.com/products/hortonworks-sandbox/

It's a fully functional VM with everything you need to start right away.



Answer 4:

You can't. Just download Hive and run:

./bin/hiveserver2                                                                                                                                        
Cannot find hadoop installation: $HADOOP_HOME or $HADOOP_PREFIX must be set or hadoop must be in the path

Hadoop is like the core, and Hive needs some libraries from it.
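
In practice this means pointing Hive at an unpacked Hadoop distribution, even if no daemons are running. A sketch, with an illustrative path:

export HADOOP_HOME=/opt/hadoop        # unpacked Hadoop tarball; no running cluster required
export PATH=$HADOOP_HOME/bin:$PATH
./bin/hiveserver2                     # now finds the Hadoop libraries and starts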



Answer 5:

Update: this answer is out of date. With Hive on Spark, it is no longer necessary to have HDFS support.


Hive requires HDFS and MapReduce, so you will need them. The other answers have some merit in recommending a simple, preconfigured way of getting all of the components set up for you.

But the gist of it is: Hive needs Hadoop and MapReduce, so to some degree you will have to deal with them.
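
For reference, the execution engine mentioned in the update is selected in hive-site.xml. A minimal sketch (it assumes a compatible Spark installation, and further Spark configuration is still needed):

<property>
   <name>hive.execution.engine</name>
   <value>spark</value>
</property>
<property>
   <!-- run Spark in local mode rather than against a cluster -->
   <name>spark.master</name>
   <value>local</value>
</property>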



Tags: hadoop hive hdfs