Getting "ERROR: Can't get master address from ZooKeeper; znode data == null"

Posted 2019-01-23 16:11

I installed Hadoop 2.2.0 and HBase 0.98.0, and here is what I did:

$ ./bin/start-hbase.sh 

$ ./bin/hbase shell

2.0.0-p353 :001 > list

then I got this:

ERROR: Can't get master address from ZooKeeper; znode data == null

Why am I getting this error? Another question: do I need to run ./sbin/start-dfs.sh and ./sbin/start-yarn.sh before I run HBase?

Also, what are ./sbin/start-dfs.sh and ./sbin/start-yarn.sh used for?

Here are some of my configuration files:

hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://127.0.0.1:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <property>
        <name>hbase.tmp.dir</name>
        <value>/Users/apple/Documents/tools/hbase-tmpdir/hbase-data</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/Users/apple/Documents/tools/hbase-zookeeper/zookeeper</value>
    </property>
</configuration>

core-site.xml

<configuration>

  <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
      <description>The name of the default file system.</description>
  </property>

  <property>
      <name>hadoop.tmp.dir</name>
      <value>/Users/micmiu/tmp/hadoop</value>
      <description>A base for other temporary directories.</description>
  </property>

  <property>
      <name>io.native.lib.available</name>
      <value>false</value>
  </property>

</configuration>

yarn-site.xml

<configuration>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

</configuration>

7 Answers
Viruses.
Answer #2 · 2019-01-23 16:26

This can also happen if the VM or the host machine is put to sleep: ZooKeeper will not stay alive across the suspend. Restarting the VM should solve the problem.
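
To confirm whether ZooKeeper is actually alive after a resume, you can send it the "ruok" four-letter command (a minimal check, assuming the default client port 2181; a healthy server answers "imok"):

echo ruok | nc localhost 2181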

【Aperson】
Answer #3 · 2019-01-23 16:28

I had the exact same error. The Linux firewall was blocking connectivity. You can test the ports via telnet (see the example at the end of this answer). A quick fix is to turn off the firewall and see whether that resolves it:

Completely disable the firewall on all of your nodes. Note: this command will not survive a reboot of your machines.

systemctl stop firewalld

The long-term fix is to configure the firewall to allow the HBase ports.

Note that your version of HBase may use different ports: https://issues.apache.org/jira/browse/HBASE-10123
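
For example, to probe the relevant ports with telnet (a minimal sketch; 2181 and 16000 are the defaults for ZooKeeper and the HBase 1.0+ master, while 0.98 as used by the asker listens on 60000 instead, per the HBASE-10123 link above):

telnet localhost 2181     # ZooKeeper client port
telnet localhost 16000    # HBase master RPC port (60000 on HBase 0.98)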

Viruses.
Answer #4 · 2019-01-23 16:31

One quick solution could be to restart HBase:

1) stop-hbase.sh
2) start-hbase.sh
[account banned]
Answer #5 · 2019-01-23 16:37

You need to start ZooKeeper and then run the HBase shell:

{HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper

You may also want to check this property in hbase-env.sh:

# Tell HBase whether it should manage its own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false
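
Note that with HBASE_MANAGES_ZK=false, start-hbase.sh will not start ZooKeeper for you, so the hbase-daemons.sh command above (or an external ZooKeeper) must be running first. Once it is up, you can read the master znode directly, which is exactly what the shell fails to do (a minimal sketch, assuming the default /hbase parent znode):

# Open the ZooKeeper CLI that ships with HBase, then read the master znode
{HBASE_HOME}/bin/hbase zkcli
get /hbase/master    # null data here is precisely "znode data == null"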

Refer to Source - Zookeeper

Answer #6 · 2019-01-23 16:38

If you just want to run standalone HBase without getting into ZooKeeper management, remove all of the property blocks from hbase-site.xml except the hbase.rootdir one.

Now run ./bin/start-hbase.sh. HBase comes with its own ZooKeeper, which is started by ./bin/start-hbase.sh; that will suffice if you are just trying things out for the first time. Later you can add distributed-mode ZooKeeper configuration.

You only need to run ./sbin/start-dfs.sh for HBase because hbase.rootdir is set to hdfs://127.0.0.1:9000/hbase in your hbase-site.xml. If you change it to a location on the local filesystem using file:///some_location_on_local_filesystem, you don't even need to run ./sbin/start-dfs.sh.
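
A minimal standalone hbase-site.xml would then look something like this (a sketch only; the file:/// path is an example, substitute your own directory):

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>file:///Users/apple/Documents/tools/hbase-data</value>
    </property>
</configuration>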

hdfs://127.0.0.1:9000/hbase says the HBase root is a location on HDFS, and ./sbin/start-dfs.sh starts the NameNode and DataNode, which provide the underlying API for accessing the HDFS filesystem. To learn about YARN, see http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/YARN.html.
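
Once the daemons are started, you can confirm they are running with jps, which ships with the JDK (illustrative output only; the PIDs and the exact process list depend on your setup):

$ jps
12001 NameNode
12002 DataNode
12003 SecondaryNameNode
12004 HMaster
12005 HRegionServer
12006 HQuorumPeer    # HBase-managed ZooKeeper, when HBASE_MANAGES_ZK=true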

乱世女痞
Answer #7 · 2019-01-23 16:44

The output from the HBase shell is quite high-level, and many misconfigurations can cause this message. To help yourself debug, it is much better to look into the HBase log in

/var/log/hbase 

to figure out the root cause of the issue.
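
For example (a sketch; tarball installs write to $HBASE_HOME/logs instead of /var/log/hbase, and the file name embeds the user and hostname):

tail -n 100 /var/log/hbase/hbase-*-master-*.log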

I had the same problem too. In my case, the root cause was hadoop-kms having a port conflict with my hbase-master: both use port 16000, so my HMaster never even started when I invoked the HBase shell. After I fixed that, HBase worked.

Again, a KMS port conflict might not be your root cause. I strongly suggest looking into /var/log/hbase to find it.
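
To check whether another process is already bound to the master port (16000 here; a sketch using standard Linux tools):

netstat -tlnp | grep 16000    # or: lsof -i :16000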
