Any command to get active namenode for nameservice


Question:

The command:

hdfs haadmin -getServiceState machine-98

works only if you already know the machine name. Is there a command like:

hdfs haadmin -getServiceState <nameservice>

which can tell you the IP/hostname of the active namenode?

Answer 1:

To print out the namenodes, use this command:

hdfs getconf -namenodes

To print out the secondary namenodes:

hdfs getconf -secondaryNameNodes

To print out the backup namenodes:

hdfs getconf -backupNodes

Note: These commands were tested using Hadoop 2.4.0.
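
For example, on a hypothetical two-NameNode HA cluster, the first command might print something like the following (hostnames are made up for illustration):

namenode1.example.com namenode2.example.com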

Update 10-31-2014:

Here is a Python script that reads the NameNodes involved in Hadoop HA from the config file and determines which of them is active by using the hdfs haadmin command. This script is not fully tested, as I do not have HA configured; I only tested the parsing against a sample file based on the Hadoop HA documentation. Feel free to use and modify as needed.

#!/usr/bin/env python
# coding: UTF-8
import xml.etree.ElementTree as ET
import subprocess as SP

if __name__ == "__main__":
    hdfsSiteConfigFile = "/etc/hadoop/conf/hdfs-site.xml"

    tree = ET.parse(hdfsSiteConfigFile)
    root = tree.getroot()
    hasHadoopHAElement = False
    activeNameNode = None
    for property in root:
        if "dfs.ha.namenodes" in property.find("name").text:
            hasHadoopHAElement = True
            nameserviceId = property.find("name").text[len("dfs.ha.namenodes") + 1:]
            nameNodes = property.find("value").text.split(",")
            for node in nameNodes:
                # resolve this NameNode ID to its rpc-address, then ask haadmin for its state
                for n in root:
                    if n.find("name").text == "dfs.namenode.rpc-address." + nameserviceId + "." + node:
                        nodeAddress = n.find("value").text.split(":")[0]

                        args = ["hdfs haadmin -getServiceState " + node]
                        p = SP.Popen(args, shell=True, stdout=SP.PIPE, stderr=SP.PIPE)

                        for line in p.stdout.readlines():
                            if "active" in line.lower():
                                activeNameNode = nodeAddress
                                print "Active NameNode: " + node + " (" + nodeAddress + ")"
                                break
                        for err in p.stderr.readlines():
                            print "Error executing Hadoop HA command: ", err
            break
    if not hasHadoopHAElement:
        print "Hadoop High-Availability configuration not found!"


Answer 2:

Found this:

https://gist.github.com/cnauroth/7ff52e9f80e7d856ddb3

This works out of the box on my CDH5 NameNodes, although I'm not sure other Hadoop distributions will have http://namenode:50070/jmx available; if not, I think it can be added by deploying Jolokia.

Example:

curl 'http://namenode1.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
{
  "beans" : [ {
    "name" : "Hadoop:service=NameNode,name=NameNodeStatus",
    "modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode",
    "State" : "active",
    "NNRole" : "NameNode",
    "HostAndPort" : "namenode1.example.com:8020",
    "SecurityEnabled" : true,
    "LastHATransitionTime" : 1436283324548
  } ]
}

So by firing off one HTTP request to each namenode (this should be quick) we can figure out which one is the active one.
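
For instance, here is a minimal shell sketch of that idea; the hostnames and the default HTTP port 50070 are assumptions, so substitute your own NameNodes:

for host in namenode1.example.com namenode2.example.com; do
  # grep the JMX JSON for the "State" field shown above
  if curl -s "http://${host}:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" \
      | grep -q '"State" : "active"'; then
    echo "Active NameNode: ${host}"
  fi
done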

It's also worth noting that if you talk to the WebHDFS REST API on an inactive (standby) namenode you will get a 403 Forbidden and the following JSON:

{"RemoteException":{"exception":"StandbyException","javaClassName":"org.apache.hadoop.ipc.StandbyException","message":"Operation category READ is not supported in state standby"}}
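
For example, any read operation over WebHDFS against a standby reproduces this (the standby hostname below is hypothetical):

# expect HTTP 403 plus the StandbyException JSON above
curl -i 'http://standby-namenode.example.com:50070/webhdfs/v1/?op=LISTSTATUS'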


Answer 3:

You can do it in bash with hdfs CLI calls, too, with the caveat that it takes a bit more time since it makes several calls to the API in succession; still, some may prefer this to using a Python script.

This was tested with Hadoop 2.6.0

get_active_nn(){
  ha_name=$1  # needs the NameServiceID
  ha_ns_nodes=$(hdfs getconf -confKey dfs.ha.namenodes.${ha_name})
  active=""
  for node in ${ha_ns_nodes//,/ }; do
    state=$(hdfs haadmin -getServiceState $node)
    if [ "$state" == "active" ]; then
      # resolve the NameNode ID to its host:port rpc-address
      active=$(hdfs getconf -confKey dfs.namenode.rpc-address.${ha_name}.${node})
      break
    fi
  done
  if [ -z "$active" ]; then
    >&2 echo "ERROR: no active namenode found for ${ha_name}"
    return 1
  else
    echo $active
  fi
}
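
A hypothetical invocation, assuming the NameServiceID is mycluster:

# prints the active NameNode's rpc-address, e.g. namenode1.example.com:8020
active_nn=$(get_active_nn mycluster)
echo "Active NameNode: ${active_nn}"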


Answer 4:

After reading all the existing answers, I found that none combined the three steps of:

  1. Identifying the namenodes from the cluster.
  2. Resolving the node names to host:port.
  3. Checking the status of each node (without requiring cluster admin privs).

The solution below combines hdfs getconf calls with a JMX service call to check node status.

#!/usr/bin/env python

from subprocess import check_output
import urllib, json, sys

def get_name_nodes(clusterName):
    # list the NameNode IDs configured for this nameservice
    ha_ns_nodes = check_output(['hdfs', 'getconf', '-confKey',
        'dfs.ha.namenodes.' + clusterName])
    nodes = ha_ns_nodes.strip().split(',')
    nodeHosts = []
    for n in nodes:
        nodeHosts.append(get_node_hostport(clusterName, n))

    return nodeHosts

def get_node_hostport(clusterName, nodename):
    # resolve a NameNode ID to its host:port rpc-address
    hostPort = check_output(
        ['hdfs', 'getconf', '-confKey',
         'dfs.namenode.rpc-address.{0}.{1}'.format(clusterName, nodename)])
    return hostPort.strip()

def is_node_active(nn):
    # query the NameNode's JMX status page (default HTTP port 50070)
    jmxPort = 50070
    host = nn.split(':')[0]
    url = "http://{0}:{1}/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus".format(
            host, jmxPort)
    nnstatus = urllib.urlopen(url)
    parsed = json.load(nnstatus)

    return parsed.get('beans', [{}])[0].get('State', '') == 'active'

def get_active_namenode(clusterName):
    for n in get_name_nodes(clusterName):
        if is_node_active(n):
            return n

clusterName = (sys.argv[1] if len(sys.argv) > 1 else None)
if not clusterName:
    raise Exception("Specify cluster name.")

print 'Cluster: {0}'.format(clusterName)
print "Nodes: {0}".format(get_name_nodes(clusterName))
print "Active Name Node: {0}".format(get_active_namenode(clusterName))
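
A hypothetical run, assuming the script is saved as get_active_nn.py and the nameservice is called mycluster (the hostnames in the output are made up):

$ python get_active_nn.py mycluster
Cluster: mycluster
Nodes: ['namenode1.example.com:8020', 'namenode2.example.com:8020']
Active Name Node: namenode1.example.com:8020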


Answer 5:

In a High Availability Hadoop cluster, there will be two NameNodes: one active and one standby.

To find the active NameNode, we can try executing a test hdfs command against each of the NameNodes; the one on which it runs successfully is the active one.

The command below executes successfully if the NameNode is active, and fails if it is a standby node:

hadoop fs -test -e hdfs://<NameNode>/

Unix script

active_node=''
if hadoop fs -test -e hdfs://<NameNode-1>/ ; then
  active_node='<NameNode-1>'
elif hadoop fs -test -e hdfs://<NameNode-2>/ ; then
  active_node='<NameNode-2>'
fi

echo "Active Dev Name node : $active_node"
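
If you would rather not hard-code the two hostnames, the same test can be combined with hdfs getconf from the other answers. A sketch, assuming the NameNodes answer on the default RPC port:

# discover the NameNode hostnames, then probe each until one succeeds
active_node=''
for nn in $(hdfs getconf -namenodes); do
  if hadoop fs -test -e hdfs://${nn}/ ; then
    active_node=$nn
    break
  fi
done
echo "Active NameNode : $active_node"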


Answer 6:

From the Java API, you can use HAUtil.getAddressOfActive(fileSystem).



Answer 7:

You can run a curl command against the Ambari REST API to find out the active and standby NameNodes, for example (substitute your Ambari cluster name):

curl -u username -H "X-Requested-By: ambari" -X GET http://cluster-hostname:8080/api/v1/clusters/<cluster-name>/services/HDFS

Regards



Answer 8:

I found the commands below by simply typing hdfs at the prompt and reading the usage help; they could be useful for anyone who comes here looking for answers.

hdfs getconf -namenodes

The command above prints the hostnames of the configured NameNodes, e.g. hn1.hadoop.com.

hdfs getconf -secondaryNameNodes

The command above prints the hostnames of any available secondary NameNodes, e.g. hn2.hadoop.com.

hdfs getconf -backupNodes

The command above prints the hostnames of the backup nodes, if any are configured.

hdfs getconf -nnRpcAddresses

The command above prints the NameNode RPC addresses, host and port together, e.g. hn1.hadoop.com:8020.
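
These commands can also be combined with the JMX status check from the earlier answers. A sketch, assuming the default NameNode HTTP port 50070:

# print each NameNode's HA state from its JMX status endpoint
for addr in $(hdfs getconf -nnRpcAddresses); do
  host=${addr%%:*}
  curl -s "http://${host}:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" \
    | grep '"State"'
done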

You're welcome :)


Answer 9:

This Python script combines hdfs getconf with the test command from Answer 5: it lists the NameNodes and probes each one until the test succeeds.

#!/usr/bin/python

import subprocess


def getActiveNameNode():
    # 'hdfs getconf -namenodes' prints the NameNode hostnames, space-separated
    cmd_string = "hdfs getconf -namenodes"
    process = subprocess.Popen(cmd_string, shell=True, stdout=subprocess.PIPE)
    out, err = process.communicate()
    for val in out.split():
        # the test succeeds (exit code 0) only against the active NameNode;
        # a standby fails with a StandbyException
        cmd_str = "hadoop fs -test -e hdfs://" + val + "/"
        process = subprocess.Popen(cmd_str, shell=True,
                                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = process.communicate()
        if process.returncode == 0:
            return val


def main():
    print(getActiveNameNode())


if __name__ == '__main__':
    main()