I am trying to install a Hadoop 2.2.0 cluster on my servers. All of the servers are 64-bit. I downloaded Hadoop 2.2.0 and all the configuration files have been set up. When I run ./start-dfs.sh, I get the following error:
13/11/15 14:29:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hchen/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.namenode]
sed: -e expression #1, char 6: unknown option to `s' have: ssh: Could not resolve hostname have: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
-c: Unknown cipher type 'cd'
Java: ssh: Could not resolve hostname Java: Name or service not known
The authenticity of host 'namenode (192.168.1.62)' can't be established.
RSA key fingerprint is 65:f9:aa:7c:8f:fc:74:e4:c7:a2:f5:7f:d2:cd:55:d4.
Are you sure you want to continue connecting (yes/no)? VM: ssh: Could not resolve hostname VM: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
...
Besides the 64-bit issue, are there any other errors? I have already set up passwordless SSH login between the namenode and the datanodes, so what do the other errors mean?
The issue is not with the native library; that message is just a warning. Export the Hadoop variables mentioned below and it will work.
Add the following entries to .bashrc where HADOOP_HOME is your hadoop folder:
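For example, assuming HADOOP_HOME points at your Hadoop 2.2.0 installation (the path below is only an example, taken from the error log above):

    export HADOOP_HOME=/home/hchen/hadoop-2.2.0          # adjust to your installation path
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib"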
In addition, execute the following commands:
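For example, reload the shell configuration so the new variables take effect in the current session (adjust as needed for your setup):

    source ~/.bashrc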
I think that the only problem here is the same as in this question, so the solution is also the same:
Stop the JVM from printing the stack guard warning to stdout/stderr, because that output is what breaks the HDFS start script.
Do it by replacing the relevant HADOOP_OPTS line in your etc/hadoop/hadoop-env.sh, as shown below.
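Roughly, assuming the stock Hadoop 2.2.0 hadoop-env.sh, the change is to add the -XX:-PrintWarnings JVM flag:

    # old line in etc/hadoop/hadoop-env.sh
    export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
    # new line: keep the JVM from printing the stack guard warning that the start script mis-parses
    export HADOOP_OPTS="$HADOOP_OPTS -XX:-PrintWarnings -Djava.net.preferIPv4Stack=true"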
(This solution was found on Sumit Chawla's blog.)
I had a similar problem and could not solve it even after following all of the above suggestions. I finally realized that the configured hostname had no IP address assigned to it. My hostname was vagrant and it is configured in /etc/hostname, but no IP address was assigned to vagrant in /etc/hosts; /etc/hosts only had an entry for localhost. Once I added an entry for vagrant alongside localhost in /etc/hosts, all of the above problems were resolved.
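For example, the resulting /etc/hosts looks something like this (the address is a placeholder; use the machine's real IP):

    127.0.0.1      localhost
    192.168.56.10  vagrant      # map the machine's own hostname to its actual IP address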
The root cause is that the default native library shipped with Hadoop is built for 32-bit. The solution:
1) Set up some environment variables in .bash_profile; please refer to https://gist.github.com/ruo91/7154697, or
2) Rebuild your Hadoop native library; please refer to http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/NativeLibraries.html
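As a quick sanity check (not part of the linked pages), you can confirm whether the bundled native library is 32-bit or 64-bit with the file command:

    file $HADOOP_HOME/lib/native/libhadoop.so.1.0.0
    # a 32-bit build reports "ELF 32-bit LSB shared object"; a rebuilt 64-bit library reports "ELF 64-bit ..."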
You can also export the variables in hadoop-env.sh (here /usr/local/hadoop is my Hadoop installation folder):
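For example, assuming the same two variables as in the .bashrc answer above, added to hadoop-env.sh:

    export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native
    export HADOOP_OPTS="-Djava.library.path=/usr/local/hadoop/lib"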