I'm studying Hadoop and currently I'm trying to set up a Hadoop 2.2.0 single node. I downloaded the latest distribution, uncompressed it, and now I'm trying to set up the Hadoop Distributed File System (HDFS).
Now I'm trying to follow the Hadoop instructions available here, but I'm quite lost.
In the left sidebar, you'll see references to the following files:
- core-default.xml
- hdfs-default.xml
- mapred-default.xml
- yarn-default.xml
But what are those files, and where do I find them?
I found /etc/hadoop/hdfs-site.xml, but it is empty!
I found /share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml but it is just a piece of doc!
So, which files do I have to modify to configure HDFS? And where are the default values read from?
Thanks in advance for your help.
These files can be found in /usr/lib/hadoop-2.2.0/etc/hadoop; all the XML configuration files are in that location.
For installing Hadoop 2.2.0, follow this link. It is written for 0.23.9, but it works absolutely fine for 2.2.0.
All the configuration files are located in the etc/hadoop/ directory of the extracted tarball. The hdfs-site.xml file may be named hdfs-site.xml.template; if so, you will need to rename it to hdfs-site.xml.
If you want to see which options are available for HDFS, check the documentation shipped in the tarball at share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml.
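Each entry in that file documents one property together with its default value; a typical entry looks like this (description abridged):

```xml
<property>
  <name>dfs.replication</name>
  <value>3</value>
  <description>Default block replication. The actual number of
  replications can be specified when the file is created.</description>
</property>
```

To override a default, copy the property (without the description) into your hdfs-site.xml and change its value.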
These files are all found in Hadoop's configuration directory (hadoop/conf in 1.x releases, etc/hadoop in 2.x).
To set up HDFS you have to configure core-site.xml and hdfs-site.xml.
HDFS works in two modes: distributed (a multi-node cluster) and pseudo-distributed (a single-machine cluster).
For the pseudo-distributed mode you have to configure:
In core-site.xml:
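A minimal sketch for pseudo-distributed mode: fs.defaultFS tells every HDFS client and daemon where the NameNode listens. hdfs://localhost:9000 is the conventional example address; adjust the host and port for your machine.

```xml
<configuration>
  <!-- URI of the NameNode; all HDFS clients and daemons connect here -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```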
In hdfs-site.xml:
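Again a minimal sketch: a replication factor of 1 fits a single node, and the two directory paths below are placeholders that must point to locations your hadoop user can write to.

```xml
<configuration>
  <!-- One copy of each block is enough on a single node -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- Where the NameNode keeps its metadata (placeholder path) -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hdfs/namenode</value>
  </property>
  <!-- Where the DataNode keeps the actual blocks (placeholder path) -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hdfs/datanode</value>
  </property>
</configuration>
```

After editing both files, format the NameNode once with bin/hdfs namenode -format and start HDFS with sbin/start-dfs.sh.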
Each property you don't set explicitly keeps its hardcoded default value; those defaults are the ones listed in the *-default.xml files (core-default.xml, hdfs-default.xml, etc.), which are bundled inside the Hadoop jars rather than shipped as editable files.
Please remember to set up passwordless SSH login for the hadoop user before starting HDFS (e.g. generate a key pair with ssh-keygen and append the public key to ~/.ssh/authorized_keys).
P.S.
If you download Hadoop from Apache, you can consider switching to a packaged Hadoop distribution:
Cloudera's CDH, Hortonworks HDP, or MapR.
If you install Cloudera CDH or Hortonworks HDP, you will find the configuration files in /etc/hadoop/conf/.