Hadoop cluster configuration with Ubuntu Master and Windows 7 slave

Published 2020-07-24 03:02

Hi, I am new to Hadoop.

Hadoop Version (2.2.0)

Goals:

  1. Setup Hadoop standalone - Ubuntu 12 (Completed)
  2. Setup Hadoop standalone - Windows 7 (Cygwin used only for sshd) (Completed)
  3. Setup cluster with Ubuntu Master and Windows 7 slave (this is mostly for learning purposes and setting up an env for development) (Stuck)

Setup relevant to the questions below:

  • Master running on Ubuntu with Hadoop 2.2.0
  • Slaves running on Windows 7 with a self-compiled version of the Hadoop 2.2.0 source; Cygwin is used only for sshd
  • Passwordless login is set up and I am able to log in both ways using ssh outside of Hadoop. Since my Ubuntu and Windows machines have different usernames, I have set up a config file in the .ssh folder that maps hosts to users (a minimal sketch is shown after this list)
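
For reference, the mapping mentioned above is a standard OpenSSH client config; a minimal sketch, with made-up host name, IP and username, looks like this:

    # ~/.ssh/config on the Ubuntu master (host name, IP and user are placeholders)
    Host win-slave1
        HostName 192.168.56.101
        User winuser
        IdentityFile ~/.ssh/id_rsa

With this in place, "ssh win-slave1" from the master logs in as the Windows user without having to type user@host each time.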

Questions:

  1. In a cluster, does the username on the master need to be the same as on the slave? I ask because, after configuring the cluster, when I run start-dfs.sh the logs show that it can ssh into the slave nodes but cannot find the location "/home/xxx/hadoop/bin/hadoop-daemon.sh" on the slave. The "xxx" is my master username, not the slave one. Also, since my slave is a pure Windows install, it lives under C:/hadoop/... Does the master look at the env variable $HADOOP_HOME to figure out where the install is on the slave? Are there any other env variables I need to set? (See the first sketch after this list.)

  2. My goal was to use the Windows Hadoop build on the slave, since Hadoop now officially supports Windows. But would it be better to run the Linux build under Cygwin to accomplish this? The question comes up because I see that start-dfs.sh tries to execute hadoop-daemon.sh and not some *.cmd. (See the second sketch after this list.)

  3. If this setup works out, a follow-up question is whether Pig, Mahout, etc. will run in this kind of setup, as I have not seen Windows builds of Pig or Mahout. Do these components need to be present only on the master node, or do they need to be on the slave nodes too? While experimenting with standalone mode I saw two ways of running Mahout: first via the mahout script, which I was able to use on Linux, and second via the yarn jar command, where I passed in the Mahout jar while using the Windows version. If Mahout/Pig (when using the provided sh script) assume that the slaves already have the jars in place, then the Ubuntu + Windows combo does not seem workable. Please advise. (See the third sketch after this list.)
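
Sketch for question 1: from what I can tell, start-dfs.sh ends up calling sbin/slaves.sh, which loops over the hosts in the slaves file and launches the daemon script over ssh. Roughly, and this is a simplified sketch rather than the literal script:

    # Simplified sketch of what sbin/slaves.sh effectively does (not the literal script):
    for slave in $(grep -v '^#' "$HADOOP_CONF_DIR/slaves"); do
        # $HADOOP_PREFIX and $HADOOP_CONF_DIR are expanded HERE, on the master,
        # before ssh runs, so the slave is expected to have the same install path
        # (and a compatible username/home layout) as the master.
        ssh $HADOOP_SSH_OPTS "$slave" \
            "cd $HADOOP_PREFIX; $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode" &
    done
    wait

In other words, the "/home/xxx/..." path in the log comes from the master's own install location; the master does not appear to consult $HADOOP_HOME on the slave for this step.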
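Sketch for question 2: the sbin/*.sh scripts only know how to drive slaves via ssh and the shell scripts; they do not fall back to the *.cmd variants. If you stay with the native Windows build, one option is to start the worker daemons on the slave by hand. The exact invocation below is an assumption based on your C:\hadoop install:

    :: Run on the Windows 7 slave itself (install path is an assumption)
    set HADOOP_HOME=C:\hadoop
    set PATH=%HADOOP_HOME%\bin;%PATH%
    :: Start the worker daemons directly instead of relying on start-dfs.sh from the master.
    :: Each command stays in the foreground, so run them in separate command prompts.
    hdfs datanode
    yarn nodemanager

The daemons then register with the NameNode/ResourceManager on the Ubuntu master, so the master-side ssh machinery is bypassed entirely.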
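Sketch for question 3: Pig and Mahout are client-side tools, so as far as I know they only need to be installed on whichever machine submits the job (typically the master or an edge node); the job jar is shipped to the cluster by YARN. A hedged example of the yarn jar route already mentioned, where the jar path, version and input/output paths are placeholders:

    # Submitted from the Ubuntu master; jar path, version and I/O paths are placeholders
    yarn jar /opt/mahout/mahout-examples-0.9-job.jar \
        org.apache.mahout.driver.MahoutDriver seqdirectory \
        -i /input/docs -o /output/seqdir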

As I mentioned, this is more of an experiment than an implementation plan. Our final env will be completely on Linux. Thank you for your suggestions.

Tags: hadoop
2 Answers
Answer 1 · 我欲成王，谁敢阻挡 · 2020-07-24 03:37

I have only worked with the same username on both ends. In general, ssh allows you to log in under a different login name with the -l option, but this might get tricky. You have to list your slaves in the slaves file.

At least in the manual (https://hadoop.apache.org/docs/r0.19.1/cluster_setup.html#Slaves) I did not find any way to specify usernames. It might be worth trying to add -l login_name to the slave node entry in the slaves file and seeing if it works.
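
If I remember right, slaves.sh also passes $HADOOP_SSH_OPTS straight to ssh, so another route that may be worth a try is forcing the login name there instead of in the slaves file. A sketch, with placeholder host names and username:

    # etc/hadoop/slaves on the master -- one slave host per line (names are placeholders)
    win-slave1
    win-slave2

    # etc/hadoop/hadoop-env.sh on the master -- slaves.sh hands this to ssh
    export HADOOP_SSH_OPTS="-l slaveuser"

Since you already have a ~/.ssh/config that maps hosts to users, that should achieve the same effect for the ssh step; the remaining problem is the remote path, not the login name.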

Answer 2 · 爷的心禁止访问 · 2020-07-24 04:01

You may have more success with more standard ways of deploying Hadoop. Try using Ubuntu VMs for both master and slaves.

You can also try a pseudo-distributed deployment, in which all of the processes run on a single VM, and thus avoid having to deal with multiple OSes at all.
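
For completeness, a minimal pseudo-distributed setup on a single Ubuntu VM usually only needs something along these lines (localhost:9000 and replication 1 are the conventional single-node defaults; adjust as needed):

    <!-- etc/hadoop/core-site.xml -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    <!-- etc/hadoop/hdfs-site.xml -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>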
