Sunday, December 29, 2013

[Research] Hadoop 1.2.1 Cluster Installation (CentOS 6.5 x64)

2013-12-29

I am new to this; corrections for any errors or omissions are welcome.

References

http://hadoop.apache.org/docs/r1.2.1/single_node_setup.html
http://hadoop.apache.org/docs/r1.2.1/cluster_setup.html

●Environment

Three CentOS 6.5 x86_64 (64-bit) machines

192.168.128.101  master01
192.168.128.102  slave01
192.168.128.103  slave02

●Set a static IP and hostname (do this on all three machines; note that each machine's IP and hostname differ)

Set a static IP

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
HWADDR=00:0c:29:cd:49:e9
TYPE=Ethernet
UUID=778b0414-2c4b-4c39-877c-5902f145ec18
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.128.101
NETMASK=255.255.255.0
GATEWAY=192.168.128.2
DNS1=192.168.128.2
IPV6INIT=no
USERCTL=no

Map hostnames to IP addresses

echo "192.168.128.101  master01" >> /etc/hosts
echo "192.168.128.102  slave01" >> /etc/hosts
echo "192.168.128.103  slave02" >> /etc/hosts
cat /etc/hosts



[root@localhost ~]# vi  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.128.101  master01
192.168.128.102  slave01
192.168.128.103  slave02
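
A quick extra check (my own addition, not required by the original steps): confirm that all three names resolve on each machine through the normal resolver path.

getent hosts master01 slave01 slave02

Each output line should show the IP from /etc/hosts next to the hostname.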

Set the DNS server

[root@localhost ~]# vi /etc/resolv.conf

# Generated by NetworkManager
nameserver 192.168.128.2

Set the hostname (takes effect immediately, but does not survive a reboot)

[root@localhost ~]# hostname  master01

Test

[root@localhost ~]# hostname
master01

Set the hostname persistently (does not take effect immediately; takes effect after a reboot)

[root@localhost local]# vi /etc/sysconfig/network
NETWORKING=yes
#HOSTNAME=localhost.localdomain
HOSTNAME=master01

Restart the network

[root@localhost local]# service network restart
Shutting down interface eth0:  Device state: 3 (disconnected)
                                                           [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:  Active connection state: activated
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/1
                                                           [  OK  ]
[root@localhost local]#
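
After the restart, it is worth confirming that the static address actually took effect (a quick check of my own; interface name assumed to be eth0, as above):

[root@localhost ~]# ifconfig eth0 | grep "inet addr"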


●Install Oracle Java (do this on all three machines)

[Research] Oracle Java Manual Installation and Quick Install Script (CentOS 6.5 x64)
http://shaurong.blogspot.tw/2013/12/oracle-java-centos-65-x64.html

[root@localhost ~]# ./JDK7U45x64_Install.sh
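
Afterwards, a quick check that the JDK is installed and on the PATH:

[root@localhost ~]# java -version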

●Install Hadoop (do this on all three machines)

[root@master01 ~]# wget http://apache.cdpa.nsysu.edu.tw/hadoop/common/hadoop-1.2.1/hadoop-1.2.1-1.x86_64.rpm

[root@master01 ~]# rpm  -ivh  hadoop-1.2.1-1.x86_64.rpm

Check the current state

[root@master01 ~]# hadoop version
Hadoop 1.2.1
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152
Compiled by mattf on Mon Jul 22 15:27:42 PDT 2013
From source with checksum 6923c86528809c4e7e6f493b6b413a9a
This command was run using /usr/share/hadoop/hadoop-core-1.2.1.jar

[root@master01 ~]#  cat /etc/passwd | grep Hadoop
mapred:x:202:123:Hadoop MapReduce:/tmp:/bin/bash
hdfs:x:201:123:Hadoop HDFS:/tmp:/bin/bash

[root@master01 ~]# find / -name hadoop
/usr/bin/hadoop
/usr/etc/hadoop
/usr/share/hadoop
/usr/share/doc/hadoop
/usr/include/hadoop
/etc/hadoop
/var/log/hadoop
/var/run/hadoop
/var/lib/hadoop
[root@master01 ~]#

[root@master01 ~]# export | grep HADOOP
declare -x HADOOP_CLIENT_OPTS="-Xmx128m "
declare -x HADOOP_CONF_DIR="/etc/hadoop"
declare -x HADOOP_IDENT_STRING="root"
declare -x HADOOP_LOG_DIR="/var/log/hadoop/root"
declare -x HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT "
declare -x HADOOP_OPTS="-Djava.net.preferIPv4Stack=true "
declare -x HADOOP_PID_DIR="/var/run/hadoop"
declare -x HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT "
declare -x HADOOP_SECURE_DN_LOG_DIR="/var/log/hadoop/"
declare -x HADOOP_SECURE_DN_PID_DIR="/var/run/hadoop"
declare -x HADOOP_SECURE_DN_USER=""
[root@master01 ~]#


●Create the MapReduce user mr (do this on all three machines)

[root@master01 ~]#  useradd mr

●Run the Hadoop configuration script (only on master01; the files will be copied to the other two machines later)

You can accept the defaults for the other prompts, but note that the final prompt, Proceed with generate configuration? (y/N), must be answered with y.

[root@master01 ~]# /usr/sbin/hadoop-setup-conf.sh
Setup Hadoop Configuration

Where would you like to put config directory? (/etc/hadoop)
Where would you like to put log directory? (/var/log/hadoop)
Where would you like to put pid directory? (/var/run/hadoop)
What is the host of the namenode? (master01)
Where would you like to put namenode data directory? (/var/lib/hadoop/hdfs/namenode)
Where would you like to put datanode data directory? (/var/lib/hadoop/hdfs/datanode)
What is the host of the jobtracker? (master01)
Where would you like to put jobtracker/tasktracker data directory? (/var/lib/hadoop/mapred)
Where is JAVA_HOME directory? (/usr/java/jdk1.7.0_45)
Would you like to create directories/copy conf files to localhost? (Y/n)

Review your choices:

Config directory            : /etc/hadoop
Log directory               : /var/log/hadoop
PID directory               : /var/run/hadoop
Namenode host               : master01
Namenode directory          : /var/lib/hadoop/hdfs/namenode
Datanode directory          : /var/lib/hadoop/hdfs/datanode
Jobtracker host             : master01
Mapreduce directory         : /var/lib/hadoop/mapred
Task scheduler              : org.apache.hadoop.mapred.JobQueueTaskScheduler
JAVA_HOME directory         : /usr/java/jdk1.7.0_45
Create dirs/copy conf files : y

Proceed with generate configuration? (y/N)
chown: invalid user: `mr:hadoop'
chown: invalid user: `mr:hadoop'
chown: invalid user: `mr:hadoop'
Configuration setup is completed.
Proceed to run hadoop-setup-hdfs.sh on namenode.
[root@master01 ~]#
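
The three chown: invalid user: `mr:hadoop' messages above suggest the script expected a hadoop group that neither the rpm nor useradd created. A hedged fix, assuming that is the cause (the exact directory the script tried to chown may differ on your system):

groupadd hadoop
usermod -a -G hadoop mr
# redo the ownership change the script attempted; /var/lib/hadoop/mapred is an
# assumption based on the mapred data directory chosen above
chown -R mr:hadoop /var/lib/hadoop/mapred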

It updates the following configuration files:

[root@master01 ~]# ls -al /etc/hadoop/*.xml
-rw-r--r--. 1 root root 6930 Dec 29 12:08 /etc/hadoop/capacity-scheduler.xml
-rw-r--r--. 1 root root 2063 Dec 29 12:08 /etc/hadoop/core-site.xml
-rw-r--r--. 1 root root  327 Jul 23 06:29 /etc/hadoop/fair-scheduler.xml
-rw-r--r--. 1 root root 4653 Dec 29 12:08 /etc/hadoop/hadoop-policy.xml
-rw-r--r--. 1 root root 6589 Dec 29 12:08 /etc/hadoop/hdfs-site.xml
-rw-r--r--. 1 root root  298 Dec 29 12:08 /etc/hadoop/mapred-queue-acls.xml
-rw-r--r--. 1 root root 9589 Dec 29 12:08 /etc/hadoop/mapred-site.xml
[root@master01 ~]#

●Edit masters and slaves (only on master01; the files will be copied to the other two machines later)

[root@master01 ~]# vi  /etc/hadoop/masters
master01

[root@master01 ~]# vi /etc/hadoop/slaves
master01
slave01
slave02

Note that master01 is listed in slaves as well, so it will also run a DataNode and a TaskTracker itself (you can see this later in the start-all.sh output).

●Set up passwordless SSH (do this on all three machines)

The goal is to let master01 connect to slave01 and slave02 automatically to start each machine's services, such as the DataNode and TaskTracker.

[root@master01 ~]# yum  -y  install  openssh  openssh-server  rsync
[root@master01 ~]# service sshd restart
[root@master01 ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
ca:04:30:8d:be:bd:91:a2:c3:c4:94:cf:18:c3:43:cb root@localhost.localdomain
The key's randomart image is:
+--[ DSA 1024]----+
|  oo             |
| ..o.            |
|+.o .            |
| E.  .           |
|o Bo .. S        |
| +oo+o .         |
|o. . oo          |
|o.  .            |
| .               |
+-----------------+
[root@master01 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@master01 ~]# chmod 600 ~/.ssh/authorized_keys

Run the same on slave01

yum  -y  install  openssh  openssh-server  rsync
service sshd restart
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Run the same on slave02

yum  -y  install  openssh  openssh-server  rsync
service sshd restart
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Make each machine able to SSH to itself without a password (do this on all three machines, connecting to root@localhost in each case).
The first time it will ask for confirmation; answer yes, then run exit to leave.

[root@master01 ~]# ssh root@localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 6d:a4:8e:a6:b5:b0:e9:c4:e8:5b:55:be:e4:bd:04:60.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Thu Dec 26 18:30:35 2013 from 192.168.128.1
[root@master01 ~]# exit
logout
Connection to localhost closed.

The second time it should not ask; run exit to leave.

[root@master01 ~]# ssh root@localhost
Last login: Thu Dec 26 18:31:05 2013 from localhost
[root@master01 ~]# exit
logout
Connection to localhost closed.
[root@master01 ~]#

Take a look

[root@master01 ~]# cat ~/.ssh/authorized_keys
ssh-dss AAAAB3NzaC1kc3MAAACBAJvVJ7rK7QX2JcAGAwk85l5B7Cm2QUIrQ6RjaSsMDQTZEV6LJ8lWAkdlXIOJhte0EzylPLzxUvckjpr9wEtoZjBjh6i8qklzheQMfLbZUQG3QAxWqeoZYbSdDnoIsHOBSQbckjYiUOvpQECIetiBDQQUdjWglB8jLKWGWa42hUXPAAAAFQDMVDU+CdpFDmp/6PhvBiREpIwHAwAAAIAzXR5aFwO0pUWPAltTwkoruJkiOzl+iC5mrXUJQaEwXXnWJLBYxwLVm/sbNFcMBRLN6+DDp0RoYKe+AIiK51TPVlKGXqfpdPNMkrYYuJronkLGfRg215ko5DCFs/Zz9xsEHfKo48dmn/jy0fySvABwb6LAy3TFYgJBOHpp+lwVtgAAAIBrV22S3BubY4WU2T/BDHY9lfcz4nlSfV5izfjpnAXQ+e5NxD5NlGXmANb6vUcS3z9/dYXpHgAb4ZlpWEYFCLbiALA11fdscHA/bxdYp0nyhHZsZOAZQMR8Hzb6c/xX+btC5+3vmoNsTjhAySmke7SKnQR6yUFvBtjs+D3xvUZc6g== root@master01

[root@master01 ~]# cat ~/.ssh/known_hosts
localhost ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==

(Do this on each of the three machines, connecting to root@localhost in each case.)

The long base64 string at the end may differ from machine to machine, and yours will differ from what is shown here.

Edit
[root@master01 ~]# vi  ~/.ssh/known_hosts

localhost ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==

Change the content to the following (adding the other host aliases):

localhost,127.0.0.1,master01,192.168.128.101 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==

On slave01, ~/.ssh/known_hosts should read (note the different IP and hostname):

localhost,127.0.0.1,slave01,192.168.128.102 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==

On slave02, ~/.ssh/known_hosts should read (note the different IP and hostname):

localhost,127.0.0.1,slave02,192.168.128.103 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==
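
Instead of hand-editing known_hosts on every machine, ssh-keyscan can collect the RSA host keys in one step. A sketch, run on each machine with that machine's own names (shown here for master01; adjust the hostname and IP per host as above):

ssh-keyscan -t rsa localhost 127.0.0.1 master01 192.168.128.101 >> ~/.ssh/known_hosts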


●Let the master SSH to the slaves without a password (the steps on master01 differ from those on slave01 and slave02)

On master01, copy the key to slave01

[root@master01 ~]# scp ~/.ssh/authorized_keys root@slave01:~/.ssh/authorized_keys_from_master01
The authenticity of host 'slave01 (192.168.128.102)' can't be established.
RSA key fingerprint is b5:78:67:c6:4b:29:82:9d:f7:49:e7:02:d9:ec:09:17.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave01,192.168.128.102' (RSA) to the list of known hosts.
root@slave01's password:
authorized_keys                               100%  611     0.6KB/s   00:00

On master01, copy the key to slave02

[root@master01 ~]# scp ~/.ssh/authorized_keys root@slave02:~/.ssh/authorized_keys_from_master01
The authenticity of host 'slave02 (192.168.128.103)' can't be established.
RSA key fingerprint is ac:e1:83:2b:ee:e2:e2:0b:1c:df:06:c7:84:1b:56:de.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave02,192.168.128.103' (RSA) to the list of known hosts.
root@slave02's password:
authorized_keys                               100%  611     0.6KB/s   00:00

On slave01, append master01's key to the authorized-keys file

[root@slave01 ~]# cat ~/.ssh/authorized_keys_from_master01 >> ~/.ssh/authorized_keys

Check

[root@slave01 ~]# cat ~/.ssh/authorized_keys
ssh-dss AAAAB3NzaC1kc3MAAACBAOYUc5Q7GXnQgHdL3durY297VrEBFrFbTiqNcQoxUjsO1H9exXxU2U06ahcxVGM1sMOqgbTy5aQrNk6P6Lv0f3Lxwks+C07BeY0SBdfmoRotN/8dPb/4Ykk9WSRBo0x7a8HMWqidoVwb73Etsyc10aa0ujP/iwKVhICKY6w3y+IpAAAAFQCT+NAUGf3DhKCUgNBpkVVvWUts7QAAAIBHT4CIqeo2TAKrpF9chXNdd3IklAeidfpwb/p8WGVB0qdrgf8g7OD1E5/ZbSM7aebmbAR9AMGjTi+tcCbmI53JhuHLnMzrmP1P6+BmZxfiq1//GNz2uOsrLZzV4+BLKA7DNYgdeCLV7/GsQX0kc7FZLwK1mtdZVDMI+rOsB/j6sAAAAIEAzBZ3cv9L4qmaY3FoAttr3wbt2c1JJIWFo0CUCc+icDYM8S7jGmVOScfFAg0M81VLJVEli1Tr7/MRFJxEftHRSxEdooUBltRXmx5XjXfEM9tXN/nT9RuSiQop5XCMMSNFVYF/G1XxywyAh7mRvreibG0fxcfyuC2meorqa31PlCU= root@slave01
ssh-dss AAAAB3NzaC1kc3MAAACBAJvVJ7rK7QX2JcAGAwk85l5B7Cm2QUIrQ6RjaSsMDQTZEV6LJ8lWAkdlXIOJhte0EzylPLzxUvckjpr9wEtoZjBjh6i8qklzheQMfLbZUQG3QAxWqeoZYbSdDnoIsHOBSQbckjYiUOvpQECIetiBDQQUdjWglB8jLKWGWa42hUXPAAAAFQDMVDU+CdpFDmp/6PhvBiREpIwHAwAAAIAzXR5aFwO0pUWPAltTwkoruJkiOzl+iC5mrXUJQaEwXXnWJLBYxwLVm/sbNFcMBRLN6+DDp0RoYKe+AIiK51TPVlKGXqfpdPNMkrYYuJronkLGfRg215ko5DCFs/Zz9xsEHfKo48dmn/jy0fySvABwb6LAy3TFYgJBOHpp+lwVtgAAAIBrV22S3BubY4WU2T/BDHY9lfcz4nlSfV5izfjpnAXQ+e5NxD5NlGXmANb6vUcS3z9/dYXpHgAb4ZlpWEYFCLbiALA11fdscHA/bxdYp0nyhHZsZOAZQMR8Hzb6c/xX+btC5+3vmoNsTjhAySmke7SKnQR6yUFvBtjs+D3xvUZc6g== root@master01

The two lines end with root@slave01 and root@master01 respectively, meaning both accounts can log in over SSH without a password.

On slave02, append master01's key to the authorized-keys file

[root@slave02 ~]# cat ~/.ssh/authorized_keys_from_master01 >> ~/.ssh/authorized_keys
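
For reference, ssh-copy-id (from the openssh-clients package) can replace the scp-and-append steps above in one command per slave. A sketch, run from master01:

ssh-copy-id -i ~/.ssh/id_dsa.pub root@slave01
ssh-copy-id -i ~/.ssh/id_dsa.pub root@slave02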

●Copy the Hadoop configuration files from master01 to slave01 and slave02 (only on master01)

[root@master01 ~]# scp   /etc/hadoop/*   root@192.168.128.102:/etc/hadoop/.

[root@master01 ~]# scp   /etc/hadoop/*   root@192.168.128.103:/etc/hadoop/.
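
A quick way to confirm the copies match (my own addition) is to compare checksums of one of the generated files; all three should be identical:

[root@master01 ~]# md5sum /etc/hadoop/core-site.xml
[root@master01 ~]# ssh root@slave01 md5sum /etc/hadoop/core-site.xml
[root@master01 ~]# ssh root@slave02 md5sum /etc/hadoop/core-site.xml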

●Format the distributed file system

[root@master01 ~]# hadoop namenode -format
13/12/29 12:25:39 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master01/192.168.128.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:27:42 PDT 2013
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
13/12/29 12:25:39 INFO util.GSet: Computing capacity for map BlocksMap
13/12/29 12:25:39 INFO util.GSet: VM type       = 64-bit
13/12/29 12:25:39 INFO util.GSet: 2.0% max memory = 129761280
13/12/29 12:25:39 INFO util.GSet: capacity      = 2^18 = 262144 entries
13/12/29 12:25:39 INFO util.GSet: recommended=262144, actual=262144
13/12/29 12:25:39 INFO namenode.FSNamesystem: fsOwner=root
13/12/29 12:25:39 INFO namenode.FSNamesystem: supergroup=supergroup
13/12/29 12:25:39 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/12/29 12:25:39 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/12/29 12:25:39 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/12/29 12:25:39 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
13/12/29 12:25:39 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/12/29 12:25:40 INFO common.Storage: Image file /var/lib/hadoop/hdfs/namenode/current/fsimage of size 110 bytes saved in 0 seconds.
13/12/29 12:25:40 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/var/lib/hadoop/hdfs/namenode/current/edits
13/12/29 12:25:40 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/var/lib/hadoop/hdfs/namenode/current/edits
13/12/29 12:25:40 INFO common.Storage: Storage directory /var/lib/hadoop/hdfs/namenode has been successfully formatted.
13/12/29 12:25:40 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master01/192.168.128.101
************************************************************/
[root@master01 ~]#
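
To confirm the format succeeded, list the new storage directory; per the log above it should contain fsimage and edits:

[root@master01 ~]# ls -l /var/lib/hadoop/hdfs/namenode/current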

●Test starting each Hadoop service (all three machines)

(To save time you can skip these per-service tests and only come back to them if start-all.sh reveals a problem.)

Test starting and stopping the NameNode

In my experience, whether you start or stop with hadoop-daemon.sh, start-all.sh, or stop-all.sh, the reported result is not 100% reliable even when it claims success; it is best to verify with ps.

[root@master01 hadoop]# /usr/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /var/log/hadoop/root/hadoop-root-namenode-master01.out

[root@master01 sbin]# ps aux | grep hadoop | awk '{print $1 "\t" $2 "\t" $11 "\t"  $12}'
root    34233   /usr/java/jdk1.7.0_45/bin/java  -Dproc_namenode
root    34297   grep    hadoop

To avoid retyping this long pipeline, you can create an hs.sh script dedicated to it.

[root@master01 hadoop]# vi  /usr/bin/hs.sh

Content:

ps aux | grep hadoop | awk '{print $1 "\t" $2 "\t" $11 "\t"  $12}'

Make it executable:
[root@master01 hadoop]# chmod +x  /usr/bin/hs.sh
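
One optional refinement (my own; the listings below keep the original version): the grep process itself shows up in the output (the grep hadoop line). The usual bracket trick stops grep from matching its own command line:

ps aux | grep '[h]adoop' | awk '{print $1 "\t" $2 "\t" $11 "\t" $12}'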


The JDK's jps tool is another way to list the Hadoop Java processes (the listing below was evidently taken while all services were running):

[root@master01 ~]# jps
5267 JobTracker
29599 Jps
5394 TaskTracker
5047 DataNode
4927 NameNode
5174 SecondaryNameNode

jps ships with the JDK:

[root@master01 ~]# find / -name jps
/usr/java/jdk1.7.0_45/bin/jps

[root@master01 hadoop]# /usr/sbin/hadoop-daemon.sh stop namenode
stopping namenode

Test starting and stopping the DataNode

[root@master01 hadoop]# /usr/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-master01.out

[root@master01 sbin]# ps aux | grep hadoop | awk '{print $1 "\t" $2 "\t" $11 "\t"  $12}'
root    34327   /usr/java/jdk1.7.0_45/bin/java  -Dproc_datanode
root    34392   grep    hadoop

[root@master01 hadoop]# /usr/sbin/hadoop-daemon.sh stop datanode
stopping datanode

Test starting and stopping the JobTracker

[root@master01 hadoop]# /usr/sbin/hadoop-daemon.sh start jobtracker
starting jobtracker, logging to /var/log/hadoop/root/hadoop-root-jobtracker-master01.out

[root@master01 sbin]# ps aux | grep hadoop | awk '{print $1 "\t" $2 "\t" $11 "\t"  $12}'
root    34424   /usr/java/jdk1.7.0_45/bin/java  -Dproc_jobtracker
root    34497   grep    hadoop

[root@master01 hadoop]# /usr/sbin/hadoop-daemon.sh stop jobtracker
stopping jobtracker

Test starting and stopping the TaskTracker

[root@master01 hadoop]# /usr/sbin/hadoop-daemon.sh start tasktracker
starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-master01.out

[root@master01 ~]# ps aux | grep hadoop | awk '{print $1 "\t" $2 "\t" $11 "\t"  $12}'
root    34119   /usr/java/jdk1.7.0_45/bin/java  -Dproc_tasktracker
root    34169   grep    hadoop

[root@master01 hadoop]# /usr/sbin/hadoop-daemon.sh stop tasktracker
stopping tasktracker

After all the individual tests succeed, test starting all services together.

To avoid firewall problems, stop iptables first and disable it at boot:

[root@master01 ~]# service iptables stop
[root@slave01 ~]# service iptables stop
[root@slave02 ~]# service iptables stop

[root@master01 ~]# chkconfig iptables off
[root@slave01 ~]# chkconfig iptables off
[root@slave02 ~]# chkconfig iptables off

Also make sure the Hadoop helper scripts are executable (the rpm may not set the execute bit on them):

[root@master01 ~]# chmod +x /usr/sbin/*.sh
[root@slave01 ~]# chmod +x /usr/sbin/*.sh
[root@slave02 ~]# chmod +x /usr/sbin/*.sh

●Start the Hadoop cluster (only on master01; it will start the services on slave01 and slave02 automatically)

[root@master01 ~]# start-all.sh
starting namenode, logging to /var/log/hadoop/root/hadoop-root-namenode-master01.out
slave01: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-slave01.out
slave02: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-slave02.out
master01: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-master01.out
master01: starting secondarynamenode, logging to /var/log/hadoop/root/hadoop-root-secondarynamenode-master01.out
starting jobtracker, logging to /var/log/hadoop/root/hadoop-root-jobtracker-master01.out
slave01: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-slave01.out
slave02: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-slave02.out
master01: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-master01.out

Check on master01

[root@master01 ~]# hs.sh
root    3330    /usr/java/jdk1.7.0_45/bin/java  -Dproc_namenode
root    3449    /usr/java/jdk1.7.0_45/bin/java  -Dproc_datanode
root    3575    /usr/java/jdk1.7.0_45/bin/java  -Dproc_secondarynamenode
root    3667    /usr/java/jdk1.7.0_45/bin/java  -Dproc_jobtracker
root    3792    /usr/java/jdk1.7.0_45/bin/java  -Dproc_tasktracker
root    3843    grep    hadoop
[root@master01 ~]#

Check on slave01

[root@slave01 ~]# hs.sh
root    2456    /usr/java/jdk1.7.0_45/bin/java  -Dproc_datanode
root    2551    /usr/java/jdk1.7.0_45/bin/java  -Dproc_tasktracker
root    2623    grep    hadoop
[root@slave01 ~]#

Check on slave02

[root@slave02 ~]# hs.sh
root    2456    /usr/java/jdk1.7.0_45/bin/java  -Dproc_datanode
root    2551    /usr/java/jdk1.7.0_45/bin/java  -Dproc_tasktracker
root    2623    grep    hadoop
[root@slave02 ~]#
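
As an optional smoke test (not part of the original walkthrough), write a file into HDFS from master01 and list it back:

[root@master01 ~]# hadoop fs -mkdir /test
[root@master01 ~]# hadoop fs -put /etc/hosts /test/hosts
[root@master01 ~]# hadoop fs -ls /test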

●Test the Hadoop web management interfaces

Test the HDFS web management interface
http://192.168.128.101:50070


(Figure below) After clicking the Browse the filesystem link in the screenshot above (this appears to have a problem; still investigating)

(Figure below) After clicking the Live Nodes link in the first screenshot

(Figure below) After clicking the Dead Nodes link in the first screenshot

(Figure below) After clicking the Decommissioning Nodes link in the first screenshot

(Figure below) Test the MapReduce web management interface
http://192.168.128.101:50030



(Figure below) The page shown after clicking the default link in the screenshot above
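
Both web UIs can also be sanity-checked from a terminal. A quick curl check of my own against the default ports (each should print 200):

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.128.101:50070/
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.128.101:50030/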

Stop

[root@master01 ~]#  /usr/sbin/stop-all.sh
stopping jobtracker
slave01: stopping tasktracker
slave02: stopping tasktracker
master01: stopping tasktracker
stopping namenode
slave01: stopping datanode
master01: stopping datanode
slave02: stopping datanode
master01: stopping secondarynamenode
[root@master01 ~]#


(End)

Related

[Research] Hadoop 2.2.0 Cluster Installation (CentOS 6.5 x64)
http://shaurong.blogspot.tw/2013/12/hadoop-220-cluster-centos-65-x64.html

[Research] Hadoop 1.2.1 Cluster Installation (CentOS 6.5 x64)
http://shaurong.blogspot.tw/2013/12/hadoop-121-cluster-centos-65-x64_29.html

[Research] Hadoop 2.2.0 Compilation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html

[Research] Hadoop 2.2.0 Single Cluster Installation (Part 2) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64_7.html

[Research] Hadoop 2.2.0 Single Cluster Installation (Part 1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html

[Research] Hadoop 1.2.1 (rpm) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/10/hadoop-121-rpm-centos-64-x64.html

[Research] Hadoop 1.2.1 (bin) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html

[Research] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035

[Research] Cloud Software Hadoop 1.0.0 Installation (CentOS 6.2 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=21166

[Research] Cloud Software Hadoop 0.20.2 Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=18513

[Research] Cloud Software Hadoop 0.20.2 Installation (CentOS 5.4 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=17974
