Sunday, December 29, 2013

[Study] Hadoop 2.2.0 Cluster Installation (CentOS 6.5 x64)

2013-12-29

I'm a beginner at this; corrections for any errors or omissions are welcome.

References

http://hadoop.apache.org/docs/r1.2.1/single_node_setup.html
http://hadoop.apache.org/docs/r1.2.1/cluster_setup.html

● Environment

Three CentOS 6.5 x86_64 (64-bit) machines:

192.168.128.101  master01
192.168.128.102  slave01
192.168.128.103  slave02

● Set a static IP and host name (do this on all three machines; note that each machine uses a different IP and host name)

Set the static IP

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
HWADDR=00:0c:29:cd:49:e9
TYPE=Ethernet
UUID=778b0414-2c4b-4c39-877c-5902f145ec18
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.128.101
NETMASK=255.255.255.0
GATEWAY=192.168.128.2
DNS1=192.168.128.2
IPV6INIT=no
USERCTL=no

Map host names to IP addresses

echo "192.168.128.101  master01" >> /etc/hosts
echo "192.168.128.102  slave01" >> /etc/hosts
echo "192.168.128.103  slave02" >> /etc/hosts
cat /etc/hosts



[root@localhost ~]# vi  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.128.101  master01
192.168.128.102  slave01
192.168.128.103  slave02
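Once /etc/hosts is in place, it's worth confirming that each node name appears exactly once. The check below is a hedged sketch that writes the same three mappings to a scratch file so it can run anywhere; on the real machines you would grep /etc/hosts itself:

```shell
# Sketch: the same three mappings written to a scratch file and checked.
# On the real machines the target would be /etc/hosts.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.128.101  master01
192.168.128.102  slave01
192.168.128.103  slave02
EOF
# Every node name should appear exactly once.
for h in master01 slave01 slave02; do
    n=$(grep -cw "$h" "$HOSTS_FILE")
    echo "$h: $n entry"
done
```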

Configure the DNS server

[root@localhost ~]# vi /etc/resolv.conf

# Generated by NetworkManager
nameserver 192.168.128.2

Set the host name (takes effect immediately, but is lost after a reboot)

[root@localhost ~]# hostname  master01

Test:

[root@localhost ~]# hostname
master01

Set the host name persistently (does not take effect until after a reboot)

[root@localhost local]# vi /etc/sysconfig/network
NETWORKING=yes
#HOSTNAME=localhost.localdomain
HOSTNAME=master01

Restart the network service

[root@localhost local]# service network restart
Shutting down interface eth0:  Device state: 3 (disconnected)
                                                           [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:  Active connection state: activated
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/1
                                                           [  OK  ]
[root@localhost local]#


● Install Oracle Java (on all three machines)

[Study] Oracle Java Manual Installation and Quick-Install Script (CentOS 6.5 x64)
http://shaurong.blogspot.tw/2013/12/oracle-java-centos-65-x64.html

[root@localhost ~]# ./JDK7U45x64_Install.sh

● Install Hadoop (on all three machines)

For how to obtain hadoop-2.2.0-x86-x86_64.tar.gz, see:

[Study] Building Hadoop 2.2.0 (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html

cd /usr/local
tar zxvf hadoop-2.2.0-x86-x86_64.tar.gz
echo 'export HADOOP_HOME=/usr/local/hadoop-2.2.0' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile

Check the installation

[root@master01 hadoop]# hadoop version
Hadoop 2.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
This command was run using /usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar
[root@master01 hadoop]#

● Edit the masters and slaves files (only on master01; they will be copied to the other two machines later)

[root@master01 ~]# vi /usr/local/hadoop-2.2.0/etc/hadoop/masters
master01

[root@master01 ~]# vi /usr/local/hadoop-2.2.0/etc/hadoop/slaves
master01
slave01
slave02

● Set up passwordless ssh (on all three machines)

The goal is for master01 to connect to slave01 and slave02 automatically and start each machine's services, such as the DataNode and task daemons.

[root@master01 ~]# yum -y install openssh rsync
[root@master01 ~]# service sshd restart
[root@master01 ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
ca:04:30:8d:be:bd:91:a2:c3:c4:94:cf:18:c3:43:cb root@localhost.localdomain
The key's randomart image is:
+--[ DSA 1024]----+
|  oo             |
| ..o.            |
|+.o .            |
| E.  .           |
|o Bo .. S        |
| +oo+o .         |
|o. . oo          |
|o.  .            |
| .               |
+-----------------+
[root@master01 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@master01 ~]# chmod 600 ~/.ssh/authorized_keys

Run the same commands on slave01:

yum  -y  install  openssh  rsync
service sshd restart
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Run the same commands on slave02:

yum  -y  install  openssh  rsync
service sshd restart
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Allow each machine to ssh to itself without a password (do this separately on all three machines, always connecting as root@localhost).
On the first connection you will be asked to confirm; answer yes, then run exit to leave.

[root@master01 ~]# ssh root@localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 6d:a4:8e:a6:b5:b0:e9:c4:e8:5b:55:be:e4:bd:04:60.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Thu Dec 26 18:30:35 2013 from 192.168.128.1
[root@master01 ~]# exit
logout
Connection to localhost closed.

The second time it should not ask; run exit to leave.

[root@master01 ~]# ssh root@localhost
Last login: Thu Dec 26 18:31:05 2013 from localhost
[root@master01 ~]# exit
logout
Connection to localhost closed.
[root@master01 ~]#

Check the results

[root@master01 ~]# cat ~/.ssh/authorized_keys
ssh-dss AAAAB3NzaC1kc3MAAACBAJvVJ7rK7QX2JcAGAwk85l5B7Cm2QUIrQ6RjaSsMDQTZEV6LJ8lWAkdlXIOJhte0EzylPLzxUvckjpr9wEtoZjBjh6i8qklzheQMfLbZUQG3QAxWqeoZYbSdDnoIsHOBSQbckjYiUOvpQECIetiBDQQUdjWglB8jLKWGWa42hUXPAAAAFQDMVDU+CdpFDmp/6PhvBiREpIwHAwAAAIAzXR5aFwO0pUWPAltTwkoruJkiOzl+iC5mrXUJQaEwXXnWJLBYxwLVm/sbNFcMBRLN6+DDp0RoYKe+AIiK51TPVlKGXqfpdPNMkrYYuJronkLGfRg215ko5DCFs/Zz9xsEHfKo48dmn/jy0fySvABwb6LAy3TFYgJBOHpp+lwVtgAAAIBrV22S3BubY4WU2T/BDHY9lfcz4nlSfV5izfjpnAXQ+e5NxD5NlGXmANb6vUcS3z9/dYXpHgAb4ZlpWEYFCLbiALA11fdscHA/bxdYp0nyhHZsZOAZQMR8Hzb6c/xX+btC5+3vmoNsTjhAySmke7SKnQR6yUFvBtjs+D3xvUZc6g== root@master01

[root@master01 ~]# cat ~/.ssh/known_hosts
localhost ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==

(Do this separately on all three machines, always connecting as root@localhost.)

The long base64 string at the end may differ from machine to machine, and yours will likely differ from mine.

Edit the file:
[root@master01 ~]# vi  ~/.ssh/known_hosts

localhost ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==

Change the content to the following (adding additional host names):

localhost,127.0.0.1,master01,192.168.128.101 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==

On slave01, ~/.ssh/known_hosts should read (note the different IP and host name):

localhost,127.0.0.1,slave01,192.168.128.102 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==

On slave02, ~/.ssh/known_hosts should read (note the different IP and host name):

localhost,127.0.0.1,slave02,192.168.128.103 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnCCNfMSnYd+cqCXPG3d+Okhz7BRxNjPx5dvA5PdeWEHKFvGBgJPX3m8cKMbD5yH9OTUEO9+gaUwSPCzAXFrUIbgEVzHdhVlWHN3MC+qGxp5ZNYf4JbyJzVhH0P5lbOTn6VNfVRJoMf1Ff1+D6OLXJ6vx1ZVpiEBiWZc3szFXvd/BpEazFUaSLhAR3UopKJ2r6GVjnVTpEWHhIs4hkiEHkLPUQfdupRmjZ4QMfoT2PJ36Yc4Xk+z/ShPBQsnrhMJyMwwvkm0WTJKrAGHQxiIzxbE3oPUHc/4n41tD9n1uREsVzILm7mb6VpYAbLSPkeplIqt9DA9itNRwDUjta98Eaw==
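Hand-editing known_hosts on three machines is error-prone; the alias rewrite can also be done with sed. The sketch below works on a scratch copy with a stand-in key line; on a real node, KH would be ~/.ssh/known_hosts and ALIASES would use that machine's own name and IP:

```shell
# Rewrite the first field of the key line so the same host key is
# accepted under every alias the host is reached by.
KH=$(mktemp)
echo 'localhost ssh-rsa AAAAB3NzaEXAMPLEKEY' > "$KH"   # stand-in key line
ALIASES="localhost,127.0.0.1,master01,192.168.128.101"
sed -i "s/^localhost /$ALIASES /" "$KH"
cat "$KH"
```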


● Allow the master to ssh into every slave without a password (the procedure on master01 differs from slave01 and slave02)

On master01, copy the key to slave01:

[root@master01 ~]# scp ~/.ssh/authorized_keys root@slave01:~/.ssh/authorized_keys_from_master01
The authenticity of host 'slave01 (192.168.128.102)' can't be established.
RSA key fingerprint is b5:78:67:c6:4b:29:82:9d:f7:49:e7:02:d9:ec:09:17.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave01,192.168.128.102' (RSA) to the list of known hosts.
root@slave01's password:
authorized_keys                               100%  611     0.6KB/s   00:00

On master01, copy the key to slave02:

[root@master01 ~]# scp ~/.ssh/authorized_keys root@slave02:~/.ssh/authorized_keys_from_master01
The authenticity of host 'slave02 (192.168.128.103)' can't be established.
RSA key fingerprint is ac:e1:83:2b:ee:e2:e2:0b:1c:df:06:c7:84:1b:56:de.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave02,192.168.128.103' (RSA) to the list of known hosts.
root@slave02's password:
authorized_keys                               100%  611     0.6KB/s   00:00

On slave01, append master01's key to the authorized keys file:

[root@slave01 ~]# cat ~/.ssh/authorized_keys_from_master01 >> ~/.ssh/authorized_keys

Check:

[root@slave01 ~]# cat ~/.ssh/authorized_keys
ssh-dss AAAAB3NzaC1kc3MAAACBAOYUc5Q7GXnQgHdL3durY297VrEBFrFbTiqNcQoxUjsO1H9exXxU2U06ahcxVGM1sMOqgbTy5aQrNk6P6Lv0f3Lxwks+C07BeY0SBdfmoRotN/8dPb/4Ykk9WSRBo0x7a8HMWqidoVwb73Etsyc10aa0ujP/iwKVhICKY6w3y+IpAAAAFQCT+NAUGf3DhKCUgNBpkVVvWUts7QAAAIBHT4CIqeo2TAKrpF9chXNdd3IklAeidfpwb/p8WGVB0qdrgf8g7OD1E5/ZbSM7aebmbAR9AMGjTi+tcCbmI53JhuHLnMzrmP1P6+BmZxfiq1//GNz2uOsrLZzV4+BLKA7DNYgdeCLV7/GsQX0kc7FZLwK1mtdZVDMI+rOsB/j6sAAAAIEAzBZ3cv9L4qmaY3FoAttr3wbt2c1JJIWFo0CUCc+icDYM8S7jGmVOScfFAg0M81VLJVEli1Tr7/MRFJxEftHRSxEdooUBltRXmx5XjXfEM9tXN/nT9RuSiQop5XCMMSNFVYF/G1XxywyAh7mRvreibG0fxcfyuC2meorqa31PlCU= root@slave01
ssh-dss AAAAB3NzaC1kc3MAAACBAJvVJ7rK7QX2JcAGAwk85l5B7Cm2QUIrQ6RjaSsMDQTZEV6LJ8lWAkdlXIOJhte0EzylPLzxUvckjpr9wEtoZjBjh6i8qklzheQMfLbZUQG3QAxWqeoZYbSdDnoIsHOBSQbckjYiUOvpQECIetiBDQQUdjWglB8jLKWGWa42hUXPAAAAFQDMVDU+CdpFDmp/6PhvBiREpIwHAwAAAIAzXR5aFwO0pUWPAltTwkoruJkiOzl+iC5mrXUJQaEwXXnWJLBYxwLVm/sbNFcMBRLN6+DDp0RoYKe+AIiK51TPVlKGXqfpdPNMkrYYuJronkLGfRg215ko5DCFs/Zz9xsEHfKo48dmn/jy0fySvABwb6LAy3TFYgJBOHpp+lwVtgAAAIBrV22S3BubY4WU2T/BDHY9lfcz4nlSfV5izfjpnAXQ+e5NxD5NlGXmANb6vUcS3z9/dYXpHgAb4ZlpWEYFCLbiALA11fdscHA/bxdYp0nyhHZsZOAZQMR8Hzb6c/xX+btC5+3vmoNsTjhAySmke7SKnQR6yUFvBtjs+D3xvUZc6g== root@master01

The two lines end in root@slave01 and root@master01 respectively, meaning both accounts can log in via ssh without a password.

On slave02, append master01's key to the authorized keys file:

[root@slave02 ~]# cat ~/.ssh/authorized_keys_from_master01 >> ~/.ssh/authorized_keys
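With all keys distributed, a loop run from master01 can verify that no host still prompts for a password. This is a sketch: BatchMode makes ssh fail instead of prompting, and the timeout keeps the check fast on unreachable hosts.

```shell
# Each line should print "OK: <hostname>" once passwordless ssh works.
for h in master01 slave01 slave02; do
    if name=$(ssh -o BatchMode=yes -o ConnectTimeout=3 "root@$h" hostname 2>/dev/null); then
        echo "OK: $name"
    else
        echo "FAIL: $h still needs a password (or is unreachable)"
    fi
done
```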

● Configure Hadoop, then copy the configuration from master01 to slave01 and slave02 (only on master01)


[root@master01 hadoop-2.2.0]# vi /usr/local/hadoop-2.2.0/etc/hadoop/hadoop-env.sh

Before the line export JAVA_HOME=${JAVA_HOME}, add:
JAVA_HOME="/usr/java/jdk1.7.0_45"

[root@master01 hadoop-2.2.0]# vi /usr/local/hadoop-2.2.0/etc/hadoop/yarn-env.sh

After the line # export JAVA_HOME=/home/y/libexec/jdk1.6.0/, add:
JAVA_HOME="/usr/java/jdk1.7.0_45"

[root@master01 hadoop-2.2.0]# vi  /usr/local/hadoop-2.2.0/etc/hadoop/core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master01:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hduser/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hduser.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hduser.groups</name>
        <value>*</value>
    </property>
</configuration>



[root@master01 hadoop-2.2.0]# vi /usr/local/hadoop-2.2.0/etc/hadoop/hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master01:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hduser/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hduser/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
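Note that hadoop.tmp.dir, dfs.namenode.name.dir, and dfs.datanode.data.dir all point under /home/hduser, which will not exist on a fresh install; the directories should be created on every node before formatting HDFS. A minimal sketch:

```shell
# Create the local directories referenced by core-site.xml and
# hdfs-site.xml. Run on all three nodes (the scp step later only
# copies config files, not directories).
BASE=/home/hduser
mkdir -p "$BASE/tmp" "$BASE/dfs/name" "$BASE/dfs/data"
ls -ld "$BASE/dfs/name" "$BASE/dfs/data"
```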



[root@master01 hadoop-2.2.0]# vi /usr/local/hadoop-2.2.0/etc/hadoop/mapred-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master01:19888</value>
    </property>
</configuration>



[root@master01 hadoop-2.2.0]# vi /usr/local/hadoop-2.2.0/etc/hadoop/yarn-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master01:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master01:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master01:8088</value>
    </property>
</configuration>


[root@master01 ~]# scp   /usr/local/hadoop-2.2.0/etc/hadoop/*   root@192.168.128.102:/usr/local/hadoop-2.2.0/etc/hadoop/.

[root@master01 ~]# scp   /usr/local/hadoop-2.2.0/etc/hadoop/*   root@192.168.128.103:/usr/local/hadoop-2.2.0/etc/hadoop/.
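After the scp, an optional sanity check is to checksum the config files and compare the sums across nodes. The sketch below runs against a scratch directory so it is self-contained; the trailing comment shows how it would be pointed at the real /usr/local/hadoop-2.2.0/etc/hadoop over ssh:

```shell
# Compute checksums of the config files; identical sums on every node
# mean the copies match. DIR is a scratch stand-in for the real
# /usr/local/hadoop-2.2.0/etc/hadoop directory.
DIR=$(mktemp -d)
echo '<configuration/>' > "$DIR/core-site.xml"
( cd "$DIR" && md5sum *.xml ) > /tmp/conf.md5
cat /tmp/conf.md5
# On the cluster, verify each slave against master01's sums:
#   ssh root@slave01 "cd /usr/local/hadoop-2.2.0/etc/hadoop && md5sum -c" < /tmp/conf.md5
```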

● Format the distributed file system

In Hadoop 2.2.0 the format command changed from hadoop namenode -format to hdfs namenode -format.

[root@master01 hadoop]# hdfs namenode -format
13/12/07 10:16:12 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master01/192.168.128.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /usr/local/hadoop-2.2.0/etc/hadoop:... (lengthy list of jars under /usr/local/hadoop-2.2.0/share/hadoop/ omitted)
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
13/12/07 10:16:12 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-075c1f4f-ddc9-48e6-b543-866a58f73547
13/12/07 10:16:13 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/12/07 10:16:13 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/12/07 10:16:13 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/12/07 10:16:13 INFO util.GSet: Computing capacity for map BlocksMap
13/12/07 10:16:13 INFO util.GSet: VM type       = 64-bit
13/12/07 10:16:13 INFO util.GSet: 2.0% max memory = 966.7 MB
13/12/07 10:16:13 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/12/07 10:16:13 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/12/07 10:16:13 INFO blockmanagement.BlockManager: defaultReplication         = 3
13/12/07 10:16:13 INFO blockmanagement.BlockManager: maxReplication             = 512
13/12/07 10:16:13 INFO blockmanagement.BlockManager: minReplication             = 1
13/12/07 10:16:13 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/12/07 10:16:13 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/12/07 10:16:13 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/12/07 10:16:13 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
13/12/07 10:16:13 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
13/12/07 10:16:13 INFO namenode.FSNamesystem: supergroup          = supergroup
13/12/07 10:16:13 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/12/07 10:16:13 INFO namenode.FSNamesystem: HA Enabled: false
13/12/07 10:16:13 INFO namenode.FSNamesystem: Append Enabled: true
13/12/07 10:16:13 INFO util.GSet: Computing capacity for map INodeMap
13/12/07 10:16:13 INFO util.GSet: VM type       = 64-bit
13/12/07 10:16:13 INFO util.GSet: 1.0% max memory = 966.7 MB
13/12/07 10:16:13 INFO util.GSet: capacity      = 2^20 = 1048576 entries
13/12/07 10:16:13 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/12/07 10:16:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/12/07 10:16:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/12/07 10:16:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
13/12/07 10:16:13 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/12/07 10:16:13 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/12/07 10:16:13 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/12/07 10:16:13 INFO util.GSet: VM type       = 64-bit
13/12/07 10:16:13 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
13/12/07 10:16:13 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/hduser/dfs/name ? (Y or N) Y
13/12/07 10:16:15 INFO common.Storage: Storage directory /home/hduser/dfs/name has been successfully formatted.
13/12/07 10:16:15 INFO namenode.FSImage: Saving image file /home/hduser/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
13/12/07 10:16:15 INFO namenode.FSImage: Image file /home/hduser/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
13/12/07 10:16:15 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/12/07 10:16:15 INFO util.ExitUtil: Exiting with status 0
13/12/07 10:16:15 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master01/192.168.128.101
************************************************************/
[root@master01 hadoop]#



● Test starting each Hadoop service individually (on all three machines)

(If you want to save time, skip the individual service tests and only come back to them if start-all.sh misbehaves.)

Test starting and stopping the NameNode

In my experience, whether you start or stop services with hadoop-daemon.sh, start-all.sh, or stop-all.sh, a reported success is not 100% reliable; it is best to double-check with jps or ps aux | grep node.

[root@master01 hadoop]# hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-namenode-master01.out

[root@master01 hadoop]# jps
2865 Jps
2807 NameNode

[root@master01 hadoop]# hadoop-daemon.sh stop namenode
stopping namenode


Test starting and stopping the DataNode

[root@master01 hadoop]# hadoop-daemon.sh start datanode
starting datanode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-datanode-master01.out

[root@master01 hadoop]# jps
2956 Jps
2904 DataNode

[root@master01 hadoop]# hadoop-daemon.sh stop datanode
stopping datanode
[root@master01 hadoop]#


Test starting and stopping the JobTracker and TaskTracker

hadoop-daemon.sh start jobtracker and hadoop-daemon.sh start tasktracker are no longer supported: in Hadoop 2 the JobTracker and TaskTracker were replaced by YARN daemons, and MapReduce-related services are managed with the mapred command instead.

Once the individual tests pass, test starting all services together.

To avoid firewall problems, stop the firewall first:

[root@master01 ~]# service iptables stop
[root@slave01 ~]# service iptables stop
[root@slave02 ~]# service iptables stop

[root@master01 ~]# chkconfig iptables off
[root@slave01 ~]# chkconfig iptables off
[root@slave02 ~]# chkconfig iptables off

● Start the Hadoop cluster (only on master01; slave01 and slave02 are started automatically)

[root@master01 hadoop]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master01]
master01: starting namenode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-namenode-master01.out
master01: starting datanode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-datanode-master01.out
slave01: starting datanode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-datanode-slave01.out
slave02: starting datanode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-datanode-slave02.out
Starting secondary namenodes [master01]
master01: starting secondarynamenode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-master01.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-root-resourcemanager-master01.out
slave02: starting nodemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-root-nodemanager-slave02.out
slave01: starting nodemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-root-nodemanager-slave01.out
master01: starting nodemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-root-nodemanager-master01.out

[root@master01 hadoop]# jps
3346 DataNode
3481 SecondaryNameNode
3806 Jps
3231 NameNode

On host slave01:

[root@slave01 ~]# jps
2406 Jps
2223 DataNode
[root@slave01 ~]#

On host slave02:

[root@slave02 ~]# jps
2148 DataNode
2332 Jps
[root@slave02 ~]#

If JAVA_HOME is not set in hadoop-env.sh and yarn-env.sh, the following error appears:

[root@master01 hadoop]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master01]
master01: Error: JAVA_HOME is not set and could not be found.
master01: Error: JAVA_HOME is not set and could not be found.
slave02: Error: JAVA_HOME is not set and could not be found.
slave01: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [master01]
master01: Error: JAVA_HOME is not set and could not be found.
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-root-resourcemanager-master01.out
slave02: Error: JAVA_HOME is not set and could not be found.
slave01: Error: JAVA_HOME is not set and could not be found.
master01: Error: JAVA_HOME is not set and could not be found.
[root@master01 hadoop]#
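To avoid this error, set JAVA_HOME explicitly near the top of both files on every node. The JDK path below is only an example; substitute your own installation directory:

```shell
# In /usr/local/hadoop-2.2.0/etc/hadoop/hadoop-env.sh and yarn-env.sh,
# replace "export JAVA_HOME=${JAVA_HOME}" with an explicit path, e.g.:
export JAVA_HOME=/usr/java/jdk1.7.0_45   # adjust to your JDK location
```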

Check the cluster status:

[root@master01 hadoop]# hdfs dfsadmin -report
Configured Capacity: 150827655168 (140.47 GB)
Present Capacity: 142593413120 (132.80 GB)
DFS Remaining: 142593339392 (132.80 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Live datanodes:
Name: 192.168.128.102:50010 (slave01)
Hostname: slave01
Decommission Status : Normal
Configured Capacity: 50275885056 (46.82 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2743713792 (2.56 GB)
DFS Remaining: 47532146688 (44.27 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.54%
Last contact: Sat Dec 07 10:42:34 CST 2013


Name: 192.168.128.101:50010 (master01)
Hostname: master01
Decommission Status : Normal
Configured Capacity: 50275885056 (46.82 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2746884096 (2.56 GB)
DFS Remaining: 47528976384 (44.26 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.54%
Last contact: Sat Dec 07 10:42:34 CST 2013


Name: 192.168.128.103:50010 (slave02)
Hostname: slave02
Decommission Status : Normal
Configured Capacity: 50275885056 (46.82 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2743644160 (2.56 GB)
DFS Remaining: 47532216320 (44.27 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.54%
Last contact: Sat Dec 07 10:42:34 CST 2013


[root@master01 hadoop]#

Inspect file blocks:

[root@master01 hadoop]# hdfs fsck / -files -blocks
Connecting to namenode via http://master01:50070
FSCK started by root (auth:SIMPLE) from /192.168.128.101 for path / at Sat Dec 07 10:43:52 CST 2013
/ <dir>
Status: HEALTHY
 Total size:    0 B
 Total dirs:    1
 Total files:   0
 Total symlinks:                0
 Total blocks (validated):      0
 Minimally replicated blocks:   0
 Over-replicated blocks:        0
 Under-replicated blocks:       0
 Mis-replicated blocks:         0
 Default replication factor:    3
 Average block replication:     0.0
 Corrupt blocks:                0
 Missing replicas:              0
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Sat Dec 07 10:43:52 CST 2013 in 7 milliseconds


The filesystem under path '/' is HEALTHY
[root@master01 hadoop]#
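The report above shows an empty filesystem (0 files, 0 blocks). Uploading a small test file and re-running fsck makes the block and replica information visible; a quick sketch (the target path is arbitrary):

```shell
# Put a small local file into HDFS, then list its blocks and the
# DataNodes holding each replica.
hdfs dfs -mkdir -p /user/root
hdfs dfs -put /etc/hosts /user/root/hosts
hdfs fsck /user/root/hosts -files -blocks -locations
```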



●Test the Hadoop web management interfaces

Test the HDFS web UI
http://192.168.128.101:50070



(Figure below) After clicking the Browse the filesystem link in the figure above (appears to have a problem; still investigating)



(Figure below) After clicking the Live Nodes link in the first figure



(Figure below) After clicking the Dead Nodes link in the first figure


(Figure below) After clicking the Decommissioning Nodes link in the first figure



(Figure below) Test the MapReduce web UI
http://192.168.128.101:50030

Note: port 50030 was the Hadoop 1.x JobTracker web UI. In Hadoop 2.x, MapReduce runs on YARN, so the equivalent page is the ResourceManager web UI on port 8088 (http://192.168.128.101:8088).
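A quick way to confirm from the shell which web UIs are actually listening (8088 is the YARN ResourceManager UI in Hadoop 2.x):

```shell
# Probe the NameNode UI and the ResourceManager UI; a live UI returns
# an HTTP status code, a dead port makes curl fail with no output.
curl -s -o /dev/null -w "50070 -> HTTP %{http_code}\n" http://192.168.128.101:50070/
curl -s -o /dev/null -w "8088  -> HTTP %{http_code}\n" http://192.168.128.101:8088/
```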

Stopping the cluster

[root@master01 hadoop]# stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master01]
master01: stopping namenode
master01: stopping datanode
slave01: stopping datanode
slave02: stopping datanode
Stopping secondary namenodes [master01]
master01: stopping secondarynamenode
stopping yarn daemons
no resourcemanager to stop
slave01: no nodemanager to stop
master01: no nodemanager to stop
slave02: no nodemanager to stop
no proxyserver to stop
[root@master01 hadoop]#

Note: the "no resourcemanager to stop" and "no nodemanager to stop" messages mean those YARN daemons were not running when stop-all.sh was invoked, which matches the earlier jps output on master01 showing no ResourceManager or NodeManager. If this happens, check the yarn-root-*.out files under /usr/local/hadoop-2.2.0/logs.


(End)

Related

[Study] Hadoop 2.2.0 Cluster Installation (CentOS 6.5 x64)
http://shaurong.blogspot.tw/2013/12/hadoop-220-cluster-centos-65-x64.html

[Study] Hadoop 1.2.1 Cluster Installation (CentOS 6.5 x64)
http://shaurong.blogspot.tw/2013/12/hadoop-121-cluster-centos-65-x64_29.html

[Study] Hadoop 2.2.0 Build (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html

[Study] Hadoop 2.2.0 Single Cluster Installation (Part 2) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64_7.html

[Study] Hadoop 2.2.0 Single Cluster Installation (Part 1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html

[Study] Hadoop 1.2.1 (rpm) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/10/hadoop-121-rpm-centos-64-x64.html

[Study] Hadoop 1.2.1 (bin) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html

[Study] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035

[Study] Cloud software Hadoop 1.0.0 Installation (CentOS 6.2 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=21166

[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=18513

[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.4 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=17974

6 comments:

  1. This comment has been removed by the author.

  2. Hello, I followed your steps, but after the final start-all.sh I tried the wordcount example and the following error message keeps appearing. What could be the cause?
    hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /user/micmiu/wordcount/in /user/micmiu/wordcount/out
    [root@master01 hadoop-2.2.0]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /user/micmiu/wordcount/in /user/micmiu/wordcount/out
    14/03/22 00:04:31 INFO client.RMProxy: Connecting to ResourceManager at master01/192.168.100.101:8032
    14/03/22 00:04:32 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:33 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:34 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:35 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:36 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:37 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:38 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:39 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:40 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    14/03/22 00:04:41 INFO ipc.Client: Retrying connect to server: master01/192.168.100.101:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

    1. According to the error message, the ResourceManager on 192.168.100.101 cannot be reached via port 8032.

    2. Is this caused by a mistake in my configuration files? I have already disabled both SELinux and the firewall.

  3. Why can't the "MapReduce web UI" be opened?
