Please complete the installation below first:
[Study] Hadoop 2.2.0 Single Cluster Installation (Part 1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html
Installation guide for version 2.2.0:
http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-common/SingleCluster.html
Installation guide for older versions:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
The old and new Single Node setup guides differ considerably, and with the new guide there is no web interface to look at.
Here we test using the configuration from the old guide.
Getting started
To keep things simple, if you are not root, run su root to switch.
Testing of the example programs will be skipped, but a reference invocation is sketched just below.
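(For later reference only: the tarball ships an examples jar, and once the daemons are running a typical smoke test would be something like the following; the pi arguments, 2 maps and 5 samples per map, are purely illustrative.)
/usr/local/hadoop-2.2.0/bin/hadoop jar /usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5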
# First, set up passwordless SSH login
[root@localhost hadoop-2.2.0]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
71:1a:21:4f:e2:bc:66:77:dc:bd:3a:a0:65:1a:5a:e7 root@localhost.localdomain
The key's randomart image is:
+--[ DSA 1024]----+
| o o |
| o = . |
| o + . |
| . * . . |
| + S o . . |
| o .o.= . |
| o O . . |
| . o E .. |
| .. |
+-----------------+
[root@localhost hadoop-2.2.0]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@localhost hadoop-2.2.0]# chmod 600 ~/.ssh/authorized_keys
Test it; the first time it should prompt for confirmation:
[root@localhost hadoop-2.2.0]# ssh root@localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 6e:4d:b5:b4:2e:3a:41:0d:6d:da:6b:c9:b9:92:19:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Fri Nov 8 09:29:16 2013 from 192.168.128.1
[root@localhost ~]# exit
logout
Connection to localhost closed.
Try a second time; it should no longer prompt:
[root@localhost hadoop-2.2.0]# ssh root@localhost
Last login: Fri Nov 8 10:16:34 2013 from localhost
[root@localhost ~]# exit
logout
Connection to localhost closed.
# *****************************************************************************
# Configure the .xml files (following the old guide)
# Configure core-site.xml
vi /usr/local/hadoop-2.2.0/etc/hadoop/core-site.xml
Add the following between <configuration> and </configuration>:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
# Configure hdfs-site.xml
vi /usr/local/hadoop-2.2.0/etc/hadoop/hdfs-site.xml
Add the following between <configuration> and </configuration>:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
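For reference, the complete core-site.xml after the edit should look roughly like this (a minimal sketch; the comment header shipped in the file is omitted, and hdfs-site.xml follows the same pattern with its own property):
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>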
mapred-site.xml is left unmodified, because the new guide changed how it is handled.
# Format the filesystem
[root@localhost hadoop-2.2.0]# /usr/local/hadoop-2.2.0/bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
13/11/08 10:24:21 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.2.0
/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8
.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_45
************************************************************/
13/11/08 10:24:21 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-b7f1cd6d-d700-486d-b2f2-c78f9d712b3f
13/11/08 10:24:23 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/11/08 10:24:23 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/11/08 10:24:23 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/11/08 10:24:23 INFO util.GSet: Computing capacity for map BlocksMap
13/11/08 10:24:23 INFO util.GSet: VM type = 64-bit
13/11/08 10:24:23 INFO util.GSet: 2.0% max memory = 966.7 MB
13/11/08 10:24:23 INFO util.GSet: capacity = 2^21 = 2097152 entries
13/11/08 10:24:23 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/11/08 10:24:23 INFO blockmanagement.BlockManager: defaultReplication = 1
13/11/08 10:24:23 INFO blockmanagement.BlockManager: maxReplication = 512
13/11/08 10:24:23 INFO blockmanagement.BlockManager: minReplication = 1
13/11/08 10:24:23 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
13/11/08 10:24:23 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
13/11/08 10:24:23 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/11/08 10:24:23 INFO blockmanagement.BlockManager: encryptDataTransfer = false
13/11/08 10:24:23 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
13/11/08 10:24:23 INFO namenode.FSNamesystem: supergroup = supergroup
13/11/08 10:24:23 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/11/08 10:24:23 INFO namenode.FSNamesystem: HA Enabled: false
13/11/08 10:24:23 INFO namenode.FSNamesystem: Append Enabled: true
13/11/08 10:24:24 INFO util.GSet: Computing capacity for map INodeMap
13/11/08 10:24:24 INFO util.GSet: VM type = 64-bit
13/11/08 10:24:24 INFO util.GSet: 1.0% max memory = 966.7 MB
13/11/08 10:24:24 INFO util.GSet: capacity = 2^20 = 1048576 entries
13/11/08 10:24:24 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/11/08 10:24:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/11/08 10:24:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/11/08 10:24:24 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
13/11/08 10:24:24 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/11/08 10:24:24 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/11/08 10:24:24 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/11/08 10:24:24 INFO util.GSet: VM type = 64-bit
13/11/08 10:24:24 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
13/11/08 10:24:24 INFO util.GSet: capacity = 2^15 = 32768 entries
13/11/08 10:24:25 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
13/11/08 10:24:25 INFO namenode.FSImage: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
13/11/08 10:24:25 INFO namenode.FSImage: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
13/11/08 10:24:25 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/11/08 10:24:25 INFO util.ExitUtil: Exiting with status 0
13/11/08 10:24:25 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[root@localhost hadoop-2.2.0]#
Note the message:
DEPRECATED: Use of this script to execute hdfs command is deprecated.
For formatting, use the hdfs command instead.
As shown below, checking the hadoop command confirms it really has no namenode -format option:
[root@localhost hadoop-2.2.0]# /usr/local/hadoop-2.2.0/bin/hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
fs run a generic filesystem user client
version print the version
jar <jar> run a jar file
checknative [-a|-h] check native hadoop and compression libraries availability
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath prints the class path needed to get the
Hadoop jar and the required libraries
daemonlog get/set the log level for each daemon
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
As shown below, checking the hdfs command shows it does have a namenode -format option:
[root@localhost hadoop-2.2.0]# /usr/local/hadoop-2.2.0/bin/hdfs
Usage: hdfs [--config confdir] COMMAND
where COMMAND is one of:
dfs run a filesystem command on the file systems supported in Hadoop.
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
journalnode run the DFS journalnode
zkfc run the ZK Failover Controller daemon
datanode run a DFS datanode
dfsadmin run a DFS admin client
haadmin run a DFS HA admin client
fsck run a DFS filesystem checking utility
balancer run a cluster balancing utility
jmxget get JMX exported values from NameNode or DataNode.
oiv apply the offline fsimage viewer to an fsimage
oev apply the offline edits viewer to an edits file
fetchdt fetch a delegation token from the NameNode
getconf get config values from configuration
groups get the groups which users belong to
snapshotDiff diff two snapshots of a directory or diff the
current directory contents with a snapshot
lsSnapshottableDir list all snapshottable dirs owned by the current user
Use -help to see options
portmap run a portmap service
nfs3 run an NFS version 3 gateway
Most commands print help when invoked w/o parameters.
[root@localhost hadoop-2.2.0]#
Format again using hdfs instead:
[root@localhost hadoop-2.2.0]# /usr/local/hadoop-2.2.0/bin/hdfs namenode -format
13/11/08 10:27:52 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.2.0
/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8
.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_45
************************************************************/
13/11/08 10:27:52 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-4d00e57b-5eb5-494a-9310-b99dd8195ed9
13/11/08 10:27:54 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/11/08 10:27:54 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/11/08 10:27:54 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/11/08 10:27:54 INFO util.GSet: Computing capacity for map BlocksMap
13/11/08 10:27:54 INFO util.GSet: VM type = 64-bit
13/11/08 10:27:54 INFO util.GSet: 2.0% max memory = 966.7 MB
13/11/08 10:27:54 INFO util.GSet: capacity = 2^21 = 2097152 entries
13/11/08 10:27:54 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/11/08 10:27:54 INFO blockmanagement.BlockManager: defaultReplication = 1
13/11/08 10:27:54 INFO blockmanagement.BlockManager: maxReplication = 512
13/11/08 10:27:54 INFO blockmanagement.BlockManager: minReplication = 1
13/11/08 10:27:54 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
13/11/08 10:27:54 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
13/11/08 10:27:54 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/11/08 10:27:54 INFO blockmanagement.BlockManager: encryptDataTransfer = false
13/11/08 10:27:54 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
13/11/08 10:27:54 INFO namenode.FSNamesystem: supergroup = supergroup
13/11/08 10:27:54 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/11/08 10:27:54 INFO namenode.FSNamesystem: HA Enabled: false
13/11/08 10:27:54 INFO namenode.FSNamesystem: Append Enabled: true
13/11/08 10:27:55 INFO util.GSet: Computing capacity for map INodeMap
13/11/08 10:27:55 INFO util.GSet: VM type = 64-bit
13/11/08 10:27:55 INFO util.GSet: 1.0% max memory = 966.7 MB
13/11/08 10:27:55 INFO util.GSet: capacity = 2^20 = 1048576 entries
13/11/08 10:27:55 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/11/08 10:27:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/11/08 10:27:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/11/08 10:27:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
13/11/08 10:27:55 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/11/08 10:27:55 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/11/08 10:27:55 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/11/08 10:27:55 INFO util.GSet: VM type = 64-bit
13/11/08 10:27:55 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
13/11/08 10:27:55 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /tmp/hadoop-root/dfs/name ? (Y or N) y
13/11/08 10:28:05 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
13/11/08 10:28:05 INFO namenode.FSImage: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
13/11/08 10:28:05 INFO namenode.FSImage: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
13/11/08 10:28:05 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/11/08 10:28:05 INFO util.ExitUtil: Exiting with status 0
13/11/08 10:28:05 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[root@localhost hadoop-2.2.0]#
Note the message: it asks whether to re-format, which means the earlier deprecated command did in fact format the filesystem for you:
Re-format filesystem in Storage Directory /tmp/hadoop-root/dfs/name ? (Y or N) y
The messages also show that the Storage Directory is /tmp/hadoop-root/dfs/name.
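Since /tmp is typically cleared on reboot, a formatted namesystem kept there will not survive a restart. If you want it to persist, one option (a sketch only; /var/hadoop is a hypothetical location of your choosing) is to point the storage directories elsewhere in hdfs-site.xml and re-format:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///var/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///var/hadoop/dfs/data</value>
</property>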
Also note: in version 1.x, start-all.sh lived in the bin directory; in Hadoop 2.2.0 it has moved to the sbin directory:
[root@localhost hadoop-2.2.0]# ls bin
container-executor hadoop hadoop.cmd hdfs hdfs.cmd mapred mapred.cmd rcc test-container-executor yarn yarn.cmd
[root@localhost hadoop-2.2.0]# ls sbin
distribute-exclude.sh hdfs-config.cmd mr-jobhistory-daemon.sh start-all.cmd start-dfs.cmd start-yarn.cmd stop-all.sh stop-dfs.sh stop-yarn.sh
hadoop-daemon.sh hdfs-config.sh refresh-namenodes.sh start-all.sh start-dfs.sh start-yarn.sh stop-balancer.sh stop-secure-dns.sh yarn-daemon.sh
hadoop-daemons.sh httpfs.sh slaves.sh start-balancer.sh start-secure-dns.sh stop-all.cmd stop-dfs.cmd stop-yarn.cmd yarn-daemons.sh
[root@localhost hadoop-2.2.0]#
Start all the daemons:
[root@localhost hadoop-2.2.0]# /usr/local/hadoop-2.2.0/sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-root-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 6e:4d:b5:b4:2e:3a:41:0d:6d:da:6b:c9:b9:92:19:42.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: Write failed: Broken pipe
starting yarn daemons
resourcemanager running as process 2944. Stop it first.
localhost: nodemanager running as process 3172. Stop it first.
[root@localhost hadoop-2.2.0]#
Note the message: This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
start-all.sh is deprecated (it still works for now; no guarantee in the future), so use start-dfs.sh and start-yarn.sh instead, as sketched below.
Because a second (secondary) NameNode host has not been configured, some odd messages appear (the 0.0.0.0 lines above).
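The recommended equivalents, for the record:
/usr/local/hadoop-2.2.0/sbin/start-dfs.sh
/usr/local/hadoop-2.2.0/sbin/start-yarn.sh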
Check what is running:
[root@localhost hadoop-2.2.0]# ps aux | grep hadoop
root 2944 0.3 10.8 1714744 110256 pts/1 Sl 09:40 0:15 /usr/java/jre1.7.0_45/bin/java -Dproc_resourcemanager -Xmx1000m -Dhadoop.log.dir=/usr/local/hadoop-2.2.0/logs -Dyarn.log.dir=/usr/local/hadoop-2.2.0/logs -Dhadoop.log.file=yarn-root-resourcemanager-localhost.localdomain.log -Dyarn.log.file=yarn-root-resourcemanager-localhost.localdomain.log -Dyarn.home.dir= -Dyarn.id.str=root -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.2.0/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/usr/local/hadoop-2.2.0/logs -Dyarn.log.dir=/usr/local/hadoop-2.2.0/logs -Dhadoop.log.file=yarn-root-resourcemanager-localhost.localdomain.log -Dyarn.log.file=yarn-root-resourcemanager-localhost.localdomain.log -Dyarn.home.dir=/usr/local/hadoop-2.2.0 -Dhadoop.home.dir=/usr/local/hadoop-2.2.0 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.2.0/lib/native -classpath /usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/*:/usr/local/hadoop-2.2.0/share/hadoop/common/*:/usr/local/hadoop-2.2.0/share/hadoop/hdfs:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/*:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.2.0/share/hadoop/yarn/*:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/*:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.2.0/etc/hadoop/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
root 3172 0.4 9.3 1579376 94412 pts/1 Sl 09:40 0:17 /usr/java/jre1.7.0_45/bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/usr/local/hadoop-2.2.0/logs -Dyarn.log.dir=/usr/local/hadoop-2.2.0/logs -Dhadoop.log.file=yarn-root-nodemanager-localhost.localdomain.log -Dyarn.log.file=yarn-root-nodemanager-localhost.localdomain.log -Dyarn.home.dir= -Dyarn.id.str=root -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.2.0/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/usr/local/hadoop-2.2.0/logs -Dyarn.log.dir=/usr/local/hadoop-2.2.0/logs -Dhadoop.log.file=yarn-root-nodemanager-localhost.localdomain.log -Dyarn.log.file=yarn-root-nodemanager-localhost.localdomain.log -Dyarn.home.dir=/usr/local/hadoop-2.2.0 -Dhadoop.home.dir=/usr/local/hadoop-2.2.0 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.2.0/lib/native -classpath /usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/*:/usr/local/hadoop-2.2.0/share/hadoop/common/*:/usr/local/hadoop-2.2.0/share/hadoop/hdfs:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/*:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.2.0/share/hadoop/yarn/*:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/*:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/*:/usr/local/hadoop-2.2.0/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
root 29500 1.3 11.1 1553544 112984 ? Sl 10:38 0:06 /usr/java/jre1.7.0_45/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/usr/local/hadoop-2.2.0/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/local/hadoop-2.2.0 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/usr/local/hadoop-2.2.0/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/usr/local/hadoop-2.2.0/logs -Dhadoop.log.file=hadoop-root-namenode-localhost.localdomain.log -Dhadoop.home.dir=/usr/local/hadoop-2.2.0 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.2.0/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
root 29584 1.3 10.2 1559476 103468 ? Sl 10:38 0:06 /usr/java/jre1.7.0_45/bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/usr/local/hadoop-2.2.0/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/local/hadoop-2.2.0 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/usr/local/hadoop-2.2.0/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/usr/local/hadoop-2.2.0/logs -Dhadoop.log.file=hadoop-root-datanode-localhost.localdomain.log -Dhadoop.home.dir=/usr/local/hadoop-2.2.0 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/usr/local/hadoop-2.2.0/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
root 29892 0.0 0.0 103248 844 pts/1 S+ 10:46 0:00 grep hadoop
[root@localhost hadoop-2.2.0]#
There should be 5 lines in total (that is, 5 lines beginning with root): one of them is the grep hadoop process itself, and the other 4 are the ResourceManager, NodeManager, NameNode, and DataNode that were started.
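(Side note: the ps check above works everywhere, but if a full JDK is installed, jps is more compact; the machine here only has a JRE, which lacks it. Assuming a JDK at a hypothetical path, it would look like:
/usr/java/jdk1.7.0_45/bin/jps
and print one PID-plus-class line per daemon, e.g. 29500 NameNode, 29584 DataNode, plus ResourceManager, NodeManager, and Jps itself.)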
Alternatively, the following produces more compact output:
[root@localhost hadoop-2.2.0]# ps aux | grep hadoop | awk '{print $1 "\t" $2 "\t" $11 "\t" $12}'
root 30587 /usr/java/jre1.7.0_45/bin/java -Dproc_namenode
root 30677 /usr/java/jre1.7.0_45/bin/java -Dproc_datanode
root 30835 /usr/java/jre1.7.0_45/bin/java -Dproc_secondarynamenode
root 30969 /usr/java/jre1.7.0_45/bin/java -Dproc_resourcemanager
root 31065 /usr/java/jre1.7.0_45/bin/java -Dproc_nodemanager
root 31417 grep hadoop
[root@localhost hadoop-2.2.0]#
Printing just the process-type flag (presumably via ps aux | grep hadoop | awk '{print $12}') narrows it further:
-Dproc_namenode
-Dproc_datanode
-Dproc_secondarynamenode
-Dproc_resourcemanager
-Dproc_nodemanager
hadoop
[root@localhost hadoop-2.2.0]#
Take a look at the web interfaces now available (test from the local machine):
(Screenshot) The Hadoop page at http://localhost:8088/
http://127.0.0.1:8088/ and http://192.168.128.102:8088/ cannot be used; the connection fails.
(Screenshot) NameNode at http://localhost:9000/
(Screenshot) JobTracker - http://localhost:50030/ seems to have been removed in this version, so there is nothing to see.
(Screenshot) Each of the following:
http://localhost:9001/
http://localhost:9002/
http://localhost:9003/
http://localhost:9004/
returns: It looks like you are making an HTTP request to a Hadoop IPC port. This is not the correct port for the web interface on this daemon.
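That IPC warning is expected: 9000 and the nearby ports are RPC ports, not HTTP. In 2.2.0 the web UIs default to other ports, notably 50070 for the NameNode (dfs.namenode.http-address) and 8088 for the ResourceManager, so a quick shell check would be:
curl -sI http://localhost:50070/
curl -sI http://localhost:8088/
Each should return an HTTP status line instead of the IPC warning.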
Stop everything:
[root@localhost hadoop-2.2.0]# /usr/local/hadoop-2.2.0/sbin/stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
[root@localhost hadoop-2.2.0]#
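Likewise, the non-deprecated way to stop everything:
/usr/local/hadoop-2.2.0/sbin/stop-dfs.sh
/usr/local/hadoop-2.2.0/sbin/stop-yarn.sh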
(End)
Related
[Study] Compiling Hadoop 2.2.0 (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html
[Study] Hadoop 2.2.0 Single Cluster Installation (Part 2) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64_7.html
[Study] Hadoop 2.2.0 Single Cluster Installation (Part 1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html
[Study] Hadoop 1.2.1 (rpm) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/10/hadoop-121-rpm-centos-64-x64.html
[Study] Hadoop 1.2.1 (bin) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html
[Study] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035
[Study] Cloud Software Hadoop 1.0.0 Installation (CentOS 6.2 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=21166
[Study] Cloud Software Hadoop 0.20.2 Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=18513
[Study] Cloud Software Hadoop 0.20.2 Installation (CentOS 5.4 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=17974
Hello,
I followed your steps and at first everything went smoothly,
but at this step:
[root@localhost hadoop-2.2.0]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@localhost hadoop-2.2.0]# chmod 600 ~/.ssh/authorized_keys
the next step failed:
[root@localhost hadoop-2.2.0]# ssh root@localhost
ssh: connect to host localhost port 22: Connection refused
I don't know what to do. Could you tell me how to solve this?
Many thanks!
Reply:
yum -y install openssh-clients
service sshd restart
chkconfig sshd on
service iptables stop
chkconfig iptables off
An excellent article that resolved exactly the questions I had. Many thanks.