[Study] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
2013-07-27
Hadoop is a framework for building cloud computing systems. Modeled on the Google File System and developed in Java, it provides HDFS and the MapReduce API.
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
Official site
http://hadoop.apache.org/
Installation references
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
Download
http://apache.cdpa.nsysu.edu.tw/hadoop/common/hadoop-1.2.1/
I. Preparation
1. Install basic packages: openssh, rsync, and the Oracle JRE (the jre-6u45 installer is assumed to be already downloaded)
[root@localhost ~]# yum -y install openssh rsync
[root@localhost ~]# chmod +x jre-6u45-linux-x64-rpm.bin
[root@localhost ~]# ./jre-6u45-linux-x64-rpm.bin
[root@localhost ~]# find / -name java
/etc/java
/etc/alternatives/java
/etc/pki/java
/var/lib/alternatives/java
/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java
/usr/lib/java
/usr/share/java
/usr/lib64/libreoffice/ure/share/java
/usr/lib64/libreoffice/basis3.4/share/Scripts/java
/usr/bin/java
/usr/java
/usr/java/jre1.6.0_45/bin/java
[root@localhost ~]#
2. Create a hadoop account, set its password, and switch to the hadoop user
[root@centos1 ~]# useradd hadoop
[root@centos1 ~]# passwd hadoop
[root@centos1 ~]# su hadoop
[hadoop@localhost root]$ cd
[hadoop@localhost ~]$
3. Set up passwordless ssh login
[hadoop@localhost ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
e9:eb:5c:6e:ef:fe:31:13:ac:9d:6a:1d:1f:ae:b6:7f hadoop@localhost.localdomain
The key's randomart image is:
+--[ DSA 1024]----+
| |
| |
| |
| . . |
| S o |
| . o.+ |
| . . ..Bo.|
| . +. .o.=E|
| .+..=*+=..|
+-----------------+
[hadoop@localhost ~]$
[hadoop@localhost ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[hadoop@localhost ~]$ chmod 600 .ssh/authorized_keys
Test it; the first connection may still prompt for host key confirmation
[hadoop@localhost ~]$ ssh hadoop@localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 6b:a1:53:17:70:de:0d:ff:8d:f9:01:e1:ad:e6:05:2e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
[hadoop@localhost ~]$ exit
logout
Connection to localhost closed.
[hadoop@localhost ~]$
The second attempt should connect without any prompt
[hadoop@localhost ~]$ ssh hadoop@localhost
Last login: Sun Jul 21 03:57:48 2013 from localhost
[hadoop@localhost ~]$
4. Download and extract Hadoop
[hadoop@localhost ~]$ wget http://apache.cdpa.nsysu.edu.tw/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
[hadoop@localhost ~]$ tar xzvf hadoop-1.2.1.tar.gz
[hadoop@localhost ~]$ cd /home/hadoop/hadoop-1.2.1
[hadoop@localhost hadoop-1.2.1]$ vim /home/hadoop/hadoop-1.2.1/conf/hadoop-env.sh
Add a line setting JAVA_HOME; with the JRE installed above, this would presumably be `export JAVA_HOME=/usr/java/jre1.6.0_45`.
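The same edit can be made without opening an editor. A minimal sketch, assuming the jre-6u45 path found above (adjust to your own JRE location):

```shell
# Append JAVA_HOME to Hadoop's environment file so bin/hadoop can
# locate the Java runtime. The path assumes the jre-6u45 install above.
HADOOP_ENV=${HADOOP_ENV:-conf/hadoop-env.sh}
mkdir -p "$(dirname "$HADOOP_ENV")"
echo 'export JAVA_HOME=/usr/java/jre1.6.0_45' >> "$HADOOP_ENV"
```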
II. Testing
1. Test the hadoop command
[hadoop@localhost hadoop-1.2.1]$ bin/hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
datanode run a DFS datanode
dfsadmin run a DFS admin client
mradmin run a Map-Reduce admin client
fsck run a DFS filesystem checking utility
fs run a generic filesystem user client
balancer run a cluster balancing utility
fetchdt fetch a delegation token from the NameNode
jobtracker run the MapReduce job Tracker node
pipes run a Pipes job
tasktracker run a MapReduce task Tracker node
historyserver run job history servers as a standalone daemon
job manipulate MapReduce jobs
queue get information regarding JobQueues
version print the version
jar <jar> run a jar file
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath prints the class path needed to get the
Hadoop jar and the required libraries
daemonlog get/set the log level for each daemon
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
[hadoop@localhost hadoop-1.2.1]$
2. Test Local (Standalone) Mode
[hadoop@localhost hadoop-1.2.1]$ mkdir input
[hadoop@localhost hadoop-1.2.1]$ cp conf/*.xml input
[hadoop@localhost hadoop-1.2.1]$ bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+'
13/07/21 04:03:49 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/07/21 04:03:49 WARN snappy.LoadSnappy: Snappy native library not loaded
13/07/21 04:03:49 INFO mapred.FileInputFormat: Total input paths to process : 7
13/07/21 04:03:50 INFO mapred.JobClient: Running job: job_local_0001
13/07/21 04:03:50 INFO util.ProcessTree: setsid exited with exit code 0
13/07/21 04:03:50 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4195d263
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/capacity-scheduler.xml:0+7457
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
13/07/21 04:03:50 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@255722d7
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.MapTask: Finished spill 0
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/hadoop-policy.xml:0+4644
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
13/07/21 04:03:50 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6a4993d4
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/mapred-queue-acls.xml:0+2033
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
13/07/21 04:03:50 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5e38634a
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/fair-scheduler.xml:0+327
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
13/07/21 04:03:50 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5e3b76ea
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/mapred-site.xml:0+178
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000004_0' done.
13/07/21 04:03:50 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@508610d2
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000005_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/hdfs-site.xml:0+178
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000005_0' done.
13/07/21 04:03:50 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@67cef2cd
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000006_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/core-site.xml:0+178
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000006_0' done.
13/07/21 04:03:50 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5a766050
13/07/21 04:03:50 INFO mapred.LocalJobRunner:
13/07/21 04:03:50 INFO mapred.Merger: Merging 7 sorted segments
13/07/21 04:03:50 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
13/07/21 04:03:50 INFO mapred.LocalJobRunner:
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner:
13/07/21 04:03:50 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
13/07/21 04:03:50 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/home/hadoop/hadoop-1.2.1/grep-temp-1078176698
13/07/21 04:03:50 INFO mapred.LocalJobRunner: reduce > reduce
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
13/07/21 04:03:51 INFO mapred.JobClient: map 100% reduce 100%
13/07/21 04:03:51 INFO mapred.JobClient: Job complete: job_local_0001
13/07/21 04:03:51 INFO mapred.JobClient: Counters: 21
13/07/21 04:03:51 INFO mapred.JobClient: File Input Format Counters
13/07/21 04:03:51 INFO mapred.JobClient: Bytes Read=14995
13/07/21 04:03:51 INFO mapred.JobClient: File Output Format Counters
13/07/21 04:03:51 INFO mapred.JobClient: Bytes Written=123
13/07/21 04:03:51 INFO mapred.JobClient: FileSystemCounters
13/07/21 04:03:51 INFO mapred.JobClient: FILE_BYTES_READ=1272808
13/07/21 04:03:51 INFO mapred.JobClient: FILE_BYTES_WRITTEN=1547930
13/07/21 04:03:51 INFO mapred.JobClient: Map-Reduce Framework
13/07/21 04:03:51 INFO mapred.JobClient: Map output materialized bytes=61
13/07/21 04:03:51 INFO mapred.JobClient: Map input records=369
13/07/21 04:03:51 INFO mapred.JobClient: Reduce shuffle bytes=0
13/07/21 04:03:51 INFO mapred.JobClient: Spilled Records=2
13/07/21 04:03:51 INFO mapred.JobClient: Map output bytes=17
13/07/21 04:03:51 INFO mapred.JobClient: Total committed heap usage (bytes)=1204920320
13/07/21 04:03:51 INFO mapred.JobClient: CPU time spent (ms)=0
13/07/21 04:03:51 INFO mapred.JobClient: Map input bytes=14995
13/07/21 04:03:51 INFO mapred.JobClient: SPLIT_RAW_BYTES=749
13/07/21 04:03:51 INFO mapred.JobClient: Combine input records=1
13/07/21 04:03:51 INFO mapred.JobClient: Reduce input records=1
13/07/21 04:03:51 INFO mapred.JobClient: Reduce input groups=1
13/07/21 04:03:51 INFO mapred.JobClient: Combine output records=1
13/07/21 04:03:51 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
13/07/21 04:03:51 INFO mapred.JobClient: Reduce output records=1
13/07/21 04:03:51 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
13/07/21 04:03:51 INFO mapred.JobClient: Map output records=1
13/07/21 04:03:51 INFO mapred.FileInputFormat: Total input paths to process : 1
13/07/21 04:03:51 INFO mapred.JobClient: Running job: job_local_0002
13/07/21 04:03:51 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5717e4ff
13/07/21 04:03:51 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:51 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:51 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:51 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:51 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:51 INFO mapred.MapTask: Finished spill 0
13/07/21 04:03:51 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
13/07/21 04:03:51 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/grep-temp-1078176698/part-00000:0+111
13/07/21 04:03:51 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
13/07/21 04:03:51 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1466f971
13/07/21 04:03:51 INFO mapred.LocalJobRunner:
13/07/21 04:03:51 INFO mapred.Merger: Merging 1 sorted segments
13/07/21 04:03:51 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
13/07/21 04:03:51 INFO mapred.LocalJobRunner:
13/07/21 04:03:51 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
13/07/21 04:03:51 INFO mapred.LocalJobRunner:
13/07/21 04:03:51 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
13/07/21 04:03:51 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/home/hadoop/hadoop-1.2.1/output
13/07/21 04:03:51 INFO mapred.LocalJobRunner: reduce > reduce
13/07/21 04:03:51 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
13/07/21 04:03:52 INFO mapred.JobClient: map 100% reduce 100%
13/07/21 04:03:52 INFO mapred.JobClient: Job complete: job_local_0002
13/07/21 04:03:52 INFO mapred.JobClient: Counters: 21
13/07/21 04:03:52 INFO mapred.JobClient: File Input Format Counters
13/07/21 04:03:52 INFO mapred.JobClient: Bytes Read=123
13/07/21 04:03:52 INFO mapred.JobClient: File Output Format Counters
13/07/21 04:03:52 INFO mapred.JobClient: Bytes Written=23
13/07/21 04:03:52 INFO mapred.JobClient: FileSystemCounters
13/07/21 04:03:52 INFO mapred.JobClient: FILE_BYTES_READ=609559
13/07/21 04:03:52 INFO mapred.JobClient: FILE_BYTES_WRITTEN=770693
13/07/21 04:03:52 INFO mapred.JobClient: Map-Reduce Framework
13/07/21 04:03:52 INFO mapred.JobClient: Map output materialized bytes=25
13/07/21 04:03:52 INFO mapred.JobClient: Map input records=1
13/07/21 04:03:52 INFO mapred.JobClient: Reduce shuffle bytes=0
13/07/21 04:03:52 INFO mapred.JobClient: Spilled Records=2
13/07/21 04:03:52 INFO mapred.JobClient: Map output bytes=17
13/07/21 04:03:52 INFO mapred.JobClient: Total committed heap usage (bytes)=262946816
13/07/21 04:03:52 INFO mapred.JobClient: CPU time spent (ms)=0
13/07/21 04:03:52 INFO mapred.JobClient: Map input bytes=25
13/07/21 04:03:52 INFO mapred.JobClient: SPLIT_RAW_BYTES=115
13/07/21 04:03:52 INFO mapred.JobClient: Combine input records=0
13/07/21 04:03:52 INFO mapred.JobClient: Reduce input records=1
13/07/21 04:03:52 INFO mapred.JobClient: Reduce input groups=1
13/07/21 04:03:52 INFO mapred.JobClient: Combine output records=0
13/07/21 04:03:52 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
13/07/21 04:03:52 INFO mapred.JobClient: Reduce output records=1
13/07/21 04:03:52 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
13/07/21 04:03:52 INFO mapred.JobClient: Map output records=1
[hadoop@localhost hadoop-1.2.1]$
[hadoop@localhost hadoop-1.2.1]$ cat output/*
1 dfsadmin
[hadoop@localhost hadoop-1.2.1]$
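For intuition, what this grep example computes can be approximated with ordinary Unix tools. An illustrative local equivalent (not part of the Hadoop setup; sample data stands in for the conf/*.xml files):

```shell
# Roughly what the MapReduce grep example does: extract every match
# of 'dfs[a-z.]+' from the input (map phase), then count occurrences
# of each distinct match (reduce phase).
mkdir -p /tmp/grep-demo
printf '<name>dfsadmin</name>\n<name>mapred.job.tracker</name>\n' > /tmp/grep-demo/sample.xml
grep -ohE 'dfs[a-z.]+' /tmp/grep-demo/*.xml | sort | uniq -c | sort -rn
```

On the real tree, pointing the pipeline at `input/*.xml` should reproduce the job's "1 dfsadmin" result.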
3. Test Pseudo-Distributed Mode
In this mode, each Hadoop daemon runs in a separate Java process.
[hadoop@localhost hadoop-1.2.1]$ vim conf/core-site.xml
Change it to:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
[hadoop@localhost hadoop-1.2.1]$ vim conf/hdfs-site.xml
Change it to:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
[hadoop@localhost hadoop-1.2.1]$ vim conf/mapred-site.xml
Change it to:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
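The three edits above can also be scripted. A sketch that writes the same minimal configs (run from the hadoop-1.2.1 directory; note it overwrites the existing conf files):

```shell
# Write the minimal pseudo-distributed configs shown above.
mkdir -p conf
cat > conf/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
cat > conf/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
cat > conf/mapred-site.xml <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF
```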
Format the distributed filesystem
[hadoop@localhost hadoop-1.2.1]$ bin/hadoop namenode -format
13/07/21 04:07:29 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
13/07/21 04:07:32 INFO util.GSet: VM type = 64-bit
13/07/21 04:07:32 INFO util.GSet: 2% max memory = 19.33375 MB
13/07/21 04:07:32 INFO util.GSet: capacity = 2^21 = 2097152 entries
13/07/21 04:07:32 INFO util.GSet: recommended=2097152, actual=2097152
13/07/21 04:07:32 INFO namenode.FSNamesystem: fsOwner=hadoop
13/07/21 04:07:32 INFO namenode.FSNamesystem: supergroup=supergroup
13/07/21 04:07:32 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/07/21 04:07:32 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/07/21 04:07:32 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/07/21 04:07:32 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/07/21 04:07:32 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/07/21 04:07:32 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
13/07/21 04:07:32 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
13/07/21 04:07:32 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
13/07/21 04:07:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[hadoop@localhost hadoop-1.2.1]$
Start the Hadoop daemons
[hadoop@localhost hadoop-1.2.1]$ bin/start-all.sh
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-localhost.localdomain.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-localhost.localdomain.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-localhost.localdomain.out
[hadoop@localhost hadoop-1.2.1]$
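To confirm that each daemon really is running as its own Java process, check a process listing. A sketch (note `jps` ships with a JDK, which this guide does not install; with a JRE alone, substitute the output of `ps aux | grep java`):

```shell
# Verify that all five Hadoop 1.x daemons appear in a process listing.
# Call as: check_daemons "$(jps)"   (or pass equivalent ps output).
check_daemons() {
  listing=$1
  for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    case "$listing" in
      *"$d"*) ;;                       # daemon name found in listing
      *) echo "missing: $d"; return 1 ;;
    esac
  done
  echo "all daemons running"
}
```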
Logs are written to the ${HADOOP_LOG_DIR} directory (by default ${HADOOP_HOME}/logs).
Browse the NameNode and JobTracker web interfaces; the defaults are:
NameNode - http://localhost:50070/
JobTracker - http://localhost:50030/
Copy files into the distributed filesystem
[hadoop@localhost hadoop-1.2.1]$ bin/hadoop fs -put conf input
Run some of the example jobs (presumably the same grep example as above, now reading from HDFS, given the output below).
Then copy the output files from the distributed filesystem back to the local filesystem and examine them:
[hadoop@localhost hadoop-1.2.1]$ bin/hadoop fs -get output output
get: null
[hadoop@localhost hadoop-1.2.1]$ cat output/*
1	dfsadmin
(The "get: null" error above likely means the -get failed, possibly because a local output directory already exists from the standalone test; the cat would then be reading that earlier local copy.)
Stop the daemons with:
[hadoop@localhost hadoop-1.2.1]$ bin/stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
4. Test Fully-Distributed Mode
See the Hadoop Cluster Setup documentation:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
(End)
Related
[Study] Hadoop 2.2.0 Build (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html
[Study] Hadoop 2.2.0 Single Cluster Installation (Part 2) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64_7.html
[Study] Hadoop 2.2.0 Single Cluster Installation (Part 1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html
[Study] Hadoop 1.2.1 (rpm) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/10/hadoop-121-rpm-centos-64-x64.html
[Study] Hadoop 1.2.1 (bin) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html
[Study] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035
[Study] Cloud software Hadoop 1.0.0 Installation (CentOS 6.2 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=21166
[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=18513
[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.4 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=17974