Friday, June 10, 2011

[Research] Cloud Software Hadoop 0.20.203.0rc1 Installation (CentOS 5.6 x86)

June 10, 2011

Hadoop is a framework for building cloud systems. Modeled on the Google File System, it is written in Java and provides HDFS and MapReduce APIs.

Official website
http://hadoop.apache.org/common/releases.html
Download
http://apache.ntu.edu.tw/hadoop/core/

Quick Start
http://hadoop.apache.org/common/docs/r0.20.203.0/#Getting+Started
http://hadoop.apache.org/common/docs/r0.20.203.0/single_node_setup.html

I. Preparation

1. Install basic packages

Note: the steps below assume the JRE installer jre-6u26-linux-i586-rpm.bin has already been downloaded from the Sun/Oracle site.


[root@localhost ~]# yum -y install openssh rsync

[root@localhost ~]# sh ./jre-6u26-linux-i586-rpm.bin
[root@localhost ~]# find / -name java
/var/lib/alternatives/java
/usr/share/java
/usr/java
/usr/java/jre1.6.0_26/bin/java
/usr/lib/openoffice.org/ure/share/java
/usr/lib/openoffice.org/basis3.1/share/Scripts/java
/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre/bin/java
/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/jre/bin/java
/usr/lib/java
/usr/bin/java
/etc/java
/etc/alternatives/java
[root@localhost ~]#
Switch the active Java with alternatives:


[root@localhost ~]# alternatives --install /usr/bin/java java /usr/java/jre1.6.0_26/bin/java 100

[root@localhost ~]# alternatives --config java

There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/lib/jvm/jre-1.6.0-openjdk/bin/java
   2           /usr/lib/jvm/jre-1.4.2-gcj/bin/java
   3           /usr/java/jre1.6.0_26/bin/java

Enter to keep the current selection[+], or type selection number: 3
[root@localhost ~]#
If you installed the JDK instead, you can register and switch it in a similar way:
alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_26/bin/java 100
alternatives --install /usr/bin/javaws javaws /usr/java/jdk1.6.0_26/bin/javaws 100
alternatives --install /usr/bin/javac  javac /usr/java/jdk1.6.0_26/bin/javac 100
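Afterwards, verify that the expected version is active (a quick sanity check; the version string should report 1.6.0_26 if you installed the JRE/JDK above):

[root@localhost ~]# java -version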


2. Create a hadoop account, set its password, and switch to the hadoop user


[root@centos1 ~]# useradd  hadoop
[root@centos1 ~]# passwd  hadoop
[root@centos1 ~]# su  hadoop
[hadoop@localhost root]$ cd
[hadoop@localhost ~]$

3. Set up passwordless SSH login


[hadoop@localhost ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
2c:8a:b6:f5:99:e9:2d:a2:43:23:8c:44:a8:14:be:ee hadoop@localhost.localdomain
[hadoop@localhost ~]$

[hadoop@localhost ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[hadoop@localhost ~]$ chmod 600 .ssh/authorized_keys
Test it. The first connection may still ask for confirmation; type yes:


[hadoop@localhost ~]$ ssh  hadoop@localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is 00:be:ee:9e:c7:0c:af:1e:0a:08:11:4d:04:b2:f7:77.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

[hadoop@localhost ~]$ exit
Connection to localhost closed.
The second attempt should connect directly without any prompt:


[hadoop@localhost ~]$ ssh  hadoop@localhost
Last login: Thu Jun  9 17:18:02 2011 from localhost.localdomain
[hadoop@localhost ~]$
4. Download and extract Hadoop


[hadoop@localhost ~]$ wget http://apache.ntu.edu.tw/hadoop/core/hadoop-0.20.203.0/hadoop-0.20.203.0rc1.tar.gz
[hadoop@localhost ~]$ tar xzvf hadoop-0.20.203.0rc1.tar.gz
[hadoop@localhost ~]$ cd /home/hadoop/hadoop-0.20.203.0
[hadoop@localhost hadoop-0.20.203.0]$ vim /home/hadoop/hadoop-0.20.203.0/conf/hadoop-env.sh
Add this line:


export JAVA_HOME=/usr
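JAVA_HOME=/usr works here because Hadoop invokes $JAVA_HOME/bin/java, and /usr/bin/java is the symlink managed by alternatives above. A quick check:

[hadoop@localhost hadoop-0.20.203.0]$ ls -l /usr/bin/java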

II. Testing

1. Test whether the hadoop command runs


[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  mradmin              run a Map-Reduce admin client
  fsck                 run a DFS filesystem checking utility
  fs                   run a generic filesystem user client
  balancer             run a cluster balancing utility
  fetchdt              fetch a delegation token from the NameNode
  jobtracker           run the MapReduce job Tracker node
  pipes                run a Pipes job
  tasktracker          run a MapReduce task Tracker node
  historyserver        run job history servers as a standalone daemon
  job                  manipulate MapReduce jobs
  queue                get information regarding JobQueues
  version              print the version
  jar <jar>            run a jar file
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
[hadoop@localhost hadoop-0.20.203.0]$

2. Test Local (Standalone) Mode


[hadoop@localhost hadoop-0.20.203.0]$ mkdir input
[hadoop@localhost hadoop-0.20.203.0]$ cp conf/*.xml input
[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop jar hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z.]+'
11/06/10 08:10:35 INFO mapred.FileInputFormat: Total input paths to process : 6
11/06/10 08:10:36 INFO mapred.JobClient: Running job: job_local_0001
11/06/10 08:10:36 INFO mapred.MapTask: numReduceTasks: 1
11/06/10 08:10:36 INFO mapred.MapTask: io.sort.mb = 100
11/06/10 08:10:37 INFO mapred.JobClient:  map 0% reduce 0%
11/06/10 08:10:38 INFO mapred.MapTask: data buffer = 79691776/99614720
11/06/10 08:10:38 INFO mapred.MapTask: record buffer = 262144/327680
11/06/10 08:10:38 INFO mapred.MapTask: Starting flush of map output
11/06/10 08:10:38 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
11/06/10 08:10:39 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/capacity-scheduler.xml:0+7457
11/06/10 08:10:39 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
11/06/10 08:10:39 INFO mapred.MapTask: numReduceTasks: 1
11/06/10 08:10:39 INFO mapred.MapTask: io.sort.mb = 100
11/06/10 08:10:39 INFO mapred.MapTask: data buffer = 79691776/99614720
11/06/10 08:10:39 INFO mapred.MapTask: record buffer = 262144/327680
11/06/10 08:10:39 INFO mapred.MapTask: Starting flush of map output
11/06/10 08:10:39 INFO mapred.MapTask: Finished spill 0
11/06/10 08:10:39 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
11/06/10 08:10:39 INFO mapred.JobClient:  map 100% reduce 0%
11/06/10 08:10:42 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/hadoop-policy.xml:0+4644
11/06/10 08:10:42 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/hadoop-policy.xml:0+4644
11/06/10 08:10:42 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
11/06/10 08:10:42 INFO mapred.MapTask: numReduceTasks: 1
11/06/10 08:10:42 INFO mapred.MapTask: io.sort.mb = 100
11/06/10 08:10:42 INFO mapred.MapTask: data buffer = 79691776/99614720
11/06/10 08:10:42 INFO mapred.MapTask: record buffer = 262144/327680
11/06/10 08:10:42 INFO mapred.MapTask: Starting flush of map output
11/06/10 08:10:42 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
11/06/10 08:10:45 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/mapred-queue-acls.xml:0+2033
11/06/10 08:10:45 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/mapred-queue-acls.xml:0+2033
11/06/10 08:10:45 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
11/06/10 08:10:45 INFO mapred.MapTask: numReduceTasks: 1
11/06/10 08:10:45 INFO mapred.MapTask: io.sort.mb = 100
11/06/10 08:10:45 INFO mapred.MapTask: data buffer = 79691776/99614720
11/06/10 08:10:45 INFO mapred.MapTask: record buffer = 262144/327680
11/06/10 08:10:45 INFO mapred.MapTask: Starting flush of map output
11/06/10 08:10:45 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
11/06/10 08:10:48 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/hdfs-site.xml:0+178
11/06/10 08:10:48 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/hdfs-site.xml:0+178
11/06/10 08:10:48 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
11/06/10 08:10:48 INFO mapred.MapTask: numReduceTasks: 1
11/06/10 08:10:48 INFO mapred.MapTask: io.sort.mb = 100
11/06/10 08:10:49 INFO mapred.MapTask: data buffer = 79691776/99614720
11/06/10 08:10:49 INFO mapred.MapTask: record buffer = 262144/327680
11/06/10 08:10:49 INFO mapred.MapTask: Starting flush of map output
11/06/10 08:10:49 INFO mapred.Task: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting
11/06/10 08:10:52 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/core-site.xml:0+178
11/06/10 08:10:52 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/core-site.xml:0+178
11/06/10 08:10:52 INFO mapred.Task: Task 'attempt_local_0001_m_000004_0' done.
11/06/10 08:10:52 INFO mapred.MapTask: numReduceTasks: 1
11/06/10 08:10:52 INFO mapred.MapTask: io.sort.mb = 100
11/06/10 08:10:52 INFO mapred.MapTask: data buffer = 79691776/99614720
11/06/10 08:10:52 INFO mapred.MapTask: record buffer = 262144/327680
11/06/10 08:10:52 INFO mapred.MapTask: Starting flush of map output
11/06/10 08:10:52 INFO mapred.Task: Task:attempt_local_0001_m_000005_0 is done. And is in the process of commiting
11/06/10 08:10:55 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/input/mapred-site.xml:0+178
11/06/10 08:10:55 INFO mapred.Task: Task 'attempt_local_0001_m_000005_0' done.
11/06/10 08:10:55 INFO mapred.LocalJobRunner:
11/06/10 08:10:55 INFO mapred.Merger: Merging 6 sorted segments
11/06/10 08:10:55 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/06/10 08:10:55 INFO mapred.LocalJobRunner:
11/06/10 08:10:55 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
11/06/10 08:10:55 INFO mapred.LocalJobRunner:
11/06/10 08:10:55 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
11/06/10 08:10:55 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/home/hadoop/hadoop-0.20.203.0/grep-temp-1370586259
11/06/10 08:10:58 INFO mapred.LocalJobRunner: reduce > reduce
11/06/10 08:10:58 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
11/06/10 08:10:58 INFO mapred.JobClient:  map 100% reduce 100%
11/06/10 08:10:58 INFO mapred.JobClient: Job complete: job_local_0001
11/06/10 08:10:58 INFO mapred.JobClient: Counters: 17
11/06/10 08:10:58 INFO mapred.JobClient:   File Input Format Counters
11/06/10 08:10:58 INFO mapred.JobClient:     Bytes Read=14668
11/06/10 08:10:58 INFO mapred.JobClient:   File Output Format Counters
11/06/10 08:10:58 INFO mapred.JobClient:     Bytes Written=123
11/06/10 08:10:58 INFO mapred.JobClient:   FileSystemCounters
11/06/10 08:10:58 INFO mapred.JobClient:     FILE_BYTES_READ=1107694
11/06/10 08:10:58 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1232605
11/06/10 08:10:58 INFO mapred.JobClient:   Map-Reduce Framework
11/06/10 08:10:58 INFO mapred.JobClient:     Map output materialized bytes=55
11/06/10 08:10:58 INFO mapred.JobClient:     Map input records=357
11/06/10 08:10:58 INFO mapred.JobClient:     Reduce shuffle bytes=0
11/06/10 08:10:58 INFO mapred.JobClient:     Spilled Records=2
11/06/10 08:10:58 INFO mapred.JobClient:     Map output bytes=17
11/06/10 08:10:58 INFO mapred.JobClient:     Map input bytes=14668
11/06/10 08:10:58 INFO mapred.JobClient:     SPLIT_RAW_BYTES=671
11/06/10 08:10:58 INFO mapred.JobClient:     Combine input records=1
11/06/10 08:10:58 INFO mapred.JobClient:     Reduce input records=1
11/06/10 08:10:58 INFO mapred.JobClient:     Reduce input groups=1
11/06/10 08:10:58 INFO mapred.JobClient:     Combine output records=1
11/06/10 08:10:58 INFO mapred.JobClient:     Reduce output records=1
11/06/10 08:10:58 INFO mapred.JobClient:     Map output records=1
11/06/10 08:10:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/06/10 08:10:58 INFO mapred.FileInputFormat: Total input paths to process : 1
11/06/10 08:10:58 INFO mapred.JobClient: Running job: job_local_0002
11/06/10 08:10:58 INFO mapred.MapTask: numReduceTasks: 1
11/06/10 08:10:58 INFO mapred.MapTask: io.sort.mb = 100
11/06/10 08:10:59 INFO mapred.MapTask: data buffer = 79691776/99614720
11/06/10 08:10:59 INFO mapred.MapTask: record buffer = 262144/327680
11/06/10 08:10:59 INFO mapred.MapTask: Starting flush of map output
11/06/10 08:10:59 INFO mapred.MapTask: Finished spill 0
11/06/10 08:10:59 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
11/06/10 08:10:59 INFO mapred.JobClient:  map 0% reduce 0%
11/06/10 08:11:01 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/grep-temp-1370586259/part-00000:0+111
11/06/10 08:11:01 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-0.20.203.0/grep-temp-1370586259/part-00000:0+111
11/06/10 08:11:01 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
11/06/10 08:11:01 INFO mapred.LocalJobRunner:
11/06/10 08:11:01 INFO mapred.Merger: Merging 1 sorted segments
11/06/10 08:11:01 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/06/10 08:11:01 INFO mapred.LocalJobRunner:
11/06/10 08:11:01 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
11/06/10 08:11:01 INFO mapred.LocalJobRunner:
11/06/10 08:11:01 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
11/06/10 08:11:01 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/home/hadoop/hadoop-0.20.203.0/output
11/06/10 08:11:01 INFO mapred.LocalJobRunner: reduce > reduce
11/06/10 08:11:01 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
11/06/10 08:11:02 INFO mapred.JobClient:  map 100% reduce 100%
11/06/10 08:11:02 INFO mapred.JobClient: Job complete: job_local_0002
11/06/10 08:11:02 INFO mapred.JobClient: Counters: 17
11/06/10 08:11:02 INFO mapred.JobClient:   File Input Format Counters
11/06/10 08:11:02 INFO mapred.JobClient:     Bytes Read=123
11/06/10 08:11:02 INFO mapred.JobClient:   File Output Format Counters
11/06/10 08:11:02 INFO mapred.JobClient:     Bytes Written=23
11/06/10 08:11:02 INFO mapred.JobClient:   FileSystemCounters
11/06/10 08:11:02 INFO mapred.JobClient:     FILE_BYTES_READ=607477
11/06/10 08:11:02 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=701369
11/06/10 08:11:02 INFO mapred.JobClient:   Map-Reduce Framework
11/06/10 08:11:02 INFO mapred.JobClient:     Map output materialized bytes=25
11/06/10 08:11:02 INFO mapred.JobClient:     Map input records=1
11/06/10 08:11:02 INFO mapred.JobClient:     Reduce shuffle bytes=0
11/06/10 08:11:02 INFO mapred.JobClient:     Spilled Records=2
11/06/10 08:11:02 INFO mapred.JobClient:     Map output bytes=17
11/06/10 08:11:02 INFO mapred.JobClient:     Map input bytes=25
11/06/10 08:11:02 INFO mapred.JobClient:     SPLIT_RAW_BYTES=120
11/06/10 08:11:02 INFO mapred.JobClient:     Combine input records=0
11/06/10 08:11:02 INFO mapred.JobClient:     Reduce input records=1
11/06/10 08:11:02 INFO mapred.JobClient:     Reduce input groups=1
11/06/10 08:11:02 INFO mapred.JobClient:     Combine output records=0
11/06/10 08:11:02 INFO mapred.JobClient:     Reduce output records=1
11/06/10 08:11:02 INFO mapred.JobClient:     Map output records=1
[hadoop@localhost hadoop-0.20.203.0]$

[hadoop@localhost hadoop-0.20.203.0]$ cat output/*
1       dfsadmin
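The same examples jar ships other demo jobs as well; for instance, a word count over the same input (a sketch assuming the standard wordcount example; the output directory must not already exist):

[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount input wc-output
[hadoop@localhost hadoop-0.20.203.0]$ cat wc-output/* | head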

3. Test Pseudo-Distributed Mode

In this mode, every Hadoop daemon runs in a separate Java process.


[hadoop@localhost hadoop-0.20.203.0]$ vim conf/core-site.xml
Change it to:


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>



[hadoop@localhost hadoop-0.20.203.0]$ vim conf/hdfs-site.xml
Change it to:


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>



[hadoop@localhost hadoop-0.20.203.0]$ vim conf/mapred-site.xml
Change it to:


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Format the distributed filesystem:


[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop namenode -format
11/06/10 08:13:19 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
11/06/10 08:13:19 INFO util.GSet: VM type       = 32-bit
11/06/10 08:13:19 INFO util.GSet: 2% max memory = 19.33375 MB
11/06/10 08:13:19 INFO util.GSet: capacity      = 2^22 = 4194304 entries
11/06/10 08:13:19 INFO util.GSet: recommended=4194304, actual=4194304
11/06/10 08:13:19 INFO namenode.FSNamesystem: fsOwner=hadoop
11/06/10 08:13:19 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/10 08:13:19 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/10 08:13:19 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/06/10 08:13:19 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/06/10 08:13:19 INFO namenode.NameNode: Caching file names occuring more than 10 times
11/06/10 08:13:20 INFO common.Storage: Image file of size 112 saved in 0 seconds.
11/06/10 08:13:20 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
11/06/10 08:13:20 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[hadoop@localhost hadoop-0.20.203.0]$

Start the Hadoop daemons:


[hadoop@localhost hadoop-0.20.203.0]$ bin/start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /home/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-datanode-localhost.localdomain.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-jobtracker-localhost.localdomain.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-tasktracker-localhost.localdomain.out
[hadoop@localhost hadoop-0.20.203.0]$
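If a JDK is installed (jps ships with the JDK, not the JRE), you can confirm all five daemons are up; the list should show NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker:

[hadoop@localhost hadoop-0.20.203.0]$ jps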

Logs are written to the ${HADOOP_LOG_DIR} directory (by default ${HADOOP_HOME}/logs).
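For example, to peek at the NameNode output shown above (path taken from the start-all.sh output; the matching .log file in the same directory carries the detailed log):

[hadoop@localhost hadoop-0.20.203.0]$ tail logs/hadoop-hadoop-namenode-localhost.localdomain.out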

Browse the web interfaces for the NameNode and the JobTracker; by default they are at:

NameNode
http://localhost:50070/

JobTracker
http://localhost:50030/

Copy files into the distributed filesystem:


[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop fs -put conf input
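You can verify the upload with a listing:

[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop fs -ls input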
Run some of the example jobs.
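For example, the same grep job used in standalone mode, now reading from and writing to HDFS:

[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop jar hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z.]+'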

Copy the output files from the distributed filesystem to the local filesystem and examine them:


[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop fs -get output output
[hadoop@localhost hadoop-0.20.203.0]$ cat output/*
cat: output/output: Is a directory
1       dfsadmin
Or examine the output files directly on the distributed filesystem (the result should again be 1 dfsadmin):


[hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop fs -cat output/*
When you are done, stop the daemons with:


[hadoop@localhost hadoop-0.20.203.0]$ bin/stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
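To confirm everything has stopped, check that no Hadoop Java processes remain (the bracketed pattern keeps grep from matching itself):

[hadoop@localhost hadoop-0.20.203.0]$ ps aux | grep [j]ava
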
4. Test Fully-Distributed Mode

See the Hadoop Cluster Setup guide:
http://hadoop.apache.org/common/docs/r0.20.203.0/cluster_setup.html

(End)


Related

[Research] Hadoop 2.2.0 Compilation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html

[Research] Hadoop 2.2.0 Single Cluster Installation (Part 2) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64_7.html

[Research] Hadoop 2.2.0 Single Cluster Installation (Part 1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html

[Research] Hadoop 1.2.1 (rpm) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/10/hadoop-121-rpm-centos-64-x64.html

[Research] Hadoop 1.2.1 (bin) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html

[Research] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035

[Research] Cloud Software Hadoop 1.0.0 Installation (CentOS 6.2 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=21166

[Research] Cloud Software Hadoop 0.20.2 Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=18513

[Research] Cloud Software Hadoop 0.20.2 Installation (CentOS 5.4 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=17974


Thursday, June 9, 2011

[Research] Enabling SSL in IIS on Windows XP Professional

2011-06-09
2015-04-08: images updated

(Figure below) First install IIS: click "Start / Settings / Control Panel".

(Figure below) Click "Add or Remove Programs".

(Figure below) Click "Add/Remove Windows Components".

(Figure below) Check "Internet Information Services (IIS)" and click "Next".

(Figure below) IIS installation complete.


(Figure below) Launch IE and browse to http://localhost; you should see a page.

(Figure below) Change the address to https://localhost; the page should fail to load.

(Figure below) Download the IIS 6.0 Resource Kit Tools:
http://www.microsoft.com/downloads/details.aspx?FamilyID=56fc92ee-a71a-4c73-b628-ade629c89499&DisplayLang=en





(Figure below) Click "Start / Programs / IIS Resources / SelfSSL / SelfSSL".



(Figure below) Run selfssl (IIS must already be installed; SelfSSL comes from the IIS 6.0 Resource Kit).
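A typical invocation looks like the following; the flags are taken from SelfSSL's help text, so treat them as an assumption and confirm with selfssl /? on your machine (/N: sets the certificate common name, /V: the validity in days, /T adds the certificate to the local trusted list):

C:\Program Files\IIS Resources\SelfSSL>selfssl /N:CN=localhost /V:365 /T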

(Figure below) Browse to https://localhost and click the "Yes" button on the certificate warning.

(Figure below) You should now see the page, which means SSL is up and running.

(End)