Sunday, July 28, 2013

[Study] sbt 0.13.0-RC3 Installation (CentOS 6.4 x86)

sbt is a build tool for Scala, Java, and more. It requires Java 1.6 or later.

Official website
http://www.scala-sbt.org/

Installation reference
http://www.scala-sbt.org/0.13.0/docs/Getting-Started/Setup.html

Download Java from http://java.oracle.com/

Installation

rpm -ivh jdk-7u25-linux-x64.rpm
wget http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.13.0-RC3/sbt-launch.jar
cp sbt-launch.jar /bin/.

vi  /bin/sbt

Contents of /bin/sbt (a shebang line is added here so the script runs under a known shell):
#!/bin/sh
SBT_OPTS="-Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M"
java $SBT_OPTS -jar `dirname $0`/sbt-launch.jar "$@"

Make it executable, then run it:
chmod u+x /bin/sbt
sbt
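
To verify the launcher end to end, a minimal hello-world project can be built and run. This is only a sketch; the directory and file names below are illustrative assumptions, not part of the original post:

mkdir ~/hello && cd ~/hello
# sbt compiles any .scala files it finds in the project root
cat > hw.scala <<'EOF'
object Hi {
  def main(args: Array[String]) = println("Hi!")
}
EOF
sbt run    # the first run downloads Scala and dependencies, then prints "Hi!"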

(End)

[Study] sbt 0.13.0-RC3 Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/sbt-0130-rc3-centos-64-x86.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80040

[Study] sbt 0.12.4 Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/sbt-0124-centos-64-x64.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80041

[Study] sbt 0.12.4 Installation (CentOS 6.4 x64)

sbt is a build tool for Scala, Java, and more. It requires Java 1.6 or later.

Official website
http://www.scala-sbt.org/

Installation reference
http://www.scala-sbt.org/0.12.4/docs/Getting-Started/Setup.html

Download Java from http://java.oracle.com/

Installation

rpm -ivh jdk-7u25-linux-x64.rpm
wget http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.12.4/sbt-launch.jar
cp sbt-launch.jar /bin/.

vi  /bin/sbt

Contents of /bin/sbt (again with a shebang line added):
#!/bin/sh
java -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=384M -jar `dirname $0`/sbt-launch.jar "$@"

Make it executable, then run it:
chmod u+x /bin/sbt
sbt

(End)

[Study] sbt 0.13.0-RC3 Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/sbt-0130-rc3-centos-64-x86.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80040

[Study] sbt 0.12.4 Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/sbt-0124-centos-64-x64.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80041

[Study] Apache Mahout 0.8 Installation (CentOS 6.4 x64)

2013-07-28

Mahout's goal is to build scalable machine learning libraries.

Official website
http://mahout.apache.org/

Overview
https://cwiki.apache.org/confluence/display/MAHOUT/Overview

Download
http://www.apache.org/dyn/closer.cgi/mahout/
http://ftp.tc.edu.tw/pub/Apache/mahout/0.8/mahout-distribution-0.8-src.tar.gz

Requirements
Java 1.6.x or greater.
Maven 3.x to build the source code.

Installation references
https://cwiki.apache.org/confluence/display/MAHOUT/BuildingMahout
https://cwiki.apache.org/confluence/display/MAHOUT/Mahout+Wiki#MahoutWiki-Installation%2FSetup

Installation

# Install Subversion (SVN)
yum -y install subversion

# Install the Java JDK
rpm -ivh jdk-7u25-linux-x64.rpm

# Install Apache Maven
wget http://ftp.tc.edu.tw/pub/Apache/maven/maven-3/3.1.0/binaries/apache-maven-3.1.0-bin.tar.gz
tar zxvf apache-maven-3.1.0-bin.tar.gz -C /usr/local
export M2_HOME=/usr/local/apache-maven-3.1.0
export M2=$M2_HOME/bin
# MAVEN_OPTS is optional
export MAVEN_OPTS="-Xms256m -Xmx512m"
export PATH=$M2:$PATH
export JAVA_HOME=/usr/java/jdk1.7.0_25
# Verify the installation
/usr/local/apache-maven-3.1.0/bin/mvn -version
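
# The exports above only last for the current shell. One way to make them
# persistent across logins (an assumption of this write-up, not part of the
# original post) is a profile.d script:
cat > /etc/profile.d/maven.sh <<'EOF'
export JAVA_HOME=/usr/java/jdk1.7.0_25
export M2_HOME=/usr/local/apache-maven-3.1.0
export M2=$M2_HOME/bin
export PATH=$M2:$PATH
EOF
source /etc/profile.d/maven.sh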

# Check out Mahout with SVN
svn co http://svn.apache.org/repos/asf/mahout/trunk

# Switch to the core directory, then compile and install
cd /root/trunk/core
mvn compile
mvn install

# Switch to the examples directory and compile
cd /root/trunk/examples
mvn compile


Judging from the contents of /root/trunk/CHANGELOG, this checkout is probably best described as 0.8 plus MAHOUT-1295:
Release 0.9 - unreleased
  MAHOUT-1295: Excluded all Maven's target directories from distribution archives (sslavic)
Release 0.8 - 2013-07-25

(End)

[Study] Apache Mahout 0.8 Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/apache-mahout-08-centos-64-x64.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80039

[Study] Apache Maven 3.1.0 Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/apache-maven-310-centos-64-x64.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80036

[Study] Subversion 1.6.11 Installation (yum) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/subversion-1611-yumcentos-64-x64.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80037

[Study] Hadoop 1.1.2 Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035

[Study] Subversion 1.6.11 Installation (yum) (CentOS 6.4 x64)

Subversion is a version control tool.

Official website
http://subversion.tigris.org/

Installation references
http://subversion.apache.org/packages.html#centos
http://subversion.apache.org/source-code.html
http://subversion.apache.org/packages.html

Quick install
yum -y install subversion

Check the version
svn --version

Test
svn co http://svn.apache.org/repos/asf/mahout/trunk

Related files

[root@localhost ~]# find  / -name subversion
/etc/subversion
/etc/bash_completion.d/subversion

[root@localhost ~]# find  / -name svn
/usr/lib64/python2.6/site-packages/svn
/usr/bin/svn
[root@localhost ~]#

[root@localhost bin]# find / -name svnserve
/etc/rc.d/init.d/svnserve
/usr/bin/svnserve

Start the svn server

[root@localhost bin]# service svnserve start
Starting svnserve:                                         [  OK  ]
[root@localhost bin]# ps aux | grep svn
root      2999  0.0  0.0 182748  1204 ?        Ss   07:16   0:00 /usr/bin/svnserve --daemon --pid-file=/var/run/svnserve.pid
root      3004  0.0  0.0 103236   868 pts/1    S+   07:16   0:00 grep svn
[root@localhost bin]#

svn server configuration file
/etc/sysconfig/svnserve
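
To actually serve a repository over svn://, a minimal sketch (the repository path is an assumption; the root that svnserve serves depends on the OPTIONS line in /etc/sysconfig/svnserve):

mkdir -p /var/svn
svnadmin create /var/svn/repos
# if svnserve runs with --daemon --root=/var/svn, the repository is reachable as:
svn co svn://localhost/repos myrepo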

(End)

[Study] Subversion 1.6.11 Installation (yum) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/subversion-1611-yumcentos-64-x64.html
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80037

[Study] Apache Maven 3.1.0 Installation (CentOS 6.4 x64)

Introduction
http://maven.apache.org/ref/3.1.0/

Maven is a project development management and comprehension tool.

Download
http://maven.apache.org/download.cgi

Reference
http://maven.apache.org/download.cgi#Installation

Tutorials
http://maven.apache.org/guides/getting-started/index.html
http://maven.apache.org/guides/index.html

Installation
rpm -ivh jdk-7u25-linux-x64.rpm
wget http://ftp.tc.edu.tw/pub/Apache/maven/maven-3/3.1.0/binaries/apache-maven-3.1.0-bin.tar.gz
tar zxvf apache-maven-3.1.0-bin.tar.gz -C /usr/local

export M2_HOME=/usr/local/apache-maven-3.1.0
export M2=$M2_HOME/bin
# MAVEN_OPTS is optional
export MAVEN_OPTS="-Xms256m -Xmx512m"
export PATH=$M2:$PATH
export JAVA_HOME=/usr/java/jdk1.7.0_25

For convenience, create a symlink:
[root@localhost ~]#  ln  -s  /usr/local/apache-maven-3.1.0/bin/mvn  /usr/bin/mvn

Verify the installation

[root@localhost ~]# /usr/local/apache-maven-3.1.0/bin/mvn -version
Apache Maven 3.1.0 (893ca28a1da9d5f51ac03827af98bb730128f9f2; 2013-06-28 10:15:32+0800)
Maven home: /usr/local/apache-maven-3.1.0
Java version: 1.7.0_25, vendor: Oracle Corporation
Java home: /usr/java/jdk1.7.0_25/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-358.el6.x86_64", arch: "amd64", family: "unix"
[root@localhost ~]#  

Download test (generates a sample project; plugins are downloaded on first run)

[root@localhost ~]# mvn archetype:generate \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DgroupId=com.mycompany.app \
  -DartifactId=my-app

Compile test
[root@localhost ~]# cd  my-app
[root@localhost my-app]# mvn compile
[root@localhost my-app]# mvn test
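
Going one step further, the generated project can be packaged and executed; the artifact name below assumes the quickstart archetype's default version 1.0-SNAPSHOT:

mvn package
java -cp target/my-app-1.0-SNAPSHOT.jar com.mycompany.app.App   # prints "Hello World!" for the default quickstart archetype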

For more tutorials, see
http://maven.apache.org/guides/getting-started/index.html
http://maven.apache.org/guides/index.html

(End)

[Study] Apache Maven 3.1.0 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80036

Saturday, July 27, 2013

[Study] HBase 0.94.10 Database Installation (CentOS 6.4 x86)

Apache HBase is the Hadoop database, a distributed, scalable, big data store.

Official website
http://hbase.apache.org/

Installation guide
http://hbase.apache.org/book/quickstart.html

Installation (requires manually downloading Sun Java 1.6 or later yourself)

chmod +x jdk-6u45-linux-i586-rpm.bin
./jdk-6u45-linux-i586-rpm.bin
export JAVA_HOME=/usr/java/jdk1.6.0_45

wget http://ftp.mirror.tw/pub/apache/hbase/hbase-0.94.10/hbase-0.94.10.tar.gz
tar zxvf hbase-0.94.10.tar.gz -C /usr/local
export HBASE_HOME=/usr/local/hbase-0.94.10
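
The HBase scripts also need to find Java; besides exporting JAVA_HOME in the shell as above, one option (a sketch, not from the original post) is to set it in HBase's own environment file:

echo "export JAVA_HOME=/usr/java/jdk1.6.0_45" >> /usr/local/hbase-0.94.10/conf/hbase-env.sh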

Start

[root@localhost ~]# /usr/local/hbase-0.94.10//bin/start-hbase.sh
starting master, logging to /usr/local/hbase-0.94.10/bin/../logs/hbase-root-master-localhost.localdomain.out
[root@localhost ~]#

Check that it started

[root@localhost ~]# ps aux | grep hbase
root      3028  0.7  5.6 1235736 116612 pts/1  Sl   20:37   0:03 /usr/java/jdk1.6.0_45//bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/usr/local/hbase-0.94.10/bin/../logs -Dhbase.log.file=hbase-root-master-localhost.localdomain.log -Dhbase.home.dir=/usr/local/hbase-0.94.10/bin/.. -Dhbase.id.str=root -Dhbase.root.logger=INFO,DRFA -Djava.library.path=/usr/local/hbase-0.94.10/bin/../lib/native/Linux-i386-32 -Dhbase.security.logger=INFO,DRFAS -classpath /usr/local/hbase-0.94.10/bin/../conf:/usr/java/jdk1.6.0_45//lib/tools.jar:/usr/local/hbase-0.94.10/bin/..:/usr/local/hbase-0.94.10/bin/../hbase-0.94.10.jar:/usr/local/hbase-0.94.10/bin/../hbase-0.94.10-tests.jar:/usr/local/hbase-0.94.10/bin/../lib/activation-1.1.jar:/usr/local/hbase-0.94.10/bin/../lib/asm-3.1.jar:/usr/local/hbase-0.94.10/bin/../lib/avro-1.5.3.jar:/usr/local/hbase-0.94.10/bin/../lib/avro-ipc-1.5.3.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-beanutils-1.7.0.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-cli-1.2.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-codec-1.4.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-collections-3.2.1.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-configuration-1.6.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-digester-1.8.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-el-1.0.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-httpclient-3.1.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-io-2.1.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-lang-2.5.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-logging-1.1.1.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-math-2.1.jar:/usr/local/hbase-0.94.10/bin/../lib/commons-net-1.4.1.jar:/usr/local/hbase-0.94.10/bin/../lib/core-3.1.1.jar:/usr/local/hbase-0.94.10/bin/../lib/guava-11.0.2.jar:/usr/local/hbase-0.94.10/bin/../lib/hadoop-core-1.0.4.jar:/usr/local/hbase-0.94.10/bin/../lib/high-scale-lib-1.1.1.jar:/usr/local/hbase-0.94.10/bin/../lib/httpclient-4.1.2.jar:/usr/local/hbase-0.94.10/bin/../lib/httpcore-4.1.3.jar:/usr/local/hbase-0.94.10/bin/../lib/jackson-core-asl-1.8.8.jar:/usr/local/hbase-0.94.10/bin/../lib/jackson-jaxrs-1.8.8.jar:/usr/local/hbase-0.94.10/bin/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hbase-0.94.10/bin/../lib/jackson-xc-1.8.8.jar:/usr/local/hbase-0.94.10/bin/../lib/jamon-runtime-2.3.1.jar:/usr/local/hbase-0.94.10/bin/../lib/jasper-compiler-5.5.23.jar:/usr/local/hbase-0.94.10/bin/../lib/jasper-runtime-5.5.23.jar:/usr/local/hbase-0.94.10/bin/../lib/jaxb-api-2.1.jar:/usr/local/hbase-0.94.10/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/local/hbase-0.94.10/bin/../lib/jersey-core-1.8.jar:/usr/local/hbase-0.94.10/bin/../lib/jersey-json-1.8.jar:/usr/local/hbase-0.94.10/bin/../lib/jersey-server-1.8.jar:/usr/local/hbase-0.94.10/bin/../lib/jettison-1.1.jar:/usr/local/hbase-0.94.10/bin/../lib/jetty-6.1.26.jar:/usr/local/hbase-0.94.10/bin/../lib/jetty-util-6.1.26.jar:/usr/local/hbase-0.94.10/bin/../lib/jruby-complete-1.6.5.jar:/usr/local/hbase-0.94.10/bin/../lib/jsp-2.1-6.1.14.jar:/usr/local/hbase-0.94.10/bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/local/hbase-0.94.10/bin/../lib/jsr305-1.3.9.jar:/usr/local/hbase-0.94.10/bin/../lib/junit-4.10-HBASE-1.jar:/usr/local/hbase-0.94.10/bin/../lib/libthrift-0.8.0.jar:/usr/local/hbase-0.94.10/bin/../lib/log4j-1.2.16.jar:/usr/local/hbase-0.94.10/bin/../lib/metrics-core-2.1.2.jar:/usr/local/hbase-0.94.10/bin/../lib/netty-3.2.4.Final.jar:/usr/local/hbase-0.94.10/bin/../lib/protobuf-java-2.4.0a.jar:/u
sr/local/hbase-0.94.10/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/local/hbase-0.94.10/bin/../lib/slf4j-api-1.4.3.jar:/usr/local/hbase-0.94.10/bin/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hbase-0.94.10/bin/../lib/snappy-java-1.0.3.2.jar:/usr/local/hbase-0.94.10/bin/../lib/stax-api-1.0.1.jar:/usr/local/hbase-0.94.10/bin/../lib/velocity-1.7.jar:/usr/local/hbase-0.94.10/bin/../lib/xmlenc-0.52.jar:/usr/local/hbase-0.94.10/bin/../lib/zookeeper-3.4.5.jar: org.apache.hadoop.hbase.master.HMaster start
root      3216  0.0  0.0   4356   756 pts/1    S+   20:46   0:00 grep hbase
[root@localhost ~]#

Enter the HBase shell (admin mode)

[root@localhost ~]#  ${HBASE_HOME}/bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.10, r1504995, Fri Jul 19 20:24:16 UTC 2013

hbase(main):001:0>

Help output

hbase(main):001:0> help
HBase Shell, version 0.94.10, r1504995, Fri Jul 19 20:24:16 UTC 2013
Type 'help "COMMAND"', (e.g. 'help "get"' -- the quotes are necessary) for help on a specific command.
Commands are grouped. Type 'help "COMMAND_GROUP"', (e.g. 'help "general"') for help on a command group.

COMMAND GROUPS:
  Group name: general
  Commands: status, version, whoami

  Group name: ddl
  Commands: alter, alter_async, alter_status, create, describe, disable, disable_all, drop, drop_all, enable, enable_all, exists, is_disabled, is_enabled, list, show_filters

  Group name: dml
  Commands: count, delete, deleteall, get, get_counter, incr, put, scan, truncate

  Group name: tools
  Commands: assign, balance_switch, balancer, close_region, compact, flush, hlog_roll, major_compact, move, split, unassign, zk_dump

  Group name: replication
  Commands: add_peer, disable_peer, enable_peer, list_peers, remove_peer, start_replication, stop_replication

  Group name: snapshot
  Commands: clone_snapshot, delete_snapshot, list_snapshots, restore_snapshot, snapshot

  Group name: security
  Commands: grant, revoke, user_permission

SHELL USAGE:
Quote all names in HBase Shell such as table and column names.  Commas delimit
command parameters.  Type <RETURN> after entering a command to run it.
Dictionaries of configuration used in the creation and alteration of tables are
Ruby Hashes. They look like this:

  {'key1' => 'value1', 'key2' => 'value2', ...}

and are opened and closed with curley-braces.  Key/values are delimited by the
'=>' character combination.  Usually keys are predefined constants such as
NAME, VERSIONS, COMPRESSION, etc.  Constants do not need to be quoted.  Type
'Object.constants' to see a (messy) list of all constants in the environment.

If you are using binary keys or values and need to enter them in the shell, use
double-quote'd hexadecimal representation. For example:

  hbase> get 't1', "key\x03\x3f\xcd"
  hbase> get 't1', "key\003\023\011"
  hbase> put 't1', "test\xef\xff", 'f1:', "\x01\x33\x40"

The HBase shell is the (J)Ruby IRB with the above HBase-specific commands added.
For more on the HBase Shell, see http://hbase.apache.org/docs/current/book.html
hbase(main):002:0>

Create a table called mytable with a column family called mycolumnfamily.

hbase(main):003:0* create "mytable", "mycolumnfamily"
0 row(s) in 1.3820 seconds

View mytable's information

hbase(main):004:0> describe "mytable"
DESCRIPTION                                                             ENABLED
 {NAME => 'mytable', FAMILIES => [{NAME => 'mycolumnfamily', COMPRESSIO true
 N => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536
 ', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
1 row(s) in 0.0640 seconds

Add a row myrow with column "mycolumnfamily:x" set to value v

hbase(main):005:0> put "mytable", "myrow", "mycolumnfamily:x", "v"
0 row(s) in 0.0530 seconds

Display the value

hbase(main):006:0> get "mytable", "myrow"
COLUMN                                        CELL
 mycolumnfamily:x                             timestamp=1374929517279, value=v
1 row(s) in 0.0200 seconds

Scan mytable

hbase(main):007:0> scan "mytable"
ROW                                           COLUMN+CELL
 myrow                                        column=mycolumnfamily:x, timestamp=1374929282679, value=v
1 row(s) in 0.0270 seconds
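
To clean up, a table must be disabled before it can be dropped. A sketch, driving the shell non-interactively from bash:

${HBASE_HOME}/bin/hbase shell <<'EOF'
disable "mytable"
drop "mytable"
EOF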

Leave the interactive shell

hbase(main):006:0> quit

Stop HBase

[root@localhost ~]# /usr/local/hbase-0.94.10//bin/stop-hbase.sh
stopping hbase.............

(End)

[Study] HBase 0.94.10 Database Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80022

[Study] Hadoop 1.2.1 Installation (CentOS 6.4 x64)

2013-07-27

Hadoop is a system for building clouds. Modeled on the Google File System and developed in Java, it provides HDFS and a MapReduce API.

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.

Official website
http://hadoop.apache.org/

Installation references
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleNodeSetup.html

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

Download
http://apache.cdpa.nsysu.edu.tw/hadoop/common/hadoop-1.2.1/

Part 1: Preparation

1. Install basic packages

[root@localhost ~]# yum -y install openssh rsync

[root@localhost ~]# chmod +x jre-6u45-linux-x64-rpm.bin
[root@localhost ~]# ./jre-6u45-linux-x64-rpm.bin

[root@localhost ~]# find / -name java
/etc/java
/etc/alternatives/java
/etc/pki/java
/var/lib/alternatives/java
/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/bin/java
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java
/usr/lib/java
/usr/share/java
/usr/lib64/libreoffice/ure/share/java
/usr/lib64/libreoffice/basis3.4/share/Scripts/java
/usr/bin/java
/usr/java
/usr/java/jre1.6.0_45/bin/java
[root@localhost ~]#

2. Create a hadoop account, set its password, and switch to the hadoop user

[root@centos1 ~]# useradd  hadoop
[root@centos1 ~]# passwd  hadoop
[root@centos1 ~]# su  hadoop
[hadoop@localhost root]$ cd
[hadoop@localhost ~]$

3. Set up passwordless ssh

[hadoop@localhost ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
e9:eb:5c:6e:ef:fe:31:13:ac:9d:6a:1d:1f:ae:b6:7f hadoop@localhost.localdomain
The key's randomart image is:
+--[ DSA 1024]----+
|                 |
|                 |
|                 |
|         .   .   |
|        S     o  |
|       .     o.+ |
|        . . ..Bo.|
|       . +. .o.=E|
|       .+..=*+=..|
+-----------------+
[hadoop@localhost ~]$

[hadoop@localhost ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[hadoop@localhost ~]$ chmod 600 .ssh/authorized_keys

Test it; the first connection may still prompt with a question

[hadoop@localhost ~]$ ssh  hadoop@localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 6b:a1:53:17:70:de:0d:ff:8d:f9:01:e1:ad:e6:05:2e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
[hadoop@localhost ~]$ exit
logout
Connection to localhost closed.
[hadoop@localhost ~]$

The second attempt should connect directly

[hadoop@localhost ~]$ ssh  hadoop@localhost
Last login: Sun Jul 21 03:57:48 2013 from localhost
[hadoop@localhost ~]$

4. Download and unpack Hadoop

[hadoop@localhost ~]$ wget http://apache.cdpa.nsysu.edu.tw/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
[hadoop@localhost ~]$ tar xzvf hadoop-1.2.1.tar.gz
[hadoop@localhost ~]$ cd /home/hadoop/hadoop-1.2.1
[hadoop@localhost hadoop-1.2.1]$ vim /home/hadoop/hadoop-1.2.1/conf/hadoop-env.sh

Add this line:
export JAVA_HOME=/usr

5. Test that Hadoop can run (see Part 2 below)

Part 2: Testing

1. Test the hadoop command

[hadoop@localhost hadoop-1.2.1]$ bin/hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  mradmin              run a Map-Reduce admin client
  fsck                 run a DFS filesystem checking utility
  fs                   run a generic filesystem user client
  balancer             run a cluster balancing utility
  fetchdt              fetch a delegation token from the NameNode
  jobtracker           run the MapReduce job Tracker node
  pipes                run a Pipes job
  tasktracker          run a MapReduce task Tracker node
  historyserver        run job history servers as a standalone daemon
  job                  manipulate MapReduce jobs
  queue                get information regarding JobQueues
  version              print the version
  jar <jar>            run a jar file
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
[hadoop@localhost hadoop-1.2.1]$

2. Test Local (Standalone) Mode

[hadoop@localhost hadoop-1.2.1]$ mkdir input
[hadoop@localhost hadoop-1.2.1]$ cp conf/*.xml input
[hadoop@localhost hadoop-1.2.1]$ bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+'
13/07/21 04:03:49 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/07/21 04:03:49 WARN snappy.LoadSnappy: Snappy native library not loaded
13/07/21 04:03:49 INFO mapred.FileInputFormat: Total input paths to process : 7
13/07/21 04:03:50 INFO mapred.JobClient: Running job: job_local_0001
13/07/21 04:03:50 INFO util.ProcessTree: setsid exited with exit code 0
13/07/21 04:03:50 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@4195d263
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/capacity-scheduler.xml:0+7457
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
13/07/21 04:03:50 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@255722d7
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.MapTask: Finished spill 0
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/hadoop-policy.xml:0+4644
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
13/07/21 04:03:50 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6a4993d4
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/mapred-queue-acls.xml:0+2033
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
13/07/21 04:03:50 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5e38634a
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/fair-scheduler.xml:0+327
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
13/07/21 04:03:50 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5e3b76ea
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/mapred-site.xml:0+178
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000004_0' done.
13/07/21 04:03:50 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@508610d2
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000005_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/hdfs-site.xml:0+178
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000005_0' done.
13/07/21 04:03:50 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@67cef2cd
13/07/21 04:03:50 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:50 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:50 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:50 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:50 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_m_000006_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/input/core-site.xml:0+178
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_m_000006_0' done.
13/07/21 04:03:50 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5a766050
13/07/21 04:03:50 INFO mapred.LocalJobRunner:
13/07/21 04:03:50 INFO mapred.Merger: Merging 7 sorted segments
13/07/21 04:03:50 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
13/07/21 04:03:50 INFO mapred.LocalJobRunner:
13/07/21 04:03:50 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
13/07/21 04:03:50 INFO mapred.LocalJobRunner:
13/07/21 04:03:50 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
13/07/21 04:03:50 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/home/hadoop/hadoop-1.2.1/grep-temp-1078176698
13/07/21 04:03:50 INFO mapred.LocalJobRunner: reduce > reduce
13/07/21 04:03:50 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
13/07/21 04:03:51 INFO mapred.JobClient:  map 100% reduce 100%
13/07/21 04:03:51 INFO mapred.JobClient: Job complete: job_local_0001
13/07/21 04:03:51 INFO mapred.JobClient: Counters: 21
13/07/21 04:03:51 INFO mapred.JobClient:   File Input Format Counters
13/07/21 04:03:51 INFO mapred.JobClient:     Bytes Read=14995
13/07/21 04:03:51 INFO mapred.JobClient:   File Output Format Counters
13/07/21 04:03:51 INFO mapred.JobClient:     Bytes Written=123
13/07/21 04:03:51 INFO mapred.JobClient:   FileSystemCounters
13/07/21 04:03:51 INFO mapred.JobClient:     FILE_BYTES_READ=1272808
13/07/21 04:03:51 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1547930
13/07/21 04:03:51 INFO mapred.JobClient:   Map-Reduce Framework
13/07/21 04:03:51 INFO mapred.JobClient:     Map output materialized bytes=61
13/07/21 04:03:51 INFO mapred.JobClient:     Map input records=369
13/07/21 04:03:51 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/07/21 04:03:51 INFO mapred.JobClient:     Spilled Records=2
13/07/21 04:03:51 INFO mapred.JobClient:     Map output bytes=17
13/07/21 04:03:51 INFO mapred.JobClient:     Total committed heap usage (bytes)=1204920320
13/07/21 04:03:51 INFO mapred.JobClient:     CPU time spent (ms)=0
13/07/21 04:03:51 INFO mapred.JobClient:     Map input bytes=14995
13/07/21 04:03:51 INFO mapred.JobClient:     SPLIT_RAW_BYTES=749
13/07/21 04:03:51 INFO mapred.JobClient:     Combine input records=1
13/07/21 04:03:51 INFO mapred.JobClient:     Reduce input records=1
13/07/21 04:03:51 INFO mapred.JobClient:     Reduce input groups=1
13/07/21 04:03:51 INFO mapred.JobClient:     Combine output records=1
13/07/21 04:03:51 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
13/07/21 04:03:51 INFO mapred.JobClient:     Reduce output records=1
13/07/21 04:03:51 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
13/07/21 04:03:51 INFO mapred.JobClient:     Map output records=1
13/07/21 04:03:51 INFO mapred.FileInputFormat: Total input paths to process : 1
13/07/21 04:03:51 INFO mapred.JobClient: Running job: job_local_0002
13/07/21 04:03:51 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5717e4ff
13/07/21 04:03:51 INFO mapred.MapTask: numReduceTasks: 1
13/07/21 04:03:51 INFO mapred.MapTask: io.sort.mb = 100
13/07/21 04:03:51 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/21 04:03:51 INFO mapred.MapTask: record buffer = 262144/327680
13/07/21 04:03:51 INFO mapred.MapTask: Starting flush of map output
13/07/21 04:03:51 INFO mapred.MapTask: Finished spill 0
13/07/21 04:03:51 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
13/07/21 04:03:51 INFO mapred.LocalJobRunner: file:/home/hadoop/hadoop-1.2.1/grep-temp-1078176698/part-00000:0+111
13/07/21 04:03:51 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
13/07/21 04:03:51 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1466f971
13/07/21 04:03:51 INFO mapred.LocalJobRunner:
13/07/21 04:03:51 INFO mapred.Merger: Merging 1 sorted segments
13/07/21 04:03:51 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
13/07/21 04:03:51 INFO mapred.LocalJobRunner:
13/07/21 04:03:51 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
13/07/21 04:03:51 INFO mapred.LocalJobRunner:
13/07/21 04:03:51 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
13/07/21 04:03:51 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/home/hadoop/hadoop-1.2.1/output
13/07/21 04:03:51 INFO mapred.LocalJobRunner: reduce > reduce
13/07/21 04:03:51 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
13/07/21 04:03:52 INFO mapred.JobClient:  map 100% reduce 100%
13/07/21 04:03:52 INFO mapred.JobClient: Job complete: job_local_0002
13/07/21 04:03:52 INFO mapred.JobClient: Counters: 21
13/07/21 04:03:52 INFO mapred.JobClient:   File Input Format Counters
13/07/21 04:03:52 INFO mapred.JobClient:     Bytes Read=123
13/07/21 04:03:52 INFO mapred.JobClient:   File Output Format Counters
13/07/21 04:03:52 INFO mapred.JobClient:     Bytes Written=23
13/07/21 04:03:52 INFO mapred.JobClient:   FileSystemCounters
13/07/21 04:03:52 INFO mapred.JobClient:     FILE_BYTES_READ=609559
13/07/21 04:03:52 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=770693
13/07/21 04:03:52 INFO mapred.JobClient:   Map-Reduce Framework
13/07/21 04:03:52 INFO mapred.JobClient:     Map output materialized bytes=25
13/07/21 04:03:52 INFO mapred.JobClient:     Map input records=1
13/07/21 04:03:52 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/07/21 04:03:52 INFO mapred.JobClient:     Spilled Records=2
13/07/21 04:03:52 INFO mapred.JobClient:     Map output bytes=17
13/07/21 04:03:52 INFO mapred.JobClient:     Total committed heap usage (bytes)=262946816
13/07/21 04:03:52 INFO mapred.JobClient:     CPU time spent (ms)=0
13/07/21 04:03:52 INFO mapred.JobClient:     Map input bytes=25
13/07/21 04:03:52 INFO mapred.JobClient:     SPLIT_RAW_BYTES=115
13/07/21 04:03:52 INFO mapred.JobClient:     Combine input records=0
13/07/21 04:03:52 INFO mapred.JobClient:     Reduce input records=1
13/07/21 04:03:52 INFO mapred.JobClient:     Reduce input groups=1
13/07/21 04:03:52 INFO mapred.JobClient:     Combine output records=0
13/07/21 04:03:52 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
13/07/21 04:03:52 INFO mapred.JobClient:     Reduce output records=1
13/07/21 04:03:52 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
13/07/21 04:03:52 INFO mapred.JobClient:     Map output records=1
[hadoop@localhost hadoop-1.2.1]$

[hadoop@localhost hadoop-1.2.1]$ cat output/*
1       dfsadmin
[hadoop@localhost hadoop-1.2.1]$

3. Test Pseudo-Distributed Mode

In this mode, each Hadoop daemon runs in a separate Java process.

[hadoop@localhost hadoop-1.2.1]$ vim conf/core-site.xml

Change it to:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

[hadoop@localhost hadoop-1.2.1]$ vim conf/hdfs-site.xml

Change it to:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>


[hadoop@localhost hadoop-1.2.1]$ vim conf/mapred-site.xml

Change it to:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Format the distributed filesystem

[hadoop@localhost hadoop-1.2.1]$ bin/hadoop namenode -format
13/07/21 04:07:29 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
13/07/21 04:07:32 INFO util.GSet: VM type       = 64-bit
13/07/21 04:07:32 INFO util.GSet: 2% max memory = 19.33375 MB
13/07/21 04:07:32 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/07/21 04:07:32 INFO util.GSet: recommended=2097152, actual=2097152
13/07/21 04:07:32 INFO namenode.FSNamesystem: fsOwner=hadoop
13/07/21 04:07:32 INFO namenode.FSNamesystem: supergroup=supergroup
13/07/21 04:07:32 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/07/21 04:07:32 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/07/21 04:07:32 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/07/21 04:07:32 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/07/21 04:07:32 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/07/21 04:07:32 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
13/07/21 04:07:32 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
13/07/21 04:07:32 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
13/07/21 04:07:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[hadoop@localhost hadoop-1.2.1]$


Start the Hadoop daemons

[hadoop@localhost hadoop-1.2.1]$ bin/start-all.sh
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-localhost.localdomain.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-localhost.localdomain.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-localhost.localdomain.out
[hadoop@localhost hadoop-1.2.1]$

Logs are written to the ${HADOOP_LOG_DIR} directory (default: ${HADOOP_HOME}/logs).

Browse the NameNode and JobTracker web interfaces; by default they are at:

NameNode
http://localhost:50070/

JobTracker
http://localhost:50030/

Copy files into the distributed filesystem

[hadoop@localhost hadoop-1.2.1]$ bin/hadoop fs -put conf input

Run some of the provided examples (see the command sketch below)
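
The original post omits the job command at this point; judging from the dfsadmin output checked below, it is presumably the quickstart's grep example run against HDFS:

[hadoop@localhost hadoop-1.2.1]$ bin/hadoop jar hadoop-examples-1.2.1.jar grep input output 'dfs[a-z.]+'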

Copy the output files from the distributed filesystem to the local filesystem and examine them

[hadoop@localhost hadoop-1.2.1]$ bin/hadoop fs -get output output
get: null
[hadoop@localhost hadoop-1.2.1]$ cat output/*
1       dfsadmin
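
Or view the output files directly on the distributed filesystem:

[hadoop@localhost hadoop-1.2.1]$ bin/hadoop fs -cat output/*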

To stop the daemons, run:
[hadoop@localhost hadoop-1.2.1]$ bin/stop-all.sh

stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode

4. Test Fully-Distributed Mode

See Hadoop Cluster Setup
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

(End)

Related

[Study] Hadoop 2.2.0 Build (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-centos-64-x64.html

[Study] Hadoop 2.2.0 Single Cluster Installation (Part 2) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64_7.html

[Study] Hadoop 2.2.0 Single Cluster Installation (Part 1) (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/11/hadoop-220-single-cluster-centos-64-x64.html

[Study] Hadoop 1.2.1 (rpm) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/10/hadoop-121-rpm-centos-64-x64.html

[Study] Hadoop 1.2.1 (bin) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/hadoop-112-centos-64-x64.html

[Study] Hadoop 1.2.1 Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80035

[Study] Cloud software Hadoop 1.0.0 Installation (CentOS 6.2 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=21166

[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=18513

[Study] Cloud software Hadoop 0.20.2 Installation (CentOS 5.4 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=17974



[Study] MongoDB 2.4.5 Document Database Installation (CentOS 6.4 x64)

Official website
http://www.mongodb.org/

Installation reference
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/

Installation

vi  /etc/yum.repos.d/10gen.repo

[10gen]
name=10gen Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1

yum -y install mongo-10gen mongo-10gen-server
service mongod start
chkconfig mongod on

Miscellaneous:
Init script: /etc/rc.d/init.d/mongod
Configuration file: /etc/mongod.conf
Data files: /var/lib/mongo
Logs: /var/log/mongo

Test

[root@localhost ~]# mongo
MongoDB shell version: 2.4.5
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
> db.test.save( { a: 1 } )
> db.test.find()
{ "_id" : ObjectId("51eb24bdba664ac5a3ca2a52"), "a" : 1 }
> help
        db.help()                    help on db methods
        db.mycoll.help()             help on collection methods
        sh.help()                    sharding helpers
        rs.help()                    replica set helpers
        help admin                   administrative help
        help connect                 connecting to a db help
        help keys                    key shortcuts
        help misc                    misc things to know
        help mr                      mapreduce

        show dbs                     show database names
        show collections             show collections in current database
        show users                   show users in current database
        show profile                 show most recent system.profile entries with time >= 1ms
        show logs                    show the accessible logger names
        show log [name]              prints out the last segment of log in memory, 'global' is default
        use <db_name>                set current database
        db.foo.find()                list objects in collection foo
        db.foo.find( { a : 1 } )     list objects in foo where a == 1
        it                           result of the last line evaluated; use to further iterate
        DBQuery.shellBatchSize = x   set default number of items to display on shell
        exit                         quit the mongo shell
> exit
bye
[root@localhost ~]#
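
A few more CRUD operations can be driven non-interactively from bash (a sketch; the field names are illustrative):

mongo <<'EOF'
db.test.update({ a: 1 }, { $set: { b: 2 } })
db.test.find().forEach(printjson)
db.test.remove({ a: 1 })
EOF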

(End)

[Study] MongoDB 2.4.5 Document Database Installation (CentOS 6.4 x64)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=80023

[Study] MongoDB 1.6.5 Document Database Installation (CentOS 5.5 x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=19357

[Study] MongoDB 1.6.5 Document Database Installation (Windows x86)
http://forum.icst.org.tw/phpbb/viewtopic.php?t=19395
 
Open-source cloud computing technology series, part 3 (10gen): installation and configuration
http://forum.icst.org.tw/phpbb/viewtopic.php?t=19386

Thirty-five non-mainstream open-source databases, starring MongoDB
http://forum.icst.org.tw/phpbb/viewtopic.php?t=19377

Friday, July 26, 2013

[Study] Apache Zookeeper 3.3.5 Installation (CentOS 6.4 x64)


Official website
http://zookeeper.apache.org/

Overview

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications. Each time they are implemented there is a lot of work that goes into fixing the bugs and race conditions that are inevitable. Because of the difficulty of implementing these kinds of services, applications initially usually skimp on them, which make them brittle in the presence of change and difficult to manage. Even when done correctly, different implementations of these services lead to management complexity when the applications are deployed.

Installation

wget http://apache.cdpa.nsysu.edu.tw/zookeeper/zookeeper-3.3.5/zookeeper-3.3.5.tar.gz
tar zxvf  zookeeper-3.3.5.tar.gz -C /usr/local
mkdir -p /var/lib/zookeeper
echo tickTime=2000 > /usr/local/zookeeper-3.3.5/conf/zoo.cfg
echo dataDir=/var/lib/zookeeper >> /usr/local/zookeeper-3.3.5/conf/zoo.cfg
echo clientPort=2181 >> /usr/local/zookeeper-3.3.5/conf/zoo.cfg
cat /usr/local/zookeeper-3.3.5/conf/zoo.cfg

Show a brief usage hint (run with no arguments)

/usr/local/zookeeper-3.3.5/bin/zkServer.sh

Result
[root@localhost ~]# /usr/local/zookeeper-3.3.5/bin/zkServer.sh
JMX enabled by default
Using config: /usr/local/zookeeper-3.3.5/bin/../conf/zoo.cfg
Usage: /usr/local/zookeeper-3.3.5/bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}

Start

/usr/local/zookeeper-3.3.5/bin/zkServer.sh start


Stop

/usr/local/zookeeper-3.3.5/bin/zkServer.sh stop
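
To check that the server answers, the bundled CLI can create and read a znode (a sketch; the znode name is illustrative, and feeding commands over stdin is an assumption rather than documented usage):

/usr/local/zookeeper-3.3.5/bin/zkCli.sh -server 127.0.0.1:2181 <<'EOF'
create /ztest hello
get /ztest
delete /ztest
quit
EOF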

For the application development side, see
http://zookeeper.apache.org/doc/trunk/zookeeperStarted.html#sc_InstallingSingleMode

(End)

Related

[Study] Apache Zookeeper 3.4.6 Installation (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/apache-zookeeper-346-centos-70-x8664.html

[Study] Apache Zookeeper 3.3.5 Installation (CentOS 6.4 x64)
http://shaurong.blogspot.com/2013/07/apache-zookeeper-345.html

Thursday, July 25, 2013

[Study] Creating an Office/Visio/Project 2010 + SP2 Slipstream Disc

Background:
[Study] Office/Visio/Project 2010 SP2 Installation Notes (Suspected Bug)
http://shaurong.blogspot.tw/2013/07/officevisioproject-2010-sp2-bug.html

1. Run:
mkdir  C:\Office2010x64SP2
officesp2010-kb2687455-fullfile-x64-zh-tw.exe  /extract:C:\Office2010x64SP2

2. Insert the Office 2010 disc into the drive.

3. Run UltraISO:
  Drag the entire contents of the disc into the UltraISO window.
  Drag the entire contents of C:\Office2010x64SP2 into the Updates folder in the UltraISO window.

4. Save the disc image as an .iso file.

Test 1: Took the freshly built Office 2010 with SP2 to a Windows 7 x64 machine (with no Office installed) and installed it; installation succeeded. Check the current version:

(Figure) Open the File menu, choose Help, then click "Additional Version and Copyright Information".

(Figure) The version shown is SP2, which means the slipstream disc was built successfully and installs correctly.

The other SP2 combinations (Office 2010 x86, Visio 2010 x86/x64, Project 2010 x86/x64) were not tested.

Test 2: On a machine that already had Office 2010 with SP1, installed the freshly built Office 2010 with SP2 using a repair install; after installation it was still SP1.

Test 3: On a machine that already had Office 2010 with SP1, removed Office first, then installed the freshly built Office 2010 with SP2; the result was SP2.

(End)

[Study] Office/Visio/Project 2010 SP2 Installation Notes (Suspected Bug)

Office 2010 Service Pack 2 (SP2), Traditional Chinese edition

2013-07-21

x86 edition
http://www.microsoft.com/zh-TW/download/details.aspx?id=39667

x64 edition
http://www.microsoft.com/zh-tw/download/details.aspx?id=39647

Visio 2010 Service Pack 2, Traditional Chinese edition

x86 edition
http://www.microsoft.com/zh-tw/download/details.aspx?id=39665

x64 edition
http://www.microsoft.com/zh-tw/download/details.aspx?id=39648

Project 2010 SP2, Traditional Chinese edition

x86 edition
http://www.microsoft.com/zh-tw/download/details.aspx?id=39669

x64 edition
http://www.microsoft.com/zh-tw/download/details.aspx?id=39661

Project Server 2010 SP2, Traditional Chinese edition
http://www.microsoft.com/zh-tw/download/details.aspx?id=39657

Environment: Windows 2008 R2 + Office 2010 with Service Pack 1 x64

(Figure) Installing officesp2010-kb2687455-fullfile-x64-zh-tw.exe; the installer reports that it cannot find
C:\MSOCache\All Users\{90140000-0011-0000-1000-0000000FF1CE}-C\ProPlusWW.msi


(Figure) Inserted the disc and pointed the installer at the file it needs.

(Figure) Clicked the OK button.

(Figure) It still failed.

A search of the Internet shows others have hit the same failure:
http://www.dslreports.com/forum/r28492321-Microsoft-Office-2010-Service-Pack-2-SP2-

Considering slipstreaming SP2 into the DVD to make an Office 2010 with SP2 disc and trying that...

(To be continued...)

[Study] Creating an Office/Visio/Project 2010 + SP2 Slipstream Disc
http://shaurong.blogspot.tw/2013/07/officevisioproject-2010-sp2.html

Wednesday, July 24, 2013

[Study] Windows Server 2012 R2 Preview Installation Notes

Windows Server 2012 R2 Preview public download
http://technet.microsoft.com/zh-tw/evalcenter/dn205286.aspx



Test environment: VM on VMware Workstation 9.02 for Windows

(Figures: installation screenshots)

(End)

[Study] Windows 8.1 Preview Installation Notes

Windows 8.1 Preview public download

Official website:
http://windows.microsoft.com/zh-tw/windows-8/preview

Download page:
http://windows.microsoft.com/zh-tw/windows-8/preview-download

Chinese image downloads:
[32-bit] (2.9 GB)
http://view.atdmt.com/action/BluePreview_DLISO_Clk?href=http://go.microsoft.com/fwlink/?LinkId=302168

[64-bit] (3.9 GB)
http://view.atdmt.com/action/BluePreview_DLISO_Clk?href=http://go.microsoft.com/fwlink/?LinkId=302167

Product key:
NTTX3-RV7VB-T7X7F-WQYYY-9Y92F

Microsoft Evaluation Center (trial software downloads)
http://technet.microsoft.com/zh-tw/evalcenter

(Figures: installation screenshots)

(Figure) If you do not want to sign in over the Internet with a Microsoft account, click the round arrow at the upper left to return to the previous screen (the Settings screen), unplug the network cable, and click the "Use express settings" button.

(Figure) Without an Internet connection, this screen is used instead.

(End)