[Study] Apache Mahout 0.9 (bin) Installation (CentOS 7.0 x86_64)
2014-08-13
Mahout's goal is to build scalable machine learning libraries.
Official website
http://mahout.apache.org/
Overview
https://cwiki.apache.org/confluence/display/MAHOUT/Overview
Download
http://www.apache.org/dyn/closer.cgi/mahout/
http://ftp.tc.edu.tw/pub/Apache/mahout/0.9/mahout-distribution-0.9.tar.gz
Requirements
Java 1.6.x or greater.
Maven 3.x to build the source code (not required for this binary install).
Installation references
https://cwiki.apache.org/confluence/display/MAHOUT/BuildingMahout
https://cwiki.apache.org/confluence/display/MAHOUT/Mahout+Wiki#MahoutWiki-Installation%2FSetup
Installation
# Install Java
yum -y install java-1.7.0-openjdk
yum -y install java-1.7.0-openjdk-devel
echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el7_0.x86_64' >> /etc/profile
source /etc/profile
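Note that the JAVA_HOME above is pinned to one exact OpenJDK build and will stop matching after a yum update. A small sketch (not part of the original steps) that derives JAVA_HOME from the installed java binary via the alternatives symlink chain instead:
# Resolve the real JDK directory behind /usr/bin/java
JAVA_BIN=$(readlink -f "$(which java)")      # .../java-1.7.0-openjdk-<version>.x86_64/jre/bin/java
export JAVA_HOME=${JAVA_BIN%/jre/bin/java}   # strip the trailing jre/bin/java
echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
source /etc/profile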
# Install Hadoop
cd /usr/local
wget http://apache.cdpa.nsysu.edu.tw/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
tar zxvf hadoop-2.4.1.tar.gz
echo 'export HADOOP_HOME=/usr/local/hadoop-2.4.1' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin' >> /etc/profile
echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/conf' >> /etc/profile
echo 'export CLASSPATH=${CLASSPATH}:$HADOOP_CONF_DIR' >> /etc/profile
source /etc/profile
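Note: the Hadoop 2.x binary tarball keeps its configuration under etc/hadoop, not conf/, so the HADOOP_CONF_DIR set above points at a directory that does not exist in a stock hadoop-2.4.1 unpack. The mahout wrapper still starts (see the test below), but Mahout jobs will not pick up any cluster configuration. A sketch to check this, plus an alternative setting if you want the real configuration used (adjust if your *-site.xml files live elsewhere):
ls $HADOOP_HOME/conf          # usually absent in Hadoop 2.x
ls $HADOOP_HOME/etc/hadoop    # core-site.xml, hdfs-site.xml, yarn-site.xml, ...
#echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop' >> /etc/profile
#source /etc/profile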
# Install the Apache Mahout 0.9 binary distribution
wget http://ftp.tc.edu.tw/pub/Apache/mahout/0.9/mahout-distribution-0.9.tar.gz
tar zxvf mahout-distribution-0.9.tar.gz
echo 'export MAHOUT_HOME=/usr/local/mahout-distribution-0.9' >> /etc/profile
echo 'export PATH=$MAHOUT_HOME/bin:$PATH' >> /etc/profile
# To run Mahout on Hadoop, MAHOUT_LOCAL must be empty
echo 'export MAHOUT_LOCAL=' >> /etc/profile
source /etc/profile
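Conversely, to try Mahout purely on the local filesystem without Hadoop, set MAHOUT_LOCAL to any non-empty value. A sketch of toggling it in the current shell:
# Run Mahout drivers locally (HADOOP_CONF_DIR is not added to the classpath)
export MAHOUT_LOCAL=true
mahout                       # the banner now reports it is running locally
# Back to Hadoop mode for the test below
export MAHOUT_LOCAL=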
Run a test
[root@localhost local]# mahout
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/local/hadoop-2.4.1/bin/hadoop and HADOOP_CONF_DIR=/usr/local/hadoop-2.4.1/conf
MAHOUT-JOB: /usr/local/mahout-distribution-0.9/mahout-examples-0.9-job.jar
An example program must be given as the first argument.
Valid program names are:
arff.vector: : Generate Vectors from an ARFF file or directory
baumwelch: : Baum-Welch algorithm for unsupervised HMM training
canopy: : Canopy clustering
cat: : Print a file or resource as the logistic regression models would see it
cleansvd: : Cleanup and verification of SVD output
clusterdump: : Dump cluster output to text
clusterpp: : Groups Clustering Output In Clusters
cmdump: : Dump confusion matrix in HTML or text formats
concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
fkmeans: : Fuzzy K-means clustering
hmmpredict: : Generate random sequence of observations by given HMM
itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
kmeans: : K-means clustering
lucene.vector: : Generate Vectors from a Lucene index
lucene2seq: : Generate Text SequenceFiles from a Lucene index
matrixdump: : Dump matrix in CSV format
matrixmult: : Take the product of two matrices
parallelALS: : ALS-WR factorization of a rating matrix
qualcluster: : Runs clustering experiments and summarizes results in a CSV
recommendfactorized: : Compute recommendations using the factorization of a rating matrix
recommenditembased: : Compute recommendations using item-based collaborative filtering
regexconverter: : Convert text files on a per line basis based on regular expressions
resplit: : Splits a set of SequenceFiles into a number of equal splits
rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
runlogistic: : Run a logistic regression model against CSV data
seq2encoded: : Encoded Sparse Vector generation from Text sequence files
seq2sparse: : Sparse Vector generation from Text sequence files
seqdirectory: : Generate sequence files (of Text) from a directory
seqdumper: : Generic Sequence File dumper
seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
seqwiki: : Wikipedia xml dump to sequence file
spectralkmeans: : Spectral k-means clustering
split: : Split Input data into test and train sets
splitDataset: : split a rating dataset into training and probe parts
ssvd: : Stochastic SVD
streamingkmeans: : Streaming k-means clustering
svd: : Lanczos Singular Value Decomposition
testnb: : Test the Vector-based Bayes classifier
trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
trainlogistic: : Train a logistic regression using stochastic gradient descent
trainnb: : Train the Vector-based Bayes classifier
transpose: : Take the transpose of a matrix
validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
vectordump: : Dump vectors from a sequence file to text
viterbi: : Viterbi decoding of hidden states from given output states sequence
[root@localhost local]#
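As a further smoke test you can run one of the bundled example jobs, for example the classic synthetic-control k-means clustering from the Mahout wiki. A sketch, assuming HDFS and YARN are already configured and running (see the Hadoop 2.4.1 post linked below) and MAHOUT_LOCAL is still empty:
cd /tmp
wget http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
hdfs dfs -mkdir -p testdata
hdfs dfs -put synthetic_control.data testdata
mahout org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
hdfs dfs -ls output          # expect clusters-*-final, clusteredPoints, ...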
(End)
Related posts
[Study] Apache Mahout 0.9 (bin) Installation (CentOS 7.0 x86_64)
http://shaurong.blogspot.com/2014/08/apache-mahout-0.html
[Study] Hadoop 2.4.1 Installation (CentOS 7.0 x86_64)
http://shaurong.blogspot.tw/2014/08/hadoop-241-centos-70-x8664.html
[Study] Apache Mahout 0.8 (bin) Installation (CentOS 6.5 x64)
http://shaurong.blogspot.com/2013/12/apache-mahout-08-bin-centos-65-x64.html
[Study] Apache Mahout 0.8 (svn) Installation (CentOS 6.4 x64)
http://shaurong.blogspot.tw/2013/07/apache-mahout-08-centos-64-x64.html