Building and Installing Hadoop 2.7 from Source

openthings
Published 2015/12/17 09:31

0. Environment

The operating system used for the build:

[root@host11 hadoop-2.7.1-src]# cat /etc/redhat-release 

CentOS release 6.5 (Final)

The Hadoop version is 2.7.1.

1. Install dependency packages

yum install svn autoconf automake libtool cmake ncurses-devel openssl-devel gcc*
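Before moving on, it is worth confirming the toolchain actually installed (a quick sanity check added here, not in the original post; exact versions will vary by system):

gcc --version | head -n 1
cmake --version | head -n 1
autoconf --version | head -n 1
openssl version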


2. Set up the Java and Maven environments

wget 'http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-x64.tar.gz?AuthParam=1443446776_174368b9ab1a6a92468aba5cd4d092d0' -O jdk-8u60-linux-x64.tar.gz
tar -zxvf jdk-8u60-linux-x64.tar.gz -C /usr/local
cd /usr/local
ln -s jdk1.8.0_60 jdk
echo 'export JAVA_HOME=/usr/local/jdk' >>/etc/profile
echo 'export PATH=$JAVA_HOME/bin:$PATH' >>/etc/profile
wget http://mirrors.hust.edu.cn/apache/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
tar -zxvf apache-maven-3.3.3-bin.tar.gz -C /usr/local
cd /usr/local
ln -s apache-maven-3.3.3 maven
echo 'export PATH=/usr/local/maven/bin:$PATH' >/etc/profile.d/maven.sh
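With both profile scripts in place, reload them and verify the two tools (a quick check added here, not part of the original post):

source /etc/profile
source /etc/profile.d/maven.sh
java -version
mvn -version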

3. Download and install protobuf (version 2.5.0 is required)

wget https://codeload.github.com/google/protobuf/zip/v2.5.0 -O protobuf-2.5.0.zip
unzip protobuf-2.5.0.zip
wget http://googletest.googlecode.com/files/gtest-1.5.0.tar.bz2
tar -jxvf gtest-1.5.0.tar.bz2
mv gtest-1.5.0 ./protobuf-2.5.0/gtest
cd protobuf-2.5.0
./autogen.sh
./configure
make
make check
make install
which protoc

[root@host11 protobuf-master]# which protoc

/usr/local/bin/protoc
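The version string matters as much as the location. As the author notes in the comments below, protoc must report exactly 2.5.0; if the current session still resolves an older binary, re-login (or reboot) and test again:

protoc --version   # expected output: libprotoc 2.5.0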

4. Download and install Ant

wget http://mirrors.cnnic.cn/apache//ant/binaries/apache-ant-1.9.6-bin.zip

unzip apache-ant-1.9.6-bin.zip

mv apache-ant-1.9.6 /usr/local/ant

echo 'export PATH=/usr/local/ant/bin:$PATH' >/etc/profile.d/ant.sh
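Ant can be verified the same way (a quick check added here):

source /etc/profile.d/ant.sh
ant -version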

5. Compile Hadoop

tar -zxvf hadoop-2.7.1-src.tar.gz

cd hadoop-2.7.1-src

mvn package -Pdist,native -DskipTests -Dtar
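The build is long and memory-hungry. Raising the Maven heap before starting is a commonly suggested precaution for Hadoop 2.x source builds (optional; the values below are the ones usually quoted, not from the original post):

export MAVEN_OPTS="-Xms256m -Xmx512m"
mvn package -Pdist,native -DskipTests -Dtar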

6. Troubleshooting

First build failure:

[ERROR] Failed to execute goal on project hadoop-auth: Could not resolve dependencies for project org.apache.hadoop:hadoop-auth:jar:2.7.1: The following artifacts could not be resolved: org.mockito:mockito-all:jar:1.8.5, org.mortbay.jetty:jetty-util:jar:6.1.26, org.mortbay.jetty:jetty:jar:6.1.26, org.apache.tomcat.embed:tomcat-embed-core:jar:7.0.55, org.apache.httpcomponents:httpclient:jar:4.2.5, org.apache.zookeeper:zookeeper:jar:3.4.6: Could not transfer artifact org.mockito:mockito-all:jar:1.8.5 from/to central (https://repo.maven.apache.org/maven2): GET request of: org/mockito/mockito-all/1.8.5/mockito-all-1.8.5.jar from central failed: SSL peer shut down incorrectly -> [Help 1]

[ERROR] 

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR] 

[ERROR] For more information about the errors and possible solutions, please read the following articles:

[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

[ERROR] 

[ERROR] After correcting the problems, you can resume the build with the command

[ERROR]   mvn <goals> -rf :hadoop-auth

Fix:

This kind of failure is very common: it happens because some dependency artifacts were not fully downloaded. Just re-run the command below a few times until everything resolves:

mvn package -Pdist,native -DskipTests -Dtar
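Alternatively, Maven's own hint at the end of the error output can be used to resume from the failed module instead of starting over (the module name here is taken from that hint):

mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-auth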

Second build failure:

[ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.7.1:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: protoc version is 'libprotoc 3.0.0', expected version is '2.5.0' -> [Help 1]

The installed protobuf is too new; Hadoop 2.7.1 requires protoc version 2.5.0 exactly.
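If 2.5.0 was installed but a newer copy still shadows it on the PATH, the following diagnostic sketch (paths are illustrative) usually pinpoints the problem:

which -a protoc                      # list every protoc on the PATH, in resolution order
protoc --version                     # must print: libprotoc 2.5.0
export PATH=/usr/local/bin:$PATH     # put the 2.5.0 install first if it is being shadowed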

7. Log of a successful build

[INFO] Apache Hadoop Main ................................. SUCCESS [  7.502 s]

[INFO] Apache Hadoop Project POM .......................... SUCCESS [  4.844 s]

[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 10.274 s]

[INFO] Apache Hadoop Assemblies ........................... SUCCESS [  0.477 s]

[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [  4.568 s]

[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 11.000 s]

[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [  9.870 s]

[INFO] Apache Hadoop Auth ................................. SUCCESS [  9.003 s]

[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [  9.321 s]

[INFO] Apache Hadoop Common ............................... SUCCESS [03:21 min]

[INFO] Apache Hadoop NFS .................................. SUCCESS [ 20.029 s]

[INFO] Apache Hadoop KMS .................................. SUCCESS [ 21.350 s]

[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.079 s]

[INFO] Apache Hadoop HDFS ................................. SUCCESS [10:57 min]

[INFO] Apache Hadoop HttpFS ............................... SUCCESS [01:15 min]

[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 46.255 s]

[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 21.495 s]

[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.242 s]

[INFO] hadoop-yarn ........................................ SUCCESS [  0.137 s]

[INFO] hadoop-yarn-api .................................... SUCCESS [01:34 min]

[INFO] hadoop-yarn-common ................................. SUCCESS [01:31 min]

[INFO] hadoop-yarn-server ................................. SUCCESS [  0.291 s]

[INFO] hadoop-yarn-server-common .......................... SUCCESS [ 35.037 s]

[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 44.224 s]

[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  4.315 s]

[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 17.461 s]

[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 46.435 s]

[INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 10.698 s]

[INFO] hadoop-yarn-client ................................. SUCCESS [  8.976 s]

[INFO] hadoop-yarn-server-sharedcachemanager .............. SUCCESS [ 10.343 s]

[INFO] hadoop-yarn-applications ........................... SUCCESS [  0.113 s]

[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  7.395 s]

[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  4.006 s]

[INFO] hadoop-yarn-site ................................... SUCCESS [  0.108 s]

[INFO] hadoop-yarn-registry ............................... SUCCESS [ 12.317 s]

[INFO] hadoop-yarn-project ................................ SUCCESS [ 18.781 s]

[INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.396 s]

[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 46.350 s]

[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 34.772 s]

[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  8.779 s]

[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 22.440 s]

[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 12.865 s]

[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [01:45 min]

[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  6.051 s]

[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [  8.077 s]

[INFO] hadoop-mapreduce ................................... SUCCESS [ 12.782 s]

[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 24.680 s]

[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 50.965 s]

[INFO] Apache Hadoop Archives ............................. SUCCESS [  6.861 s]

[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 12.928 s]

[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  6.784 s]

[INFO] Apache Hadoop Data Join ............................ SUCCESS [  3.629 s]

[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  7.135 s]

[INFO] Apache Hadoop Extras ............................... SUCCESS [  6.233 s]

[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 31.548 s]

[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 10.084 s]

[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [35:23 min]

[INFO] Apache Hadoop Azure support ........................ SUCCESS [ 36.126 s]

[INFO] Apache Hadoop Client ............................... SUCCESS [ 24.463 s]

[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.353 s]

[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 12.506 s]

[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 34.475 s]

[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.159 s]

[INFO] Apache Hadoop Distribution ......................... SUCCESS [02:37 min]

[INFO] ------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO] ------------------------------------------------------------------------

[INFO] Total time: 01:12 h

[INFO] Finished at: 2015-10-03T03:54:29+08:00

[INFO] Final Memory: 91M/237M

[INFO] ------------------------------------------------------------------------

[root@host11 hadoop-2.7.1-src]# 

8. Check the generated packages

cd /tmp/hadoop-2.7.1-src/hadoop-dist/target;

[root@host11 target]# ls -ld hadoop*

drwxr-xr-x 9 root root      4096 Oct  3 03:51 hadoop-2.7.1

-rw-r--r-- 1 root root 194796372 Oct  3 03:52 hadoop-2.7.1.tar.gz

-rw-r--r-- 1 root root      2823 Oct  3 03:52 hadoop-dist-2.7.1.jar

-rw-r--r-- 1 root root 390430395 Oct  3 03:54 hadoop-dist-2.7.1-javadoc.jar

At this point, the build has completed successfully.
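As a final smoke test (not part of the original post), the freshly built distribution can be asked for its version and checked for the native libraries the -Pnative profile was supposed to produce:

cd hadoop-2.7.1
bin/hadoop version
bin/hadoop checknative -a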

This article originally appeared on the "webseven" blog; please keep this attribution: http://webseven.blog.51cto.com/4388012/1699980


Comments (2)

openthings (author):
For the build I used mvn package -Pdist,native -DskipTests -Dtar. Some of the test items do not pass, so they are skipped during compilation.

openthings (author):
After installing protoc, run protoc --version to check the version number. If it looks wrong, reboot the system and test again: the old protoc binary may still be in use by the current session, so the update did not take effect.