
Building Hadoop 2.3.0 from Source and Deploying a Fully Distributed Cluster on 64-bit CentOS

灯下黑鬼吹灯
Published on 2016/11/28 01:15

    This article builds on my earlier post "Deploying a Fully Distributed Cluster Based on Hadoop 1.2.1". The hardware environment is exactly the same as in that post, so I won't spend words on it again here; for the hosts configuration, JDK installation, and passwordless SSH setup, please read the previous article: https://my.oschina.net/u/2453090/blog/794311

    This article explains how to build the Hadoop 2.3.0 source locally on 64-bit CentOS, then deploy a Hadoop cluster across three virtual machines on top of that build, and finally gives a brief summary of the differences between installing Hadoop 1.X and Hadoop 2.X.

    The official hadoop-2.3.0 binary tarball hides a major trap: if you follow the normal cluster-setup steps, you always get stuck at formatting the NameNode. The reason is that the libhadoop.so.1.0.0 shipped in the official package is 32-bit, so the problem does not show up on 32-bit Linux. To install Hadoop 2.3.0 on a 64-bit Linux system, you must rebuild the source locally; only that fixes the problem at the root. The goal of this article is to help beginners build the source quickly and then deploy a fully distributed Hadoop 2.3.0 cluster. Without further ado, let's get to it!
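    You can check this yourself with the file command (a quick sanity check; the path assumes the official tarball has been unpacked in the current directory):

# inspect the native library shipped in the official tarball:
file hadoop-2.3.0/lib/native/libhadoop.so.1.0.0
# official package prints: ELF 32-bit LSB shared object, Intel 80386, ...
# after a local 64-bit rebuild: ELF 64-bit LSB shared object, x86-64, ...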

                Let's go!

1. Installing the packages needed for the build (on CentOS)

  • svn
[root@localhost Downloads]# yum install svn
  • autoconf automake libtool cmake
[root@localhost Downloads]# yum install autoconf automake libtool cmake
  • ncurses-devel
[root@localhost Downloads]# yum install ncurses-devel
  • openssl-devel
[root@localhost Downloads]# yum install openssl-devel
  • gcc
[root@localhost Downloads]# yum install gcc*
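If you prefer, all of the above can be installed in one shot (equivalent to the individual commands in this list):

[root@localhost Downloads]# yum install -y svn autoconf automake libtool cmake ncurses-devel openssl-devel gcc*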

2. Installing Maven

  • Download and extract Maven
[root@localhost Desktop]# wget http://mirrors.hust.edu.cn/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
[root@localhost Desktop]# tar xzvf apache-maven-3.3.9-bin.tar.gz
  • Configure environment variables
[root@localhost grid]# ls -a
.              .gnome2                  Public
..             .gnome2_private          .pulse
.bash_history  .gnote                   .pulse-cookie
.bash_logout   .gnupg                   .recently-used.xbel
.bash_profile  .gstreamer-0.10          release-2.3.0
.bashrc        .gtk-bookmarks           release-2.5.0
.cache         .gvfs                    .spice-vdagent
.config        hadoop-2.7.3-src.tar.gz  .ssh
.dbus          .ICEauthority            .subversion
Desktop        .icons                   Templates
.dmrc          .imsettings.log          .themes
Documents      .local                   .thumbnails
Downloads      .m2                      Videos
eclipse        .mozilla                 .viminfo
.eclipse       Music                    workspace
.esd_auth      .nautilus                .xinputrc
.fontconfig    .oracle_jre_usage        .xsession-errors
.gconf         .p2                      .xsession-errors.old
.gconfd        Pictures
[root@localhost grid]# vi .bash_profile 

Configure the Maven and JDK paths:

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:/usr/local/apache-maven-3.3.9/bin
JAVA_HOME=/usr/local/jdk1.7.0_79
export JAVA_HOME
export PATH
  • Verify
[root@localhost ~]# source .bash_profile 
[root@localhost ~]# mvn -v
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00)
Maven home: /usr/local/apache-maven-3.3.9
Java version: 1.7.0_79, vendor: Oracle Corporation
Java home: /usr/local/jdk1.7.0_79/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-279.el6.x86_64", arch: "amd64", family: "unix"
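Optional but worth knowing: the Hadoop source tree's BUILDING.txt recommends giving Maven extra heap for large builds. If the build later dies with an out-of-memory error, set something like this before rerunning:

export MAVEN_OPTS="-Xms256m -Xmx512m"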

3. Installing protobuf

  • Download and extract
[root@localhost ~]# wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
[root@localhost ~]# tar xzvf ./protobuf-2.5.0.tar.gz
  • Build and install (run the following commands in order)
[root@localhost ~]# cd protobuf-2.5.0
[root@localhost ~]# ./configure
[root@localhost ~]# make
[root@localhost ~]# make check
[root@localhost ~]# make install
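Hadoop 2.3.0 requires exactly protoc 2.5.0, so verify the install before moving on. If protoc cannot find libprotobuf.so, make sure /usr/local/lib is known to the dynamic linker (add it to /etc/ld.so.conf if necessary) and refresh the cache:

# refresh the shared-library cache:
ldconfig
# should print: libprotoc 2.5.0
protoc --version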

4. Downloading the Hadoop 2.3.0 source and building the native libraries

  • Check out the source via svn
[root@localhost ~]# svn checkout http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.3.0
  • Rebuild the native libraries
[root@localhost grid]# ls
Desktop    eclipse                  Pictures       release-2.5.0  workspace
Documents  hadoop-2.7.3-src.tar.gz  Public         Templates
Downloads  Music                    release-2.3.0  Videos
[root@localhost grid]# cd release-2.3.0/
[root@localhost release-2.3.0]# mvn package -Pdist,native -DskipTests -Dtar
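For reference, the flags in this command mean:

# -Pdist,native   activate the profiles that assemble the binary
#                 distribution and compile the native (JNI) libraries
# -DskipTests     skip the unit tests, which would add hours to the build
# -Dtar           also package the result as hadoop-2.3.0.tar.gz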
  • Resolving errors during the build

        Error 1: the JDK version is too new; the machine originally had the latest JDK, 1.8.

[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-annotations: MavenReportException: Error while creating archive:
[ERROR] Exit code: 1 - /home/grid/release-2.5.0/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java:27: error: unexpected end tag: </ul>
[ERROR] * </ul>
[ERROR] ^
[ERROR] 
[ERROR] Command line was: /usr/local/jdk1.8.0_111/jre/../bin/javadoc @options @packages
[ERROR] 
[ERROR] Refer to the generated Javadoc files in '/home/grid/release-2.5.0/hadoop-common-project/hadoop-annotations/target' dir.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-annotations

        Solution: install JDK 1.7 instead.

        Note: remember to update the JDK path you set earlier when configuring Maven.
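        A sketch of the switch (the jdk1.7.0_79 path matches the Maven setup in section 2; adjust it to wherever you unpacked the 1.7 JDK):

# in ~/.bash_profile, change the JDK line to:
#   JAVA_HOME=/usr/local/jdk1.7.0_79
# then reload the profile and confirm both the shell and Maven see it:
source ~/.bash_profile
java -version
mvn -v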

        Error 2: a network problem prevented apache-tomcat-6.0.36.tar.gz from being pulled into the build.

[INFO] Apache Hadoop Main ................................. SUCCESS [  3.219 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [  2.628 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [  5.291 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [  0.423 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [  2.978 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [  6.345 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [17:34 min]
[INFO] Apache Hadoop Auth ................................. SUCCESS [03:17 min]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 55.583 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [15:21 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [01:42 min]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.057 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [08:27 min]
[INFO] Apache Hadoop HttpFS ............................... FAILURE [ 40.320 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS Project ......................... SKIPPED
[INFO] hadoop-yarn ........................................ SKIPPED
[INFO] hadoop-yarn-api .................................... SKIPPED
[INFO] hadoop-yarn-common ................................. SKIPPED
[INFO] hadoop-yarn-server ................................. SKIPPED
[INFO] hadoop-yarn-server-common .......................... SKIPPED
[INFO] hadoop-yarn-server-nodemanager ..................... SKIPPED
[INFO] hadoop-yarn-server-web-proxy ....................... SKIPPED
[INFO] hadoop-yarn-server-resourcemanager ................. SKIPPED
[INFO] hadoop-yarn-server-tests ........................... SKIPPED
[INFO] hadoop-yarn-client ................................. SKIPPED
[INFO] hadoop-yarn-applications ........................... SKIPPED
[INFO] hadoop-yarn-applications-distributedshell .......... SKIPPED
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SKIPPED
[INFO] hadoop-yarn-site ................................... SKIPPED
[INFO] hadoop-yarn-project ................................ SKIPPED
[INFO] hadoop-mapreduce-client ............................ SKIPPED
[INFO] hadoop-mapreduce-client-core ....................... SKIPPED
[INFO] hadoop-mapreduce-client-common ..................... SKIPPED
[INFO] hadoop-mapreduce-client-shuffle .................... SKIPPED
[INFO] hadoop-mapreduce-client-app ........................ SKIPPED
[INFO] hadoop-mapreduce-client-hs ......................... SKIPPED
[INFO] hadoop-mapreduce-client-jobclient .................. SKIPPED
[INFO] hadoop-mapreduce-client-hs-plugins ................. SKIPPED
[INFO] Apache Hadoop MapReduce Examples ................... SKIPPED
[INFO] hadoop-mapreduce ................................... SKIPPED
[INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED
[INFO] Apache Hadoop Distributed Copy ..................... SKIPPED
[INFO] Apache Hadoop Archives ............................. SKIPPED
[INFO] Apache Hadoop Rumen ................................ SKIPPED
[INFO] Apache Hadoop Gridmix .............................. SKIPPED
[INFO] Apache Hadoop Data Join ............................ SKIPPED
[INFO] Apache Hadoop Extras ............................... SKIPPED
[INFO] Apache Hadoop Pipes ................................ SKIPPED
[INFO] Apache Hadoop OpenStack support .................... SKIPPED
[INFO] Apache Hadoop Client ............................... SKIPPED
[INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED
[INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED
[INFO] Apache Hadoop Tools Dist ........................... SKIPPED
[INFO] Apache Hadoop Tools ................................ SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 48:27 min
[INFO] Finished at: 2016-11-27T07:39:45-08:00
[INFO] Final Memory: 67M/237M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project hadoop-hdfs-httpfs: An Ant BuildException has occured: Can't get http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.36/bin/apache-tomcat-6.0.36.tar.gz to /home/grid/release-2.3.0/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/apache-tomcat-6.0.36.tar.gz
[ERROR] around Ant part ...<get dest="downloads/apache-tomcat-6.0.36.tar.gz" skipexisting="true" verbose="true" src="http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.36/bin/apache-tomcat-6.0.36.tar.gz"/>... @ 5:182 in /home/grid/release-2.3.0/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs-httpfs

        Solution: see http://f.dataguru.cn/thread-253905-1-1.html
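        In short, the usual workaround (a sketch built from the paths in the error above) is to download the Tomcat tarball by hand into the directory the build expects; the Ant task is declared with skipexisting="true", so a pre-existing file is accepted, and the build can then be resumed from the failed module:

cd /home/grid/release-2.3.0/hadoop-hdfs-project/hadoop-hdfs-httpfs
mkdir -p downloads
wget -O downloads/apache-tomcat-6.0.36.tar.gz http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.36/bin/apache-tomcat-6.0.36.tar.gz
cd /home/grid/release-2.3.0
mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-hdfs-httpfs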

  • Locating the build output
[root@localhost release-2.3.0]# ls
BUILDING.txt           hadoop-dist               hadoop-project
dev-support            hadoop-hdfs-project       hadoop-project-dist
hadoop-assemblies      hadoop-mapreduce-project  hadoop-tools
hadoop-client          hadoop-maven-plugins      hadoop-yarn-project
hadoop-common-project  hadoop-minicluster        pom.xml
[root@localhost release-2.3.0]# cd hadoop-dist/
[root@localhost hadoop-dist]# ls
pom.xml  target
[root@localhost hadoop-dist]# cd target/
[root@localhost target]# ls
antrun                    hadoop-2.3.0.tar.gz            maven-archiver
dist-layout-stitching.sh  hadoop-dist-2.3.0.jar          test-dir
dist-tar-stitching.sh     hadoop-dist-2.3.0-javadoc.jar
hadoop-2.3.0              javadoc-bundle-options
  • What a successful build looks like
main:
     [exec] $ tar cf hadoop-2.3.0.tar hadoop-2.3.0
     [exec] $ gzip -f hadoop-2.3.0.tar
     [exec] 
     [exec] Hadoop dist tar available at: /home/grid/release-2.3.0/hadoop-dist/target/hadoop-2.3.0.tar.gz
     [exec] 
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /home/grid/release-2.3.0/hadoop-dist/target/hadoop-dist-2.3.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................. SUCCESS [  4.669 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [  4.645 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [  9.152 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [  1.219 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [  3.685 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [  7.565 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [  5.756 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [  7.229 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [  4.537 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [03:06 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 24.638 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.202 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [03:46 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 49.343 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [08:49 min]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [  9.072 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.102 s]
[INFO] hadoop-yarn ........................................ SUCCESS [  0.153 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [01:27 min]
[INFO] hadoop-yarn-common ................................. SUCCESS [03:20 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [  0.267 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [ 25.902 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [04:41 min]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  6.783 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 28.362 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [  2.474 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 13.168 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [  0.078 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  6.020 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  3.818 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [  0.164 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [ 12.232 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.202 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 49.932 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 43.257 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  7.975 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 20.336 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 16.849 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [01:39 min]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  4.883 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 15.809 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [ 23.096 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 36.068 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [01:27 min]
[INFO] Apache Hadoop Archives ............................. SUCCESS [  6.520 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 14.746 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  9.904 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  6.605 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  7.009 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 18.794 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 12.960 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 16.694 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.434 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 41.992 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [  9.035 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.041 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [01:11 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 39:32 min
[INFO] Finished at: 2016-11-27T09:09:40-08:00
[INFO] Final Memory: 88M/237M
[INFO] ------------------------------------------------------------------------

 

5. Installing the fully distributed Hadoop 2.3.0 cluster on 64-bit machines

  • Edit hadoop-env.sh
[root@namenode hadoop-2.3.0]# ls
bin  etc  include  lib  libexec  sbin  share
[root@namenode hadoop-2.3.0]# cd etc/
[root@namenode etc]# ls
hadoop
[root@namenode etc]# cd hadoop/
[root@namenode hadoop]# ls
capacity-scheduler.xml      httpfs-site.xml
configuration.xsl           log4j.properties
container-executor.cfg      mapred-env.cmd
core-site.xml               mapred-env.sh
hadoop-env.cmd              mapred-queues.xml.template
hadoop-env.sh               mapred-site.xml.template
hadoop-metrics2.properties  slaves
hadoop-metrics.properties   ssl-client.xml.example
hadoop-policy.xml           ssl-server.xml.example
hdfs-site.xml               yarn-env.cmd
httpfs-env.sh               yarn-env.sh
httpfs-log4j.properties     yarn-site.xml
httpfs-signature.secret
[root@namenode hadoop]# vi hadoop-env.sh 

 

# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/local/jdk1.8.0_111
  • Edit yarn-env.sh
[root@namenode hadoop]# vi yarn-env.sh 


# some Java parameters
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/usr/local/jdk1.8.0_111
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
  echo "Error: JAVA_HOME is not set."
  exit 1
fi
  • Edit slaves
[root@namenode hadoop]# vi slaves 
datanode1
datanode2

 

  • Edit core-site.xml
[root@namenode hadoop]# vi core-site.xml 
<configuration>
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://namenode:9000</value>
	</property>
	<property>
		<name>io.file.buffer.size</name>
		<value>131072</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>file:/usr/local/hadoop-2.3.0/tmp</value>
		<description>Abase for other temporary directories.</description>
	</property>
	<property>
		<name>hadoop.proxyuser.hduser.hosts</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.hduser.groups</name>
		<value>*</value>
	</property>
</configuration>
  • Edit hdfs-site.xml
[root@namenode hadoop]# vi hdfs-site.xml 
<configuration>
	<property>
		<name>dfs.namenode.secondary.http-address</name>
		<value>namenode:9001</value>
	</property>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>file:/usr/local/hadoop-2.3.0/name</value>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>file:/usr/local/hadoop-2.3.0/data</value>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>2</value>
	</property>
	<property>
		<name>dfs.webhdfs.enabled</name>
		<value>true</value>
	</property>
</configuration>
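To avoid surprises at format time, you can create the tmp, name, and data directories referenced in core-site.xml and hdfs-site.xml up front (optional; Hadoop can usually create them itself, but doing it now surfaces permission problems early):

mkdir -p /usr/local/hadoop-2.3.0/tmp
mkdir -p /usr/local/hadoop-2.3.0/name
mkdir -p /usr/local/hadoop-2.3.0/data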
  • Create mapred-site.xml from the bundled template, then edit it
[root@namenode hadoop]# cp mapred-site.xml.template  mapred-site.xml
[root@namenode hadoop]# ls
capacity-scheduler.xml  hadoop-env.cmd              hadoop-policy.xml        httpfs-signature.secret  mapred-env.sh               slaves                  yarn-env.sh
configuration.xsl       hadoop-env.sh               hdfs-site.xml            httpfs-site.xml          mapred-queues.xml.template  ssl-client.xml.example  yarn-site.xml
container-executor.cfg  hadoop-metrics2.properties  httpfs-env.sh            log4j.properties         mapred-site.xml             ssl-server.xml.example
core-site.xml           hadoop-metrics.properties   httpfs-log4j.properties  mapred-env.cmd           mapred-site.xml.template    yarn-env.cmd
[root@namenode hadoop]# vi mapred-site.xml
<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>namenode:10020</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>namenode:19888</value>
	</property>
</configuration>
  • Edit yarn-site.xml
[root@namenode hadoop]# vi yarn-site.xml 
<configuration>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>
	<property>
		<name>yarn.resourcemanager.address</name>
		<value>namenode:8032</value>
	</property>
	<property>
		<name>yarn.resourcemanager.scheduler.address</name>
		<value>namenode:8030</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>namenode:8031</value>
	</property>
	<property>
		<name>yarn.resourcemanager.admin.address</name>
		<value>namenode:8033</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.address</name>
		<value>namenode:8088</value>
	</property>
</configuration>
  • Copy Hadoop to every node
[root@namenode local]# scp -r /usr/local/hadoop-2.3.0/ datanode1:/usr/local/
[root@namenode local]# scp -r /usr/local/hadoop-2.3.0/ datanode2:/usr/local/
  • Make sure the firewall is disabled on every node
# stop the firewall:
service iptables stop
# check its status:
service iptables status
# keep it off after reboot:
chkconfig iptables off
# turn it back on after reboot:
chkconfig iptables on
[root@namenode local]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@namenode local]# service iptables status
iptables: Firewall is not running.
[root@namenode local]# chkconfig iptables off
  • Format the file system
[root@namenode hadoop-2.3.0]# bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

16/11/27 22:28:58 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = namenode/192.168.115.133
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.3.0
STARTUP_MSG:   classpath = /usr/local/hadoop-2.3.0/etc/hadoop:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/hadoop-annotations-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/hadoop-auth-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/asm-3.2.jar:/usr/local/
hadoop-2.3.0/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/hadoop-common-2.3.0-tests.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/hadoop-common-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/common/hadoop-nfs-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/hadoop-hdfs-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/hdfs/hadoop-hdfs-2.3.0-tests.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/l
ib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-server-common-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-client-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-api-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-common-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.3.
0/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.3.0-tests.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.3.0.jar:/usr/local/hadoop-2.3.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.3.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = Unknown -r 1771539; compiled by 'root' on 2016-11-27T16:31Z
STARTUP_MSG:   java = 1.8.0_111
************************************************************/
16/11/27 22:28:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-eda91cbe-adec-449b-9525-ccc8c9ede5c8
16/11/27 22:29:01 INFO namenode.FSNamesystem: fsLock is fair:true
16/11/27 22:29:01 INFO namenode.HostFileManager: read includes:
HostSet(
)
16/11/27 22:29:01 INFO namenode.HostFileManager: read excludes:
HostSet(
)
16/11/27 22:29:01 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/11/27 22:29:01 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/11/27 22:29:02 INFO util.GSet: Computing capacity for map BlocksMap
16/11/27 22:29:02 INFO util.GSet: VM type       = 64-bit
16/11/27 22:29:02 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
16/11/27 22:29:02 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/11/27 22:29:02 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/11/27 22:29:02 INFO blockmanagement.BlockManager: defaultReplication         = 2
16/11/27 22:29:02 INFO blockmanagement.BlockManager: maxReplication             = 512
16/11/27 22:29:02 INFO blockmanagement.BlockManager: minReplication             = 1
16/11/27 22:29:02 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/11/27 22:29:02 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/11/27 22:29:02 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/11/27 22:29:02 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/11/27 22:29:02 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/11/27 22:29:02 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/11/27 22:29:02 INFO namenode.FSNamesystem: supergroup          = supergroup
16/11/27 22:29:02 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/11/27 22:29:02 INFO namenode.FSNamesystem: HA Enabled: false
16/11/27 22:29:02 INFO namenode.FSNamesystem: Append Enabled: true
16/11/27 22:29:03 INFO util.GSet: Computing capacity for map INodeMap
16/11/27 22:29:03 INFO util.GSet: VM type       = 64-bit
16/11/27 22:29:03 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
16/11/27 22:29:03 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/11/27 22:29:03 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/11/27 22:29:03 INFO util.GSet: Computing capacity for map cachedBlocks
16/11/27 22:29:03 INFO util.GSet: VM type       = 64-bit
16/11/27 22:29:03 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
16/11/27 22:29:03 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/11/27 22:29:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/11/27 22:29:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/11/27 22:29:03 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/11/27 22:29:03 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/11/27 22:29:03 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/11/27 22:29:03 INFO util.GSet: Computing capacity for map Namenode Retry Cache
16/11/27 22:29:03 INFO util.GSet: VM type       = 64-bit
16/11/27 22:29:03 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
16/11/27 22:29:03 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/11/27 22:29:04 INFO common.Storage: Storage directory /usr/local/hadoop-2.3.0/name has been successfully formatted.
16/11/27 22:29:04 INFO namenode.FSImage: Saving image file /usr/local/hadoop-2.3.0/name/current/fsimage.ckpt_0000000000000000000 using no compression
16/11/27 22:29:04 INFO namenode.FSImage: Image file /usr/local/hadoop-2.3.0/name/current/fsimage.ckpt_0000000000000000000 of size 216 bytes saved in 0 seconds.
16/11/27 22:29:04 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/11/27 22:29:04 INFO util.ExitUtil: Exiting with status 0
16/11/27 22:29:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at namenode/192.168.115.133
************************************************************/
  • Start Hadoop

    Daemons on the namenode:

[root@namenode hadoop-2.3.0]# ls
bin  data  etc  include  lib  libexec  name  sbin  share  tmp
[root@namenode hadoop-2.3.0]# ./sbin/start-dfs.sh
Starting namenodes on [namenode]
namenode: starting namenode, logging to /usr/local/hadoop-2.3.0/logs/hadoop-root-namenode-namenode.out
datanode2: starting datanode, logging to /usr/local/hadoop-2.3.0/logs/hadoop-root-datanode-datanode2.out
datanode1: starting datanode, logging to /usr/local/hadoop-2.3.0/logs/hadoop-root-datanode-datanode1.out
Starting secondary namenodes [namenode]
namenode: starting secondarynamenode, logging to /usr/local/hadoop-2.3.0/logs/hadoop-root-secondarynamenode-namenode.out
[root@namenode hadoop-2.3.0]# jps
5286 SecondaryNameNode
5119 NameNode
5394 Jps
[root@namenode hadoop-2.3.0]# ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.3.0/logs/yarn-grid-resourcemanager-namenode.out
datanode2: starting nodemanager, logging to /usr/local/hadoop-2.3.0/logs/yarn-root-nodemanager-datanode2.out
datanode1: starting nodemanager, logging to /usr/local/hadoop-2.3.0/logs/yarn-root-nodemanager-datanode1.out
[root@namenode hadoop-2.3.0]# jps
5286 SecondaryNameNode
5119 NameNode
5448 ResourceManager
5582 Jps

    Daemons on datanode1:

[root@datanode1 ~]# jps
4944 Jps
4858 DataNode
[root@datanode1 ~]# jps
5076 Jps
4980 NodeManager
4858 DataNode

    Daemons on datanode2:

[root@datanode2 Desktop]# jps
4842 Jps
4765 DataNode
[root@datanode2 Desktop]# jps
4972 Jps
4876 NodeManager
4765 DataNode
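    Besides jps, the web interfaces give a quick view of cluster health; with the configuration above, the HDFS NameNode UI is on its default port 50070 and the YARN ResourceManager UI on the port set in yarn-site.xml:

# HDFS overview (the live-datanode count should read 2):
http://namenode:50070
# YARN nodes and applications:
http://namenode:8088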

6. Testing the cluster
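
A minimal smoke test, run from the namenode, exercises HDFS and YARN end to end (a sketch using the examples jar bundled with the build we just installed):

cd /usr/local/hadoop-2.3.0
# write a file into HDFS and list it back:
bin/hdfs dfs -mkdir -p /test
bin/hdfs dfs -put etc/hadoop/core-site.xml /test/
bin/hdfs dfs -ls /test
# run the bundled pi estimator as a MapReduce-on-YARN job:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar pi 2 10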

 

7. Comparing the installation of Hadoop 1.X and Hadoop 2.X

  • Native libraries: Hadoop 1.X binary releases run on 64-bit Linux out of the box, whereas the official Hadoop 2.3.0 tarball ships a 32-bit libhadoop.so.1.0.0, so on a 64-bit system the source must first be rebuilt locally, as shown above.
  • Configuration and startup: Hadoop 1.X keeps its configuration under conf/ and starts everything with start-all.sh; Hadoop 2.X moves the configuration to etc/hadoop/, adds the YARN layer (yarn-env.sh and yarn-site.xml, with ResourceManager/NodeManager replacing the JobTracker/TaskTracker), and starts HDFS and YARN separately with start-dfs.sh and start-yarn.sh.