Hadoop (CDH4 distribution) cluster deployment

snakelxc
Published 2013/07/10 12:46

Preface

I've spent a while wrestling with Hadoop deployment and administration, and this series of posts records what I did.

To spare you the repetitive work of deployment, I've written the steps up as scripts; just follow this article, run them, and the environment is essentially deployed. The scripts live in my git repository on OSChina (http://git.oschina.net/snake1361222/hadoop_scripts).

Everything in this article is based on Cloudera's CDH4. CDH4 is Cloudera's packaging of the Hadoop ecosystem as a set of yum packages; putting CDH4 into your own yum repository makes deploying a Hadoop environment far simpler.

The deployment covered here includes NameNode HA and a Hadoop management setup (synchronizing Hadoop configuration files, quick deployment scripts, and so on).

Environment preparation

Five machines in total, all running CentOS 6.4:

  • NameNode & ResourceManager primary server: 192.168.1.1

  • NameNode & ResourceManager standby server: 192.168.1.2

  • DataNode & NodeManager servers: 192.168.1.100 192.168.1.101 192.168.1.102

  • ZooKeeper cluster (for automatic NameNode HA failover): 192.168.1.100 192.168.1.101

  • JobHistory server (records MapReduce job logs): 192.168.1.1

  • NFS server for NameNode HA: 192.168.1.100

Environment deployment

I. Add the CDH4 YUM repository

1. The best approach is to put the CDH4 packages into your own yum repository; see my separate post on building your own YUM repository (自建YUM仓库).

2. If you don't want to maintain your own repository, run the following on every Hadoop machine to add the CDH4 yum repository:

wget http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.x86_64.rpm
sudo yum --nogpgcheck localinstall cloudera-cdh-4-0.x86_64.rpm
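The commands above skip GPG verification. If you would rather have yum verify package signatures, you can import Cloudera's CDH4 signing key first (this key URL is the one published in Cloudera's CDH4 documentation; confirm it matches your mirror before relying on it):

# import Cloudera's GPG key so packages from the CDH4 repository can be signature-checked
sudo rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
# with the key imported, later installs no longer need --nogpgcheck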

II. Create the NFS server for NameNode HA

1. Log in to 192.168.1.100 and run the following script, createNFS.sh:

#!/bin/bash
yum -y install rpcbind nfs-utils
mkdir -p /data/nn_ha/
echo "/data/nn_ha  *(rw,root_squash,all_squash,sync)" >> /etc/exports
/etc/init.d/rpcbind start
/etc/init.d/nfs  start
chkconfig  --level 234 rpcbind   on
chkconfig --level 234 nfs on
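The export must end up mounted on both NameNode servers so it can hold the shared edits directory. If CreateNamenode.sh does not already do this for you, here is a minimal sketch; the mount point /data/nn_ha and the default mount options are assumptions and must match whatever dfs.namenode.shared.edits.dir points to in the sample hdfs-site.xml shipped with the scripts:

# on 192.168.1.1 and 192.168.1.2: mount the NFS export used for the shared edits directory
yum -y install nfs-utils
showmount -e 192.168.1.100
mkdir -p /data/nn_ha
mount -t nfs 192.168.1.100:/data/nn_ha /data/nn_ha
# make the mount survive reboots
echo "192.168.1.100:/data/nn_ha /data/nn_ha nfs defaults 0 0" >> /etc/fstab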

III. Hadoop NameNode & ResourceManager primary server deployment

1. Log in to 192.168.1.1, create the script directory, and clone the scripts from the git repository:


yum -y install git
mkdir -p /opt/
cd /opt/
git clone http://git.oschina.net/snake1361222/hadoop_scripts.git
/etc/init.d/iptables stop

2. Change the hostname

sh /opt/hadoop_scripts/deploy/AddHostname.sh

3. Edit the deployment script's configuration file

vim /opt/hadoop_scripts/deploy/config
# address of the master server, i.e. the primary namenode
master="192.168.1.1"
# address of the NFS server
nfsserver="192.168.1.100"

4. Edit the hosts file (this file is synchronized to every machine in the Hadoop cluster)

vim /opt/hadoop_scripts/share_data/resolv_host
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 nn.dg.hadoop.cn
192.168.1.2 nn2.dg.hadoop.cn
192.168.1.100 dn100.dg.hadoop.cn
192.168.1.101 dn101.dg.hadoop.cn
192.168.1.102 dn102.dg.hadoop.cn

5. Run the deployment script CreateNamenode.sh

sh /opt/hadoop_scripts/deploy/CreateNamenode.sh

6. Set up the SaltStack master

PS: SaltStack is an open-source server-management tool similar to puppet but lighter weight; here it is used to manage the Hadoop cluster and orchestrate the datanodes. For details on SaltStack, see my separate post on SaltStack deployment and usage (SaltStack部署与使用).

a. Install

yum -y install salt salt-master

b. Edit the configuration file `/etc/salt/master`; the items below are the ones that need changing

# listen address
interface: 0.0.0.0
# worker thread pool
worker_threads: 5
# enable the job cache (the official docs say the cache can handle around 5000 minions)
job_cache: True
# accept minion keys automatically
auto_accept: True

c. Start the service

/etc/init.d/salt-master start
chkconfig  salt-master on
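Once the minions have been installed on the other nodes in the later sections, a quick sanity check from the master confirms they are registered and reachable:

# list the minion keys the master has accepted (auto_accept: True means new minions appear here automatically)
salt-key -L
# every node should answer True
salt '*' test.ping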

7. The deployment script has already copied my sample configuration into place, so only a few configuration files need editing

a. /etc/hadoop/conf/hdfs-site.xml (essentially just change the hostnames to match your own)

<property>
  <name>dfs.namenode.rpc-address.mycluster.ns1</name>
  <value>nn.dg.hadoop.cn:8020</value>
  <description>RPC address of ns1</description>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.ns2</name>
  <value>nn2.dg.hadoop.cn:8020</value>
  <description>RPC address of ns2</description>
</property>
<property>
    <name>ha.zookeeper.quorum</name>
    <value>dn100.dg.hadoop.cn:2181,dn101.dg.hadoop.cn:2181,dn102.dg.hadoop.cn:2181</value>
    <description>List of ZooKeeper servers used for HA</description>
</property>

b. mapred-site.xml

<property>
 <name>mapreduce.jobhistory.address</name>
 <value>nn.dg.hadoop.cn:10020</value>
</property>
<property>
 <name>mapreduce.jobhistory.webapp.address</name>
 <value>nn.dg.hadoop.cn:19888</value>
</property>


c. yarn-site.xml

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>nn.dg.hadoop.cn:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>nn.dg.hadoop.cn:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>nn.dg.hadoop.cn:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>nn.dg.hadoop.cn:8033</value>
</property>


IV. Hadoop NameNode & ResourceManager standby server deployment

1. Log in to 192.168.1.2, create the script directory, and sync the scripts over from the primary server:

/etc/init.d/iptables stop
mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s   /opt/hadoop_scripts

2. Change the hostname

sh /opt/hadoop_scripts/deploy/AddHostname.sh

3. Run the deployment script CreateNamenode.sh

sh /opt/hadoop_scripts/deploy/CreateNamenode.sh

4. Sync the Hadoop configuration files

rsync -avz 192.168.1.1::hadoop_conf  /etc/hadoop/conf

5. Deploy the SaltStack minion

sh /opt/hadoop_scripts/deploy/salt_minion.sh

V. ZooKeeper cluster deployment

ZooKeeper is an open-source distributed coordination service; here it provides the NameNode's automatic failover.

1. Install

yum install zookeeper zookeeper-server

2. Edit the configuration file /etc/zookeeper/conf/zoo.cfg

maxClientCnxns=50
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# list every machine in the zookeeper ensemble here; this part is identical on every member
server.1=dn100.dg.hadoop.cn:2888:3888
server.2=dn101.dg.hadoop.cn:2888:3888

3. Set this machine's id and start the service

# e.g. this machine is 192.168.1.100 (dn100.dg.hadoop.cn), which is server.1, so its id is 1:
echo "1" >  /var/lib/zookeeper/myid
chown -R zookeeper.zookeeper /var/lib/zookeeper/
service zookeeper-server init
/etc/init.d/zookeeper-server start
chkconfig zookeeper-server on
# repeat the same steps to deploy 192.168.1.101 (with id 2)
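Before moving on, it is worth confirming the ensemble is actually serving; with only two servers both must be up to form a quorum. The four-letter commands work over a plain TCP connection (assumes nc is installed):

# each server should answer "imok", and between them one should be leader and one follower
echo ruok | nc dn100.dg.hadoop.cn 2181; echo
echo stat | nc dn100.dg.hadoop.cn 2181 | grep Mode
echo stat | nc dn101.dg.hadoop.cn 2181 | grep Mode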

VI. DataNode & NodeManager server deployment

1. Log in to each datanode machine, create the script directory, and sync the scripts over from the primary server:

/etc/init.d/iptables stop
mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s   /opt/hadoop_scripts

2. Change the hostname and run the deployment script CreateDatanode.sh

sh /opt/hadoop_scripts/deploy/AddHostname.sh
sh /opt/hadoop_scripts/deploy/CreateDatanode.sh

Cluster initialization

At this point the Hadoop cluster environment is fully deployed; now we initialize the cluster.

I. NameNode HA initialization

1. Start the ZooKeeper services (192.168.1.100 192.168.1.101):

/etc/init.d/zookeeper-server start

2. On the primary NameNode (192.168.1.1), format the ZooKeeper failover state:

sudo -u hdfs hdfs zkfc -formatZK

3. Start the zkfc service on both the primary and standby NameNodes (192.168.1.1 192.168.1.2):

/etc/init.d/hadoop-hdfs-zkfc start

4. Format HDFS on the primary NameNode (192.168.1.1):

# make sure the format runs as the hdfs user
sudo -u hdfs hadoop namenode -format

5. When setting up NameNode HA for the first time, the data under name.dir must be copied over to the standby NameNode (this pitfall cost me a lot of time).

a. On the primary server (192.168.1.1), run:

tar -zcvPf /tmp/namedir.tar.gz /data/hadoop/dfs/name/
nc -l 9999 < /tmp/namedir.tar.gz

b. On the standby server (192.168.1.2), run:

wget 192.168.1.1:9999 -O /tmp/namedir.tar.gz
tar -zxvPf /tmp/namedir.tar.gz
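The nc copy above is what I used, but CDH4's HA tooling also has a built-in way to seed the standby: once the primary NameNode has been formatted and its service started (step 6 below), the standby can pull the namespace itself. Offered here only as an alternative sketch; if you copied the directory manually you can skip it.

# on 192.168.1.2, with the primary namenode already running
sudo -u hdfs hdfs namenode -bootstrapStandby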

6. Start the services on both primary and standby:

/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-yarn-resourcemanager start

7. Check the HDFS web UIs:

http://192.168.1.1:9080
http://192.168.1.2:9080
# if the web UI shows both namenodes stuck in standby state, automatic failover has not been configured correctly
# check the zkfc log (/var/log/hadoop-hdfs/hadoop-hdfs-zkfc-nn.dg.s.kingsoft.net.log)
# check the zookeeper logs (/var/log/zookeeper/zookeeper.log)

8. Now try stopping the NameNode service on the active node and verify that the standby takes over, e.g. with the commands below.
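The haadmin tool shows which side is active at any moment; the NameNode IDs ns1 and ns2 follow the dfs.namenode.rpc-address.mycluster.* names configured in hdfs-site.xml above.

# check the current state of each namenode
sudo -u hdfs hdfs haadmin -getServiceState ns1
sudo -u hdfs hdfs haadmin -getServiceState ns2
# stop the namenode on whichever node is active, then re-run the commands above:
# the other namenode should report "active" within a few seconds
/etc/init.d/hadoop-hdfs-namenode stop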

II. Starting the HDFS cluster

At this point the whole Hadoop deployment is done; now start the cluster and verify it works.

1. Start all the datanode servers:

# remember the saltstack tool we set up earlier? this is where it pays off: log in to the saltstack master (192.168.1.1) and run
salt -v "dn*" cmd.run "/etc/init.d/hadoop-hdfs-datanode start"

2. Check the HDFS web UI and confirm all the datanodes show up as live nodes.

3. If everything looks good, try some HDFS operations:

# create a tmp directory
sudo -u hdfs hdfs dfs -mkdir /tmp
# create an empty 10G file, compute its MD5, and put it into hdfs
dd if=/dev/zero of=/data/test_10G_file bs=1G count=10
md5sum /data/test_10G_file
sudo -u hdfs hdfs dfs -put /data/test_10G_file  /tmp
sudo -u hdfs hdfs dfs -ls /tmp
# now try stopping one datanode, pull the test file back out of hdfs, and recompute the MD5 to confirm it matches
sudo -u hdfs hdfs dfs -get /tmp/test_10G_file /tmp/
md5sum /tmp/test_10G_file

III. Starting the YARN cluster

Besides HDFS for distributed storage of big data, Hadoop's other key component is distributed computation (MapReduce). Let's now start the MapReduce v2 (YARN) cluster.

1. Start the resourcemanager service on the primary server (192.168.1.1):

/etc/init.d/hadoop-yarn-resourcemanager start

2. Start all the nodemanager services:

# log in to the saltstack master again and run
salt -v "dn*" cmd.run "/etc/init.d/hadoop-yarn-nodemanager start"

3. Check the YARN web UI (http://192.168.1.1:9081/) and confirm that all the nodes have joined.

4. Hadoop ships with benchmark MapReduce examples; use them to check that the YARN environment works:

# TestDFSIO measures HDFS read/write performance: write 10 files of 1G each
su - hdfs
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar TestDFSIO  -write -nrFiles 10 -fileSize 1000
# Sort exercises MapReduce
## write random data into the random-data directory
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomwriter  random-data
## run the sort job
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort random-data sorted-data
## verify that sorted-data is correctly sorted
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar testmapredsort -sortInput random-data \
-sortOutput sorted-data
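TestDFSIO also has read and cleanup modes, so after the write pass above you can measure read throughput and then remove the benchmark data:

# read back the 10 x 1G files written above, then delete the TestDFSIO data
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 1000
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar TestDFSIO -clean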

Managing the Hadoop cluster

I. Adding a datanode & nodemanager node

1. Update the hosts file; suppose node 192.168.1.103 needs to be added:

vim /opt/hadoop_scripts/share_data/resolv_host
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 nn.dg.hadoop.cn
192.168.1.2 nn2.dg.hadoop.cn
192.168.1.100 dn100.dg.hadoop.cn
192.168.1.101 dn101.dg.hadoop.cn
192.168.1.102 dn102.dg.hadoop.cn
192.168.1.103 dn103.dg.hadoop.cn

2. Change the hostname, sync the script directory, and run the deployment:

mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s   /opt/hadoop_scripts
sh /opt/hadoop_scripts/deploy/AddHostname.sh
sh /opt/hadoop_scripts/deploy/CreateDatanode.sh


3. Start the services:

/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-yarn-nodemanager start
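To confirm the new node has joined, check it from the saltstack master and from the HDFS/YARN side (the salt target dn103* assumes the minion id follows the hostname):

salt 'dn103*' test.ping
# the new datanode should appear in the dfsadmin report and in the yarn node list
sudo -u hdfs hdfs dfsadmin -report | grep dn103
yarn node -list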

II. Changing Hadoop configuration files

A Hadoop cluster normally maintains a single set of Hadoop configuration that has to be distributed to every member of the cluster. The approach used here is salt + rsync:

# edit the hadoop configuration under /etc/hadoop/conf/ on the primary namenode, then run the following to sync it to every cluster member
sync_h_conf
# the script directory also needs maintaining, e.g. the hosts file /opt/hadoop_scripts/share_data/resolv_host; after editing it, run the following to sync it to every cluster member
sync_h_script
# these two commands are actually aliases for salt commands that I defined myself; see /opt/hadoop_scripts/profile.d/hadoop.sh
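For reference, here is a minimal sketch of what such an alias could look like. This is an assumption about the idea, not the actual contents of /opt/hadoop_scripts/profile.d/hadoop.sh (check that file for the real definition); it reuses the hadoop_conf and hadoop_s rsync modules already used earlier in this article:

# hypothetical aliases: have every minion pull the master's config / scripts over rsync
alias sync_h_conf="salt '*' cmd.run 'rsync -avz 192.168.1.1::hadoop_conf /etc/hadoop/conf'"
alias sync_h_script="salt '*' cmd.run 'rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts'"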

III. Monitoring

The usual approach is ganglia plus nagios: ganglia collects a large number of metrics and graphs them, and nagios raises an alert when a metric crosses a threshold. I will add documentation on the ganglia setup later.

Hadoop itself also exposes an interface you can write your own monitoring against, and it is fairly simple: request http://192.168.1.1:9080/jmx and you get back very detailed JSON. Returning the whole JSON blob on every query is wasteful, though; the interface also supports more targeted queries. For example, to fetch only the operating-system information, call http://192.168.1.1:9080/jmx?qry=java.lang:type=OperatingSystem . The value after the qry parameter is the "name" key of the bean you want from the full JSON.
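For example, pulling just the operating-system bean and extracting a single field from it (python 2.6 is assumed to be available, as on CentOS 6):

# full bean, as JSON
curl -s 'http://192.168.1.1:9080/jmx?qry=java.lang:type=OperatingSystem'
# pick one value out of the response, e.g. the system load average
curl -s 'http://192.168.1.1:9080/jmx?qry=java.lang:type=OperatingSystem' \
  | python -c 'import json,sys; print json.load(sys.stdin)["beans"][0]["SystemLoadAverage"]'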


Summary

I ran into quite a few pitfalls while working through this Hadoop deployment, and I plan to write up the problems I hit in the next post. If you run into trouble following this article, feel free to contact me so we can compare notes: QQ 83766787. Contributions to the deployment scripts are welcome too; the git repository is http://git.oschina.net/snake1361222/hadoop_scripts
