Why a DataNode fails to start in a Hadoop cluster

Original post
2017/10/01 18:25

2013-10-15 09:52:31,351 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:32,352 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:33,353 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:34,354 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:35,355 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:37,822 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:38,823 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sunliang/192.168.1.232:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-15 09:52:38,824 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.ConnectException: Call to sunliang/192.168.1.232:9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
        at org.apache.hadoop.ipc.Client.call(Client.java:1118)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy5.sendHeartbeat(Unknown Source)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1031)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1588)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
        at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
        at org.apache.hadoop.ipc.Client.call(Client.java:1093)
        ... 5 more

After starting Hadoop, checking the processes with jps shows that the DataNode is not running while everything else is normal; the DataNode log contains the output above. The `Connection refused` means the DataNode cannot reach the NameNode's RPC port (192.168.1.232:9000) — either nothing is listening there, or something is blocking the connection.
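A quick way to confirm the symptom on each node (log path and process names below match Hadoop 1.x defaults and may differ on your installation):

```shell
# On a slave node: list the running Java processes.
jps
# A healthy slave normally lists a DataNode entry (and, on Hadoop 1.x,
# a TaskTracker). If DataNode is missing, inspect the tail of its log:
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```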

Solution

Delete the tmp directories Hadoop uses (the `hadoop.tmp.dir` location) on the NameNode and on every DataNode. Note that this, together with the reformat below, destroys all data stored in HDFS.

Then run `hadoop namenode -format` to reformat the NameNode, and restart the cluster with start-all.sh; after that everything comes up normally. (Reformatting regenerates the NameNode metadata, and clearing the DataNode tmp directories lets them re-register with the freshly formatted NameNode instead of failing on stale state.)
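The reset procedure can be sketched as follows. The tmp path is an assumption (`/tmp/hadoop-$USER` is the Hadoop 1.x default); use the `hadoop.tmp.dir` value from your own core-site.xml:

```shell
# DESTRUCTIVE: this wipes all HDFS data. Run only on a cluster you can rebuild.
stop-all.sh                 # stop all Hadoop daemons first
rm -rf /tmp/hadoop-$USER    # repeat on the NameNode AND every DataNode
hadoop namenode -format     # regenerate the NameNode metadata
start-all.sh                # restart, then verify with jps on each node
```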

Another possible cause is a firewall blocking the NameNode's RPC port; turn the firewall off (or open port 9000).
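On an older iptables-based CentOS/RHEL system (adjust the commands for your distribution), the check and fix might look like this; the IP and port come from the log above:

```shell
# On the NameNode host:
service iptables stop               # stop the firewall immediately
chkconfig iptables off              # keep it disabled across reboots
# Less drastic alternative: open only the NameNode RPC port instead.
# iptables -I INPUT -p tcp --dport 9000 -j ACCEPT
# From a DataNode, verify the port is now reachable:
telnet 192.168.1.232 9000
```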
