Setting up Kafka on Alibaba Cloud and fixing the bugs along the way

Original post
2017/07/07 15:23

Environment prep:

  • Ubuntu 16.04
  • JDK 1.8

First, set up the JDK on Ubuntu. I went with Oracle's JDK; if you want OpenJDK instead, Google it yourself. The commands are:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
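
Once the installer finishes, it is worth confirming the JDK is actually on the PATH before going any further (the exact output varies by machine, but it should report a 1.8.x version):

java -version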

OK, next download Kafka and get it onto the Alibaba Cloud server. I used Xshell + Xftp.

Kafka website: http://kafka.apache.org/

The latest release as of this post is kafka_2.11-0.11.0.0.tgz. Extract that file:

tar -zxf kafka_2.11-0.11.0.0.tgz
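
As an aside, if you would rather skip Xftp entirely, the same tarball can usually be pulled straight onto the server; the archive URL below is my assumption based on Apache's mirror layout, so double-check it against the downloads page first:

wget https://archive.apache.org/dist/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz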

Starting Kafka requires Zookeeper to be running first. Zookeeper is bundled inside the Kafka distribution, though, so there is no need to install it separately. First cd into the Kafka directory:

cd kafka_2.11-0.11.0.0/

Next, start Zookeeper:

bin/zookeeper-server-start.sh config/zookeeper.properties

This step rarely goes wrong; a successful start prints something like this:

[2017-07-07 13:59:23,617] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-07-07 13:59:23,617] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-07-07 13:59:23,617] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-07-07 13:59:23,617] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2017-07-07 13:59:23,655] INFO Reading configuration from: ../config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-07-07 13:59:23,656] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
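
A small convenience: if you do not want Zookeeper tying up this terminal, you can push it into the background with plain nohup instead (just a sketch; zookeeper.out is my own choice of log file name):

nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper.out 2>&1 &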

Now open another terminal, and from the same directory as above run the following command to start Kafka:

bin/kafka-server-start.sh config/server.properties

This is the step where you may run into two pitfalls; the error messages look like the following.

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/kafka_2.11-0.11.0.0/hs_err_pid1492.log

This error means the Java heap configured in kafka-server-start.sh is larger than the memory the machine actually has, so the JVM cannot start. The fix is to edit these lines in kafka-server-start.sh:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

Change "-Xmx1G -Xms1G" to whatever size suits your machine; it must not exceed the system's physical memory. Since my server is the cheapest Alibaba Cloud tier, I changed it to "-Xmx768m -Xms256m", as shown below.
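
For reference, after the edit the block in my kafka-server-start.sh reads as follows (adjust the numbers to your own instance's memory):

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx768m -Xms256m"
fi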

Start it again and you may still hit this error:

[2017-07-07 14:09:38,308] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.net.UnknownHostException: iZwz9gkvuniblaad81xj1yZ: iZwz9gkvuniblaad81xj1yZ: Name or service not known
	at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
	at kafka.server.KafkaHealthcheck$$anonfun$1.apply(KafkaHealthcheck.scala:60)
	at kafka.server.KafkaHealthcheck$$anonfun$1.apply(KafkaHealthcheck.scala:58)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:58)
	at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:50)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:280)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:65)
	at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: iZwz9gkvuniblaad81xj1yZ: Name or service not known
	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
	at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
	... 14 more

This happens because every Alibaba Cloud server has a hostname of its own, e.g. iZwz9gkvuniblaad81xj1yZ. Kafka tries to resolve that name and fails, because the hosts file has no entry mapping it to an IP. So add one line to /etc/hosts:

127.0.0.1  iZwz9gkvuniblaad81xj1yZ    (replace this with your own server's hostname)
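
If you are not sure what name to put there, the hostname command prints it, and the entry can be appended in one shot (a sketch that just automates the manual edit above):

hostname
echo "127.0.0.1  $(hostname)" | sudo tee -a /etc/hosts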

Start it once more and there should be no problem:

bin/kafka-server-start.sh config/server.properties

Testing Kafka

1. Create a topic "test"

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test  
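
You can confirm the topic was actually created before moving on; the same script also takes a --list flag:

bin/kafka-topics.sh --list --zookeeper localhost:2181

It should print test.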

2. Send messages

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test 
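
The console producer then waits on stdin, and every line you type is sent as one message. For example, typing the two lines below sends two messages (the text itself is arbitrary):

hello kafka
this is a test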

3. Start a consumer

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning  
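
Because of --from-beginning, the consumer replays everything sent to the topic so far; with the sample input above, the terminal should simply print back:

hello kafka
this is a test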

OK, at this point Kafka is basically up and running. Note that this is only a single-machine development setup; if you want a distributed cluster, Google for a tutorial yourself.

Reference blog: http://blog.csdn.net/jingshuigg/article/details/24439637
