Kafka Installation and Simple Test

Original post · 2016/05/09 23:11

In Action Recipe

As a Spark beginner, I want to set up the Kafka messaging service and integrate it with a Spark Streaming application. Before that, you have to install and configure Kafka.

Prerequisite

Ensure you have ZooKeeper installed.

Steps:

Step 1: Download Kafka from the official website (Kafka Download Page)

Step 2: Unzip and move it to the target directory, e.g. /usr/local/kafka

Step 3: Configure KAFKA_HOME and add its bin directory to the PATH in your .bashrc file
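The .bashrc entries for this step can be sketched as below; the install path is the one from Step 2 and should be adjusted if you unpacked Kafka elsewhere.

```shell
# Sketch of the .bashrc entries (path assumes Step 2's /usr/local/kafka).
export KAFKA_HOME=/usr/local/kafka
export PATH=${PATH}:${KAFKA_HOME}/bin

# Quick sanity check that the bin directory is now on the PATH:
echo "${PATH}" | grep -q "${KAFKA_HOME}/bin" && echo "kafka bin on PATH"
```

Remember to `source ~/.bashrc` (or open a new shell) so the change takes effect.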

Step 4: Execute the scp command to deploy Kafka to the other worker nodes.
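The deployment can be sketched as a small loop; the worker hostnames follow this cluster's naming, and the leading echo makes it a dry run — drop it to actually copy.

```shell
# Dry-run sketch of deploying Kafka to the worker nodes.
# Remove the leading "echo" to perform the real copy.
KAFKA_DIR=/usr/local/kafka
for node in HadoopS1 HadoopS2; do
  echo scp -r "${KAFKA_DIR}" "${node}:/usr/local/"
done
```

This assumes passwordless SSH between the nodes, which a Hadoop/Spark cluster typically already has.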

Step 5: Update the configuration file ${KAFKA_HOME}/config/server.properties on each cluster node:

  • HadoopM:

      broker.id=0
      host.name=HadoopM
      advertised.host.name=HadoopM
      zookeeper.connect=HadoopM:2181,HadoopS1:2181,HadoopS2:2181

  • HadoopS1:

      broker.id=1
      host.name=HadoopS1
      advertised.host.name=HadoopS1
      zookeeper.connect=HadoopM:2181,HadoopS1:2181,HadoopS2:2181

  • HadoopS2:

      broker.id=2
      host.name=HadoopS2
      advertised.host.name=HadoopS2
      zookeeper.connect=HadoopM:2181,HadoopS1:2181,HadoopS2:2181
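The three per-node edits above can be sketched as one loop that derives broker.id and host.name from a single host list, which keeps the ids unique and the settings consistent:

```shell
# Sketch: generate the per-node settings shown above from one host list.
id=0
for host in HadoopM HadoopS1 HadoopS2; do
  echo "# --- ${host} ---"
  echo "broker.id=${id}"
  echo "host.name=${host}"
  echo "advertised.host.name=${host}"
  echo "zookeeper.connect=HadoopM:2181,HadoopS1:2181,HadoopS2:2181"
  id=$((id + 1))
done
```

Redirect each node's section into its server.properties (or run it locally on each node) rather than hand-editing three files.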


[Screenshot: server.properties settings on HadoopM]


Step 6: You can now test your Kafka service with the following commands:

  • Start the Kafka service on every node

  • Create a Kafka topic on the master node, e.g. HadoopM

  • Verify the newly created topic on the worker nodes, e.g. HadoopS1 and HadoopS2

  • Start the consumer on the worker nodes, e.g. HadoopS1 and HadoopS2

  • Start the producer on the master node, e.g. HadoopM


#Start and stop Kafka (run the start command on every node)

${KAFKA_HOME}/bin/kafka-server-start.sh ${KAFKA_HOME}/config/server.properties &
${KAFKA_HOME}/bin/kafka-server-stop.sh


#Create a topic in Kafka on the master node, e.g. topic HelloKafka

${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper HadoopM:2181,HadoopS1:2181,HadoopS2:2181 --replication-factor 3 --partitions 1 --topic HelloKafka


#Describe the topic you just created, e.g. topic HelloKafka

${KAFKA_HOME}/bin/kafka-topics.sh --describe --zookeeper HadoopM:2181,HadoopS1:2181,HadoopS2:2181 --topic HelloKafka


#Delete a topic in Kafka, e.g. topic HelloKafka

${KAFKA_HOME}/bin/kafka-topics.sh --delete --zookeeper HadoopM:2181,HadoopS1:2181,HadoopS2:2181 --topic HelloKafka


#Go to the master node and start the producer

${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list HadoopM:9092,HadoopS1:9092,HadoopS2:9092 --topic HelloKafka

After the producer starts successfully, you can type messages, e.g. Hello Spark
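Instead of typing interactively, messages can also be piped into the producer. Since the real producer needs a live broker, the sketch below only echoes what would be sent; replace the echo with the kafka-console-producer.sh command above on a running cluster.

```shell
# Dry-run sketch: feed a batch of messages line by line, as the console
# producer would receive them on stdin.
printf 'Hello Spark\nHello Kafka\n' | while read -r msg; do
  echo "would send: ${msg}"
done
```

This pattern is handy for scripted smoke tests where typing into the console is impractical.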


#Go to the worker nodes and start the consumer

${KAFKA_HOME}/bin/kafka-console-consumer.sh --zookeeper HadoopM:2181,HadoopS1:2181,HadoopS2:2181 --topic HelloKafka --from-beginning

Running this command on my HadoopS1 server, the consumer successfully receives the message.


You can also list all the topics you have created, e.g. topic HelloKafka:

#Go to a worker node and list the topics in Kafka

${KAFKA_HOME}/bin/kafka-topics.sh --list --zookeeper HadoopM:2181,HadoopS1:2181,HadoopS2:2181
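The check can also be scripted rather than read by eye. The topic list below is simulated; on a live cluster, capture the output of the kafka-topics.sh --list command above instead.

```shell
# Sketch: assert that a topic appears in the list (simulated list here).
topics="HelloKafka
AnotherTopic"
if echo "${topics}" | grep -qx "HelloKafka"; then
  echo "topic HelloKafka exists"
fi
```

grep -x matches the whole line, so a topic named HelloKafka2 would not produce a false positive.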





Thanks for reading

Janice

——————————————————————————————–
Reference: DT大数据梦工厂IMF传奇行动绝密课程 – Lesson 89: Spark Streaming data source Kafka — analysis, installation, configuration, and hands-on testing

Sharing is Good, Learning is Fun.
"Today is brutal, tomorrow is more brutal, and the day after tomorrow is beautiful. But most people die tomorrow night and never see the sun of the day after." – Jack Ma (马云)


