
kafka3: local directory structure and the znodes on zk

osc_1ee7cxmx
Published 2018/08/06 15:13 · 2399 words

1. Kafka local directory structure

[root@hadoop ~]# cd /tmp/kafka-logs1

[root@hadoop kafka-logs1]# find .
.
./.lock
./recovery-point-offset-checkpoint
./log-start-offset-checkpoint
./cleaner-offset-checkpoint
./replication-offset-checkpoint
./meta.properties
./mytest-1
./mytest-1/leader-epoch-checkpoint
./mytest-1/00000000000000000000.log
./mytest-1/00000000000000000000.index
./mytest-1/00000000000000000000.timeindex
./mytest-0
./mytest-0/leader-epoch-checkpoint
./mytest-0/00000000000000000000.log
./mytest-0/00000000000000000000.index
./mytest-0/00000000000000000000.timeindex
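Each segment in a partition directory is a trio of files (.log, .index, .timeindex) named after the segment's base offset, i.e. the offset of the first record it holds, zero-padded to 20 digits. That is why a brand-new partition starts at 00000000000000000000. A minimal sketch of the naming rule (the offset 523 below is just an invented example):

```shell
# Segment files are named by the first record's offset, zero-padded to
# 20 digits; a new, empty partition starts its first segment at offset 0.
seg_name() {
  printf '%020d' "$1"
}
seg_name 0; echo      # -> 00000000000000000000
seg_name 523; echo    # a hypothetical later segment rolled at offset 523
```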

 

After setting up a single-node, multi-broker Kafka, start zk and the Kafka brokers.

[root@hadoop ~]# cd /usr/local/kafka
[root@hadoop kafka]# zookeeper-server-start.sh config/zookeeper.properties 
...
[root@hadoop kafka]# kafka-server-start.sh config/server0.properties &
...
[root@hadoop kafka]# kafka-server-start.sh config/server1.properties &
...
[root@hadoop kafka]# kafka-server-start.sh config/server2.properties &
...

[root@hadoop ~]# jps
4133 QuorumPeerMain
4791 Kafka
5452 Kafka
5780 Kafka
6164 Jps

When I set up the Kafka cluster I already created a topic test02; now let's create another topic, mytest, with 2 partitions.

[root@hadoop ~]# cd /usr/local/kafka
[root@hadoop kafka]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 2 --topic mytest
Created topic "mytest".

Looking at the log directories, all three are nearly identical. (__consumer_offsets-0 is a partition of Kafka's internal __consumer_offsets topic, which stores committed consumer-group offsets; it is created automatically once the first consumer connects.)

test02 and mytest both have a replication factor of 3, so they appear under every broker's log dir;
test02 has a single partition while mytest has 2, hence one test02 directory and two mytest directories.
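With a replication factor of 3 on 3 brokers, each broker necessarily holds one replica of every partition. Kafka's real assignment randomizes the starting broker and staggers follower placement; the stripped-down round-robin sketch below only illustrates the shape of the idea, not the actual algorithm:

```shell
# Simplified round-robin replica placement: partition p's replicas go to
# brokers (p, p+1, ..., p+rf-1) mod n. NOT Kafka's exact algorithm, which
# also randomizes the start index and shifts follower placement.
assign_replicas() {  # usage: assign_replicas <partition> <rf> <num_brokers>
  p=$1 rf=$2 n=$3 out='' r=0
  while [ "$r" -lt "$rf" ]; do
    out="$out$(( (p + r) % n )) "
    r=$((r + 1))
  done
  printf '%s\n' "${out% }"
}
assign_replicas 0 3 3   # partition 0 -> brokers 0 1 2
assign_replicas 1 3 3   # partition 1 -> brokers 1 2 0
```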

[root@hadoop ~]# ll /tmp/kafka-logs0
total 16
-rw-r--r-- 1 root root    0 Aug  3 11:02 cleaner-offset-checkpoint
drwxr-xr-x 2 root root  141 Aug  3 11:02 __consumer_offsets-0
-rw-r--r-- 1 root root    4 Aug  5 00:44 log-start-offset-checkpoint
-rw-r--r-- 1 root root   54 Aug  3 11:02 meta.properties
drwxr-xr-x 2 root root  141 Aug  5 00:31 mytest-0 # naming format: topic-partition
drwxr-xr-x 2 root root  141 Aug  5 00:31 mytest-1
-rw-r--r-- 1 root root 1241 Aug  5 00:44 recovery-point-offset-checkpoint
-rw-r--r-- 1 root root 1241 Aug  5 00:44 replication-offset-checkpoint
drwxr-xr-x 2 root root  178 Aug  3 11:49 test02-0

[root@hadoop ~]# ll /tmp/kafka-logs1
total 16
-rw-r--r-- 1 root root   0 Jul 31 16:05 cleaner-offset-checkpoint
-rw-r--r-- 1 root root   4 Aug  5 00:48 log-start-offset-checkpoint
-rw-r--r-- 1 root root  54 Jul 31 16:05 meta.properties
drwxr-xr-x 2 root root 141 Aug  5 00:31 mytest-0
drwxr-xr-x 2 root root 141 Aug  5 00:31 mytest-1
-rw-r--r-- 1 root root  85 Aug  5 00:48 recovery-point-offset-checkpoint
-rw-r--r-- 1 root root  85 Aug  5 00:49 replication-offset-checkpoint
drwxr-xr-x 2 root root 178 Aug  4 23:29 test02-0

[root@hadoop ~]# ll /tmp/kafka-logs2
total 16
-rw-r--r-- 1 root root   0 Jul 31 16:06 cleaner-offset-checkpoint
-rw-r--r-- 1 root root   4 Aug  5 00:48 log-start-offset-checkpoint
-rw-r--r-- 1 root root  54 Jul 31 16:06 meta.properties
drwxr-xr-x 2 root root 141 Aug  5 00:31 mytest-0
drwxr-xr-x 2 root root 141 Aug  5 00:31 mytest-1
-rw-r--r-- 1 root root  85 Aug  5 00:48 recovery-point-offset-checkpoint
-rw-r--r-- 1 root root  85 Aug  5 00:49 replication-offset-checkpoint
drwxr-xr-x 2 root root 178 Aug  4 23:29 test02-0

Inspect a topic directory. (The .index and .timeindex files look oversized because they are preallocated to log.index.size.max.bytes, 10 MB by default, and are trimmed to their actual size when the segment rolls.)

[root@hadoop ~]# ll /tmp/kafka-logs0/test02-0/
total 16
-rw-r--r-- 1 root root 10485760 Aug  5 00:12 00000000000000000000.index
-rw-r--r-- 1 root root       79 Aug  3 11:06 00000000000000000000.log
-rw-r--r-- 1 root root 10485756 Aug  5 00:12 00000000000000000000.timeindex
-rw-r--r-- 1 root root       10 Aug  3 11:49 00000000000000000001.snapshot
-rw-r--r-- 1 root root        8 Aug  3 11:06 leader-epoch-checkpoint

Inspect the metadata

[root@hadoop ~]# cat /tmp/kafka-logs0/meta.properties 
version=0
broker.id=0
[root@hadoop ~]# cat /tmp/kafka-logs1/meta.properties 
version=0
broker.id=1
[root@hadoop ~]# cat /tmp/kafka-logs2/meta.properties 
version=0
broker.id=2
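meta.properties is a plain Java-properties file; on startup the broker checks that the broker.id recorded in each log dir matches its own configuration and refuses to start on a mismatch (which is why you cannot simply copy a log dir between brokers). A minimal parsing sketch, using a hypothetical inline copy of broker 1's file:

```shell
# meta.properties pins a log dir to a broker.id; a mismatch at startup
# makes the broker bail out. Reading it needs nothing beyond grep/cut.
meta='version=0
broker.id=1'
broker_id=$(printf '%s\n' "$meta" | grep '^broker\.id=' | cut -d= -f2)
echo "$broker_id"   # -> 1
```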

Produce messages to the mytest topic and consume them; the messages end up in the logs of different partitions.

# produce messages
[root@hadoop kafka]# kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --topic mytest
>hello kafka
>hello world

# consume messages
[root@hadoop kafka]# kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic mytest --from-beginning
hello kafka
hello world

# the messages were stored in the logs of different partitions
[root@hadoop ~]# cat /tmp/kafka-logs0/mytest-0/00000000000000000000.log 
C}_Me Ye Yÿÿÿÿÿÿÿÿÿÿÿÿÿÿ"hello kafka[root@hadoop ~]# 
[root@hadoop ~]# cat /tmp/kafka-logs0/mytest-1/00000000000000000000.log 
e ꛁe ꞿÿÿÿÿÿÿÿÿÿÿÿÿÿ"hello world[root@hadoop ~]#
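The two messages landed in different partitions because console-producer records carry no key, and keyless records are spread across partitions (older clients round-robin them per record; newer ones use a sticky per-batch partitioner). Keyed records instead go to hash(key) % num_partitions; Kafka really uses murmur2, so the byte-sum hash below is a toy stand-in that only illustrates the shape of the rule:

```shell
# Toy keyed partitioner: partition = hash(key) % num_partitions.
# NOT Kafka's murmur2 hash -- a byte-sum for illustration only.
partition_for() {  # usage: partition_for <key> <num_partitions>
  key=$1 n=$2 sum=0
  for c in $(printf '%s' "$key" | od -An -tu1); do
    sum=$((sum + c))
  done
  echo $((sum % n))
}
partition_for "hello kafka" 2
partition_for "hello world" 2
```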

2. Kafka's znodes on zk

1./controller                //data = {"version":1,"brokerid":0,"timestamp":"1533396512695"}

2./controller_epoch          //data = 27 # the current controller's epoch: it starts at 1 and is incremented
                             # each time a new controller is elected (it is not a count of Kafka restarts)

3./brokers/ids               //tracks the currently active brokers in real time
   /brokers/ids/0            //data = {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://hadoop:9092"],
                             //"jmx_port":-1,"host":"hadoop","timestamp":"1533399158574","port":9092,"version":4}
  /brokers/ids/1
  /brokers/ids/2

  /brokers/topics
  /brokers/topics/mytest/partitions/0/state     //data = {"controller_epoch":28,"leader":0,"version":1,"leader_epoch":0,"isr":[0,2,1]}
  /brokers/topics/mytest/partitions/1/state     //data = {"controller_epoch":28,"leader":1,"version":1,"leader_epoch":0,"isr":[1,0,2]}

  /brokers/seqid

4./admin/delete_topics

5./isr_change_notification

6./consumers

7./config
   /config/changes
  /config/clients
  /config/brokers
  /config/topics
  /config/users

Note: producers do not register in zk.
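The per-partition state payloads are small enough to pick apart with plain sed; the sketch below runs against the exact partition-0 payload shown above. It is fragile by design, fine for eyeballing a transcript but not for scripts (use a real JSON tool such as jq in practice):

```shell
# Extract leader and ISR from a partition-state payload with plain sed.
# Illustration only; a JSON parser is the right tool for real scripts.
state='{"controller_epoch":28,"leader":0,"version":1,"leader_epoch":0,"isr":[0,2,1]}'
leader=$(printf '%s' "$state" | sed 's/.*"leader":\([0-9]*\).*/\1/')
isr=$(printf '%s' "$state" | sed 's/.*"isr":\[\([0-9,]*\)].*/\1/')
echo "leader=$leader isr=$isr"   # -> leader=0 isr=0,2,1
```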

 

Start the zk client

[root@hadoop ~]# cd /usr/local/kafka
[root@hadoop kafka]# zkCli.sh -server hadoop:2181 # start the zk client
...

Inspect the root znode

[zk: hadoop:2181(CONNECTED) 0] ls / # list the znodes
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, consumers, 
log_dir_event_notification, latest_producer_id_block, config]

Inspect /controller

[zk: hadoop:2181(CONNECTED) 1] ls /controller 
[]
[zk: hadoop:2181(CONNECTED) 2] get /controller 
# brokerid 0 here means broker 0 is the cluster's controller (leader)
# with multiple brokers in the cluster, kill the controller and this brokerid changes.
{"version":1,"brokerid":0,"timestamp":"1533396512695"}
cZxid = 0x513
ctime = Sat Aug 04 23:28:32 CST 2018
mZxid = 0x513
mtime = Sat Aug 04 23:28:32 CST 2018
pZxid = 0x513
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x10000711d710001
dataLength = 54
numChildren = 0

Inspect /controller_epoch

[zk: hadoop:2181(CONNECTED) 3] ls /controller_epoch
[]
[zk: hadoop:2181(CONNECTED) 4] get /controller_epoch
27
cZxid = 0x1c
ctime = Tue Jul 31 14:18:01 CST 2018
mZxid = 0x514
mtime = Sat Aug 04 23:28:32 CST 2018
pZxid = 0x1c
cversion = 0
dataVersion = 26
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 2
numChildren = 0

Inspect /brokers

[zk: hadoop:2181(CONNECTED) 5] get /brokers
null
cZxid = 0x4
ctime = Tue Jul 31 14:17:50 CST 2018
mZxid = 0x4
mtime = Tue Jul 31 14:17:50 CST 2018
pZxid = 0xd
cversion = 3
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 3 # the 3 children are ids, topics and seqid

[zk: hadoop:2181(CONNECTED) 6] ls /brokers
[ids, topics, seqid]

Inspect /brokers/ids

[zk: hadoop:2181(CONNECTED) 7] ls /brokers/ids 
# lists all active broker ids in the cluster
# kill broker 0 and this becomes [1, 2]
[0, 1, 2]

[zk: hadoop:2181(CONNECTED) 8] get /brokers/ids/0
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://hadoop:9092"],
"jmx_port":-1,"host":"hadoop","timestamp":"1533399158574","port":9092,"version":4}
cZxid = 0x55f
ctime = Sun Aug 05 00:12:38 CST 2018
mZxid = 0x55f
mtime = Sun Aug 05 00:12:38 CST 2018
pZxid = 0x55f
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x10000711d710005
dataLength = 182
numChildren = 0

Inspect /brokers/topics

[zk: hadoop:2181(CONNECTED) 9] ls /brokers/topics 
[mytest, test02,  __consumer_offsets]

[zk: hadoop:2181(CONNECTED) 10] ls /brokers/topics/mytest
[partitions]
[zk: hadoop:2181(CONNECTED) 11] ls /brokers/topics/mytest/partitions # shows the partitions: mytest has 2, numbered 0 and 1
[0, 1]
[zk: hadoop:2181(CONNECTED) 12] ls /brokers/topics/mytest/partitions/0
[state]
[zk: hadoop:2181(CONNECTED) 13] ls /brokers/topics/mytest/partitions/0/state
[]

# the following shows that each partition can have a different leader
[zk: hadoop:2181(CONNECTED) 14] get /brokers/topics/mytest/partitions/0/state
{"controller_epoch":28,"leader":0,"version":1,"leader_epoch":0,"isr":[0,2,1]}
cZxid = 0x5a2
ctime = Sun Aug 05 00:31:16 CST 2018
mZxid = 0x5a2
mtime = Sun Aug 05 00:31:16 CST 2018
pZxid = 0x5a2
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 77
numChildren = 0
[zk: hadoop:2181(CONNECTED) 15] get /brokers/topics/mytest/partitions/1/state
{"controller_epoch":28,"leader":1,"version":1,"leader_epoch":0,"isr":[1,0,2]}
cZxid = 0x5a1
ctime = Sun Aug 05 00:31:16 CST 2018
mZxid = 0x5a1
mtime = Sun Aug 05 00:31:16 CST 2018
pZxid = 0x5a1
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 77
numChildren = 0

Inspect /brokers/seqid

[zk: hadoop:2181(CONNECTED) 16] ls /brokers/seqid 
[]
[zk: hadoop:2181(CONNECTED) 17] get /brokers/seqid
null
cZxid = 0xd
ctime = Tue Jul 31 14:17:50 CST 2018
mZxid = 0xd
mtime = Tue Jul 31 14:17:50 CST 2018
pZxid = 0xd
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 0

Inspect /admin/delete_topics

[zk: hadoop:2181(CONNECTED) 18] ls /admin
[delete_topics]
[zk: hadoop:2181(CONNECTED) 19] ls /admin/delete_topics
[]

Inspect /isr_change_notification

[zk: hadoop:2181(CONNECTED) 20] ls /isr_change_notification 
[]
[zk: hadoop:2181(CONNECTED) 21] get /isr_change_notification
null
cZxid = 0xe
ctime = Tue Jul 31 14:17:50 CST 2018
mZxid = 0xe
mtime = Tue Jul 31 14:17:50 CST 2018
pZxid = 0x544
cversion = 56
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 0

Inspect /consumers

# /consumers stays empty because only the old ZooKeeper-based consumer registers here;
# kafka-console-consumer.sh with --bootstrap-server uses the new consumer, which commits
# its offsets to the internal __consumer_offsets topic instead of ZooKeeper.
[zk: hadoop:2181(CONNECTED) 22] ls /consumers
[]
[zk: hadoop:2181(CONNECTED) 23] get /consumers
null
cZxid = 0x2
ctime = Tue Jul 31 14:17:50 CST 2018
mZxid = 0x2
mtime = Tue Jul 31 14:17:50 CST 2018
pZxid = 0x2
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 0

Inspect /config

[zk: hadoop:2181(CONNECTED) 24] ls /config
[changes, clients, brokers, topics, users]
[zk: hadoop:2181(CONNECTED) 25] ls /config/changes
[]
[zk: hadoop:2181(CONNECTED) 26] ls /config/clients
[]
[zk: hadoop:2181(CONNECTED) 27] ls /config/brokers
[]
[zk: hadoop:2181(CONNECTED) 28] ls /config/topics # same result as ls /brokers/topics
[mytest, test02,  __consumer_offsets]
[zk: hadoop:2181(CONNECTED) 29] ls /config/users  
[]

Kill the cluster's controller (leader) and watch the corresponding znodes change.

#1. kill the controller (leader)
[root@hadoop ~]# ps -ef|grep server0.properties # find server0's PID: 4791
root       4791   4422  0 Aug04 pts/1   00:00:18 ...output trimmed.../server0.properties
root       6327   6119  0 00:03 pts/5    00:00:00 grep --color=auto server0.properties
[root@hadoop ~]# kill -9 4791 # kill the process
[root@hadoop ~]# ps -ef|grep server0.properties # check again
root       6353   6119  0 00:07 pts/5    00:00:00 grep --color=auto server0.properties

#2. check /controller: brokerid changed from 0 to 1
[zk: hadoop:2181(CONNECTED) 30] get /controller
{"version":1,"brokerid":1,"timestamp":"1533398833360"}
cZxid = 0x54c
ctime = Sun Aug 05 00:07:13 CST 2018
mZxid = 0x54c
mtime = Sun Aug 05 00:07:13 CST 2018
pZxid = 0x54c
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x10000711d710002
dataLength = 54
numChildren = 0

#3. check /brokers/ids: only brokers 1 and 2 remain active
[zk: hadoop:2181(CONNECTED) 31] ls /brokers/ids
[1, 2]

#4. restart broker 0
[root@hadoop kafka]# kafka-server-start.sh config/server0.properties &

Delete the topic test02 and watch the corresponding znodes change.

#1. delete the topic
[root@hadoop ~]# cd /usr/local/kafka
[root@hadoop kafka]# kafka-topics.sh --zookeeper localhost:2181 --delete --topic test02
Topic test02 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

#2. describe the topic: no output, because the deletion actually completed (delete.topic.enable defaults to true in recent Kafka versions, so the topic was not merely marked for deletion)
[root@hadoop kafka]# kafka-topics.sh --describe --zookeeper localhost:2181 --topic test02

#3. check whether the test02-0 directory still exists under the log dir: it is gone
[root@hadoop kafka]# ll /tmp/kafka-logs0 # no test02 directory anymore
total 20
-rw-r--r-- 1 root root    4 Aug  5 01:34 cleaner-offset-checkpoint
drwxr-xr-x 2 root root  141 Aug  3 11:02 __consumer_offsets-0
...
drwxr-xr-x 2 root root  141 Aug  3 11:02 __consumer_offsets-49
-rw-r--r-- 1 root root    4 Aug  5 01:34 log-start-offset-checkpoint
-rw-r--r-- 1 root root   54 Aug  3 11:02 meta.properties
drwxr-xr-x 2 root root  141 Aug  5 00:31 mytest-0
drwxr-xr-x 2 root root  141 Aug  5 00:31 mytest-1
-rw-r--r-- 1 root root 1230 Aug  5 01:34 recovery-point-offset-checkpoint
-rw-r--r-- 1 root root 1230 Aug  5 01:35 replication-offset-checkpoint

#4. check /admin/delete_topics: empty. A topic only appears here while its deletion is in
# progress and the znode is removed once it completes, so an empty list means the delete finished.
[zk: hadoop:2181(CONNECTED) 32] ls /admin/delete_topics  
[]

#5. check /brokers/topics: test02 is indeed gone
[zk: hadoop:2181(CONNECTED) 33] ls /brokers/topics
[mytest, __consumer_offsets]

#6. check /config/topics:
[zk: hadoop:2181(CONNECTED) 34] ls /config/topics
[mytest, __consumer_offsets]

# The above are my actual results; they differ somewhat from what the instructor demonstrated, and I can't yet explain why.

 
