# Kafka Connect Monitoring SQL Server: A Beginner's Attempt

2020/10/27 10:23

I previously used Canal to listen to MySQL's binlog and deliver the messages to a Kafka topic, but Canal only works with MySQL; if the database is SQL Server, Oracle, or MongoDB it can't help at all. From what I found online, the mainstream approach is to use Kafka Connect to capture changes from SQL Server, so here is a write-up of my attempt.

The blog posts I read at first were rather messy: they introduced a lot of components, but it wasn't clear how those pieces related to each other, who talks to whom, or what each one does, and it was completely different from setting up Canal. To put it simply, the setup involves Kafka Connect, Confluent, and Kafka. Kafka Connect is a feature that ships with Kafka for creating and managing data-flow pipelines; it is a simple framework for exchanging data with other systems.

Confluent is a company that builds products around Kafka; it provides both a data-streaming platform and tooling on top of it, with Kafka bundled inside. Here we only use it to download the connector component that links Kafka to SQL Server.

My Kafka was installed with CDH (Cloudera Manager), so the bin directory, config directory, and logs are scattered around and there is no $KAFKA_HOME. Although this is only a functional test, I downloaded Confluent anyway with future connector downloads in mind. I recommend downloading from the official site; even without a proxy the speed was acceptable. Confluent download page: https://www.confluent.io/download/ — choose "Download Confluent Platform" and fill in an email address and intended use.

In the directory where you want to download and unpack it:

```bash
wget http://packages.confluent.io/archive/5.2/confluent-5.2.3-2.11.zip
unzip confluent-5.2.3-2.11.zip
```

After extraction you should see several folders (usr is one I created myself, for storing user configuration files and statements). Add CONFLUENT_HOME to the environment:

```bash
vi /etc/profile
export CONFLUENT_HOME=/usr/software/confluent-5.2.3
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME:$CONFLUENT_HOME/bin
```

These paths are mine; change them to match your own layout.

Next, download the connector component. Each connector's configuration can differ, so read the official documentation carefully. I chose debezium-connector-sqlserver. Go into the bin directory first; there you will find the confluent-hub command, which is what we use to download components:

```
[root@centos04 bin]# confluent-hub install debezium/debezium-connector-sqlserver:latest
The component can be installed in any of the following Confluent Platform installations:
  1. /usr/software/confluent-5.2.3 (based on $CONFLUENT_HOME)
  2. /usr/software/confluent-5.2.3 (where this tool is installed)
Choose one of these to continue the installation (1-2): 2
Do you want to install this into /usr/software/confluent-5.2.3/share/confluent-hub-components? (yN) y

Apache 2.0
I agree to the software license agreement (yN) y
```
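If you want to double-check that the plugin landed where the installer said, listing the components directory is enough (the path is the one shown in the prompt above):

```bash
# The Debezium SQL Server connector should appear as its own subdirectory here
ls $CONFLUENT_HOME/share/confluent-hub-components/
```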


Documentation for the Debezium SQL Server connector: https://docs.confluent.io/current/connect/debezium-connect-sqlserver/index.html#sqlserver-source-connector

Next, configure the Kafka Connect distributed worker. On my CDH install the file is /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/etc/kafka/conf.dist/connect-distributed.properties (the Apache license header is omitted here):

```properties

# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended
# to be used with the examples, and some settings may differ from those used in a production system, especially
# the bootstrap.servers and those specifying replication factors.

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
# Kafka cluster addresses; this must be set to your brokers
bootstrap.servers=centos04:9092,centos05:9092,centos06:9092

# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
# group.id defaults to connect-cluster; just keep it consistent across all workers
group.id=connect-cluster

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true
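# Note (added for clarity): with JsonConverter and schemas.enable=true, every record written to
# Kafka is wrapped in an envelope of the form {"schema": {...}, "payload": {...}}. If you only
# want the bare change payload, set the two schemas.enable properties above to false.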

# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
offset.storage.topic=connect-offsets
offset.storage.replication.factor=3
offset.storage.partitions=1

# Topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated,
# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
config.storage.topic=connect-configs
config.storage.replication.factor=3

# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
status.storage.topic=connect-status
status.storage.replication.factor=3
#status.storage.partitions=1

offset.storage.file.filename=/var/log/confluent/offset-storage-file

# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

# These are provided to inform the user about the presence of the REST host and port configs
# Hostname & Port for the REST API to listen on. If this is set, it will bind to the interface used to listen to requests.
#rest.host.name=
# Port for the Kafka Connect REST API; can be changed
rest.port=8083

# The Hostname & Port that will be given out to other workers to connect to i.e. URLs that are routable from other servers.

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
# Replace the relative path below with an absolute path if you are planning to start Kafka Connect from within a
# directory other than the home directory of Confluent Platform.
# Plugin locations; add the directory where confluent-hub installed the components
plugin.path=/usr/software/confluent-5.2.3/share/java/confluent-hub-client,/usr/software/confluent-5.2.3/share/confluent-hub-client,/usr/software/confluent-5.2.3/share/confluent-hub-components
```

As the comments above note, Kafka Connect can create these internal topics itself, but you can also create them manually before starting the worker:

```bash
kafka-topics --create --zookeeper 192.168.49.104:2181 --topic connect-offsets --replication-factor 3 --partitions 1
kafka-topics --create --zookeeper 192.168.49.104:2181 --topic connect-configs --replication-factor 3 --partitions 1
kafka-topics --create --zookeeper 192.168.49.104:2181 --topic connect-status --replication-factor 3 --partitions 1
```
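A quick sanity check that the internal topics exist with the expected replication factor and partition counts (same ZooKeeper address as above):

```bash
# Describe each Connect internal topic and check partitions/replication
kafka-topics --describe --zookeeper 192.168.49.104:2181 --topic connect-offsets
kafka-topics --describe --zookeeper 192.168.49.104:2181 --topic connect-configs
kafka-topics --describe --zookeeper 192.168.49.104:2181 --topic connect-status
```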

Start the distributed worker with the CDH properties file:

```bash
sh connect-distributed.sh /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/etc/kafka/conf.dist/connect-distributed.properties
```

On CDH the script could not find its log4j configuration, so I edited connect-distributed.sh and pointed KAFKA_LOG4J_OPTS at the CDH config directory:

```bash
vi connect-distributed.sh

# the part I changed
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/etc/kafka/conf.dist/connect-log4j.properties"
fi
```

With that change it starts without problems. Run this way the worker stays in the foreground and stops when you exit, which is fine for debugging. To run it in the background, add the -daemon parameter:

```bash
sh connect-distributed.sh -daemon /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/etc/kafka/conf.dist/connect-distributed.properties
```

To capture changes with the Debezium SQL Server connector, CDC must be enabled in SQL Server: first enable capture on the database, then on each table; only then will table changes be picked up. (Note that the CDC capture and cleanup jobs run under SQL Server Agent, so the Agent service needs to be running.) I used Navicat to connect to the database; use whatever client suits you.

Enable CDC on the database:

```sql
USE database;
EXEC sys.sp_cdc_enable_db;
```

After this step the database gains a schema named cdc containing five tables.

Check which databases have CDC enabled:

```sql
SELECT * FROM sys.databases WHERE is_cdc_enabled = 1;
```

Enable CDC on a table:

```sql
USE database;
EXEC sys.sp_cdc_enable_table
    @source_schema = 'dbo',
    @source_name   = 'table_name',
    @role_name     = NULL;
```

Check which tables have CDC enabled:

```sql
USE database;
SELECT name, is_tracked_by_cdc FROM sys.tables WHERE is_tracked_by_cdc = 1;
```

That completes the CDC setup for tracking changes on the tables. Once the Kafka Connect worker is running, we can query it and submit configurations over its REST interface.

Check the worker info (8083 is the port configured above; the same URL can also be opened in a browser):

```
[root@centos04 huishui]# curl -s centos04:8083 | jq
{
  "version": "2.2.1-cdh6.3.0",
  "commit": "unknown",
  "kafka_cluster_id": "GwdoyDpbT5uP4k2CN6zbrw"
}
```

Check which connector plugins are installed:

```
[root@centos04 huishui]# curl -s centos04:8083/connector-plugins | jq
[
  {
    "class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "type": "sink",
    "version": "10.0.2"
  },
  {
    "class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "type": "sink",
    "version": "5.5.1"
  },
  {
    "class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "type": "source",
    "version": "5.5.1"
  },
  {
    "class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "type": "source",
    "version": "1.2.2.Final"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "type": "sink",
    "version": "2.2.1-cdh6.3.0"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "type": "source",
    "version": "2.2.1-cdh6.3.0"
  }
]
```

I have installed quite a few; as long as io.debezium.connector.sqlserver.SqlServerConnector appears in the list, everything is fine.

Check the currently running connectors/tasks:

```
[root@centos04 huishui]# curl -s centos04:8083/connectors | jq
[]
```

We haven't submitted any connector configuration yet, so there are no tasks and the response is an empty JSON array. At this point the Kafka Connect worker is up and ready to accept configurations. What follows is the part that is actually about our data: write a connector configuration as JSON and submit it over the REST interface.

I chose to keep the configuration files on disk. Since my Kafka files are scattered across directories, I store them under confluent/usr; strictly speaking you don't have to save them at all, but following the official docs I do. Once the connector is created, the Kafka topics are created automatically, named <database.server.name>.<schema>.<table> (for example server.name.dbo.tableName). Debezium does not watch just a single table; by default every CDC-enabled table gets its own topic.

```bash
cd $CONFLUENT_HOME
mkdir usr
cd usr
vi register-sqlserver.json
```

The contents of register-sqlserver.json (a database.password entry is normally also required when using SQL authentication; it is omitted here):
```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.server.name": "server.name",
    "database.hostname": "localhost",
    "database.port": "1433",
    "database.user": "sa",
    "database.dbname": "rcscounty_quannan",
    "database.history.kafka.bootstrap.servers": "centos04:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```
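Optionally, before creating the connector you can ask the worker to validate the configuration via the standard Connect REST endpoint PUT /connector-plugins/&lt;connector class&gt;/config/validate; the request body is just the "config" map from the file above, and error_count should come back as 0. A minimal sketch using the values above:

```bash
# Validate the connector config without creating it (host/port and config values as above)
curl -s -X PUT -H "Content-Type: application/json" \
  http://centos04:8083/connector-plugins/io.debezium.connector.sqlserver.SqlServerConnector/config/validate \
  -d '{"connector.class":"io.debezium.connector.sqlserver.SqlServerConnector","database.server.name":"server.name","database.hostname":"localhost","database.port":"1433","database.user":"sa","database.dbname":"rcscounty_quannan","database.history.kafka.bootstrap.servers":"centos04:9092","database.history.kafka.topic":"schema-changes.inventory"}' | jq '.error_count'
```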

Then submit the file to create the connector:

```bash
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://centos04:8083/connectors/ -d @register-sqlserver.json
```

Query the connectors again and the new one shows up:

```
[root@centos04 huishui]# curl -s centos04:8083/connectors | jq
[
  "inventory-connector"
]
```
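To see the configuration Connect actually stored for it, the standard REST endpoint below can be used (connector name as registered above):

```bash
# Fetch the stored connector config
curl -s centos04:8083/connectors/inventory-connector/config | jq
```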

Listing the topics should now also show the ones Debezium created, including the database history topic configured above and one topic per captured table:

```bash
kafka-topics --list --zookeeper centos04:2181
```

Check the connector's status; the connector and its task should both be RUNNING:

```
[root@centos04 usr]# curl -s centos04:8083/connectors/inventory-connector/status | jq
{
  "name": "inventory-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "192.168.49.104:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": "192.168.49.104:8083"
    }
  ],
  "type": "source"
}
```
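If the connector or one of its tasks ever shows FAILED, the standard Connect REST endpoints can restart or remove it (name and task id as above):

```bash
# Restart the connector or a single task
curl -s -X POST centos04:8083/connectors/inventory-connector/restart
curl -s -X POST centos04:8083/connectors/inventory-connector/tasks/0/restart
# Delete the connector entirely so it can be re-created
curl -s -X DELETE centos04:8083/connectors/inventory-connector
```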

Consume one of the table topics to see the change events (Debezium names them <database.server.name>.<schema>.<table>):

```bash
kafka-console-consumer --bootstrap-server centos04:9092 --topic server.name.dbo.tableName
```
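A quick way to peek at a single event and its interesting fields (topic name as above; jq assumed to be installed). With JsonConverter and schemas.enable=true each record is a schema/payload envelope, and the Debezium payload carries before, after, source, op and ts_ms:

```bash
# Read one message from the beginning and show the operation type and the row state after the change
kafka-console-consumer --bootstrap-server centos04:9092 \
  --topic server.name.dbo.tableName --from-beginning --max-messages 1 | jq '.payload.op, .payload.after'
```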

The JSON Debezium emits differs between inserts, updates, deletes, and schema changes (for data changes the payload's op field is "c" for inserts, "u" for updates, "d" for deletes, and "r" for snapshot reads); see the Debezium connector for SQL Server documentation for the details.

