
Kafka server.properties


0.8 version

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# broker.id uniquely identifies each broker instance within the Kafka cluster, so every broker must be assigned a distinct value.
broker.id=0

# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true
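With delete.topic.enable=true a delete request actually removes the topic; with the default of false the topic is only marked for deletion and never goes away. A minimal sketch of such a request, assuming a placeholder topic named test and the ZooKeeper chroot configured at the bottom of this file:

  # run from the Kafka installation directory; "test" is a placeholder topic name
  $ bin/kafka-topics.sh --zookeeper breath:2181/kafka0822 --delete --topic test
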
############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
# host.name specifies the network interface this broker binds to.
#host.name=localhost
#host.name=breath

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured.  Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>
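Clients always connect to the advertised host and port, so whatever values end up here must be resolvable and routable from the client machines, not just locally on the broker. As a rough check, assuming the hostname breath (hinted at by the commented-out host.name above) resolves on a remote client host, the console producer should be able to reach the broker:

  # run on a client machine, not on the broker; "test" is a placeholder topic
  $ bin/kafka-console-producer.sh --broker-list breath:9092 --topic test
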

# The number of threads handling network requests
num.network.threads=3
 
# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Kafka stores its data as log files on disk; log.dirs is the data directory and accepts a comma-separated list (e.g. /data1/kafka,/data2/kafka). Plan these paths carefully for production deployments.
log.dirs=/opt/beh/data/kafka0822

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk. 
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
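Both flush settings can also be overridden per topic, which is usually preferable to tightening them globally. A sketch using the 0.8.x topic tool, assuming an already existing topic named test:

  # flush this topic after every 10000 messages, leaving the broker-wide defaults alone
  $ bin/kafka-topics.sh --zookeeper breath:2181/kafka0822 --alter --topic test --config flush.messages=10000
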

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according 
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false
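Note that log compaction only runs when log.cleaner.enable=true, so with the value above a topic marked with cleanup.policy=compact would never actually be compacted. If the flag is switched to true, a compacted topic could be created roughly like this (the topic name is a placeholder):

  $ bin/kafka-topics.sh --zookeeper breath:2181/kafka0822 --create --topic compacted-example \
      --partitions 1 --replication-factor 1 --config cleanup.policy=compact
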

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# Avoid using the ZooKeeper root path as the zookeeper.connect value; prefer a dedicated chroot such as the one below.
zookeeper.connect=breath:2181/kafka0822

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
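Because zookeeper.connect includes the /kafka0822 chroot, every command-line tool that talks to ZooKeeper must include the same chroot, otherwise it looks at an empty root and finds no brokers or topics. For example, creating and then listing topics on this broker (the topic name is a placeholder):

  $ bin/kafka-topics.sh --zookeeper breath:2181/kafka0822 --create --topic test --partitions 1 --replication-factor 1
  $ bin/kafka-topics.sh --zookeeper breath:2181/kafka0822 --list
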

0.10 version

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://breath:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
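As with host.name/advertised.host.name in 0.8, clients connect to whatever address the broker advertises, which here falls back to the listeners value, so breath must resolve from the client side. A quick end-to-end check with the 0.10 console consumer, assuming a placeholder topic named test:

  # run on a client machine; reads the topic from the beginning
  $ bin/kafka-console-consumer.sh --bootstrap-server breath:9092 --topic test --from-beginning
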

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/beh/data/kafka01021

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
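Retention can also be tuned per topic rather than globally. A sketch with the 0.10 kafka-configs tool, assuming an existing topic named test and one day of time-based retention (86400000 ms):

  $ bin/kafka-configs.sh --zookeeper breath:2181/kafka01021 --alter --entity-type topics \
      --entity-name test --add-config retention.ms=86400000
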

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=breath:2181/kafka01021

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
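To bring up a broker with this file, point kafka-server-start.sh at it; the -daemon flag runs it in the background. The path below assumes the file is saved as config/server.properties inside the Kafka installation directory:

  $ bin/kafka-server-start.sh -daemon config/server.properties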
