I have a single-node Kafka setup that works fine. I then added another broker and created a two-node Kafka cluster. I did not install a standalone ZooKeeper; I used the ZooKeeper bundled with the Kafka package. To build the cluster I made the following changes.
Changes to zookeeper.properties on both nodes:
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=1000
server.1=10.20.40.120:2888:3888
server.2=10.20.40.119:2888:3888
initLimit=10
syncLimit=5
tickTime=2000
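For reference, ZooKeeper interprets initLimit and syncLimit in units of tickTime; a small sketch of what the values above work out to:

```python
# Timing values copied from the zookeeper.properties shown above.
tick_time_ms = 2000
init_limit_ticks = 10  # time a follower gets to connect and sync at startup
sync_limit_ticks = 5   # maximum allowed follower lag during operation

print(init_limit_ticks * tick_time_ms)  # 20000 ms for initial sync
print(sync_limit_ticks * tick_time_ms)  # 10000 ms max lag
```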
Copied server.properties to broker1.properties on the first node and to broker2.properties on the second node.
Contents of broker1.properties:
broker.id=1
listeners=PLAINTEXT://10.20.40.120:9092
advertised.listeners=PLAINTEXT://10.20.40.120:9092
log.dirs=/tmp/kafka-logs
num.partitions=1
log.retention.hours=168
zookeeper.connect=10.20.40.120:2181,10.20.40.119:2181
zookeeper.connection.timeout.ms=6000
replica.fetch.max.bytes=4000012
message.max.bytes=2690123
max.message.bytes=4000012
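One consistency point worth checking in the size settings above (a minimal check using the values from broker1.properties): a follower can only replicate a message it is allowed to fetch, so replica.fetch.max.bytes should be at least as large as message.max.bytes.

```python
# Values copied from broker1.properties above.
message_max_bytes = 2690123        # largest message the broker accepts
replica_fetch_max_bytes = 4000012  # largest fetch a follower replica issues

# If this were violated, replication could stall on oversized messages.
assert replica_fetch_max_bytes >= message_max_bytes
print("size settings are consistent")
```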
Contents of broker2.properties on the second node:
broker.id=2
listeners=PLAINTEXT://10.20.40.119:9092
advertised.listeners=PLAINTEXT://10.20.40.119:9092
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
zookeeper.connect=10.20.40.120:2181,10.20.40.119:2181
zookeeper.connection.timeout.ms=6000
Created a myid file containing 1 on broker1, and a myid file containing 2 on broker2.
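Each myid file belongs in the ZooKeeper dataDir (/tmp/zookeeper here) and must match the server.N entries in zookeeper.properties. A sketch of that step:

```shell
# On broker1 (10.20.40.120, i.e. server.1):
mkdir -p /tmp/zookeeper
echo 1 > /tmp/zookeeper/myid

# On broker2 (10.20.40.119, i.e. server.2):
mkdir -p /tmp/zookeeper
echo 2 > /tmp/zookeeper/myid
```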
Started ZooKeeper with:
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
Started Kafka on broker1 and broker2 respectively with:
nohup bin/kafka-server-start.sh config/broker1.properties &
nohup bin/kafka-server-start.sh config/broker2.properties &
Now, when I try to describe __consumer_offsets, I see the following:
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:1 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
Topic: __consumer_offsets Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 2 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 3 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 4 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 5 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 6 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 7 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 8 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 9 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 10 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 11 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 12 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 13 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 14 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 15 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 16 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 17 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 18 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 19 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 20 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 21 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 22 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 23 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 24 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 25 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 26 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 27 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 28 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 29 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 30 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 31 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 32 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 33 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 34 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 35 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 36 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 37 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 38 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 39 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 40 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 41 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 42 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 43 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 44 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 45 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 46 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 47 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 48 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 49 Leader: 0 Replicas: 0 Isr: 0
Shouldn't the replication factor be 2, with the Leader alternating between 1 and 2 across partitions, and Replicas and Isr showing 1,2?
When I try to start a consumer, I get COORDINATOR NOT AVAILABLE with error_code=15. Because of this error, I suspect there is some problem with my __consumer_offsets topic and the cluster.
What is the missing link, and how do I correct it?
1 Answer
Your describe output shows Leader: 0 and Replicas: 0, i.e. the topic was created back when your original single broker (with the default broker.id=0) was the only one running, so it got a replication factor of 1. You should set offsets.topic.replication.factor to 2 in the broker properties files. Also note that you must ensure at least two brokers are running before Kafka creates this topic internally, because the replication factor is fixed at creation time.
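If __consumer_offsets already exists with ReplicationFactor 1, changing broker settings alone will not retroactively replicate it; the existing partitions have to be reassigned. A sketch using the stock kafka-reassign-partitions.sh tool, assuming brokers 1 and 2 as in the question (the JSON is built locally; the final command requires the live cluster, so it is shown commented out):

```shell
# Build a reassignment file that places all 50 __consumer_offsets
# partitions on both brokers, alternating the preferred leader.
out=/tmp/increase-offsets-rf.json
{
  printf '{"version":1,"partitions":[\n'
  for p in $(seq 0 49); do
    if [ $((p % 2)) -eq 0 ]; then replicas='[1,2]'; else replicas='[2,1]'; fi
    sep=','
    [ "$p" -eq 49 ] && sep=''
    printf '{"topic":"__consumer_offsets","partition":%s,"replicas":%s}%s\n' "$p" "$replicas" "$sep"
  done
  printf ']}\n'
} > "$out"

# Apply it against the running cluster (not run here):
# bin/kafka-reassign-partitions.sh --zookeeper 10.20.40.120:2181 \
#   --reassignment-json-file "$out" --execute
```

After the reassignment completes, describing the topic should show Replicas and Isr as 1,2 for every partition.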