
Kafka consumer "failed to find leader" when fetching topic metadata


When I attempt to use the Kafka producer and consumer scripts (0.9.0) to push/pull messages to/from a topic, I get the errors below.

Producer error

[2016-01-13 02:49:40,078] ERROR Error when sending message to topic test with key: null, value: 11 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

Consumer error

[2016-01-13 02:47:18,620] WARN [console-consumer-90116_f89a0b380f19-1452653212738-9f857257-leader-finder-thread], Failed to find leader for Set([test,0]) (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(BrokerEndPoint(0,192.168.99.100,9092))] failed
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
    at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
    at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:77)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:119)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
    ... 3 more

Why am I getting these errors, and how can I fix them?

Setup

All components run in Docker containers on a Mac. ZooKeeper and Kafka each run in their own Docker container.

Docker Machine (boot2docker) IP address: 192.168.99.100
ZooKeeper port: 2181
Kafka port: 9092
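For context, a minimal sketch of how two such containers might be started with the ports published; the image names here are assumptions, not the exact images used in this setup:

# Hypothetical startup commands; only the port mappings matter for this question.
docker run --name zookeeper -p 2181:2181 -d zookeeper
docker run --name kafka -p 9092:9092 -d kafka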

The Kafka configuration file server.properties is set up as follows:

host.name=localhost
broker.id=0
port=9092
advertised.host.name=192.168.99.100
advertised.port=9092
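The advertised.host.name and advertised.port values are what the broker hands back to clients in metadata responses, so they must be reachable from wherever the client runs. One way to see exactly what the broker advertises is a metadata dump; a sketch, assuming kafkacat is available on the Mac host (it is not part of the original setup):

# -L lists cluster metadata (brokers, topics, partition leaders) as advertised by the broker.
kafkacat -L -b 192.168.99.100:9092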

Commands

I run the following commands from inside the Kafka server's Docker container. I have already created a topic with a single partition and a replication factor of 1.
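For reference, a sketch of how such a topic would be created with the 0.9.0 tools (the exact command used originally is not shown in the question):

./bin/kafka-topics.sh --create --zookeeper 192.168.99.100:2181 \
    --partitions 1 --replication-factor 1 --topic test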

Note that the leader designation of 0 may be part of the problem.

root@f89a0b380f19:/opt/kafka/dist# ./bin/kafka-topics.sh --zookeeper 192.168.99.100:2181 --topic test --describe
Topic:test  PartitionCount:1    ReplicationFactor:1 Configs:
    Topic: test Partition: 0    Leader: 0   Replicas: 0 Isr: 0

Then I do the following to send some messages:

root@f89a0b380f19:/opt/kafka/dist# ./bin/kafka-console-producer.sh --broker-list 192.168.99.100:9092 --topic test
one message
two message
three message
four message
[2016-01-13 02:49:40,078] ERROR Error when sending message to topic test with key: null, value: 11 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-01-13 02:50:40,080] ERROR Error when sending message to topic test with key: null, value: 11 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-01-13 02:51:40,081] ERROR Error when sending message to topic test with key: null, value: 13 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
[2016-01-13 02:52:40,083] ERROR Error when sending message to topic test with key: null, value: 12 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

Here is the command I used to try to consume the messages, which produced the consumer error posted above.

root@f89a0b380f19:/opt/kafka/dist# ./bin/kafka-console-consumer.sh --zookeeper 192.168.99.100:2181 --topic test --from-beginning

I have confirmed that ports 2181 and 9092 are open and reachable from the Kafka Docker container:

root@f89a0b380f19:/# nc -z 192.168.99.100 2181; echo $?;
0
root@f89a0b380f19:/# nc -z 192.168.99.100 9092; echo $?;
0
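The same probe can also be run from the Mac side; a hedged sketch, assuming the docker-machine VM is named default (the conventional name, not stated in the question):

# From the Mac host: resolve the VM's IP via docker-machine, then probe the Kafka port.
nc -z $(docker-machine ip default) 9092; echo $?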

1 Answer


The solution was not at all what I expected. The error messages do not match what was actually going on.

The main problem was mounting Kafka's log directory in Docker onto my local filesystem. My docker run command used a volume mount to map the container's Kafka log.dir folder to a local directory on the host VM, which was in turn mounted from my Mac. That last part was the problem.

For example:

docker run --name kafka -v /Users/<me>/kafka/logs:/var/opt/kafka:rw -p 9092:9092 -d kafka
    

Because I was using docker-machine (i.e. boot2docker) on my Mac, I had to mount through my /Users/ path, which boot2docker auto-mounts into the host VM. Since the underlying VM itself uses a bind mount for that path, Kafka's I/O engine wasn't able to communicate with it correctly. Mounting the volume to a directory that lives directly on the host Linux VM (i.e. the boot2docker machine) works fine.

I can't explain the exact details because I don't know the ins and outs of Kafka's I/O, but once I removed the volume mount to my Mac filesystem, it worked.
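To make the fix concrete, here is a minimal sketch of the two working alternatives described above; the /mnt/sda1 path is boot2docker's usual persistent-disk location and is an assumption, not something given in the original answer:

# Option 1: drop the host mount entirely and let Kafka write its logs inside the container.
docker run --name kafka -p 9092:9092 -d kafka

# Option 2 (assumption: /mnt/sda1 is the boot2docker VM's persistent disk):
# mount a directory that lives on the VM itself, avoiding the /Users bind mount from the Mac.
docker run --name kafka -v /mnt/sda1/kafka-logs:/var/opt/kafka:rw -p 9092:9092 -d kafka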
