
Kafka consumer running on Mesos: "Failed to add leader for partitions" error


I am running a six-broker Kafka cluster with the mesos/kafka library. I can add and start the brokers on six different machines and publish messages to the cluster with both the Python SimpleProducer and the kafka-console-producer.sh script.
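
For reference, the Python publishing test looked roughly like the sketch below. It uses the old kafka-python SimpleProducer API that was current for Kafka 0.8.x; the broker address is a placeholder for one of the six Mesos-hosted brokers.

    # Minimal sketch using kafka-python's (now deprecated) SimpleProducer API.
    from kafka import KafkaClient, SimpleProducer

    client = KafkaClient('192.168.1.200:9092')      # hypothetical broker host:port
    producer = SimpleProducer(client)
    producer.send_messages(b'test', b"{'some':2}")  # same payload as in the question
    client.close()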

However, I cannot get a consumer to work. I am running the following consumer command:

bin/kafka-console-consumer.sh --zookeeper 192.168.1.199:2181 --topic test --from-beginning --consumer.config config/consumer.properties --delete-consumer-offsets

In the consumer.properties file I set group.id to my.group and zookeeper.connect to several nodes in the ZooKeeper ensemble (sketched below). Running this consumer produces the following WARN messages, shown after the sketch:
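
The relevant consumer.properties entries, roughly; the addresses other than 192.168.1.199 are placeholders for the other ensemble nodes:

    # config/consumer.properties (old high-level consumer)
    group.id=my.group
    zookeeper.connect=192.168.1.199:2181,192.168.1.200:2181,192.168.1.201:2181
    zookeeper.connection.timeout.ms=6000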

    [2015-09-24 16:01:06,609] WARN [my.group_my_host-1443106865779-b5a3a1e1-leader-finder-thread], Failed to add leader for partitions [test,4],[test,1],[test,5],[test,2],[test,0],[test,3]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
    java.nio.channels.ClosedChannelException
            at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
            at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
            at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
            at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
            at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:166)
            at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:60)
            at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:177)
            at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:172)
            at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
            at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
            at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
            at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:172)
            at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:87)
            at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:77)
            at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
            at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
            at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
            at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
            at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:77)
            at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
            at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
    {'some':2}
    [2015-09-24 16:20:02,362] WARN [my.group_my_host-1443108001180-fa0c93e4-leader-finder-thread], Failed to add leader for partitions [test,4],[test,1],[test,5],[test,2],[test,0],[test,3]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
    java.nio.channels.ClosedChannelException
            at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
            at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
            at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
            at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
            at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:166)
            at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:60)
            at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:177)
            at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:172)
            at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
            at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
            at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
            at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:172)
            at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:87)
            at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:77)
            at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
            at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224)
            at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403)
            at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
            at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:77)
            at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
            at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
    ...
    // Lots more of this
    ...
    Consumed 1 messages

I am not sure why it fails to add the leaders; the leaders appear to be in ZooKeeper already. Amid all of these error messages, only one message makes it through to the consumer. The string {'some':2} is a message I sent from the console producer.

I found this error in server.log on one of the Mesos slaves; I am not sure whether it is related:

[2015-09-24 17:09:41,926] ERROR Closing socket for /192.168.1.199 because of error (kafka.network.Processor)
java.io.IOException: Broken pipe
            at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
            at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
            at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
            at sun.nio.ch.IOUtil.write(IOUtil.java:65)
            at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
            at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:123)
            at kafka.network.MultiSend.writeTo(Transmission.scala:101)
            at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
            at kafka.network.Processor.write(SocketServer.scala:472)
            at kafka.network.Processor.run(SocketServer.scala:342)
            at java.lang.Thread.run(Thread.java:745)

Any suggestions as to what might be happening with the consumer or where I might look to troubleshoot the problem?

The broker partition state in ZooKeeper for one of the topic's partitions:

[zk: localhost:2181(CONNECTED) 166] get /brokers/topics/test/partitions/0/state
{"controller_epoch":1,"leader":0,"version":1,"leader_epoch":0,"isr":[0]}

OS: Ubuntu 14.04, Mesos: 0.23, Kafka: 2.10-0.8.2.1

Update: After some further testing with kafka-console-consumer.sh, the messages do seem to be getting through. The error messages are constant, so it is hard to see all of the messages in stdout. The Python KafkaConsumer fails immediately with a FailedPayloadsError.
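
The failing Python consumer was invoked roughly as follows. This is only a sketch against the kafka-python API of that era; the broker address is a placeholder, and the iteration is where the FailedPayloadsError surfaced.

    # Sketch of the failing kafka-python consumer; broker address is a placeholder.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer('test',
                             group_id='my.group',
                             bootstrap_servers=['192.168.1.200:9092'])
    for message in consumer:   # raises FailedPayloadsError almost immediately
        print(message)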

3 Answers

  • 0

    I think you need to look at the value of the advertised.host.name property. I ran into this problem recently as well and fixed it with that property.
    Make sure you have the correct IP address set for each broker.
    Let me know if that does not work.
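
    A sketch of the relevant server.properties entries on each broker (the address is a placeholder; adjust for however your Mesos deployment supplies broker configuration):

        # Per-broker server.properties entries (placeholder address):
        # advertise the address that clients can actually reach this broker on
        advertised.host.name=192.168.1.201
        advertised.port=9092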

  • 5

    Try running the following command:

    bin/kafka-topics.sh --zookeeper your.zookeeper:2181 --describe --topic your_topic
    

    This will show you which broker is the leader for each partition of your topic (see this link for details: http://kafka.apache.org/documentation.html#quickstart_multibroker).
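
    For a healthy topic the output looks roughly like this (illustrative values only; a Leader of -1 typically means no leader is currently assigned for that partition):

        Topic:test  PartitionCount:6  ReplicationFactor:1  Configs:
            Topic: test  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
            Topic: test  Partition: 1  Leader: 1  Replicas: 1  Isr: 1
            ...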

    In my case, one of the brokers that had been set as a leader had failed and no longer existed. A new leader should have been assigned, but for some reason was not.

    I fixed it like this:

    • Stopped all producers and consumers

    • Restarted each of the remaining brokers

    Then I reran the describe command (from above) and could see that the failed broker was no longer listed as a leader.

    Then I brought up a new broker with the same ID as the failed one. Kafka took over from there and replicated all of the data from my other brokers (this requires your topic to have a sufficient replication factor). Once the data had been copied over, Kafka made that broker a partition leader.

    Finally, I restarted the producers and consumers.

  • 1

    My problem was this sequence:

    • Run ZooKeeper

    • Create the topic

    • Run Kafka

    That gave me a "no leader found" exception.

    But when I created the topic while ZooKeeper and Kafka were both up and running, it worked fine; a sketch of that ordering follows.
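
    In other words, using the standard Kafka 0.8.x scripts (adjust paths and the ZooKeeper address to your setup):

        # Start ZooKeeper and the broker(s) first
        bin/zookeeper-server-start.sh config/zookeeper.properties
        bin/kafka-server-start.sh config/server.properties

        # Only create the topic once the brokers are up, so a live leader can be elected
        bin/kafka-topics.sh --create --zookeeper localhost:2181 \
            --replication-factor 1 --partitions 1 --topic test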
