
Druid Kafka ingestion (Imply 2.2.3): Kafka error NoReplicaOnlineException


I am using the Druid Kafka Indexing Service to load my own stream from Kafka.

I followed the Load from Kafka tutorial to set it up.

Kafka is running entirely with default settings (just extracted from the tgz).

When I start imply-2.2.3 (Druid) with empty data (after deleting the var folder), everything works fine.

But when I stop Kafka 2.11-0.10.2.0 and start it again, the error below occurs and Druid Kafka ingestion stops working until I shut down Imply (Druid) and delete all the data (i.e. delete the var folder).

Sometimes Druid simply stops ingesting data from Kafka even though Kafka reports no errors. After I delete Druid's var folder, everything works again until the same error occurs the next time.

Error:

kafka.common.NoReplicaOnlineException: No replica for partition [__consumer_offsets,19] is alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
    at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:73) ~[kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:339) ~[kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:200) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:115) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:112) [kafka_2.11-0.10.2.0.jar:?]
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733) [scala-library-2.11.8.jar:?]
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) [scala-library-2.11.8.jar:?]
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) [scala-library-2.11.8.jar:?]
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) [scala-library-2.11.8.jar:?]
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) [scala-library-2.11.8.jar:?]
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) [scala-library-2.11.8.jar:?]
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) [scala-library-2.11.8.jar:?]
    at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:112) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:67) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:342) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:160) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:85) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:51) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply(ZookeeperLeaderElector.scala:49) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply(ZookeeperLeaderElector.scala:49) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.server.ZookeeperLeaderElector.startup(ZookeeperLeaderElector.scala:49) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.KafkaController$$anonfun$startup$1.apply$mcV$sp(KafkaController.scala:681) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.KafkaController$$anonfun$startup$1.apply(KafkaController.scala:677) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.KafkaController$$anonfun$startup$1.apply(KafkaController.scala:677) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.controller.KafkaController.startup(KafkaController.scala:677) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.server.KafkaServer.startup(KafkaServer.scala:224) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.Kafka$.main(Kafka.scala:67) [kafka_2.11-0.10.2.0.jar:?]
    at kafka.Kafka.main(Kafka.scala) [kafka_2.11-0.10.2.0.jar:?]

The steps I perform:

1. Start Imply:

bin/supervise -c conf/supervise/quickstart.conf

2. Start Kafka:

./bin/kafka-server-start.sh config/server.properties

3. Create topic:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wikiticker

4. Enable Druid Kafka ingestion (a sketch of the supervisor spec posted here follows the step list):

curl -XPOST -H'Content-Type: application/json' -d @quickstart/wikiticker-kafka-supervisor.json http://localhost:8090/druid/indexer/v1/supervisor

5. Post events to the Kafka topic, which the Kafka indexing service then ingests into Druid.
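
Step 4 posts quickstart/wikiticker-kafka-supervisor.json to the overlord's supervisor endpoint. For context, a minimal sketch of what such a Kafka supervisor spec looks like follows; the dimension names, metrics, and tuning values here are placeholders, not the exact contents of the tutorial file:

    {
      "type": "kafka",
      "dataSchema": {
        "dataSource": "wikiticker",
        "parser": {
          "type": "string",
          "parseSpec": {
            "format": "json",
            "timestampSpec": { "column": "timestamp", "format": "auto" },
            "dimensionsSpec": { "dimensions": ["channel", "page", "user"] }
          }
        },
        "metricsSpec": [ { "type": "count", "name": "count" } ],
        "granularitySpec": {
          "type": "uniform",
          "segmentGranularity": "HOUR",
          "queryGranularity": "NONE"
        }
      },
      "tuningConfig": { "type": "kafka", "maxRowsPerSegment": 5000000 },
      "ioConfig": {
        "topic": "wikiticker",
        "consumerProperties": { "bootstrap.servers": "localhost:9092" },
        "taskCount": 1,
        "replicas": 1,
        "taskDuration": "PT10M"
      }
    }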

I added the following property to all the .properties files (common.runtime.properties, broker, coordinator, historical, middleManager, overlord):

druid.extensions.loadList=["druid-caffeine-cache", "druid-histogram", "druid-datasketches", "druid-kafka-indexing-service"]

which includes "druid-kafka-indexing-service" to enable the ingestion service.

I don't think Druid Kafka indexing should behave like this.

Is there a way to fix this?

3 Answers

  • The message indicates that the broker with ID 0 is down, and because it is the only broker assigned to that partition, the partition is currently unavailable. You must make sure broker 0 is up and serving requests.
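
    One way to confirm this is to ask Kafka which brokers are assigned to the partition and whether a leader is currently elected (assuming the same ZooKeeper address as in the question):

        # Leader: -1 or an empty Isr for a partition means the broker(s) hosting it are down
        ./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets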

  • It looks like you have a single-node Kafka cluster and its only broker is down. That is not a fault-tolerant setup. You should run 3 Kafka brokers and create every topic with a replication factor of 3, so the system keeps working even if one or two brokers go down. Single-node clusters are normally used only for development.
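
    A rough sketch of such a setup (server-1.properties and server-2.properties are assumed copies of server.properties, each with a distinct broker.id, listener port, and log.dirs):

        # Start three brokers instead of one
        ./bin/kafka-server-start.sh config/server.properties &
        ./bin/kafka-server-start.sh config/server-1.properties &
        ./bin/kafka-server-start.sh config/server-2.properties &

        # Create the topic with replication factor 3 so it stays available when a broker goes down
        ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic wikiticker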

  • I fixed it by adding 3 Kafka brokers and creating all topics with a replication factor of 3, which made Kafka stable.

    On the Druid side, I fixed the remaining issue by increasing druid.worker.capacity on the middleManager and decreasing taskDuration in the ioConfig of the supervisor spec.
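
    Roughly, those two changes look like this (the values are illustrative, not the exact numbers from my setup):

        # middleManager runtime.properties (adjust the path to your conf layout)
        druid.worker.capacity=9

        # and in the ioConfig of the supervisor spec, a shorter task duration, e.g.
        #   "taskDuration": "PT10M"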

    Details are in another question.
