When I start the consumer with a single instance, it shows up in the consumer group, but it does not consume any data from the topic. If I then start another consumer, my first consumer starts consuming data, but the newest consumer instance does not get any partitions assigned.
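
For reference, each consumer instance is started roughly like this. This is a minimal sketch: the group id 'datadog', the broker hostname, the api_version, and the /events/ pattern all come from the logs below, while everything else is assumed.

    from kafka import KafkaConsumer

    # Minimal sketch of one consumer instance. group_id, the broker hostname,
    # api_version, and the topic pattern are taken from the logs below; the
    # remaining configuration is assumed.
    consumer = KafkaConsumer(
        bootstrap_servers='kafka-broker1.ap-south-1.staging.internal:9092',
        group_id='datadog',
        api_version=(0, 11, 0),
    )
    consumer.subscribe(pattern='events')  # logged as "Subscribing to pattern: /events/"

    for message in consumer:
        print(message.topic, message.partition, message.offset)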

Below are the info logs when the first consumer instance starts.

    INFO:kafka.client:Bootstrapping cluster metadata from [(u'kafka-broker1.ap-south-1.staging.internal', 9092, 0)]
    INFO:kafka.conn:connecting to 172.31.1.66:9092
    INFO:kafka.client:Bootstrap succeeded: found 3 brokers and 19 topics.
    INFO:kafka.conn:Closing connection.
    INFO:kafka.conn:connecting to 172.31.1.148:9092
    INFO:kafka.conn:Broker version identified as 0.11.0
    INFO:kafka.conn:Set configuration api_version=(0, 11, 0) to skip auto check_version requests on startup
    INFO:kafka.consumer.subscription_state:Subscribing to pattern: /events/
    INFO:kafka.conn:connecting to 172.31.1.70:9092
    INFO:kafka.cluster:Group coordinator for datadog is BrokerMetadata(nodeId=1, host=u'kafka-broker1.ap-south-1.staging.internal', port=9092, rack=None)
    INFO:kafka.coordinator:Discovered coordinator 1 for group datadog
    INFO:kafka.conn:connecting to 172.31.1.66:9092
    INFO:kafka.coordinator.consumer:Revoking previously assigned partitions set([]) for group datadog
    INFO:kafka.coordinator:(Re-)joining group datadog
    INFO:kafka.consumer.subscription_state:Updated subscribed topics to: [u'events']
    INFO:kafka.coordinator:Joined group 'datadog' (generation 843) with member_id kafka-python-1.3.5-e3c25fb3-39ea-4550-845f-9b663355b4f5
    INFO:kafka.coordinator:Successfully joined group datadog with generation 843
    INFO:kafka.consumer.subscription_state:Updated partition assignment: []
    INFO:kafka.coordinator.consumer:Setting newly assigned partitions set([]) for group datadog

When I start the second instance, the first instance gets partitions assigned while the second instance gets 0 partitions assigned; the second instance's logs up to that point are the same as the first instance's logs above.

Below are the info logs of the first instance after the second instance starts.

    INFO:kafka.coordinator.consumer:Setting newly assigned partitions set([]) for group datadog
    WARNING:kafka.coordinator:Heartbeat failed for group datadog because it is rebalancing
    WARNING:kafka.coordinator:Heartbeat failed ([Error 27] RebalanceInProgressError); retrying
    INFO:kafka.coordinator.consumer:Revoking previously assigned partitions set([]) for group datadog
    INFO:kafka.coordinator:(Re-)joining group datadog
    INFO:kafka.coordinator:Skipping heartbeat: no auto-assignment or waiting on rebalance
    INFO:kafka.coordinator:Joined group 'datadog' (generation 843) with member_id kafka-python-1.3.5-ddb66185-c615-4f31-9729-9384131f24c9
    INFO:kafka.coordinator:Elected group leader -- performing partition assignments using range
    INFO:kafka.coordinator:Successfully joined group datadog with generation 843
    INFO:kafka.consumer.subscription_state:Updated partition assignment: [TopicPartition(topic=u'events', partition=0), TopicPartition(topic=u'events', partition=1), TopicPartition(topic=u'events', partition=2), TopicPartition(topic=u'events', partition=3), TopicPartition(topic=u'events', partition=4), TopicPartition(topic=u'events', partition=5), TopicPartition(topic=u'events', partition=6), TopicPartition(topic=u'events', partition=7), TopicPartition(topic=u'events', partition=8), TopicPartition(topic=u'events', partition=9)]
    INFO:kafka.coordinator.consumer:Setting newly assigned partitions set([TopicPartition(topic=u'events', partition=6), TopicPartition(topic=u'events', partition=7), TopicPartition(topic=u'events', partition=8), TopicPartition(topic=u'events', partition=9), TopicPartition(topic=u'events', partition=0), TopicPartition(topic=u'events', partition=1), TopicPartition(topic=u'events', partition=2), TopicPartition(topic=u'events', partition=3), TopicPartition(topic=u'events', partition=4), TopicPartition(topic=u'events', partition=5)]) for group datadog

In every case we end up with one consumer instance that has no partitions assigned: the last consumer instance to join always gets 0 partitions.
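
One way we confirm this on each instance is to poll once and print the live assignment (poll() and assignment() are part of the kafka-python KafkaConsumer API; the sketch assumes a consumer created as above):

    # Diagnostic sketch, assuming `consumer` is an instance created as above.
    consumer.poll(timeout_ms=1000)  # drives the group join / rebalance
    print(consumer.assignment())    # set of TopicPartition; empty on the starved instance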

Below are screenshots of the same.

No partitions assigned to first consumer instance

No partitions assigned to last consumer instance

We also suspect the Kafka clustering setup. When we had only one Kafka node, partition assignment to consumer instances and rebalancing worked fine, but after moving to a multi-node cluster we are facing this issue.
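
To rule out a metadata problem after the move to multiple nodes, one quick check is to bootstrap against all brokers and print the partition metadata for the topic. Only broker1's hostname appears in the logs above; the broker2/broker3 hostnames below are placeholders.

    from kafka import KafkaConsumer

    # Diagnostic sketch: list every broker so metadata does not depend on a
    # single node being reachable. broker2/broker3 hostnames are assumed.
    consumer = KafkaConsumer(
        bootstrap_servers=[
            'kafka-broker1.ap-south-1.staging.internal:9092',
            'kafka-broker2.ap-south-1.staging.internal:9092',  # assumed hostname
            'kafka-broker3.ap-south-1.staging.internal:9092',  # assumed hostname
        ],
        group_id='datadog',
    )
    print(consumer.partitions_for_topic('events'))  # expect {0, 1, ..., 9}
    consumer.close()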