Before I explain the problem in detail, I want to share the properties I am using. Below are the producer properties: bootstrap.servers=XYZ:9092, acks=all, retries=0, batch.size=16384, auto.commit.interval.ms=1000, linger.ms=0, key.serializer=org.apache.kafka.common.serialization.StringSerializer, value.serializer=org.apache.kafka.common.serialization.StringSerializer, block.on.buffer.full=true

Consumer properties: bootstrap.servers=ec2-54-218-85-12.us-west-2.compute.amazonaws.com:9092, group.id=test, enable.auto.commit=true, key.deserializer=org.apache.kafka.common.serialization.StringDeserializer, value.deserializer=org.apache.kafka.common.serialization.StringDeserializer, session.timeout.ms=10000, fetch.min.bytes=50000, receive.buffer.bytes=262144, max.partition.fetch.bytes=2097152
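
For reference, these same settings sit in the producer.props and consumer.props files that the code below loads from the classpath; written out as plain Java properties files they would look roughly like this:

producer.props

    bootstrap.servers=XYZ:9092
    acks=all
    retries=0
    batch.size=16384
    auto.commit.interval.ms=1000
    linger.ms=0
    key.serializer=org.apache.kafka.common.serialization.StringSerializer
    value.serializer=org.apache.kafka.common.serialization.StringSerializer
    block.on.buffer.full=true

consumer.props

    bootstrap.servers=ec2-54-218-85-12.us-west-2.compute.amazonaws.com:9092
    group.id=test
    enable.auto.commit=true
    key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
    value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
    session.timeout.ms=10000
    fetch.min.bytes=50000
    receive.buffer.bytes=262144
    max.partition.fetch.bytes=2097152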

Producer.java

    int numOfmessages = Integer.valueOf(args[1]);
    // set up the producer
    KafkaProducer<String, String> producer;
    try (InputStream props = Resources.getResource("producer.props").openStream()) {
        Properties properties = new Properties();
        properties.load(props);
        producer = new KafkaProducer<>(properties);
    }

    try {
        for (int i = 0; i < numOfmessages; i++) {
            String message = "message number " + i;
            // send lots of messages
            producer.send(new ProducerRecord<String, String>("fast-messages", message));
            logger.info("sent message " + message);
        }
    } catch (Throwable throwable) {
        throwable.printStackTrace();
    } finally {
        producer.close();
    }

Consumer.java

    KafkaConsumer<String, String> consumer;
    try (InputStream props = Resources.getResource("consumer.props").openStream()) {
        Properties properties = new Properties();
        properties.load(props);
        consumer = new KafkaConsumer<>(properties);
    }

    try {
        consumer.subscribe(Arrays.asList("fast-messages"));
        int timeouts = 0;
        //noinspection InfiniteLoopStatement
        while (true) {
            // read records with a short timeout. If we time out, we don't really care.
            ConsumerRecords<String, String> records = consumer.poll(200);
            if (records.count() == 0) {
                timeouts++;
            } else {
                logger.info("Got %d records after %d timeouts\n", records.count(), timeouts);
                timeouts = 0;
            }
            for (ConsumerRecord<String, String> record : records) {
                logger.info("consumed " + record.value());
                logger.info("doing some complex operation in consumer with " + record.value());
                // busy loop to simulate a long-running operation on each record
                for (int i = 0; i < 999999999; i++) {
                    for (int j = 0; j < 999999999; j++) {
                    }
                }
            }
        }
    } finally {
        consumer.close();
    }

With the above properties and code, when I run the producer it sends all the messages without any problem. On the consumer side, I am able to consume all the messages, but when the offset commit happens it fails with the following error.

    2016-11-04 09:55:08 INFO AbstractCoordinator:540 - Marking the coordinator 2147483647 dead.
    2016-11-04 09:55:08 ERROR ConsumerCoordinator:544 - Error UNKNOWN_MEMBER_ID occurred while committing offsets for group test
    2016-11-04 09:55:08 WARN ConsumerCoordinator:418 - Auto offset commit failed: Commit cannot be completed due to group rebalance
    2016-11-04 09:55:09 ERROR ConsumerCoordinator:544 - Error UNKNOWN_MEMBER_ID occurred while committing offsets for group test
    2016-11-04 09:55:09 WARN ConsumerCoordinator:439 - Auto offset commit failed:
    2016-11-04 09:55:09 INFO AbstractCoordinator:361 - Attempt to join group test failed due to unknown member id, resetting and retrying.

I understand the problem to some extent: it fails because we are doing a complex, long-running operation inside the consumer loop. Any suggestions on how to handle this? This must be a fairly common scenario; I just want to understand whether we need to change any configuration, etc.
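
One approach I have been considering (not sure if it is the right one) is to turn off auto-commit, cap how many records a single poll() returns, and commit synchronously after each small batch, so that the gap between poll() calls stays short. A rough, self-contained sketch of what I mean (assuming Kafka 0.10+ where max.poll.records is available; ManualCommitConsumer and doComplexOperation() are just placeholders for my real class and processing):

    import java.util.Arrays;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ManualCommitConsumer {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "ec2-54-218-85-12.us-west-2.compute.amazonaws.com:9092");
            props.put("group.id", "test");
            props.put("enable.auto.commit", "false");   // commit manually, only after processing succeeds
            props.put("max.poll.records", "10");        // keep each poll() batch small (Kafka 0.10.0+)
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            try {
                consumer.subscribe(Arrays.asList("fast-messages"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(200);
                    for (ConsumerRecord<String, String> record : records) {
                        doComplexOperation(record.value());   // placeholder for the heavy per-record work
                    }
                    // commit the offsets of this small batch before the next poll()
                    consumer.commitSync();
                }
            } finally {
                consumer.close();
            }
        }

        private static void doComplexOperation(String value) {
            // stand-in for the expensive processing done on each record
        }
    }

The other knob I was looking at is raising session.timeout.ms (and request.timeout.ms along with it) so that a single batch is allowed to take longer before the coordinator marks the consumer dead, but I am not sure which of the two is the preferred way to handle this.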