
Spring KafkaTemplate producer performance in production

I am using Spring's KafkaTemplate to produce messages. It is producing messages far too slowly: it takes about 8 minutes to produce 15,000 messages.

Here is how I create the KafkaTemplate:

  @Bean
  public ProducerFactory<String, GenericRecord> highSpeedAvroProducerFactory(
      @Qualifier("highSpeedProducerProperties") KafkaProperties properties) {
    final Map<String, Object> kafkaPropertiesMap = properties.getKafkaPropertiesMap();
    System.out.println(kafkaPropertiesMap);
    kafkaPropertiesMap.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    kafkaPropertiesMap.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, AvroGenericSerializer.class);
    return new DefaultKafkaProducerFactory<>(kafkaPropertiesMap);
  }

  @Bean
  public KafkaTemplate<String, GenericRecord> highSpeedAvroKafkaTemplate(
      @Qualifier("highSpeedAvroProducerFactory") ProducerFactory<String, GenericRecord> highSpeedAvroProducerFactory) {
    return new KafkaTemplate<>(highSpeedAvroProducerFactory);
  }

Here is how I send messages with the template:

@Async("servicingPlatformUpdateExecutor")
  public void afterWrite(List<? extends Account> items) {
    LOGGER.info("Batch start:{}",items.size());
    for (Test test : items) {
        if (test.isOmega()) {

          ObjectKeyRecord objectKeyRecord = ObjectKeyRecord.newBuilder().setType("test").setId(test.getId()).build();
          LOGGER.info("build start, {}",test.getId());

          GenericRecord message = MessageUtils.buildEventRecord(
              schemaService.findSchema(topicName)
                  .orElseThrow(() -> new OmegaException("SchemaNotFoundException", topicName)), objectKeyRecord, test);
          LOGGER.info("build end, {}",account.getId());
          LOGGER.info("send Started , {}",account.getId());
          ListenableFuture<SendResult<String, GenericRecord>> future = highSpeedAvroKafkaTemplate.send(topicName, objectKeyRecord.toString(), message);
          LOGGER.info("send Done , {}",test.getId());
          future.addCallback(new KafkaProducerFutureCallback(kafkaSender, topicName, objectKeyRecord.toString(), message));
        }
    }
    LOGGER.info("Batch end}");

  }
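One detail worth noting in the loop above: schemaService.findSchema(topicName) is resolved once per record. If the schema for topicName is stable for the duration of a batch (an assumption, since findSchema is not shown), the lookup can be hoisted out of the loop; a minimal sketch, assuming findSchema returns an Optional of an Avro Schema (per-record INFO logging is also dropped here, since it adds per-message work on the sending thread):

    // Sketch only: the schema is assumed constant across the batch, so resolve it once.
    Schema schema = schemaService.findSchema(topicName)
        .orElseThrow(() -> new OmegaException("SchemaNotFoundException", topicName));
    for (Test test : items) {
      if (test.isOmega()) {
        ObjectKeyRecord objectKeyRecord =
            ObjectKeyRecord.newBuilder().setType("test").setId(test.getId()).build();
        GenericRecord message = MessageUtils.buildEventRecord(schema, objectKeyRecord, test);
        // send() returns a ListenableFuture, so the callback can be chained directly
        highSpeedAvroKafkaTemplate.send(topicName, objectKeyRecord.toString(), message)
            .addCallback(new KafkaProducerFutureCallback(kafkaSender, topicName,
                objectKeyRecord.toString(), message));
      }
    }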

Producer properties:

metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [***VALID BROKERS***]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 9223372036854775807
interceptor.classes = null
ssl.truststore.password = null
client.id = producer-1
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = all
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 2147483647
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 800000000
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 10
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2]
batch.size = 40000000
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = SASL_SSL
max.request.size = 1048576
value.serializer = class com.message.serialization.AvroGenericSerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 2
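
For reference, the settings most relevant to producer throughput in this dump are batch.size, linger.ms, compression.type, buffer.memory and max.in.flight.requests.per.connection. They can be overridden in the same map the factory is built from; a minimal sketch against the highSpeedAvroProducerFactory bean above (the concrete values are illustrative assumptions, not recommendations):

  @Bean
  public ProducerFactory<String, GenericRecord> highSpeedAvroProducerFactory(
      @Qualifier("highSpeedProducerProperties") KafkaProperties properties) {
    final Map<String, Object> kafkaPropertiesMap = properties.getKafkaPropertiesMap();
    kafkaPropertiesMap.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    kafkaPropertiesMap.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, AvroGenericSerializer.class);
    // Illustrative overrides: batch.size above (40 MB) is unusually large relative to
    // max.request.size (1 MB); 64 KB is a more conventional batch size. A slightly longer
    // linger gives batches time to fill, and compression trades CPU for bandwidth.
    kafkaPropertiesMap.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);
    kafkaPropertiesMap.put(ProducerConfig.LINGER_MS_CONFIG, 5);
    kafkaPropertiesMap.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
    return new DefaultKafkaProducerFactory<>(kafkaPropertiesMap);
  }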

Here are logs showing how many milliseconds each call to the KafkaTemplate send method takes:

2018-04-27 05:29:05.691 INFO  - testservice -  - UpdateExecutor-1 - com.test.testservice.adapter.batch.testsyncjob.UpdateWriteListener:70 - build start, 1
2018-04-27 05:29:05.691 INFO  - testservice -  - UpdateExecutor-1 - com.test.testservice.adapter.batch.testsyncjob.UpdateWriteListener:75 - build end, 1
2018-04-27 05:29:05.691 INFO  - testservice -  - UpdateExecutor-1 - com.test.testservice.adapter.batch.testsyncjob.UpdateWriteListener:76 - send Started , 1
2018-04-27 05:29:05.778 INFO  - testservice -  - UpdateExecutor-1 - com.test.testservice.adapter.batch.testsyncjob.UpdateWriteListener:79 - send Done , 1
2018-04-27 05:29:07.794 INFO  - testservice -  - kafka-producer-network-thread | producer-1 - com.test.testservice.adapter.batch.testsyncjob.KafkaProducerFutureCallback:38

Any suggestions on how to improve the sender's performance would be much appreciated.

Spring Kafka version: 1.2.3.RELEASE, Kafka client: 0.10.2.1

Update 1:

I changed the serializer to ByteArraySerializer and produced the messages the same way. I still see each send method call on the KafkaTemplate taking 100 to 200 milliseconds.

ObjectKeyRecord objectKeyRecord = ObjectKeyRecord.newBuilder().setType("test").setId(test.getId()).build();
GenericRecord message = MessageUtils.buildEventRecord(
    schemaService.findSchema(testConversionTopicName)
        .orElseThrow(() -> new TestException("SchemaNotFoundException", testTopicName)), objectKeyRecord, test);
byte[] messageBytes = serializer.serialize(testConversionTopicName, message);
LOGGER.info("send Started , {}", test.getId());
ListenableFuture<SendResult<String, byte[]>> future = highSpeedAvroKafkaTemplate.send(testConversionTopicName, objectKeyRecord.toString(), messageBytes);
LOGGER.info("send Done , {}", test.getId());
future.addCallback(new KafkaProducerFutureCallback(kafkaSender, testConversionTopicName, objectKeyRecord.toString(), message));
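
KafkaTemplate.send() normally returns as soon as the record has been serialized and appended to the producer's buffer (it only blocks longer when topic metadata is missing or the buffer is full), so 100-200 ms spent on the calling thread is worth breaking down. A rough timing sketch, assuming the same fields as above and that findSchema returns an Avro Schema:

// Rough sketch: time each stage separately so the slow step becomes visible.
// schemaService, serializer and highSpeedAvroKafkaTemplate are the fields shown above.
long t0 = System.nanoTime();
Schema schema = schemaService.findSchema(testConversionTopicName)
    .orElseThrow(() -> new TestException("SchemaNotFoundException", testConversionTopicName));
long t1 = System.nanoTime();
GenericRecord message = MessageUtils.buildEventRecord(schema, objectKeyRecord, test);
long t2 = System.nanoTime();
byte[] messageBytes = serializer.serialize(testConversionTopicName, message);
long t3 = System.nanoTime();
highSpeedAvroKafkaTemplate.send(testConversionTopicName, objectKeyRecord.toString(), messageBytes);
long t4 = System.nanoTime();
LOGGER.info("schema={}us build={}us serialize={}us send={}us",
    (t1 - t0) / 1000, (t2 - t1) / 1000, (t3 - t2) / 1000, (t4 - t3) / 1000);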

1 Answer

    Have you profiled your application? For example, with YourKit.

    I suspect it's the Avro serializer; I was able to send 15,000 1000-byte messages in 274 ms.

    @SpringBootApplication
    public class So50060086Application {
    
        public static void main(String[] args) {
            SpringApplication.run(So50060086Application.class, args);
        }
    
        @Bean
        public ApplicationRunner runner(KafkaTemplate<String, String> template) {
            return args -> {
                Thread.sleep(5_000);
                String payload = new String(new byte[999]);
                StopWatch watch = new StopWatch();
                watch.start();
                for (int i = 0; i < 15_000; i++) {
                    template.send("so50060086a", "" + i + payload);
                }
                watch.stop();
                System.out.println(watch.prettyPrint());
            };
        }
    
        @Bean
        public NewTopic topic() {
            return new NewTopic("so50060086a", 1, (short) 1);
        }
    }
    

    StopWatch '': running time (millis) = 274
    
