
Why is the Apache Storm KafkaSpout emitting so many items from my Kafka topic?


I am running into a problem with Kafka and Storm. At this point I am not sure whether it is a problem with the KafkaSpout configuration I am setting up, whether I am not acking correctly, or something else.

I have 50 items on my Kafka topic, but my spout has emitted more than 1,300 tuples (and counting). On top of that, the Spout reports that almost all of them have "failed". The topology is not actually failing; it is writing to the database successfully. I just do not know why it is apparently replaying everything (if that is in fact what it is doing).

The big question is:

Why is it emitting so many tuples when I only sent 50 to Kafka?

[screenshot: Storm UI showing the spout's emitted and failed tuple counts]
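A quick back-of-the-envelope check makes the replay explanation plausible. With `topology.message.timeout.secs` set to 10 (as in the code below) and no acks ever arriving, each of the 50 messages gets re-emitted roughly once per timeout window. The elapsed runtime used here (270 seconds) is an assumption for illustration only:

```java
public class ReplayEstimate {

    // Rough model of at-least-once replay: an un-acked tuple is re-emitted
    // about once per message-timeout window until it is finally acked.
    static int estimateEmits(int messages, int timeoutSecs, int elapsedSecs) {
        int replaysPerMessage = elapsedSecs / timeoutSecs;
        return messages * (1 + replaysPerMessage);
    }

    public static void main(String[] args) {
        // 50 messages, 10 s timeout, ~4.5 minutes of runtime (assumed)
        System.out.println(estimateEmits(50, 10, 270)); // prints 1400
    }
}
```

Under those assumptions the spout passes 1,300 emits in under five minutes, so a count in that range does not require anything more exotic than un-acked tuples timing out.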

Here is how I set up the topology and the KafkaSpout:

public static void main(String[] args) {
    try {
      String databaseServerIP = "";
      String kafkaZookeepers = "";
      String kafkaTopicName = "";
      int numWorkers = 1;
      int numAckers = 1;
      int numSpouts = 1;
      int numBolts = 1;
      int messageTimeOut = 10; // topology.message.timeout.secs, in seconds
      String topologyName = "";

      if (args == null || args.length == 0) { // checking args[0] first would throw on an empty array
        System.out.println("Args cannot be null or empty. Exiting");
        return;
      } else {
        if (args.length == 8) {
          for (String arg : args) {
            if (arg == null) {
              System.out.println("Parameters cannot be null. Exiting");
              return;
            }
          }
          databaseServerIP = args[0];
          kafkaZookeepers = args[1];
          kafkaTopicName = args[2];
          numWorkers = Integer.valueOf(args[3]);
          numAckers = Integer.valueOf(args[4]);
          numSpouts = Integer.valueOf(args[5]);
          numBolts = Integer.valueOf(args[6]);
          topologyName = args[7];
        } else {
          System.out.println("Bad parameters: found " + args.length + ", required = 8");
          return;
        }
      }

      Config conf = new Config();

      conf.setNumWorkers(numWorkers);
      conf.setNumAckers(numAckers);
      conf.setMessageTimeoutSecs(messageTimeOut);

      conf.put("databaseServerIP", databaseServerIP);
      conf.put("kafkaZookeepers", kafkaZookeepers);
      conf.put("kafkaTopicName", kafkaTopicName);

      /**
       * Now would put kafkaSpout instance below instead of TemplateSpout()
       */
      TopologyBuilder builder = new TopologyBuilder();
      builder.setSpout(topologyName + "-flatItems-from-kafka-spout", getKafkaSpout(kafkaZookeepers, kafkaTopicName), numSpouts);
      builder.setBolt(topologyName + "-flatItem-Writer-Bolt", new ItemWriterBolt(), numBolts).shuffleGrouping(topologyName + "-flatItems-from-kafka-spout");

      StormTopology topology = builder.createTopology();

      StormSubmitter.submitTopology(topologyName, conf, topology);

    } catch (Exception e) {
      System.out.println("There was a problem starting the topology. Check parameters.");
      e.printStackTrace();
    }
  }

  private static KafkaSpout getKafkaSpout(String zkHosts, String topic) throws Exception {

    //String topic = "FLAT-ITEMS";
    String zkNode = "/" + topic + "-subscriber-pipeline";
    String zkSpoutId = topic + "subscriberpipeline";
    KafkaTopicInZkCreator.createTopic(topic, zkHosts);

    SpoutConfig spoutConfig = new SpoutConfig(new ZkHosts(zkHosts), topic, zkNode, zkSpoutId);
    spoutConfig.startOffsetTime = kafka.api.OffsetRequest.LatestTime(); // only applies when no offset is committed yet under zkNode

    // spoutConfig.useStartOffsetTimeIfOffsetOutOfRange = true;
    //spoutConfig.startOffsetTime = System.currentTimeMillis();
    spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    return new KafkaSpout(spoutConfig);

  }

And here is how the topic gets created, in case it matters:

public static void createTopic(String topicName, String zookeeperHosts) throws Exception {
    ZkClient zkClient = null;
    ZkUtils zkUtils = null;
    try {

      int sessionTimeOutInMs = 15 * 1000; // 15 secs
      int connectionTimeOutInMs = 10 * 1000; // 10 secs

      zkClient = new ZkClient(zookeeperHosts, sessionTimeOutInMs, connectionTimeOutInMs, ZKStringSerializer$.MODULE$);
      zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperHosts), false);

      int noOfPartitions = 1;
      int noOfReplication = 1;
      Properties topicConfiguration = new Properties();

      boolean topicExists = AdminUtils.topicExists(zkUtils, topicName);
      if (!topicExists) {
        AdminUtils.createTopic(zkUtils, topicName, noOfPartitions, noOfReplication, topicConfiguration, RackAwareMode.Disabled$.MODULE$);
      }
    } catch (Exception ex) {
      ex.printStackTrace();
    } finally {
      if (zkClient != null) {
        zkClient.close();
      }
    }
  }

1 Answer

  • 1

    You need to check whether the messages are failing in the bolt.

    If they are all failing, you are probably not acking the messages in the bolt, or there is an exception in the bolt code.

    If the bolt messages are being acked, then it is more likely a timeout. Increasing the topology timeout config or the parallelism should fix the problem.
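The acking point in the answer above can be sketched as follows. `ItemWriterBolt`'s code is not shown in the question, so the classes below are hypothetical stand-ins (not Storm's real API) that only illustrate the ack/fail contract a `BaseRichBolt`-style bolt has to honor in `execute()`:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for org.apache.storm.task.OutputCollector; only the ack/fail
// calls that matter here are modeled.
class FakeCollector {
    final List<String> acked = new ArrayList<>();
    final List<String> failed = new ArrayList<>();
    void ack(String tupleId)  { acked.add(tupleId); }
    void fail(String tupleId) { failed.add(tupleId); }
}

// Assumed shape of the writer bolt's execute(): ack on success so the
// spout marks the tuple done; fail on error so it is replayed right away
// instead of sitting until the 10 s message timeout expires.
class AckingWriterBolt {
    private final FakeCollector collector;
    AckingWriterBolt(FakeCollector collector) { this.collector = collector; }

    void execute(String tupleId, Runnable dbWrite) {
        try {
            dbWrite.run();            // e.g. the database insert
            collector.ack(tupleId);
        } catch (RuntimeException e) {
            collector.fail(tupleId);
        }
    }
}

public class AckSketch {
    public static void main(String[] args) {
        FakeCollector collector = new FakeCollector();
        AckingWriterBolt bolt = new AckingWriterBolt(collector);
        bolt.execute("tuple-1", () -> {});  // succeeds -> acked
        bolt.execute("tuple-2", () -> { throw new RuntimeException("db down"); });
        System.out.println(collector.acked.size() + " acked, "
                + collector.failed.size() + " failed"); // prints 1 acked, 1 failed
    }
}
```

With Storm's real API, extending `BaseBasicBolt` instead of `BaseRichBolt` makes Storm ack automatically when `execute()` returns normally, which is often the simpler fix when a bolt never calls `ack` itself.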
