
2-step windowed aggregation with Kafka Streams DSL


Suppose I have a stream "stream-1" consisting of one data point per second, and I'd like to compute a derived stream "stream-5" containing the per-key sum over a 5-second window, plus a further stream "stream-10" based on "stream-5" containing the sum over a 10-second window. I'd also like to be able to run each step in a different process. It is not a problem per se if stream-5 and stream-10 contain updates for the same key/timestamp (so I don't necessarily need How to send final kafka-streams aggregation result of a time windowed KTable?), as long as the last values are correct.

Is there a (simple) way to solve this with the high-level Kafka Streams DSL? So far I don't see an elegant way to deal with the intermediate updates that the aggregation produces on stream-5.

I know the intermediate updates can be controlled to some extent with the cache.max.bytes.buffering and commit.interval.ms settings, but I don't think any setting can guarantee that no intermediate values are produced in all cases. I could also try converting "stream-5" into a KTable on read, using the timestamp part of the key, but KTables do not appear to support windowed operations the way KStreams do.
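For reference, those two settings go into the streams configuration; below is a minimal sketch with illustrative values (the application id and broker address are placeholders). As noted above, a larger cache and a longer commit interval only make intermediate emissions less frequent; they do not guarantee their absence.

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "two-step-aggregation"); // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder
// Larger record cache: more aggregation updates are compacted in memory.
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
// Longer commit interval: the cache is flushed (and updates emitted) less often.
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30_000L);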

This is what I have so far; it fails because of the intermediate aggregation values on stream-5:

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.apache.kafka.streams.kstream.Reducer;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

// Sum two data points; the first argument carries the aggregate so far.
Reducer<DataPoint> sum = new Reducer<DataPoint>() {
    @Override
    public DataPoint apply(DataPoint x, DataPoint y) {
        return new DataPoint(x.timestamp, x.value + y.value);
    }
};

// Strip the window from the key, keeping only the original record key.
KeyValueMapper<Windowed<String>, DataPoint, String> strip =
        new KeyValueMapper<Windowed<String>, DataPoint, String>() {
    @Override
    public String apply(Windowed<String> wKey, DataPoint arg1) {
        return wKey.key();
    }
};

KStreamBuilder builder = new KStreamBuilder(); // pre-2.0 DSL

// Step 1: 5-second tumbling sums of stream-1, written to stream-5.
KStream<String, DataPoint> s1 = builder.stream("stream-1");

s1.groupByKey()
  .reduce(sum, TimeWindows.of(5000).advanceBy(5000))
  .toStream()
  .selectKey(strip)
  .to("stream-5");

// Step 2: 10-second tumbling sums of stream-5, written to stream-10.
KStream<String, DataPoint> s5 = builder.stream("stream-5");

s5.groupByKey()
  .reduce(sum, TimeWindows.of(10000).advanceBy(10000))
  .toStream()
  .selectKey(strip)
  .to("stream-10");

Now, if stream-1 contains the inputs (the key is simply KEY):

KEY {"timestamp":0,"value":1.0}
KEY {"timestamp":1000,"value":1.0}
KEY {"timestamp":2000,"value":1.0}
KEY {"timestamp":3000,"value":1.0}
KEY {"timestamp":4000,"value":1.0}
KEY {"timestamp":5000,"value":1.0}
KEY {"timestamp":6000,"value":1.0}
KEY {"timestamp":7000,"value":1.0}
KEY {"timestamp":8000,"value":1.0}
KEY {"timestamp":9000,"value":1.0}

stream-5 contains the correct (final) values:

KEY {"timestamp":0,"value":1.0}
KEY {"timestamp":0,"value":2.0}
KEY {"timestamp":0,"value":3.0}
KEY {"timestamp":0,"value":4.0}
KEY {"timestamp":0,"value":5.0}
KEY {"timestamp":5000,"value":1.0}
KEY {"timestamp":5000,"value":2.0}
KEY {"timestamp":5000,"value":3.0}
KEY {"timestamp":5000,"value":4.0}
KEY {"timestamp":5000,"value":5.0}

But stream-10 is wrong (the final value should be 10.0, i.e. the two final 5-second sums 5.0 + 5.0), because it also takes the intermediate values on stream-5 into account:

KEY {"timestamp":0,"value":1.0}
KEY {"timestamp":0,"value":3.0}
KEY {"timestamp":0,"value":6.0}
KEY {"timestamp":0,"value":10.0}
KEY {"timestamp":0,"value":15.0}
KEY {"timestamp":0,"value":21.0}
KEY {"timestamp":0,"value":28.0}
KEY {"timestamp":0,"value":36.0}
KEY {"timestamp":0,"value":45.0}
KEY {"timestamp":0,"value":55.0}

1 Answer


The problem is that the result of every aggregation is a KTable, meaning the records produced to its output topic represent a changelog. However, when you subsequently load them as a stream, the downstream aggregation double-counts: each successive update is added on top of the ones before it instead of replacing them.

Instead, you need to load the intermediate topics as tables, not streams. However, you will not be able to use windowed aggregations on them, as windowed aggregations are only available on streams.

You can use the following pattern to accomplish windowed aggregations over tables instead of streams:

https://cwiki.apache.org/confluence/display/KAFKA/Windowed+aggregations+over+successively+increasing+timed+windows

If you want to run each step in a separate process you can adapt it, just remember to load the intermediate tables using builder.table() instead of builder.stream(). A sketch of what that can look like follows.
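To make the shape of that adaptation concrete, here is a minimal sketch; it is not the wiki recipe verbatim. It assumes the first step is changed to re-key its output as "<key>@<windowStart>" (instead of stripping the window entirely), so that each 5-second window occupies exactly one table row, and it assumes default serdes are configured for String keys and DataPoint values. The adder/subtractor pair is what cancels the intermediate updates: when a 5-second window's sum moves from 4.0 to 5.0, the table update retracts the 4.0 downstream and adds the 5.0.

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

StreamsBuilder builder = new StreamsBuilder();

// Load the intermediate topic as a table: each "<key>@<5s-window-start>" row
// holds the latest sum for that window, so a new value replaces the old one.
KTable<String, DataPoint> t5 = builder.table("stream-5");

t5
    // Re-key each 5-second window row into its enclosing 10-second window
    // (the "<key>@<windowStart>" key layout is an assumption of this sketch).
    .groupBy((key5, dp) -> {
        String key = key5.substring(0, key5.indexOf('@'));
        long start10 = (dp.timestamp / 10000) * 10000;
        return KeyValue.pair(key + "@" + start10, dp);
    })
    // Adder and subtractor: on an update the old value is subtracted and the
    // new one added, so intermediate emissions do not accumulate.
    .reduce(
        (agg, dp) -> new DataPoint(agg.timestamp, agg.value + dp.value),  // adder
        (agg, dp) -> new DataPoint(agg.timestamp, agg.value - dp.value))  // subtractor
    .toStream()
    .to("stream-10");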
