I have a fat-tree topology in Mininet, and I am using OpenFlow 1.3 with a Ryu controller to emulate ECMP-based routing. I am doing this with group and flow tables. For example, core switches s2 and s3 are connected to ports 1 and 2 of an aggregation switch, say as1, on which the following rules are installed:

Group table definition

    ofp = sw.ofproto  # shortcut for the OpenFlow 1.3 constants

    # Create two actions that forward packets to ports 1 and 2,
    # which are connected to the two core switches
    action1 = sw.ofproto_parser.OFPActionOutput(1)
    action2 = sw.ofproto_parser.OFPActionOutput(2)

    # Specify two action sets (buckets), each with one action and equal weight
    bucket1 = sw.ofproto_parser.OFPBucket(weight=1, actions=[action1])
    bucket2 = sw.ofproto_parser.OFPBucket(weight=1, actions=[action2])

    # OFPGT_SELECT chooses between bucket1 and bucket2 based on
    # some selection logic implemented in the switch (round-robin?!)
    group_mod = sw.ofproto_parser.OFPGroupMod(
        datapath=sw, command=ofp.OFPGC_ADD,
        type_=ofp.OFPGT_SELECT, group_id=1,
        buckets=[bucket1, bucket2])
    sw.send_msg(group_mod)

Installing the group action in the flow table

    # Match all IPv4 traffic and hand it to group 1
    match = sw.ofproto_parser.OFPMatch(eth_type=0x0800)
    action = sw.ofproto_parser.OFPActionGroup(1)

    inst = [sw.ofproto_parser.OFPInstructionActions(
        ofp.OFPIT_APPLY_ACTIONS, [action])]
    mod = sw.ofproto_parser.OFPFlowMod(
        datapath=sw, match=match, cookie=0, command=ofp.OFPFC_ADD,
        idle_timeout=0, hard_timeout=0, priority=100,
        flags=ofp.OFPFF_SEND_FLOW_REM, instructions=inst)
    sw.send_msg(mod)

    # Other flow entry rules are added here ...

I confirmed this with the dpctl dump-flows command in Mininet. Note that for core switch s2, n_packets = n_bytes = 0, which is not the case for the other core switch, s3:

*** s2 ------------------------------------------------------------------------
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=71.556s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=1200,ip,nw_dst=10.0.0.1 actions=output:1
 cookie=0x0, duration=71.556s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=1200,ip,nw_dst=10.0.0.2 actions=output:1
 cookie=0x0, duration=71.556s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=1200,ip,nw_dst=10.0.0.3 actions=output:1
 cookie=0x0, duration=71.556s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=1200,ip,nw_dst=10.0.0.4 actions=output:1
 cookie=0x0, duration=71.556s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=1200,ip,nw_dst=10.0.0.5 actions=output:2

..... and for core switch s3:

*** s3 ------------------------------------------------------------------------
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=71.582s, table=0, n_packets=4436, n_bytes=12043732, send_flow_rem priority=1200,ip,nw_dst=10.0.0.1 actions=output:1
 cookie=0x0, duration=71.582s, table=0, n_packets=6306, n_bytes=11448184, send_flow_rem priority=1200,ip,nw_dst=10.0.0.2 actions=output:1
 cookie=0x0, duration=71.582s, table=0, n_packets=870, n_bytes=1157688, send_flow_rem priority=1200,ip,nw_dst=10.0.0.3 actions=output:1
 cookie=0x0, duration=71.582s, table=0, n_packets=674, n_bytes=644616, send_flow_rem priority=1200,ip,nw_dst=10.0.0.4 actions=output:1
 cookie=0x0, duration=71.582s, table=0, n_packets=4475, n_bytes=11918478, send_flow_rem priority=1200,ip,nw_dst=10.0.0.5 actions=output:2
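As an aside, to compare the load across the core switches programmatically instead of eyeballing the dumps, a small parser over the dpctl output works; this is just a convenience sketch, not part of Ryu or Mininet:

```python
import re

def flow_packet_counts(dump_text):
    """Extract every n_packets counter from dpctl dump-flows output."""
    return [int(n) for n in re.findall(r"n_packets=(\d+)", dump_text)]

# Two sample lines in the same format as the dumps above
sample = (
    " cookie=0x0, duration=71.582s, table=0, n_packets=4436, "
    "n_bytes=12043732, send_flow_rem priority=1200,ip,"
    "nw_dst=10.0.0.1 actions=output:1\n"
    " cookie=0x0, duration=71.582s, table=0, n_packets=6306, "
    "n_bytes=11448184, send_flow_rem priority=1200,ip,"
    "nw_dst=10.0.0.2 actions=output:1\n"
)
counts = flow_packet_counts(sample)
print(counts)       # [4436, 6306]
print(sum(counts))  # 10742
```

Summing the per-switch totals this way makes the imbalance between s2 (zero) and s3 (everything) obvious at a glance.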

As I mentioned in the comments above, I believe OFPGT_SELECT chooses between bucket1 and bucket2 based on some selection logic implemented in the switch, e.g., round-robin. This seems to work fine for the lower-level switches in the topology; that is, with equal weights, both buckets are selected alternately. But in the case of the aggregation switches at the top, only one path (bucket) to the core switches is ever selected. Typically, all packets take only the first bucket (port) or only the last bucket (port), but never alternate between the two!
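One possible explanation (my assumption, to be confirmed): the switch may implement OFPGT_SELECT by hashing packet header fields rather than by round-robin, so every packet of a given flow maps to the same bucket, and with only a few flows they can all land on one port. A minimal, hypothetical sketch of that idea (select_bucket is illustrative only, not a Ryu or Open vSwitch API):

```python
import zlib

def select_bucket(src_ip, dst_ip, src_port, dst_port, proto, n_buckets):
    """Hash the 5-tuple and map it onto one of n_buckets (illustrative)."""
    key = f"{src_ip},{dst_ip},{src_port},{dst_port},{proto}".encode()
    return zlib.crc32(key) % n_buckets

# Every packet of the same flow hashes to the same bucket,
# regardless of the buckets' weights being equal
a = select_bucket("10.0.0.2", "10.0.0.5", 40000, 5001, 6, 2)
b = select_bucket("10.0.0.2", "10.0.0.5", 40000, 5001, 6, 2)
print(a == b)  # True
```

If this is how the selection works, the buckets would only balance across many distinct flows, not across the packets of a single flow.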

However, it does work when the two buckets are given unequal weights (1 and 2). I am not sure what the problem is with equal weights.

Any help would be appreciated. Thanks!