Pods on different nodes can't ping each other

I set up a k8s cluster with 1 master and 2 worker nodes following the documentation. A pod can ping another pod on the same node, but cannot ping a pod on another node.

To demonstrate the problem, I deployed the Deployment below, which has 3 replicas. Two of the pods sit on the same node, while the other pod sits on the other node.

    $ cat nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: nginx-svc
    spec:
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80

    $ kubectl get nodes
    NAME                                          STATUS    ROLES     AGE       VERSION
    ip-172-31-21-115.us-west-2.compute.internal   Ready     master    20m       v1.11.2
    ip-172-31-26-62.us-west-2.compute.internal    Ready     <none>    19m       v1.11.2
    ip-172-31-29-204.us-west-2.compute.internal   Ready     <none>    14m       v1.11.2

    $ kubectl get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP           NODE                                          NOMINATED NODE
    nginx-deployment-966857787-22qq7   1/1       Running   0          11m       10.244.2.3   ip-172-31-29-204.us-west-2.compute.internal   <none>
    nginx-deployment-966857787-lv7dd   1/1       Running   0          11m       10.244.1.2   ip-172-31-26-62.us-west-2.compute.internal    <none>
    nginx-deployment-966857787-zkzg6   1/1       Running   0          11m       10.244.2.2   ip-172-31-29-204.us-west-2.compute.internal   <none>

    $ kubectl get svc
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   21m
    nginx-svc    ClusterIP   10.105.205.10   <none>        80/TCP    11m

Everything looks fine.
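
As an extra sanity check (not part of the output above), listing the Service's endpoints should show all three pod IPs, which rules out a selector problem:

    $ kubectl get endpoints nginx-svc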

Let me show you the containers.

    # docker exec -it 489b180f512b /bin/bash
    root@nginx-deployment-966857787-zkzg6:/# ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.2.2  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::cc4d:61ff:fe8a:5aeb  prefixlen 64  scopeid 0x20<link>

    root@nginx-deployment-966857787-zkzg6:/# ping 10.244.2.3
    PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data.
    64 bytes from 10.244.2.3: icmp_seq=1 ttl=64 time=0.066 ms
    64 bytes from 10.244.2.3: icmp_seq=2 ttl=64 time=0.055 ms
    ^C

So it can ping its neighbor pod on the same node.

    root@nginx-deployment-966857787-zkzg6:/# ping 10.244.1.2
    PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
    ^C
    --- 10.244.1.2 ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 1059ms

But it cannot ping its replica on the other node.
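
To narrow this down: with the flannel VXLAN backend (which the flannel.1 interfaces suggest), each node should have a route for the other node's pod subnet via flannel.1, and the encapsulated traffic should leave the node on UDP port 8472. Something along these lines can verify both (these are suggested commands, not output I collected):

    # the route for the remote pod subnet (10.244.1.0/24 seen from this node) should point at flannel.1
    $ ip route | grep 10.244
    # while pinging 10.244.1.2 from the pod, watch whether VXLAN packets actually leave on eth0
    $ sudo tcpdump -ni eth0 udp port 8472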

Here are the host interfaces:

    # ifconfig
    cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.2.1  netmask 255.255.255.0  broadcast 0.0.0.0

    docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 172.31.29.204  netmask 255.255.240.0  broadcast 172.31.31.255

    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.2.0  netmask 255.255.255.255  broadcast 0.0.0.0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0

    veth09fb984a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet6 fe80::d819:14ff:fe06:174c  prefixlen 64  scopeid 0x20<link>

    veth87b3563e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet6 fe80::d09c:d2ff:fe7b:7dd7  prefixlen 64  scopeid 0x20<link>

    # ifconfig
    cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.1.1  netmask 255.255.255.0  broadcast 0.0.0.0

    docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 172.31.26.62  netmask 255.255.240.0  broadcast 172.31.31.255

    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0

    veth9733e2e6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet6 fe80::8003:46ff:fee2:abc2  prefixlen 64  scopeid 0x20<link>

Processes on the node:

    # ps auxww|grep kube
    root      4059  0.1  2.8  43568 28316 ?        Ssl  00:31   0:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
    root      4260  0.0  3.4 358984 34288 ?        Ssl  00:31   0:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
    root      4455  1.1  9.6 760868 97260 ?        Ssl  00:31   0:14 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
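
flanneld is running with --kube-subnet-mgr, so it takes its network configuration from the Kubernetes API rather than etcd and writes the per-node lease to /run/flannel/subnet.env. A quick way to inspect that configuration (assuming flannel was installed from the standard kube-flannel.yml manifest, so the ConfigMap name below is an assumption):

    # the subnet lease flanneld received for this node (10.244.2.0/24 here)
    $ cat /run/flannel/subnet.env
    # the cluster-wide flannel config, including the backend type (vxlan uses UDP 8472)
    $ kubectl -n kube-system get configmap kube-flannel-cfg -o yaml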

Because of this network problem, the ClusterIP is also unreachable:

    $ curl 10.105.205.10:80
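
The ClusterIP load-balances across all three endpoints, so even with a healthy kube-proxy the curl hangs whenever the request is DNAT-ed to the pod on the other node. Curling the pod IPs directly from the node separates the Service layer from the overlay (the --max-time flag is only there to avoid waiting on the hang):

    # the pods on this node should answer; the pod on the other node should time out
    $ curl --max-time 3 10.244.2.2:80
    $ curl --max-time 3 10.244.1.2:80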

Any suggestions?

Thanks.

2 Answers

  • 1

    The Docker virtual bridge interface docker0 currently has the IP 172.17.0.1 on both hosts.

    But according to the docker/flannel integration guide, the docker0 virtual bridge should be on the flannel network on each host.

    The high-level workflow of the flannel/docker network integration is as follows:

    • Flannel creates /run/flannel/subnet.env from the etcd network configuration during flanneld startup.

    • Docker reads /run/flannel/subnet.env during dockerd startup, sets the --bip flag accordingly, and assigns an IP from the flannel network to docker0.

    For more details, see the docker/flannel integration documentation: http://docker-k8s-lab.readthedocs.io/en/latest/docker/docker-flannel.html#restart-docker-daemon-with-flannel-network
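
    As a rough sketch of that workflow (the values below are illustrative, based on this cluster's 10.244.0.0/16 network, and are not taken from the linked guide):

        # what flanneld writes on the node that holds the 10.244.2.0/24 subnet, for example:
        $ cat /run/flannel/subnet.env
        FLANNEL_NETWORK=10.244.0.0/16
        FLANNEL_SUBNET=10.244.2.1/24
        FLANNEL_MTU=8951
        FLANNEL_IPMASQ=true

        # dockerd can then be started against that subnet:
        $ source /run/flannel/subnet.env
        $ dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}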

  • 1

    I found the problem.

    Flannel uses UDP ports 8285 and 8472, which were blocked by the AWS security group. I had only opened the TCP ports.

    I enabled UDP ports 8285 and 8472, along with TCP 6443, 10250, and 10256.
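
    For reference, the equivalent rules via the AWS CLI might look like this (sg-xxxxxxxx stands in for the security group shared by the cluster nodes):

        # allow flannel traffic between nodes: 8472/udp (vxlan backend) and 8285/udp (udp backend)
        $ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
              --protocol udp --port 8472 --source-group sg-xxxxxxxx
        $ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
              --protocol udp --port 8285 --source-group sg-xxxxxxxx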
