
Why does the actual CPU utilization percentage exceed the pod CPU limit in Kubernetes?


I am running several kubernetes pods in my cluster (10 nodes). Each pod contains only one container, which hosts one worker process. I have specified the CPU "limits" and "requests" for the container. The following is the description of one pod running on a node (crypt12).

Name:           alexnet-worker-6-9954df99c-p7tx5
Namespace:      default
Node:           crypt12/172.16.28.136
Start Time:     Sun, 15 Jul 2018 22:26:57 -0400
Labels:         job=worker
                name=alexnet
                pod-template-hash=551089557
                task=6
Annotations:    <none>
Status:         Running
IP:             10.38.0.1
Controlled By:  ReplicaSet/alexnet-worker-6-9954df99c
Containers:
  alexnet-v1-container:
    Container ID:  docker://214e30e87ed4a7240e13e764200a260a883ea4550a1b5d09d29ed827e7b57074
    Image:         alexnet-tf150-py3:v1
    Image ID:      docker://sha256:4f18b4c45a07d639643d7aa61b06bfee1235637a50df30661466688ab2fd4e6d
    Port:          5000/TCP
    Host Port:     0/TCP
    Command:
      /usr/bin/python3
      cifar10_distributed.py
    Args:
      --data_dir=xxxx

    State:          Running
      Started:      Sun, 15 Jul 2018 22:26:59 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     800m
      memory:  6G
    Requests:
      cpu:        800m
      memory:     6G
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hfnlp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-hfnlp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hfnlp
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  kubernetes.io/hostname=crypt12
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

The following is the output when I run "kubectl describe node crypt12":

Name:               crypt12
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=crypt12
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Wed, 11 Jul 2018 23:07:41 -0400
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Mon, 16 Jul 2018 16:25:43 -0400   Wed, 11 Jul 2018 22:57:22 -0400   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Mon, 16 Jul 2018 16:25:43 -0400   Wed, 11 Jul 2018 22:57:22 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 16 Jul 2018 16:25:43 -0400   Wed, 11 Jul 2018 22:57:22 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 16 Jul 2018 16:25:43 -0400   Wed, 11 Jul 2018 22:57:22 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 16 Jul 2018 16:25:43 -0400   Wed, 11 Jul 2018 22:57:42 -0400   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  172.16.28.136
  Hostname:    crypt12
Capacity:
 cpu:                8
 ephemeral-storage:  144937600Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             8161308Ki
 pods:               110
Allocatable:
 cpu:                8
 ephemeral-storage:  133574491939
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             8058908Ki
 pods:               110
System Info:
 Machine ID:                 f0444e00ba2ed20e5314e6bc5b0f0f60
 System UUID:                37353035-3836-5355-4530-32394E44414D
 Boot ID:                    cf2a9daf-c959-4c7e-be61-5e44a44670c4
 Kernel Version:             4.4.0-87-generic
 OS Image:                   Ubuntu 16.04.3 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.11.0
 Kube-Proxy Version:         v1.11.0
Non-terminated Pods:         (3 in total)
  Namespace                  Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                ------------  ----------  ---------------  -------------
  default                    alexnet-worker-6-9954df99c-p7tx5    800m (10%)    800m (10%)  6G (72%)         6G (72%)
  kube-system                kube-proxy-7kdkd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-dpclj                     20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       820m (10%)  800m (10%)
  memory    6G (72%)    6G (72%)
Events:     <none>

As shown in the node description (the "Non-terminated Pods" section), the CPU limit is 10%. However, when I run the "ps" or "top" command on the node (crypt12), the CPU utilization of the worker process exceeds 10% (around 20%). Why is this? Can anyone shed light on it?

Update: I found a github issue discussion where I found the answer to my question: the cpu percentage from "kubectl describe node" is "CPU-limits / # of cores". Since I set the CPU limit to 0.8, 10% is the result of 0.8 / 8.
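The arithmetic in this update can be sketched as a quick sanity check (the variable names are just illustrative; the values come from the describe outputs above):

```python
# How `kubectl describe node` derives its CPU percentage, versus the
# per-core percentage that `top` reports. Values from the outputs above.

node_cores = 8          # Allocatable cpu on crypt12
limit_millicores = 800  # pod CPU limit (800m = 0.8 cores)

# describe-node percentage: the limit relative to the whole node
describe_pct = limit_millicores / (node_cores * 1000) * 100  # -> 10.0

# top-style percentage: the limit relative to a single core
top_pct = limit_millicores / 1000 * 100  # -> 80.0

print(describe_pct, top_pct)
```

So the "800m (10%)" line and a ~20% reading in top are consistent: 20% of one core is well below the 80%-of-one-core (0.8-core) limit.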

2 Answers

  • 0

    I found a github issue discussion where I found the answer to my question: the cpu percentage from "kubectl describe node" is "CPU-limits / # of cores". Since I set the CPU limit to 0.8, 10% is the result of 0.8 / 8.
    Here is the link: https://github.com/kubernetes/kubernetes/issues/24925

  • 1

    First, by default top shows percentage utilization per core. So with 8 cores you can reach up to 800% utilization.

    If you are reading the latest stats, it may also be related to the fact that your node runs more processes than just your pod. Think of kube-proxy, the kubelet, and any other controllers. GKE additionally runs the dashboard and calls the API for stats.

    Also note that CPU usage is accounted over 100ms periods. The container's utilization can momentarily exceed 10%, but averaged over such a period it cannot use more than its allowed share.

    In the official documentation it says:

    spec.containers[].resources.limits.cpu is converted to its millicore value and multiplied by 100. The resulting value is the total amount of CPU time that a container can use every 100ms. A container cannot use more than its share of CPU time during this interval.
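    The conversion described in that quote can be sketched as follows. This is only an illustration of the arithmetic; in reality the kubelet performs it when it writes the cgroup CFS bandwidth settings (cpu.cfs_quota_us and cpu.cfs_period_us) for the container:

```python
# Sketch of the millicore -> CFS quota conversion quoted above.
# Kubernetes uses a 100ms scheduling period and derives the per-period
# quota from the limit: quota_us = millicores * 100.

CFS_PERIOD_US = 100_000  # 100ms CFS scheduling period, in microseconds

def cfs_quota_us(limit_millicores: int) -> int:
    """CPU time (microseconds) the container may use per 100ms period."""
    return limit_millicores * 100

quota = cfs_quota_us(800)          # the 800m limit from the pod spec
fraction = quota / CFS_PERIOD_US   # fraction of one core, on average

print(quota, fraction)
```

    With an 800m limit the container gets 80000µs of CPU time out of every 100000µs period, i.e. at most 0.8 cores averaged over each period, even though it may burst higher within a period.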
