
Kubernetes pod on Google Container Engine keeps restarting, is never ready

I am trying to deploy a Ghost blog on GKE, working from the persistent disks with WordPress tutorial. I have a working container that I can run manually on a GKE node:

docker run -d --name my-ghost-blog -p 2368:2368 us.gcr.io/my_project_id/my-ghost-blog
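
For context, a quick sanity check that the container is actually serving on the node could look like this (assuming the blog listens on 2368, as published above):

# on the GKE node, against the port published by docker run
curl http://localhost:2368/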

I can also correctly create a pod using the following command, from another tutorial:

kubectl run ghost --image=us.gcr.io/my_project_id/my-ghost-blog --port=2368

When I do this, I can curl the blog at its internal IP from inside the cluster, and I get the following output from kubectl describe pod:

Name:       ghosty-nqgt0
Namespace:      default
Image(s):     us.gcr.io/my_project_id/my-ghost-blog
Node:       very-long-node-name/10.240.51.18
Labels:       run=ghost
Status:       Running
Reason:
Message:
IP:       10.216.0.9
Replication Controllers:  ghost (1/1 replicas created)
Containers:
  ghosty:
    Image:  us.gcr.io/my_project_id/my-ghost-blog
    Limits:
      cpu:    100m
    State:    Running
      Started:    Fri, 04 Sep 2015 12:18:44 -0400
    Ready:    True
    Restart Count:  0
Conditions:
  Type    Status
  Ready   True
Events:
  ...
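
For context, the in-cluster check mentioned above might look like this, using the pod IP from the output (10.216.0.9) and the container port 2368:

# run from a node or another pod inside the cluster
curl http://10.216.0.9:2368/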

The problem comes when I instead try to create a pod from a yaml file, per the WordPress tutorial. Here is the yaml:

apiVersion: v1
kind: Pod
metadata:
  name: ghost
  labels:
    name: ghost
spec:
  containers:
    - image: us.gcr.io/my_project_id/my-ghost-blog
      name: ghost
      env:
        - name: NODE_ENV
          value: production
        - name: VIRTUAL_HOST
          value: myghostblog.com
      ports:
        - containerPort: 2368

When I run kubectl create -f ghost.yaml, the pod is created but never becomes ready:

> kubectl get pod ghost
NAME      READY     STATUS    RESTARTS   AGE
ghost     0/1       Running   11         3m

The pod restarts continuously, as confirmed by the output of kubectl describe pod ghost:

Name:       ghost
Namespace:      default
Image(s):     us.gcr.io/my_project_id/my-ghost-blog
Node:       very-long-node-name/10.240.51.18
Labels:       name=ghost
Status:       Running
Reason:
Message:
IP:       10.216.0.12
Replication Controllers:  <none>
Containers:
  ghost:
    Image:  us.gcr.io/my_project_id/my-ghost-blog
    Limits:
      cpu:    100m
    State:    Running
      Started:    Fri, 04 Sep 2015 14:08:20 -0400
    Ready:    False
    Restart Count:  10
Conditions:
  Type    Status
  Ready   False
Events:
  FirstSeen       LastSeen      Count From              SubobjectPath       Reason    Message
  Fri, 04 Sep 2015 14:03:20 -0400 Fri, 04 Sep 2015 14:03:20 -0400 1 {scheduler }                      scheduled Successfully assigned ghost to very-long-node-name
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD created   Created with docker id dbbc27b4d280
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD started   Started with docker id dbbc27b4d280
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      created   Created with docker id ceb14ba72929
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      started   Started with docker id ceb14ba72929
  Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD pulled    Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      started   Started with docker id 0b8957fe9b61
  Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      created   Created with docker id 0b8957fe9b61
  Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      created   Created with docker id edaf0df38c01
  Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      started   Started with docker id edaf0df38c01
  Fri, 04 Sep 2015 14:03:50 -0400 Fri, 04 Sep 2015 14:03:50 -0400 1 {kubelet very-long-node-name} spec.containers{ghost}      started   Started with docker id d33f5e5a9637
...

This create/start cycle continues forever unless I kill the pod. The only difference from the successful pod is the missing replication controller. I don't think that is the problem, since the tutorial doesn't mention an rc.

Why is this happening? How can I create a successful pod from a config file? And where would I find more verbose logs about what is going on?

3 Answers

  • 2

    Maybe you could use a different restart policy in your yaml file?

    What you have now is, I think, equivalent to

    - restartPolicy: Never
    

    without a replication controller. You could try adding this line to your yaml and setting it to Always (which would give you an RC) or to OnFailure; a minimal sketch of where it goes follows the link below.

    https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pod-states.md#restartpolicy
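
    A minimal sketch of that suggestion, assuming the pod spec from the question (restartPolicy is a pod-level field, a sibling of containers, not a list entry):

    spec:
      restartPolicy: OnFailure   # or Always; placed alongside containers, not under it
      containers:
        - image: us.gcr.io/my_project_id/my-ghost-blog
          name: ghost
          ports:
            - containerPort: 2368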

  • 0

    The container logs may be useful; get them with kubectl logs.

    Usage:

    kubectl logs [-p] POD [-c CONTAINER]

    http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_logs.html
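
    For the pod in the question, that might look like this (pod name ghost from above; -p returns the logs of the previously terminated container, which is the one that keeps crashing here):

    kubectl logs ghost       # logs from the current container
    kubectl logs -p ghost    # logs from the previous, crashed container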

  • 0

    If the same docker image works via kubectl run but does not work in this pod, then something is wrong with the pod spec. Compare the full output of the pod created from your spec with that of the pod created by the rc, by running kubectl get pods <name> -o yaml for each, to see how the two differ. A shot in the dark: could the env vars specified in the pod spec be causing it to crash on startup?
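
    A sketch of that comparison, using the pod names from the question:

    kubectl get pods ghost -o yaml > pod-from-spec.yaml          # pod created from ghost.yaml
    kubectl get pods ghosty-nqgt0 -o yaml > pod-from-run.yaml    # pod created by kubectl run
    diff pod-from-spec.yaml pod-from-run.yaml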
