First time using GCE; previously I used k8s and kops on AWS.
I have a PV and PVC set up, and both show status Bound.
My first deployment/pod won't run; most of the yaml config was copied from a working setup in AWS.
When I remove the volume from the deployment, it starts and reaches the Running state.
With the volume attached, it stalls at: Start Time: <not yet started> Phase: Pending Status: ContainerCreating
There is no logging at all from the container, not a single line.
EDIT: finally found something useful in the pod events rather than the container logs:
MountVolume.SetUp failed for volume "tio-pv-ssl" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/c64b2284-de81-11e8-9ead-42010a9400a0/volumes/kubernetes.io~nfs/tio-pv-ssl --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs 10.148.0.6:/ssl /var/lib/kubelet/pods/c64b2284-de81-11e8-9ead-42010a9400a0/volumes/kubernetes.io~nfs/tio-pv-ssl
Output: Running scope as unit: run-r68f0f0ac5bf54be2b47ac60d9e533712.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs 10.148.0.6:/ssl /var/lib/kubelet/pods/c64b2284-de81-11e8-9ead-42010a9400a0/volumes/kubernetes.io~nfs/tio-pv-ssl]
Output: mount.nfs: access denied by server while mounting 10.148.0.6:/ssl
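"mount.nfs: access denied by server" usually means the server's export list doesn't cover the requested path or the client's address. As an illustrative sketch only (the subnet and mount options here are assumptions, not taken from the question), an export covering the data root on the server might look like:

```
# /etc/exports on 10.148.0.6 (hypothetical subnet and options)
/data  10.148.0.0/20(rw,sync,no_subtree_check)
```

Running `showmount -e 10.148.0.6` from a cluster node lists what the server actually exports, which quickly shows whether `/ssl` is a valid mount path or not.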
The NFS server 10.148.0.6 was set up using _362127; it appears to be running fine, and the /ssl folder exists under the NFS data root (/data/ssl).
Kubectl status
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
tio-pv-ssl 1000Gi RWX Retain Bound core/tio-pv-claim-ssl standard 17m
kubectl get pvc --namespace=core
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
tio-pv-claim-ssl Bound tio-pv-ssl 1000Gi RWX standard 18m
kubectl get pods --namespace=core
NAME READY STATUS RESTARTS AGE
proxy-deployment-64b9cdb55d-8htjf 0/1 ContainerCreating 0 13m
Volume Yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tio-pv-ssl
spec:
  capacity:
    storage: 1000Gi
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.148.0.6
    path: "/ssl"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tio-pv-claim-ssl
  namespace: core
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  volumeName: tio-pv-ssl
  storageClassName: standard
Deployment yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: proxy-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
        - name: proxy-ctr
          image: asia.gcr.io/xyz/nginx-proxy:latest
          resources:
            limits:
              cpu: "500m"
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 256Mi
          ports:
            - containerPort: 80
            - containerPort: 443
          volumeMounts:
            - name: tio-ssl-storage
              mountPath: "/etc/nginx/ssl"
      volumes:
        - name: tio-ssl-storage
          persistentVolumeClaim:
            claimName: tio-pv-claim-ssl
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
---
apiVersion: v1
kind: Service
metadata:
  name: proxyservice
  namespace: core
  labels:
    app: proxy
spec:
  ports:
    - port: 80
      name: port-http
      protocol: TCP
    - port: 443
      name: port-https
      protocol: TCP
  selector:
    app: proxy
  type: LoadBalancer
1 Answer
Solved my own problem once I found where the logs were hiding.
The path should be the full path on the server, not relative to the NFS data folder.
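In other words, since the exported directory lives at /data/ssl on the server, the PV's nfs.path must use the full server-side path. A corrected PersistentVolume spec, keeping the same names as above, would be:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tio-pv-ssl
spec:
  capacity:
    storage: 1000Gi
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.148.0.6
    path: "/data/ssl"   # full path on the NFS server, not relative to the export root
```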