
Kubernetes Cinder volumes do not mount with cloud-provider=openstack


I am trying to use the cinder plugin for kubernetes to create both statically defined PVs as well as StorageClasses, but I see no activity between my cluster and cinder for creating/mounting the devices.

Kubernetes version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

The command the kubelet was started with, and its status:

systemctl status kubelet -l
● kubelet.service - Kubelet service
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
  Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
  Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
 Main PID: 2408 (kubelet)
   CGroup: /system.slice/kubelet.service
           ├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf

Here is my cloud.conf file:

# cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne

It appears that k8s is able to communicate successfully with openstack. From /var/log/messages:

kubelet: I1020 11:43:51.770948    2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642    2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679    2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688    2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332    2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]

My PV/PVC yaml files and cinder list output:

# cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jk-test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
    fsType: ext4

# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: "test"
# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available |              jk-cinder              |  10  |      -      |  false   |

As seen above, cinder reports that the device with the ID referenced in the pv yaml file is available. When I create them, things seem to work:

NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv/jk-test   10Gi       RWO           Retain          Bound     default/myclaim             5h
NAME               STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
pvc/myclaim        Bound     jk-test   10Gi       RWO           5h

Then I try to create a pod that uses the pvc, but it fails to mount the volume:

# cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim

Here is the status of the pod:

3h            46s             109     {kubelet jk-kube2-master}                       Warning         FailedMount     Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
  3h            46s             109     {kubelet jk-kube2-master}                       Warning         FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]

I have verified that my openstack provider is exposing both the cinder v1 and v2 APIs, and the earlier openstack_instances logs show the nova API is reachable. Despite that, I never see any attempt on k8s' part to communicate with cinder or nova to mount the volume.

Here are what I believe to be the relevant log messages regarding the failure to mount:

kubelet: I1020 06:51:11.840341   24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424   24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474   24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176   24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330   24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361   24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390   24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420   24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566   24027 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950\"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.

Is there something simple I am missing? I followed the instructions here: k8s - mysql-cinder-pd example, but failed to get any communication going. As another data point, I tried defining a StorageClass as provided by k8s; here are the associated StorageClass and PVC files:

# cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamicclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "gold"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

The StorageClass reports success, but when I try to create the PVC it gets stuck in the "Pending" state and reports "no volume plugin matched":

# kubectl get storageclass
NAME      TYPE
gold      kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name:           dynamicclaim
Namespace:      default
Status:         Pending
Volume:
Labels:         <none>
Capacity:
Access Modes:
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  1d            15s             5867    {persistentvolume-controller }                  Warning         ProvisioningFailed      no volume plugin matched

This contradicts what is in the logs for the plugins that were loaded:

grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517   22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
jk-kube2-master kubelet: I1019 11:39:41.382741   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853   22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"

And I have the nova and cinder clients installed on my machine:

# which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder

Any help is appreciated; I am sure I am missing something simple here.

Thanks!

2 Answers

  • 1

    Cinder volumes definitely work with Kubernetes 1.5.0 and 1.5.3 (I think they also worked with 1.4.6, which was the first version I tried; I don't know about earlier versions).

    Short answer

    In your Pod yaml file, you are missing the volumeMounts: section.
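    Applied to the testPod.yaml from the question, a minimal corrected version could look like the sketch below. The mountPath /data is only a placeholder assumption; use whatever path your front-end application actually expects:

        kind: Pod
        apiVersion: v1
        metadata:
          name: jk-test3
          labels:
            name: jk-test
        spec:
          containers:
            - name: front-end
              image: example-front-end:latest
              ports:
                - hostPort: 6000
                  containerPort: 3000
              # the missing section: mount the claimed volume into the container
              volumeMounts:
                - name: jk-test
                  mountPath: /data   # placeholder path, an assumption
          volumes:
            - name: jk-test
              persistentVolumeClaim:
                claimName: myclaim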

    Longer answer

    First possibility: no PV or PVC

    Actually, when you already have an existing cinder volume, you can simply use a Pod (or a Deployment); no PV or PVC is needed. Example:

        apiVersion: extensions/v1beta1
        kind: Deployment
        metadata:
          name: vol-test
          labels:
            fullname: vol-test
        spec:
          strategy:
            type: Recreate
          replicas: 1
          template:
            metadata:
              labels:
                fullname: vol-test
            spec:
              containers:
                - name: nginx
                  image: "nginx:1.11.6-alpine"
                  imagePullPolicy: IfNotPresent
                  args:
                    - /bin/sh
                    - -c
                    - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
                  ports:
                    - name: http
                      containerPort: 80
                  volumeMounts:
                    - name: data
                      mountPath: /usr/share/nginx/html/
              volumes:
                - name: data
                  cinder:
                    volumeID: e143368a-440a-400f-b8a4-dd2f46c51888

    This will create a Deployment and a Pod, and the cinder volume will be mounted into the nginx container. To verify that the volume is in use, you can edit a file inside the nginx container under the /usr/share/nginx/html/ directory and then stop the container. Kubernetes will create a new container, and in it the files under /usr/share/nginx/html/ will be the same as they were in the stopped container.

    After you delete the Deployment resource, the cinder volume is not deleted, but it is detached from the vm.

    Second possibility: with a PV and a PVC

    The other possibility, if you already have an existing cinder volume, is to use PV and PVC resources. You said you want to use a storage class, although the Kubernetes documentation does not require one:

    A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class

    source

    An example storage class is:

        kind: StorageClass
        apiVersion: storage.k8s.io/v1beta1
        metadata:
          # to be used as value for annotation:
          # volume.beta.kubernetes.io/storage-class
          name: cinder-gluster-hdd
        provisioner: kubernetes.io/cinder
        parameters:
          # openstack volume type
          type: gluster_hdd
          # openstack availability zone
          availability: nova

    Then, use your existing cinder volume with ID 48d2d1e6-e063-437a-855f-8b62b640a950 in a PV:

        apiVersion: v1
        kind: PersistentVolume
        metadata:
          # name of a pv resource visible in Kubernetes, not the name of
          # a cinder volume
          name: pv0001
          labels:
            pv-first-label: "123"
            pv-second-label: abc
          annotations:
            volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
        spec:
          capacity:
            storage: 1Gi
          accessModes:
            - ReadWriteOnce
          persistentVolumeReclaimPolicy: Retain
          cinder:
            # ID of cinder volume
            volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950

    Then create a PVC whose label selector matches the labels of the PV:

        kind: PersistentVolumeClaim
        apiVersion: v1
        metadata:
          name: vol-test
          labels:
            pvc-first-label: "123"
            pvc-second-label: abc
          annotations:
            volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
        spec:
          accessModes:
            # the volume can be mounted as read-write by a single node
            - ReadWriteOnce
          resources:
            requests:
              storage: "1Gi"
          selector:
            matchLabels:
              pv-first-label: "123"
              pv-second-label: abc

    and then a Deployment:

        apiVersion: extensions/v1beta1
        kind: Deployment
        metadata:
          name: vol-test
          labels:
            fullname: vol-test
            environment: testing
        spec:
          strategy:
            type: Recreate
          replicas: 1
          template:
            metadata:
              labels:
                fullname: vol-test
                environment: testing
            spec:
              nodeSelector:
                "is_worker": "true"
              containers:
                - name: nginx-exist-vol
                  image: "nginx:1.11.6-alpine"
                  imagePullPolicy: IfNotPresent
                  args:
                    - /bin/sh
                    - -c
                    - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
                  ports:
                    - name: http
                      containerPort: 80
                  volumeMounts:
                    - name: data
                      mountPath: /usr/share/nginx/html/
              volumes:
                - name: data
                  persistentVolumeClaim:
                    claimName: vol-test

    After you delete the k8s resources, the cinder volume is not deleted, but it is detached from the vm.

    Using a PV lets you set persistentVolumeReclaimPolicy.
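    For illustration, here is the relevant PV field with the values it accepts; note that not every volume plugin supports every policy, and Retain below is only an example choice:

        spec:
          # what happens to the volume once its claim is released:
          #   Retain  - keep the volume and its data for manual reclamation
          #   Recycle - scrub the volume contents and make it available again
          #   Delete  - delete the backing storage (e.g. the cinder volume)
          persistentVolumeReclaimPolicy: Retain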

    Third possibility: no cinder volume created yet

    If you don't have a cinder volume created yet, Kubernetes can create it for you. In that case you have to provide a PVC resource. I won't describe this variant, since it was not asked about.

    Disclaimer

    I suggest that anyone interested in finding the best option experiment with and compare these approaches themselves. Also, I used label names like pv-first-label and pvc-first-label only to make the examples easier to follow; you can use, e.g., first-label everywhere.

  • 2

    I suspect the dynamic StorageClass approach does not work because the Cinder provisioner has not been implemented yet, given the following statement in the documentation (http://kubernetes.io/docs/user-guide/persistent-volumes/#provisioner):

    Storage classes have a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. During beta, the available provisioner types are kubernetes.io/aws-ebs and kubernetes.io/gce-pd

    As for why the static approach using the Cinder volume ID does not work, I ran into exactly the same problem. Kubernetes 1.2 seems to work fine; 1.3 and 1.4 do not. This appears to coincide with a major change to PersistentVolume handling in 1.3-beta2 (https://github.com/kubernetes/kubernetes/pull/26801):

    A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if the attach/detach controller is not enabled). (#26801, @saad-ali) This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes unmount/detach from the syncPod() path, so volume cleanup never blocks the syncPod loop.
