From @jberkus on November 11, 2016 21:45
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):
No.
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
PetSet, Statefulset, PV, persistent volume, AWS
This is similar to issue #36589 except that (a) it's a StatefulSet, and (b) the kubelets were not restarted.
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:42:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
OS (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Kernel (e.g. uname -a):
Linux ip-172-31-47-16.us-west-2.compute.internal 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
kubeadm version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.2.380+85fe0f1aadf91e", GitCommit:"85fe0f1aadf91e134102cf3c01a9eed11a7e257f", GitTreeState:"clean", BuildDate:"2016-11-02T14:58:17Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
What happened:
All EC2 nodes are set up in an IAM role with EBS management access. To test this, I created a regular PV, which was created successfully.
1. Built a 4-node cluster on EC2 using the kubeadm instructions for CentOS, including kubeadm init --cloud-provider=aws.
2. Had to manually fix SSL certs by creating hard links, due to this bug: #36150.
3. Added the Weave network.
4. Created this StatefulSet: https://github.com/jberkus/atomicdb/blob/master/patroni_petset/kubernetes/ps-patroni.yaml (supporting objects are in the same dir; a sketch of the volume-claim section follows this list).
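For context, each pod's PVC comes from the StatefulSet's volumeClaimTemplates, which is what drives the dynamic EBS provisioning described above. A minimal sketch of the relevant structure, assuming the 1.4-era alpha storage-class annotation (names, image, and sizes here are illustrative; the actual manifest is in the linked repo):

apiVersion: apps/v1beta1        # on a 1.4 server this object type was still PetSet (apps/v1alpha1)
kind: StatefulSet
metadata:
  name: patroni
spec:
  serviceName: patroni
  replicas: 3
  template:
    metadata:
      labels:
        app: patroni
    spec:
      containers:
      - name: patroni
        image: postgres:9.5     # illustrative image
        volumeMounts:
        - name: pgdata          # matches the unmounted volume named in the events below
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: pgdata
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything   # 1.4-era dynamic-provisioning hook
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi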
The Service, PVCs, and PVs were created successfully. Checking in AWS, the EBS volumes were created and are available.
The pods failed due to an inability to mount the PVs:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
20m 20m 1 {default-scheduler } Normal Scheduled Successfully assigned patroni-0 to ip-172-31-43-85
18m 1m 9 {kubelet ip-172-31-43-85} Warning FailedMount Unable to mount volumes for pod "patroni-0_default(a8e441ac-a854-11e6-bb5c-06027ce5ae69)": timeout expired waiting for volumes to attach/mount for pod "patroni-0"/"default". list of unattached/unmounted volumes=[pgdata]
18m 1m 9 {kubelet ip-172-31-43-85} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "patroni-0"/"default". list of unattached/unmounted volumes=[pgdata]
20m 41s 18 {kubelet ip-172-31-43-85} Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/aws-ebs/a8e441ac-a854-11e6-bb5c-06027ce5ae69-pvc-a1c46983-a854-11e6-bb5c-06027ce5ae69" (spec.Name: "pvc-a1c46983-a854-11e6-bb5c-06027ce5ae69") pod "a8e441ac-a854-11e6-bb5c-06027ce5ae69" (UID: "a8e441ac-a854-11e6-bb5c-06027ce5ae69") with: mount failed: exit status 32
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-b893ce0c /var/lib/kubelet/pods/a8e441ac-a854-11e6-bb5c-06027ce5ae69/volumes/kubernetes.io~aws-ebs/pvc-a1c46983-a854-11e6-bb5c-06027ce5ae69 [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-b893ce0c does not exist
20m 39s 18 {controller-manager } Warning FailedMount Failed to attach volume "pvc-a1c46983-a854-11e6-bb5c-06027ce5ae69" on node "ip-172-31-43-85" with: error finding instance ip-172-31-43-85: instance not found
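Reading the events together: the controller-manager's attach fails with "instance not found", so the EBS volume is never attached, and the kubelet's bind mount then fails because the device path under /var/lib/kubelet/plugins/... was never created. As a sanity check (the volume ID is taken from the events above; the command itself is illustrative, not something from the original report), the attachment state can be confirmed from the AWS side:

aws ec2 describe-volumes --volume-ids vol-b893ce0c \
  --query 'Volumes[0].{State:State,Attachments:Attachments}'
# "State": "available" with an empty Attachments list would confirm the attach never happened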
What you expected to happen:
PVs should have mounted on the pod containers, and the remaining pods should have come up.
How to reproduce it (as minimally and precisely as possible):
Follow steps above.
Anything else do we need to know:
kubectl logs and journalctl -xe don't show any errors related to PVs or mounts.
I can't kubectl exec into the pod to check paths, because it's down.
The same StatefulSet works fine when it uses only EmptyDir().
I tried starting the pod on the kube-master node, thinking the issue was a failure to pass --cloud-provider to the kubelets. However, it also won't start on the master node, with the same error.
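Given the "instance not found" error above, one plausible line of investigation (these commands are illustrative, not something I ran) is to confirm that every kubelet actually received --cloud-provider=aws, and that the registered node names resolve to EC2 instances via their private DNS names, which is how the AWS cloud provider looks instances up:

systemctl cat kubelet | grep -- --cloud-provider     # expect --cloud-provider=aws on each node
kubectl get nodes -o wide                            # node names should match EC2 private DNS hostnames
aws ec2 describe-instances \
  --filters "Name=private-dns-name,Values=ip-172-31-43-85.*" \
  --query 'Reservations[].Instances[].InstanceId'    # empty output would explain "instance not found"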
Copied from original issue: kubernetes/kubernetes#36670