<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
     width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/docs/user-guide/known-issues.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## Known Issues

This document summarizes known issues with existing Kubernetes releases.

Please consult this document before filing new bugs.

### Release 1.0.1

 * `exec` liveness/readiness probes leak resources because Docker's `exec` implementation leaks resources (#10659)
 * `docker load` sometimes hangs, which prevents the `kube-apiserver` from starting. Restarting the Docker daemon should fix the issue (#10868)
 * The kubelet on the master node doesn't register with the `kube-apiserver`, so statistics aren't collected for master daemons (#10891)
 * Heapster and InfluxDB both leak memory (#10653)
 * Heapster reports incorrect node CPU/memory limit metrics (https://github.com/GoogleCloudPlatform/heapster/issues/399)
 * Services with `type=LoadBalancer` cannot use port `10250` because of Google Compute Engine firewall limitations
 * Add-on services cannot be created or deleted via `kubectl` or the Kubernetes API (#11435)
 * If a pod with a GCE PD is created and deleted in rapid succession, it may fail to attach or mount correctly, leaving the PD data inaccessible (or corrupted, in the worst case) (https://github.com/GoogleCloudPlatform/kubernetes/issues/11231#issuecomment-122049113)
   * Suggested temporary workaround: introduce a 1-2 minute delay between deleting and recreating a pod that uses a PD on the same node.
 * Errors encountered while detaching a GCE PD can prevent the PD from ever being detached (#11321)
 * GCE PDs may sometimes fail to attach (#11302)
 * If multiple pods use the same RBD volume in read-write mode, data on the RBD volume may become corrupted. This problem has been observed in environments where both the apiserver and etcd rebooted and pods were redistributed.
   * A workaround is to ensure that no other Ceph client is using the RBD volume before mapping the RBD image in read-write mode. For example, `rados -p poolname listwatchers image_name.rbd` lists the RBD clients that are mapping the image.
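The suggested PD workaround above can be sketched as a short shell sequence; the pod name and manifest path below are placeholders, not taken from the issue:

```shell
# Sketch of the temporary workaround for the GCE PD attach/mount issue:
# wait 1-2 minutes between deleting and recreating a pod that uses a PD
# on the same node. "pd-pod" and "pd-pod.yaml" are placeholder names.
kubectl delete pod pd-pod
sleep 120   # give the PD time to detach and the old mount to clean up
kubectl create -f pd-pod.yaml
```

Because this relies purely on timing, treat it as a stopgap rather than a guarantee.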


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[]()
<!-- END MUNGE: GENERATED_ANALYTICS -->