
Commit a6fad87

Author: JENNIFER RONDEAU
Commit message: final copyedits
1 parent 53e0c6b commit a6fad87


1 file changed

+27
-27
lines changed


content/en/docs/setup/independent/high-availability.md

@@ -10,7 +10,9 @@ content_template: templates/task
 
 {{% capture overview %}}
 
-This page explains two different approaches setting up a highly available Kubernetes
+{{</* feature-state for_k8s_version="v1.11" state="beta" */>}}
+
+This page explains two different approaches to setting up a highly available Kubernetes
 cluster using kubeadm:
 
 - With stacked masters. This approach requires less infrastructure. etcd members
@@ -22,8 +24,8 @@ Your clusters must run Kubernetes version 1.11 or later.
 
 {{< caution >}}
 **Caution**: This page does not address running your cluster on a cloud provider.
-In a cloud environment, neither approach documented here works with services of type
-LoadBalancer, or with dynamic PersistentVolumes.
+In a cloud environment, neither approach documented here works with Service objects
+of type LoadBalancer, or with dynamic PersistentVolumes.
 {{< /caution >}}
 
 {{% /capture %}}
@@ -43,7 +45,7 @@ For both methods you need this infrastructure:
 - SSH access from one device to all nodes in the system
 - sudo privileges on all machines
 
-For the external etcd cluster only:
+For the external etcd cluster only, you also need:
 
 - Three additional machines for etcd members
 
@@ -83,7 +85,7 @@ run as root.
    ssh-add ~/.ssh/path_to_private_key
    ```
 
-1. SSH between nodes to check that the connection is working properly.
+1. SSH between nodes to check that the connection is working correctly.
 
 **Notes:**
 
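The SSH check in the hunk above can be sketched as a short loop. This is a dry run under assumed values: the node IPs and the `ubuntu` user are hypothetical placeholders, and each command is echoed rather than executed.

```shell
# Hypothetical node IPs and user; substitute your own.
NODE_IPS="10.0.0.7 10.0.0.8 10.0.0.9"
USER=ubuntu # customizable

for ip in ${NODE_IPS}; do
  # -A forwards the local ssh-agent, so from each node you can hop
  # onward without copying private keys around.
  # Dry run: drop the echo to actually connect.
  echo ssh -A "${USER}@${ip}"
done
```

Dropping the `echo` runs the real connections; `-A` matters because the page's flow relies on agent forwarding rather than distributing private keys.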
@@ -118,7 +120,7 @@ different configuration.
 
 It is not recommended to use an IP address directly in a cloud environment.
 
-The load balancer must be able to communicate with all control plane node
+The load balancer must be able to communicate with all control plane nodes
 on the apiserver port. It must also allow incoming traffic on its
 listening port.
 
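The reachability requirement above can be probed with a port check. This is a sketch under assumptions: `nc` is one possible tool, and the address and port are hypothetical placeholders; the command is echoed as a dry run.

```shell
# Hypothetical values; substitute your load balancer's address and port.
LOAD_BALANCER_DNS=lb.example.com
LOAD_BALANCER_PORT=6443

# Probe the listening port (dry run; drop the echo to execute). Before any
# apiserver is running behind the LB, expect the health check to fail.
echo nc -z -w 3 "${LOAD_BALANCER_DNS}" "${LOAD_BALANCER_PORT}"
```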
@@ -167,10 +169,10 @@ will fail the health check until the apiserver is running.
 
 1. Run `sudo kubeadm init --config kubeadm-config.yaml`
 
-### Copy certificates to other control plane nodes
+### Copy required files to other control plane nodes
 
-The following certificates were created when you ran `kubeadm init`. Copy these certificates
-to your other control plane nodes:
+The following certificates and other required files were created when you ran `kubeadm init`.
+Copy these files to your other control plane nodes:
 
 - `/etc/kubernetes/pki/ca.crt`
 - `/etc/kubernetes/pki/ca.key`
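The copy step above is typically a `scp` loop. This sketch is a dry run under assumed values: the destination IPs are hypothetical, only the two files visible in the hunk are listed, and each command is echoed rather than executed.

```shell
USER=ubuntu # customizable
# Hypothetical IPs of the other control plane nodes.
CONTROL_PLANE_IPS="10.0.0.8 10.0.0.9"

for ip in ${CONTROL_PLANE_IPS}; do
  # Dry run: drop the echoes to actually copy the files listed above
  # into the remote user's home directory.
  echo scp /etc/kubernetes/pki/ca.crt "${USER}@${ip}:"
  echo scp /etc/kubernetes/pki/ca.key "${USER}@${ip}:"
done
```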
@@ -238,8 +240,7 @@ done
    # This CIDR is a calico default. Substitute or remove for your CNI provider.
    podSubnet: "192.168.0.0/16"
 
-1. Replace the following variables in the template that was just created with
-   values for your specific situation:
+1. Replace the following variables in the template with the appropriate values for your cluster:
 
    - `LOAD_BALANCER_DNS`
    - `LOAD_BALANCER_PORT`
@@ -248,7 +249,7 @@ done
    - `CP1_HOSTNAME`
    - `CP1_IP`
 
-1. Move the copied certificates to the proper locations
+1. Move the copied files to the correct locations:
 
    ```sh
    USER=ubuntu # customizable
@@ -264,7 +265,7 @@ done
    mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
    ```
 
-1. Run the kubeadm phase commands to bootstrap the kubelet
+1. Run the kubeadm phase commands to bootstrap the kubelet:
 
    ```sh
    kubeadm alpha phase certs all --config kubeadm-config.yaml
@@ -330,8 +331,7 @@ done
    # This CIDR is a calico default. Substitute or remove for your CNI provider.
    podSubnet: "192.168.0.0/16"
 
-1. Replace the following variables in the template that was just created with
-   values for your specific situation:
+1. Replace the following variables in the template with the appropriate values for your cluster:
 
    - `LOAD_BALANCER_DNS`
    - `LOAD_BALANCER_PORT`
@@ -342,7 +342,7 @@ done
    - `CP2_HOSTNAME`
    - `CP2_IP`
 
-1. Move the copied certificates to the proper locations:
+1. Move the copied files to the correct locations:
 
    ```sh
    USER=ubuntu # customizable
@@ -368,7 +368,7 @@ done
    systemctl start kubelet
    ```
 
-1. Run the commands to add the node to the etcd cluster
+1. Run the commands to add the node to the etcd cluster:
 
    ```sh
    CP0_IP=10.0.0.7
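The join step above registers the new node with the existing etcd member before `kubeadm alpha phase etcd local` starts it. One plausible shape of that registration, sketched as a dry run: `CP0_IP` matches the hunk's example, the new member's name and IP are hypothetical, and etcdctl v2 syntax (`member add <name> <peer-url>`) is assumed.

```shell
# CP0_IP is from the hunk above; the CP1_* values are hypothetical placeholders.
CP0_IP=10.0.0.7
CP1_HOSTNAME=cp1
CP1_IP=10.0.0.8

# Dry run (drop the echo to execute): announce the new peer to the
# running etcd cluster so it is expected when the local member starts.
echo etcdctl member add "${CP1_HOSTNAME}" "https://${CP1_IP}:2380"
```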
@@ -380,7 +380,7 @@ done
    kubeadm alpha phase etcd local --config kubeadm-config.yaml
    ```
 
-1. Deploy the control plane components and mark the node as a master
+1. Deploy the control plane components and mark the node as a master:
 
    ```sh
    kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
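After `kubeconfig all`, the "deploy and mark as master" step typically continues with further phases. The phase names below are assumed from kubeadm v1.11's `alpha phase` command set, not quoted from this diff, and are echoed as a dry run.

```shell
# Dry run of the remaining phases (drop the echoes to execute);
# phase names assumed from kubeadm v1.11.
echo kubeadm alpha phase controlplane all --config kubeadm-config.yaml
echo kubeadm alpha phase mark-master --config kubeadm-config.yaml
```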
@@ -395,10 +395,10 @@ done
 - Follow [these instructions](/docs/tasks/administer-cluster/setup-ha-etcd-with-kubeadm/)
   to set up the etcd cluster.
 
-### Copy certificates to other control plane nodes
+### Copy required files to other control plane nodes
 
-The following certificates were created when you created the cluster. Copy these
-certificates to your other control plane nodes:
+The following certificates were created when you created the cluster. Copy them
+to your other control plane nodes:
 
 - `/etc/kubernetes/pki/etcd/ca.crt`
 - `/etc/kubernetes/pki/apiserver-etcd-client.crt`
@@ -451,10 +451,10 @@ for your environment.
 
 1. Run `kubeadm init --config kubeadm-config.yaml`
 
-### Copy certificates
+### Copy required files to the correct locations
 
-The following certificates were created when you ran `kubeadm init`. Copy these certificates
-to your other control plane nodes:
+The following certificates and other required files were created when you ran `kubeadm init`.
+Copy these files to your other control plane nodes:
 
 - `/etc/kubernetes/pki/ca.crt`
 - `/etc/kubernetes/pki/ca.key`
@@ -463,8 +463,8 @@ to your other control plane nodes:
 - `/etc/kubernetes/pki/front-proxy-ca.crt`
 - `/etc/kubernetes/pki/front-proxy-ca.key`
 
-In the following example, replace
-`CONTROL_PLANE_IP` with the IP addresses of the other control plane nodes.
+In the following example, replace the list of
+`CONTROL_PLANE_IP` values with the IP addresses of the other control plane nodes.
 
 ```sh
 USER=ubuntu # customizable
@@ -485,7 +485,7 @@ In the following example, replace the list of
 
 ### Set up the other control plane nodes
 
-Verify the location of the certificates.
+Verify the location of the copied files.
 Your `/etc/kubernetes` directory should look like this:
 
 - `/etc/kubernetes/pki/apiserver-etcd-client.crt`
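The verification step above can be automated with a simple presence check. The loop below covers only paths that appear in this page; on a machine that has not yet been set up, every file will (correctly) report as missing.

```shell
# Paths taken from the lists in this page.
REQUIRED_FILES="
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/admin.conf
"

# Print one status line per expected file.
for f in ${REQUIRED_FILES}; do
  if [ -f "$f" ]; then echo "ok      $f"; else echo "MISSING $f"; fi
done
```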
