@@ -10,7 +10,9 @@ content_template: templates/task
{{% capture overview %}}

- This page explains two different approaches setting up a highly available Kubernetes
+ {{< feature-state for_k8s_version="v1.11" state="beta" >}}
+
+ This page explains two different approaches to setting up a highly available Kubernetes
cluster using kubeadm:

- With stacked masters. This approach requires less infrastructure. etcd members
@@ -22,8 +24,8 @@ Your clusters must run Kubernetes version 1.11 or later.
{{< caution >}}
**Caution**: This page does not address running your cluster on a cloud provider.
- In a cloud environment, neither approach documented here works with services of type
- LoadBalancer, or with dynamic PersistentVolumes.
+ In a cloud environment, neither approach documented here works with Service objects
+ of type LoadBalancer, or with dynamic PersistentVolumes.
{{< /caution >}}

{{% /capture %}}
@@ -43,7 +45,7 @@ For both methods you need this infrastructure:
- SSH access from one device to all nodes in the system
- sudo privileges on all machines

- For the external etcd cluster only:
+ For the external etcd cluster only, you also need:

- Three additional machines for etcd members
@@ -83,7 +85,7 @@ run as root.
ssh-add ~/.ssh/path_to_private_key
```

- 1. SSH between nodes to check that the connection is working properly.
+ 1. SSH between nodes to check that the connection is working correctly.
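   If you use SSH agent forwarding, this check can be driven entirely from your
   main device. A minimal sketch, assuming the example node IPs 10.0.0.7 and
   10.0.0.8 used elsewhere on this page:

   ```sh
   # On your main device: start an agent and load the key.
   eval $(ssh-agent)
   ssh-add ~/.ssh/path_to_private_key

   # -A forwards the agent, so the first node can in turn reach the second.
   ssh -A 10.0.0.7

   # On 10.0.0.7: this succeeds only if agent forwarding is working.
   ssh 10.0.0.8 'hostname'
   ```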
**Notes:**
@@ -118,7 +120,7 @@ different configuration.
It is not recommended to use an IP address directly in a cloud environment.

- The load balancer must be able to communicate with all control plane node
+ The load balancer must be able to communicate with all control plane nodes
on the apiserver port. It must also allow incoming traffic on its
listening port.
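A raw TCP check is a quick way to verify the path through the load balancer.
A sketch, assuming `nc` is installed; the address and port are placeholders:

```sh
LOAD_BALANCER_DNS=lb.example.com   # placeholder
LOAD_BALANCER_PORT=6443            # placeholder
nc -v ${LOAD_BALANCER_DNS} ${LOAD_BALANCER_PORT}
# "Connection refused" is expected until an apiserver is running;
# a timeout means the load balancer cannot reach the control plane nodes.
```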
@@ -167,10 +169,10 @@ will fail the health check until the apiserver is running.
1. Run `sudo kubeadm init --config kubeadm-config.yaml`
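   For reference, a minimal sketch of what `kubeadm-config.yaml` might look like
   for the first node, assuming the v1alpha2 kubeadm API that ships with v1.11;
   `LOAD_BALANCER_DNS` and `LOAD_BALANCER_PORT` are the placeholders described
   above:

   ```sh
   cat > kubeadm-config.yaml <<EOF
   apiVersion: kubeadm.k8s.io/v1alpha2
   kind: MasterConfiguration
   kubernetesVersion: v1.11.0
   apiServerCertSANs:
   - "LOAD_BALANCER_DNS"
   api:
     controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
   networking:
     # This CIDR is a calico default. Substitute or remove for your CNI provider.
     podSubnet: "192.168.0.0/16"
   EOF
   ```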
- ### Copy certificates to other control plane nodes
+ ### Copy required files to other control plane nodes

- The following certificates were created when you ran `kubeadm init`. Copy these certificates
- to your other control plane nodes:
+ The following certificates and other required files were created when you ran `kubeadm init`.
+ Copy these files to your other control plane nodes:

- `/etc/kubernetes/pki/ca.crt`
- `/etc/kubernetes/pki/ca.key`
# This CIDR is a calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"

- 1. Replace the following variables in the template that was just created with
- values for your specific situation:
+ 1. Replace the following variables in the template with the appropriate values for your cluster (one way is sketched after the list below):
- `LOAD_BALANCER_DNS`
- `LOAD_BALANCER_PORT`
- `CP1_HOSTNAME`
- `CP1_IP`
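   One way to perform the substitution is with `sed`. A sketch with example
   values only; GNU `sed` is assumed for in-place editing:

   ```sh
   sed -i \
     -e 's/LOAD_BALANCER_DNS/lb.example.com/g' \
     -e 's/LOAD_BALANCER_PORT/6443/g' \
     -e 's/CP1_HOSTNAME/cp1/g' \
     -e 's/CP1_IP/10.0.0.8/g' \
     kubeadm-config.yaml
   ```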
- 1. Move the copied certificates to the proper locations
+ 1. Move the copied files to the correct locations:

```sh
USER=ubuntu # customizable
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
```
- 1. Run the kubeadm phase commands to bootstrap the kubelet
+ 1. Run the kubeadm phase commands to bootstrap the kubelet:

```sh
kubeadm alpha phase certs all --config kubeadm-config.yaml
```
# This CIDR is a calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"

- 1. Replace the following variables in the template that was just created with
- values for your specific situation:
+ 1. Replace the following variables in the template with the appropriate values for your cluster:
- `LOAD_BALANCER_DNS`
- `LOAD_BALANCER_PORT`
- `CP2_HOSTNAME`
- `CP2_IP`
- 1. Move the copied certificates to the proper locations:
+ 1. Move the copied files to the correct locations:

```sh
USER=ubuntu # customizable
systemctl start kubelet
```
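   To confirm the kubelet actually came up, a quick status check can help
   before moving on. A sketch; these commands are not part of the original
   procedure:

   ```sh
   systemctl status kubelet --no-pager
   journalctl -u kubelet --since "5 minutes ago" --no-pager
   ```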
- 1. Run the commands to add the node to the etcd cluster
+ 1. Run the commands to add the node to the etcd cluster:

```sh
CP0_IP=10.0.0.7
kubeadm alpha phase etcd local --config kubeadm-config.yaml
```
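   Membership can then be verified with `etcdctl`. A sketch assuming the v3 API
   and kubeadm's default certificate paths on the node:

   ```sh
   ETCDCTL_API=3 etcdctl \
     --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/peer.crt \
     --key /etc/kubernetes/pki/etcd/peer.key \
     --endpoints https://${CP0_IP}:2379 member list
   ```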
- 1. Deploy the control plane components and mark the node as a master
+ 1. Deploy the control plane components and mark the node as a master:

```sh
kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml
```
@@ -395,10 +395,10 @@ done
- Follow [these instructions](/docs/tasks/administer-cluster/setup-ha-etcd-with-kubeadm/)
to set up the etcd cluster.
- ### Copy certificates to other control plane nodes
+ ### Copy required files to other control plane nodes

- The following certificates were created when you created the cluster. Copy these
- certificates to your other control plane nodes:
+ The following certificates were created when you created the cluster. Copy them
+ to your other control plane nodes:

- `/etc/kubernetes/pki/etcd/ca.crt`
- `/etc/kubernetes/pki/apiserver-etcd-client.crt`
@@ -451,10 +451,10 @@ for your environment.
1. Run `kubeadm init --config kubeadm-config.yaml`

- ### Copy certificates
+ ### Copy required files to the correct locations

- The following certificates were created when you ran `kubeadm init`. Copy these certificates
- to your other control plane nodes:
+ The following certificates and other required files were created when you ran `kubeadm init`.
+ Copy these files to your other control plane nodes:

- `/etc/kubernetes/pki/ca.crt`
- `/etc/kubernetes/pki/ca.key`
@@ -463,8 +463,8 @@ to your other control plane nodes:
- `/etc/kubernetes/pki/front-proxy-ca.crt`
- `/etc/kubernetes/pki/front-proxy-ca.key`

- In the following example, replace
- `CONTROL_PLANE_IP` with the IP addresses of the other control plane nodes.
+ In the following example, replace the list of
+ `CONTROL_PLANE_IP` values with the IP addresses of the other control plane nodes.
```sh
USER=ubuntu # customizable
```
@@ -485,7 +485,7 @@ In the following example, replace
### Set up the other control plane nodes

- Verify the location of the certificates.
+ Verify the location of the copied files.
Your `/etc/kubernetes` directory should look like this:

- `/etc/kubernetes/pki/apiserver-etcd-client.crt`