Commit 2130f66
docs: Fix issues found in Queeny's test (#507)
* docs/dind: add instructions for XFS filesystems
* docs: use `kubectl --watch` instead of `watch kubectl` in tutorials
* docs/aws: add detailed instructions for requirements
* docs/dind: add notes for accessing the TiDB cluster with the MySQL client
* docs: add sample output of terraform
* docs/aws: minor wording updates
* Update docs/local-dind-tutorial.md (Co-Authored-By: Keke Yi <[email protected]>)
* docs: update instructions for macOS & update argument in sample
* docs: apply suggestions from code review (Co-Authored-By: Keke Yi <[email protected]>)
* docs/aws: fix syntax of introduction
* docs/aliyun: update instructions when deploying with terraform
* docs/aws: adjust instructions to access DB and Grafana & clean up
* Apply suggestions from code review (Co-Authored-By: Keke Yi <[email protected]>)
* docs/dind: update contents to fix issues found in the test
* docs/aws: add an introduction sentence for terraform installation
* docs/dind: point out that the XFS issue only applies to Linux users
* docs/dind: make port-forward instruction clearer
* Apply suggestions from code review (Co-Authored-By: Keke Yi <[email protected]>)
* docs/dind: make delete instructions clearer
* docs/aws: update instructions for customizing parameters
* docs/dind: clean up
* docs/aws: add examples and adjust order of sections
1 parent cb9badc commit 2130f66

File tree

5 files changed: +257 −66 lines

deploy/aliyun/README.md (+7 −4)

````diff
@@ -44,15 +44,18 @@ The `variables.tf` file contains default settings of variables used for deployin
 Apply the stack:
 
 ```shell
+# Get the code
 $ git clone https://github.com/pingcap/tidb-operator
-$ cd tidb-operator/deploy/alicloud
+$ cd tidb-operator/deploy/aliyun
+
+# Apply the configs, note that you must answer "yes" to `terraform apply` to continue
 $ terraform init
 $ terraform apply
 ```
 
 `terraform apply` will take 5 to 10 minutes to create the whole stack, once complete, basic cluster information will be printed:
 
-> **Note:** You can use the `terraform output` command to get this information again.
+> **Note:** You can use the `terraform output` command to get the output again.
 
 ```
 Apply complete! Resources: 3 added, 0 changed, 1 destroyed.
@@ -82,7 +85,7 @@ $ helm ls
 
 ## Access the DB
 
-You can connect the TiDB cluster via the bastion instance, all necessary information are in the output printed after installation is finished:
+You can connect to the TiDB cluster via the bastion instance; all necessary information is in the output printed after installation is finished (replace the `<>` parts with values from the output):
 
 ```shell
 $ ssh -i credentials/<cluster_name>-bastion-key.pem root@<bastion_ip>
@@ -106,7 +109,7 @@ To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in `
 This may take a while to complete, watch the process using command:
 
 ```
-watch kubectl get pods --namespace tidb -o wide
+kubectl get pods --namespace tidb -o wide --watch
 ```
 
 ## Scale TiDB cluster
````
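As the Aliyun hunks above show, deployment defaults live in `variables.tf`. Terraform also lets you override such defaults without editing that file, for example via a `terraform.tfvars` file next to it. A minimal, hypothetical sketch — the variable names `cluster_name` and `tidb_version` are taken from this README's text, but verify them against the `variables.tf` in your checkout before use:

```hcl
# Hypothetical terraform.tfvars for deploy/aliyun.
# Variable names are assumed to match variables.tf; values are placeholders.
cluster_name = "demo-cluster"
tidb_version = "v3.0.0-rc.1"
```

With this file present, `terraform apply` picks the values up automatically, so upgrades like the one described above can be done by editing one small file instead of the defaults.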

deploy/aws/README.md (+105 −31)

````diff
@@ -1,48 +1,98 @@
 # Deploy TiDB Operator and TiDB cluster on AWS EKS
 
-## Requirements:
-* [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73
+This document describes how to deploy TiDB Operator and a TiDB cluster on AWS EKS with your laptop (Linux or macOS) for development or testing.
+
+## Prerequisites
+
+Before deploying a TiDB cluster on AWS EKS, make sure the following requirements are satisfied:
+* [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73, to control AWS resources
+
+The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. The fastest way is using the `aws configure` command:
+
+``` shell
+# Replace AWS Access Key ID and AWS Secret Access Key with your own keys
+$ aws configure
+AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
+AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+Default region name [None]: us-west-2
+Default output format [None]: json
+```
+> **Note:** The access key must have at least permissions to: create VPC, create EBS, create EC2 and create role
+* [terraform](https://learn.hashicorp.com/terraform/getting-started/install.html)
 * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) >= 1.11
 * [helm](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client) >= 2.9.0
 * [jq](https://stedolan.github.io/jq/download/)
-* [aws-iam-authenticator](https://github.com/kubernetes-sigs/aws-iam-authenticator) installed in `PATH`
+* [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) installed in `PATH`, to authenticate with AWS
+
+The easiest way to install `aws-iam-authenticator` is to download the prebuilt binary:
 
-## Configure awscli
+``` shell
+# Download binary for Linux
+curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
 
-https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
+# Or, download binary for macOS
+curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/darwin/amd64/aws-iam-authenticator
 
-## Setup
+chmod +x ./aws-iam-authenticator
+sudo mv ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
+```
 
-The default setup will create a new VPC and a t2.micro instance as bastion machine. And EKS cluster with the following ec2 instance worker nodes:
+## Deploy
+
+The default setup will create a new VPC and a t2.micro instance as bastion machine, and an EKS cluster with the following ec2 instances as worker nodes:
 
 * 3 m5d.xlarge instances for PD
 * 3 i3.2xlarge instances for TiKV
 * 2 c4.4xlarge instances for TiDB
 * 1 c5.xlarge instance for monitor
 
-You can change default values in `variables.tf` (like the cluster name and versions) as needed. The default value of `cluster_name` is `my-cluster`.
+Use the following commands to set up the cluster:
 
 ``` shell
+# Get the code
 $ git clone --depth=1 https://github.com/pingcap/tidb-operator
 $ cd tidb-operator/deploy/aws
+
+# Apply the configs, note that you must answer "yes" to `terraform apply` to continue
 $ terraform init
 $ terraform apply
 ```
 
-It might take 10 minutes or more for the process to finish. After `terraform apply` is executed successfully, some basic information is printed to the console. You can access the `monitor_endpoint` using your web browser.
+It might take 10 minutes or more to finish the process. After `terraform apply` is executed successfully, some useful information is printed to the console.
 
-> **Note:** You can use the `terraform output` command to get that information again.
+A successful deployment will give the output like:
 
-To access TiDB cluster, use the following command to first ssh into the bastion machine, and then connect it via MySQL client:
+```
+Apply complete! Resources: 67 added, 0 changed, 0 destroyed.
+
+Outputs:
+
+bastion_ip = [
+    52.14.50.145
+]
+eks_endpoint = https://E10A1D0368FFD6E1E32E11573E5CE619.sk1.us-east-2.eks.amazonaws.com
+eks_version = 1.12
+monitor_endpoint = http://abd299cc47af411e98aae02938da0762-1989524000.us-east-2.elb.amazonaws.com:3000
+region = us-east-2
+tidb_dns = abd2e3f7c7af411e98aae02938da0762-17499b76b312be02.elb.us-east-2.amazonaws.com
+tidb_port = 4000
+tidb_version = v3.0.0-rc.1
+```
+
+> **Note:** You can use the `terraform output` command to get the output again.
+
+## Access the database
+
+To access the deployed TiDB cluster, use the following commands to first `ssh` into the bastion machine, and then connect to it via the MySQL client (replace the `<>` parts with values from the output):
 
 ``` shell
 ssh -i credentials/k8s-prod-<cluster_name>.pem ec2-user@<bastion_ip>
 mysql -h <tidb_dns> -P <tidb_port> -u root
 ```
 
-If the DNS name is not resolvable, be patient and wait a few minutes.
+The default value of `cluster_name` is `my-cluster`. If the DNS name is not resolvable, be patient and wait a few minutes.
 
-You can interact with the EKS cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_<cluster_name>`.
+You can interact with the EKS cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_<cluster_name>`:
 
 ``` shell
 # By specifying --kubeconfig argument
@@ -55,30 +105,48 @@ kubectl get po -n tidb
 helm ls
 ```
 
-# Destory
+## Monitor
 
-It may take some while to finish destroying the cluster.
+You can access the `monitor_endpoint` address (printed in outputs) using your web browser to view monitoring metrics.
 
-```shell
-$ terraform destroy
-```
+The initial Grafana login credentials are:
 
-> **Note:** You have to manually delete the EBS volumes in AWS console after running `terraform destroy` if you do not need the data on the volumes anymore.
+- User: admin
+- Password: admin
 
-## Upgrade TiDB cluster
+## Upgrade
 
-To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in variables.tf and run `terraform apply`.
+To upgrade the TiDB cluster, edit the `variables.tf` file with your preferred text editor and modify the `tidb_version` variable to a higher version, and then run `terraform apply`.
 
-> *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `watch kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb`
+For example, to upgrade the cluster to version 2.1.10, modify the `tidb_version` to `v2.1.10`:
 
-## Scale TiDB cluster
+```
+variable "tidb_version" {
+  description = "tidb cluster version"
+  default     = "v2.1.10"
+}
+```
+
+> *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
+
+## Scale
+
+To scale the TiDB cluster, edit the `variables.tf` file with your preferred text editor and modify the `tikv_count` or `tidb_count` variable to your desired count, and then run `terraform apply`.
 
-To scale TiDB cluster, modify `tikv_count` or `tidb_count` to your desired count, and then run `terraform apply`.
+For example, to scale out the cluster, you can modify the number of TiDB instances from 2 to 4:
 
-> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `watch kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb`
+```
+variable "tidb_count" {
+  default = 4
+}
+```
+
+> *Note*: Currently, scaling in is NOT supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
 
 ## Customize
 
+You can change default values in `variables.tf` (such as the cluster name and image versions) as needed.
+
 ### Customize AWS related resources
 
 By default, the terraform script will create a new VPC. You can use an existing VPC by setting `create_vpc` to `false` and specify your existing VPC id and subnet ids to `vpc_id` and `subnets` variables.
@@ -91,11 +159,17 @@ Currently, the instance type of TiDB cluster component is not configurable becau
 
 ### Customize TiDB parameters
 
-Currently, there are not much parameters exposed to be customizable. If you need to customize these, you should modify the `templates/tidb-cluster-values.yaml.tpl` files before deploying. Or if you modify it and run `terraform apply` again after the cluster is running, it will not take effect unless you manually delete the pod via `kubectl delete po -n tidb --all`. This will be resolved when issue [#255](https://github.com/pingcap/tidb-operator/issues/225) is fixed.
+Currently, there are not many customizable TiDB parameters. There are two ways to customize them:
 
-## TODO
+* Before deploying the cluster, you can directly modify the `templates/tidb-cluster-values.yaml.tpl` file and then deploy the cluster with customized configs.
+* After the cluster is running, you must run `terraform apply` again every time you make changes to the `templates/tidb-cluster-values.yaml.tpl` file, or the cluster will still be using old configs.
 
-- [ ] Use [cluster autoscaler](https://github.com/kubernetes/autoscaler)
-- [ ] Allow create a minimal TiDB cluster for testing
-- [ ] Make the resource creation synchronously to follow Terraform convention
-- [ ] Make more parameters customizable
+## Destroy
+
+It may take a while to finish destroying the cluster.
+
+``` shell
+$ terraform destroy
+```
+
+> **Note:** You have to manually delete the EBS volumes in AWS console after running `terraform destroy` if you do not need the data on the volumes anymore.
````
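The "Customize AWS related resources" text added above names the `create_vpc`, `vpc_id`, and `subnets` variables for reusing an existing VPC. A hypothetical `terraform.tfvars` sketch of that setup follows — the variable names come from the README text, but the IDs are placeholders and the exact variable types (string vs. list) are assumptions to verify against `variables.tf`:

```hcl
# Hypothetical terraform.tfvars for deploy/aws -- reuse an existing VPC
# instead of letting terraform create one. All IDs are placeholders.
create_vpc = false
vpc_id     = "vpc-0abc123de456f7890"
subnets    = ["subnet-0aaa111bbb222ccc3", "subnet-0ddd444eee555fff6"]
```

Keeping such overrides in a tfvars file (rather than editing `variables.tf` in place) avoids carrying local modifications in a tracked file across `git pull`s.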

docs/aws-eks-tutorial.md (+1 −1)

````diff
@@ -180,7 +180,7 @@ We can now point a browser to `localhost:3000` and view the dashboards.
 
 To scale out TiDB cluster, modify `tikv_count` or `tidb_count` in `aws-tutorial.tfvars` to your desired count, and then run `terraform apply -var-file=aws-tutorial.tfvars`.
 
-> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `watch kubectl --kubeconfig credentials/kubeconfig_aws_tutorial get po -n tidb`
+> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_aws_tutorial get po -n tidb --watch`.
 
 > *Note*: There are taints and tolerations in place such that only a single pod will be scheduled per node. The count is also passed onto helm via terraform. For this reason attempting to scale out pods via helm or `kubectl scale` will not work as expected.
 ---
````
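The tutorial hunk above has the reader edit `aws-tutorial.tfvars` and re-run `terraform apply -var-file=aws-tutorial.tfvars` to scale out. A scaled-out fragment of that file might look like this sketch — the counts are illustrative only, and any other variables the real file sets are omitted here:

```hcl
# Hypothetical aws-tutorial.tfvars fragment -- only the scaling counts shown.
# Variable names come from the tutorial text; adjust counts to your needs.
tikv_count = 3
tidb_count = 3
```

Per the note in the diff, only scaling out is supported, and pod counts must be changed through terraform, not via helm or `kubectl scale`.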
