* docs/dind: add instructions for XFS filesystems
* docs: use `kubectl --watch` instead of `watch kubectl` in tutorials
* docs/aws: add detailed instructions for requirements
* docs/dind: add notes for accessing TiDB cluster with MySQL client
* docs: add sample output of terraform
* docs/aws: minor updates of words
* Update docs/local-dind-tutorial.md
Co-Authored-By: Keke Yi <[email protected]>
* docs: update instructions for macOS & update argument in sample
* docs: Apply suggestions from code review
Co-Authored-By: Keke Yi <[email protected]>
* docs/aws: fix syntax of introduction
* docs/aliyun: update instructions when deploying with terraform
* docs/aws: adjust instructions to access DB and grafana & cleanup
* Apply suggestions from code review
Co-Authored-By: Keke Yi <[email protected]>
* docs/dind: update contents to fix issues found in the test
* docs/aws: add an introduction sentence for terraform installation
* docs/dind: point out that the XFS issue only applies to Linux users
* docs/dind: make port-forward instruction more clear
* Apply suggestions from code review
Co-Authored-By: Keke Yi <[email protected]>
* docs/dind: make delete instructions more clear
* docs/aws: update instructions of customizing params
* docs/dind: clean up
* docs/aws: add examples and adjust order of sections
This document describes how to deploy TiDB Operator and a TiDB cluster on AWS EKS from your laptop (Linux or macOS) for development or testing.
## Prerequisites
Before deploying a TiDB cluster on AWS EKS, make sure the following requirements are satisfied:
* [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73, to control AWS resources
The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. The fastest way is to use the `aws configure` command:
```shell
# Replace AWS Access Key ID and AWS Secret Access Key with your own keys
$ aws configure
```

* [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) installed in `PATH`, to authenticate with AWS
The easiest way to install `aws-iam-authenticator` is to download the prebuilt binary:
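For example, a minimal sketch for Linux amd64, assuming the prebuilt binary URL published in the AWS EKS documentation (adjust the version and platform segments of the URL for your environment):

```shell
# Download the prebuilt binary (the URL and version here are illustrative)
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
# Make it executable and move it onto the PATH
chmod +x ./aws-iam-authenticator
sudo mv ./aws-iam-authenticator /usr/local/bin/
```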
## Deploy
The default setup will create a new VPC, a t2.micro instance as the bastion machine, and an EKS cluster with the following EC2 instances as worker nodes:
* 3 m5d.xlarge instances for PD
* 3 i3.2xlarge instances for TiKV
* 2 c4.4xlarge instances for TiDB
* 1 c5.xlarge instance for monitor
```shell
# Apply the configs, note that you must answer "yes" to `terraform apply` to continue
$ terraform init
$ terraform apply
```
It might take 10 minutes or more to finish the process. After `terraform apply` is executed successfully, some useful information is printed to the console.
A successful deployment will give output like the following:
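(The snippet below is only a placeholder sketch; the exact fields and values depend on the terraform scripts and your deployment.)

```
Outputs:

bastion_ip = <bastion_public_ip>
monitor_endpoint = http://<monitor_elb_dns_name>:3000
tidb_dns = <tidb_elb_dns_name>
tidb_port = 4000
```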
> **Note:** You can use the `terraform output` command to get the output again.
## Access the database
You can connect to the TiDB cluster via the bastion instance; all the necessary information is in the output printed after the installation finishes. To access the deployed TiDB cluster, use the following commands to first `ssh` into the bastion machine, and then connect to TiDB via the MySQL client (replace the `<>` parts with values from the output):
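A sketch of those commands, assuming terraform writes the SSH private key under `credentials/` (the key file name here is an assumption; check the actual file name in that directory):

```shell
# ssh into the bastion machine with the key generated during deployment
ssh -i credentials/<cluster_name>.pem ec2-user@<bastion_ip>
# then, from the bastion machine, connect to TiDB with the MySQL client
mysql -h <tidb_dns> -P <tidb_port> -u root
```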
The default value of `cluster_name` is `my-cluster`. If the DNS name is not resolvable, be patient and wait a few minutes.
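If you want to check whether the DNS name has become resolvable, one quick way (using the placeholder name from the output) is:

```shell
nslookup <tidb_dns>
```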
You can interact with the EKS cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_<cluster_name>`:
```shell
# By specifying --kubeconfig argument
kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb
helm --kubeconfig credentials/kubeconfig_<cluster_name> ls

# Or by setting the KUBECONFIG environment variable
export KUBECONFIG=$PWD/credentials/kubeconfig_<cluster_name>
kubectl get po -n tidb
helm ls
```
## Monitor
You can access the `monitor_endpoint` address (printed in outputs) using your web browser to view monitoring metrics.
The initial Grafana login credentials are:
- User: admin
- Password: admin
## Upgrade
To upgrade the TiDB cluster, edit the `variables.tf` file with your preferred text editor and modify the `tidb_version` variable to a higher version, and then run `terraform apply`.
For example, to upgrade the cluster to version 2.1.10, modify the `tidb_version` to `v2.1.10`:
```
variable "tidb_version" {
125
+
description = "tidb cluster version"
126
+
default = "v2.1.10"
127
+
}
```
> *Note*: The upgrade does not finish immediately. You can watch the upgrade progress by running `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
## Scale
To scale the TiDB cluster, edit the `variables.tf` file with your preferred text editor and modify the `tikv_count` or `tidb_count` variable to your desired count, and then run `terraform apply`.
For example, to scale out the cluster, you can modify the number of TiDB instances (`tidb_count`) from 2 to 4:
```
variable "tidb_count" {
140
+
default = 4
141
+
}
```
> *Note*: Currently, scaling in is NOT supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete; you can watch the progress by running `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
## Customize
You can change default values in `variables.tf` (such as the cluster name and image versions) as needed.
### Customize AWS related resources
By default, the terraform script will create a new VPC. You can use an existing VPC by setting `create_vpc` to `false` and specifying your existing VPC ID and subnet IDs in the `vpc_id` and `subnets` variables.
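A sketch of what those overrides could look like in `variables.tf` (the VPC and subnet IDs below are placeholders):

```
variable "create_vpc" {
  default = false
}

variable "vpc_id" {
  # placeholder, replace with your existing VPC id
  default = "vpc-0123456789abcdef0"
}

variable "subnets" {
  # placeholders, replace with your existing subnet ids
  default = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
}
```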
### Customize TiDB parameters
Currently, not many TiDB parameters are exposed for customization. There are two ways to customize them:
* Before deploying the cluster, you can directly modify the `templates/tidb-cluster-values.yaml.tpl` file and then deploy the cluster with customized configs.
* After the cluster is running, you must run `terraform apply` again every time you make changes to the `templates/tidb-cluster-values.yaml.tpl` file, or the cluster will still be using old configs.
## Destroy
It may take a while to finish destroying the cluster:
```shell
$ terraform destroy
```
> **Note:** You have to manually delete the EBS volumes in the AWS console after running `terraform destroy` if you no longer need the data on the volumes.
Changes to `docs/aws-eks-tutorial.md`:
We can now point a browser to `localhost:3000` and view the dashboards.
To scale out the TiDB cluster, modify `tikv_count` or `tidb_count` in `aws-tutorial.tfvars` to your desired count, and then run `terraform apply -var-file=aws-tutorial.tfvars`.
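For example, a sketch of the relevant lines in `aws-tutorial.tfvars` with both counts raised (the numbers below are illustrative):

```
tikv_count = 5
tidb_count = 3
```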
> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete; you can watch the progress by running `kubectl --kubeconfig credentials/kubeconfig_aws_tutorial get po -n tidb --watch`.
> *Note*: There are taints and tolerations in place such that only a single pod will be scheduled per node. The count is also passed on to helm via terraform. For this reason, attempting to scale out pods via helm or `kubectl scale` will not work as expected.