feat: add a full example with kube, node pool and an app #468

Merged: 1 commit, merged Sep 14, 2023
4 changes: 4 additions & 0 deletions .gitignore

@@ -32,3 +32,7 @@ terraform-provider-ovh

```gitignore
# Test exclusions
!command/test-fixtures/**/*.tfstate
!command/test-fixtures/**/.terraform/
examples/kube-nodepool-deployment/.terraform.lock.hcl
examples/kube-nodepool-deployment/.terraform.tfstate.lock.info
examples/kube-nodepool-deployment/logs
examples/kube-nodepool-deployment/my-kube-cluster-*.yml
```
5 changes: 5 additions & 0 deletions examples/README.md

@@ -0,0 +1,5 @@

# OVH Terraform Provider examples

The following examples demonstrate various ways to create and work with [OVHcloud](https://www.ovhcloud.com/fr/) resources using the OVH Terraform provider.

1. [Deploy an OVHcloud Managed Kubernetes cluster, a Node Pool and an application](./kube-nodepool-deployment)
64 changes: 64 additions & 0 deletions examples/kube-nodepool-deployment/README.md

@@ -0,0 +1,64 @@

# Deploy an OVHcloud Managed Kubernetes cluster, a Node Pool and an application

This procedure shows how to [create multiple OVHcloud Managed Kubernetes clusters through Terraform](https://docs.ovh.com/gb/en/kubernetes/creating-a-cluster-through-terraform/).

You will create:

* several Kubernetes clusters in OVHcloud, in parallel (the number is set in the `scripts/create.sh` script; one cluster by default)
* a node pool with 3 nodes per cluster
* a functional deployment (an app)
* a service of type `LoadBalancer`

# Prerequisites

Generate [OVH API credentials](https://api.ovh.com/createToken/?GET=/*&POST=/*&PUT=/*&DELETE=/*) and export them as environment variables on your machine:

```sh
$ export OVH_ENDPOINT="ovh-eu"
$ export OVH_APPLICATION_KEY="xxx"
$ export OVH_APPLICATION_SECRET="xxxxx"
$ export OVH_CONSUMER_KEY="xxxxx"
```
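Before running Terraform, you can quickly check that all four variables are set in the current shell; a small sketch using bash indirect expansion:

```shell
#!/bin/bash
# Report any required OVH_* variable that is missing from the environment
for v in OVH_ENDPOINT OVH_APPLICATION_KEY OVH_APPLICATION_SECRET OVH_CONSUMER_KEY; do
  [ -n "${!v}" ] || echo "missing: $v"
done
```

If nothing is printed, all four credentials are exported.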

Alternatively, you can put them directly in `provider.tf`, in the OVH provider definition:

```hcl
provider "ovh" {
  version            = "~> 0.16"
  endpoint           = "ovh-eu"
  application_key    = "xxx"
  application_secret = "xxx"
  consumer_key       = "xxx"
}
```

Set your `service_name` parameter (the Public Cloud project ID) in `variables.tf`:

```hcl
variable "service_name" {
  default = "xxxxx"
}
```

# How To

Create the Kubernetes clusters; for each one, apply a deployment and a service, then, once the OVHcloud Load Balancer is created, curl the app:

```sh
./scripts/create.sh
```

Output is written to the `logs` file. Display the logs in real time:

```sh
$ tail -f logs
```
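The script wraps a standard Terraform workflow; if you prefer, the same steps can be run by hand (a sketch, with a hypothetical cluster count of 2; requires the OVH credentials from the prerequisites):

```sh
terraform init
terraform plan -var="nb_cluster=2"
terraform apply -var="nb_cluster=2" -auto-approve
```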

# Clean

You can destroy the OVHcloud resources and remove the generated files:

```sh
./scripts/clean.sh
```
37 changes: 37 additions & 0 deletions examples/kube-nodepool-deployment/hello.yaml

@@ -0,0 +1,37 @@

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: ovhplatform/hello
        ports:
        - containerPort: 80
```
23 changes: 23 additions & 0 deletions examples/kube-nodepool-deployment/kube.tf

@@ -0,0 +1,23 @@

```hcl
resource "ovh_cloud_project_kube" "my_kube_cluster" {
  count        = var.nb_cluster
  service_name = var.service_name
  name         = "my-cluster-${count.index}"
  region       = "GRA5"
}

resource "ovh_cloud_project_kube_nodepool" "node_pool" {
  count         = var.nb_cluster
  service_name  = var.service_name
  kube_id       = ovh_cloud_project_kube.my_kube_cluster[count.index].id
  name          = "my-pool-${count.index}" # Warning: the "_" character is not allowed!
  flavor_name   = "b2-7"
  desired_nodes = 3
  max_nodes     = 3
  min_nodes     = 3
}

resource "local_file" "kubeconfig" {
  count    = var.nb_cluster
  content  = ovh_cloud_project_kube.my_kube_cluster[count.index].kubeconfig
  filename = "my-kube-cluster-${count.index}.yml"
}
```
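Once applied, the kubeconfig files written by the `local_file` resource can be used directly with `kubectl`; a sketch, assuming the first cluster (index 0) has been created:

```sh
# Point kubectl at the generated kubeconfig for cluster 0
export KUBECONFIG=my-kube-cluster-0.yml
kubectl get nodes
```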
11 changes: 11 additions & 0 deletions examples/kube-nodepool-deployment/provider.tf

@@ -0,0 +1,11 @@

```hcl
# Configure the OVHcloud provider
terraform {
  required_providers {
    ovh = {
      source = "ovh/ovh"
    }
  }
}

# Credentials are read from the OVH_* environment variables
provider "ovh" {
}
```
12 changes: 12 additions & 0 deletions examples/kube-nodepool-deployment/scripts/clean.sh

@@ -0,0 +1,12 @@

```sh
#!/bin/bash

# Kill processes that may still be running in case of errors
ps -ef | grep -i "sleep 10" | grep -v grep | awk '{print "kill "$2}' | sh
ps -ef | grep -i "create" | grep -v grep | awk '{print "kill "$2}' | sh

# Destroy/remove the created resources
terraform destroy -auto-approve

# Truncate the logs file (the script is run from the example root directory)
> logs
```
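The kill pipelines above build `kill` commands by extracting the PID, the second column of `ps -ef` output, with awk; the extraction step can be checked in isolation on a fake `ps` line (values are made up):

```shell
# Simulate one line of `ps -ef` output and print the kill command awk would build
echo "user  4242     1  0 10:00 pts/0    00:00:00 sleep 10" | awk '{print "kill "$2}'
# kill 4242
```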
52 changes: 52 additions & 0 deletions examples/kube-nodepool-deployment/scripts/create.sh

@@ -0,0 +1,52 @@

```sh
#!/bin/bash

# Redirect all output to the logs file
exec &> logs

NB_CLUSTER=1
let "MAX=NB_CLUSTER-1"

terraform init

START=$(date +%s)
date

terraform plan -var="nb_cluster=${NB_CLUSTER}"

terraform apply -var="nb_cluster=${NB_CLUSTER}" -auto-approve

date

apply_deploy() {
  KUBECONFIG_FILE="my-kube-cluster-$1.yml"
  kubectl --kubeconfig="$KUBECONFIG_FILE" get nodes
  kubectl --kubeconfig="$KUBECONFIG_FILE" apply -f hello.yaml -n default
  kubectl --kubeconfig="$KUBECONFIG_FILE" get all -n default
  kubectl --kubeconfig="$KUBECONFIG_FILE" get services -n default -l app=hello-world

  # Wait until the LoadBalancer service gets an external IP
  ip=""
  while [ -z "$ip" ]; do
    echo "Waiting for external IP"
    ip=$(kubectl --kubeconfig="${KUBECONFIG_FILE}" -n default get service hello-world -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    [ -z "$ip" ] && sleep 10
  done
  echo "Found external IP: $ip"
  export APP_IP=$ip
  echo "$APP_IP"
  curl "$APP_IP"
}

# Deploy on each cluster as background jobs
for ((i=0; i<=MAX; i++)); do
  apply_deploy $i &
done

# Wait for all jobs to finish
wait

date
END=$(date +%s)

# Print the elapsed time as H:MM:SS
echo $((END-START)) | awk '{printf "%d:%02d:%02d", $1/3600, ($1/60)%60, $1%60}'
```
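The final awk one-liner formats the elapsed seconds as H:MM:SS; for example, 3725 seconds is 1 hour, 2 minutes and 5 seconds:

```shell
# Format a number of seconds as H:MM:SS (3725 s = 1 h 2 m 5 s)
echo 3725 | awk '{printf "%d:%02d:%02d", $1/3600, ($1/60)%60, $1%60}'
# 1:02:05
```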
7 changes: 7 additions & 0 deletions examples/kube-nodepool-deployment/variables.tf

@@ -0,0 +1,7 @@

```hcl
# Public Cloud project ID
variable "service_name" {
  default = "xxxxx"
}

# Number of Kubernetes clusters to create
variable "nb_cluster" {
  default = 1
}
```
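Instead of editing `variables.tf`, the defaults can also be overridden in a `terraform.tfvars` file; a sketch with hypothetical placeholder values:

```hcl
# terraform.tfvars (placeholder values, substitute your own project ID)
service_name = "0123456789abcdef0123456789abcdef"
nb_cluster   = 2
```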