001-Lab-Setup/README.md (+4 -4)

@@ -22,24 +22,24 @@ Note: You may need to refresh the page a few times before seeing your Kubernetes
In the navigation on the left side of the console, click `Kubernetes Engine`. Here you will find the details about the cluster and a GUI for accessing and administering workloads and services.
## Task 3: Launch Cloud Shell
There is a button titled `Activate Google Cloud Shell` located in the top-bar navigation of the console. When clicked, a terminal will appear in the lower half of the console. This gives you direct command-line access to your Kubernetes cluster.

Cloud Shell comes packaged with a beta feature called `code editor`, which gives you a minimal IDE for viewing and editing files. This will be used throughout the remainder of the labs. The link is found in the upper-right corner of the terminal.
## Task 4: Clone the Git Repository
In your home directory, we are going to pull in the documentation and source code used for the course labs. We can do this by running the following command:

Most of the tools necessary to complete the labs come pre-installed in Google Cloud Shell, including `kubectl`, which is used extensively to interact with your cluster. Ensure your cluster is operational by running the following commands.
First, we need to connect to the cluster using Cloud Shell. In the navigation on the left, click `Kubernetes Engine -> Cluster`, then click the `Connect` button next to your cluster:

You will then be presented with options to connect to the cluster. Click `Run in Cloud Shell`. This will open Google Cloud Shell in the same browser tab and paste a command into the terminal. All you need to do now is press Enter to run the command.
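
The verification commands referenced above fall outside this diff hunk. A minimal check, assuming only standard `kubectl` behavior rather than the lab's exact commands, would be:

```
# Confirm the kubeconfig points at the GKE cluster and that the nodes are Ready
kubectl cluster-info
kubectl get nodes
```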

002-Containerizing-An-Application/README.md (+2 -2)

@@ -8,7 +8,7 @@ The source code for the application located in the `src/link-unshorten` director
### Task 1: Browse the Application
Open up the files in `src/link-unshorten` in your favorite IDE or the Cloud Shell editor and familiarize yourself with the application.
### Task 2: Build the Docker Image
In the `src/link-unshorten` directory, run the following command (substituting <yourname> with your own identifier) to build the image on the Cloud Shell VM:
```
docker build -t <yourname>/link-unshorten:0.1 .
```
@@ -92,7 +92,7 @@ Hint 3: Yes, the answer is commented in the source code
Hint 4: You will need to run `docker stop` on the first running container before running another one with the same port
### Bonus 3: Inspect the Docker image

[dive](https://github.com/wagoodman/dive) is an OSS project that helps with visualization and optimization of images.

Install `dive` in Cloud Shell and inspect the unshorten image that was created.
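
One possible way to do this, assuming the Debian package from the dive releases page (the version below is an assumption; check the releases page for the current one):

```
# Download and install a dive release, then walk the image layer by layer
DIVE_VERSION=0.12.0
curl -fsSL -o dive.deb "https://github.com/wagoodman/dive/releases/download/v${DIVE_VERSION}/dive_${DIVE_VERSION}_linux_amd64.deb"
sudo apt install ./dive.deb
dive <yourname>/link-unshorten:0.1
```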

003-Cluster-Setup/README.md (+11 -11)

@@ -16,7 +16,7 @@ echo "Default Namespace Switched:" $(kubectl get sa default -o jsonpath='{.metad
1. `kubectl` is the command-line utility that we will use to interact with our Kubernetes cluster. The first task is to view the Pods that are running on our cluster with an out-of-the-box installation. Run the following command in your terminal:
```
kubectl get pods
```
2. As you can see, no pods are running. This is because we are dropped into the `default` namespace, and the `default` namespace has nothing deployed to it. Try running the same command with the following argument. This will list the pods used by the Kubernetes system itself:
```
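# The argument itself falls outside this diff hunk; a likely form (an
# assumption) targets the kube-system namespace:
kubectl get pods --namespace=kube-system
```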
@@ -86,7 +86,7 @@ exit
### Task 3: Exposing your Pod to the World
There are a variety of ways to make our Pod accessible to the outside world. A Service with the type `LoadBalancer` will be used to give our Pod a stable endpoint and an IP we can reach from our web browser.

The `LoadBalancer` type spins up a load balancer in GCP automatically.

1. To expose the application, we create a Service with the type of LoadBalancer:
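
The exact command sits outside this diff hunk. A plausible form, assuming the Pod name (`link-unshorten`) and port (8080) used elsewhere in the lab, is:

```
# Expose the Pod behind a GCP load balancer (names and port are assumptions)
kubectl expose pod link-unshorten --type=LoadBalancer --port=8080
kubectl get svc link-unshorten --watch   # wait for an EXTERNAL-IP to appear
```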
4. This is no way to manage a real Kubernetes cluster. Tear down your app using the following commands:
```
kubectl delete pod link-unshorten && kubectl delete svc link-unshorten
```
### Task 4: "Codifying" Your Deployment
Running ad hoc commands in a terminal is no way to maintain a proper DevOps infrastructure. Kubernetes is built with "Infrastructure as Code" in mind by using manifests. Manifests can be written in JSON or YAML. We will be using YAML for all labs.

4. Under the hood we can see the new ReplicaSet that was created. Remember, a Deployment actually creates a ReplicaSet. Deployments provide the same replication functions via ReplicaSets, plus the ability to roll out changes and roll them back if necessary.
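
The command for this step is outside the hunk; a minimal way to see it (an assumption, not necessarily the lab's exact command) is:

```
# List the ReplicaSets created by the Deployment
kubectl get replicasets
```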
6. Similar to how we interacted with our application earlier, we use the IP from the above output and paste it into our browser.
```
http://<EXTERNAL-IP>/api/check?url=bit.ly/test
```
### Task 5: Scale
1. We will first increase the number of pods in our Deployment using `kubectl scale`. Note: this will not reflect what is defined in the manifest; these values will be out of sync.
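
The scale command itself is outside this hunk. A typical invocation, assuming the Deployment is named `link-unshorten` (an assumption) and a target of three replicas:

```
# Scale the Deployment imperatively; the manifest's replica count is now out of sync
kubectl scale deployment link-unshorten --replicas=3
```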
7. Inspect the Pods scaling. Note that other Pods will be terminating at the same time:
```
kubectl get pods
```
### Multi-Container Pods
@@ -207,7 +207,7 @@ exit
### Bonus
A critical RCE vulnerability was just reported through a bug bounty and was fixed late into the night. Roll out a new version of the app (0.2) in your cluster to patch the vulnerability on each of your three running pods. No downtime allowed! Show the deployment history using `kubectl rollout history`.
### Bonus 2
The new version you just rolled out contains a critical bug! Quickly roll back the deployment to 0.1. (Yes, 0.1 is the vulnerable version, but this is just for practice!)
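
One possible command sequence for both bonuses, assuming the Deployment and its container are both named `link-unshorten` (assumptions):

```
# Bonus: roll out 0.2 with zero downtime and show the history
kubectl set image deployment/link-unshorten link-unshorten=<yourname>/link-unshorten:0.2
kubectl rollout status deployment/link-unshorten
kubectl rollout history deployment/link-unshorten

# Bonus 2: roll back to the previous revision (0.1)
kubectl rollout undo deployment/link-unshorten
```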
@@ -221,6 +221,6 @@ echo "Default Namespace Switched:" $(kubectl get sa default -o jsonpath='{.metad
### Discussion Questions
1. What would be a good piece of your application or infrastructure to start breaking up into Pods within Kubernetes?
2. What security challenges does administering a Kubernetes cluster using a tool like kubectl present?

First, we will spin up our application in both a `development` and `production` namespace.

Note: You should be logged in to Cloud Shell using the admin account provided at the beginning of class to run the following commands, NOT `<your-intern-email>@manicode.us`.

We need to retrieve the credentials of our running cluster using the following `gcloud` command. This command updates our kubeconfig file in Cloud Shell with the appropriate credentials and endpoint information to point kubectl at a specific cluster in Google Kubernetes Engine.

```
# Use gcloud get-credentials to retrieve the cert
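# (The actual command falls outside this diff hunk; the form below is an
# assumption, with placeholder cluster name and zone.)
gcloud container clusters get-credentials <cluster-name> --zone <zone>
```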
@@ -65,11 +65,11 @@ kubectl get pods --all-namespaces
Take note of this process. Our user has full administrative access to our cluster due to being provisioned with the `Kubernetes Engine Admin` role. We will now see how RBAC helps give us granular access control at the object-level within our cluster.
### Task 2: Authenticate as a Restricted User
We will now log in using a separate user who has very locked-down access to the entire project. In an incognito window, browse to `cloud.google.com` and authenticate with the user `<your-intern-email>@manicode.us` and the same password that was provided to you for the admin user.

Note: *Using the same password for multiple accounts is bad. Don't do this at home.*

Now open up Cloud Shell and use the following `gcloud get-credentials` command to retrieve the credentials for your user so we can start interacting with the cluster. This is the same cluster you just launched the `production` and `development` infrastructure in.

```
# Authenticate to the cluster
```
@@ -80,7 +80,7 @@ Now, attempt to run some `kubectl` queries on the cluster.
```
kubectl get pods --namespace=production
kubectl get pods --namespace=development
kubectl get secrets
kubectl run link-unshorten --image=jmbmxer/link-unshorten:0.1 --port=8080
```
These should all fail with a `Forbidden` error. While <your-intern-email>@manicode.us does technically have an account on the cluster, RBAC is stopping it from accessing any of the objects.

By default, User 1 will not be able to create the `roles` or `rolebindings` needed to begin building our RBAC policies. We need to ensure User 1 (our Administrator) has the appropriate access to the cluster by granting the user `cluster-admin` rights.

`cluster-admin` is one of several default user-facing roles included with every Kubernetes installation. They should be used with caution, as many of these roles grant excessive privileges and are often abused for a quick fix.
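
A common way to grant this on GKE, assuming the admin account's email as the subject (the binding name below is an assumption):

```
# Bind the built-in cluster-admin ClusterRole to the admin user
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=<your-admin-email>@manicode.us
```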

Our user `<your-intern-email>@manicode.us` is a restricted user, so we only want to grant access to read pods in the `development` namespace and nothing more. We will use RBAC to enforce a policy.

Now, open the file `user-role-binding.yaml` in the `manifests/role` directory and replace <your-intern-email> with the one provided to you. It will be the same as your admin account but with the word `intern` at the end (e.g. `manicode0003intern@manicode.us`).
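
For reference, an imperative equivalent of the read-only access described above (a sketch; the lab's YAML manifests are authoritative, and the role and binding names here are assumptions):

```
# Allow the intern account to read Pods only in the development namespace
kubectl create role pod-reader \
  --verb=get,list,watch --resource=pods \
  --namespace=development
kubectl create rolebinding intern-pod-reader \
  --role=pod-reader \
  --user=<your-intern-email>@manicode.us \
  --namespace=development
```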

007-Network-Policies/README.md (+1 -1)

@@ -14,7 +14,7 @@ echo "Default Namespace Switched:" $(kubectl get sa default -o jsonpath='{.metad
### Task 2: Create our Network Policy
Go to the `manifests/network-policies` directory and inspect the Network Policy named `hello-unshorten.yaml`. This policy simply selects Pods with the label `app=unshorten-api` and specifies an ingress policy to allow traffic only from Pods with the label `app=unshorten-fe`. We only want to allow traffic from Pods that are acting as frontends to our API.
In the `manifests/network-policies` directory, run:
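
The command itself falls outside this hunk; a typical sequence, assuming the file and policy described above, is:

```
# Apply the NetworkPolicy and confirm it was created
kubectl create -f hello-unshorten.yaml
kubectl describe networkpolicy
```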

010-Security-Pipeline/README.md (+3 -3)

@@ -1,5 +1,5 @@
# Security Pipeline and Automation

This lab will spin up Jenkins in our cluster along with a private Docker image repository. Jenkins will also handle zero-downtime deploys of the unshorten API upon a successful build. The humble beginnings of a self-contained DevSecOps pipeline.

### Create the `lab010` Namespace and Use as Default
@@ -39,7 +39,7 @@ We need a location to store our versioned Docker images within our Kubernetes cl
```
kubectl create -f .
```

2. Once all of the Pods and Services are up and healthy, grab the URL for our freshly created registry and visit it in your browser.
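
One way to find that URL, assuming the registry is exposed as a LoadBalancer Service in the `lab010` namespace (the exact Service name is not shown here):

```
# Look up the registry Service's EXTERNAL-IP
kubectl get svc --namespace=lab010
```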

5. Inspect the `Jenkinsfile` in the repo. It has the humble beginnings of an AppSec and DevSecOps pipeline. Each stage is meant to apply automation to the process, with issues resulting in failed builds.
### Task 4: Trigger a Build
Most pipeline setups will trigger builds on a git commit or through some other automated means. To simulate this, we will tell Jenkins to trigger a build manually: