---
title: Use Azure AD in Azure Kubernetes Service
description: Learn how to use Azure AD in Azure Kubernetes Service (AKS)
services: container-service
ms.topic: article
ms.date: 10/20/2021
ms.author: miwithro
---

AKS-managed Azure Active Directory integration

AKS-managed Azure AD integration simplifies the Azure AD integration process. Previously, you had to create a client app and a server app, and the Azure AD tenant had to grant Directory Read permissions. In the new version, the AKS resource provider manages the client and server apps for you.

Azure AD authentication overview

Cluster administrators can configure Kubernetes role-based access control (Kubernetes RBAC) based on a user's identity or directory group membership. Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID Connect, see the OpenID Connect documentation.

Learn more about the Azure AD integration flow on the Azure Active Directory integration concepts documentation.

Limitations

  • AKS-managed Azure AD integration can't be disabled
  • Changing an AKS-managed Azure AD integrated cluster to legacy Azure AD isn't supported
  • Clusters without Kubernetes RBAC enabled aren't supported for AKS-managed Azure AD integration

Prerequisites

  • The Azure CLI version 2.29.0 or later
  • kubectl with a minimum version of 1.18.1, or kubelogin
  • If you're using Helm, a minimum version of Helm 3.3.

Important

You must use kubectl with a minimum version of 1.18.1, or kubelogin. The difference between the minor versions of Kubernetes and kubectl shouldn't be more than one version. If you don't use the correct version, you'll notice authentication issues.

To install kubectl and kubelogin, use the following commands:

sudo az aks install-cli
kubectl version --client
kubelogin --version

Use these instructions for other operating systems.

Before you begin

For your cluster, you need an Azure AD group. This group will be registered as an admin group on the cluster to grant cluster admin permissions. You can use an existing Azure AD group, or create a new one. Record the object ID of your Azure AD group.

# List existing groups in the directory
az ad group list --filter "displayname eq '<group-name>'" -o table

To create a new Azure AD group for your cluster administrators, use the following command:

# Create an Azure AD group
az ad group create --display-name myAKSAdminGroup --mail-nickname myAKSAdminGroup
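
If you create a new group, you can capture its object ID in a shell variable for the commands that follow. A minimal sketch, assuming a recent Azure CLI (the property is exposed as id in newer CLI versions and objectId in older ones):

# Capture the admin group's object ID for use in later commands
GROUP_ID=$(az ad group show --group myAKSAdminGroup --query id -o tsv)
echo $GROUP_ID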

Create an AKS cluster with Azure AD enabled

Create an AKS cluster by using the following CLI commands.

Create an Azure resource group:

# Create an Azure resource group
az group create --name myResourceGroup --location centralus

Create an AKS cluster, and enable administration access for your Azure AD group

# Create an AKS-managed Azure AD cluster
az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]

A successful creation of an AKS-managed Azure AD cluster has the following section in the response body

"AADProfile": {
    "adminGroupObjectIds": [
      "5d24****-****-****-****-****afa27aed"
    ],
    "clientAppId": null,
    "managed": true,
    "serverAppId": null,
    "serverAppSecret": null,
    "tenantId": "72f9****-****-****-****-****d011db47"
  }
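
You can also confirm this profile on an existing cluster by querying the cluster resource, for example:

# Show the Azure AD profile of an existing cluster
az aks show -g myResourceGroup -n myManagedCluster --query aadProfile -o json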

Once the cluster is created, you can start accessing it.

Access an Azure AD enabled cluster

Before you access the cluster using an Azure AD defined group, you'll need the Azure Kubernetes Service Cluster User built-in role.
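
As a minimal sketch of granting that role (the assignee object ID is a placeholder), you could assign it at the cluster scope:

# Grant the built-in Cluster User role so a user or group can download cluster credentials
az role assignment create \
  --role "Azure Kubernetes Service Cluster User Role" \
  --assignee <user-or-group-object-id> \
  --scope $(az aks show --resource-group myResourceGroup --name myManagedCluster --query id -o tsv)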

Get the user credentials to access the cluster:

 az aks get-credentials --resource-group myResourceGroup --name myManagedCluster

Follow the instructions to sign in.

Use the kubectl get nodes command to view nodes in the cluster:

kubectl get nodes

NAME                       STATUS   ROLES   AGE    VERSION
aks-nodepool1-15306047-0   Ready    agent   102m   v1.15.10
aks-nodepool1-15306047-1   Ready    agent   102m   v1.15.10
aks-nodepool1-15306047-2   Ready    agent   102m   v1.15.10

Configure Azure role-based access control (Azure RBAC) to add additional security groups to your clusters.

Troubleshooting access issues with Azure AD

Important

The steps described below bypass the normal Azure AD group authentication. Use them only in an emergency.

If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster, you can still obtain the admin credentials to access the cluster directly.

To do these steps, you'll need to have access to the Azure Kubernetes Service Cluster Admin built-in role.

az aks get-credentials --resource-group myResourceGroup --name myManagedCluster --admin

Enable AKS-managed Azure AD Integration on your existing cluster

You can enable AKS-managed Azure AD integration on your existing Kubernetes RBAC enabled cluster. Be sure to set your admin group so you keep access to your cluster.

az aks update -g MyResourceGroup -n MyManagedCluster --enable-aad --aad-admin-group-object-ids <id-1> [--aad-tenant-id <id>]

A successful activation of an AKS-managed Azure AD cluster has the following section in the response body

"AADProfile": {
    "adminGroupObjectIds": [
      "5d24****-****-****-****-****afa27aed"
    ],
    "clientAppId": null,
    "managed": true,
    "serverAppId": null,
    "serverAppSecret": null,
    "tenantId": "72f9****-****-****-****-****d011db47"
  }

Download user credentials again to access your cluster by following the steps here.

Upgrading to AKS-managed Azure AD Integration

If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD Integration.

az aks update -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]

A successful migration of an AKS-managed Azure AD cluster has the following section in the response body

"AADProfile": {
    "adminGroupObjectIds": [
      "5d24****-****-****-****-****afa27aed"
    ],
    "clientAppId": null,
    "managed": true,
    "serverAppId": null,
    "serverAppSecret": null,
    "tenantId": "72f9****-****-****-****-****d011db47"
  }

To update your kubeconfig and access the cluster, follow the steps here.

Non-interactive sign in with kubelogin

There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with kubectl. You can use kubelogin to access the cluster with non-interactive service principal sign-in.
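
A minimal sketch of such a flow (the service principal values are placeholders) converts the kubeconfig downloaded by az aks get-credentials to use kubelogin's service principal sign-in:

# Download a user kubeconfig, then convert it to non-interactive service principal sign-in
az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
kubelogin convert-kubeconfig -l spn

# Provide the service principal credentials through environment variables
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<service-principal-app-id>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<service-principal-secret>

kubectl get nodes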

Disable local accounts

When deploying an AKS cluster, local accounts are enabled by default. Even when enabling RBAC or Azure Active Directory integration, --admin access still exists, essentially as a non-auditable backdoor option. With this in mind, AKS offers users the ability to disable local accounts via a flag, disable-local-accounts. A field, properties.disableLocalAccounts, has also been added to the managed cluster API to indicate whether the feature has been enabled on the cluster.
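
For example, you can check whether the feature is enabled on an existing cluster by querying that field:

# Check whether local accounts are disabled on an existing cluster
az aks show -g <resource-group> -n <cluster-name> --query disableLocalAccounts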

Note

On clusters with Azure AD integration enabled, users belonging to a group specified by aad-admin-group-object-ids will still be able to gain access via non-admin credentials. On clusters without Azure AD integration enabled and properties.disableLocalAccounts set to true, obtaining both user and admin credentials will fail.

Note

After disabling local accounts on an existing AKS cluster where users might already have used a local account, the admin must rotate the cluster certificates to revoke certificates those users might still have access to. If this is a new cluster, no action is required.
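
One way to rotate the certificates is the az aks rotate-certs command; be aware that rotation temporarily disrupts the cluster while components restart:

# Rotate cluster certificates to invalidate credentials previously issued to local accounts
az aks rotate-certs -g <resource-group> -n <cluster-name>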

Create a new cluster without local accounts

To create a new AKS cluster without any local accounts, use the az aks create command with the disable-local-accounts flag:

az aks create -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts

In the output, confirm local accounts have been disabled by checking the field properties.disableLocalAccounts is set to true:

"properties": {
    ...
    "disableLocalAccounts": true,
    ...
}

Attempting to get admin credentials will fail with an error message indicating the feature is preventing access:

az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin

Operation failed with status: 'Bad Request'. Details: Getting static credential is not allowed because this cluster is set to disable local accounts.

Disable local accounts on an existing cluster

To disable local accounts on an existing AKS cluster, use the az aks update command with the disable-local-accounts flag:

az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts

In the output, confirm local accounts have been disabled by checking the field properties.disableLocalAccounts is set to true:

"properties": {
    ...
    "disableLocalAccounts": true,
    ...
}

Attempting to get admin credentials will fail with an error message indicating the feature is preventing access:

az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin

Operation failed with status: 'Bad Request'. Details: Getting static credential is not allowed because this cluster is set to disable local accounts.

Re-enable local accounts on an existing cluster

AKS also offers the ability to re-enable local accounts on an existing cluster with the enable-local-accounts flag:

az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --enable-local-accounts

In the output, confirm local accounts have been re-enabled by checking the field properties.disableLocalAccounts is set to false:

"properties": {
    ...
    "disableLocalAccounts": false,
    ...
}

Attempting to get admin credentials will succeed:

az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin

Merged "<cluster-name>-admin" as current context in C:\Users\<username>\.kube\config

Use Conditional Access with Azure AD and AKS

When integrating Azure AD with your AKS cluster, you can also use Conditional Access to control access to your cluster.

Note

Azure AD Conditional Access is an Azure AD Premium capability.

To create an example Conditional Access policy to use with AKS, complete the following steps:

  1. At the top of the Azure portal, search for and select Azure Active Directory.
  2. In the menu for Azure Active Directory on the left-hand side, select Enterprise applications.
  3. In the menu for Enterprise applications on the left-hand side, select Conditional Access.
  4. In the menu for Conditional Access on the left-hand side, select Policies then New policy. :::image type="content" source="./media/managed-aad/conditional-access-new-policy.png" alt-text="Adding a Conditional Access policy":::
  5. Enter a name for the policy such as aks-policy.
  6. Select Users and groups, then under Include select Select users and groups. Choose the users and groups where you want to apply the policy. For this example, choose the same Azure AD group that has administration access to your cluster. :::image type="content" source="./media/managed-aad/conditional-access-users-groups.png" alt-text="Selecting users or groups to apply the Conditional Access policy":::
  7. Select Cloud apps or actions, then under Include select Select apps. Search for Azure Kubernetes Service and select Azure Kubernetes Service AAD Server. :::image type="content" source="./media/managed-aad/conditional-access-apps.png" alt-text="Selecting Azure Kubernetes Service AD Server for applying the Conditional Access policy":::
  8. Under Access controls, select Grant. Select Grant access then Require device to be marked as compliant. :::image type="content" source="./media/managed-aad/conditional-access-grant-compliant.png" alt-text="Selecting to only allow compliant devices for the Conditional Access policy":::
  9. Under Enable policy, select On then Create. :::image type="content" source="./media/managed-aad/conditional-access-enable-policy.png" alt-text="Enabling the Conditional Access policy":::

Get the user credentials to access the cluster, for example:

 az aks get-credentials --resource-group myResourceGroup --name myManagedCluster

Follow the instructions to sign in.

Use the kubectl get nodes command to view nodes in the cluster:

kubectl get nodes

Follow the instructions to sign in again. Notice the error message stating that you're successfully signed in, but your admin requires the device requesting access to be managed by your Azure AD to access the resource.

In the Azure portal, navigate to Azure Active Directory, select Enterprise applications then under Activity select Sign-ins. Notice an entry at the top with a Status of Failed and a Conditional Access of Success. Select the entry then select Conditional Access in Details. Notice your Conditional Access policy is listed.

:::image type="content" source="./media/managed-aad/conditional-access-sign-in-activity.png" alt-text="Failed sign-in entry due to Conditional Access policy":::

Configure just-in-time cluster access with Azure AD and AKS

Another option for cluster access control is to use Privileged Identity Management (PIM) for just-in-time requests.

Note

PIM is an Azure AD Premium capability requiring a Premium P2 SKU. For more on Azure AD SKUs, see the pricing guide.

To integrate just-in-time access requests with an AKS cluster using AKS-managed Azure AD integration, complete the following steps:

  1. At the top of the Azure portal, search for and select Azure Active Directory.
  2. Take note of the Tenant ID, referred to for the rest of these instructions as <tenant-id> :::image type="content" source="./media/managed-aad/jit-get-tenant-id.png" alt-text="In a web browser, the Azure portal screen for Azure Active Directory is shown with the tenant's ID highlighted.":::
  3. In the menu for Azure Active Directory on the left-hand side, under Manage select Groups then New Group. :::image type="content" source="./media/managed-aad/jit-create-new-group.png" alt-text="Shows the Azure portal Active Directory groups screen with the 'New Group' option highlighted.":::
  4. Make sure a Group Type of Security is selected and enter a group name, such as myJITGroup. Under Azure AD Roles can be assigned to this group (Preview), select Yes. Finally, select Create. :::image type="content" source="./media/managed-aad/jit-new-group-created.png" alt-text="Shows the Azure portal's new group creation screen.":::
  5. You will be brought back to the Groups page. Select your newly created group and take note of the Object ID, referred to for the rest of these instructions as <object-id>. :::image type="content" source="./media/managed-aad/jit-get-object-id.png" alt-text="Shows the Azure portal screen for the just-created group, highlighting the Object Id":::
  6. Deploy an AKS cluster with AKS-managed Azure AD integration by using the <tenant-id> and <object-id> values from earlier:
    az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <object-id> --aad-tenant-id <tenant-id>
    
  7. Back in the Azure portal, in the menu for Activity on the left-hand side, select Privileged Access (Preview) and select Enable Privileged Access. :::image type="content" source="./media/managed-aad/jit-enabling-priv-access.png" alt-text="The Azure portal's Privileged access (Preview) page is shown, with 'Enable privileged access' highlighted":::
  8. Select Add Assignments to begin granting access. :::image type="content" source="./media/managed-aad/jit-add-active-assignment.png" alt-text="The Azure portal's Privileged access (Preview) screen after enabling is shown. The option to 'Add assignments' is highlighted.":::
  9. Select a role of member, and select the users and groups to whom you wish to grant cluster access. These assignments can be modified at any time by a group admin. When you're ready to move on, select Next. :::image type="content" source="./media/managed-aad/jit-adding-assignment.png" alt-text="The Azure portal's Add assignments Membership screen is shown, with a sample user selected to be added as a member. The option 'Next' is highlighted.":::
  10. Choose an assignment type of Active, the desired duration, and provide a justification. When you're ready to proceed, select Assign. For more on assignment types, see Assign eligibility for a privileged access group (preview) in Privileged Identity Management. :::image type="content" source="./media/managed-aad/jit-set-active-assignment-details.png" alt-text="The Azure portal's Add assignments Setting screen is shown. An assignment type of 'Active' is selected and a sample justification has been given. The option 'Assign' is highlighted.":::

Once the assignments have been made, verify just-in-time access is working by accessing the cluster. For example:

 az aks get-credentials --resource-group myResourceGroup --name myManagedCluster

Follow the steps to sign in.

Use the kubectl get nodes command to view nodes in the cluster:

kubectl get nodes

Note the authentication requirement and follow the steps to authenticate. If successful, you should see output similar to the following:

To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
NAME                                STATUS   ROLES   AGE     VERSION
aks-nodepool1-61156405-vmss000000   Ready    agent   6m36s   v1.18.14
aks-nodepool1-61156405-vmss000001   Ready    agent   6m42s   v1.18.14
aks-nodepool1-61156405-vmss000002   Ready    agent   6m33s   v1.18.14

Apply Just-in-Time access at the namespace level

  1. Integrate your AKS cluster with Azure RBAC.
  2. Associate the group you want to integrate with Just-in-Time access with a namespace in the cluster through a role assignment (see the sketch after this list for one way to populate $AKS_ID).
az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name>
  3. Associate the group you just configured at the namespace level with PIM to complete the configuration.
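
The $AKS_ID value above is the cluster's Azure resource ID. As a minimal sketch (resource group and cluster names are placeholders), it can be populated with:

# Capture the cluster's resource ID to use as the role assignment scope
AKS_ID=$(az aks show -g <resource-group> -n <cluster-name> --query id -o tsv)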

Troubleshooting

If kubectl get nodes returns an error similar to the following:

Error from server (Forbidden): nodes is forbidden: User "aaaa11111-11aa-aa11-a1a1-111111aaaaa" cannot list resource "nodes" in API group "" at the cluster scope

Make sure the admin of the security group has given your account an Active assignment.

Next steps