---
title: Configure dual-stack kubenet networking in Azure Kubernetes Service (AKS)
description: Learn how to configure dual-stack kubenet networking in Azure Kubernetes Service (AKS)
services: container-service
ms.topic: article
ms.date: 12/15/2021
---

Use dual-stack kubenet networking in Azure Kubernetes Service (AKS) (Preview)

AKS clusters can now be deployed in a dual-stack (using both IPv4 and IPv6 addresses) mode when using kubenet networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).

This article shows you how to use dual-stack networking with an AKS cluster. For more information on network options and considerations, see Network concepts for Kubernetes and AKS.

[!INCLUDE preview features callout]

Limitations

Note

Dual-stack kubenet networking is currently not available in sovereign clouds. This note will be removed when rollout is complete.

  • Azure Route Tables have a hard limit of 400 routes per table. Because each node in a dual-stack cluster requires two routes, one per IP address family, dual-stack clusters are limited to 200 nodes. A command to count the routes currently in use is shown after this list.
  • During preview, service objects are only supported with externalTrafficPolicy: Local.
  • Dual-stack networking is required for both the Azure virtual network and the pod CIDR; single-stack IPv6-only isn't supported for node or pod IP addresses. Services can be provisioned on IPv4 or IPv6.
  • Features not supported on dual-stack kubenet include:
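
To see how close a cluster is to the 400-route limit, the routes in the cluster's route table can be counted with the Azure CLI. For a kubenet cluster the route table lives in the cluster's node resource group; the resource group and route table names below are placeholders:

# Count the routes in the cluster's route table (two routes are added per dual-stack node)
az network route-table route list \
    --resource-group <NodeResourceGroupName> \
    --route-table-name <RouteTableName> \
    --query "length(@)"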

Prerequisites

  • All prerequisites from configure kubenet networking apply.
  • AKS dual-stack clusters require Kubernetes version v1.21.2 or greater. v1.22.2 or greater is recommended to take advantage of the out-of-tree cloud controller manager, which is the default on v1.22 and up.
  • Azure CLI with the aks-preview extension 0.5.48 or newer. A quick way to verify the installed version is shown after this list.
  • If using Azure Resource Manager templates, schema version 2021-10-01 is required.
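
As a quick sanity check before creating the cluster, you can list the installed extension version and the Kubernetes versions available in your region (assuming the aks-preview extension is already installed as described below):

# Check the installed aks-preview extension version (0.5.48 or newer is required)
az extension show --name aks-preview --query version -o tsv

# List the Kubernetes versions available in your region (v1.21.2 or greater is required)
az aks get-versions --location <Region> -o table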

Register the AKS-EnableDualStack preview feature

To create an AKS dual-stack cluster, you must enable the AKS-EnableDualStack feature flag on your subscription.

Register the AKS-EnableDualStack feature flag by using the az feature register command, as shown in the following example:

az feature register --namespace "Microsoft.ContainerService" --name "AKS-EnableDualStack"

It takes a few minutes for the status to show Registered. Verify the registration status by using the az feature list command:

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-EnableDualStack')].{Name:name,State:properties.state}"

When ready, refresh the registration of the Microsoft.ContainerService resource provider by using the az provider register command:

az provider register --namespace Microsoft.ContainerService
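
Provider registration can also take a few minutes. If you want to confirm it completed, query the current registration state:

# Confirm the resource provider shows as "Registered"
az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv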

Install the aks-preview CLI extension

# Install the aks-preview extension
az extension add --name aks-preview

# Update the extension to make sure you have the latest version installed
az extension update --name aks-preview

Overview of dual-stack networking in Kubernetes

Kubernetes v1.23 brings stable upstream support for IPv4/IPv6 dual-stack clusters, including pod and service networking. Nodes and pods are always assigned both an IPv4 and an IPv6 address, while services can be single-stack on either address family or dual-stack.

AKS configures the required supporting services for dual-stack networking. This configuration includes:

  • Dual-stack virtual network configuration (if managed Virtual Network is used)
  • IPv4 and IPv6 node and pod addresses
  • Outbound rules for both IPv4 and IPv6 traffic
  • Load balancer setup for IPv4 and IPv6 services

Deploying a dual-stack cluster

Three new attributes are provided to support dual-stack clusters (an example command using all three is shown after this list):

  • --ip-families - takes a comma-separated list of IP families to enable on the cluster.
    • Currently only ipv4 or ipv4,ipv6 are supported.
  • --pod-cidrs - takes a comma-separated list of CIDR notation IP ranges to assign pod IPs from.
    • The count and order of ranges in this list must match the value provided to --ip-families.
    • If no values are supplied, the default values of 10.244.0.0/16,fd12:3456:789a::/64 will be used.
  • --service-cidrs - takes a comma-separated list of CIDR notation IP ranges to assign service IPs from.
    • The count and order of ranges in this list must match the value provided to --ip-families.
    • If no values are supplied, the default values of 10.0.0.0/16,fd12:3456:789a:1::/108 will be used.
    • The IPv6 subnet assigned to --service-cidrs can be no larger than a /108.
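
As an illustration, the following command (a sketch; the CIDR values shown simply repeat the defaults listed above) creates a dual-stack cluster with all three parameters set explicitly:

# Create a dual-stack cluster with explicit pod and service CIDR ranges
# (the ranges shown are the documented defaults, repeated for illustration)
az aks create -l <Region> -g <ResourceGroupName> -n <ClusterName> \
    --ip-families ipv4,ipv6 \
    --pod-cidrs "10.244.0.0/16,fd12:3456:789a::/64" \
    --service-cidrs "10.0.0.0/16,fd12:3456:789a:1::/108"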

Deploy the cluster

Deploying a dual-stack cluster requires passing the --ip-families parameter with the value ipv4,ipv6 to indicate that a dual-stack cluster should be created.

  1. First, create a resource group to create the cluster in:

    az group create -l <Region> -n <ResourceGroupName>
    
  2. Then create the cluster itself:

    az aks create -l <Region> -g <ResourceGroupName> -n <ClusterName> --ip-families ipv4,ipv6
    

When using an Azure Resource Manager template to deploy, pass ["IPv4", "IPv6"] to the ipFamilies parameter in the networkProfile object. See the Azure Resource Manager template documentation for help with deploying this template, if needed.

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": {
      "type": "string",
      "defaultValue": "aksdualstack"
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]"
    },
    "kubernetesVersion": {
      "type": "string",
      "defaultValue": "1.22.2"
    },
    "nodeCount": {
      "type": "int",
      "defaultValue": 3
    },
    "nodeSize": {
      "type": "string",
      "defaultValue": "Standard_B2ms"
    }
  },
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2021-10-01",
      "name": "[parameters('clusterName')]",
      "location": "[parameters('location')]",
      "identity": {
        "type": "SystemAssigned"
      },
      "properties": {
        "agentPoolProfiles": [
          {
            "name": "nodepool1",
            "count": "[parameters('nodeCount')]",
            "mode": "System",
            "vmSize": "[parameters('nodeSize')]"
          }
        ],
        "dnsPrefix": "[parameters('clusterName')]",
        "kubernetesVersion": "[parameters('kubernetesVersion')]",
        "networkProfile": {
          "ipFamilies": [
            "IPv4",
            "IPv6"
          ]
        }
      }
    }
  ]
}

When using a Bicep template to deploy, pass ["IPv4", "IPv6"] to the ipFamilies parameter in the networkProfile object. See the Bicep template documentation for help with deploying this template, if needed.

param clusterName string = 'aksdualstack'
param location string = resourceGroup().location
param kubernetesVersion string = '1.22.2'
param nodeCount int = 3
param nodeSize string = 'Standard_B2ms'

resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-10-01' = {
  name: clusterName
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    agentPoolProfiles: [
      {
        name: 'nodepool1'
        count: nodeCount
        mode: 'System'
        vmSize: nodeSize
      }
    ]
    dnsPrefix: clusterName
    kubernetesVersion: kubernetesVersion
    networkProfile: {
      ipFamilies: [
        'IPv4'
        'IPv6'
      ]
    }
  }
}
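
Either template can be deployed into an existing resource group with az deployment group create. A minimal example, assuming the Bicep file above has been saved locally as main.bicep (a hypothetical filename):

# Deploy the dual-stack cluster template into an existing resource group
# main.bicep is a placeholder name for the Bicep file shown above
az deployment group create -g <ResourceGroupName> --template-file main.bicep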

Finally, after the cluster has been created, get the admin credentials:

az aks get-credentials -g <ResourceGroupName> -n <ClusterName> -a

Inspect the nodes to see both IP families

Once the cluster is provisioned, confirm that the nodes are provisioned with dual-stack networking:

kubectl get nodes -o=custom-columns="NAME:.metadata.name,ADDRESSES:.status.addresses[?(@.type=='InternalIP')].address,PODCIDRS:.spec.podCIDRs[*]"

The output from the kubectl get nodes command will show that the nodes have addresses and pod IP assignment space from both IPv4 and IPv6.

NAME                                ADDRESSES                           PODCIDRS
aks-nodepool1-14508455-vmss000000   10.240.0.4,2001:1234:5678:9abc::4   10.244.0.0/24,fd12:3456:789a::/80
aks-nodepool1-14508455-vmss000001   10.240.0.5,2001:1234:5678:9abc::5   10.244.1.0/24,fd12:3456:789a:0:1::/80
aks-nodepool1-14508455-vmss000002   10.240.0.6,2001:1234:5678:9abc::6   10.244.2.0/24,fd12:3456:789a:0:2::/80

Create an example workload

Deploy an nginx web server

Once the cluster has been created, workloads can be deployed as usual. A simple example web server can be created using the following command:

kubectl create deployment nginx --image=nginx:latest --replicas=3

The same deployment can also be created by applying the following YAML manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx

The following kubectl get pods command shows that the pods have both IPv4 and IPv6 addresses (note that pods won't show IP addresses until they are ready):

kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
NAME                     IPs                                NODE                                READY
nginx-55649fd747-9cr7h   10.244.2.2,fd12:3456:789a:0:2::2   aks-nodepool1-14508455-vmss000002   True
nginx-55649fd747-p5lr9   10.244.0.7,fd12:3456:789a::7       aks-nodepool1-14508455-vmss000000   True
nginx-55649fd747-r2rqh   10.244.1.2,fd12:3456:789a:0:1::2   aks-nodepool1-14508455-vmss000001   True

Expose the workload via a LoadBalancer-type service

Important

There are currently two limitations pertaining to IPv6 services in AKS. These are both preview limitations and work is underway to remove them.

  • Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. This traffic cannot be routed to a pod, so traffic flowing to IPv6 services deployed with externalTrafficPolicy: Cluster will fail. During preview, IPv6 services must be deployed with externalTrafficPolicy: Local, which causes kube-proxy to respond to the probe on the node itself.
  • Only the first IP address of a service will be provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, create two services targeting the same selector: one for IPv4 and one for IPv6.

IPv6 services in Kubernetes can be exposed publicly in the same way as IPv4 services.

kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer --overrides='{"spec":{"externalTrafficPolicy":"Local"}}'
kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"externalTrafficPolicy":"Local", "ipFamilies": ["IPv6"]}}'

service/nginx-ipv4 exposed
service/nginx-ipv6 exposed

The same services can also be created by applying the following YAML manifests:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-ipv4
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-ipv6
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv6
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Once the deployment has been exposed and the LoadBalancer services have been fully provisioned, kubectl get services will show the IP addresses of the services:

kubectl get services
NAME         TYPE           CLUSTER-IP               EXTERNAL-IP         PORT(S)        AGE
nginx-ipv4   LoadBalancer   10.0.88.78               20.46.24.24         80:30652/TCP   97s
nginx-ipv6   LoadBalancer   fd12:3456:789a:1::981a   2603:1030:8:5::2d   80:32002/TCP   63s

Next, we can verify functionality via a command-line web request from an IPv6-capable host (note that Azure Cloud Shell isn't IPv6-capable):

SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://[${SERVICE_IP}]" | head -n5
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
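
The IPv4 service can be verified the same way from any host, mirroring the IPv6 check above:

# Fetch the public IPv4 address of the nginx-ipv4 service and request the default page
SERVICE_IP=$(kubectl get services nginx-ipv4 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${SERVICE_IP}" | head -n5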