diff --git a/menu/navigation.json b/menu/navigation.json
index 966f828ffd..53fdf66dae 100644
--- a/menu/navigation.json
+++ b/menu/navigation.json
@@ -1919,10 +1919,6 @@
"label": "Use the scratch storage on H100 GPU Instances with Kapsule",
"slug": "use-scratch-storage-h100"
},
- {
- "label": "Migrate existing ENT1 pools to POP2 Instances",
- "slug": "migrate-ent1-pools-to-pop2"
- },
{
"label": "Deploy x86 and ARM images in Kubernetes",
"slug": "deploy-x86-arm-images"
@@ -2050,6 +2046,10 @@
{
"label": "Wildcard DNS routing",
"slug": "wildcard-dns"
+ },
+ {
+ "label": "Migrate end-of-life pools to newer Instances",
+ "slug": "migrate-end-of-life-pools-to-newer-instances"
}
],
"label": "Additional Content",
diff --git a/pages/kubernetes/how-to/migrate-ent1-pools-to-pop2.mdx b/pages/kubernetes/how-to/migrate-ent1-pools-to-pop2.mdx
deleted file mode 100644
index dcaeb96199..0000000000
--- a/pages/kubernetes/how-to/migrate-ent1-pools-to-pop2.mdx
+++ /dev/null
@@ -1,97 +0,0 @@
----
-meta:
- title: Migrating ENT1 pools to POP2 in your Kubernetes cluster
- description: A step-by-step guide to transitioning from ENT1 to POP2 Instances in Scaleway's Kubernetes Kapsule clusters, ensuring minimal disruption and optimal performance.
-content:
- h1: Migrating ENT1 pools to POP2 in your Kubernetes cluster
- paragraph: A step-by-step guide to transitioning from ENT1 to POP2 Instances in Scaleway's Kubernetes Kapsule clusters, ensuring minimal disruption and optimal performance.
-tags: kubernetes kapsule pop2 transition
-dates:
- validation: 2025-01-24
- posted: 2025-01-24
-categories:
- - containers
----
-
-Scaleway is deprecating [production-optimized **ENT1** Instances](/instances/reference-content/production-optimized/).
-This guide provides a step-by-step process to migrate from **ENT1** Instances to **POP2** Instances within your Scaleway Kubernetes Kapsule clusters.
-
-
-
-- A Scaleway account logged into the [Scaleway console](https://console.scaleway.com)
-- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing actions in the intended Organization
-- [Created](/kubernetes/how-to/create-cluster) a Kubernetes Kapsule or Kosmos cluster
-
-## Identifying your ENT1 pools
-
-1. Log in to the [Scaleway Console](https://console.scaleway.com).
-2. Navigate to **Kubernetes** under the **Containers** section in the side menu of the console.
-3. From the drop-down menu, select the geographical region you want to manage.
-4. Select the cluster containing the ENT1 pools you intend to migrate.
-5. In the **Pools** tab, identify and note the pools using **ENT1** Instances.
-
-## Creating equivalent POP2 pools
-
-1. For each ENT1 pool identified:
- - Click **+ Create pool** (or **Add pool**).
- - Select **POP2** from the **Node Type** dropdown menu.
- - Configure the pool settings (e.g., Availability Zone, size, autoscaling, autoheal) to mirror the existing ENT1 pool as closely as possible.
- - Click **Create** (or **Add pool**) to initiate the new pool.
-
-2. Monitor the status of the new POP2 nodes until they reach the **Ready** state:
- - In the **Pools** tab of the console.
- - Alternatively, use `kubectl` with the command:
- ```
- kubectl get nodes
- ```
- Ensure all POP2 nodes display a **Ready** status.
-
-
- It is recommended to perform these steps during a maintenance window or periods of low traffic to minimize potential disruptions.
-
-
-## Verifying workloads on the new pool
-
-1. [**Cordon**](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/) the ENT1 nodes to prevent them from accepting new pods:
- ```
- kubectl cordon
- ```
-
-2. Drain the ENT1 nodes to reschedule workloads onto the POP2 nodes:
- ```
- kubectl drain --ignore-daemonsets --delete-emptydir-data
- ```
-
- The flags `--ignore-daemonsets` and `--delete-emptydir-data` may be necessary depending on your environment. Refer to the official [Kubernetes documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain) for detailed information on these options.
-
-
-These commands ensure that your workloads are running on the new POP2 nodes before proceeding to delete the ENT1 pool.
-
-## Deleting the ENT1 pool
-
-1. Return to your cluster’s **Pools** tab and wait a few minutes to ensure all workloads have been rescheduled onto POP2 nodes.
-2. Click the **three-dot menu** next to the ENT1 pool.
-3. Select **Delete pool**.
-4. Confirm the deletion.
-
-## Verifying the migration
-
-1. Run the following command to ensure no ENT1-based nodes remain:
- ```
- kubectl get nodes
- ```
-
- Only **POP2** nodes should be listed.
-
-
-2. Test your applications to confirm they are functioning correctly on the new POP2 nodes.
-
-### Migration Highlights
-
-- **Minimal disruption:** Kubernetes manages pod eviction and rescheduling automatically. However, the level of disruption may vary based on your specific workloads and setup. It is recommended to maintain multiple replicas of your services, set up [Pod Disruption Budgets (PDBs)](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) to minimize downtime, and scale up workloads prior to the upgrade.
-- **Flexible scaling:** You can configure the same autoscaling and autoheal policies on your POP2 pools as were set on your ENT1 pools.
-- **Equivalent performance:** In most scenarios, POP2 Instances surpass the performance of ENT1 Instances, with additional CPU and memory-optimized variants available.
-
-
- If you require assistance during the transitioning process, please [contact our Support team](https://console.scaleway.com/support/tickets).
-
\ No newline at end of file
diff --git a/pages/kubernetes/reference-content/migrate-end-of-life-pools-to-newer-instances.mdx b/pages/kubernetes/reference-content/migrate-end-of-life-pools-to-newer-instances.mdx
new file mode 100644
index 0000000000..a8e90870b6
--- /dev/null
+++ b/pages/kubernetes/reference-content/migrate-end-of-life-pools-to-newer-instances.mdx
@@ -0,0 +1,106 @@
+---
+meta:
+ title: Migrating pools of End-of-Life Instances to newer Instances in your Kubernetes Kapsule cluster
+ description: A step-by-step guide to transitioning from deprecated Instance types to newer ones in Scaleway's Kubernetes Kapsule clusters, ensuring minimal disruption and optimal performance.
+content:
+  h1: Migrating End-of-Life Instance pools in your Kubernetes Kapsule cluster
+ paragraph: A step-by-step guide to transitioning from deprecated Instance types to more recent ones in Scaleway's Kubernetes Kapsule clusters, ensuring minimal disruption and optimal performance.
+tags: kubernetes kapsule instance-migration
+dates:
+ validation: 2025-01-24
+ posted: 2025-01-24
+categories:
+ - containers
+---
+
+Scaleway is deprecating support for certain Instance types that have reached their End of Life (EOL).
+This guide outlines the recommended steps to migrate your Kubernetes Kapsule cluster node pools from deprecated Instance types to currently supported ones.
+
+
+
+* A Scaleway account logged into the [Scaleway console](https://console.scaleway.com)
+* [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing actions in the intended Organization
+* [Created](/kubernetes/how-to/create-cluster) a Kubernetes Kapsule or Kosmos cluster
+
+
+## Identifying deprecated Instance pools
+
+1. Log in to the [Scaleway console](https://console.scaleway.com).
+2. Navigate to **Kubernetes** under the **Containers** section in the side menu of the console.
+3. From the drop-down menu, select the geographical region you want to manage.
+4. Select the cluster containing the node pools using deprecated Instances.
+5. In the **Pools** tab, check the **Instance type** column for any pools using deprecated or soon-to-be-removed types.
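+
+Alternatively, you can check Instance types from the command line. A minimal sketch: the `node.kubernetes.io/instance-type` label is standard on Kubernetes nodes, while the `k8s.scaleway.com/pool-name` label is an assumption about how Kapsule labels its pools:
+
+```bash
+# node.kubernetes.io/instance-type is a standard label; k8s.scaleway.com/pool-name
+# is an assumed Kapsule label (verify with `kubectl get nodes --show-labels`)
+kubectl get nodes -L node.kubernetes.io/instance-type -L k8s.scaleway.com/pool-name
+```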
+
+
+## Creating replacement pools with supported Instance types
+
+1. For each deprecated Instance pool identified:
+ - Click **+ Create pool** (or **Add pool**).
+ - Choose a supported Instance type from the **Node Type** dropdown menu.
+ - Configure the pool settings (e.g., Availability Zone, size, autoscaling, autoheal) to mirror the existing pool configuration as closely as possible.
+ - Click **Create** (or **Add pool**) to initiate the new pool.
+
+2. Monitor the status of the new nodes until they reach the **Ready** state (a scripted alternative using `kubectl wait` is sketched below):
+ - In the **Pools** tab of the console.
+ - Alternatively, use `kubectl` with the command:
+ ```bash
+ kubectl get nodes
+ ```
+
+
+ Schedule this migration during a maintenance window or low-traffic period to minimize service disruption.
+
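+For the scripted alternative, `kubectl wait` can block until the new nodes report `Ready`. A minimal sketch, assuming the new pool's nodes carry the `k8s.scaleway.com/pool-name` label mentioned above and that the pool is named `my-new-pool`:
+
+```bash
+# Block (up to 15 minutes) until every node in the new pool reports Ready;
+# the pool label and pool name are assumptions to adapt to your cluster
+kubectl wait node \
+  --selector k8s.scaleway.com/pool-name=my-new-pool \
+  --for=condition=Ready \
+  --timeout=15m
+```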
+
+
+## Migrating workloads to the new pool
+
+1. [**Cordon**](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cordon/) the deprecated nodes to prevent them from receiving new pods:
+    ```bash
+    kubectl cordon <node-name>
+    ```
+
+2. **Drain** the deprecated nodes to reschedule workloads onto the new nodes:
+
+    ```bash
+    kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
+    ```
+
+
+ The flags `--ignore-daemonsets` and `--delete-emptydir-data` may be necessary depending on your environment. Refer to the official [Kubernetes documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain) for detailed information on these options.
+
+
+These commands ensure that your workloads are running on the new nodes before proceeding to delete the old pool.
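+
+If the deprecated pool contains many nodes, you can cordon and drain them in one pass: both commands accept a label selector. A sketch under the same assumption about the `k8s.scaleway.com/pool-name` label (replace `my-old-pool` with the name of the deprecated pool):
+
+```bash
+# Cordon every node in the deprecated pool at once, then drain them in turn
+# (the pool label and pool name are assumptions, as above)
+kubectl cordon --selector k8s.scaleway.com/pool-name=my-old-pool
+kubectl drain --selector k8s.scaleway.com/pool-name=my-old-pool \
+  --ignore-daemonsets --delete-emptydir-data
+```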
+
+
+## Removing deprecated Instance pools
+
+After verifying that workloads have been rescheduled, continue by deleting the old pool(s).
+
+1. Return to your cluster’s **Pools** tab and wait a few minutes to ensure all workloads have been rescheduled onto new nodes (a command-line check is sketched after this list).
+2. Click the **three-dot menu** next to the deprecated pool.
+3. Select **Delete pool**.
+4. Confirm the deletion.
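+
+For the command-line check mentioned in step 1, you can list anything still scheduled on a deprecated node. A sketch; `<old-node-name>` is a placeholder for one of the drained nodes, and DaemonSet pods may legitimately remain:
+
+```bash
+# List any pods still assigned to the drained node
+kubectl get pods --all-namespaces -o wide \
+  --field-selector spec.nodeName=<old-node-name>
+```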
+
+
+## Verifying migration success
+
+1. Check your nodes:
+
+ ```bash
+ kubectl get nodes
+ ```
+
+
+ Only nodes based on supported Instance types should now be listed.
+
+
+2. Test your applications to confirm they are functioning correctly on the new nodes.
+
+
+
+ Minimize downtime by maintaining multiple replicas of key workloads and setting up [Pod Disruption Budgets (PDBs)](https://kubernetes.io/docs/tasks/run-application/configure-pdb/).
+
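+As an illustration, a PDB can also be created directly from the command line. A minimal sketch; the name `my-app-pdb`, the label `app=my-app`, and the budget of two available replicas are example values to adapt to your workloads:
+
+```bash
+# Keep at least 2 replicas of the selected workload available during drains
+# (name, label, and threshold are example values)
+kubectl create poddisruptionbudget my-app-pdb \
+  --selector=app=my-app \
+  --min-available=2
+```
+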
+
+
+  If you require assistance during the migration process, please [contact our Support team](https://console.scaleway.com/support/tickets).
+