---
title: Autoscale online endpoints
titleSuffix: Azure Machine Learning
description: Learn to scale up online endpoints. Get more CPU, memory, disk space, and extra features.
ms.service: machine-learning
ms.subservice: core
ms.topic: how-to
ms.author: seramasu
author: rsethur
ms.reviewer: larryfr
ms.custom: devplatv2, cliv2, event-tier1-build-2022
ms.date: 04/27/2022
---
Autoscale automatically runs the right amount of resources to handle the load on your application. Online endpoints support autoscaling through integration with the Azure Monitor autoscale feature.
Azure Monitor autoscaling supports a rich set of rules. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination. For more information, see Overview of autoscale in Microsoft Azure.
:::image type="content" source="media/how-to-autoscale-endpoints/concept-autoscale.png" alt-text="Diagram of autoscale adding or removing instances as needed.":::
Today, you can manage autoscaling by using the Azure CLI, REST, Azure Resource Manager, or the browser-based Azure portal. Other Azure Machine Learning SDKs, such as the Python SDK, will add support over time.
- A deployed endpoint. For more information, see Deploy and score a machine learning model by using an online endpoint.
To enable autoscale for an endpoint, you first define an autoscale profile. This profile defines the default, minimum, and maximum scale set capacity. The following example sets the default and minimum capacity as two VM instances, and the maximum capacity as five:
[!INCLUDE cli v2]
The following snippet sets the endpoint and deployment names:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="set_endpoint_deployment_name" :::
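If you don't have the example repository cloned, the referenced snippet boils down to two shell variables. The names here are placeholders; substitute your own endpoint and deployment:

```azurecli
# Placeholder names -- replace with your own endpoint and deployment names.
export ENDPOINT_NAME=my-endpoint
export DEPLOYMENT_NAME=blue
```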
Next, get the Azure Resource Manager ID of the deployment and endpoint:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="set_other_env_variables" :::
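The following sketch is roughly equivalent to the referenced snippet. It assumes the variables from the previous step and that your resource group and workspace are set as Azure CLI defaults (for example, with `az configure --defaults group=<resource-group> workspace=<workspace>`):

```azurecli
# Look up the Azure Resource Manager IDs of the deployment and the endpoint.
DEPLOYMENT_RESOURCE_ID=$(az ml online-deployment show \
  --endpoint-name $ENDPOINT_NAME --name $DEPLOYMENT_NAME \
  --query "id" --output tsv)
ENDPOINT_RESOURCE_ID=$(az ml online-endpoint show \
  --name $ENDPOINT_NAME --query "id" --output tsv)
```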
The following snippet creates the autoscale profile:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="create_autoscale_profile" :::
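A minimal sketch of the equivalent command, assuming the autoscale setting is named `my-scale-settings` (the name used in the rest of this article) and that it targets the deployment ID retrieved in the previous step:

```azurecli
# Create an autoscale setting attached to the deployment:
# default and minimum capacity of 2 instances, maximum of 5.
az monitor autoscale create \
  --name my-scale-settings \
  --resource $DEPLOYMENT_RESOURCE_ID \
  --min-count 2 \
  --max-count 5 \
  --count 2
```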
> [!NOTE]
> For more information, see the reference page for autoscale.
In Azure Machine Learning studio, select your workspace and then select Endpoints from the left side of the page. Once the endpoints are listed, select the one you want to configure.
:::image type="content" source="media/how-to-autoscale-endpoints/select-endpoint.png" alt-text="Screenshot of an endpoint deployment entry in the portal.":::
From the Details tab for the endpoint, select Configure auto scaling.
:::image type="content" source="media/how-to-autoscale-endpoints/configure-auto-scaling.png" alt-text="Screenshot of the configure auto scaling link in endpoint details.":::
Under Choose how to scale your resources, select Custom autoscale to begin the configuration. For the default scale condition, use the following values:
- Set Scale mode to Scale based on a metric.
- Set Minimum to 2.
- Set Maximum to 5.
- Set Default to 2.
:::image type="content" source="media/how-to-autoscale-endpoints/choose-custom-autoscale.png" alt-text="Screenshot showing custom autoscale choice.":::
A common scale-out rule is one that increases the number of VM instances when the average CPU load is high. The following example allocates two more nodes (up to the maximum) if the average CPU load is greater than 70% for five minutes:
[!INCLUDE cli v2]
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_out_on_cpu_util" :::
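A sketch of what such a rule can look like, assuming the `my-scale-settings` profile created earlier and the deployment metric name `CpuUtilizationPercentage`:

```azurecli
# Add 2 instances (up to the maximum of 5) when the deployment's
# average CPU utilization exceeds 70% over a 5-minute window.
az monitor autoscale rule create \
  --autoscale-name my-scale-settings \
  --condition "CpuUtilizationPercentage > 70 avg 5m" \
  --scale out 2
```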
The rule is part of the `my-scale-settings` profile (`autoscale-name` matches the `name` of the profile). The value of its `condition` argument says the rule should trigger when "The average CPU consumption among the VM instances exceeds 70% for five minutes." When that condition is satisfied, two more VM instances are allocated.
> [!NOTE]
> For more information on the CLI syntax, see `az monitor autoscale`.
In the Rules section, select Add a rule. The Scale rule page is displayed. Use the following information to populate the fields on this page:
- Set Metric name to CPU Utilization Percentage.
- Set Operator to Greater than and set the Metric threshold to 70.
- Set Duration (minutes) to 5. Leave the Time grain statistic as Average.
- Set Operation to Increase count by and set Instance count to 2.
Finally, select the Add button to create the rule.
:::image type="content" source="media/how-to-autoscale-endpoints/scale-out-rule.png" lightbox="media/how-to-autoscale-endpoints/scale-out-rule.png" alt-text="Screenshot showing scale out rule >70% CPU for 5 minutes.":::
When load is light, a scale-in rule can reduce the number of VM instances. The following example releases a single node, down to the minimum of 2, if the CPU load is less than 30% for 5 minutes:
[!INCLUDE cli v2]
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_in_on_cpu_util" :::
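Under the same assumptions as the scale-out rule, the scale-in rule is the mirror image:

```azurecli
# Release 1 instance (down to the minimum of 2) when the deployment's
# average CPU utilization stays below 30% over a 5-minute window.
az monitor autoscale rule create \
  --autoscale-name my-scale-settings \
  --condition "CpuUtilizationPercentage < 30 avg 5m" \
  --scale in 1
```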
In the Rules section, select Add a rule. The Scale rule page is displayed. Use the following information to populate the fields on this page:
- Set Metric name to CPU Utilization Percentage.
- Set Operator to Less than and the Metric threshold to 30.
- Set Duration (minutes) to 5.
- Set Operation to Decrease count by and set Instance count to 1.
Finally, select the Add button to create the rule.
:::image type="content" source="media/how-to-autoscale-endpoints/scale-in-rule.png" lightbox="media/how-to-autoscale-endpoints/scale-in-rule.png" alt-text="Screenshot showing scale-in rule":::
If you have both scale-out and scale-in rules, your rules will look similar to the following screenshot. You've specified that if average CPU load exceeds 70% for 5 minutes, 2 more nodes should be allocated, up to the limit of 5. If CPU load is less than 30% for 5 minutes, a single node should be released, down to the minimum of 2.
:::image type="content" source="media/how-to-autoscale-endpoints/autoscale-rules-final.png" lightbox="media/how-to-autoscale-endpoints/autoscale-rules-final.png" alt-text="Screenshot showing autoscale settings including rules.":::
The previous rules applied to the deployment. Now, add a rule that applies to the endpoint. In this example, if the request latency is greater than an average of 70 milliseconds for 5 minutes, allocate another node.
[!INCLUDE cli v2]
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="scale_up_on_request_latency" :::
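A sketch of the equivalent rule. Because the metric comes from the endpoint rather than the deployment, the rule passes the endpoint's Resource Manager ID explicitly; the metric name `RequestLatency` is an assumption here:

```azurecli
# Add 1 instance when the endpoint's average request latency
# exceeds 70 ms over a 5-minute window.
az monitor autoscale rule create \
  --autoscale-name my-scale-settings \
  --condition "RequestLatency > 70 avg 5m" \
  --scale out 1 \
  --resource $ENDPOINT_RESOURCE_ID
```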
From the bottom of the page, select + Add a scale condition.
Select Scale based on metric, and then select Add a rule. The Scale rule page is displayed. Use the following information to populate the fields on this page:
- Set Metric source to Other resource.
- Set Resource type to Machine Learning online endpoints.
- Set Resource to your endpoint.
- Set Metric name to Request latency.
- Set Operator to Greater than and set Metric threshold to 70.
- Set Duration (minutes) to 5.
- Set Operation to Increase count by and set Instance count to 1.
:::image type="content" source="media/how-to-autoscale-endpoints/endpoint-rule.png" lightbox="media/how-to-autoscale-endpoints/endpoint-rule.png" alt-text="Screenshot showing endpoint metrics rules.":::
You can also create rules that apply only on certain days or at certain times. In this example, the node count is set to 2 on the weekend.
[!INCLUDE cli v2]
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="weekend_profile" :::
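A sketch of an equivalent weekend profile, created with `az monitor autoscale profile create`. The profile name and time zone are placeholders:

```azurecli
# Pin capacity to exactly 2 instances on Saturdays and Sundays.
az monitor autoscale profile create \
  --autoscale-name my-scale-settings \
  --name weekend-profile \
  --min-count 2 \
  --max-count 2 \
  --count 2 \
  --recurrence week sat sun \
  --timezone "Pacific Standard Time"
```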
From the bottom of the page, select + Add a scale condition. On the new scale condition, use the following information to populate the fields:
- Select Scale to a specific instance count.
- Set the Instance count to 2.
- Set the Schedule to Repeat specific days.
- Set the schedule to Repeat every Saturday and Sunday.
:::image type="content" source="media/how-to-autoscale-endpoints/schedule-rules.png" lightbox="media/how-to-autoscale-endpoints/schedule-rules.png" alt-text="Screenshot showing schedule-based rules.":::
If you are not going to use your deployments, delete them:
[!INCLUDE cli v2]
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="delete_endpoint" :::
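The cleanup is roughly equivalent to deleting the endpoint, which also removes its deployments:

```azurecli
# Delete the endpoint and all of its deployments without waiting.
az ml online-endpoint delete --name $ENDPOINT_NAME --yes --no-wait
```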
To learn more about autoscale with Azure Monitor, see the following articles: