---
title: Azure Machine Learning inference HTTP server
titleSuffix: Azure Machine Learning
description: Learn how to enable local development with the Azure Machine Learning inference HTTP server.
author: shivanissambare
ms.author: ssambare
ms.reviewer: larryfr
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: how-to
ms.custom: inference server, local development, local debugging, devplatv2
ms.date: 05/14/2021
---
The Azure Machine Learning inference HTTP server (preview) is a Python package that allows you to easily validate your entry script (`score.py`) in a local development environment. If there's a problem with the scoring script, the server returns an error along with the location where the error occurred.
The server can also be used when creating validation gates in a continuous integration and deployment pipeline. For example, start the server with the candidate script and run the test suite against the local endpoint.
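For illustration, here's a minimal sketch of such a validation gate in Python. It assumes `azmlinfsrv` is on your PATH, the server uses its default port (5001), and the third-party `requests` package is installed; the payload is a placeholder for a real test suite.

```python
# Minimal CI gate sketch. Assumptions: azmlinfsrv is on PATH, the server
# uses the default port (5001), and the `requests` package is installed.
import subprocess
import time

import requests

# Start the server with the candidate scoring script.
server = subprocess.Popen(["azmlinfsrv", "--entry_script", "score.py"])
try:
    # Poll the liveness route until the server is ready (up to ~30 seconds).
    for _ in range(30):
        try:
            if requests.get("http://127.0.0.1:5001/").ok:
                break
        except requests.ConnectionError:
            pass
        time.sleep(1)
    else:
        raise RuntimeError("Server did not become ready in time")

    # Exercise the scoring route; adjust the method and payload to what
    # your entry script expects.
    response = requests.post("http://127.0.0.1:5001/score", json={"data": [1, 2, 3]})
    assert response.ok, f"Scoring request failed: {response.status_code}"
finally:
    server.terminate()
```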
- Requires: Python >=3.7
> [!NOTE]
> To avoid package conflicts, install the server in a virtual environment.
To install the `azureml-inference-server-http` package, run the following command in your cmd/terminal:

```bash
python -m pip install azureml-inference-server-http
```
1. Create a directory to hold your files:

    ```bash
    mkdir server_quickstart
    cd server_quickstart
    ```

2. To avoid package conflicts, create a virtual environment and activate it:

    ```bash
    virtualenv myenv
    source myenv/bin/activate
    ```

3. Install the `azureml-inference-server-http` package from the PyPI feed:

    ```bash
    python -m pip install azureml-inference-server-http
    ```

4. Create your entry script (`score.py`). The following example creates a basic entry script:

    ```bash
    echo '
    import time

    def init():
        time.sleep(1)

    def run(input_data):
        return {"message": "Hello, World!"}
    ' > score.py
    ```

5. Start the server and set `score.py` as the entry script:

    ```bash
    azmlinfsrv --entry_script score.py
    ```

    > [!NOTE]
    > The server is hosted on 0.0.0.0, which means it listens on all IP addresses of the hosting machine.

6. Send a scoring request to the server using `curl`:

    ```bash
    curl -p 127.0.0.1:5001/score
    ```

    The server should respond like this:

    ```json
    {"message": "Hello, World!"}
    ```

Now you can modify the scoring script and test your changes by running the server again.
The server is listening on port 5001 at these routes.
| Name | Route |
| --- | --- |
| Liveness Probe | 127.0.0.1:5001/ |
| Score | 127.0.0.1:5001/score |
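As a quick sanity check, you can query both routes from Python (a sketch that assumes the server is already running locally with the quickstart's `score.py` and that the `requests` package is installed):

```python
# Probe both routes of a locally running server (default port 5001).
import requests

print(requests.get("http://127.0.0.1:5001/").status_code)   # liveness probe, expect 200
print(requests.get("http://127.0.0.1:5001/score").json())   # {'message': 'Hello, World!'}
```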
The following table contains the parameters accepted by the server:
| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| entry_script | True | N/A | The relative or absolute path to the scoring script. |
| model_dir | False | N/A | The relative or absolute path to the directory holding the model used for inferencing. |
| port | False | 5001 | The serving port of the server. |
| worker_count | False | 1 | The number of worker threads that process concurrent requests. |
| appinsights_instrumentation_key | False | N/A | The instrumentation key for the Application Insights resource where logs will be published. |
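For example, you might start the server with `azmlinfsrv --entry_script score.py --model_dir ./model --port 8080 --worker_count 2`. The sketch below shows an entry script that loads a model artifact from the directory passed with `--model_dir`; it assumes the server exposes that directory to the script through the `AZUREML_MODEL_DIR` environment variable (verify this in your environment), and the `model.json` file name is purely illustrative.

```python
# Entry script sketch that loads a model from the --model_dir directory.
# Assumptions: the server exposes the path as AZUREML_MODEL_DIR, and
# model.json stands in for your real model artifact.
import json
import os

model = None

def init():
    global model
    model_dir = os.environ.get("AZUREML_MODEL_DIR", ".")
    with open(os.path.join(model_dir, "model.json")) as f:
        model = json.load(f)

def run(input_data):
    # Replace this with a real inference call against the loaded model.
    return {"loaded_model_keys": list(model)}
```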
The following steps explain how the Azure Machine Learning inference HTTP server handles incoming requests:
- A Python CLI wrapper sits around the server's network stack and is used to start the server.
- A client sends a request to the server.
- When a request is received, it goes through the WSGI server and is then dispatched to one of the workers.
- The requests are then handled by a Flask app, which loads the entry script and any dependencies.
- Finally, the request is sent to your entry script. The entry script then makes an inference call to the loaded model and returns a response.
:::image type="content" source="./media/how-to-inference-server-http/inference-server-architecture.png" alt-text="Diagram of the HTTP server process":::
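To make this flow visible, you can run an entry script like the following sketch; the logging calls exist only for illustration. `init()` runs once when a worker loads the script, while `run()` executes for every request.

```python
# score.py sketch that surfaces the request flow described above.
import logging

logger = logging.getLogger(__name__)

def init():
    # Called once when the worker loads the entry script.
    logger.info("init() called: entry script loaded")

def run(input_data):
    # Called for every scoring request dispatched to this worker.
    logger.info("run() called with payload: %s", input_data)
    return {"echo": str(input_data)}
```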
There are two ways to use Visual Studio Code (VS Code) and the Python extension to debug with the azureml-inference-server-http package:

- Start the AzureML inference server from the command line, and use VS Code with the Python extension to attach to the process.
- Set up a `launch.json` in VS Code, and start the AzureML inference server from within VS Code.

In both cases, you can set breakpoints and debug step by step.
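For the attach workflow (the first option), a minimal `launch.json` sketch might look like the following. It uses the Python extension's standard attach-by-process-ID configuration; the configuration name is illustrative.

```json
{
    // Sketch: attach the VS Code Python debugger to a running azmlinfsrv
    // process. Start the server first, then pick its PID when prompted.
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to azmlinfsrv",
            "type": "python",
            "request": "attach",
            "processId": "${command:pickProcess}"
        }
    ]
}
```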
After changing your scoring script (`score.py`), stop the server with `Ctrl + C`. Then restart it with `azmlinfsrv --entry_script score.py`.
The Azure Machine Learning inference server runs on Windows and Linux based operating systems.
- For more information on creating an entry script and deploying models, see How to deploy a model using Azure Machine Learning.
- Learn about Prebuilt Docker images for inference.