Commit de2df23

Committed May 13, 2021
incorporating feedback
1 parent 4f27beb

File tree

2 files changed: +2 -2 lines changed


articles/machine-learning/how-to-azureml-inference-server-http.md renamed to articles/machine-learning/how-to-inference-server-http.md

Lines changed: 2 additions & 2 deletions
@@ -123,7 +123,7 @@ The following table contains the parameters accepted by the server:
 | port | False | 5001 | The serving port of the server. |
 | worker_count | False | 1 | The number of worker threads that will process concurrent requests. |
 
-## Request Flow
+## Request flow
 
 The following steps explain how the Azure Machine Learning inference HTTP server handles incoming requests:
 
@@ -135,7 +135,7 @@ The following steps explain how the Azure Machine Learning inference HTTP server
 1. The requests are then handled by a [Flask](https://flask.palletsprojects.com/) app, which loads the entry script & any dependencies.
 1. Finally, the request is sent to your entry script. The entry script then makes an inference call to the loaded model and returns a response.
 
-:::image type="content" source="./media/how-to-azureml-inference-server-http/azureml-inference-server-arch.png" alt-text="Diagram of the HTTP server process":::
+:::image type="content" source="./media/how-to-inference-server-http/inference-server-architecture.png" alt-text="Diagram of the HTTP server process":::
 ## Frequently asked questions
 
 ### Do I need to reload the server when changing the score script?
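
As a quick illustration of the parameter table touched by the first hunk: assuming the `azmlinfsrv` command installed by the `azureml-inference-server-http` package (the CLI name and exact flag spellings are assumptions here, not confirmed by this diff), a launch that overrides both documented defaults might look like `azmlinfsrv --entry_script score.py --port 8080 --worker_count 2`.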
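
The request flow in the second hunk ends with the entry script making an inference call to the loaded model. Below is a minimal sketch of what such a script might look like, assuming the common Azure Machine Learning scoring-script convention of an `init()` that loads the model once and a `run()` that serves each request; the `model.pkl` filename, the `AZUREML_MODEL_DIR` fallback, and the JSON payload shape are illustrative assumptions.

```python
# score.py -- illustrative entry script; file names and payload shape are assumptions.
import json
import os

import joblib

model = None


def init():
    # Called once at server startup: load the model into memory.
    # AZUREML_MODEL_DIR is the conventional location for a registered model;
    # falling back to the current directory here is just for local testing.
    global model
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR", "."), "model.pkl")
    model = joblib.load(model_path)


def run(raw_data):
    # Called once per request: parse the JSON body, run inference, return a response.
    # Assumes a scikit-learn-style model whose predict() returns a NumPy array.
    data = json.loads(raw_data)["data"]
    predictions = model.predict(data)
    return {"predictions": predictions.tolist()}
```

The Flask app described in the steps above would import this module, call `init()` once at startup, and hand the body of each scoring request to `run()`.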
