
Commit 16c3124

[vision] Migrate vision projects to use snippets extraction (#33222)
### Packages impacted by this PR

- @azure-rest/ai-vision-image-analysis

### Issues associated with this PR

- #32416

### Describe the problem that is addressed by this PR

Updates all projects under `vision` to use snippets extraction.

### What are the possible designs available to address the problem? If there is more than one possible design, why was the one in this PR chosen?

### Are there test cases added in this PR? _(If not, why?)_

### Provide a list of related PRs _(if any)_

### Command used to generate this PR: _(Applicable only to SDK release request PRs)_

### Checklists

- [ ] Added impacted package name to the issue description
- [ ] Does this PR need any fixes in the SDK Generator? _(If so, create an Issue in the [Autorest/typescript](https://github.com/Azure/autorest.typescript) repository and link it here)_
- [ ] Added a changelog (if necessary)
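Snippets extraction keeps README code blocks in sync with compiled sample sources instead of hand-maintained fences: a named region is pulled out of a sample file and copied into the README. As a rough sketch of that idea (the marker syntax and `extractSnippet` helper here are illustrative assumptions, not the actual azure-sdk-for-js dev-tool implementation):

```typescript
// Illustrative sketch only: the real dev-tool operates on compiled
// TypeScript samples and uses a different marker syntax. This shows the
// core idea of copying a named region out of a sample source file.
function extractSnippet(source: string, name: string): string {
  const lines = source.split("\n");
  const start = lines.findIndex((line) => line.includes(`// begin-snippet: ${name}`));
  const end = lines.findIndex((line) => line.includes(`// end-snippet: ${name}`));
  if (start === -1 || end === -1 || end <= start) {
    throw new Error(`Snippet "${name}" not found`);
  }
  // Everything between the markers becomes the README code block body.
  return lines.slice(start + 1, end).join("\n");
}

// Hypothetical sample file content with a named region.
const sampleSource = [
  'import { setLogLevel } from "@azure/logger";',
  "// begin-snippet: SetLogLevel",
  'setLogLevel("info");',
  "// end-snippet: SetLogLevel",
].join("\n");

console.log(extractSnippet(sampleSource, "SetLogLevel")); // setLogLevel("info");
```

Because the README blocks are regenerated from sources that actually compile, the samples cannot silently drift out of date, which is the motivation for this migration.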
1 parent af3ed50 commit 16c3124

18 files changed: +424 −129 lines

sdk/vision/ai-vision-image-analysis-rest/README.md (+176 −74)
@@ -3,12 +3,13 @@
 The Image Analysis service provides AI algorithms for processing images and returning information about their content. In a single service call, you can extract one or more visual features from the image simultaneously, including getting a caption for the image, extracting text shown in the image (OCR) and detecting objects. For more information on the service and the supported visual features, see [Image Analysis overview][image_analysis_overview], and the [Concepts][image_analysis_concepts] page.
 
 Use the Image Analysis client library to:
-* Authenticate against the service
-* Set what features you would like to extract
-* Upload an image for analysis, or send an image URL
-* Get the analysis result
 
-[Product documentation][image_analysis_overview]
+- Authenticate against the service
+- Set what features you would like to extract
+- Upload an image for analysis, or send an image URL
+- Get the analysis result
+
+[Product documentation][image_analysis_overview]
 | [Samples](https://aka.ms/azsdk/image-analysis/samples/js)
 | [Vision Studio][vision_studio]
 | [API reference documentation](https://aka.ms/azsdk/image-analysis/ref-docs/js)
@@ -28,9 +29,9 @@ See our [support policy](https://github.com/Azure/azure-sdk-for-js/blob/main/SUP
 
 - An [Azure subscription](https://azure.microsoft.com/free).
 - A [Computer Vision resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) in your Azure subscription.
-  * You will need the key and endpoint from this resource to authenticate against the service.
-  * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-  * Note that in order to run Image Analysis with the `Caption` or `Dense Captions` features, the Azure resource needs to be from one of the following GPU-supported regions: `East US`, `France Central`, `Korea Central`, `North Europe`, `Southeast Asia`, `West Europe`, or `West US`.
+  - You will need the key and endpoint from this resource to authenticate against the service.
+  - You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+  - Note that in order to run Image Analysis with the `Caption` or `Dense Captions` features, the Azure resource needs to be from one of the following GPU-supported regions: `East US`, `France Central`, `Korea Central`, `North Europe`, `Southeast Asia`, `West Europe`, or `West US`.
 
 ### Install the `@azure-rest/ai-vision-image-analysis` package
 

@@ -64,9 +65,9 @@ For more information about these features, see [Image Analysis overview][image_a
 
 Image Analysis works on images that meet the following requirements:
 
-* The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
-* The file size of the image must be less than 20 megabytes (MB)
-* The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
+- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
+- The file size of the image must be less than 20 megabytes (MB)
+- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
 
 ### ImageAnalysisClient
 
@@ -78,26 +79,20 @@ The `ImageAnalysisClient` is the primary interface for developers interacting wi
 
 Here's an example of how to create an `ImageAnalysisClient` instance using a key-based authentication.
 
-
-```javascript Snippet:const endpoint = "<your_endpoint>";
-const key = "<your_key>";
-const credential = new AzureKeyCredential(key);
-
-const client = new ImageAnalysisClient(endpoint, credential);
-
-const { ImageAnalysisClient } = require("@azure-rest/ai-vision-image-analysis");
-const { AzureKeyCredential } = require('@azure/core-auth');
+```ts snippet:ReadmeSampleCreateClient_KeyCredential
+import { AzureKeyCredential } from "@azure/core-auth";
+import ImageAnalysisClient from "@azure-rest/ai-vision-image-analysis";
 
 const endpoint = "<your_endpoint>";
 const key = "<your_key>";
 const credential = new AzureKeyCredential(key);
-
-const client = new ImageAnalysisClient(endpoint, credential);
+const client = ImageAnalysisClient(endpoint, credential);
 ```
 
 #### Create ImageAnalysisClient with a Microsoft Entra ID Credential
 
 **Prerequisites for Entra ID Authentication**:
+
 - The role `Cognitive Services User` assigned to you. Role assignment can be done via the "Access Control (IAM)" tab of your Computer Vision resource in the Azure portal.
 - [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli) installed.
 - You are logged into your Azure account by running `az login`.
@@ -110,92 +105,199 @@ Client subscription key authentication is used in most of the examples in this g
 npm install @azure/identity
 ```
 
-```javascript Snippet:ImageAnalysisEntraIDAuth
+```ts snippet:ReadmeSampleCreateClient_DefaultAzureCredential
+import { DefaultAzureCredential } from "@azure/identity";
+import ImageAnalysisClient from "@azure-rest/ai-vision-image-analysis";
+
 const endpoint = "<your_endpoint>";
 const credential = new DefaultAzureCredential();
-
-const client = new ImageAnalysisClient(endpoint, credential);
+const client = ImageAnalysisClient(endpoint, credential);
 ```
+
 ### Analyze an image from URL
 
 The following example demonstrates how to analyze an image using the Image Analysis client library for JavaScript.
 
-```javascript Snippet:ImageAnalysisFromUrl
+```ts snippet:ReadmeSampleAnalyzeImageFromUrl
+import { DefaultAzureCredential } from "@azure/identity";
+import ImageAnalysisClient, { isUnexpected } from "@azure-rest/ai-vision-image-analysis";
+
+const endpoint = "<your_endpoint>";
+const credential = new DefaultAzureCredential();
+const client = ImageAnalysisClient(endpoint, credential);
+
 const imageUrl = "https://example.com/image.jpg";
 const features = ["Caption", "DenseCaptions", "Objects", "People", "Read", "SmartCrops", "Tags"];
 
-async function analyzeImageFromUrl() {
-  const result = await client.path("/imageanalysis:analyze").post({
-    body: {
-      url: imageUrl,
-    },
-    queryParameters: {
-      features: features,
-      "smartCrops-aspect-ratios": [0.9, 1.33],
-    },
-    contentType: "application/json",
-  });
+const result = await client.path("/imageanalysis:analyze").post({
+  body: {
+    url: imageUrl,
+  },
+  queryParameters: {
+    features: features,
+    "smartCrops-aspect-ratios": [0.9, 1.33],
+  },
+  contentType: "application/json",
+});
+if (isUnexpected(result)) {
+  throw result.body.error;
+}
+
+console.log(`Model Version: ${result.body.modelVersion}`);
+console.log(`Image Metadata: ${JSON.stringify(result.body.metadata)}`);
+
+if (result.body.captionResult) {
+  console.log(
+    `Caption: ${result.body.captionResult.text} (confidence: ${result.body.captionResult.confidence})`,
+  );
+}
+
+if (result.body.denseCaptionsResult) {
+  for (const denseCaption of result.body.denseCaptionsResult.values) {
+    console.log(`Dense Caption: ${JSON.stringify(denseCaption)}`);
+  }
+}
+
+if (result.body.objectsResult) {
+  for (const object of result.body.objectsResult.values) {
+    console.log(`Object: ${JSON.stringify(object)}`);
+  }
+}
+
+if (result.body.peopleResult) {
+  for (const person of result.body.peopleResult.values) {
+    console.log(`Person: ${JSON.stringify(person)}`);
+  }
+}
+
+if (result.body.readResult) {
+  for (const block of result.body.readResult.blocks) {
+    console.log(`Text Block: ${JSON.stringify(block)}`);
+  }
+}
 
-  console.log("Image analysis result:", result.body);
+if (result.body.smartCropsResult) {
+  for (const smartCrop of result.body.smartCropsResult.values) {
+    console.log(`Smart Crop: ${JSON.stringify(smartCrop)}`);
+  }
 }
 
-analyzeImageFromUrl();
+if (result.body.tagsResult) {
+  for (const tag of result.body.tagsResult.values) {
+    console.log(`Tag: ${JSON.stringify(tag)}`);
+  }
+}
 ```
 
 ### Analyze an image from a local file
 
 In this example, we will analyze an image from a local file using the Image Analysis client library for JavaScript.
 
-```javascript Snippet:ImageAnalysisFromLocalFile
-const fs = require("fs");
+```ts snippet:ReadmeSampleAnalyzeImageFromFile
+import { DefaultAzureCredential } from "@azure/identity";
+import ImageAnalysisClient, { isUnexpected } from "@azure-rest/ai-vision-image-analysis";
+import { readFileSync } from "node:fs";
+
+const endpoint = "<your_endpoint>";
+const credential = new DefaultAzureCredential();
+const client = ImageAnalysisClient(endpoint, credential);
 
 const imagePath = "./path/to/your/image.jpg";
 const features = ["Caption", "DenseCaptions", "Objects", "People", "Read", "SmartCrops", "Tags"];
 
-async function analyzeImageFromFile() {
-  const imageBuffer = fs.readFileSync(imagePath);
+const imageBuffer = readFileSync(imagePath);
+
+const result = await client.path("/imageanalysis:analyze").post({
+  body: imageBuffer,
+  queryParameters: {
+    features: features,
+    "smartCrops-aspect-ratios": [0.9, 1.33],
+  },
+  contentType: "application/octet-stream",
+});
+if (isUnexpected(result)) {
+  throw result.body.error;
+}
+
+console.log(`Model Version: ${result.body.modelVersion}`);
+console.log(`Image Metadata: ${JSON.stringify(result.body.metadata)}`);
 
-  const result = await client.path("/imageanalysis:analyze").post({
-    body: imageBuffer,
-    queryParameters: {
-      features: features,
-      "smartCrops-aspect-ratios": [0.9, 1.33],
-    },
-    contentType: "application/octet-stream",
-  });
+if (result.body.captionResult) {
+  console.log(
+    `Caption: ${result.body.captionResult.text} (confidence: ${result.body.captionResult.confidence})`,
+  );
+}
+
+if (result.body.denseCaptionsResult) {
+  for (const denseCaption of result.body.denseCaptionsResult.values) {
+    console.log(`Dense Caption: ${JSON.stringify(denseCaption)}`);
+  }
+}
 
-  console.log("Image analysis result:", result.body);
+if (result.body.objectsResult) {
+  for (const object of result.body.objectsResult.values) {
+    console.log(`Object: ${JSON.stringify(object)}`);
+  }
+}
+
+if (result.body.peopleResult) {
+  for (const person of result.body.peopleResult.values) {
+    console.log(`Person: ${JSON.stringify(person)}`);
+  }
 }
 
-analyzeImageFromFile();
+if (result.body.readResult) {
+  for (const block of result.body.readResult.blocks) {
+    console.log(`Text Block: ${JSON.stringify(block)}`);
+  }
+}
+
+if (result.body.smartCropsResult) {
+  for (const smartCrop of result.body.smartCropsResult.values) {
+    console.log(`Smart Crop: ${JSON.stringify(smartCrop)}`);
+  }
+}
+
+if (result.body.tagsResult) {
+  for (const tag of result.body.tagsResult.values) {
+    console.log(`Tag: ${JSON.stringify(tag)}`);
+  }
+}
 ```
 
 ### Extract text from an image Url
+
 This example demonstrates how to extract printed or hand-written text for the image file [sample.jpg](https://aka.ms/azsdk/image-analysis/sample.jpg) using the ImageAnalysisClient. The method call returns an ImageAnalysisResult object. The ReadResult property on the returned object includes a list of text lines and a bounding polygon surrounding each text line. For each line, it also returns a list of words in the text line and a bounding polygon surrounding each word.
-``` javascript Snippet:readmeText
-const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
 
-const features: string[] = [
-  'Read'
-];
+```ts snippet:ReadmeSampleExtractTextFromImageUrl
+import { DefaultAzureCredential } from "@azure/identity";
+import ImageAnalysisClient, { isUnexpected } from "@azure-rest/ai-vision-image-analysis";
+
+const endpoint = "<your_endpoint>";
+const credential = new DefaultAzureCredential();
+const client = ImageAnalysisClient(endpoint, credential);
 
-const imageUrl: string = 'https://aka.ms/azsdk/image-analysis/sample.jpg';
+const features: string[] = ["Read"];
+const imageUrl: string = "https://aka.ms/azsdk/image-analysis/sample.jpg";
 
-client.path('/imageanalysis:analyze').post({
+const result = await client.path("/imageanalysis:analyze").post({
   body: { url: imageUrl },
   queryParameters: { features: features },
-  contentType: 'application/json'
-}).then(result => {
-  const iaResult: ImageAnalysisResultOutput = result.body as ImageAnalysisResultOutput;
-
-  // Process the response
-  if (iaResult.readResult && iaResult.readResult.blocks.length > 0) {
-    iaResult.readResult.blocks.forEach(block => {
-      console.log(`Detected text block: ${JSON.stringify(block)}`);
-    });
-  } else {
-    console.log('No text blocks detected.');
+  contentType: "application/json",
+});
+if (isUnexpected(result)) {
+  throw result.body.error;
+}
+
+// Process the response
+const imageAnalysisResult = result.body;
+if (imageAnalysisResult.readResult && imageAnalysisResult.readResult.blocks.length > 0) {
+  for (const block of imageAnalysisResult.readResult.blocks) {
+    console.log(`Detected text block: ${JSON.stringify(block)}`);
   }
+} else {
+  console.log("No text blocks detected.");
+}
 ```
 
 ## Troubleshooting
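Every migrated sample above guards the response with `isUnexpected` before touching the body. As a minimal sketch of that discriminator pattern, assuming a simplified response shape (the real SDK's `isUnexpected` is a generated type guard keyed on the operation's status codes; this stand-in is illustrative only):

```typescript
// Simplified sketch of the `isUnexpected` guard used in the samples above.
// The response shape and helper names here are illustrative assumptions,
// not the SDK's actual generated types.
interface AnalyzeResponse {
  status: string;
  body: {
    error?: { code: string; message: string };
    modelVersion?: string;
  };
}

function isUnexpected(response: AnalyzeResponse): boolean {
  // "200" is the only success status for this sketch.
  return response.status !== "200";
}

function modelVersionOrThrow(response: AnalyzeResponse): string {
  if (isUnexpected(response)) {
    // Surface the service error payload, as the README samples do.
    throw new Error(response.body.error?.message ?? "unknown service error");
  }
  return response.body.modelVersion ?? "unknown";
}

console.log(modelVersionOrThrow({ status: "200", body: { modelVersion: "2023-10-01" } })); // 2023-10-01
```

Checking the discriminator first is what lets the success branch be typed without casts, which is why the migration could drop the old `as ImageAnalysisResultOutput` assertion.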
@@ -204,8 +306,8 @@ client.path('/imageanalysis:analyze').post({
 
 Enabling logging may help uncover useful information about failures. In order to see a log of HTTP requests and responses, set the `AZURE_LOG_LEVEL` environment variable to `info`. Alternatively, logging can be enabled at runtime by calling `setLogLevel` in the `@azure/logger`:
 
-```javascript
-const { setLogLevel } = require("@azure/logger");
+```ts snippet:SetLogLevel
+import { setLogLevel } from "@azure/logger";
 
 setLogLevel("info");
 ```
@@ -228,4 +330,4 @@ If you'd like to contribute to this library, please read the [contributing guide
 [image_analysis_concepts]: https://learn.microsoft.com/azure/ai-services/computer-vision/concept-tag-images-40
 [vision_studio]: https://aka.ms/vision-studio/image-analysis
 [azure_identity]: https://learn.microsoft.com/javascript/api/overview/azure/identity-readme
-[azure_identity_dac]: https://learn.microsoft.com/javascript/api/@azure/identity/defaultazurecredential
+[azure_identity_dac]: https://learn.microsoft.com/javascript/api/@azure/identity/defaultazurecredential

0 commit comments
