
Commit fb048d3

Committed Aug 18, 2021
cleanup on previous PR
1 parent f3d6d59 commit fb048d3


5 files changed: 32 additions, 34 deletions

articles/batch/batch-docker-container-workloads.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 title: Container workloads
 description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks.
 ms.topic: how-to
-ms.date: 08/13/2021
+ms.date: 08/18/2021
 ms.custom: "seodec18, devx-track-csharp"
 ---
 # Run container applications on Azure Batch
@@ -269,7 +269,7 @@ CloudPool pool = batchClient.PoolOperations.CreatePool(
 
 ### Managed Identity support for ACR
 
-When accessing containers stored in [Azure Container Registry](https://azure.microsoft.com/services/container-registry), either a username/password or a Managed Identity can be used to authenticate with the service. To use a Managed Identity, first ensure that the identity has been [assigned to the pool](managed-identity-pools.md) and that the identity has the `AcrPull` role assigned for the container registry you wish to access. Then, simply tell Batch which identity to use when authenticating with ACR.
+When accessing containers stored in [Azure Container Registry](https://azure.microsoft.com/services/container-registry), either a username/password or a managed identity can be used to authenticate with the service. To use a managed identity, first ensure that the identity has been [assigned to the pool](managed-identity-pools.md) and that the identity has the `AcrPull` role assigned for the container registry you wish to access. Then, simply tell Batch which identity to use when authenticating with ACR.
 
 ```csharp
 ContainerRegistry containerRegistry = new ContainerRegistry(
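Note: the `csharp` snippet in this hunk is truncated by the diff view. As a rough sketch only (not part of this commit), the identity-based registry reference described above might be wired up as follows; the registry server, identity resource ID, and constructor parameter names are assumptions based on the Batch .NET `ContainerRegistry` and `ComputeNodeIdentityReference` types.

```csharp
using Microsoft.Azure.Batch;

// Sketch: authenticate to ACR with a user-assigned managed identity that has
// AcrPull on the registry and is assigned to the pool. Values are placeholders.
ContainerRegistry containerRegistry = new ContainerRegistry(
    registryServer: "myregistry.azurecr.io",
    identityReference: new ComputeNodeIdentityReference()
    {
        ResourceId = "/subscriptions/SUB/resourceGroups/RG/providers/"
                   + "Microsoft.ManagedIdentity/userAssignedIdentities/identity-name"
    });

// The registry reference is then added to the pool's container configuration,
// for example: containerConfig.ContainerRegistries.Add(containerRegistry);
```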

articles/batch/batch-task-output-files.md

Lines changed: 12 additions & 12 deletions
@@ -2,7 +2,7 @@
 title: Persist output data to Azure Storage with Batch service API
 description: Learn how to use the Batch service API to persist Batch task and job output data to Azure Storage.
 ms.topic: how-to
-ms.date: 07/30/2020
+ms.date: 08/18/2021
 ms.custom: "seodec18, devx-track-csharp"
 
 ---
@@ -11,9 +11,9 @@ ms.custom: "seodec18, devx-track-csharp"
 
 [!INCLUDE [batch-task-output-include](../../includes/batch-task-output-include.md)]
 
-The Batch service API supports persisting output data to Azure Storage for tasks and job manager tasks that run on pools with the virtual machine configuration. When you add a task, you can specify a container in Azure Storage as the destination for the task's output. The Batch service then writes any output data to that container when the task is complete.
+The Batch service API supports persisting output data to Azure Storage for tasks and job manager tasks that run on pools with [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration). When you add a task, you can specify a container in Azure Storage as the destination for the task's output. The Batch service then writes any output data to that container when the task is complete.
 
-An advantage to using the Batch service API to persist task output is that you do not need to modify the application that the task is running. Instead, with a few modifications to your client application, you can persist the task's output from within the same code that creates the task.
+When using the Batch service API to persist task output, you don't need to modify the application that the task is running. Instead, with a few modifications to your client application, you can persist the task's output from within the same code that creates the task.
 
 > [!IMPORTANT]
 > Persisting task data to Azure Storage with the Batch service API does not work with pools created before [February 1, 2018](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md#1204).
@@ -27,11 +27,11 @@ Azure Batch provides more than one way to persist task output. Using the Batch s
 - You want to persist output to an Azure Storage container with an arbitrary name.
 - You want to persist output to an Azure Storage container named according to the [Batch File Conventions standard](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/batch/Microsoft.Azure.Batch.Conventions.Files).
 
-If your scenario differs from those listed above, you may need to consider a different approach. For example, the Batch service API does not currently support streaming output to Azure Storage while the task is running. To stream output, consider using the Batch File Conventions library, available for .NET. For other languages, you'll need to implement your own solution. For information on other options for persisting task output, see [Persist job and task output to Azure Storage](batch-task-output.md).
+If your scenario differs from those listed above, you may need to consider a different approach. For example, the Batch service API does not currently support streaming output to Azure Storage while the task is running. To stream output, consider using the Batch File Conventions library, available for .NET. For other languages, you'll need to implement your own solution. For more information about other options, see [Persist job and task output to Azure Storage](batch-task-output.md).
 
 ## Create a container in Azure Storage
 
-To persist task output to Azure Storage, you'll need to create a container that serves as the destination for your output files. Create the container before you run your task, preferably before you submit your job. To create the container, use the appropriate Azure Storage client library or SDK. For more information about Azure Storage APIs, see the [Azure Storage documentation](../storage/index.yml).
+To persist task output to Azure Storage, you'll need to create a container that serves as the destination for your output files. Create the container before you run your task, preferably before you submit your job, by using the appropriate Azure Storage client library or SDK. For more information about Azure Storage APIs, see the [Azure Storage documentation](../storage/index.yml).
 
 For example, if you are writing your application in C#, use the [Azure Storage client library for .NET](https://www.nuget.org/packages/WindowsAzure.Storage/). The following example shows how to create a container:
 
@@ -42,7 +42,7 @@ await container.CreateIfNotExists();
 
 ## Get a shared access signature for the container
 
-After you create the container, get a shared access signature (SAS) with write access to the container. A SAS provides delegated access to the container. The SAS grants access with a specified set of permissions and over a specified time interval. The Batch service needs a SAS with write permissions to write task output to the container. For more information about SAS, see [Using shared access signatures \(SAS\) in Azure Storage](../storage/common/storage-sas-overview.md).
+After you create the container, get a shared access signature (SAS) with write access to the container. A SAS provides delegated access to the container. The SAS grants access with a specified set of permissions and over a specified time interval. The Batch service needs an SAS with write permissions to write task output to the container. For more information about SAS, see [Using shared access signatures \(SAS\) in Azure Storage](../storage/common/storage-sas-overview.md).
 
 When you get a SAS using the Azure Storage APIs, the API returns a SAS token string. This token string includes all parameters of the SAS, including the permissions and the interval over which the SAS is valid. To use the SAS to access a container in Azure Storage, you need to append the SAS token string to the resource URI. The resource URI, together with the appended SAS token, provides authenticated access to Azure Storage.
 
@@ -60,9 +60,9 @@ string containerSasUrl = container.Uri.AbsoluteUri + containerSasToken;
 
 ## Specify output files for task output
 
-To specify output files for a task, create a collection of [OutputFile](/dotnet/api/microsoft.azure.batch.outputfile) objects and assign it to the [CloudTask.OutputFiles](/dotnet/api/microsoft.azure.batch.cloudtask.outputfiles#Microsoft_Azure_Batch_CloudTask_OutputFiles) property when you create the task.
+To specify output files for a task, create a collection of [OutputFile](/dotnet/api/microsoft.azure.batch.outputfile) objects and assign it to the [CloudTask.OutputFiles](/dotnet/api/microsoft.azure.batch.cloudtask.outputfiles) property when you create the task.
 
-The following C# code example creates a task that writes random numbers to a file named `output.txt`. The example creates an output file for `output.txt` to be written to the container. The example also creates output files for any log files that match the file pattern `std*.txt` (_e.g._, `stdout.txt` and `stderr.txt`). The container URL requires the SAS that was created previously for the container. The Batch service uses the SAS to authenticate access to the container:
+The following C# code example creates a task that writes random numbers to a file named `output.txt`. The example creates an output file for `output.txt` to be written to the container. The example also creates output files for any log files that match the file pattern `std*.txt` (_e.g._, `stdout.txt` and `stderr.txt`). The container URL requires the SAS that was created previously for the container. The Batch service uses the SAS to authenticate access to the container.
 
 ```csharp
 new CloudTask(taskId, "cmd /v:ON /c \"echo off && set && (FOR /L %i IN (1,1,100000) DO (ECHO !RANDOM!)) > output.txt\"")
@@ -118,7 +118,7 @@ new CloudTask(taskId, "cmd /v:ON /c \"echo off && set && (FOR /L %i IN (1,1,1000
 
 ### Specify a file pattern for matching
 
-When you specify an output file, you can use the [OutputFile.FilePattern](/dotnet/api/microsoft.azure.batch.outputfile.filepattern#Microsoft_Azure_Batch_OutputFile_FilePattern) property to specify a file pattern for matching. The file pattern may match zero files, a single file, or a set of files that are created by the task.
+When you specify an output file, you can use the [OutputFile.FilePattern](/dotnet/api/microsoft.azure.batch.outputfile.filepattern) property to specify a file pattern for matching. The file pattern may match zero files, a single file, or a set of files that are created by the task.
 
 The **FilePattern** property supports standard filesystem wildcards such as `*` (for non-recursive matches) and `**` (for recursive matches). For example, the code sample above specifies the file pattern to match `std*.txt` non-recursively:
 
@@ -130,7 +130,7 @@ To upload a single file, specify a file pattern with no wildcards. For example,
 
 ### Specify an upload condition
 
-The [OutputFileUploadOptions.UploadCondition](/dotnet/api/microsoft.azure.batch.outputfileuploadoptions.uploadcondition#Microsoft_Azure_Batch_OutputFileUploadOptions_UploadCondition) property permits conditional uploading of output files. A common scenario is to upload one set of files if the task succeeds, and a different set of files if it fails. For example, you may want to upload verbose log files only when the task fails and exits with a nonzero exit code. Similarly, you may want to upload result files only if the task succeeds, as those files may be missing or incomplete if the task fails.
+The [OutputFileUploadOptions.UploadCondition](/dotnet/api/microsoft.azure.batch.outputfileuploadoptions.uploadcondition) property permits conditional uploading of output files. A common scenario is to upload one set of files if the task succeeds, and a different set of files if it fails. For example, you may want to upload verbose log files only when the task fails and exits with a nonzero exit code. Similarly, you may want to upload result files only if the task succeeds, as those files may be missing or incomplete if the task fails.
 
 The code sample above sets the **UploadCondition** property to **TaskCompletion**. This setting specifies that the file is to be uploaded after the tasks completes, regardless of the value of the exit code.
 
@@ -142,7 +142,7 @@ For other settings, see the [OutputFileUploadCondition](/dotnet/api/mic
 
 The tasks in a job may produce files that have the same name. For example, `stdout.txt` and `stderr.txt` are created for every task that runs in a job. Because each task runs in its own context, these files don't conflict on the node's file system. However, when you upload files from multiple tasks to a shared container, you'll need to disambiguate files with the same name.
 
-The [OutputFileBlobContainerDestination.Path](/dotnet/api/microsoft.azure.batch.outputfileblobcontainerdestination.path#Microsoft_Azure_Batch_OutputFileBlobContainerDestination_Path) property specifies the destination blob or virtual directory for output files. You can use the **Path** property to name the blob or virtual directory in such a way that output files with the same name are uniquely named in Azure Storage. Using the task ID in the path is a good way to ensure unique names and easily identify files.
+The [OutputFileBlobContainerDestination.Path](/dotnet/api/microsoft.azure.batch.outputfileblobcontainerdestination.path) property specifies the destination blob or virtual directory for output files. You can use the **Path** property to name the blob or virtual directory in such a way that output files with the same name are uniquely named in Azure Storage. Using the task ID in the path is a good way to ensure unique names and easily identify files.
 
 If the **FilePattern** property is set to a wildcard expression, then all files that match the pattern are uploaded to the virtual directory specified by the **Path** property. For example, if the container is `mycontainer`, the task ID is `mytask`, and the file pattern is `..\std*.txt`, then the absolute URIs to the output files in Azure Storage will be similar to:
 
@@ -166,7 +166,7 @@ For more information about virtual directories in Azure Storage, see [List the b
 
 ## Diagnose file upload errors
 
-If uploading output files to Azure Storage fails, then the task moves to the **Completed** state and the [TaskExecutionInformation.FailureInformation](/dotnet/api/microsoft.azure.batch.taskexecutioninformation.failureinformation#Microsoft_Azure_Batch_TaskExecutionInformation_FailureInformation) property is set. Examine the **FailureInformation** property to determine what error occurred. For example, here is an error that occurs on file upload if the container cannot be found:
+If uploading output files to Azure Storage fails, then the task moves to the **Completed** state and the [TaskExecutionInformation.FailureInformation](/dotnet/api/microsoft.azure.batch.taskexecutioninformation.failureinformation) property is set. Examine the **FailureInformation** property to determine what error occurred. For example, here is an error that occurs on file upload if the container cannot be found:
 
 ```
 Category: UserError
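Note: the `CloudTask` example in this file is cut off above. As a sketch only (not text from the commit), the pattern the article describes, an output file that uploads `std*.txt` to a per-task virtual directory when the task completes, might look like the following; the task ID, command line, and SAS URL are placeholders.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Common;

// Sketch: containerSasUrl is the container URL with the write SAS token appended.
string containerSasUrl = "https://myaccount.blob.core.windows.net/mycontainer?<sas-token>";

CloudTask task = new CloudTask("mytask", "cmd /c echo hello > output.txt")
{
    OutputFiles = new List<OutputFile>
    {
        new OutputFile(
            // Match stdout.txt and stderr.txt one level above the working directory.
            filePattern: @"..\std*.txt",
            destination: new OutputFileDestination(
                new OutputFileBlobContainerDestination(
                    containerUrl: containerSasUrl,
                    path: "mytask")),              // task ID as the virtual directory
            uploadOptions: new OutputFileUploadOptions(
                // Upload whether the task succeeds or fails.
                uploadCondition: OutputFileUploadCondition.TaskCompletion))
    }
};
```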

articles/batch/managed-identity-pools.md

Lines changed: 6 additions & 7 deletions
@@ -2,7 +2,7 @@
 title: Configure managed identities in Batch pools
 description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes.
 ms.topic: conceptual
-ms.date: 05/25/2021
+ms.date: 08/18/2021
 
 ---
 # Configure managed identities in Batch pools
@@ -66,13 +66,12 @@ var pool = await managementClient.Pool.CreateWithHttpMessagesAsync(
 
 ## Use user-assigned managed identities in Batch nodes
 
-Many Azure Batch technologies which access other Azure resources, such as Azure Storage or Azure Container Registries, support using Managed Identities.
-Please refer to the following for more information on using Managed Identity with Azure Batch:
+Many Azure Batch technologies which access other Azure resources, such as Azure Storage or Azure Container Registry, support managed identities. For more information on using managed identities with Azure Batch, see the following links:
 
-- [Resource Files](resource-files.md)
-- [Output Files](batch-task-output-files.md#specify-output-files-using-managed-identity)
-- [Azure Container Registries](batch-docker-container-workloads.md#managed-identity-support-for-acr)
-- [Azure Blob Container Filesystem](virtual-file-mount.md#azure-blob-container)
+- [Resource files](resource-files.md)
+- [Output files](batch-task-output-files.md#specify-output-files-using-managed-identity)
+- [Azure Container Registry](batch-docker-container-workloads.md#managed-identity-support-for-acr)
+- [Azure Blob container file system](virtual-file-mount.md#azure-blob-container)
 
 You can also manually configure your tasks so that the managed identities can directly access [Azure resources that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
 
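Note: as a rough illustration of the last context line in this hunk (tasks that use the pool's managed identity to reach Azure resources directly), a task could request a token from the Azure Instance Metadata Service on the node. This sketch is not from the commit; the target resource and client ID are placeholders.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: runs inside a Batch task on a node whose pool has the user-assigned
// identity attached. IMDS issues the token, so no secret is stored with the task.
static async Task<string> GetStorageTokenAsync()
{
    using var client = new HttpClient();
    client.DefaultRequestHeaders.Add("Metadata", "true");

    string url = "http://169.254.169.254/metadata/identity/oauth2/token"
        + "?api-version=2018-02-01"
        + "&resource=https://storage.azure.com/"               // placeholder target resource
        + "&client_id=<client-id-of-user-assigned-identity>";  // placeholder client ID

    // The response is JSON containing an access_token field.
    return await client.GetStringAsync(url);
}
```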

articles/batch/resource-files.md

Lines changed: 8 additions & 9 deletions
@@ -1,7 +1,7 @@
 ---
 title: Creating and using resource files
 description: Learn how to create Batch resource files from various input sources. This article covers a few common methods on how to create and place them on a VM.
-ms.date: 05/25/2021
+ms.date: 08/18/2021
 ms.topic: how-to
 ---
 
@@ -28,8 +28,7 @@ There are a few different options available to generate resource files, each wit
 
 Using a storage container URL means, with the correct permissions, you can access files in any storage container in Azure.
 
-In this C# example, the files have already been uploaded to an Azure storage container as blob storage. To access the data needed to create a resource file, we first need to get access to the storage container.
-
+In this C# example, the files have already been uploaded to an Azure storage container as blob storage. To access the data needed to create a resource file, we first need to get access to the storage container. This can be done in several ways.
 
 #### Shared Access Signature
 
@@ -63,19 +62,19 @@ If desired, you can use the [blobPrefix](/dotnet/api/microsoft.azure.batch.resou
 ResourceFile inputFile = ResourceFile.FromStorageContainerUrl(containerSasUrl, blobPrefix = yourPrefix);
 ```
 
-#### Managed Identity
+#### Managed identity
 
-Create a User Assigned Managed Identity and assign it the `Storage Blob Data Reader` role for your Azure Storage container. Next, [assign the managed identity to your pool](managed-identity-pools.md) so that your VMs can access the identity. Finally, you can access the files in your container by specifying the identity for Batch to use.
+Create a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity) and assign it the `Storage Blob Data Reader` role for your Azure Storage container. Next, [assign the managed identity to your pool](managed-identity-pools.md) so that your VMs can access the identity. Finally, you can access the files in your container by specifying the identity for Batch to use.
 
 ```csharp
 CloudBlobContainer container = blobClient.GetContainerReference(containerName);
 
 ResourceFile inputFile = ResourceFile.FromStorageContainerUrl(container.Uri, identityReference: new ComputeNodeIdentityReference() { ResourceId = "/subscriptions/SUB/resourceGroups/RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity-name" });
 ```
 
-#### Public Access
+#### Public access
 
-An alternative to generating a SAS URL or using a Managed Identity is to enable anonymous, public read-access to a container and its blobs in Azure Blob storage. By doing so, you can grant read-only access to these resources without sharing your account key, and without requiring a SAS. Public read-access is typically used for scenarios where you want certain blobs to be always available for anonymous read-access. If this scenario suits your solution, see the [Anonymous access to blobs](../storage/blobs/anonymous-read-access-configure.md) article to learn more about managing access to your blob data.
+An alternative to generating a SAS URL or using a managed identity is to enable anonymous, public read-access to a container and its blobs in Azure Blob storage. By doing so, you can grant read-only access to these resources without sharing your account key, and without requiring a SAS. Public access is typically used for scenarios where you want certain blobs to be always available for anonymous read-access. If this scenario suits your solution, see [Configure anonymous public read access for containers and blobs](../storage/blobs/anonymous-read-access-configure.md) to learn more about managing access to your blob data.
 
 ### Storage container name (autostorage)
 
@@ -111,7 +110,7 @@ You can also use a string that you define as a URL (or a combination of strings
 ResourceFile inputFile = ResourceFile.FromUrl(yourDomain + yourFile, filePath);
 ```
 
-If your file is in Azure Storage, you can use a Managed Identity instead of generating a Shared Access Signature for the resource file.
+If your file is in Azure Storage, you can use a managed identity instead of generating a Shared Access Signature for the resource file.
 
 ```csharp
 ResourceFile inputFile = ResourceFile.FromUrl(yourURLFromAzureStorage,
@@ -121,7 +120,7 @@ ResourceFile inputFile = ResourceFile.FromUrl(yourURLFromAzureStorage,
 ```
 
 > [!Note]
-> Managed Identity authentication will only work with files in Azure Storage. The Managed Identity needs the `Storage Blob Data Reader` role assignment for the container the file is in and it must also be [assigned to the Batch pool](managed-identity-pools.md).
+> Managed identity authentication will only work with files in Azure Storage. The nanaged identity needs the `Storage Blob Data Reader` role assignment for the container the file is in, and it must also be [assigned to the Batch pool](managed-identity-pools.md).
 
 ## Tips and suggestions
 
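Note: the `ResourceFile.FromUrl` snippet above is truncated by the diff view. Below is a sketch only of how an identity reference might be supplied in place of a SAS; the parameter names are assumed to mirror the `FromStorageContainerUrl` call earlier in this file, and the URL and resource ID are placeholders.

```csharp
using Microsoft.Azure.Batch;

// Sketch: pull a single blob using the pool's user-assigned identity (which
// needs Storage Blob Data Reader on the container) instead of a SAS.
ResourceFile inputFile = ResourceFile.FromUrl(
    "https://mystorageaccount.blob.core.windows.net/resourcefiles/input.txt",
    identityReference: new ComputeNodeIdentityReference()
    {
        ResourceId = "/subscriptions/SUB/resourceGroups/RG/providers/"
                   + "Microsoft.ManagedIdentity/userAssignedIdentities/identity-name"
    },
    filePath: "input.txt");
```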

articles/batch/virtual-file-mount.md

Lines changed: 4 additions & 4 deletions
@@ -3,7 +3,7 @@ title: Mount a virtual file system on a pool
 description: Learn how to mount a virtual file system on a Batch pool.
 ms.topic: how-to
 ms.custom: devx-track-csharp
-ms.date: 03/26/2021
+ms.date: 08/18/2021
 ---
 
 # Mount a virtual file system on a Batch pool
@@ -73,7 +73,7 @@ new PoolAddParameter
 
 ### Azure Blob container
 
-Another option is to use Azure Blob storage via [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md). Mounting a blob file system requires an `AccountKey`, `SasKey` or `Managed Identity` with access to your storage account. For information on getting these keys, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md), [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md) or [Assigning Managed Identities to pools](managed-identity-pools.md). For more information and tips on using blobfuse, see the [blobfuse project](https://github.com/Azure/azure-storage-fuse).
+Another option is to use Azure Blob storage via [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md). Mounting a blob file system requires an `AccountKey`, `SasKey` or `Managed Identity` with access to your storage account. For information on getting these keys, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md), [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md), and [Configure managed identities in Batch pools](managed-identity-pools.md). For more information and tips on using blobfuse, see the [blobfuse project](https://github.com/Azure/azure-storage-fuse).
 
 To get default access to the blobfuse mounted directory, run the task as an **Administrator**. Blobfuse mounts the directory at the user space, and at pool creation it is mounted as root. In Linux all **Administrator** tasks are root. All options for the FUSE module are described in the [FUSE reference page](https://manpages.ubuntu.com/manpages/xenial/man8/mount.fuse.8.html).
 
@@ -106,7 +106,7 @@ new PoolAddParameter
 > `AccountKey`, `SasKey` and `IdentityReference` are mutually exclusive, only one can be specified.
 
 > [!NOTE]
->If using a Managed Identity, ensure that the Identity has been assigned to the pool so that it is available on the VM doing the mounting. The identity will need to have the `Storage Blob Data Contributor` role assigned to function properly.
+>If using a managed identity, ensure that the identity has been [assigned to the pool](managed-identity-pools.md) so that it is available on the VM doing the mounting. The identity will need to have the `Storage Blob Data Contributor` role in order to function properly.
 
 ### Network File System
 
@@ -162,7 +162,7 @@ If a mount configuration fails, the compute node in the pool will fail and the n
 
 To get the log files for debugging, use [OutputFiles](batch-task-output-files.md) to upload the `*.log` files. The `*.log` files contain information about the file system mount at the `AZ_BATCH_NODE_MOUNTS_DIR` location. Mount log files have the format: `<type>-<mountDirOrDrive>.log` for each mount. For example, a `cifs` mount at a mount directory named `test` will have a mount log file named: `cifs-test.log`.
 
-## Support Matrix
+## Support matrix
 
 Azure Batch supports the following virtual file system types for node agents produced for their respective publisher and offer.
 
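Note: the `new PoolAddParameter` mount example referenced by these hunk headers is not included in the diff. The following is a sketch only, assuming the Batch protocol-layer model types; the account, container, SAS value, and mount path are placeholders, and per the note above `IdentityReference` could be used in place of `SasKey` or `AccountKey`.

```csharp
using Microsoft.Azure.Batch.Protocol.Models;

// Sketch: mount a blob container on every node in the pool via blobfuse.
var pool = new PoolAddParameter
{
    Id = "blobfuse-pool",
    MountConfiguration = new[]
    {
        new MountConfiguration
        {
            AzureBlobFileSystemConfiguration = new AzureBlobFileSystemConfiguration
            {
                AccountName = "mystorageaccount",      // placeholder storage account
                ContainerName = "mycontainer",         // placeholder container
                SasKey = "<sas-token>",                // or AccountKey / IdentityReference
                RelativeMountPath = "blobdata"         // mounted under AZ_BATCH_NODE_MOUNTS_DIR
            }
        }
    }
};
```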

0 commit comments
