---
title: Azure NetApp Files performance benchmarks for Linux | Microsoft Docs
description: Describes performance benchmarks Azure NetApp Files delivers for Linux.
services: azure-netapp-files
documentationcenter: ''
author: b-hchen
manager: ''
editor: ''
ms.service: azure-netapp-files
ms.workload: storage
ms.tgt_pltfrm: na
ms.topic: conceptual
ms.date: 09/29/2021
ms.author: anfdocs
---

# Azure NetApp Files performance benchmarks for Linux
This article describes performance benchmarks Azure NetApp Files delivers for Linux.
This section describes performance benchmarks of Linux workload throughput and workload IOPS.
The following graph represents a 64-kibibyte (KiB) sequential workload and a 1 TiB working set. It shows that a single Azure NetApp Files volume can handle between ~1,600 MiB/s of pure sequential writes and ~4,500 MiB/s of pure sequential reads.
The graph illustrates the read/write mix decreasing in 10% steps, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
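For reference, a workload of this shape can be generated with FIO. The following sketch is illustrative only: the mount path, per-job file size, thread count, and queue depth are assumptions, not the exact parameters used to produce the benchmark graphs.

```bash
# Illustrative FIO run: 64-KiB sequential workload with an 80%:20% read/write mix.
# 16 jobs x 64 GiB gives a 1-TiB aggregate working set, matching the benchmark description.
# Sweep --rwmixread (100, 90, 80, ...) to reproduce the varying read/write ratios.
fio --name=seq-mix-64k \
    --directory=/mnt/anf-volume \
    --rw=rw --rwmixread=80 \
    --bs=64k \
    --size=64g --numjobs=16 \
    --ioengine=libaio --direct=1 --iodepth=16 \
    --time_based --runtime=300 \
    --group_reporting
```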
The following graph represents a 4-kibibyte (KiB) random workload and a 1 TiB working set. The graph shows that an Azure NetApp Files volume can handle between ~130,000 pure random write IOPS and ~460,000 pure random read IOPS.
This graph illustrates the read/write mix decreasing in 10% steps, from pure read to pure write. It demonstrates what you can expect when using varying read/write ratios (100%:0%, 90%:10%, 80%:20%, and so on).
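A comparable 4-KiB random workload might be sketched as follows. As above, the path, sizes, and parallelism are assumptions rather than the benchmark's exact configuration.

```bash
# Illustrative FIO run: 4-KiB random workload with a 90%:10% read/write mix
# over a 1-TiB aggregate working set (16 jobs x 64 GiB).
fio --name=rand-mix-4k \
    --directory=/mnt/anf-volume \
    --rw=randrw --rwmixread=90 \
    --bs=4k \
    --size=64g --numjobs=16 \
    --ioengine=libaio --direct=1 --iodepth=64 \
    --time_based --runtime=300 \
    --group_reporting
```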
The graphs in this section show the validation testing results for the client-side `nconnect` mount option with NFSv3. For more information, see the `nconnect` section of Linux mount options.
The graphs compare the advantages of `nconnect` to a volume mounted without `nconnect`. In the graphs, FIO generated the workload from a single D32s_v4 instance in the us-west2 Azure region using a 64-KiB sequential workload, the largest I/O size supported by Azure NetApp Files at the time of the testing represented here. Azure NetApp Files now supports larger I/O sizes. For more details, see the `rsize` and `wsize` section of Linux mount options.
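As a rough sketch of the client-side setup being compared, an NFSv3 mount using `nconnect` might look like the following. The server address, export path, mount point, and the `nconnect=8` value are placeholders chosen for illustration; `nconnect` requires a Linux kernel that supports the option (5.3 or later).

```bash
# Illustrative NFSv3 mount using nconnect; the address, paths, and option values are placeholders.
sudo mount -t nfs -o rw,hard,vers=3,rsize=65536,wsize=65536,tcp,nconnect=8 \
    10.0.0.4:/anf-volume /mnt/anf-volume

# One way to confirm the additional TCP connections: count established sessions
# to the NFS port (2049) on the client.
ss -tn | grep -c ':2049'
```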
The following graphs show 64-KiB sequential reads of ~3,500 MiB/s with `nconnect`, roughly 2.3X the non-`nconnect` throughput.
The following graphs show sequential writes. They indicate that `nconnect` has no noticeable benefit for sequential writes. 1,500 MiB/s is roughly both the sequential write volume upper limit and the D32s_v4 instance egress limit.
The following graphs show 4-KiB random reads of ~200,000 read IOPS with `nconnect`, roughly 3X the non-`nconnect` performance.
The following graphs show 4-KiB random writes of ~135,000 write IOPS with `nconnect`, roughly 3X the non-`nconnect` performance.