articles/sentinel/migration-ingestion-target-platform.md
author: limwainstein
ms.author: lwainstein
ms.topic: how-to
ms.date: 05/03/2022
---

# Select a target Azure platform to host the exported historical data
One of the important decisions you make during your migration process is where to host the exported historical data.

This article reviews a set of Azure platforms you can use to host your historical logs, and compares them in terms of performance, cost, usability, and management overhead.
||[Basic Logs/Archive](../azure-monitor/logs/basic-logs-configure.md)|[Azure Data Explorer (ADX)](/azure/data-explorer/data-explorer-overview)|[Azure Blob Storage](../storage/blobs/storage-blobs-overview.md)|[ADX + Azure Blob Storage](../azure-monitor/logs/azure-data-explorer-query-storage.md)|
|---|---|---|---|---|
|**Capabilities**: |• Leverage most of the existing Azure Monitor Logs experiences at a lower cost.<br>• Basic Logs are retained for 8 days, and are then automatically transferred to the archive (according to the original retention period).<br>• Use [search jobs](../azure-monitor/logs/search-jobs.md) to search across petabytes of data and find specific events.<br>• For deep investigations on a specific time range, [restore data from the archive](../azure-monitor/logs/restore.md). The data is then available in the hot cache for further analytics. |• Both ADX and Microsoft Sentinel use the Kusto Query Language (KQL), allowing you to query, aggregate, or correlate data in both platforms. For example, you can run a KQL query from Microsoft Sentinel to [join data stored in ADX with data stored in Log Analytics](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).<br>• With ADX, you have substantial control over the cluster size and configuration. For example, you can create a larger cluster to achieve higher ingestion throughput, or create a smaller cluster to control your costs. |• Blob storage is optimized for storing massive amounts of unstructured data.<br>• Offers very competitive costs.<br>• Suitable for a scenario where your organization does not prioritize accessibility or performance, such as when the organization must align with compliance or audit requirements. |• Data is stored in blob storage, which is low in cost.<br>• You use ADX to query the data in KQL, allowing you to easily access the data. [Learn how to query Azure Monitor data with ADX](../azure-monitor/logs/azure-data-explorer-query-storage.md). |
|**Usability**: |**Great**<br><br>The archive and search options are simple to use and accessible from the Microsoft Sentinel portal. However, the data is not immediately available for queries. You need to perform a search to retrieve the data, which might take some time, depending on the amount of data being scanned and returned. |**Good**<br><br>Fairly easy to use in the context of Microsoft Sentinel. For example, you can use an Azure workbook to visualize data spread across both Microsoft Sentinel and ADX. You can also query ADX data from the Microsoft Sentinel portal using the [ADX proxy](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md). |**Poor**<br><br>With historical data migrations, you might have to deal with millions of files, and exploring the data becomes a challenge. |**Fair**<br><br>While using the `externaldata` operator is very challenging with large numbers of blobs to reference, using external ADX tables eliminates this issue. The external table definition understands the blob storage folder structure, and allows you to transparently query the data contained in many different blobs and folders. |
|**Management overhead**: |**Fully managed**<br><br>The search and archive options are fully managed and do not add management overhead. |**High**<br><br>ADX is external to Microsoft Sentinel, which requires monitoring and maintenance. |**Low**<br><br>While this platform requires very little maintenance, selecting this platform adds monitoring and configuration tasks, such as setting up lifecycle management. |**Medium**<br><br>With this option, you maintain and monitor ADX and Azure Blob Storage, both of which are external components to Microsoft Sentinel. While ADX can be shut down at times, consider the additional management overhead with this option. |
|**Performance**: |**Medium**<br><br>You typically interact with basic logs within the archive using [search jobs](../azure-monitor/logs/search-jobs.md), which are suitable when you want to maintain access to the data, but do not need immediate access to the data. |**High to low**<br><br>• The query performance of an ADX cluster depends on multiple factors, including the number of nodes in the cluster, the cluster virtual machine SKU, data partitioning, and more.<br>• As you add nodes to the cluster, the performance improves, with added cost.<br>• If you use ADX, we recommend that you configure your cluster size to balance performance and cost. This depends on your organization's needs, including how fast your migration needs to complete, how often the data is accessed, and the expected response time. |**Low**<br><br>Offers two performance tiers, Premium and Standard. Although both tiers are an option for long-term storage, Standard is more cost-efficient. Learn about [performance and scalability limits](../storage/common/scalability-targets-standard-account.md). |**Low**<br><br>Because the data resides in the Blob Storage, the performance is limited by that platform. |
|**Cost**: |**Highest**<br><br>The cost comprises two components:<br>• **Ingestion cost**. Every GB of data ingested into Basic Logs is subject to Microsoft Sentinel and Azure Monitor Logs ingestion costs, which sum up to approximately $1/GB. See the [pricing details](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).<br>• **Archival cost**. This is the cost for data in the archive tier, and sums up to approximately $0.02/GB per month. See the [pricing details](https://azure.microsoft.com/pricing/details/monitor/).<br>In addition to these two cost components, if you need frequent access to the data, additional costs apply when you access data via search jobs. |**High to low**<br><br>• Because ADX is a cluster of virtual machines, you are charged based on compute, storage, and networking usage, plus an ADX markup (see the [pricing details](https://azure.microsoft.com/pricing/details/data-explorer/)). Therefore, the more nodes you add to your cluster and the more data you store, the higher the cost.<br>• ADX also offers autoscaling capabilities to adapt to workload on demand. On top of this, ADX can benefit from Reserved Instance pricing. You can run your own cost calculations in the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/). |**Low**<br><br>With optimal setup, this is the option with the lowest costs. In addition, the data works in an automatic lifecycle, so older blobs move into lower-cost access tiers. |**Low**<br><br>The cluster size does not affect the cost, because ADX only acts as a proxy. In addition, you need to run the cluster only when you need quick and simple access to the data. |
|**Scenario**: |**Occasional access**<br><br>Relevant in scenarios where you don’t need to run heavy analytics or trigger analytics rules. |**Frequent access**<br><br>Relevant in scenarios where you need to access the data frequently, and need to control how the cluster is sized and configured. |**Compliance/audit**<br><br>• Optimal for storing massive amounts of unstructured data.<br>• Relevant in scenarios where you do not need quick access to the data or high performance, such as for compliance or audits. |**Occasional access**<br><br>Relevant in scenarios where you want to benefit from the low cost of Azure Blob Storage, and maintain relatively quick access to the data. |
|**Complexity**: |Very low |Medium |Low |High |
|**Readiness**: |Public Preview |GA|GA|GA|
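As a back-of-the-envelope check on the **Cost** row, the approximate rates quoted for Basic Logs/Archive (roughly $1/GB for ingestion and $0.02/GB per month in the archive tier) can be combined in a short sketch. The rates and the helper function are illustrative assumptions based on the figures above, not official pricing; always check the Azure pricing pages for current numbers.

```python
# Rough Basic Logs + Archive cost model, using the approximate rates
# quoted in the comparison table. Illustrative only - not official pricing.

BASIC_LOGS_INGESTION_PER_GB = 1.00  # ~$1 per GB, one time (Sentinel + Azure Monitor Logs)
ARCHIVE_PER_GB_MONTH = 0.02         # ~$0.02 per GB per month in the archive tier

def basic_logs_archive_cost(total_gb: float, months: int) -> float:
    """One-time ingestion cost plus archive storage for the given retention."""
    ingestion = total_gb * BASIC_LOGS_INGESTION_PER_GB
    archival = total_gb * ARCHIVE_PER_GB_MONTH * months
    return ingestion + archival

# Example: 100 TB (102,400 GB) of historical logs kept for 12 months.
print(round(basic_logs_archive_cost(102_400, 12)))
```

Note that for frequent access, search-job charges would come on top of this, as the table notes.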
## General considerations
Now that you know more about the available target platforms, review these main factors:

- [What is the amount of data to ingest?](#amount-of-data)
- What are the estimated migration costs, during and after migration? See the [platform comparison](#select-a-target-azure-platform-to-host-the-exported-historical-data) to compare the costs.
### Use of ingested logs
Define how your organization will use the ingested logs to guide your selection of the ingestion platform.
Consider these three general scenarios:

See the [platform comparison](#select-a-target-azure-platform-to-host-the-exported-historical-data) to understand which platform suits each of these scenarios.
### Migration speed
In some scenarios, you might need to meet a tight deadline. For example, your organization might need to urgently move from the previous SIEM due to a license expiration.
Review the components and factors that determine the speed of your migration.

- [Compute power](#compute-power)
- [Target platform](#target-platform)
#### Data source
The data source is typically a local file system or cloud storage, for example, S3. A server's storage performance depends on multiple factors, such as disk technology (SSD versus HDD), the nature of the I/O requests, and the size of each request.
For example, Azure virtual machine performance ranges from 30 MB per second on smaller VM SKUs, to 20 GB per second for some of the storage-optimized SKUs that use NVM Express (NVMe) disks. Learn how to [design your Azure VM for high storage performance](/azure/virtual-machines/premium-storage-performance). You can also apply most concepts to on-premises servers.
#### Compute power
In some cases, even if your disk is capable of copying your data quickly, compute power is the bottleneck in the copy process. In these cases, you can choose one of these scaling options:

- **Scale vertically**. You increase the power of a single server by adding more CPUs or increasing the CPU speed.
- **Scale horizontally**. You add more servers, which increases the parallelism of the copy process.
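The horizontal-scaling option can be sketched as a parallel copy, where a pool of workers copies independent files concurrently. This is a local-filesystem illustration under stated assumptions, not a Sentinel or AzCopy API; in a real migration, each "worker" might instead be a separate VM running a copy tool.

```python
# Sketch of the "scale horizontally" option: a pool of workers copies
# independent files in parallel. The copy is I/O-bound, so threads overlap well.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_one(src: Path, dst_dir: Path) -> Path:
    """Copy a single file into dst_dir and return the destination path."""
    dst = dst_dir / src.name
    shutil.copyfile(src, dst)
    return dst

def parallel_copy(files, dst_dir: Path, workers: int = 8) -> list:
    """Copy all files concurrently with a bounded worker pool."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: copy_one(f, dst_dir), files))
```

Increasing `workers` mimics adding servers: more files are in flight at once, up to the point where the disk or network becomes the bottleneck.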
#### Target platform
Each of the target platforms discussed in this section has a different performance profile.
- **Azure Monitor Basic logs**. By default, Basic logs can be pushed to Azure Monitor at a rate of approximately 1 GB per minute. This allows you to ingest approximately 1.5 TB per day or 43 TB per month.
- **Azure Data Explorer**. Ingestion performance varies, depending on the size of the cluster you provision, and the batching settings you apply. [Learn about ingestion best practices](/azure/data-explorer/kusto/management/ingestion-faq), including performance and monitoring.
- **Azure Blob Storage**. The performance of an Azure Blob Storage account can vary greatly depending on the number and size of the files, job size, concurrency, and so on. [Learn how to optimize AzCopy performance with Azure Storage](/azure/storage/common/storage-use-azcopy-optimize).
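Taken together, the data source, compute power, and target platform sections describe a pipeline whose end-to-end rate is capped by its slowest component. A minimal sketch of finding that bottleneck, with assumed, illustrative rates (not measured values):

```python
# The effective migration rate is the minimum across pipeline components.
# All rates below are illustrative assumptions, not measured values.
rates_gb_per_sec = {
    "source disk": 0.5,         # e.g., an SSD-backed VM
    "network": 1.0,
    "target platform": 1 / 60,  # ~1 GB per minute, the default Basic Logs rate
}

# The component with the smallest rate caps the whole pipeline.
bottleneck = min(rates_gb_per_sec, key=rates_gb_per_sec.get)
print(f"bottleneck: {bottleneck} at {rates_gb_per_sec[bottleneck]:.3f} GB/s")
```

In this example the target platform, not the disk or network, limits the migration, so scaling the copy servers further would not help.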
### Amount of data
The amount of data is the main factor that affects the duration of the migration process. You should therefore consider how to set up your environment depending on your data set.
To determine the minimum duration of the migration and where the bottleneck could be, consider the amount of data and the ingestion speed of the target platform. For example, if you select a target platform that can ingest 1 GB per second, and you have to migrate 100 TB, your migration will take a minimum of 100,000 seconds (100,000 GB divided by the 1 GB per second ingestion speed). Divide the result by 3,600 seconds per hour, which calculates to approximately 28 hours. This is correct only if the rest of the components in the pipeline, such as the local disk, the network, and the virtual machines, can perform at a speed of 1 GB per second.
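This arithmetic can be captured in a small helper. The 1 GB per second figure is the same illustrative rate used in the example, not a guaranteed platform throughput:

```python
# Minimum migration duration: data volume divided by the slowest sustained
# throughput in the pipeline, converted from seconds to hours.
def min_migration_hours(total_gb: float, bottleneck_gb_per_sec: float) -> float:
    seconds = total_gb / bottleneck_gb_per_sec  # time = volume / throughput
    return seconds / 3600                       # seconds -> hours

# 100 TB (100,000 GB) at a sustained 1 GB per second is just under 28 hours.
print(round(min_migration_hours(100_000, 1.0), 1))
```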
## Next steps

In this article, you learned how to select a target Azure platform to host your exported historical data.

> [!div class="nextstepaction"]
> [Select a data ingestion tool](migration-ingestion-tool.md)
articles/sentinel/migration-qradar-automation.md
Here’s what you need to think about when migrating SOAR use cases from IBM Security QRadar SOAR.

This section shows how key SOAR concepts in IBM Security QRadar SOAR translate to Microsoft Sentinel components, and provides general guidelines for how to migrate each step or component in the SOAR workflow.
:::image type="content" source="media/migration-qradar-automation/qradar-sentinel-soar-workflow-new.png" alt-text="Diagram displaying the QRadar and Microsoft Sentinel SOAR workflows." lightbox="media/migration-qradar-automation/qradar-sentinel-soar-workflow-new.png":::
|Step (in diagram) |IBM Security QRadar SOAR |Microsoft Sentinel |
articles/sentinel/migration-splunk-automation.md
Here’s what you need to think about when migrating SOAR use cases from Splunk.

This section shows how key SOAR concepts in Splunk translate to Microsoft Sentinel components, and provides general guidelines for how to migrate each step or component in the SOAR workflow.
:::image type="content" source="media/migration-splunk-automation/splunk-sentinel-soar-workflow-new.png" alt-text="Diagram displaying the Splunk and Microsoft Sentinel SOAR workflows." lightbox="media/migration-splunk-automation/splunk-sentinel-soar-workflow-new.png":::