
Commit d54f3ea

Committed Mar 23, 2021
Fix images
1 parent 3f69884 commit d54f3ea

30 files changed: 124 additions and 124 deletions
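Every hunk below applies the same mechanical rewrite: the classic Markdown image syntax is swapped for the docs `:::image:::` extension, with the old alt text moving into an explicit `alt-text` attribute. A minimal sketch of the pattern, using a hypothetical article path and alt text:

```markdown
<!-- Before: classic Markdown image; the quoted string is the hover title -->
![Screenshot of the example page](./media/example-article/example-screenshot.png "Screenshot of the example page")

<!-- After: the :::image::: extension used throughout this commit -->
:::image type="content" source="./media/example-article/example-screenshot.png" alt-text="Screenshot of the example page" border="true":::
```

Making alt text an explicit, required attribute is presumably the motivation for the sweep; `border="true"` keeps a border around the rendered screenshot.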
 

‎articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started.md

Lines changed: 2 additions & 2 deletions

@@ -60,7 +60,7 @@ Two Azure resources are defined in the template:
  > [!NOTE]
  > The values you provide must be unique and should follow the naming guidelines. The template does not perform validation checks. If the values you provide are already in use, or do not follow the guidelines, you get an error after you have submitted the template.
- ![HDInsight Linux gets started Resource Manager template on portal](./media/apache-hadoop-linux-tutorial-get-started/hdinsight-linux-get-started-arm-template-on-portal.png "Deploy Hadoop cluster in HDInsight using the Azure portal and a resource group manager template")
+ :::image type="content" source="./media/apache-hadoop-linux-tutorial-get-started/hdinsight-linux-get-started-arm-template-on-portal.png" alt-text="HDInsight Linux gets started Resource Manager template on portal" border="true":::
  1. Review the **TERMS AND CONDITIONS**. Then select **I agree to the terms and conditions stated above**, then **Purchase**. You'll receive a notification that your deployment is in progress. It takes about 20 minutes to create a cluster.

@@ -80,7 +80,7 @@ After you complete the quickstart, you may want to delete the cluster. With HDIn
  From the Azure portal, navigate to your cluster, and select **Delete**.
- ![HDInsight delete cluster from portal](./media/apache-hadoop-linux-tutorial-get-started/hdinsight-delete-cluster.png "HDInsight delete cluster from portal")
+ :::image type="content" source="./media/apache-hadoop-linux-tutorial-get-started/hdinsight-delete-cluster.png" alt-text="HDInsight delete cluster from portal" border="true":::
  You can also select the resource group name to open the resource group page, and then select **Delete resource group**. By deleting the resource group, you delete both the HDInsight cluster, and the default storage account.

‎articles/hdinsight/interactive-query/apache-hive-query-odbc-driver-powershell.md

Lines changed: 2 additions & 2 deletions

@@ -39,7 +39,7 @@ The following steps show you how to create an Apache Hive ODBC data source.
  1. From Windows, navigate to **Start** > **Windows Administrative Tools** > **ODBC Data Sources (32-bit)/(64-bit)**. An **ODBC Data Source Administrator** window opens.
- ![ODBC data source administrator](./media/apache-hive-query-odbc-driver-powershell/hive-odbc-driver-dsn-setup.png "Configure a DSN using ODBC Data Source Administrator")
+ :::image type="content" source="./media/apache-hive-query-odbc-driver-powershell/hive-odbc-driver-dsn-setup.png" alt-text="ODBC data source administrator" border="true":::
  1. From the **User DSN** tab, select **Add** to open the **Create New Data Source** window.

@@ -65,7 +65,7 @@ The following steps show you how to create an Apache Hive ODBC data source.
  | Rows fetched per block | When fetching a large number of records, tuning this parameter may be required to ensure optimal performance. |
  | Default string column length, Binary column length, Decimal column scale | The data type lengths and precisions may affect how data is returned. Incorrect values can cause information to be returned with loss of precision or truncation. |
- ![Advanced DSN configuration options](./media/apache-hive-query-odbc-driver-powershell/odbc-data-source-advanced-options.png "Advanced DSN configuration options")
+ :::image type="content" source="./media/apache-hive-query-odbc-driver-powershell/odbc-data-source-advanced-options.png" alt-text="Advanced DSN configuration options" border="true":::
  1. Select **Test** to test the data source. When the data source is configured correctly, the test result shows **SUCCESS**.

‎articles/hdinsight/interactive-query/gateway-best-practices.md

Lines changed: 2 additions & 2 deletions

@@ -16,7 +16,7 @@ The HDInsight gateway is the only part of an HDInsight cluster that is publicly
  The following diagram provides a rough illustration of how the Gateway provides an abstraction in front of all the different host resolution possibilities within HDInsight.
- ![Host Resolution Diagram](./media/gateway-best-practices/host-resolution-diagram.png "Host Resolution Diagram")
+ :::image type="content" source="./media/gateway-best-practices/host-resolution-diagram.png" alt-text="Host Resolution Diagram" border="true":::
  ## Motivation

@@ -34,7 +34,7 @@ The Gateway's performance degradation around queries of a large size is because
  The following diagram illustrates the steps involved in a SELECT query.
- ![Result Diagram](./media/gateway-best-practices/result-retrieval-diagram.png "Result Diagram")
+ :::image type="content" source="./media/gateway-best-practices/result-retrieval-diagram.png" alt-text="Result Diagram" border="true":::
  Apache Hive is a relational abstraction on top of an HDFS-compatible filesystem. This abstraction means **SELECT** statements in Hive correspond to **READ** operations on the filesystem. The **READ** operations are translated into the appropriate schema before being reported to the user. The latency of this process increases with data size and the total hops required to reach the end user.

‎articles/hdinsight/interactive-query/hdinsight-grafana.md

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ See [Create Apache Hadoop clusters using the Azure portal](../hdinsight-hadoop-c
  1. The Grafana dashboard appears and looks like this example:
- ![HDInsight Grafana web dashboard](./media/hdinsight-grafana/hdinsight-grafana-dashboard.png "HDInsight Grafana dashboard")
+ :::image type="content" source="./media/hdinsight-grafana/hdinsight-grafana-dashboard.png" alt-text="HDInsight Grafana web dashboard" border="true":::
  ## Clean up resources

‎articles/hdinsight/interactive-query/hdinsight-security-options-for-hive.md

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ ms.date: 10/02/2020
  This document describes the recommended security options for Hive in HDInsight. These options can be configured through Ambari.
- ![`Security Options for Hive`](./media/hdinsight-security-options-for-hive/security-options-hive.png "Security Options for Hive")
+ :::image type="content" source="./media/hdinsight-security-options-for-hive/security-options-hive.png" alt-text="`Security Options for Hive`" border="true":::
  ## HiveServer2 authentication

‎articles/hdinsight/interactive-query/hive-llap-sizing-guide.md

Lines changed: 5 additions & 5 deletions

@@ -43,7 +43,7 @@ specific tuning.
  ### **LLAP Architecture/Components:**
- ![`LLAP Architecture/Components`](./media/hive-llap-sizing-guide/LLAP_architecture_sizing_guide.png "LLAP Architecture/Components")
+ :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_architecture_sizing_guide.png" alt-text="`LLAP Architecture/Components`" border="true":::
  ### **LLAP Daemon size estimations:**

@@ -77,7 +77,7 @@ Default HDInsight cluster has four LLAP daemons running on four worker nodes, so
  **Ambari UI slider for Hive config variable `hive.server2.tez.sessions.per.default.queue`:**
- ![`LLAP maximum concurrent queries`](./media/hive-llap-sizing-guide/LLAP_sizing_guide_max_concurrent_queries.png "LLAP maximum number of concurrent queries")
+ :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_max_concurrent_queries.png" alt-text="`LLAP maximum concurrent queries`" border="true":::
  #### **5. Tez Container and Tez Application Master size**
  Configuration: ***tez.am.resource.memory.mb, hive.tez.container.size***

@@ -163,7 +163,7 @@ For D14 v2, this value is 19 x 3 GB = 57 GB
  `Ambari environment variable for LLAP heap size:`
- ![`LLAP heap size`](./media/hive-llap-sizing-guide/LLAP_sizing_guide_llap_heap_size.png "LLAP heap size")
+ :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_llap_heap_size.png" alt-text="`LLAP heap size`" border="true":::
  When SSD cache is disabled, the in-memory cache is the amount of memory that is left after taking out the Headroom size and Heap size from the LLAP daemon container size.

@@ -196,11 +196,11 @@ Ambari environment variables: ***num_llap_nodes, num_llap_nodes_for_llap_daemons
  **num_llap_nodes** - specifies the number of nodes used by the Hive LLAP service; this includes nodes running the LLAP daemon, LLAP Service Master, and Tez Application Master (Tez AM).
- ![`Number of Nodes for LLAP service`](./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes.png "Number of Nodes for LLAP service")
+ :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes.png" alt-text="`Number of Nodes for LLAP service`" border="true":::
  **num_llap_nodes_for_llap_daemons** - specifies the number of nodes used only for LLAP daemons. LLAP daemon container sizes are set to max-fit the node, so this results in one LLAP daemon on each node.
- ![`Number of Nodes for LLAP daemons`](./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes_for_llap_daemons.png "Number of Nodes for LLAP daemons")
+ :::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes_for_llap_daemons.png" alt-text="`Number of Nodes for LLAP daemons`" border="true":::
  It's recommended to keep both values the same as the number of worker nodes in the Interactive Query cluster.
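The `@@ -163` hunk above carries the article's heap-size arithmetic for the D14 v2 worker size. Restated as a worked equation (the factors are not visible in this diff, but by the guide's structure 19 is presumably the number of executors per LLAP daemon on that VM size and 3 GB the `hive.tez.container.size`):

$$\text{LLAP daemon heap size} = n_{\text{executors}} \times \text{hive.tez.container.size} = 19 \times 3\ \text{GB} = 57\ \text{GB}$$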

‎articles/hdinsight/kafka/apache-kafka-get-started.md

Lines changed: 1 addition & 1 deletion

@@ -67,7 +67,7 @@ To create an Apache Kafka cluster on HDInsight, use the following steps:
  | Primary storage account | Use the drop-down list to select an existing storage account, or select **Create new**. If you create a new account, the name must be between 3 and 24 characters in length, and can include numbers and lowercase letters only. |
  | Container | Use the auto-populated value. |
- ![HDInsight Linux get started provide cluster storage values](./media/apache-kafka-get-started/azure-portal-cluster-storage.png "Provide storage values for creating an HDInsight cluster")
+ :::image type="content" source="./media/apache-kafka-get-started/azure-portal-cluster-storage.png" alt-text="HDInsight Linux get started provide cluster storage values" border="true":::
  Select the **Security + networking** tab.

‎articles/hdinsight/spark/apache-azure-spark-history-server.md

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ The Spark History Server is the web UI for completed and running Spark applicati
  1. From the [Azure portal](https://portal.azure.com/), open the Spark cluster. For more information, see [List and show clusters](../hdinsight-administer-use-portal-linux.md#showClusters).
  2. From **Cluster dashboards**, select **Spark history server**. When prompted, enter the admin credentials for the Spark cluster.
- ![Launch the Spark History Server from the Azure portal.](./media/apache-azure-spark-history-server/azure-portal-dashboard-spark-history.png "Spark History Server")
+ :::image type="content" source="./media/apache-azure-spark-history-server/azure-portal-dashboard-spark-history.png" alt-text="Launch the Spark History Server from the Azure portal." border="true":::
  ### Open the Spark History Server web UI by URL

‎articles/hdinsight/spark/apache-spark-connect-to-sql-database.md

Lines changed: 7 additions & 7 deletions

@@ -30,7 +30,7 @@ Start by creating a Jupyter Notebook associated with the Spark cluster. You use
  1. From the [Azure portal](https://portal.azure.com/), open your cluster.
  1. Select **Jupyter Notebook** underneath **Cluster dashboards** on the right side. If you don't see **Cluster dashboards**, select **Overview** from the left menu. If prompted, enter the admin credentials for the cluster.
- ![Jupyter Notebook on Apache Spark](./media/apache-spark-connect-to-sql-database/hdinsight-spark-cluster-dashboard-jupyter-notebook.png "Jupyter Notebook on Spark")
+ :::image type="content" source="./media/apache-spark-connect-to-sql-database/hdinsight-spark-cluster-dashboard-jupyter-notebook.png" alt-text="Jupyter Notebook on Apache Spark" border="true":::
  > [!NOTE]
  > You can also access the Jupyter Notebook on the Spark cluster by opening the following URL in your browser. Replace **CLUSTERNAME** with the name of your cluster:

@@ -39,7 +39,7 @@ Start by creating a Jupyter Notebook associated with the Spark cluster. You use
  1. In the Jupyter Notebook, from the top-right corner, click **New**, and then click **Spark** to create a Scala notebook. Jupyter Notebooks on HDInsight Spark clusters also provide the **PySpark** kernel for Python2 applications, and the **PySpark3** kernel for Python3 applications. For this article, we create a Scala notebook.
- ![Kernels for Jupyter Notebook on Spark](./media/apache-spark-connect-to-sql-database/kernel-jupyter-notebook-on-spark.png "Kernels for Jupyter Notebook on Spark")
+ :::image type="content" source="./media/apache-spark-connect-to-sql-database/kernel-jupyter-notebook-on-spark.png" alt-text="Kernels for Jupyter Notebook on Spark" border="true":::
  For more information about the kernels, see [Use Jupyter Notebook kernels with Apache Spark clusters in HDInsight](apache-spark-jupyter-notebook-kernels.md).

@@ -48,7 +48,7 @@ Start by creating a Jupyter Notebook associated with the Spark cluster. You use
  1. A new notebook opens with a default name, **Untitled**. Click the notebook name and enter a name of your choice.
- ![Provide a name for the notebook](./media/apache-spark-connect-to-sql-database/hdinsight-spark-jupyter-notebook-name.png "Provide a name for the notebook")
+ :::image type="content" source="./media/apache-spark-connect-to-sql-database/hdinsight-spark-jupyter-notebook-name.png" alt-text="Provide a name for the notebook" border="true":::
  You can now start creating your application.

@@ -95,7 +95,7 @@ In this section, you read data from a table (for example, **SalesLT.Address**) t
  You see an output similar to the following image:
- ![schema output](./media/apache-spark-connect-to-sql-database/read-from-sql-schema-output.png "schema output")
+ :::image type="content" source="./media/apache-spark-connect-to-sql-database/read-from-sql-schema-output.png" alt-text="schema output" border="true":::
  1. You can also do operations like retrieving the top 10 rows.

@@ -162,11 +162,11 @@ In this section, we use a sample CSV file available on the cluster to create a t
  a. Start SSMS and connect to the Azure SQL Database by providing connection details as shown in the screenshot below.
- ![Connect to SQL Database using SSMS1](./media/apache-spark-connect-to-sql-database/connect-to-sql-db-ssms.png "Connect to SQL Database using SSMS1")
+ :::image type="content" source="./media/apache-spark-connect-to-sql-database/connect-to-sql-db-ssms.png" alt-text="Connect to SQL Database using SSMS1" border="true":::
  b. From **Object Explorer**, expand the database and the table node to see the **dbo.hvactable** created.
- ![Connect to SQL Database using SSMS2](./media/apache-spark-connect-to-sql-database/connect-to-sql-db-ssms-locate-table.png "Connect to SQL Database using SSMS2")
+ :::image type="content" source="./media/apache-spark-connect-to-sql-database/connect-to-sql-db-ssms-locate-table.png" alt-text="Connect to SQL Database using SSMS2" border="true":::
  1. Run a query in SSMS to see the columns in the table.

@@ -204,7 +204,7 @@ In this section, we stream data into the `hvactable` that you created in the pre
  1. The output shows the schema of **HVAC.csv**. The `hvactable` has the same schema as well. The output lists the columns in the table.
- ![`hdinsight Apache Spark schema table`](./media/apache-spark-connect-to-sql-database/hdinsight-schema-table.png "Schema of table")
+ :::image type="content" source="./media/apache-spark-connect-to-sql-database/hdinsight-schema-table.png" alt-text="`hdinsight Apache Spark schema table`" border="true":::
  1. Finally, use the following snippet to read data from the HVAC.csv and stream it into the `hvactable` in your database. Paste the snippet in a code cell, replace the placeholder values with the values for your database, and then press **SHIFT + ENTER** to run.

‎articles/hdinsight/spark/apache-spark-create-standalone-application.md

Lines changed: 1 addition & 1 deletion

@@ -223,7 +223,7 @@ If you're not going to continue to use this application, delete the cluster that
  1. Select **Delete**. Select **Yes**.
- ![`HDInsight azure portal delete cluster`](./media/apache-spark-create-standalone-application/hdinsight-azure-portal-delete-cluster.png "Delete HDInsight cluster")
+ :::image type="content" source="./media/apache-spark-create-standalone-application/hdinsight-azure-portal-delete-cluster.png" alt-text="`HDInsight azure portal delete cluster`" border="true":::
  ## Next step

‎articles/hdinsight/spark/apache-spark-custom-library-website-log-analysis.md

Lines changed: 4 additions & 4 deletions

@@ -25,11 +25,11 @@ Once your data is saved as an Apache Hive table, in the next section we'll conne
  1. Create a new notebook. Select **New**, and then **PySpark**.
- ![Create a new Apache Jupyter Notebook](./media/apache-spark-custom-library-website-log-analysis/hdinsight-create-jupyter-notebook.png "Create a new Jupyter Notebook")
+ :::image type="content" source="./media/apache-spark-custom-library-website-log-analysis/hdinsight-create-jupyter-notebook.png" alt-text="Create a new Apache Jupyter Notebook" border="true":::
  1. A new notebook is created and opened with the name Untitled.ipynb. Select the notebook name at the top, and enter a friendly name.
- ![Provide a name for the notebook](./media/apache-spark-custom-library-website-log-analysis/hdinsight-name-jupyter-notebook.png "Provide a name for the notebook")
+ :::image type="content" source="./media/apache-spark-custom-library-website-log-analysis/hdinsight-name-jupyter-notebook.png" alt-text="Provide a name for the notebook" border="true":::
  1. Because you created a notebook using the PySpark kernel, you don't need to create any contexts explicitly. The Spark and Hive contexts will be automatically created for you when you run the first code cell. You can start by importing the types that are required for this scenario. Paste the following snippet in an empty cell, and then press **Shift + Enter**.

@@ -168,7 +168,7 @@ Once your data is saved as an Apache Hive table, in the next section we'll conne
  You should see an output like the following image:
- ![hdinsight jupyter sql query output](./media/apache-spark-custom-library-website-log-analysis/hdinsight-jupyter-sql-qyery-output.png "SQL query output")
+ :::image type="content" source="./media/apache-spark-custom-library-website-log-analysis/hdinsight-jupyter-sql-qyery-output.png" alt-text="hdinsight jupyter sql query output" border="true":::
  For more information about the `%%sql` magic, see [Parameters supported with the %%sql magic](apache-spark-jupyter-notebook-kernels.md#parameters-supported-with-the-sql-magic).

@@ -186,7 +186,7 @@ Once your data is saved as an Apache Hive table, in the next section we'll conne
  You should see an output like the following image:
- ![apache spark web log analysis plot](./media/apache-spark-custom-library-website-log-analysis/hdinsight-apache-spark-web-log-analysis-plot.png "Matplotlib output")
+ :::image type="content" source="./media/apache-spark-custom-library-website-log-analysis/hdinsight-apache-spark-web-log-analysis-plot.png" alt-text="apache spark web log analysis plot" border="true":::
  1. After you have finished running the application, you should shut down the notebook to release the resources. To do so, from the **File** menu on the notebook, select **Close and Halt**. This action will shut down and close the notebook.
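The `@@ -168` hunk above adjoins the article's `%%sql` step. For orientation, a minimal sketch of such a cell in an HDInsight PySpark notebook; `mysampletable` is a placeholder name, not the table the article actually builds from the log data:

```
%%sql
SELECT * FROM mysampletable LIMIT 10
```

The `%%sql` magic runs the statement against the cluster's Hive context and renders the result as a table in the notebook, which is presumably what the "SQL query output" screenshot referenced above shows.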

‎articles/hdinsight/spark/apache-spark-eclipse-tool-plugin.md

Lines changed: 2 additions & 2 deletions

@@ -315,9 +315,9 @@ When using **Link A Cluster**, I would suggest you to provide credential of stor
  There are two modes for submitting jobs. If a storage credential is provided, batch mode is used to submit the job; otherwise, interactive mode is used. If the cluster is busy, you might get the errors below.
- ![eclipse get error when cluster busy](./media/apache-spark-eclipse-tool-plugin/eclipse-interactive-cluster-busy-upload.png "eclipse get error when cluster busy")
+ :::image type="content" source="./media/apache-spark-eclipse-tool-plugin/eclipse-interactive-cluster-busy-upload.png" alt-text="eclipse get error when cluster busy" border="true":::
- ![eclipse get error when cluster busy yarn](./media/apache-spark-eclipse-tool-plugin/eclipse-interactive-cluster-busy-submit.png "eclipse get error when cluster busy yarn")
+ :::image type="content" source="./media/apache-spark-eclipse-tool-plugin/eclipse-interactive-cluster-busy-submit.png" alt-text="eclipse get error when cluster busy yarn" border="true":::
  ## See also
