3. Verify your email and select your organization in the **Azure DevOps Services Domain** drop-down. Enter your repository name and select **Publish repository**.
Publishing the repo creates a new project in your organization with the same name as the local repo. To create the repo in an existing project, click **Advanced** next to **Repository name**, and select a project. You can view your code in the browser by selecting **See it on the web**.
@@ -68,45 +68,45 @@ Open a web browser and navigate to the project you just created in [Azure DevOps
1. Under the **Build & Release** tab, select **Builds**, and then **+New**. Select **Azure DevOps Services Git** and **Continue**.

3. Under **Triggers**, enable continuous integration by checking the **Enable continuous integration** trigger status. Select **Save and queue** to manually start a build.
4. Builds are also triggered upon push or check-in. To check your build progress, switch to the **Builds** tab. Once you verify that the build executes successfully, you must define a release pipeline that deploys your application to a cluster. Right-click the ellipsis (...) next to your build pipeline and select **Edit**.
5. In **Tasks**, enter "Hosted" as the **Agent queue**.
|Override template parameters | Type the template parameters to override in the textbox. For example: `-storageName fabrikam -adminUsername $(vmusername) -adminPassword $(password) -azureKeyVaultName $(fabrikamFibre)`. This property is optional, but your build will result in errors if key parameters are not overridden. |

### Failed build process
You may receive errors for null deployment parameters if you did not override template parameters in the **Azure Resource Group Deployment** task of your build pipeline. Return to the build pipeline and override the null parameters to resolve the error.
### Commit and push changes to trigger a release
Verify that the continuous integration pipeline is functioning by checking in some code changes to Azure DevOps.
@@ -137,11 +137,11 @@ As you write your code, your changes are automatically tracked by Visual Studio.
1. On the **Changes** view in Team Explorer, add a message describing your update and commit your changes.
2. Select the unpublished changes status bar icon, or open the **Sync** view in Team Explorer. Select **Push** to update your code in Azure DevOps.
Pushing the changes to Azure DevOps Services automatically triggers a build. When the build pipeline successfully completes, a release is automatically created and starts updating the job on the cluster.
articles/stream-analytics/stream-analytics-troubleshoot-input.md (5 additions & 4 deletions)
@@ -7,7 +7,8 @@ ms.author: sidram
ms.reviewer: mamccrea
ms.service: stream-analytics
ms.topic: conceptual
-ms.date: 10/11/2018
+ms.date: 12/07/2018
+ms.custom: seodec18
---
# Troubleshoot input connections
@@ -42,7 +43,7 @@ You can take the following steps to analyze the input events in detail to get a
2. The input details tile displays a list of warnings with details about each issue. The example warning message below includes the partition, offset, and sequence numbers where there is malformed JSON data.
3. To find the JSON data with the incorrect format, run the CheckMalformedEvents.cs code available in the [GitHub samples repository](https://github.com/Azure/azure-stream-analytics/tree/master/Samples/CheckMalformedEventsEH). This code reads the partition ID and offset, and prints the data located at that offset.
@@ -97,7 +98,7 @@ The WITH clause specifies a temporary named result set that can be referenced by
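The WITH clause names a temporary result set that later SELECT statements in the same job can reference. A minimal sketch of the pattern, assuming hypothetical `input1` and `output1` input and output names:

```sql
-- WITH names an intermediate result set that the later SELECT reads from.
WITH filtered AS (
    SELECT deviceId, reading
    FROM input1
    WHERE reading IS NOT NULL
)
-- Aggregate the filtered stream over one-minute tumbling windows.
SELECT deviceId, AVG(reading) AS avgReading
INTO output1
FROM filtered
GROUP BY deviceId, TumblingWindow(minute, 1)
```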
articles/stream-analytics/stream-analytics-troubleshoot-query.md (12 additions & 11 deletions)
@@ -7,7 +7,8 @@ ms.author: sidram
ms.reviewer: mamccrea
ms.service: stream-analytics
ms.topic: conceptual
-ms.date: 10/11/2018
+ms.date: 12/07/2018
+ms.custom: seodec18
---
# Troubleshoot Azure Stream Analytics queries
@@ -42,47 +43,47 @@ In real-time data processing, knowing what the data looks like in the middle of
The following example query in an Azure Stream Analytics job has one stream input, two reference data inputs, and an output to Azure Table Storage. The query joins data from the event hub and two reference blobs to get the name and category information:
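A minimal sketch of a query with this shape, using hypothetical names: `input1` for the event hub input, `ref1` and `ref2` for the two reference data inputs, `output1` for the Table storage output, and made-up join keys:

```sql
SELECT
    input1.[from],
    ref1.name,
    ref2.category
INTO output1
FROM input1
-- First reference input: look up the name.
JOIN ref1 ON input1.userId = ref1.userId
-- Second reference input: look up the category by GUID.
JOIN ref2 ON input1.[from] = ref2.guid
```

Joins against reference data need no DATEDIFF time bound; only stream-to-stream joins do.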
Note that the job is running, but no events are being produced in the output. On the **Monitoring** tile, shown here, you can see that the input is producing data, but you don’t know which step of the **JOIN** caused all the events to be dropped.
In this situation, you can add a few extra SELECT INTO statements to "log" the intermediate JOIN results and the data that's read from the input.
In this example, we've added two new "temporary outputs." They can be any sink you like. Here we use Azure Storage as an example:
You can then rewrite the query like this:
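Continuing the sketch above with the same hypothetical names, extra SELECT INTO statements route the raw input and the intermediate JOIN result to the temporary outputs `temp1` and `temp2`:

```sql
WITH step1 AS (
    -- First JOIN only: resolve the name.
    SELECT input1.[from], ref1.name
    FROM input1
    JOIN ref1 ON input1.userId = ref1.userId
)

-- "Log" the raw data read from the stream input.
SELECT * INTO temp1 FROM input1

-- "Log" the intermediate result of the first JOIN.
SELECT * INTO temp2 FROM step1

-- The original output, fed by the second JOIN.
SELECT step1.name, ref2.category
INTO output1
FROM step1
JOIN ref2 ON step1.[from] = ref2.guid
```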
Now start the job again, and let it run for a few minutes. Then query temp1 and temp2 with Visual Studio Cloud Explorer to produce the following tables:
**temp1 table**
**temp2 table**
As you can see, temp1 and temp2 both have data, and the name column is populated correctly in temp2. However, because there is still no data in the output, something is wrong:
By sampling the data, you can be almost certain that the issue is with the second JOIN. You can download the reference data from the blob and take a look:
As you can see, the format of the GUID in this reference data is different from the format of the [from] column in temp2. That’s why the data didn’t arrive in output1 as expected.
You can fix the data format, upload it to the reference blob, and try again:
This time, the data in the output is formatted and populated as expected.