
Commit a658384 (committed Feb 21, 2020)

Revisions & updated image

1 parent: d42f0ea

File tree

2 files changed: +98 −107 lines changed


‎articles/kinect-dk/multi-camera-sync.md

Lines changed: 98 additions & 107 deletions
---
title: Synchronize multiple Azure Kinect DK devices
description: This article explores the benefits of multi-device synchronization as well as how to set up the devices to synchronize.
author: tesych
ms.author: tesych
ms.prod: kinect-dk
ms.date: 02/20/2020
ms.topic: article
keywords: azure, kinect, specs, hardware, DK, capabilities, depth, color, RGB, IMU, array, multi, synchronization
---

# Synchronize multiple Azure Kinect DK devices

Each Azure Kinect DK device includes 3.5-mm synchronization ports (**Sync in** and **Sync out**) that you can use to link multiple units together. When the devices are linked, they can coordinate the trigger timing of their depth and RGB cameras.

In this article, you'll learn the benefits of multi-device synchronization and how to set it up.

## Why use multiple Azure Kinect DK devices?

There are many reasons to use multiple Azure Kinect DK devices. Some examples are:

- Fill in occlusions. *Occlusion* means that there is something you want to see, but you can't see it because of some interference. The Azure Kinect DK device has two cameras (depth and color) that do not share the same origin, so one camera can see part of an object that the other cannot. Therefore, when you transform a depth image to a color image, you may see a shadow around an object.

  In the following image, the left camera sees the grey pixel P2, but the ray from the right camera to P2 hits the white foreground object. As a result, the right camera cannot see P2.

  ![Occlusion](./media/occlusion.png)

  Using additional Azure Kinect DK devices solves this problem by filling in the occluded regions.
- Scan objects in three dimensions.
- Increase the effective frame rate to something higher than 30 frames per second (FPS).
- Capture multiple 4K color images of the same scene, all aligned within 100 microseconds of the start of exposure.
- Increase camera coverage within the space.

## Plan your multi-device configuration

Before you start, make sure to review the [Azure Kinect DK hardware specification](hardware-specification.md).

### Select a camera configuration

You can use two different approaches for your camera configuration:

- **Daisy chain configuration**. Synchronize one master device and up to eight subordinate devices.

  ![Diagram that shows how to connect Azure Kinect DK devices in a daisy chain configuration.](./media/multicam-sync-daisychain.png)
- **Star configuration**. Synchronize one master device with up to two subordinate devices.

  ![Diagram that shows how to set up multiple Azure Kinect DK devices in a star configuration.](./media/multicam-sync-star.png)

### Plan your camera settings and software configuration

#### Exposure

We recommend using a manual exposure setting if you want to control the precise timing of each device. Automatic exposure allows each color camera to change exposure dynamically, so the timing between two devices cannot stay exactly the same.

Do not repeatedly set the same exposure setting in the image capture loop. When you need to set the exposure, call the API once.

#### Timestamps

When you use master or subordinate modes, the device timestamp reported for images changes meaning from "Center of Frame" to "Start of Frame".

#### Interference between IR cameras

Avoid IR camera interference between different cameras.

Interference happens when a depth sensor's ToF lasers are on at the same time as another depth camera's. To avoid it, cameras that have overlapping areas of interest need their timing shifted by the "laser on time" so that their lasers are not on at the same time. For each capture, the laser turns on nine times and is active for only 125 μs, and it is then idle for 1450 μs or 2390 μs, depending on the mode of operation. As a result, depth cameras need their "laser on time" shifted by a minimum of 125 μs, and that on time needs to fall within the idle time of the other depth sensors in use.

Because of the differences between the clock used by the firmware and the clock used by the camera, 125 μs cannot be used directly. Instead, the software setting required to ensure that there is no camera interference is 160 μs. This setting allows nine more depth cameras to be scheduled into the 1450 μs of NFOV idle time. The exact timing changes based on the depth mode you are using.

Using the [depth sensor raw timing table](hardware-specification.md), you can calculate the exposure time as:

> *Exposure Time* = (*IR Pulses* × *Pulse Width*) + (*Idle Periods* × *Idle Time*)

In your software, use ```depth_delay_off_color_usec``` or ```subordinate_delay_off_master_usec``` to make sure that each IR laser fires in its own 160-μs window or has a different field of view.

When you use multiple depth cameras in synchronized captures, offset the depth camera captures from one another by 160 μs or more to avoid interference.

#### Using an external sync trigger

A custom sync source can be used to replace the master Azure Kinect DK. This approach is helpful when the image captures need to be synchronized with other equipment. The custom trigger must create a sync signal, similar to the master device's, via the 3.5-mm port.

- The SYNC signals are active high, and the pulse width should be greater than 8 μs.
- The frequency must be precisely 30 fps, 15 fps, or 5 fps: the frequency of the color camera's master VSYNC signal.
- The SYNC signal from the board should be 5 V TTL/CMOS, with a maximum driving capacity of no less than 8 mA.
- All styles of 3.5-mm port can be used with Azure Kinect DK, including "mono" ports, which are not pictured. All sleeves and rings are shorted together inside Azure Kinect DK, and they are connected to the ground of the master Azure Kinect DK. The tip is the sync signal.

![Camera trigger signal externally](./media/resources/camera-trigger-signal.jpg)
8684

87-
![Diagram that shows how to set up multiple Azure DK devices in a star configuration.](./media/multicam-sync-star.png)
85+
## Prepare your devices and other hardware
8886

89-
1. Connect each Azure Kinect DK to power, then connect one device to one host PC.
90-
1. Connect the devices using the headphone splitter and 3.5mm cables. Here’s how:
91-
- **On the master device**
92-
Plug in the headphone splitter into the sync out port on the first Azure Kinect DK—this is the master device.
93-
- **On the subordinate device**
94-
Plug one end of a 3.5mm cable into the sync in port of the second Azure Kinect DK, then the other end into the headphone splitter—this is a subordinate device.
87+
### Azure Kinect DK devices
9588

96-
To connect more subordinate devices, do the following:
89+
For each of the Azure Kinect DK devices that you want to synchronize, do the following:
9790

98-
1. Take another 3.5mm cable and plug one end into the sync out port of the subordinate device.
99-
1. Plug the other end of the cable into the headphone splitter connected to the master device.
91+
- Ensure that the latest firmware is installed on the device. For more info about updating your devices, go to [Update Azure Kinect DK]().
92+
- Remove the device cover to reveal the sync ports.
93+
- Note the serial number for each device. You will use this number later in the setup process.
10094

101-
### Set up synchronized triggering
95+
### Host computers
10296

103-
Once you've set up your hardware for synchronized triggering, you'll need to set up the software. For more info on setting this up, go to the [Azure Kinect developer documentation]() (English only).
97+
- Host PC for each Azure Kinect DK. A dedicated host controller can be used, but it depends on how you're using the device and the amount of data being transferred over USB.
98+
- Azure Kinect Sensor SDK installed on each host PC. For more info on installing Sensor SDK, go to [Set up Azure Kinect DK]().
10499

105-
## Synchronize multiple Azure Kinect DK devices
100+
#### Linux computers: USB Memory on Ubuntu
106101

107-
### Synchronization cables
102+
If you are setting up multi-camera synchronization on Linux, by default the USB controller is only allocated 16 MB of kernel memory for handling of USB transfers. It is typically enough to support a single Azure Kinect DK, however more memory is needed to support multiple devices. To increase the memory, follow these steps:
108103

109-
Azure Kinect DK includes 3.5-mm synchronization ports that can be used to link multiple units together. When linked, cameras can coordinate the timing of Depth and RGB camera triggering. There are specific sync-in and sync-out ports on the device, enabling easy daisy chaining. A compatible cable isn't included in box and must be purchased separately.
104+
1. Edit /**etc/default/grub**.
105+
1. Replace the following line:
106+
```cmd
107+
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
108+
```
109+
with this line:
110+
```cmd
111+
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash usbcore.usbfs_memory_mb=32"
112+
```
113+
In this example, we set the USB memory to 32 MB twice that of the default, however it can be set much larger. Choose a value that is right for your solution.
114+
1. Run **sudo update-grub**.
115+
1. Restart the computer.
110116

111-
Cable requirements:
117+
### Cables
112118

113-
- 3.5-mm male-to-male cable ("3.5-mm audio cable")
114-
- Maximum cable length should be less than 10 meters
115-
- Both stereo and mono cable types are supported
119+
To connect the cameras to each other and to the host computers, you need 3.5-mm male-to-male cables (also known as 3.5-mm audio cable). The cables should be less than 10 meters long, and may be stereo or mono.
116120

117-
When using multiple depth cameras in synchronized captures, depth camera captures should be offset from one another by 160μs or more to avoid depth cameras interference.
121+
The number of cables that you need depends on the number of cameras you are using as well as the specific configuration. The Azure Kinect DK box does not include cables—you must purchase them separately.
118122

119-
> [!NOTE]
120-
> Make sure to remove the cover in order to reveal the sync ports.
123+
If you are connecting the cameras in the star configuration, you also need one headphone splitter.
121124

122-
### Cross-device calibration
125+
## Set up multiple Azure Kinect DK devices
123126

124-
In a single device depth and RGB cameras are factory calibrated. However, when multiple devices are used, new calibration requirements need to be considered to determine how to transform an image from the domain of the camera it was captured in, to the domain of the camera you want to process images in.
125-
There are multiple options for cross-calibrating devices, but in the GitHub green screen code sample we are using OpenCV methods
126-
There are multiple options for cross-calibrating devices, but in the [GitHub green screen code sample](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/tree/develop/examples/green_screen) we are using OpenCV method.
127+
### Connect your devices
127128

128-
### USB Memory on Ubuntu
129+
#### Daisy chain configuration
129130

130-
If you are setting up multi-camera synchronization on Linux, by default the USB controller is only allocated 16 MB of kernel memory for handling of USB transfers. It is typically enough to support a single Azure Kinect DK, however more memory is needed to support multiple devices. To increase the memory, follow the below steps:
131+
1. Connect each Azure Kinect DK to power.
132+
1. Connect one device to one host PC.
133+
1. Select one device to be the master device, and plug a 3.5-mm audio cable into its **Sync out** port.
134+
1. Plug the other end of the cable into the **Sync in** port of the first subordinate device.
135+
1. To connect another device, plug another cable into the **Sync out** port of the first subordinate device, and plug the other end of that cable into the **Sync in** port of the next device.
136+
1. Repeat the previous step until all of the devices are connected. The last device should have one cable plugged into its **Sync in** port, and its **Sync out** port should be empty.
131137

132-
1. Edit /etc/default/grub
133-
1. Replace the line that says GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" with GRUB_CMDLINE_LINUX_DEFAULT="quiet splash usbcore.usbfs_memory_mb=32". In this example, we set the USB memory to 32 MB twice that of the default, however to can be set much larger. Choose a value that is right for your solution.
134-
1. Run sudo update-grub
135-
1. Restart the computer
138+
#### Star configuration
136139

137-
### Verify two Azure Kinect DKs' synchronization
140+
1. Connect each Azure Kinect DK to power.
141+
1. Connect one device to one host PC.
142+
1. Select one device to be the master device, and plug the single end of the headphone splitter into its **Sync out** port.
143+
1. Connect 3.5-mm audio cables to the "split" ends of the headphone splitter.
144+
1. Plug the other end of each cable into the **Sync in** port of one of the subordinate devices.
138145

139-
After setting up the hardware and connecting the sync out port of the master to sync in of the subordinate, we can use the [Azure Kinect Viewer](azure-kinect-viewer.md) to validate the devices setup. It also can be done for more than two devices.
146+
## Verify that the devices are connected and communicating
140147

141-
> [!NOTE]
142-
> The Subordinate device is the one that connected to "Sync In" pin.
143-
>
144-
> The master is the one connected "Synch Out".
148+
Use the [Azure Kinect Viewer](azure-kinect-viewer.md) to validate the device setup.
145149

146150
1. Get the serial number for each device.
147151
2. Open two instances of [Azure Kinect Viewer](azure-kinect-viewer.md)
@@ -161,28 +165,15 @@ After setting up the hardware and connecting the sync out port of the master to
161165
162166
When the master Azure Kinect Device is started, the synchronized image from both of the Azure Kinect devices should appear.
163167

### Configure the software control settings

Once you've set up your hardware for synchronized triggering, you'll need to set up the software. For more information, go to the [Azure Kinect developer documentation]() (English only).

### Calibrate the devices as a synchronized set

In a single device, the depth and RGB cameras are factory calibrated. However, when multiple devices are used, new calibration requirements need to be considered to determine how to transform an image from the domain of the camera that captured it to the domain of the camera you want to process images in.

There are multiple options for cross-calibrating devices. In the [GitHub green screen code sample](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/tree/develop/examples/green_screen), we use OpenCV methods. The Readme file for this code sample provides more details and instructions for calibrating the devices.

## Next steps
