---
title: Synchronization across multiple Azure Kinect DKs
description: In this article, we explore the benefits of multi-device synchronization and the details of how it is performed.
author: tesych
ms.author: tesych
ms.prod: kinect-dk
ms.date: 01/10/2020
ms.topic: article
keywords: azure, kinect, specs, hardware, DK, capabilities, depth, color, RGB, IMU, microphone, array, multi, synchronization
---

# Synchronization across multiple Azure Kinect DK devices

In this article, we explore the benefits of multi-device synchronization and how it works in detail.

Before you start, make sure to review the [Azure Kinect DK hardware specification](hardware-specification.md) and [the multi-camera hardware setup](https://support.microsoft.com/help/4494429).

There are a few important things to consider before starting your multi-camera setup.

- We recommend using a manual exposure setting if you want to control the precise timing of each device. Automatic exposure allows each color camera to dynamically change exposure; as a result, the timing between the two devices cannot stay exactly the same.
- The device timestamp reported for images changes meaning from 'Center of Frame' to 'Start of Frame' when using master or subordinate modes.
- Avoid IR camera interference between different cameras. Use ```depth_delay_off_color_usec``` or ```subordinate_delay_off_master_usec``` to ensure each IR laser fires in its own 160 μs window or has a different field of view.
- Do ensure you are using the most recent firmware version.
- Do not repeatedly set the same exposure setting in the image capture loop.
- Do set the exposure when needed; just call the API once (see the sketch after this list).

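As a minimal sketch of the last two points, the color exposure can be fixed once with the Sensor SDK's color control API before the capture loop starts, instead of being set on every frame. The 8,000 μs value and the helper function name are examples only; pick an exposure that suits your lighting.

```cpp
#include <k4a/k4a.h>

// Minimal sketch: switch the color camera to manual exposure and set an
// absolute exposure time in microseconds, once, before entering the capture loop.
// The 8000 us value is an example only; choose one that suits your lighting.
static bool set_manual_exposure_once(k4a_device_t device)
{
    return k4a_device_set_color_control(device,
                                        K4A_COLOR_CONTROL_EXPOSURE_TIME_ABSOLUTE,
                                        K4A_COLOR_CONTROL_MODE_MANUAL,
                                        8000) == K4A_RESULT_SUCCEEDED;
}
```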

## Why use multiple Azure Kinect DK devices?

There are many reasons to use multiple Azure Kinect DK devices. Some examples are:
- Fill in occlusions
- 3D object scanning
- Increase the effective frame rate beyond 30 FPS
- Capture multiple 4K color images of the same scene, all aligned at the start of exposure to within 100 μs
- Increase camera coverage within the space

### Solve for occlusion

Occlusion means that there is something you want to see, but you can't see it because something else is blocking the view. In our case, the Azure Kinect DK has two cameras (a depth camera and a color camera) that do not share the same origin, so one camera can see part of an object that the other cannot. Therefore, when transforming the depth image to the color image, you may see a shadow around an object.
In the image below, the left camera sees the gray pixel P2, but the ray from the right camera to P2 hits the white foreground object. As a result, the right camera cannot see P2.



Using additional Azure Kinect DK devices helps fill in these occluded regions.
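For context, the shadow shows up in the output of the SDK's depth-to-color transformation. A minimal sketch of that operation follows; the depth mode, the color resolution, and the omitted error handling are assumptions for illustration.

```cpp
#include <k4a/k4a.h>
#include <cstdint>

// Minimal sketch: reproject a depth image into the color camera's geometry.
// Pixels the depth camera cannot observe from the color camera's viewpoint
// come out as 0 (the "shadow" discussed above). Error handling is omitted.
void depth_to_color(k4a_device_t device, k4a_image_t depth_image)
{
    k4a_calibration_t calibration;
    k4a_device_get_calibration(device, K4A_DEPTH_MODE_NFOV_UNBINNED,
                               K4A_COLOR_RESOLUTION_1080P, &calibration);

    k4a_transformation_t transformation = k4a_transformation_create(&calibration);

    int width = calibration.color_camera_calibration.resolution_width;
    int height = calibration.color_camera_calibration.resolution_height;

    k4a_image_t transformed_depth = NULL;
    k4a_image_create(K4A_IMAGE_FORMAT_DEPTH16, width, height,
                     width * (int)sizeof(uint16_t), &transformed_depth);

    k4a_transformation_depth_image_to_color_camera(transformation, depth_image, transformed_depth);

    // ... consume transformed_depth ...

    k4a_image_release(transformed_depth);
    k4a_transformation_destroy(transformation);
}
```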

## Set up multiple Azure Kinect DK devices

Make sure to review [the multi-camera hardware setup article](https://support.microsoft.com/help/4494429), which describes different options for hardware setup.

### Synchronization cables

Azure Kinect DK includes 3.5-mm synchronization ports that can be used to link multiple devices together. When linked, cameras can coordinate the timing of depth and RGB camera triggering. There are specific sync-in and sync-out ports on the device, enabling easy daisy chaining. A compatible cable isn't included in the box and must be purchased separately.

Cable requirements:

- 3.5-mm male-to-male cable ("3.5-mm audio cable")
- Maximum cable length should be less than 10 meters
- Both stereo and mono cable types are supported

When using multiple depth cameras in synchronized captures, depth camera captures should be offset from one another by 160 μs or more to avoid depth camera interference.
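As a rough sketch of how this offset is expressed in software (assuming the Sensor SDK, one master, and one subordinate connected with a sync cable; the helper names, resolution, and depth mode here are example choices), the subordinate's capture can be shifted relative to the master with ```subordinate_delay_off_master_usec```:

```cpp
#include <k4a/k4a.h>
#include <cstdint>

// Rough sketch: build configurations for a master and one subordinate. The
// subordinate is offset by 160 us so its IR laser never fires at the same time
// as the master's. Additional subordinates would be staggered by further
// multiples of 160 us (320, 480, ...).
k4a_device_configuration_t make_base_config(void)
{
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    return config;
}

k4a_device_configuration_t make_master_config(void)
{
    k4a_device_configuration_t config = make_base_config();
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;
    return config;
}

k4a_device_configuration_t make_subordinate_config(uint32_t chain_position) // 1 for the first subordinate
{
    k4a_device_configuration_t config = make_base_config();
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
    config.subordinate_delay_off_master_usec = 160 * chain_position;
    return config;
}
```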

> [!NOTE]
> Make sure to remove the cover in order to reveal the sync ports.
|
### Cross-device calibration

Within a single device, the depth and RGB cameras are factory calibrated. However, when multiple devices are used, new calibration requirements need to be considered to determine how to transform an image from the domain of the camera it was captured in to the domain of the camera you want to process images in.
There are multiple options for cross-calibrating devices; in the [GitHub green screen code sample](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/tree/develop/examples/green_screen), we use OpenCV methods.
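The exact calls depend on your calibration target. As a rough sketch (assuming a chessboard visible to both color cameras and corner points you have already detected, for example with ```cv::findChessboardCorners```), ```cv::stereoCalibrate``` estimates the rotation and translation between the two devices' color cameras, similar in spirit to the green screen sample's approach:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Rough sketch: estimate rotation R and translation T between two devices'
// color cameras from chessboard corners detected in synchronized image pairs.
// The per-device intrinsics may come from each device's factory calibration.
void cross_calibrate(const std::vector<std::vector<cv::Point3f>> &object_points,
                     const std::vector<std::vector<cv::Point2f>> &corners_device1,
                     const std::vector<std::vector<cv::Point2f>> &corners_device2,
                     cv::Mat camera_matrix1, cv::Mat dist_coeffs1,
                     cv::Mat camera_matrix2, cv::Mat dist_coeffs2,
                     cv::Size image_size, cv::Mat &R, cv::Mat &T)
{
    cv::Mat E, F; // essential and fundamental matrices, not used further here
    cv::stereoCalibrate(object_points, corners_device1, corners_device2,
                        camera_matrix1, dist_coeffs1,
                        camera_matrix2, dist_coeffs2,
                        image_size, R, T, E, F,
                        cv::CALIB_FIX_INTRINSIC); // keep per-device intrinsics fixed
}
```

Once R and T are known, data from the two devices can be expressed in a common coordinate frame.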

### USB memory on Ubuntu

If you are setting up multi-camera synchronization on Linux, by default the USB controller is only allocated 16 MB of kernel memory for handling USB transfers. This is typically enough to support a single Azure Kinect DK, but more memory is needed to support multiple devices. To increase the memory, follow these steps:
- Edit `/etc/default/grub`
- Replace the line that says `GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"` with `GRUB_CMDLINE_LINUX_DEFAULT="quiet splash usbcore.usbfs_memory_mb=32"`. In this example, we set the USB memory to 32 MB, twice the default, but it can be set much larger. Choose a value that is right for your solution.
- Run `sudo update-grub`
- Restart the computer

### Verify two Azure Kinect DKs' synchronization

After setting up the hardware and connecting the sync out port of the master to the sync in port of the subordinate, we can use the [Azure Kinect Viewer](azure-kinect-viewer.md) to validate the device setup. The same approach can be used for more than two devices.

> [!NOTE]
> The subordinate device is the one connected to the "Sync In" port.
> The master is the one connected via "Sync Out".
|
1. Get the serial number for each device.
2. Open two instances of the [Azure Kinect Viewer](azure-kinect-viewer.md).
3. Open the subordinate Azure Kinect DK device first. Navigate to the Azure Kinect Viewer, and in the Open Device section, choose the subordinate device:



4. In the "External Sync" section, choose the "Sub" option and start the device. Images will not be sent by the subordinate after you hit start, because the device is waiting for the sync pulse from the master device.



5. Navigate to the other instance of the Azure Kinect Viewer and open the master Azure Kinect DK device.
6. In the "External Sync" section, choose the "Master" option and start the device.

> [!NOTE]
> The master device must always be started last to get precise image capture alignment between all devices.

When the master Azure Kinect DK device is started, synchronized images from both Azure Kinect devices should appear.
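The same start order applies when the devices are driven from the Sensor SDK rather than the viewer. Below is a minimal sketch; the device indices are assumptions, so map them to the serial numbers you noted in step 1, and the resolution and depth mode are example choices.

```cpp
#include <k4a/k4a.h>

// Minimal sketch: start the subordinate first, then the master, so the
// subordinate is already waiting for sync pulses when the master begins.
// Indices 0 (master) and 1 (subordinate) are assumptions for illustration.
int main()
{
    k4a_device_t master = NULL;
    k4a_device_t subordinate = NULL;
    if (k4a_device_open(0, &master) != K4A_RESULT_SUCCEEDED ||
        k4a_device_open(1, &subordinate) != K4A_RESULT_SUCCEEDED)
    {
        return 1;
    }

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    k4a_device_configuration_t sub_config = config;
    sub_config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;

    k4a_device_configuration_t master_config = config;
    master_config.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;

    k4a_device_start_cameras(subordinate, &sub_config); // subordinate first: it waits for the master
    k4a_device_start_cameras(master, &master_config);   // master last: this releases the whole chain

    // ... capture from both devices with k4a_device_get_capture ...

    k4a_device_stop_cameras(master);
    k4a_device_stop_cameras(subordinate);
    k4a_device_close(master);
    k4a_device_close(subordinate);
    return 0;
}
```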

### Avoiding interference from other depth cameras

Interference happens when one depth sensor's ToF lasers are on at the same time as another depth camera's.
To avoid it, cameras that have overlapping areas of interest need to have their timing shifted by the "laser on time" so that they are not on at the same time. For each capture, the laser turns on nine times and is active for only 125 μs; it is then idle for 1450 μs or 2390 μs, depending on the mode of operation. As a result, depth cameras need their "laser on time" shifted by a minimum of 125 μs, and that on time needs to fall into the idle time of the other depth sensors in use.

Due to the differences between the clock used by the firmware and the clock used by the camera, 125 μs cannot be used directly. Instead, the software setting required to ensure there is no camera interference is 160 μs. It allows nine more depth cameras to be scheduled into the 1450 μs of idle time of NFOV. The exact timing changes based on the depth mode you are using.

Using the [depth sensor raw timing table](hardware-specification.md), the exposure time can be calculated as:

> [!NOTE]
> Exposure Time = (IR Pulses * Pulse Width) + (Idle Periods * Idle Time)

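For example, assuming the NFOV values quoted above (nine IR pulses of 125 μs each and, per the raw timing table, eight idle periods of 1450 μs), the exposure time works out to:

$$
9 \times 125\,\mu\text{s} + 8 \times 1450\,\mu\text{s} = 1{,}125\,\mu\text{s} + 11{,}600\,\mu\text{s} = 12{,}725\,\mu\text{s} \approx 12.7\,\text{ms}
$$

Check the raw timing table for the exact pulse and idle counts of the mode you are using.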
|
## Triggering with custom source

A custom sync source can be used to replace the master Azure Kinect DK. This is helpful when the image captures need to be synchronized with other equipment. The custom trigger must create a sync signal, similar to the master device, via the 3.5-mm port.

- The SYNC signals are active high, and the pulse width should be greater than 8 μs.
- The frequency must precisely match the color camera frame rate (30 fps, 15 fps, or 5 fps), as it is the frequency of the color camera's master VSYNC signal.
- The SYNC signal from the board should be 5 V TTL/CMOS with a maximum driving capacity of no less than 8 mA.
- All styles of 3.5-mm port can be used with the Kinect DK, including the "mono" style, which is not pictured. All sleeves and rings are shorted together inside the Kinect DK and connected to the ground of the master Azure Kinect DK. The tip is the sync signal.

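When an external board supplies the sync signal, no Azure Kinect DK acts as the master. A reasonable sketch, assuming every device receives the custom trigger on its "Sync In" port (the helper name, resolution, and depth mode are example choices), is to configure each device as a subordinate and stagger the depth timing in 160 μs steps as described earlier:

```cpp
#include <k4a/k4a.h>
#include <cstdint>

// Rough sketch: with a custom trigger wired to every device's "Sync In" port,
// each Azure Kinect DK runs as a subordinate and the external board plays the
// master's role. Depth captures are staggered in 160 us steps to avoid laser overlap.
k4a_device_configuration_t make_externally_triggered_config(uint32_t chain_position)
{
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30; // must match the trigger frequency
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
    config.subordinate_delay_off_master_usec = 160 * chain_position; // 0, 160, 320, ...
    return config;
}
```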

## Next steps

- [Use Azure Kinect Sensor SDK](about-sensor-sdk.md)
- [Capture Azure Kinect device synchronization](capture-device-synchronization.md)
- [Set up hardware](set-up-azure-kinect-dk.md)