The Best Robotic Depth Capture Solution for Your Business
Updated: Jun 24, 2022
Depth-sensing technology and robotics depth capture have become increasingly important to businesses across many different fields.
Essentially, 3D depth-sensing technology allows machines and devices to perceive their surroundings in three dimensions, enabling process optimization, robotics, autonomous vehicles, and automation. Depth-sensing applications draw on a wide variety of physical and technological methods for generating depth information in different industrial environments.
To choose the right 3D depth-sensing solution for your business, it helps to understand the potential of 3D technology for modern business, along with today's capture methods, applications, and ways to implement the technology.
In this article, you will find the answers to your fundamental questions about depth-sensing technology, a comparison of its most common types, and tips on assessing which method is the right one for your brand’s needs.
Structured Light Depth-Sensing Technology: The Power of Robotic Depth Capture
The structured light depth-sensing system works by projecting a known light pattern onto a scene or object, which is then recorded by at least one depth-sensing camera. Typical patterns include stripes, dots, color-coded patterns, and time-coded patterns.
The camera is placed at a known, fixed angle relative to the projector, and the two are combined in a single device so the camera can pick up the distorted pattern. The system then calculates the difference between the projected pattern and the distorted version perceived by the camera.
This way, robotics depth capture can reconstruct the scene depth and present it in the form of a depth map.
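To make the geometry concrete, here is a minimal sketch of how a depth value can be recovered by triangulation once the projected pattern has been matched against the observed one. The function name, focal length, baseline, and disparity values are illustrative assumptions, not parameters of any specific product.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth for a calibrated projector-camera pair (illustrative sketch).

    disparity_px    -- shift (in pixels) between where a pattern feature was
                       projected and where the camera observed it
    focal_length_px -- camera focal length expressed in pixels
    baseline_m      -- distance between projector and camera centers, in meters
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Avoid division by zero where no pattern shift was detected.
    valid = disparity_px > 0
    safe_disparity = np.where(valid, disparity_px, 1.0)
    return np.where(valid, focal_length_px * baseline_m / safe_disparity, np.nan)

# Example with made-up values: larger disparity means a closer surface,
# zero disparity yields no depth (NaN).
disparities = np.array([40.0, 25.0, 10.0, 0.0])  # pixels
print(depth_from_disparity(disparities, focal_length_px=600.0, baseline_m=0.075))
```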
The structured light method can fail at long ranges and with transparent objects or highly reflective surfaces. Depth reconstruction is also affected when multiple cameras cover a scene with intersecting fields of view, or when external light sources operating at the same wavelength interfere with the projected patterns.
Structured light-sensing technology offers a high level of accuracy at short ranges, and it usually costs less than other robotics depth capture technologies.

Time of Flight Depth Sensors
Time of Flight, also known as ToF, denotes a computer vision approach that measures the time light takes to travel a particular distance. Because the speed of light is known, the system can calculate the distance to the target, since the time light takes to travel from the emitter to the target and back to the receiver (both parts of a single device) is proportional to that distance.
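As a quick illustration of that proportionality, the sketch below converts a measured round-trip time into a distance. The timing value is a made-up example rather than a reading from any particular sensor.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_time_s):
    """Light travels to the target and back, so halve the round-trip distance."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a round trip of about 6.67 nanoseconds corresponds to a target roughly 1 m away.
print(distance_from_round_trip(6.67e-9))  # ~1.0 m
```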
Like structured light, depth-sensing cameras that employ the ToF principle are susceptible to interference from other cameras or light sources operating at the same wavelength; in multi-camera setups, synchronizing the cameras can resolve this issue. Overall, ToF systems offer high accuracy, the ability to recover depth information from surfaces with minimal texture, and relative independence from ambient light.
The light is usually emitted by a laser in the infrared spectrum or by an LED. There are several variations of Time of Flight cameras, the most common being direct and indirect ToF systems.
Direct and Indirect ToF
Direct ToF (dToF) emits a single light pulse and estimates the distance from the time difference between the emitted pulse and its reflection.
Indirect ToF (iToF), on the other hand, is based on a continuously emitted, coded (modulated) light stream.
The system calculates the distance by measuring the phase difference between the emitted light and the received reflection. While direct ToF is mostly used in scanning-based LiDARs, indirect ToF is the main principle behind flash-based cameras.
Compared to direct ToF, iToF delivers high accuracy without requiring the extremely fast sampling needed to time a single laser pulse. As a result, it enables higher-resolution capture and a wider field of view at high frame rates and budget-friendly prices.
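For indirect ToF, the distance follows from the phase shift of the modulated light rather than from a directly timed pulse. The sketch below uses a common four-sample demodulation scheme; the function name, modulation frequency, and sample values are assumptions for illustration, and exact sign conventions vary between sensors.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def itof_distance(q0, q90, q180, q270, modulation_freq_hz):
    """Estimate distance from four phase-shifted correlation samples (illustrative sketch).

    q0..q270 -- correlation measurements taken at 0, 90, 180 and 270 degrees.
    The phase shift of the returned light is proportional to distance and wraps
    around once per unambiguous range of the modulation signal.
    """
    phase = math.atan2(q270 - q90, q0 - q180)  # radians
    phase = phase % (2 * math.pi)              # map to [0, 2*pi)
    unambiguous_range_m = SPEED_OF_LIGHT / (2 * modulation_freq_hz)
    return (phase / (2 * math.pi)) * unambiguous_range_m

# Example with illustrative samples and a 20 MHz modulation frequency;
# the unambiguous range at 20 MHz is about 7.5 m.
print(itof_distance(q0=1.2, q90=0.4, q180=0.2, q270=1.0, modulation_freq_hz=20e6))
```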
In general, flash-based ToF cameras work with modulated light and conventional image sensor arrays that allow the entire visible scene to be illuminated at once. A single shot is enough to construct a depth map of the scene.
This makes for an optimal operating range at short and medium distances, with the maximum range depending on the power of the light source. Thanks to their high frame rates and affordable cost, these cameras are an excellent solution for many applications, including package weighing and dimensioning in logistics.