Structured-Light Imaging: What’s in It?

4 min read

The gradual shift from 2D to 3D imaging across various fields is a clear indicator of the growing demand for 3D data. 3D imaging techniques fall into two main groups: active and passive. Active methods interact with the measured object, for example by emitting light onto it, which makes them more precise and robust than passive ones.

There are three major technologies used in modern 3D cameras: time-of-flight, active stereoscopy, and structured light. Let’s take a closer look at each one of these.

Time-of-Flight (ToF) Cameras

ToF cameras are based on direct depth measurement and are usually applied in real-time measurement systems with simple-to-use software. A ToF camera's performance depends on the sensor's speed. These cameras work at both short and long range, mostly indoors, and under different lighting conditions. The main drawbacks are the high cost of the illumination and acquisition components, and the fact that performance is truly good only at short range.
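To illustrate the direct-measurement principle, here is a minimal Python sketch that converts a light pulse's measured round-trip time into distance; the timing value is a hypothetical example:

```python
# Minimal sketch of the time-of-flight principle: depth is half the
# round-trip distance traveled by a light pulse at speed c.
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_seconds: float) -> float:
    """Distance to the object from the measured round-trip time."""
    return C * round_trip_seconds / 2.0

# Example: a pulse returning after ~6.67 ns corresponds to ~1 m.
print(f"{tof_depth(6.67e-9):.3f} m")
```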

Active Stereoscopy

These utilize triangulation algorithms to estimate depth from multiple 2D images of the same object taken from different angles. Stereoscopic technology is available to everyone, as it works with cost-effective consumer electronics such as smartphones, cameras, and tablets, and it can robustly acquire 3D shape both indoors and outdoors. However, it requires highly complex software while offering neither high accuracy nor a fast response.
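To make the triangulation idea concrete, here is a minimal sketch using OpenCV's block-matching stereo; the image file names, focal length, and baseline are illustrative assumptions. Depth follows from disparity as Z = f·B/d:

```python
import cv2
import numpy as np

# Rectified left/right views of the same scene (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, the horizontal shift (disparity)
# between the two views.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

f_px = 700.0       # focal length in pixels (assumed)
baseline_m = 0.10  # distance between the cameras in meters (assumed)

# Triangulation: depth is inversely proportional to disparity, Z = f * B / d.
depth = np.where(disparity > 0, f_px * baseline_m / np.maximum(disparity, 1e-6), 0.0)
```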

Structured Light

Structured light is the perfect solution for short (and very short) active scanning ranges, with impressive accuracy reaching 100 μm indoors. However, the technology cannot be used for dark or transparent objects, for highly reflective objects, or under variable lighting conditions.

Structured light is a specially designed 2D pattern that can be represented as an image. There are many pattern types, and the choice depends on the application. The most basic approach is to use Moiré fringe patterns—multiple images of sine patterns shifted by different phases.
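As an illustration, here is a minimal NumPy sketch that generates such phase-shifted sinusoidal fringe images; the resolution, fringe period, and number of shifts are arbitrary assumptions:

```python
import numpy as np

def fringe_patterns(width=1280, height=800, period_px=32, n_shifts=4):
    """Generate n_shifts sinusoidal fringe images, each shifted by 2*pi/n_shifts."""
    x = np.arange(width)
    patterns = []
    for n in range(n_shifts):
        shift = 2.0 * np.pi * n / n_shifts
        row = 0.5 + 0.5 * np.cos(2.0 * np.pi * x / period_px + shift)
        patterns.append(np.tile(row, (height, 1)))  # vertical fringes
    return patterns  # intensities in [0, 1], ready to scale to 8-bit

patterns = fringe_patterns()
```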

The simplest setup of a structured-light system is shown below. The pattern is projected onto the object, and the camera captures its superposition with the object's surface. The key step of the algorithm is recovering the object's 3D shape from the camera images. The approach is not limited to any particular part of the light spectrum, though infrared or visible light is preferable.
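One common way to perform that recovery (a sketch of the general technique, not necessarily the exact scheme used here) is phase-shifting profilometry: each camera pixel's phase encodes which projector column illuminated it, and depth then follows by triangulating with the calibrated camera-projector pair. For N captures shifted by 2π/N each (N ≥ 3):

```python
import numpy as np

def wrapped_phase(images):
    """Per-pixel wrapped phase from N captures shifted by 2*pi/N each (N >= 3)."""
    n = len(images)
    shifts = 2.0 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    # For I_n = A + B*cos(phi + shift_n): tan(phi) = -num / den.
    return np.arctan2(-num, den)  # wrapped to (-pi, pi]; unwrapping comes next
```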

[Figure: the simplest setup of a structured-light system]

Use cases

With high-quality, real-time 3D scanning, real-world objects can be captured quickly and reconstructed accurately, and 3D imagery further widens the range of applications for such technologies. Imaging technologies are used ubiquitously, from cameras in interface devices for video chats to endoscopes that let doctors observe and collect human cells and tissue.

Structured-light 3D technology opens a whole world of opportunities:

  • Medicine: facial anthropometry, cardiac mechanics analysis, and 3D endoscopy imaging
  • Security: biometric identification, manufacturing defect detection, and motion-activated security cameras
  • Entertainment and HCI: hand gesture detection, expression detection, gaming, and 3D avatars
  • Communication and collaboration: remote object transmission and 3D video conferencing
  • Preservation of historical artifacts
  • Manufacturing: 3D optical scanning, reverse engineering of objects to produce CAD data, volume measurement of intricate engineering parts, and more
  • Retail: skin surface measurement for cosmetics research and development, and wrinkle measurement on various fabrics
  • Automotive: obstacle detection systems
  • Guidance for industrial robots

Our approach

The structured-light approach inspects one surface or side of an object at a time. Our goal is to acquire the object's full shape by capturing it from different perspectives and assembling the whole picture accordingly.

We started with images generated in the Unity engine and built the entire 3D recognition pipeline around them. The process maps easily and quickly to real-world applications. Currently, we are working on the hardware needed to recognize real objects. Our method consists of six main steps (a sketch of the registration and meshing steps follows the list):

  1. Calibrate camera-projector pair
  2. Project multiple Moiré patterns onto the object from different views (Fig. 2a)
  3. Compute depth information of the object for each view (Fig. 2b)
  4. Remove background and extract point cloud of object parts
  5. Register point clouds from multiple views into one (Fig. 2c)
  6. Convert point cloud into a mesh (Fig. 2d)
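As an illustration of steps 5 and 6, here is a minimal sketch using the Open3D library; the file names, correspondence distance, and Poisson depth are assumptions for the example:

```python
import open3d as o3d

# Step 5 (sketch): align one view onto another with point-to-plane ICP.
source = o3d.io.read_point_cloud("view_0.ply")  # hypothetical per-view clouds
target = o3d.io.read_point_cloud("view_1.ply")
for pc in (source, target):
    pc.estimate_normals()  # point-to-plane ICP and Poisson both need normals

result = o3d.pipelines.registration.registration_icp(
    source, target, 0.02,  # max correspondence distance in scene units
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
source.transform(result.transformation)
merged = source + target  # registered clouds combined into one

# Step 6 (sketch): turn the merged cloud into a mesh via Poisson reconstruction.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=9)
o3d.io.write_triangle_mesh("rabbit_mesh.ply", mesh)
```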
Fig. 2: Images from the structured-light pipeline: (2a) the pattern projected on the rabbit; (2b) the reconstructed part of the rabbit; (2c) the point cloud of the rabbit; (2d) the mesh of the rabbit.

[Image: the structured-light imaging process]

Our goal is to create an algorithm with Compute Unified Device Architecture (CUDA) support and port it to the NVIDIA® Jetson Nano™ Developer Kit. We can also apply neural networks at each step to improve the algorithm's performance and accuracy or to remove its limitations.
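As a sketch of the kind of per-pixel work that maps well to the GPU, here is the four-step wrapped-phase computation written as a CUDA kernel via Numba; the array sizes, the random stand-in images, and the use of Numba rather than native CUDA C are assumptions for the example:

```python
import math
import numpy as np
from numba import cuda

@cuda.jit
def wrapped_phase_4step(i1, i2, i3, i4, phase):
    # Four-step phase shifting: phi = atan2(I4 - I2, I1 - I3), per pixel.
    y, x = cuda.grid(2)
    if y < phase.shape[0] and x < phase.shape[1]:
        phase[y, x] = math.atan2(i4[y, x] - i2[y, x], i1[y, x] - i3[y, x])

h, w = 800, 1280
captures = [np.random.rand(h, w).astype(np.float32) for _ in range(4)]  # stand-in images
phase = np.zeros((h, w), dtype=np.float32)

threads = (16, 16)
blocks = ((h + threads[0] - 1) // threads[0], (w + threads[1] - 1) // threads[1])
wrapped_phase_4step[blocks, threads](*captures, phase)  # Numba copies host arrays in/out
```

Contact SoftServe today to learn more.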