Thermal camera auto calibration improves 3D depth perception – day or night


Automotive vision systems innovator Foresight, part of the Thermal by FLIR program, already offers a cost-effective 3D perception system through QuadSight, a multi-spectral vision solution that combines data from two FLIR Boson thermal imaging cameras and two visible-light cameras, arranged as two stereo pairs. Merging the data from these four cameras combines the strengths of each vision technology to create a comprehensive 3D scene. The two wavelength bands provide effective vision in harsh weather and in poor lighting, 24 hours a day, according to the company.

And now Foresight has further refined the technology using a new, patent-pending automatic calibration system, which enables stereo cameras across the visible and infrared spectrums to remain calibrated at all times. This ensures the accuracy of the images generated for effective 3D perception and distance measurement of the surrounding environment.

The company claims automatic calibration will give vehicle manufacturers the flexibility to mount the visible and thermal stereo pairs either on a single base or asymmetrically as separate units, offering more options for embedding stereo camera systems within a vehicle’s design – all without sacrificing perception or detection capabilities.

What is stereoscopic vision?

Autonomous vehicles and advanced driver assistance systems (ADAS) rely on a variety of sensors, including thermal imaging, to create an accurate 3D perception of the environment. That 3D perception is created by combining the images of two cameras as a stereo pair, known as stereoscopic vision. The cameras in a stereo pair can be visible, thermal or otherwise; depth is recovered by triangulating each object in the shared field of view against the two camera positions, which allows the system to measure distance.
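The triangulation underlying stereo depth reduces, for an idealized rectified pair, to the relation Z = f·B/d: depth equals focal length times baseline divided by the disparity (the pixel shift of a point between the two images). A minimal sketch, with illustrative focal-length and baseline values not taken from the QuadSight system:

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a point from its disparity in a rectified stereo pair.

    Z = f * B / d: nearer objects shift more between the two images,
    so large disparities mean small depths.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 30 cm baseline.
# A 10 px disparity then corresponds to an object 30 m away.
print(depth_from_disparity(10.0, 1000.0, 0.30))  # 30.0
```

Note the inverse relationship: because disparity shrinks as distance grows, depth resolution degrades at long range, which is one reason wider baselines (or asymmetric mounting) are attractive.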

Measuring distance and creating a 3D model of the environment is a necessary component for machine vision systems to comprehend the scene. That comprehension, combined with a convolutional neural network, enables the system to identify objects and estimate their respective distances in order to make the most appropriate decisions for the situation, in conjunction with other sensor modalities such as lidar and radar.

One of the key hurdles to successfully leveraging stereoscopic imaging in the automotive context is accounting for the naturally occurring dynamic changes that all vehicles experience on the road, including small vibrations, bumps and temperature changes. These phenomena can reduce accuracy and potentially cause a complete loss of calibration altogether.

These minute changes require stereoscopic vision systems to continuously calibrate the cameras in order to create clear and accurate stereoscopic 3D perception, including the ability to account for parallax issues that can arise in a rapidly changing scene. A miscalibration may lead to inaccurate depth estimation that can affect the decision-making mechanism of any automated system, especially ADAS and autonomous vehicles.
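To see why even a small miscalibration matters, consider how a fixed disparity bias propagates into depth error under the same idealized Z = f·B/d model as above (the numbers are illustrative, not measurements from any Foresight system):

```python
def depth_error(true_disparity_px: float, bias_px: float,
                focal_px: float, baseline_m: float) -> float:
    """Depth error introduced by a disparity bias, e.g. a drifted calibration.

    Compares the true depth against the depth a system would report if
    every disparity measurement were shifted by bias_px.
    """
    z_true = focal_px * baseline_m / true_disparity_px
    z_meas = focal_px * baseline_m / (true_disparity_px + bias_px)
    return z_true - z_meas

# Illustrative: 1000 px focal length, 30 cm baseline, object at 30 m
# (10 px disparity). A single-pixel calibration bias shifts the depth
# estimate by roughly 2.7 m, an error that grows with range.
print(round(depth_error(10.0, 1.0, 1000.0, 0.30), 2))  # 2.73
```

Because the error grows quadratically with depth, continuous recalibration is most valuable exactly where ADAS decisions are hardest: distant objects approached at speed.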

With this technology, the company claims auto makers and autonomous vehicle developers now have access to a redundant system and an additional layer of 3D perception data to further refine and improve ADAS and autonomous systems for improved safety.


About Author


With over 20 years’ experience in editorial management and content creation for multiple market-leading titles at UKi Media & Events (publisher of Autonomous Vehicle International), Anthony has written articles and news covering everything from aircraft, airports and cars, to cruise ships, trains, trucks and even tires!
