Cost-effective vehicle perception technology debuts


According to AI specialists Cambridge Consultants, the viability of mass-market autonomous vehicles hangs in the balance. The firm states that the global automotive market is expected to decline by more than 3% during 2020 as a result of the Covid-19 outbreak, and may take years to recover. Due to high technology costs, the automotive industry is struggling to introduce advanced driver-assistance systems (ADAS) beyond luxury vehicles and into the mass market.

Meanwhile, the race to rack up millions of driven miles of real-world training data, which the firm likens to an arms race, favors a small group of early leaders and blocks new entrants. Against this background, the company has developed EnfuseNet, which it claims is the first low-cost, high-resolution vehicle perception technology. It hopes the system will help vehicle manufacturers and mobility technology providers to realize a critical element of a self-driving system at a much lower cost, and to deliver autonomy to new and larger segments of the automotive industry.

Accurate and detailed depth point cloud data is critical for the autonomous decision-making process. Today's autonomous vehicles resolve depth data using two-dimensional camera inputs combined with lidar or radar. Lidar remains the most accurate approach, but with unit costs for mechanical spinning lidar devices running into the thousands of dollars, the technology is prohibitively expensive beyond the luxury market.

Radar is lower cost but does not provide enough depth points to build a high-resolution image. According to Cambridge Consultants, EnfuseNet takes data from a standard RGB camera and low-resolution depth sensors, which cost just tens of dollars per device, and applies a neural network to predict depth at a much greater resolution than the original input. This depth information is per image pixel, enabling the system to provide depth data and a confidence prediction for every object in an image.
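The data flow described above (a full-resolution RGB image plus a low-resolution depth grid in, dense per-pixel depth and confidence out) can be made concrete with a short sketch. This is an illustrative numpy mock-up, not Cambridge Consultants' implementation: `predict_dense_depth` and its naive nearest-neighbour upsampling merely stand in for the trained fusion network, so only the input/output shapes are meaningful.

```python
import numpy as np

def predict_dense_depth(rgb, sparse_depth):
    """Stand-in for an EnfuseNet-style fusion network (illustrative only).

    rgb:          (H, W, 3) float array from a standard camera
    sparse_depth: (h, w) float array from a cheap, low-resolution depth sensor

    Returns a dense depth map and a confidence map, both (H, W) -- one
    depth value and one confidence value per image pixel, which is the
    key property the article describes. A real system would use a
    trained neural network; nearest-neighbour upsampling is used here
    purely as a placeholder.
    """
    H, W, _ = rgb.shape
    h, w = sparse_depth.shape
    # Upsample the sparse depth grid to camera resolution (placeholder
    # for learned super-resolution / sensor fusion).
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    dense_depth = sparse_depth[np.ix_(rows, cols)]
    # Placeholder confidence: full confidence where a raw measurement
    # lands, lower elsewhere. A real network would predict this.
    confidence = np.full((H, W), 0.5)
    confidence[:: H // h, :: W // w] = 1.0
    return dense_depth, confidence

rgb = np.random.rand(64, 96, 3)       # standard RGB camera frame
sparse = np.random.rand(8, 12)        # low-resolution depth sensor
depth, conf = predict_dense_depth(rgb, sparse)
print(depth.shape, conf.shape)        # (64, 96) (64, 96)
```

The point of the sketch is the shape contract: however cheap and coarse the depth sensor, the output carries a depth and a confidence value for every camera pixel.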

The system was trained with synthetic data in a virtual learning environment, and the company said it performed impressively when tested with real-world data. This, Cambridge Consultants said, will enable OEMs and automotive suppliers to overcome the time, complexity and cost constraints of collecting real-world data to train their ADAS perception algorithms.

Importantly, generating high-quality depth point clouds, with confidence down to the pixel level, means that its system improves explainability and traceability, reducing the risk of ‘black box’ decision making in a safety-critical application. The underlying system model is based on a novel architecture that fuses Convolutional Neural Networks (CNNs), Fully Convolutional Neural Networks (FCNs), pretrained elements, transfer and multi-objective learning and other approaches to optimize depth prediction performance.
