Edge AI semiconductor company Ambarella has announced the release of its centralized 4D imaging radar architecture for autonomous mobility systems.
The company’s latest solution, which it states is a world first, enables the central processing of raw radar data as well as deep, low-level fusion with other sensor inputs such as cameras, lidar and ultrasonics. The architecture delivers greater environmental perception and safer path planning in AI-based ADAS and L2+ to L5 autonomous driving systems, as well as in autonomous robotics.
The 4D imaging radar architecture features Ambarella’s proprietary Oculii radar technology, including AI software algorithms that dynamically adapt radar waveforms to the surrounding environment. This delivers a high angular resolution of 0.5°, an ultra-dense point cloud of tens of thousands of points per frame and a long detection range of over 500 m. The system achieves this with an order of magnitude fewer antenna multiple-input, multiple-output (MIMO) channels, which reduces the data bandwidth and significantly lowers power consumption.
Ambarella’s centralized 4D imaging radar with Oculii technology aims to provide a flexible, high-performance and cost-effective perception architecture that enables system integrators to future-proof their radar designs.
Ambarella optimized the Oculii algorithms for its CV3 AI domain controller System-on-Chip (SoC) portfolio and added specific radar signal processing acceleration. The CV3’s AI performance per watt provides the high compute and memory capacity needed to achieve high radar density, range and sensitivity. Furthermore, a single CV3 can efficiently provide high-performance, real-time processing for perception, low-level sensor fusion and path planning, centrally and simultaneously, within AVs and robots.
“No other semiconductor and software company has advanced in-house capabilities for both radar and camera technologies, as well as AI processing,” said Fermi Wang, president and CEO, Ambarella. “This expertise allowed us to create an unprecedented, centralized architecture that combines our unique Oculii radar algorithms with the CV3’s industry-leading domain control performance per watt to efficiently enable new levels of AI perception, sensor fusion and path planning that will help realize the full potential of ADAS, autonomous driving and robotics.”
Ambarella’s solution applies AI software to dynamically adapt the radar waveforms generated with existing monolithic microwave integrated circuit (MMIC) devices. By using AI sparsification to create virtual antennas, the company’s Oculii technology reduces the antenna array for each processor-less MMIC radar head in this new architecture to 6 transmit x 8 receive. This significantly reduces the number of MMICs required while achieving an extremely high joint azimuth and elevation angular resolution of 0.5°. Furthermore, Ambarella’s centralized architecture consumes significantly less power at the maximum duty cycle and reduces the bandwidth required for data transport by a factor of six. The solution also negates the need for pre-filtered edge processing and its accompanying loss of sensor information.
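To illustrate the scale of the stated 6 transmit x 8 receive configuration, the sketch below works through the standard MIMO virtual-array arithmetic. It is a generic illustration of the MIMO principle, not a description of Ambarella’s internal processing.

```python
# Generic MIMO radar arithmetic (illustrative, not Ambarella-specific).
# In a MIMO radar, each transmit/receive antenna pair contributes one
# virtual channel, so the virtual array size is Tx * Rx.

def virtual_channels(tx: int, rx: int) -> int:
    """Number of virtual channels formed by a Tx x Rx MIMO antenna array."""
    return tx * rx

# The article's per-head configuration: 6 transmit x 8 receive.
per_head = virtual_channels(tx=6, rx=8)
print(per_head)  # 48 virtual channels per radar head
```

This is why a small physical array can still yield fine angular resolution: the 48 virtual channels per head behave, for angle estimation, like a much larger antenna array.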
The software-defined centralized architecture allows the CV3’s processing resources to be allocated dynamically, based on real-time conditions, both between sensor types and among sensors of the same type. For example, in extremely rainy conditions, where long-range camera data is degraded, the CV3 can shift resources toward its radar inputs. Likewise, when it is raining on a highway, the CV3 can focus on data from front-facing radar sensors to extend the vehicle’s detection range while providing faster reaction times.
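The allocation behavior described above can be sketched as a simple policy function. Everything here is hypothetical: the names, baseline split, thresholds and weights are illustrative placeholders, not Ambarella's actual scheduling logic.

```python
# Hypothetical sketch of software-defined compute allocation between sensor
# types based on real-time conditions. All values below are illustrative.

def allocate_compute(rain_intensity: float, on_highway: bool) -> dict:
    """Split a compute budget (fractions summing to 1.0) across sensors.

    In heavy rain, camera range degrades, so weight shifts toward radar;
    on a highway, front-facing radar gets priority for detection range.
    """
    camera, radar = 0.6, 0.4            # baseline split (illustrative)
    if rain_intensity > 0.7:            # heavy rain: cameras degraded
        camera, radar = 0.3, 0.7
    front_radar_share = 0.8 if on_highway else 0.5
    return {
        "camera": camera,
        "front_radar": radar * front_radar_share,
        "corner_radar": radar * (1 - front_radar_share),
    }

# Heavy rain on a highway: most of the radar budget goes to the front sensors.
print(allocate_compute(rain_intensity=0.9, on_highway=True))
```

The key design point is that the policy runs centrally on one SoC, so rebalancing is a software decision rather than a change to per-module edge processors.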
The CV3 marks the debut of the company’s next-generation CVflow architecture, with a neural vector processor and a general vector processor – both designed entirely by Ambarella and including radar-specific signal processing enhancements. The two processors work in tandem to run the Oculii advanced radar perception software at high performance, with speeds up to 100 times faster than conventional edge radar processors can achieve.
The new centralized architecture also benefits from simpler over-the-air (OTA) software updates for continuous improvement and future-proofing. Unlike edge radar modules’ processors, which must be updated individually (after determining the processor and OS used in each), a single OTA update applied to the CV3 SoC covers all of the system’s radar heads.
Because the radar heads contain no processor, they lower both materials costs and replacement costs if they are damaged in an accident. Ambarella added that many of the edge-processor radar modules used today are likely never to receive software updates because of the complexity of the software involved.
The company’s latest centralized radar architecture targets ADAS and Level 2+ to Level 5 autonomous vehicles, as well as autonomous mobile robots (AMRs) and automated guided vehicles (AGVs). Ambarella’s design includes a flexible software development environment that provides automotive and robotics designers with a software-upgradable platform for scaling performance from ADAS and L2+ through L5.
“There were ~100 million radar units manufactured in 2021 for automotive ADAS,” explained Cédric Malaquin, team lead analyst, RF activity, Yole Intelligence. “We expect this volume to grow 2.5-fold by 2027, given the more demanding regulations on safety and more advanced driving automation systems hitting the road. Indeed, from the current 1-3 radar sensors per car, OEMs will move to five radar sensors per car as a baseline.
“Besides, there is an exciting debate on the radar processing partitioning and many developments associated. One approach is centralized radar computing that will enable OEMs to offer significantly higher performance imaging radar systems and new ADAS/AD features while simultaneously optimizing the cost of radar sensing.”