HMI

Exclusive interview: How new technology is reshaping the cockpit, according to Qualcomm’s Thomas Dannemann


Qualcomm’s Thomas Dannemann, product marketing director, shares his views on the critical trends in automotive and how they will shape future mobility and onboard systems, including software updates that improve existing features and add new ones.

 

How is the automotive sector changing right now?

We are currently seeing four major trends transform automotive: connectivity – having a car that is always connected to the Internet of Things; digitalization – more displays and more digital content in the cockpit; electrification – a move away from combustion engines toward fully battery-electric vehicles; and the advent of autonomous driving – starting from partially autonomous, up to fully autonomous driving.

Every time you get into a new car, more of the mobile [consumer electronics] experience surrounds you, including multi-gigabit LTE and 5G connectivity, electronic clusters, high-resolution touchscreen displays, streaming media and high-definition video.

In the short term, the cockpit will continue to deliver information traditionally used by the driver: speed, charge or fuel remaining, engine temperature, oil pressure, engine check. In time, video from external cameras is expected to appear on the display, replacing external mirrors and resulting in lower wind resistance and better aerodynamics.

The main instrument cluster may be user-configurable so drivers can see the information they want, where they want to see it. Combining information from outside with real scenes of the car ride will create a new way of driving: augmented reality will enhance the driver’s view, with the feed from a front-facing camera overlaid with route guidance and surrounding information on a single screen, either combined in the cluster display or projected on a head-up display [HUD].

Then, with more progress toward autonomous driving, the car could become an extension of the driver’s digital life, in both business and leisure. Drivers and passengers may grow to expect the same user experiences [apps, media, assets, content, etc] in the car that they get on their other mobile devices. Displays in the digital cockpit are expected to feature the kinds of information and communication now associated with smartphones, tablets and PCs, as well as high-resolution displays to accommodate television content and movies.

How does the Snapdragon Ride platform enable OEMs to react to market trends?

We offer scalability from 4G LTE to 5G, and vehicle-to-everything [V2X] technology that enables car-to-car communication, as well as car-to-infrastructure and/or car-to-everything communication. This could enable a car to share information about a new construction site that it has just passed, for example, with other users who then adjust their driving accordingly. The result of all this intercommunication is a kind of cooperative perception and maneuvering in which vehicles accumulate information from their surroundings and share it with others, greatly increasing mobility and safety, in addition to ushering in a new world of cooperative automation.
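To illustrate the construction-site example, a cooperative hazard warning can be sketched as a small message that one vehicle broadcasts and another decodes. The field names below are purely illustrative, not an actual V2X wire format (real deployments use standardized messages such as ETSI DENM):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class HazardMessage:
    """Hypothetical hazard notification; fields are illustrative,
    not an actual V2X message format such as ETSI DENM."""
    hazard_type: str   # e.g. "construction_site"
    latitude: float
    longitude: float
    timestamp_ms: int

def encode(msg: HazardMessage) -> bytes:
    """Serialize the warning for broadcast to nearby road users."""
    return json.dumps(asdict(msg)).encode()

def decode(payload: bytes) -> HazardMessage:
    """Reconstruct the warning on a receiving vehicle."""
    return HazardMessage(**json.loads(payload.decode()))

# A car that has just passed a construction site broadcasts a warning...
sent = HazardMessage("construction_site", 48.137, 11.575, 1_620_000_000_000)
payload = encode(sent)
# ...and a following car decodes it and can adjust its driving accordingly.
received = decode(payload)
```

In a real V2X stack the payload would carry position accuracy, relevance area and expiry information as well, so receivers can decide whether the hazard still applies to them.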

We also have a system-on-a-chip [SoC] roadmap to deliver a truly digital cockpit, capable of driving both the central display and additional cluster displays from a single processor. It’s very flexible – it can power anything from a one-screen navigation system up to a super-high-end premium solution featuring 8-10 displays, with shared content and multiple operating systems all running on the same platform.

It also allows partitioning between automotive-centric cluster information – everything that is needed to get information and provide feedback from the car – and entertainment functions. For example, it can run Android and deliver multimedia content, including live video streaming, so passengers can participate in video conferences, etc.

As a completely open platform, Qualcomm’s Snapdragon Ride platform allows automotive manufacturers to apply their own software assets that reflect their brand values and differentiate the driving experience from competitors. For example, they can choose between a sporty ride or opt for handling more suitable for family life. It can also support over-the-air [OTA] software updates, to improve existing features and add new features. On the hardware side, it can work with any number and type of sensor and configuration.
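The OTA mechanism mentioned above boils down to a version comparison between what the car is running and what the manufacturer's update server advertises. A minimal sketch (the function names are hypothetical, not part of any Qualcomm API):

```python
# Minimal sketch of an OTA update check: compare the installed software
# version against the version advertised by a (hypothetical) update server.
def parse_version(version: str) -> tuple:
    """Turn '2.4.0' into (2, 4, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def needs_update(installed: str, available: str) -> bool:
    """True if the advertised version is newer than what is installed."""
    return parse_version(available) > parse_version(installed)

# A vehicle on 2.3.1 would download and stage the 2.4.0 image.
print(needs_update("2.3.1", "2.4.0"))  # prints True
```

Production OTA systems add signature verification, staged rollouts and a fallback image so a failed update cannot leave the vehicle inoperable.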

It’s also scalable, enabling customers to take their first steps into the autonomous driving world, but also capable of supporting L4 or L5 operation in the future.

What are the KPIs that have shaped the platform?

Power consumption is certainly one of the most important KPIs, as well as performance – if you want to run 3D-animated content on your cluster, as well as your central display, a very powerful and efficient GPU is required. You also need a very power-efficient AI accelerator to deliver natural voice and behavioral recognition, so the car can control comfort functions accordingly, as well as the road sign recognition and object classification needed for autonomous driving.

Snapdragon Ride, based on the Snapdragon family of automotive SoCs and accelerators, is built on scalable and modular heterogeneous high-performance multi-core CPUs, energy-efficient AI and computer vision [CV] engines, and an industry-leading GPU. It offers high thermal efficiency across the range, from 30 tera operations per second [TOPS] for L1/L2 applications to over 700 TOPS at 130W for L4/L5 driving. The resulting designs can be passively or air-cooled, which reduces cost and increases reliability, avoids the need for expensive liquid-cooled systems, enables simpler vehicle designs and extends the driving range of electric vehicles.
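The only power/performance pairing quoted above is 700+ TOPS at 130W; the per-watt efficiency that implies is a derived figure, not one stated in the interview:

```python
# Compute efficiency implied by the quoted figures: tera-operations per
# second divided by power draw. Only 700 TOPS at 130W is quoted; the
# TOPS-per-watt value below is derived from it, not quoted directly.
def tops_per_watt(tops: float, watts: float) -> float:
    return tops / watts

high_end = tops_per_watt(700, 130)
print(f"{high_end:.1f} TOPS/W")  # roughly 5.4 TOPS/W
```

It is this operations-per-watt figure, rather than peak TOPS alone, that determines whether a design can stay within a passively cooled or air-cooled thermal envelope.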

The Euro NCAP safety standard for new cars delivered from 2024 has also had an influence – in the future, manufacturers will have to provide certain safety features inside the car, including a driver monitoring system to ensure the driver is not distracted or drowsy. This can be integrated as part of the digital cockpit, using our platform.

Video conferencing is another feature we can support – and if your car is driving fully autonomously on the highway, this function could even be available to the ‘driver’.

What about ‘human-like’ driving? How important is this?

As part of Snapdragon Ride, Qualcomm provides behavioral planning software to replicate how a human drives a car. These ‘humanized driving algorithms’ will guide how the car is going to steer and maneuver in the future.

For example, as humans, when we want to change lanes, we typically signal our intention by moving toward the border of that lane, which alerts other drivers and warns them to be more cautious and keep their distance. If an autonomous vehicle performed a hard cut-in, suddenly veering across the lane without this initial signaling, it could prove very unnerving. So, we have to make sure that the car is telling the driver, via acoustic and visual prompts, that it is about to conduct a lane change or overtaking maneuver, for example.
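The signal-first behavior described above can be sketched as a small state machine that refuses to jump straight into the maneuver. The class, state and alert names here are illustrative, not Qualcomm's actual planner:

```python
from enum import Enum, auto

class LaneChangeState(Enum):
    LANE_KEEPING = auto()
    SIGNALING = auto()   # indicators and HMI prompts active
    EDGING = auto()      # drift toward the lane boundary
    CHANGING = auto()    # execute the lane change

class LaneChangePlanner:
    """Hypothetical sketch: signaling must precede the maneuver."""

    def __init__(self):
        self.state = LaneChangeState.LANE_KEEPING
        self.alerts = []

    def request_lane_change(self):
        # A lane change always starts by warning the driver and other
        # road users, never by moving the vehicle.
        if self.state is LaneChangeState.LANE_KEEPING:
            self.alerts.append("visual+acoustic: lane change ahead")
            self.state = LaneChangeState.SIGNALING

    def step(self):
        # Advance one phase per planning tick; there is no transition
        # from LANE_KEEPING straight to CHANGING.
        transitions = {
            LaneChangeState.SIGNALING: LaneChangeState.EDGING,
            LaneChangeState.EDGING: LaneChangeState.CHANGING,
            LaneChangeState.CHANGING: LaneChangeState.LANE_KEEPING,
        }
        self.state = transitions.get(self.state, self.state)
```

Because the only path to CHANGING runs through SIGNALING and EDGING, the vehicle structurally cannot perform the hard, unannounced cut-in the text warns against.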

This article was first published in the May 2021 issue of Autonomous Vehicle International magazine.


About Author


With over 20 years’ experience in editorial management and content creation for multiple market-leading titles at UKi Media & Events (publisher of Autonomous Vehicle International), Anthony has written articles and news covering everything from aircraft, airports and cars to cruise ships, trains, trucks and even tires!
