EXPO NEWS | Day 2: Synthetic data-generation platform to speed AI development


First-time exhibitor and startup Parallel Domain is at ADAS & Autonomous Vehicle Technology Expo to showcase its eponymous data-generation platform, which enables AI developers to generate synthetic data for training and testing perception models at a scale, speed and level of control that is impossible with data collected from the real world.

The company says the wide range of data that users can generate within the Parallel Domain platform better prepares their models for the unpredictability and variety of the physical world. The standard method of collecting and manually labeling data from the real world is prohibitively slow and costly, with developers often waiting weeks or months to obtain new data for improving their models. Even then, human labeling errors, class imbalances, edge cases and restrictions around privacy further hinder ML developers in getting their systems to market.

“We’re here to show how you can use synthetic data to train your ADAS models to perform better,” explained Alan Doucet (pictured above), head of sales at Parallel Domain. “With Parallel Domain, you’re limited only by your imagination. Any outdoor scenario can be crafted using our API – you can specify the location, dictate vehicle density, set the weather and time of day, sprinkle in debris and so much more. The API harnesses generative AI and 3D simulation to create complex scenes, corner cases, specific behaviors and rare objects – anything the customer can dream of.

“Take pedestrian detection…,” Doucet continued. “Most companies have really good models based on lots of real data collected on bright, sunny days. But what about at night or if it’s raining? Do you have enough images of a person crossing the street with a stroller that might be occluding the pedestrian’s legs? Are you confident that your model is going to see that and know that it is something it needs to detect and avoid, or is it going to get confused? We can generate hundreds of thousands of these scenes in just a couple of hours, using your specific sensor configuration for a camera, lidar or radar, to match whatever edge cases you need.”

Example use cases include jaywalking pedestrians in various weather conditions, emergency vehicles with sirens on in night-only scenes, and increased cyclist density to help improve cyclist detection.
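Parallel Domain's actual API is not detailed in this article, but the parameters Doucet lists (location, vehicle density, weather, time of day, debris) suggest a declarative scene specification that can be fanned out into large batches of variants. The Python sketch below is purely illustrative: every class, field and function name is hypothetical, not Parallel Domain's real interface.

```python
from dataclasses import dataclass, field

@dataclass
class SceneSpec:
    """Hypothetical scene specification mirroring the parameters
    described in the article: location, vehicle density, weather,
    time of day and optional debris objects."""
    location: str = "urban_intersection"
    vehicle_density: float = 0.5          # 0.0 (empty road) .. 1.0 (gridlock)
    weather: str = "clear"
    time_of_day: str = "day"
    debris: list = field(default_factory=list)

def batch_specs(base: SceneSpec, weathers, times):
    """Expand one base spec into a batch covering every
    weather x time-of-day combination, e.g. to target the
    rainy-night edge cases a sunny-day dataset lacks."""
    return [
        SceneSpec(base.location, base.vehicle_density, w, t, list(base.debris))
        for w in weathers
        for t in times
    ]

# One base scene (pedestrian with a stroller) expanded into four edge-case variants
specs = batch_specs(SceneSpec(debris=["stroller"]),
                    weathers=["rain", "fog"],
                    times=["night", "dusk"])
print(len(specs))  # 4 scene variants from a single base spec
```

The point of the sketch is the workflow the article describes: a developer states the scenario once and the platform multiplies it across conditions, rather than waiting weeks for matching real-world footage.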

Speaking on the first morning of the show, Doucet said the company had already had a good response: “We’ve already had some people stop by from Porsche, so it’s been a great start!”

Find out more at Booth 6625.


About Author


With over 20 years’ experience in editorial management and content creation for multiple market-leading titles at UKi Media & Events (publisher of Autonomous Vehicle International), Anthony has written articles and news covering everything from aircraft, airports and cars to cruise ships, trains, trucks and even tires!
