Oxbotica is developing ‘deepfake’ technology to accelerate virtual training of its modular autonomous driving platform, producing near-infinite variations of surrounding objects, lighting and weather conditions in a fraction of the time that real-world data collection would require.
Best known for viral videos of face-swapped celebrities, deepfake technology uses deep-learning artificial intelligence to produce manipulated photorealistic images. Oxbotica said this could be used to generate thousands of variations of a scene within minutes, including reversing road signage, turning trees into buildings and replicating adverse weather or poor illumination. Details added include variations in shadows based on lighting conditions and rendered raindrops on lenses.
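The kind of scene variation described above can be sketched in a few lines of NumPy. The transforms and parameters below (left-right mirroring to reverse the scene, brightness scaling for illumination changes, sparse occlusions standing in for raindrops on the lens) are illustrative assumptions, not Oxbotica's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_variants(image, n, rng):
    """Produce n randomized variants of a scene: mirrored layout,
    altered illumination, and sparse synthetic lens occlusions."""
    variants = []
    for _ in range(n):
        v = image.copy()
        if rng.random() < 0.5:
            v = v[:, ::-1]                                 # mirror the scene left-right
        v = np.clip(v * rng.uniform(0.4, 1.3), 0.0, 1.0)   # darken or brighten
        drops = rng.random(v.shape) < 0.02                 # 2% of pixels occluded
        v = np.where(drops, 1.0, v)                        # "raindrops" saturate the lens
        variants.append(v)
    return np.stack(variants)

base = rng.random((64, 64))              # stand-in for a grayscale camera frame
batch = make_variants(base, 100, rng)    # 100 randomized variants of one scene
```

A production system would apply learned, photorealistic edits rather than these toy transforms, but the principle is the same: one captured frame fans out into many labelled training scenes.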
Paul Newman, the company’s co-founder and CTO, explained more about the technology: “Using deepfakes is an incredible opportunity for us to increase the speed and efficiency of safely bringing autonomy to any vehicle in any environment – a central focus of our Universal Autonomy vision. We are training our AI to produce a syllabus for other AIs to learn from. It’s the equivalent of giving someone a fishing rod rather than a fish and offers remarkable scaling opportunities.
“There is no substitute for real-world testing, but the AV industry has become concerned with the number of miles traveled as a synonym for safety. And yet, you cannot guarantee the vehicle will confront every eventuality; you’re relying on chance encounters. The use of deepfakes enables us to test countless scenarios, which will not only enable us to scale our real-world testing exponentially, but will also be safer.”
Development is ongoing; the company is using two co-evolving artificial intelligence modules to ensure the generated scenarios are indistinguishable from original images. One learns to create ever more convincing fakes, while the other learns to detect which images are real and which are generated. Once the detection mechanism can no longer differentiate between the two, the software will be ready to offer at-scale teaching of AI for autonomous vehicles, Oxbotica said.
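The co-evolving setup described above resembles a generative adversarial network, and its readiness criterion — stop when the detector can no longer beat chance — can be sketched on toy data. Everything here (nearest-centroid detector, Gaussian feature vectors standing in for image statistics) is an illustrative assumption, not Oxbotica's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_accuracy(real, fake, rng):
    """Train a nearest-centroid detector on half the data and report
    its held-out accuracy at telling real samples from fakes."""
    X = np.vstack([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    idx = rng.permutation(len(X))
    X, y = X[idx], y[idx]
    half = len(X) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]
    c_real = Xtr[ytr == 1].mean(axis=0)          # centroid of real training data
    c_fake = Xtr[ytr == 0].mean(axis=0)          # centroid of fake training data
    pred = (np.linalg.norm(Xte - c_real, axis=1)
            < np.linalg.norm(Xte - c_fake, axis=1)).astype(float)
    return (pred == yte).mean()

real  = rng.normal(0.0, 1.0, size=(2000, 8))   # stand-in "real" image features
crude = rng.normal(0.6, 1.0, size=(2000, 8))   # early fakes with a detectable bias
good  = rng.normal(0.0, 1.0, size=(2000, 8))   # fakes matching the real statistics

acc_crude = detector_accuracy(real, crude, rng)  # well above 0.5: fakes rejected
acc_good  = detector_accuracy(real, good, rng)   # near 0.5: fakes pass the test
```

When the detector's accuracy falls to chance level, the generator's output is, by this measure, indistinguishable from the real thing — the condition Oxbotica sets for deploying the synthetic data at scale.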
Yuan Zhang, associate director of KPMG’s Mobility 2030 strategy, believes improvements to the simulation process will be a particularly important asset for system development: “Human-annotated training data, such as driving videos, can be expensive and takes a long time to collect, so a lot of artificial intelligence research over the past few decades has focused on increasingly unsupervised learning systems, which can ultimately be cheaper to implement and possibly more effective, given the larger volume of training data.”
Oxbotica is based in Oxford; its Universal Autonomy platform underpins modular, low-compute-power autonomous driving solutions that can be tailored for on- and off-road use. Fully cloud-managed and provided with installation and operational tools, they require no external infrastructure or third-party maps, and can operate without GPS.
The company is using virtual environments developed with input from employees who previously worked in gaming, including on flight simulators and racing titles. On-road trials of the software stack are also underway in London and Oxford, using modified Ford Mondeo Hybrids.