ADAS & Autonomous Vehicle International
Simulation

MIT researchers release open-source photorealistic simulator for autonomous driving

By Anthony James | June 24, 2022
VISTA 2.0 is an open-source simulation engine that can make realistic environments for training and testing self-driving cars. (Image courtesy of MIT CSAIL)
MIT scientists have unveiled the first open-source simulation engine capable of constructing realistic environments for deployable training and testing of autonomous vehicles.
MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has created ‘VISTA 2.0’, a data-driven simulation engine where vehicles can learn to drive in the real world and recover from near-crash scenarios. What’s more, all of the code is being open-sourced to the public.  

“Today, only companies have software like the type of simulation environments and capabilities of VISTA 2.0, and this software is proprietary,” explained MIT Professor and CSAIL director Daniela Rus, senior author on a paper about the research. “With this release, the research community will have access to a powerful new tool for accelerating the research and development of adaptive robust control for autonomous driving.”

Building on the team's previous model, VISTA, the new engine is, MIT claims, fundamentally different from existing AV simulators because it is data-driven: it was built and photorealistically rendered from real-world data, enabling direct transfer to reality. While the initial iteration supported only single-car lane-following with one camera sensor, achieving high-fidelity data-driven simulation required rethinking the foundations of how different sensors and behavioral interactions can be synthesized.

Enter VISTA 2.0: a data-driven system that can simulate complex sensor types and massively interactive scenarios and intersections at scale. With much less data than previous models, the team was able to train autonomous vehicles that could be substantially more robust than those trained on large amounts of real-world data.

“This is a massive jump in capabilities of data-driven simulation for autonomous vehicles, as well as the increase of scale and ability to handle greater driving complexity,” said Alexander Amini, CSAIL PhD student and co-lead author on two new papers, together with fellow PhD student Tsun-Hsuan Wang. “VISTA 2.0 demonstrates the ability to simulate sensor data far beyond 2D RGB cameras, but also extremely high dimensional 3D lidars with millions of points, irregularly timed event-based cameras, and even interactive and dynamic scenarios with other vehicles as well.”

The team was able to scale the complexity of the interactive driving tasks for things like overtaking, following and negotiating, including multi-agent scenarios in highly photorealistic environments.

Recently, there has been a shift away from classic, human-designed simulation environments toward those built up from real-world data. The latter have immense photorealism, while the former can easily model virtual cameras and lidars. With this paradigm shift, a key question has emerged: can the richness and complexity of all of the sensors that autonomous vehicles need, such as lidar and event-based cameras, which produce far sparser data, be accurately synthesized?

Lidar sensor data is much harder to interpret in a data-driven world – you’re effectively trying to generate brand-new 3D point clouds with millions of points, only from sparse views of the world. To synthesize 3D lidar point clouds, the team used the data that the car collected, projected it into a 3D space coming from the lidar data, and then let a new virtual vehicle drive around locally from where that original vehicle was. Finally, they projected all of that sensory information back into the frame of view of this new virtual vehicle, with the help of neural networks.
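
At its core, that re-projection step, expressing lidar returns collected by the logged vehicle in the frame of a new virtual vehicle placed elsewhere in the same scene, reduces to a rigid-body transform. The sketch below illustrates only that geometric piece; the function names and poses are assumptions for illustration, not VISTA 2.0 code, and it omits the neural-network stage the researchers use to handle occlusion and fill in sparse regions.

```python
import numpy as np

def make_pose(x, y, yaw):
    """4x4 homogeneous world-from-vehicle transform for a planar pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:2, 3] = [x, y]
    return T

def reproject_lidar(points_logged, pose_logged, pose_virtual):
    """Re-express lidar points recorded by the logged vehicle in the frame
    of a new virtual vehicle driving through the same scene.

    points_logged: (N, 3) x, y, z returns in the logged vehicle's frame.
    pose_logged, pose_virtual: 4x4 world-from-vehicle transforms.
    """
    # Lift the points to homogeneous coordinates.
    pts_h = np.hstack([points_logged, np.ones((len(points_logged), 1))])
    # Logged vehicle frame -> world frame -> virtual vehicle frame.
    virtual_from_logged = np.linalg.inv(pose_virtual) @ pose_logged
    return (virtual_from_logged @ pts_h.T).T[:, :3]

# Example: place the virtual car 2m to the left of the logged trajectory.
cloud = np.random.rand(1000, 3) * 50.0                  # placeholder returns
new_view = reproject_lidar(cloud,
                           pose_logged=make_pose(0.0, 0.0, 0.0),
                           pose_virtual=make_pose(0.0, 2.0, 0.0))
```

In the full system, the transformed cloud is then completed and rendered into the virtual vehicle's sensor view with the help of neural networks, as described above.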

Together with simulated event-based cameras, which operate at rates exceeding thousands of events per second, the simulator was capable not only of simulating this multimodal information but of doing so in real time. That makes it possible to train neural nets offline and also to test them online on the car in augmented-reality setups for safe evaluation. “The question of if multisensor simulation at this scale of complexity and photorealism was possible in the realm of data-driven simulation was very much an open question,” continued Amini.

VISTA 2.0 allows the vehicle to move around, use different types of controllers, simulate different types of events, create interactive scenarios, and drop in brand-new vehicles that weren’t even in the original data. It can test lane following, turning, car following and riskier scenarios such as static and dynamic overtaking (seeing obstacles and moving around them to avoid a collision). With multi-agent support, both real and simulated agents interact, and new agents can be dropped into the scene and controlled in any way, according to the team at MIT.
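
To make that concrete at a toy level, the self-contained sketch below steps a virtual ego vehicle (a simple kinematic bicycle model) past a slower, scripted vehicle dropped into the same scene, the kind of dynamic-overtaking, multi-agent scenario the team describes. The classes, control logic and numbers are illustrative assumptions, not the VISTA 2.0 API; consult the open-source release for the actual interface.

```python
import numpy as np

class BicycleAgent:
    """Toy kinematic bicycle model standing in for a simulated vehicle."""
    def __init__(self, x, y, yaw, speed, wheel_base=2.8):
        self.x, self.y, self.yaw, self.speed = x, y, yaw, speed
        self.wheel_base = wheel_base

    def step(self, steering, accel, dt=0.1):
        # In a data-driven simulator, sensor views would be re-rendered
        # from the updated pose after every step.
        self.x += self.speed * np.cos(self.yaw) * dt
        self.y += self.speed * np.sin(self.yaw) * dt
        self.yaw += self.speed / self.wheel_base * np.tan(steering) * dt
        self.speed += accel * dt

ego = BicycleAgent(x=0.0, y=0.0, yaw=0.0, speed=10.0)   # learning vehicle
npc = BicycleAgent(x=20.0, y=0.0, yaw=0.0, speed=8.0)   # dropped-in, slower lead car

for t in range(100):
    gap = npc.x - ego.x
    # Toy overtaking logic: steer left while closing on the lead car,
    # otherwise steer gently back toward the original lane centre (y = 0).
    steering = 0.05 if 0.0 < gap < 15.0 and ego.y < 3.5 else -0.02 * ego.y
    ego.step(steering=steering, accel=0.2)
    npc.step(steering=0.0, accel=0.0)                    # scripted lane-keeping
```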

Taking their full-scale car out into the ‘wild’ – Devens, Massachusetts, to be precise – the team saw immediate transferability of results, with both failures and successes. “The central algorithm of this research is how we can take a data set and build a completely synthetic world for learning and autonomy,” concluded Amini. “It’s a platform that I believe one day could extend in many different axes across robotics. Not just autonomous driving, but many areas that rely on vision and complex behaviors. We’re excited to release VISTA 2.0 to help enable the community to collect their own data sets and convert them into virtual worlds where they can directly simulate their own virtual autonomous vehicles, drive around these virtual terrains, train autonomous vehicles in these worlds, and then can directly transfer them to full-sized, real self-driving cars.”

Amini and Wang wrote the paper alongside Zhijian Liu, MIT CSAIL PhD student; Igor Gilitschenski, assistant professor in computer science at the University of Toronto; Wilko Schwarting, AI research scientist and MIT CSAIL PhD ’20; Song Han, associate professor at MIT’s Department of Electrical Engineering and Computer Science; Sertac Karaman, associate professor of aeronautics and astronautics at MIT; and Daniela Rus, MIT professor and CSAIL director. The researchers presented the work at the IEEE International Conference on Robotics and Automation (ICRA) in Philadelphia, Pennsylvania.

This work was supported by the National Science Foundation and Toyota Research Institute. The team acknowledges the support of Nvidia with the donation of the Drive AGX Pegasus.
