Open-Source Photorealistic Simulation Engine for Self-Driving AI Training Released

The open-source VISTA 2.0 allows autonomous vehicles to train for complex situations like overtaking, following, negotiating, and multiagent scenarios


VISTA 2.0 is built with real-world data while still being photorealistic.

Highlights
  • Self-driving AI can be trained using the VISTA 2.0 simulation engine
  • VISTA 2.0 was created by MIT's CSAIL
  • The open-source software is available to all researchers

A group of researchers has developed a photorealistic simulator capable of creating highly realistic environments for training autonomous vehicles. Scientists at the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) have released the VISTA 2.0 engine as open source, allowing other researchers to teach their autonomous vehicles to drive in real-world scenarios without the limitations of a real-world dataset.

The simulation engine developed by the researchers at CSAIL, known as VISTA 2.0, is not the first hyper-realistic driving simulation trainer for AI. “Today, only companies have software like the type of simulation environments and capabilities of VISTA 2.0, and this software is proprietary,” said Daniela Rus, MIT professor and CSAIL director.

“We're excited to release VISTA 2.0 to help enable the community to collect their own datasets and convert them into virtual worlds…” said Alexander Amini, a CSAIL PhD student.

Rus added that with the release of VISTA 2.0, other researchers will finally have access to a powerful new tool for the research and development of autonomous vehicles. Unlike other similar models, VISTA 2.0 has a distinctive advantage: it is built with real-world data while still being photorealistic.

The team of scientists built on the foundations of their previous engine, VISTA, to synthesize photorealistic simulations from the real-world data available to them. This let them retain the benefits of real data points while also creating photorealistic simulations for more complex training.

The engine also helps autonomous-vehicle AI train in a variety of complex situations such as overtaking, following, negotiating, and multiagent scenarios, all in a photorealistic environment and in real time. The work showed immediate results: AVs trained using VISTA 2.0 were far more robust than those trained on previous models that used only real-world data.
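To illustrate the idea, here is a minimal Python sketch of the kind of closed-loop, data-driven training VISTA 2.0 enables: a simulator replays a recorded real-world drive, re-renders the scene from wherever the virtual vehicle steers, and a driving policy learns from the synthesized views. The `Simulator` class, the trace path, and the random stand-in policy below are hypothetical illustrations, not the actual VISTA 2.0 API.

```python
# Conceptual sketch of closed-loop training in a VISTA-style,
# data-driven simulator. All names here are illustrative stand-ins.

import random


class Simulator:
    """Replays a log of real-world driving data and re-renders the
    scene from whatever viewpoint the learning agent steers into."""

    def __init__(self, trace_path):
        self.trace_path = trace_path  # recorded camera/sensor log (hypothetical path)
        self.step_count = 0

    def reset(self):
        """Start a new episode at the beginning of the recorded trace."""
        self.step_count = 0
        return self._render()

    def step(self, steering):
        """Advance the virtual ego-vehicle one step, then synthesize the
        observation it would see from its new pose."""
        self.step_count += 1
        observation = self._render()
        off_road = abs(steering) > 0.9  # toy "drove off the road" condition
        return observation, off_road

    def _render(self):
        # Placeholder for novel-view synthesis from the recorded trace;
        # a real engine would return a photorealistic camera frame.
        return [random.random() for _ in range(8)]


def train(episodes=3, max_steps=100):
    sim = Simulator("path/to/recorded_trace")
    for ep in range(episodes):
        sim.reset()
        done = False
        while not done and sim.step_count < max_steps:
            steering = random.uniform(-1.0, 1.0)  # stand-in for a policy network
            _, done = sim.step(steering)
        print(f"episode {ep}: lasted {sim.step_count} steps")


if __name__ == "__main__":
    train()
```

The key design idea the sketch mirrors is that the agent is not limited to the exact path the human driver took: because observations are re-synthesized at every step, the policy can deviate, make mistakes, and recover, which is where the reported robustness gains come from.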

