New physics sim trains robots 430,000 times faster than reality

On Thursday, a large group of university and private industry researchers unveiled Genesis, a new open source computer simulation system that lets robots practice tasks in simulated reality 430,000 times faster than in the real world. Researchers can also use an AI agent to generate 3D physics simulations from text prompts.

The accelerated simulation means a neural network for piloting robots can spend the virtual equivalent of decades learning to pick up objects, walk, or manipulate tools during just hours of real computer time.
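For a rough sense of scale, here is a back-of-the-envelope conversion using only the 430,000x figure cited above (not a benchmark from the paper):

```python
# Rough conversion of the claimed 430,000x speedup into simulated experience.
# The speedup figure comes from the Genesis announcement; the rest is simple arithmetic.
SPEEDUP = 430_000                    # simulated seconds per real-world second
real_hours = 1                       # one hour of wall-clock compute
sim_hours = real_hours * SPEEDUP     # hours of simulated experience
sim_years = sim_hours / (24 * 365)   # convert to years
print(f"{real_hours} real hour ≈ {sim_years:.0f} simulated years")  # ≈ 49 years
```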

"One hour of compute time gives a robot 10 years of training experience. That's how Neo was able to learn martial arts in a blink of an eye in the Matrix Dojo," wrote Genesis paper co-author Jim Fan on X, who says he played a "minor part" in the research. Fan has previously worked on several robotics simulation projects for Nvidia.

Genesis arrives as robotics researchers hunt for better tools to test and train robots in virtual environments before deploying them in the real world. Fast, accurate simulation helps robots learn complex tasks more quickly while reducing the need for expensive physical testing.

The Genesis platform, developed by a group led by Zhou Xian of Carnegie Mellon University, processes physics calculations up to 80 times faster than existing robot simulators (like Nvidia's Isaac Gym). It uses graphics cards similar to those that power video games to run up to 100,000 copies of a simulation at once. That's important when it comes to training the neural networks that will control future real-world robots.
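As an illustration of what that batching looks like in practice, here is a minimal sketch based on the Genesis project's published Python quickstart as we understand it; the exact function names and arguments may differ between versions, so treat it as indicative rather than authoritative.

```python
import genesis as gs

# Initialize Genesis on the GPU backend (the library also supports a CPU backend).
gs.init(backend=gs.gpu)

# Build a simple scene: a ground plane plus a robot arm loaded from an MJCF description.
scene = gs.Scene(show_viewer=False)
scene.add_entity(gs.morphs.Plane())
robot = scene.add_entity(gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml"))

# Batch thousands of copies of the environment so the GPU steps them all in lockstep.
# This massive parallelism is what lets a policy collect years of experience in hours.
scene.build(n_envs=4096)

for _ in range(1_000):
    scene.step()
```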

"If an AI can control 1,000 robots to perform 1 million skills in 1 billion different simulations, then it may 'just work' in our real world, which is simply another point in the vast space of possible realities," wrote Fan in his X post. "This is the fundamental principle behind why simulation works so effectively for robotics."

The team also announced the ability to generate what it calls "4D dynamic worlds" -- perhaps using "4D" because the system can simulate a 3D world in motion over time. It uses vision-language models (VLMs) to generate complete virtual environments from text descriptions (similar to "prompts" in other AI models), utilizing Genesis's own simulation infrastructure APIs to create the worlds.
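The announcement does not detail how that generative pipeline is invoked, so the following is purely a hypothetical sketch of the flow it describes (text prompt, then a VLM scene plan, then simulator calls). None of the names below are real Genesis APIs; they are stand-ins for illustration only.

```python
from dataclasses import dataclass

# Hypothetical sketch of the prompt-to-world flow described above.
# The class and function names are invented for illustration.

@dataclass
class SceneSpec:
    objects: list[str]
    motion: str

def plan_scene(prompt: str) -> SceneSpec:
    """Stand-in for the vision-language model that turns a text prompt into a scene plan."""
    # A real system would query a VLM here; this stub returns a fixed plan.
    return SceneSpec(objects=["table", "glass", "robot arm"],
                     motion="arm pours water into the glass")

def build_world(spec: SceneSpec) -> None:
    """Stand-in for the simulator calls that instantiate and animate the planned scene."""
    print("placing:", ", ".join(spec.objects))
    print("animating:", spec.motion)  # the "4D" part: the scene evolves over time

build_world(plan_scene("a robot arm pouring water into a glass"))
```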

