
Meta’s new world model lets robots manipulate objects in environments they’ve never encountered before


A robot powered by V-JEPA 2 can be deployed in a new environment and successfully manipulate objects it has never encountered before.

V-JEPA is composed of an encoder and a predictor (source: Meta blog)

This architecture is the latest evolution of the JEPA framework, which was first applied to images with I-JEPA and now advances to video, demonstrating a consistent approach to building world models. Because the model learns general physics from public video and needs only a few dozen hours of task-specific footage, enterprises can slash the data-collection cycle that typically drags down pilot projects. In practical terms, you can prototype a pick-and-place robot on an affordable desktop arm, then roll the same policy onto an industrial rig on the factory floor without gathering thousands of fresh samples or writing custom motion scripts.
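
Meta has not published the deployment loop as API code, but the encoder-plus-predictor description above maps onto a standard world-model planning pattern: embed the current scene and a goal image, roll candidate action sequences forward in latent space with the predictor, and execute the first action of the sequence whose predicted outcome lands closest to the goal. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; all class names, dimensions, and the random-sampling planner are assumptions for clarity, not V-JEPA 2's released interface.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only; the real V-JEPA 2 encoder is a large video transformer.
EMB_DIM, ACT_DIM, HORIZON, CANDIDATES = 256, 7, 8, 64

class Encoder(nn.Module):
    """Stand-in encoder: maps an observation (image) to a latent embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(EMB_DIM))

    def forward(self, obs):
        return self.net(obs)

class Predictor(nn.Module):
    """Stand-in predictor: given the current latent and an action,
    predicts the latent embedding of the next observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM + ACT_DIM, 512), nn.GELU(), nn.Linear(512, EMB_DIM)
        )

    def forward(self, z, action):
        return self.net(torch.cat([z, action], dim=-1))

def plan(encoder, predictor, obs, goal_obs):
    """Return the first action of the candidate sequence whose predicted
    final latent is closest to the embedding of the goal image."""
    with torch.no_grad():
        z0 = encoder(obs)                    # current scene embedding
        z_goal = encoder(goal_obs)           # goal image embedding
        # Sample random action sequences; a real planner would refine these
        # iteratively (e.g. with the cross-entropy method).
        actions = torch.randn(CANDIDATES, HORIZON, ACT_DIM)
        z = z0.expand(CANDIDATES, -1)
        for t in range(HORIZON):
            z = predictor(z, actions[:, t])  # roll the world model forward
        cost = (z - z_goal).norm(dim=-1)     # distance to goal in latent space
        best = cost.argmin()
    return actions[best, 0]                  # execute only the first action

# Usage: one RGB frame for the current scene and one for the desired end state.
obs = torch.randn(1, 3, 64, 64)
goal = torch.randn(1, 3, 64, 64)
action = plan(Encoder(), Predictor(), obs, goal)
print(action.shape)  # torch.Size([7]) -- one arm/gripper command
```

Because planning happens entirely in latent space, swapping the desktop arm for an industrial rig only requires re-mapping the action vector to the new hardware; the pretrained encoder and predictor stay fixed, which is what makes the low task-specific data budget plausible.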



