Meta’s new world model lets robots manipulate objects in environments they’ve never encountered before
A robot powered by V-JEPA 2 can be deployed in a new environment and successfully manipulate objects it has never encountered before.
[Figure: V-JEPA is composed of an encoder and a predictor (source: Meta blog)]

This architecture is the latest evolution of the JEPA framework, which was first applied to images with I-JEPA and now advances to video, demonstrating a consistent approach to building world models. Because the model learns general physics from public video and needs only a few dozen hours of task-specific footage, enterprises can slash the data-collection cycle that typically drags down pilot projects. In practical terms, you can prototype a pick-and-place robot on an affordable desktop arm, then roll the same policy onto an industrial rig on the factory floor without gathering thousands of fresh samples or writing custom motion scripts.
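To make the encoder–predictor split concrete, here is a minimal PyTorch sketch of a JEPA-style training step: the encoder embeds the current video frames, the predictor forecasts the embedding of future frames, and the loss is computed in representation space rather than pixel space. Every name, dimension, and network here is an illustrative assumption for this sketch, not Meta's actual V-JEPA 2 implementation, which uses large vision transformers trained on video at scale.

```python
import torch
import torch.nn as nn

# Illustrative JEPA-style sketch; not Meta's V-JEPA 2 code.
# The encoder maps video features to a latent embedding; the predictor
# forecasts the embedding of a *future* observation. The loss lives in
# latent space rather than pixel space -- the core idea of JEPA models.

FRAME_DIM = 1024   # assumed: flattened frame features (real models use ViTs)
EMBED_DIM = 256    # assumed embedding width, chosen arbitrarily for the sketch

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FRAME_DIM, EMBED_DIM), nn.GELU(),
            nn.Linear(EMBED_DIM, EMBED_DIM),
        )
    def forward(self, x):          # x: (batch, FRAME_DIM)
        return self.net(x)

class Predictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMBED_DIM, EMBED_DIM), nn.GELU(),
            nn.Linear(EMBED_DIM, EMBED_DIM),
        )
    def forward(self, z):          # z: current-state embedding
        return self.net(z)         # predicted future-state embedding

encoder, predictor = Encoder(), Predictor()
target_encoder = Encoder()                        # frozen copy used for targets
target_encoder.load_state_dict(encoder.state_dict())
opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def training_step(current_frames, future_frames):
    """One self-supervised step on raw, unlabeled video."""
    with torch.no_grad():
        target = target_encoder(future_frames)    # future state in latent space
    pred = predictor(encoder(current_frames))
    loss = nn.functional.mse_loss(pred, target)   # latent-space loss, not pixels
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage: random tensors stand in for real video features.
loss = training_step(torch.randn(8, FRAME_DIM), torch.randn(8, FRAME_DIM))
```

Because the loss is computed on embeddings rather than reconstructed pixels, the model can learn from large amounts of unlabeled public video, which is what lets the task-specific data requirement shrink to the few dozen hours the article describes.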