From reality to fantasy: Live2Diff AI brings instant video stylization to life
Live2Diff, developed by an international team from Shanghai AI Lab, Max Planck Institute, and Nanyang Technological University, pioneers uni-directional attention in video diffusion models, enabling near real-time stylization of live video streams.
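The key idea behind uni-directional attention is that each video frame attends only to itself and earlier frames, never to future ones, which is what makes processing a live stream possible. The sketch below is a minimal, generic illustration of causal (uni-directional) attention masking, not the paper's actual implementation; all function and variable names here are invented for illustration.

```python
import numpy as np

def causal_attention(q, k, v):
    """Attention where each frame attends only to itself and earlier
    frames (a causal mask), so no future frames are required.
    Illustrative sketch only, not Live2Diff's implementation."""
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)                      # (t, t) frame-to-frame scores
    future = np.triu(np.ones((t, t), dtype=bool), k=1)  # entries above the diagonal
    scores[future] = -np.inf                            # mask out future frames
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ v

# Toy frame features: 4 frames, 8 feature dimensions.
rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((4, 8))
out = causal_attention(q, k, v)
```

Because the first frame can attend only to itself, its output is exactly its own value vector; later frames mix in information from all preceding frames, which is where the temporal consistency comes from.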
Live2Diff in action: a sequence showing the AI system's real-time transformation capabilities, from an original portrait (left) to stylized variations including anime-inspired, angular artistic, and pixelated renderings. (Video Credit: Live2Diff)

Dr. Kai Chen, the project's corresponding author from Shanghai AI Lab, explains in the paper: "Our approach ensures temporal consistency and smoothness without any future frames. For content creators and influencers, it offers a new tool for creative expression, allowing them to present unique, stylized versions of themselves during live streams or video calls."
Or read this on VentureBeat