Microsoft’s latest model raises the bar in AI video with trajectory-based generation
While AI enthusiasts are excited about the development, with many calling it a big leap in creative AI, it remains to be seen how Microsoft DragNUWA performs in the real world.
DragNUWA allows the user to explicitly define the desired text, image and trajectory in the input to control aspects of the output video, such as camera movements (including zoom-in or zoom-out effects) or object motion. In the early 1.5 version of DragNUWA, which has just been released on Hugging Face, Microsoft has tapped Stability AI’s Stable Video Diffusion model to animate an image, or an object within it, along a specified path. Just recently, Pika Labs made headlines by opening access to its text-to-video interface, which works much like ChatGPT and produces high-quality short videos with a range of customizations on offer.
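To make the trajectory input concrete, here is a minimal sketch of one way a user-drawn drag path could be prepared for a model like DragNUWA. This is a hypothetical illustration, not DragNUWA's actual API: it assumes the path arrives as a few sparse control points and must be resampled into one (x, y) position per output frame, which the model could then use to steer object or camera motion.

```python
import math

def resample_trajectory(points, num_frames):
    """Linearly interpolate a sparse drag path (list of (x, y) control
    points) into exactly one point per output video frame.

    Hypothetical preprocessing step; DragNUWA's real input format may differ.
    """
    if num_frames < 2 or len(points) < 2:
        raise ValueError("need at least 2 frames and 2 control points")

    # Cumulative arc length along the path, so sampling is uniform in
    # distance travelled rather than in control-point index.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]

    frames = []
    for i in range(num_frames):
        target = total * i / (num_frames - 1)
        # Find the segment that contains the target arc-length position.
        j = 1
        while j < len(dists) - 1 and dists[j] < target:
            j += 1
        seg = dists[j] - dists[j - 1] or 1.0
        t = (target - dists[j - 1]) / seg
        x = points[j - 1][0] + t * (points[j][0] - points[j - 1][0])
        y = points[j - 1][1] + t * (points[j][1] - points[j - 1][1])
        frames.append((round(x, 2), round(y, 2)))
    return frames

# An L-shaped drag path resampled to 5 frames: the object moves right,
# then up, at constant speed.
path = resample_trajectory([(0, 0), (10, 0), (10, 10)], 5)
```

The resulting per-frame coordinates could accompany the text prompt and the source image as the third conditioning signal the article describes.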
Or read this on Venture Beat