News

Alibaba’s Wan2.7-Video AI lets you make clips with prompts

Alibaba has rolled out Wan2.7-Video, a new AI model that gives creators finer control over video production. The release follows closely on Wan2.7-Image, underscoring Alibaba’s rapid push into multimedia AI.

Wan2.7-Video comes with four models: Text-to-Video, Image-to-Video, Reference-to-Video, and Video Editing. It works with text, images, video, and audio, letting users generate, edit, reshape, and extend clips. Videos can run from 2 to 15 seconds in 720p or 1080p, with enterprise APIs available for batch use.
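For illustration only, a batch request to such an enterprise API might be assembled like the sketch below. The endpoint schema, model identifier, and field names here are assumptions for the sake of example, not Alibaba Cloud’s documented interface; only the resolution and clip-length limits come from the announcement.

```python
import json

def build_batch_request(prompts, resolution="1080p", duration_s=10):
    """Build one JSON request body covering several text-to-video clips.

    The field names are illustrative assumptions, not Alibaba Cloud's
    documented Model Studio schema. The validation ranges reflect the
    announced limits: 720p/1080p output, clips of 2 to 15 seconds.
    """
    if resolution not in ("720p", "1080p"):
        raise ValueError("Wan2.7-Video outputs 720p or 1080p")
    if not 2 <= duration_s <= 15:
        raise ValueError("clip length must be 2-15 seconds")
    return json.dumps({
        "model": "wan2.7-video",      # assumed identifier
        "task": "text-to-video",
        "resolution": resolution,
        "duration_seconds": duration_s,
        "inputs": [{"prompt": p} for p in prompts],
    })

body = build_batch_request(["An FPV drone dive over a neon city at dusk"])
```

A production integration would send this body to the Model Studio endpoint with authentication; consult Alibaba Cloud’s own API reference for the actual schema.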

Editing is handled through simple text prompts. You can tweak actions, dialogue, appearance, scenes, and styles. The system syncs lip movements, keeps vocal signatures intact, and maintains consistent lighting. It also supports up to five characters with customizable voices and identities.

A built-in storytelling engine turns prompts into full storyboards. It can handle cinematic shots like FPV drone dives, 360-degree pans, and context-aware lighting. A continuation feature ensures smooth transitions between frames.

Wan2.7-Image, released earlier, focuses on personalization, color accuracy, and text rendering in 12 languages. It also supports batch workflows and pixel-level editing.


Both models are available through Alibaba Cloud’s Model Studio, the Wan website, and the Qwen App.

Bryan Rilloraza has been a fixture in the local tech scene for over a decade, sharing his perspective as a tech enthusiast and industry veteran. Backed by an MBA from De La Salle University, a Bachelor’s Degree from the University of the Philippines, and 20 years of corporate experience in the telecommunications and banking sectors, Bryan provides a practical, real-world analysis of how technology serves the consumer.
