ByteDance Unveils Seedance 2.0, Multimodal AI Video Generator

The Verge

Key Points

  • ByteDance launched Seedance 2.0, a multimodal AI model for video generation.
  • The model accepts combined prompts of text, up to nine images, three video clips, and three audio clips.
  • It can produce 15‑second videos with audio, handling camera movement and visual effects.
  • Demonstrations include synchronized figure‑skating routines and celebrity‑lookalike fight scenes.
  • Seedance 2.0 is currently available via the Dreamina AI platform and the Doubao assistant.
  • No confirmed plans to integrate the tool into TikTok at this time.

ByteDance announced Seedance 2.0, a next‑generation AI model that can create short video clips from combined text, image, audio, and video prompts. The system supports up to nine images, three video clips, and three audio clips per request and can produce 15‑second videos that respect camera movement, visual effects, and physical laws. Demonstrations include synchronized figure‑skating routines, anime‑style scenes, and celebrity‑lookalike cinematic fights. Seedance 2.0 is currently available through ByteDance’s Dreamina AI platform and the Doubao assistant, with no clear plan for TikTok integration.

ByteDance Introduces Seedance 2.0

ByteDance, the company behind TikTok, released a new AI model named Seedance 2.0. In a blog post, the firm described the model as a substantial leap in generation quality, capable of handling prompts that blend text, images, video, and audio. Users can refine a single request with up to nine images, three video clips, and three audio clips, allowing the system to synthesize complex scenes with multiple subjects.

Video Creation Capabilities

Seedance 2.0 can generate video clips up to 15 seconds long, complete with audio. The model accounts for camera movement, visual effects, and motion, and can follow text‑based storyboards. In a showcase, the AI reproduced a figure‑skating routine that featured synchronized takeoffs, mid‑air spins, and precise ice landings while adhering to real‑world physics.

Public Demonstrations

Social‑media users have already posted examples. One video combined the likenesses of Brad Pitt and Tom Cruise in a cinematic fight, prompting a comment from writer Rhett Reese. Other clips displayed anime‑style animation, cartoon sequences, sci‑fi scenes, and footage that could pass as human‑made. Some demonstrations included characters from popular franchises, highlighting the model's ability to generate recognizable styles.

Availability and Outlook

For now, Seedance 2.0 is accessible through ByteDance’s Dreamina AI platform and its AI assistant Doubao. It is unclear whether the technology will be integrated into TikTok, especially given recent changes in the app’s U.S. ownership. The rollout marks another step in the rapid advancement of AI‑driven video generation, joining efforts from Google, OpenAI, Runway, and other industry players.

#Artificial Intelligence #Video Generation #Multimodal AI #ByteDance #TikTok #Dreamina #Content Creation #Machine Learning #Technology #AI Model