ByteDance Adds Watermarking and IP Guardrails to Seedance 2.0 for Cautious Global Rollout

The Next Web

Key Points

  • ByteDance re‑launches Seedance 2.0 with new IP and deepfake safeguards.
  • Model now blocks generation from real faces and copyrighted characters.
  • All output includes visible watermarks and embedded C2PA Content Credentials.
  • An advanced invisible watermark can track content after it leaves the platform.
  • Initial rollout targets paid users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand and Vietnam.
  • United States and India are excluded from the first wave pending regulatory clarity.
  • Safeguards aim to meet upcoming EU AI Act transparency requirements.
  • Red‑team testing shows some ability to bypass filters via creative prompting.
  • ByteDance’s vertical integration gives it control over generation, editing and distribution.
  • The move contrasts with OpenAI’s recent shutdown of its AI video tool, Sora.

ByteDance is re‑launching its AI video model, Seedance 2.0, after a backlash over deepfake content. The company has partnered with a third‑party red‑team to embed visible watermarks, C2PA Content Credentials, and an advanced invisible watermark that can track content even after it leaves the platform. New safeguards block generation from real faces and copyrighted characters, addressing concerns raised by Hollywood studios and the Motion Picture Association. The rollout will start with paid users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand and Vietnam, while the United States and India are omitted pending regulatory clarity.

Background and Controversy

Six weeks ago, a viral video showed a fabricated fight between two major Hollywood actors. The clip was produced by Seedance 2.0, ByteDance’s AI video model, and sparked cease‑and‑desist letters from six major studios, a formal denunciation from the Motion Picture Association, and criticism from SAG‑AFTRA over unauthorized use of performers’ likenesses. The incident highlighted the model’s ability to create realistic deepfakes that could infringe on intellectual property and personal rights.

New Safeguards and Transparency Measures

In response, ByteDance’s global safety and intellectual‑property teams, working with a third‑party red‑team, have added several guardrails before the model’s international release through CapCut, the company’s video‑editing platform used by more than 400 million monthly active users. The updated Seedance 2.0 now blocks video generation from images or videos that contain real faces, directly addressing the deepfake controversy. It also prevents the unauthorized creation of copyrighted characters such as Shrek, SpongeBob, Darth Vader, and Deadpool, which were cited in the Motion Picture Association’s complaint.

On the transparency front, every output will carry visible watermarks and embedded C2PA Content Credentials, an industry‑standard protocol for labeling AI‑generated media. ByteDance is also deploying an “advanced invisible watermarking” technology designed to identify content made with the model even after it has been shared or altered off‑platform. The company says it will conduct proactive monitoring for IP violations.
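ByteDance has not disclosed how its invisible watermark works. As a rough intuition for the category of technique, the sketch below uses a deliberately simplified least‑significant‑bit (LSB) scheme: a marker bit string is hidden in the low‑order bits of pixel values, where it is visually imperceptible but machine‑recoverable. The `embed`/`extract` functions and the sample pixel buffer are illustrative assumptions, not the company’s actual method, which would need to survive cropping, re‑encoding and editing.

```python
# Simplified illustration of invisible watermarking via least-significant-bit
# (LSB) embedding. Production provenance watermarks are far more robust than
# this; the point here is only the hide-then-recover mechanism.

def embed(pixels, mark):
    """Hide the bits of `mark` (a str of '0'/'1') in the LSBs of `pixels`."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | int(bit)  # clear the LSB, then set it to the mark bit
    return out

def extract(pixels, length):
    """Recover `length` hidden bits from the LSBs of `pixels`."""
    return "".join(str(p & 1) for p in pixels[:length])

# Hypothetical 8-bit grayscale pixel buffer and a short marker.
pixels = [120, 121, 119, 200, 201, 199, 50, 51]
mark = "10110010"

stamped = embed(pixels, mark)
assert extract(stamped, len(mark)) == mark                     # marker survives
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stamped))   # each pixel changes by at most 1
```

Because each pixel value shifts by at most one intensity level, the mark is invisible to viewers; a detector that knows where to look can still read it back, which is the property C2PA‑style provenance systems rely on at much greater sophistication.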

Rollout Strategy

The rollout is deliberately cautious. CapCut will initially make Seedance 2.0 available to paid users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand and Vietnam. The United States and India—ByteDance’s most complex regulatory markets—are absent from the first wave. Wider availability in Europe, Africa and additional markets in South America and Southeast Asia is expected to follow, though no firm timeline has been offered for the United States.

Regulatory Context

The timing coincides with heightened regulatory scrutiny. The EU AI Act’s transparency requirements, which take effect in August 2026, will require providers of generative AI systems to mark output in machine‑readable formats and disclose the artificial origin of deepfakes. ByteDance’s adoption of C2PA watermarks and invisible marking appears to anticipate these obligations, though whether the safeguards will satisfy European regulators remains uncertain.

Red‑team testing indicates the guardrails are not impenetrable; creative prompting can still produce “likeness‑adjacent” characters that evoke real persons or copyrighted figures without directly reproducing them. This gap between policy and model behavior is a common challenge in AI governance.

Competitive Landscape

ByteDance’s move contrasts with OpenAI’s recent decision to shut down its own AI video tool, Sora, after a 45 percent drop in downloads and a collapsed licensing deal with Disney. While OpenAI retreats, ByteDance pushes forward, leveraging its vertical integration—owning the AI model, the editing platform and TikTok, the dominant short‑form video distribution channel—to potentially enforce IP protections across the entire content pipeline.

Outlook

The added safeguards represent a first step toward commercializing AI video generation at scale without drowning in litigation. Hollywood, regulators and policymakers across multiple jurisdictions will be watching closely to determine whether ByteDance’s measures are sufficient to address deepfake concerns and intellectual‑property rights.

#AI video generation #deepfake #watermarking #intellectual property #ByteDance #CapCut #global rollout #regulation #EU AI Act #content credentials
