OpenAI Tightens Safeguards for AI-Generated Likeness After Actor Concerns

Key Points
- Actors and SAG‑AFTRA raised concerns over AI‑generated videos using real likenesses.
- Bryan Cranston appeared in unauthorized Sora 2 videos, prompting a response.
- OpenAI announced it has "strengthened guardrails" around its likeness and voice policy.
- The company expressed regret for the unintentional generations and will review complaints quickly.
- Talent agencies United Talent Agency, Association of Talent Agents and Creative Artists Agency co‑signed the statement.
- Cranston thanked OpenAI for the improved safeguards.
- SAG‑AFTRA called for the NO FAKES Act to protect performers from misuse of replication technology.
- OpenAI shifted from an opt‑out to a more granular opt‑in model for rightsholders.

Actors, talent agencies and SAG‑AFTRA have voiced concerns over AI‑generated videos that use real people’s likenesses. In a joint statement, actor Bryan Cranston, OpenAI, the union and major agencies said OpenAI has "strengthened guardrails" around its opt‑in policy for likeness and voice after unauthorized videos of Cranston appeared on Sora 2. OpenAI expressed regret for the unintentional generations and pledged to give artists the right to control how they are simulated, while SAG‑AFTRA called for legislative protection such as the proposed NO FAKES Act.
Industry Pushback Over AI‑Generated Likeness
Actors, studios, agents and the Screen Actors Guild‑American Federation of Television and Radio Artists (SAG‑AFTRA) have expressed alarm about the use of real‑person likenesses in videos generated by OpenAI’s Sora 2 tool. The controversy intensified after videos featuring actor Bryan Cranston surfaced, including one that depicted him taking a selfie with Michael Jackson.
Joint Statement Announces Strengthened Guardrails
In response, OpenAI, Cranston, SAG‑AFTRA and several talent agencies issued a joint statement. The statement confirmed that OpenAI has "strengthened guardrails" around its opt‑in policy for likeness and voice. OpenAI also "expressed regret for these unintentional generations" and promised to "expeditiously" review complaints about policy breaches.
Commitment to Artist Control
The company reiterated that all artists, performers and individuals will have the right to determine whether and how they can be simulated. OpenAI indicated it would provide more granular control for rightsholders, moving away from its earlier opt‑out approach toward an opt‑in model with additional safeguards.
Support From Talent Agencies
The statement was co‑signed by United Talent Agency, the Association of Talent Agents and Creative Artists Agency. These agencies had previously criticized OpenAI’s lack of protections for artists, and they now endorse the newly announced safeguards.
Cranston’s Response and Legislative Calls
Bryan Cranston expressed gratitude for OpenAI’s policy changes, noting he is "grateful to OpenAI for its policy and for improving its guardrails." SAG‑AFTRA president Sean Astin added that performers need legal protection against "massive misappropriation by replication technology" and highlighted the proposed Nurture Originals, Foster Art, and Keep Entertainment Safe Act, also known as the NO FAKES Act.
Future Direction for Sora
OpenAI originally launched Sora 2 with an opt‑out policy for copyright holders, but reversed course after public outcry and controversial videos, such as a Nazi‑themed SpongeBob clip. The company now promises to give rightsholders more granular control, extending the opt‑in model for likeness with further safeguards.