Sora Adds User Controls for AI-Generated Video Appearances

Key Points
- Sora now offers controls to limit AI cameo appearances in videos.
- Users can block their cameo from political content, specific spoken words, or certain visual contexts.
- Customization includes optional visual elements like specific hats.
- Updates are part of broader efforts to stabilize the platform.
- Critics warn that past AI tools have been bypassed for misuse.
- OpenAI is working to improve the existing watermark protection.
- The company pledges further enhancements to user control features.
OpenAI's Sora app, described as a "TikTok for deepfakes," now lets users limit how AI-generated versions of themselves appear in videos. The update introduces preferences that can block cameo appearances in political content, restrict specific language, or keep the likeness out of certain visual contexts. OpenAI says the changes are part of broader weekend updates aimed at stabilizing the platform and addressing safety concerns. While the new tools give creators more say over their digital likenesses, critics note that safeguards on past AI tools have been bypassed and that Sora's watermark remains weak. OpenAI pledges further refinements.
Expanded User Controls
OpenAI has rolled out a set of controls for its Sora app, which has been likened to a "TikTok for deepfakes." The new features let users dictate how AI-generated versions of themselves, referred to as "cameos," can be used in short videos. Users can now prevent their AI double from appearing in political videos, block certain words from being spoken, or stop the likeness from showing up in specific visual scenarios, such as near a particular condiment.
Customization Options
The platform also allows more playful preferences. For example, a user could require their AI self to wear a specific hat in every video. These settings are intended to give individuals greater agency over how their digital representations are presented.
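OpenAI has not published a schema or developer API for these settings, so the sketch below is purely illustrative: every field and function name is hypothetical, meant only to convey the shape of the feature, i.e. per-user cameo preferences checked against each proposed video.

```python
from dataclasses import dataclass, field

@dataclass
class CameoPreferences:
    """Hypothetical per-user cameo settings; OpenAI's real schema is not public."""
    allow_political_videos: bool = False                       # political cameo use blocked by default
    blocked_words: set[str] = field(default_factory=set)       # words the AI double must not say
    blocked_visual_contexts: set[str] = field(default_factory=set)  # scenes to avoid, e.g. {"mustard"}
    required_accessories: tuple[str, ...] = ()                 # always rendered, e.g. ("fedora",)

def cameo_use_allowed(prefs: CameoPreferences, is_political: bool,
                      transcript: str, scene_tags: set[str]) -> bool:
    """Return True only if a proposed video respects the user's cameo preferences."""
    if is_political and not prefs.allow_political_videos:
        return False
    if set(transcript.lower().split()) & prefs.blocked_words:
        return False
    if scene_tags & prefs.blocked_visual_contexts:
        return False
    return True

# Example: a user who bans political cameos and any scene featuring mustard.
prefs = CameoPreferences(blocked_visual_contexts={"mustard"},
                         required_accessories=("fedora",))
print(cameo_use_allowed(prefs, is_political=False,
                        transcript="hello from the kitchen",
                        scene_tags={"kitchen", "mustard"}))  # -> False
```

However OpenAI actually implements it, the design point the sketch captures is that the preferences belong to the person being depicted, and each generation request using their cameo is checked against them.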
Safety and Stability Efforts
The control update is part of a broader batch of weekend changes aimed at stabilizing Sora and managing the influx of AI‑generated content that OpenAI calls "slop." OpenAI staff have indicated that the company is working on improving the existing watermark, which some users have already found ways to circumvent.
Ongoing Concerns
Critics point out that earlier AI tools, such as ChatGPT and Claude, have been coaxed into giving illicit advice despite their safeguards, suggesting that determined actors may find ways around Sora's new restrictions as well. Users have already bypassed Sora's watermark, raising questions about the robustness of its current protections.
Future Directions
OpenAI's team says the company will continue to "hill-climb" on making restrictions more robust and will add additional ways for users to stay in control. The rollout responds to user concerns, even as the company acknowledges that the fight against misuse is ongoing.