OpenAI's Sora App Fuels Rise of AI-Generated Videos and Deepfake Concerns

Real or AI? It's Harder Than Ever to Spot AI Videos. These Tips Can Help (CNET)

Key Points

  • OpenAI's Sora app creates AI‑generated videos for a TikTok‑style feed.
  • Every Sora video includes a moving white cloud watermark.
  • Videos embed C2PA metadata that identifies OpenAI as the issuer.
  • Social platforms are adding AI‑content labels, but they are not infallible.
  • Experts warn the technology could boost deepfake creation and misinformation.
  • Users can verify authenticity via watermarks, metadata tools, and creator disclosures.
  • Vigilance and critical viewing remain essential to spot AI‑generated content.

OpenAI's Sora app lets anyone create realistic AI‑generated videos that appear on a TikTok‑style platform. Every video includes a moving white Sora logo watermark and embedded C2PA metadata that disclose its AI origin. While the tool showcases impressive visual quality, experts warn it could accelerate the spread of deepfakes and misinformation. Social platforms are beginning to label AI content, but users are urged to remain vigilant and check watermarks, metadata, and disclosures to verify authenticity.

Proliferation of AI‑Generated Video

Artificial‑intelligence video generators have become commonplace, producing everything from celebrity deepfakes to viral novelty clips. OpenAI’s Sora app, available on iOS, adds a new dimension by offering a TikTok‑like feed where every clip is AI‑generated.

Sora’s Built‑In Transparency Features

Each Sora video is automatically watermarked with a moving white cloud‑shaped logo that bounces around the frame’s edges. In addition, the videos embed content‑provenance metadata from the Coalition for Content Provenance and Authenticity (C2PA). When run through the Content Authenticity Initiative’s verification tool, the metadata indicates the video was "issued by OpenAI" and confirms its AI origin.
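For readers who want to run this check themselves, the short sketch below shows one possible way to do it in Python by calling the Content Authenticity Initiative's open‑source c2patool command‑line utility, which prints a file's C2PA manifest as JSON. This is an illustrative sketch, not an official workflow: it assumes c2patool is installed separately, the filename sora_clip.mp4 is a placeholder, and the exact JSON field names can vary by tool version.

```python
# Minimal sketch: inspect a downloaded video's C2PA provenance metadata.
# Assumes the Content Authenticity Initiative's open-source c2patool CLI
# is installed and on PATH; "sora_clip.mp4" is a placeholder filename.
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the file's C2PA manifest store as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],  # prints the manifest report as JSON on success
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest embedded, or the tool failed
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("sora_clip.mp4")
if manifest is None:
    print("No C2PA metadata found (absence alone does not prove a video is real).")
else:
    # Print who generated each claim; for a Sora clip this should point to OpenAI.
    for entry in manifest.get("manifests", {}).values():
        print("Claim generator:", entry.get("claim_generator", "unknown"))
```

Keep in mind that provenance metadata can be stripped when a video is re‑encoded, cropped, or re‑uploaded, so a missing manifest is a reason for caution rather than proof either way.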

Detection and Labeling Methods

Beyond the built‑in watermark and metadata, major social platforms such as Meta, TikTok, and YouTube are implementing systems that flag and label AI‑generated content. These labels are not foolproof, however, and creators can also add their own disclosures in captions or posts.

Industry Concerns

Experts express unease about the ease with which realistic deepfakes can be produced, noting potential risks for public figures and the broader spread of misinformation. While OpenAI participates in industry efforts to improve content provenance, the rapid advancement of tools like Sora underscores the need for continued vigilance among users and platforms.

Practical Guidance for Users

To assess a video’s authenticity, users should look for the Sora watermark, examine embedded metadata with verification tools, and note any platform‑provided AI labels or creator disclosures. They should also stay skeptical of content that feels “off” and watch for visual anomalies such as warped hands, garbled text, or unnatural motion.

Tags: OpenAI, Sora, AI-generated video, deepfake, watermark, metadata, C2PA, Content Authenticity Initiative, misinformation, social media labeling
Generated with News Factory - Source: CNET
