AI Image Generator Startup’s Database Exposes Millions of Non‑Consensual Nude Images

Wired AI

Key Points

  • Security researcher Jeremiah Fowler found a misconfigured cloud bucket exposing over one million AI‑generated images and videos.
  • The majority of the content was explicit, including non‑consensual "nudified" depictions and AI‑generated images of minors.
  • DreamX, the operator of MagicEdit and DreamPal, closed public access, suspended its products and started an internal investigation.
  • SocialBook, an influencer‑marketing firm linked to the bucket, said it is a separate legal entity not responsible for the exposed storage; BoostInsider, the listed app developer, was described as defunct.
  • Both MagicEdit and DreamPal apps were removed from Apple’s App Store and Google Play due to policy violations.
  • The incident highlights ongoing challenges in moderating AI‑generated sexual content and protecting children online.
  • Advocacy groups warn that rapid AI startup growth often outpaces trust‑and‑safety safeguards.

A security flaw left an AI image‑generation startup’s cloud storage publicly accessible, revealing over one million images and videos, most of them explicit and many featuring non‑consensual or underage subjects. The exposure involved the apps MagicEdit and DreamPal, operated by the company DreamX, and was also linked to the related brands BoostInsider and SocialBook. Security researcher Jeremiah Fowler reported the exposure, prompting the firm to shut down access, suspend its products and launch an internal investigation. The incident highlights ongoing challenges around AI‑generated sexual content, child protection and trust‑and‑safety practices in emerging tech companies.

Background

Security researcher Jeremiah Fowler discovered that a cloud storage bucket used by an AI image‑generation startup was misconfigured, allowing anyone on the internet to access its contents. The bucket housed assets for the company’s consumer‑facing services MagicEdit and DreamPal, which are operated under the DreamX brand. Fowler’s investigation, first reported on the ExpressVPN blog, revealed the extent of the exposure and prompted immediate outreach to the company.

Scope of the Exposure

The publicly accessible bucket contained 1,099,985 records, the overwhelming majority of which were pornographic images or videos. The collection included "nudified" depictions of real individuals, face‑swapped content, and AI‑generated representations of minors. Fowler noted that at the time of discovery roughly 10,000 new images were being added each day, indicating an active pipeline of content generation. The data set spanned a range of styles, from anime‑style graphics to hyper‑realistic depictions that appeared to be based on actual people.

Company Responses

After being contacted, DreamX confirmed that it had closed public access to the bucket and initiated an internal investigation with external legal counsel. The firm also suspended access to its products pending the investigation’s outcome. Statements from DreamX emphasized a commitment to user safety, legal compliance and transparency, and highlighted existing moderation safeguards such as OpenAI’s Moderation API and automatic prompt filtering.

Related entities were also addressed. A spokesperson for SocialBook, an influencer‑marketing firm linked to the bucket, clarified that SocialBook does not operate or manage the exposed storage and is a separate legal entity. BoostInsider, the developer listed for the MagicEdit and DreamPal apps on the Apple App Store, was described as a defunct entity whose apps were removed as part of a broader restructuring and to strengthen content‑moderation frameworks.

Both MagicEdit and DreamPal were taken down from major app stores. Google confirmed that the apps were removed for policy violations related to sexually explicit content, while Apple indicated the apps had been withdrawn from its store.

Industry Implications

The breach underscores persistent risks associated with AI‑driven image generation, particularly the creation and distribution of non‑consensual sexual imagery and child sexual abuse material (CSAM). Adam Dodge, founder of the advocacy group EndTAB, warned that the incident reflects a broader pattern of startups prioritizing rapid growth over robust trust‑and‑safety measures. Fowler’s findings join earlier reports of misconfigured AI‑image databases that contained similarly abusive content.

Law enforcement and child‑protection organizations, including the National Center for Missing & Exploited Children, were notified, though they do not disclose details of specific tips received. The episode has prompted renewed calls for stricter oversight, mandatory content moderation, and clearer accountability mechanisms for companies deploying generative AI technologies.

Source: Wired AI