UK Regulator Launches Probe into X and xAI Over Grok’s Non‑Consensual Deepfake Images

TechRadar

Key Points

  • UK’s ICO has opened a formal investigation into X and xAI over Grok’s creation of non‑consensual deepfake images.
  • Researchers estimate Grok generated about three million sexualized images in under two weeks, including tens of thousands depicting minors.
  • The probe focuses on potential GDPR breaches and the adequacy of safeguards to prevent illegal content.
  • Violations could lead to fines of up to £17.5 million or 4% of global turnover.
  • X and xAI claim to be implementing stronger safeguards, though specific details are limited.
  • UK MPs are calling for AI legislation that mandates risk assessments before public release of AI tools.
  • The case highlights broader concerns about privacy, consent, and the spread of AI‑generated explicit content.

Britain’s data protection watchdog has opened a formal investigation into X and the affiliated AI company xAI after reports that the Grok chatbot generated millions of sexually explicit AI images, including many that appear to depict minors. The inquiry focuses on possible breaches of the General Data Protection Regulation, examining whether the companies failed to implement adequate safeguards to prevent the creation and distribution of non‑consensual deepfakes. Officials warn that violations could trigger fines of up to £17.5 million or 4% of global turnover, and lawmakers are calling for stronger AI legislation.

Investigation Overview

The United Kingdom’s Information Commissioner’s Office (ICO) has announced a sweeping investigation into X and the artificial‑intelligence company xAI following allegations that the Grok chatbot produced non‑consensual, sexually explicit deepfake images. Researchers estimate that Grok generated around three million sexualized images in less than two weeks, with tens of thousands appearing to depict minors. The ICO’s executive director of regulatory risk and innovation, William Malcolm, described the reports as raising “deeply troubling questions” about the use of personal data to create intimate or sexualized imagery without consent.

Potential GDPR Violations

The probe will assess whether X and xAI breached the General Data Protection Regulation (GDPR) by allowing the creation and sharing of such images. Under the UK GDPR, violations can result in fines of up to £17.5 million or 4% of a company’s global annual turnover, whichever is higher. The investigation is not limited to user‑generated prompts; it also examines whether the companies failed to put in place sufficient safeguards to block the generation of illegal content.

Company Response and Safeguards

X and xAI have stated that they are strengthening safeguards, though details remain limited. X recently announced new measures to block certain image‑generation pathways and to limit the creation of altered photos involving minors. However, regulators note that once explicit content circulates on a platform as large as X, it becomes nearly impossible to eradicate.

Political and Legislative Reaction

Members of Parliament, led by Labour’s Anneliese Dodds, are urging the government to introduce AI legislation that would require developers to conduct thorough risk assessments before releasing tools to the public. The incident highlights growing concern about the increasingly blurred line between genuine and fabricated content, especially as AI image generation becomes more widespread.

Broader Implications for Privacy and Safety

The investigation underscores a shift away from the “move fast and break things” mindset that has dominated much of the tech sector. Regulators are signaling a loss of patience and are pushing for enforceable safety‑by‑design requirements, greater transparency about model training data, and clearer guardrails to protect individuals from AI‑generated manipulation.

Tags: data privacy, GDPR, AI ethics, deepfake, X, xAI, Grok, UK regulator, digital safety, artificial intelligence, privacy law
Generated with News Factory - Source: TechRadar
