OpenAI Disrupts Chinese and Global Actors Using ChatGPT for Surveillance and Influence Operations

Key Points
- OpenAI banned a China‑originated account that used ChatGPT to design a social‑media listening “probe”.
- The probe could crawl X, Facebook, Instagram, Reddit, TikTok and YouTube for politically, ethnically or religiously defined content.
- Another blocked account was developing a “High‑Risk Uyghur‑Related Inflow Warning Model” to track individuals.
- Russian-, Korean- and Chinese-speaking developers were caught refining malware with ChatGPT.
- Networks in Cambodia, Myanmar and Nigeria used the chatbot to create scams; OpenAI estimates ChatGPT is used to detect scams three times as often as it is used to create them.
- OpenAI disrupted influence operations in Iran, Russia and China that used ChatGPT-generated content to drive engagement and sow division.
- The company’s quarterly threat report aims to raise awareness of state‑affiliated and criminal misuse of large language models.
OpenAI reported that it has banned a China-originated account that used ChatGPT to design a social-media listening “probe” capable of crawling major platforms for politically, ethnically or religiously defined content. The company also blocked an account developing a “High-Risk Uyghur-Related Inflow Warning Model” for tracking individuals. These actions are part of a broader effort that uncovered Russian-, Korean- and Chinese-speaking developers refining malware, and networks in Cambodia, Myanmar and Nigeria creating scams with the AI. OpenAI estimates that ChatGPT is used to detect scams three times as often as it is used to create them, and the company has disrupted influence operations in Iran, Russia and China.
Background
OpenAI has begun publishing threat reports that highlight how state-affiliated actors and criminal networks are leveraging large language models for malicious purposes. The latest report, published on the company’s blog, summarizes a range of activities detected over the previous quarter.
Tools and Targets in China
The company disclosed that a now‑banned account originating in China used ChatGPT to help draft promotional materials and project plans for a social‑media listening tool described as a “probe.” This probe could crawl platforms such as X, Facebook, Instagram, Reddit, TikTok and YouTube to locate content defined by the operator as political, ethnic or religious. OpenAI noted that it cannot independently verify whether the tool was employed by a Chinese government entity.
In a separate case, OpenAI blocked an account that was using the chatbot to develop a proposal for a “High‑Risk Uyghur‑Related Inflow Warning Model.” The model was intended to aid in tracking the movements of individuals deemed “Uyghur‑related.” Both incidents illustrate how the technology can be repurposed for targeted surveillance.
Global Threat Landscape
Beyond China, OpenAI identified Russian-, Korean- and Chinese-speaking developers who were using ChatGPT to refine malware. The company also uncovered entire networks operating in Cambodia, Myanmar and Nigeria that employed the chatbot to assist in creating scams. OpenAI’s internal estimates indicate that ChatGPT is being used to detect scams three times as often as it is used to create them.
During the summer, OpenAI disrupted operations in Iran, Russia and China that leveraged ChatGPT to generate posts, comments and other content designed to drive engagement and sow division as part of coordinated online influence campaigns. The AI‑generated material was distributed across multiple social‑media platforms both within the originating nations and internationally.
OpenAI’s Response
OpenAI’s threat reports, first published in February 2024, aim to raise awareness of how large language models can be weaponized to debug malicious code, develop phishing scams and carry out other illicit activities. The latest roundup summarizes notable threats and the accounts banned for violating OpenAI’s usage policies.
By actively monitoring and disabling accounts that exploit its technology for surveillance, malware refinement or disinformation, OpenAI seeks to limit the misuse of its models while continuing to provide tools for legitimate users.
Implications
The disclosures underscore the dual‑use nature of advanced AI systems. While the technology offers powerful capabilities for research and productivity, it also presents opportunities for authoritarian surveillance and coordinated misinformation efforts. OpenAI’s proactive stance highlights the challenges tech companies face in balancing openness with responsibility.