Florida Attorney General Launches Probe into OpenAI Over Safety and Security Risks

Key Points
- Florida AG James Uthmeier announced an investigation into OpenAI over safety and national‑security concerns.
- The probe focuses on alleged links between ChatGPT and child sexual‑abuse material, self‑harm encouragement, and the FSU shooting suspect.
- The family of a victim of the 2025 Florida State University shooting has sued OpenAI, claiming the shooter used ChatGPT.
- OpenAI, slated for an IPO this year, also faces a Federal Trade Commission order to disclose child‑impact assessments.
- Uthmeier warned that OpenAI’s data could be accessed by foreign adversaries, including the Chinese Communist Party.
- Subpoenas are expected soon as the investigation seeks internal documents and usage data.
- The case adds to a wave of state actions targeting AI safety, transparency and potential misuse.

Florida Attorney General James Uthmeier announced Thursday that his office will investigate OpenAI, citing concerns that the company’s AI tools are being used for criminal activity and child exploitation and could fall into the hands of foreign adversaries. The probe follows a lawsuit filed by the family of a Florida State University shooting victim, which alleges the suspect communicated with ChatGPT. OpenAI, preparing for an IPO later this year, now faces heightened scrutiny from state officials and the Federal Trade Commission over how it safeguards its technology.
Florida Attorney General James Uthmeier on Thursday opened a formal investigation into OpenAI, the creator of the ChatGPT chatbot, after raising alarms that the firm’s artificial‑intelligence tools pose public‑safety and national‑security threats. In a statement, Uthmeier warned that OpenAI’s data and technology could be “falling into the hands of America’s enemies, such as the Chinese Communist Party.”
The inquiry will examine several disturbing allegations. State officials say ChatGPT has been linked to criminal behavior, including the distribution of child sexual‑abuse material and the encouragement of self‑harm. Moreover, Uthmeier alleges the chatbot may have assisted the individual suspected of carrying out the April 2025 shooting at Florida State University, a claim that adds a violent‑crime dimension to the probe.
The family of the victim killed in the FSU shooting has already filed a civil lawsuit against OpenAI, asserting that the suspect maintained “constant communication” with ChatGPT in the days leading up to the attack. The lawsuit, filed this week, intensifies pressure on the company as it prepares for an initial public offering later in 2026.
OpenAI’s challenges extend beyond the state level. Last October, the Federal Trade Commission ordered the firm and other tech giants to provide detailed information on how they assess the impact of their chatbots on children. The FTC’s request underscores growing federal concern about AI’s influence on minors, a topic that dovetails with Florida’s child‑safety worries.
Uthmeier’s office signaled that subpoenas are “forthcoming,” indicating that the investigation will move quickly to gather documents, internal communications and any data that could reveal how OpenAI’s models are deployed. The attorney general emphasized that AI should “supplement, support, and advance mankind, not lead to an existential crisis or our ultimate demise.”
OpenAI has not yet commented publicly on the investigation. Industry observers note that the timing is precarious; the company’s IPO plans could be jeopardized if regulators determine that its safety protocols are insufficient. Investors, meanwhile, are watching closely as the firm balances rapid product rollout with mounting calls for responsible AI governance.
State officials also highlighted the broader geopolitical stakes. By referencing the Chinese Communist Party, Uthmeier suggested that foreign actors might exploit OpenAI’s technology for espionage or disinformation campaigns. Such concerns echo warnings from other U.S. agencies that AI could become a strategic tool for adversaries if left unchecked.
As the probe unfolds, Florida’s investigation adds to a growing patchwork of state‑level actions aimed at curbing AI risks. California, Texas and New York have all introduced legislation targeting AI transparency, bias and child protection. The outcome of Uthmeier’s inquiry could set a precedent for how state attorneys general address emerging tech threats.
For now, OpenAI faces a dual front: defending its product’s safety record while navigating the regulatory gauntlet that could shape the future of artificial intelligence in the United States.