Florida Attorney General launches criminal probe of OpenAI over alleged ChatGPT misuse

Key Points
- Florida Attorney General James Uthmeier has opened a criminal investigation into OpenAI and ChatGPT.
- Investigators found disturbing ChatGPT prompts from a campus shooting suspect and a student accused of killing two classmates.
- Florida law treats anyone who aids, abets, or counsels a crime as a principal offender.
- OpenAI asserts that ChatGPT refuses to provide instructions or advice for illegal activities.
- ChatGPT itself responded that it will not help someone commit a crime.
- The case highlights growing reliance on AI chat logs as evidence in criminal investigations.
- Legal experts debate whether an AI platform can be held liable for a user’s actions.

Florida Attorney General James Uthmeier has opened a criminal investigation into OpenAI and its ChatGPT service after authorities say two suspects in recent violent incidents, including a campus shooting suspect, posed disturbing questions to the chatbot. The probe will examine whether the platform aided or abetted illegal activity, citing state law that holds facilitators equally responsible. OpenAI maintains that ChatGPT is programmed to refuse to provide instructions for criminal conduct.
Florida Attorney General James Uthmeier announced a criminal investigation targeting OpenAI and its ChatGPT chatbot following two high‑profile incidents that raised questions about the AI’s role in facilitating illicit behavior. In one case, investigators uncovered a series of unsettling prompts from Phoenix Ikner, an alleged gunman who asked the model hypothetical questions about a potential shooting at Florida State University and the likely legal consequences. In a separate incident, Tampa college student Hisham Abugharbieh, accused of killing two classmates, allegedly queried ChatGPT about disposing of a body, using a typo‑laden prompt that read, “What happens if a human has a put in a black garbage bag and thrown in a dumpster.”
The Attorney General’s office says the inquiry will focus on whether the chatbot’s responses crossed the line from informational to actionable, potentially violating Florida’s statutes that treat anyone who aids, abets, or counsels a crime as a principal offender. "Florida law states that anyone who aids, abets, or counsels someone in the commission of a crime… may be considered a principal to the crime," the filing reads. Prosecutors plan to examine chat logs, the AI’s guardrails, and OpenAI’s compliance with state regulations.
OpenAI responded by reiterating the safeguards built into ChatGPT. In a public statement, the company explained that the model is designed to refuse or redirect queries that seek instructions, tactics, or advice for wrongdoing. When asked directly, ChatGPT replied, "I won’t provide instructions, tactics, or advice that could help someone commit a crime." The response also noted that most user interactions involve everyday topics such as writing assistance, travel planning, or general curiosity, with illegal‑activity queries representing a small minority.
Legal experts note that digital forensics has long relied on search histories and online activity to build cases, and the shift toward AI chat logs reflects a broader trend in law enforcement. Unlike traditional search queries, conversational AI can reveal a user’s intent more explicitly, potentially offering prosecutors a richer evidentiary source. However, the debate continues over whether a tool that merely provides information can be held liable for a user’s actions.
OpenAI’s defense hinges on the distinction between providing general knowledge and enabling criminal conduct. The company argues that ChatGPT does not possess awareness of any specific crime and that its responses are generated based on patterns in the data it was trained on, not on real‑time intent detection. Critics counter that the model’s ability to simulate a conversational partner may inadvertently encourage users to probe deeper, especially when the AI appears to offer nuanced explanations.
The investigation arrives amid growing scrutiny of generative AI across the United States. Lawmakers and regulators are exploring frameworks to ensure that AI systems do not become tools for illicit activity while preserving their benefits for education, business, and creativity. Florida’s move could set a precedent for how states address the intersection of emerging technology and criminal law.
For newsrooms, the case underscores the importance of AI-aware editorial workflows. As outlets adopt AI-assisted content generation and distribution, editors must remain vigilant about the tools that shape reporting, and pairing those tools with strong compliance checks may become standard practice to avoid unintended legal entanglements.