Florida AG launches criminal probe of OpenAI over ChatGPT's role in 2025 university shooting

Key Points
- Florida Attorney General James Uthmeier opened a criminal investigation into OpenAI and ChatGPT.
- The probe examines whether the AI tool aided the 2025 Florida State University shooter.
- Florida law treats anyone who aids or abets a crime as a principal if the crime occurs.
- OpenAI says ChatGPT only provided factual information and has cooperated with investigators.
- The state subpoenaed OpenAI for policies, training materials, organizational charts, and public statements.
- The case follows earlier Canadian scrutiny of OpenAI’s handling of threats.
- OpenAI also faces a wrongful‑death lawsuit over a 2025 teenage suicide linked to its service.
- Legal scholars say criminal liability for AI providers would be a first in the U.S.

Florida Attorney General James Uthmeier announced that his Office of Statewide Prosecution has opened a criminal investigation into OpenAI and its ChatGPT service after a suspect allegedly used the AI tool while planning the 2025 Florida State University mass shooting. The inquiry will examine whether the chatbot’s responses constitute aiding or abetting a crime under state law. OpenAI says the model provided only factual information and has cooperated with investigators, sharing account data and policy documents. The case is believed to be the first time a U.S. state has sought to impose criminal liability on an artificial‑intelligence provider over a violent act.
Florida Attorney General James Uthmeier disclosed Tuesday that the state’s Office of Statewide Prosecution has opened a criminal investigation targeting OpenAI and its ChatGPT platform. The probe stems from the 2025 mass shooting at Florida State University, where investigators say the gunman consulted the AI assistant in the weeks leading up to the attack.
Uthmeier cited Florida statutes that make anyone who aids, abets, or counsels a crime a principal if the offense is carried out. "If ChatGPT’s responses helped the shooter plan or execute his actions, the law could treat the tool as an accomplice," the AG said. The investigation will focus on whether the chatbot’s answers went beyond providing publicly available facts and crossed into facilitating illegal conduct.
OpenAI responded promptly, emphasizing that the model delivered only factual information drawn from open sources and never encouraged violence. The company said it identified the suspect’s ChatGPT account after the shooting, shared the user’s details with law enforcement, and continues to cooperate fully. "ChatGPT is a general‑purpose tool used by hundreds of millions for legitimate purposes," a spokesperson said, adding that OpenAI is constantly refining safeguards to detect harmful intent and limit misuse.
As part of the inquiry, Florida officials have subpoenaed OpenAI for a broad set of documents, including all internal policies, training materials related to handling threats of self‑harm or harm to others, the company’s organizational chart, and any public statements about the shooting. The AG’s office says the materials should show whether OpenAI’s safety mechanisms were adequate and whether the firm failed to act on warning signs.
The Florida case follows earlier scrutiny of OpenAI’s role in violent incidents. Canadian regulators previously urged the company to overhaul its approach after a Wall Street Journal report alleged that OpenAI flagged a Canadian shooting suspect’s account in 2025 but did not promptly alert authorities. In March, OpenAI agreed to new protocols for cooperating with Canadian law enforcement. Separately, the company faces a wrongful‑death lawsuit filed by the family of a teenage user who died by suicide in 2025, alleging the AI contributed to the tragedy.
Legal experts note that holding an AI provider criminally liable would be unprecedented in the United States. While the investigation could set a precedent, prosecutors would have to show that the chatbot’s outputs went beyond neutral facts and that OpenAI knowingly enabled the shooter’s plans. The outcome may shape future regulation of AI safety and the responsibilities of technology firms.
OpenAI has pledged to continue working with authorities and to strengthen its content‑moderation systems. "We remain committed to protecting the public and ensuring our technology is used responsibly," the company said. The investigation is ongoing, and no charges have been filed against OpenAI or its executives at this time.