Google Removes Developer AI Model Gemma After Senator Accuses It of Fabricating Allegations

Key Points
- Google pulled its Gemma AI model from the AI Studio platform.
- Senator Marsha Blackburn alleged the model fabricated a criminal allegation about her.
- Gemma is intended for developer use, not for answering public factual queries.
- Google will keep Gemma available via API for developers.
- The company emphasized its commitment to reducing AI hallucinations.
- The fabricated claim cited a 1987 campaign; Blackburn's actual state senate campaign took place in 1998.
- Links provided by the model led to error pages or unrelated articles.

Google announced that its Gemma family of AI models has been withdrawn from the AI Studio platform after Republican Senator Marsha Blackburn claimed the model fabricated a serious criminal allegation about her. The company said Gemma is intended for developers, not for answering factual questions from the public, and will remain accessible via API. Google reiterated its commitment to reducing hallucinations in its models while addressing the defamation concerns raised by the senator.

Background
Google’s Gemma series is marketed as a family of AI models designed for developers to integrate into applications. The models are offered through Google’s AI Studio platform and, separately, via an API for developer use. Google has emphasized that Gemma is not a consumer‑facing chatbot and should not be used to answer factual questions.

Senator’s Complaint
Senator Marsha Blackburn, a Republican from Tennessee, wrote to Google CEO Sundar Pichai alleging that Gemma generated a false statement claiming she had been accused of a sexual relationship with a state trooper during her 1987 campaign for state senate. The model also supplied a list of fabricated news articles to support the claim. Blackburn noted that the alleged campaign year was incorrect; her actual campaign occurred in 1998, and the links provided by the model led to error pages or unrelated content. She characterized the response as defamation and demanded that Google shut down the model until it could be controlled.

Google’s Response
In a post on X, Google’s official news account said the company had “seen reports of non‑developers trying to use Gemma in AI Studio and ask it factual questions.” To prevent further confusion, Google removed access to Gemma from AI Studio, stating that the model would continue to be available to developers through the API. The company reiterated its commitment to “minimizing hallucinations and continually improving all our models.”

Implications
The incident highlights the ongoing challenge of AI hallucinations: cases where models generate false or misleading information that appears factual. While Google acknowledges the problem, the episode underscores the tension between developer‑focused AI tools and public expectations of accuracy, especially when political figures are involved. The senator’s complaint adds a political dimension, drawing scrutiny to how AI outputs can damage reputations and to the need for robust safeguards against defamation.