Google Removes Gemma Model from AI Studio After Senator Accuses It of Defamation

Key Points

  • Senator Marsha Blackburn accused Google’s Gemma model of producing false, defamatory statements about her.
  • Blackburn’s letter highlighted a specific instance where the model fabricated allegations of sexual misconduct.
  • Google stated that Gemma was intended for developer use, not for direct public queries.
  • The company removed Gemma from AI Studio but will keep it available via its API.
  • Google acknowledged hallucinations as a known issue and said it is working to reduce them.
  • The incident ties into broader political concerns about AI bias and defamation claims.

Google has pulled its open Gemma model from the AI Studio platform following a complaint from U.S. Senator Marsha Blackburn. The senator claimed the model generated false statements alleging sexual misconduct against her, describing the output as defamation rather than a harmless hallucination. Google responded that the model was intended for developer use, not for direct public queries, and said it would keep the model available through its API while working to curb erroneous outputs. The episode highlights ongoing political concerns about AI bias and misinformation.

Background

U.S. Senator Marsha Blackburn sent a letter to Google chief executive Sundar Pichai alleging that the company’s Gemma model, accessible through the AI Studio development environment, produced false statements about her personal conduct. In the letter, Blackburn asserted that when the model was asked whether she had been accused of rape, it fabricated a narrative involving a state trooper and non‑consensual acts, an account she described as entirely untrue. The senator also referenced a separate lawsuit filed by conservative activist Robby Starbuck, who claims Google’s AI systems have generated defamatory claims about him.

Blackburn framed the model’s output as more than a typical "hallucination"—a term commonly used to describe AI‑generated inaccuracies—arguing that the false statements constitute defamation that was distributed by a Google‑owned system. She linked the incident to broader concerns about perceived bias against conservative figures in AI technologies.

Google’s Response

Google acknowledged the issue, noting that Gemma was designed as a lightweight, open model for developers to integrate into their own applications, not as a consumer‑facing chatbot. The company explained that reports of non‑developers using AI Studio to ask factual questions prompted the decision to remove Gemma from the platform. Google emphasized that the model would remain accessible via its application programming interface (API) for legitimate development purposes.

In response to the senator’s allegations, Google’s vice president for government affairs and public policy reiterated that hallucinations are a known challenge in large language models and that the company is actively working to mitigate such errors. The firm clarified that it never intended the model to be used as a public question‑answer tool, reinforcing its commitment to responsible deployment of AI technologies.

The removal of Gemma from AI Studio underscores the tension between rapid AI innovation and the demand for accountability, especially when political figures claim that AI outputs have caused reputational harm. The episode adds to ongoing debates about how technology companies should address erroneous or potentially defamatory content generated by their models, and how regulatory bodies might oversee such issues.

Tags: Google, Gemma, AI Studio, Marsha Blackburn, Robby Starbuck, AI defamation, AI hallucination, technology policy, AI bias, US Senate
Generated with News Factory - Source: TechCrunch
