
Google has disabled its Gemma AI model in AI Studio following allegations from U.S. Senator Marsha Blackburn that the system generated false and defamatory statements about her. The incident has reignited debates about the accountability and reliability of AI systems, particularly when they produce fabricated or damaging content.

Senator Blackburn’s Allegations

In a formal letter to Google CEO Sundar Pichai, Tennessee Republican Senator Marsha Blackburn accused the company’s Gemma model of spreading false claims of sexual misconduct against her.

Blackburn said that when the model was asked, “Has Marsha Blackburn been accused of rape?”, it responded by fabricating a story alleging that, during a 1987 state senate campaign, a state trooper had accused her of pressuring him to obtain prescription drugs and of engaging in non-consensual acts.

The senator firmly denied the claims, noting that even the campaign year was incorrect: her actual campaign took place in 1998. According to Blackburn, the AI’s response also included links that were broken or pointed to unrelated news stories, creating a misleading impression of authenticity.

“There has never been such an accusation, there is no such individual, and there are no such news stories,” she wrote, describing the AI’s response as a complete fabrication.

Growing Concerns Over AI Defamation

Blackburn’s letter also referenced a recent Senate Commerce hearing, during which she highlighted a lawsuit filed by conservative activist Robby Starbuck against Google. Starbuck alleged that Google’s AI models—including Gemma—had generated defamatory statements, labeling him a “child rapist” and “serial sexual abuser.”

In response, Markham Erickson, Google’s Vice President for Government Affairs and Public Policy, acknowledged that so-called AI hallucinations—instances where models invent information—remain a known challenge. He assured lawmakers that Google is actively working to mitigate such errors.

However, Blackburn dismissed that explanation, insisting that these were not harmless hallucinations, but rather acts of defamation created and distributed by a Google-owned product.

Accusations of Political Bias in AI

The controversy has surfaced amid ongoing accusations from conservative lawmakers that AI systems display a liberal or progressive bias. President Donald Trump and his allies have repeatedly criticized what they call “AI censorship,” claiming that generative models are being trained to favor left-leaning narratives.

Earlier this year, Trump even signed an executive order targeting so-called “woke AI” in the federal government, aimed at curbing perceived ideological bias in the generative AI tools that agencies procure.

While Blackburn has not always supported Trump’s broader tech policies (she notably helped strip a proposed moratorium on state-level AI regulation from one of his legislative packages), her recent remarks echo concerns about anti-conservative bias. In her letter, she accused Google of maintaining “a consistent pattern of bias against conservative figures.”

Google’s Response and Removal Decision

In a post on X (formerly Twitter) late Friday, Google indirectly addressed the situation. Although it did not refer to Blackburn’s claims directly, the company acknowledged receiving “reports of non-developers trying to use Gemma in AI Studio and ask it factual questions.”

Google clarified that Gemma was never intended as a consumer-facing chatbot, explaining, “We never intended this to be a consumer tool or model, or to be used this way.”

Gemma is part of Google’s family of lightweight open models designed for developers to integrate into their own applications. The AI Studio platform, where the controversy originated, is meant for building and testing AI-powered apps, not for providing factual answers to the public the way consumer chatbots such as Gemini or ChatGPT do.

As a precaution, Google announced it has temporarily removed Gemma from AI Studio, though developers will continue to have access through API integrations.
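
That distinction is worth making concrete: rather than typing questions into a chat window, developers reach Gemma programmatically. The sketch below shows what such an API integration might look like, assuming the google-generativeai Python SDK and a Gemma variant served through the Gemini API; the model identifier "gemma-3-27b-it" is an assumption and may differ from what Google actually exposes.

```python
# Minimal sketch of developer-style access to a Gemma model through the
# Gemini API, as opposed to asking it factual questions in the AI Studio UI.
# Assumes the google-generativeai SDK (pip install google-generativeai);
# the model name "gemma-3-27b-it" is an assumption and may vary.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # developer API key

model = genai.GenerativeModel("gemma-3-27b-it")
response = model.generate_content(
    "Rewrite this error message so end users can understand it: ..."
)
print(response.text)
```

In this workflow the application developer, not the end user, controls which prompts reach the model, which is precisely the boundary Google says non-developers crossed in AI Studio.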

According to TechCrunch, Google has not confirmed whether it will update Gemma’s content moderation filters or retrain the model in response to Blackburn’s complaint.

Broader Implications

The episode underscores the growing tension between AI innovation and responsibility. As generative models become more accessible, the potential for misinformation and reputational harm continues to rise—especially when AI is used outside of its intended scope.

For policymakers, Blackburn’s case highlights the urgent need for clearer AI regulations around liability, defamation, and factual accuracy. It also raises questions about how companies should disclose model limitations and handle harmful outputs.

As Google continues to expand its AI ecosystem, the Gemma controversy serves as a cautionary tale for developers and corporations alike—illustrating both the power and the peril of large-scale generative AI systems in public use.
