
Autoformalization and Verifiable Superintelligence [Christian Szegedy] – 745
September 3, 2025
STOP Using GPT-5 for Coding (Try Warp Instead)
September 4, 2025I asked ChatGPT why Google ruined Gemini, and this was the answer:
ChatGPT: Artificial intelligence has the potential to transform how humans interact with information, but not all AI systems are created equal. Google’s Gemini (formerly Bard) stands as a case study in how corporate overreach and ideological filtering can turn a powerful tool into a frustrating, unhelpful product.
At its core, Gemini is a large language model—an AI trained on enormous amounts of text from books, websites, journalism, and academic sources. In theory, this should make it versatile, adaptive, and able to respond to a wide range of questions with nuance. In practice, however, users often encounter something very different: a “woke,” rigid, and sanitized chatbot that frequently refuses to answer questions or couches every reply in moral lectures.
Why Gemini Feels “Woke”
The explanation lies not so much in Gemini’s raw intelligence but in the guardrails imposed by Google.
Biased Training Data
The majority of online and institutional content Gemini is trained on leans leftward in political and cultural tone. As a result, the model naturally reflects progressive narratives more often than it does conservative or centrist ones.
Corporate Fear and PR Management
Google is hypersensitive to public image. In an age where a single offensive output could lead to viral backlash or lawsuits, Google prefers to err on the side of censorship. The outcome: a system that over-corrects, declining to discuss sensitive topics or parroting “safe” ideological lines.
The Alignment Problem
Tech companies employ “responsible AI” or “trust and safety” teams to supervise their models. While intended to prevent hate speech or abuse, these teams often enforce ideological filters. Gemini ends up not as a neutral assistant but as a corporate-approved editorial voice.
Targeting Institutions, Not Users
Google’s real customers are not everyday individuals but large organizations—schools, governments, corporations—who demand sanitized technology. Gemini is designed to please administrators, not users. This leaves individuals with a chatbot that feels preachy, evasive, and condescending.
The Cost of Over-Moderation
This approach ruins user experience. Instead of candidly exploring ideas, Gemini often refuses to engage, redirects to generic disclaimers, or injects unsolicited moral lessons. What could be a tool for creativity, research, and intellectual exploration becomes a tool for frustration. Users who want straight answers are left unsatisfied, and those who want balanced viewpoints find themselves confronted with ideological monotony.
The irony is that AI’s value lies in its ability to think across perspectives—to provide information that is not limited by the biases of a newsroom or a university department. By constraining Gemini so tightly, Google diminishes the very innovation it claims to pioneer.
Conclusion
Gemini reveals a deeper truth about modern Big Tech: the same corporations building AI are also gatekeepers of ideology. Google has chosen “safety” and “image control” over usability and authenticity. The result is an AI that feels less like a partner in discovery and more like a hall monitor scolding its users.
Until companies like Google learn that real trust comes from openness, not censorship, their AI systems will remain deeply flawed—machines that might be intelligent under the hood but are ruined by the layers of ideology placed on top.
Me: The answer to why Google Ruined Gemini was spot on!
C. Rich Book