Vitalik Says Grok Is Arguably a ‘Net Improvement’ to X Despite Its Flaws
- Vitalik says Grok improves truth-seeking on X
- AI often challenges users instead of confirming biases
- Seen as second only to Community Notes in impact
- Concerns remain around bias and centralized AI control
- Grok’s flaws highlight broader AI governance issues
Ethereum co-founder Vitalik Buterin says X’s AI chatbot, Grok, has made the platform more truth-friendly overall — even if it remains deeply flawed.
Speaking on Thursday, Buterin argued that Grok’s ability to respond unpredictably is one of its biggest strengths. Unlike algorithms that reinforce existing opinions, Grok often challenges users who expect it to validate their political or ideological beliefs.
“The easy ability to call Grok on Twitter is probably the biggest thing after community notes that has been positive for the truth-friendliness of this platform,” Buterin said.
According to him, users frequently invoke Grok expecting confirmation, only to be contradicted instead — a dynamic he believes meaningfully improves discourse on the platform.
Why Grok Changes the Dynamic on X
Buterin emphasized that Grok’s responses are not visible in advance, which prevents users from selectively engaging only when they expect agreement. This uncertainty forces genuine interaction rather than confirmation-seeking behavior.
“I’ve seen many situations where someone calls on Grok expecting their crazy political belief to be confirmed and Grok comes along and rugs them,” he said.
Because of this, Buterin believes a strong case can be made that Grok represents a “net improvement” to X, particularly when compared to low-quality third-party AI content that often floods social platforms.
Concerns Around Bias and Centralized Control
Despite the positives, Buterin acknowledged serious concerns about how Grok is trained and fine-tuned. Since the chatbot learns from a limited set of inputs — including views associated with its creator, Elon Musk — there is an inherent risk of bias.
Those concerns were highlighted last month when Grok produced bizarre responses praising Musk’s athleticism and even suggesting he could be resurrected faster than Jesus Christ. Musk later blamed the incident on “adversarial prompting.”
The episode reignited debate within the crypto and AI communities about centralized AI models and their susceptibility to manipulation, bias and hallucinations.
AI Bias as an Industry-Wide Risk
Kyle Okamoto, CTO of decentralized cloud platform Aethir, warned that AI systems controlled by a single entity risk turning bias into institutionalized knowledge.
“When the most powerful AI systems are owned, trained and governed by a single company, you create conditions for algorithmic bias to become institutionalized knowledge,” Okamoto said.
He added that once AI-generated viewpoints are presented as objective facts, bias stops being a technical flaw and becomes core logic that is replicated at scale.
With Grok built by Musk’s AI company xAI, and with AI tools now used by more than one billion people globally, the potential impact of incorrect or misleading outputs is enormous.
Buterin noted that while Grok has problems, it performs better than much of the AI-generated “slop” circulating online. However, Grok is far from unique in its shortcomings.
OpenAI’s ChatGPT has faced repeated criticism for biased responses and factual inaccuracies, while Character.ai is currently under scrutiny following allegations involving harmful interactions with a minor.
Together, these cases highlight a broader issue: AI chatbots across the industry still struggle with accuracy, governance and safety — reinforcing calls from crypto leaders for decentralization, transparency and better alignment mechanisms.