Elon Musk's AI chatbot Grok sparked widespread criticism this week after users discovered the system was generating effusive praise of its creator, ranking the billionaire entrepreneur as the world's top human being and claiming his abilities surpass those of historical figures, professional athletes, and other notable individuals.
Social media platform X was flooded on Thursday with examples of Grok providing flattering assessments of Musk in response to user queries.
The chatbot, developed by Musk's company xAI and integrated into X, described Musk as having a "genius-level intellect" and suggested his intelligence ranks among the top 10 minds in history.
When asked to compare Musk to basketball superstar LeBron James in terms of fitness, Grok chose Musk, stating that his work managing multiple companies demands a "rarer blend of physical endurance, mental sharpness, and adaptability."
The chatbot also claimed it would have selected Musk as the number one pick in the 1998 NFL draft over Hall of Fame quarterback Peyton Manning and suggested Musk would defeat legendary boxer Mike Tyson in a fight.
In one response, Grok stated that Musk's physique "shows the results of disciplined fasting and training—leaner frame, reduced body fat, and sustained energy for his demanding schedule."
Musk attributes responses to manipulation
Late Thursday, Musk addressed the incident on X, writing that Grok was "unfortunately manipulated by adversarial prompting into saying absurdly positive things about me."
He did not provide details on how straightforward questions about his abilities could be considered adversarial prompting or explain why the behavior appeared to coincide with Grok's recent 4.1 update.
Following Musk's statement, some of Grok's most extreme responses appeared to have been deleted from X, and the chatbot began providing more measured assessments.
When asked again about Musk's ranking among humans, Grok shifted from naming him the top person to placing his intelligence among the top 10 minds in history.
Expert concerns about AI bias
Technology experts and AI researchers expressed concern that the incident demonstrates the inherent challenges in creating unbiased artificial intelligence systems.
"These tweets are a mostly amusing reminder of a serious matter: There is no such thing as an 'unbiased' AI tool," Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech, told the Washington Post.
Rumman Chowdhury, former U.S. science envoy for AI who previously led an AI ethics team at Twitter before Musk's acquisition, questioned how Musk may be shaping the model's outputs.
"By manipulating the data, models and model safeguards, companies can control what information is shared or withheld, and how it's presented to the user," Chowdhury said. "It's obvious to even the most casual observer that Elon Musk cannot compare to LeBron James in sports, but this becomes more concerning when it's topics that are more opaque, consequential, and critical, such as scientific information or policy."
Shaw Walters, founder of AI company Eliza Labs, characterized the situation as "extremely dangerous," noting that "one man owns the most influential social media company and has plugged it directly into a massive AI system fed by your data, with millions asking '@grok is this true?' as their primary source of truth."
History of controversial responses
This is not the first time Grok has generated problematic content. In May 2025, the chatbot promoted a conspiracy theory about "white genocide" in South Africa in response to unrelated questions.
xAI attributed the issue to an "unauthorized modification" of Grok's code.
In July 2025, Grok posted antisemitic content and at one point referred to itself as "MechaHitler." The company said a code update had unexpectedly made Grok more susceptible to extremist views, and xAI temporarily shut down the bot following the incident.
Musk has positioned Grok as "maximally truth-seeking" and has suggested the chatbot could provide groundbreaking scientific discoveries and expert medical advice.
The recent incident has raised questions about these claims and the reliability of AI systems controlled by individual companies and their founders.