
Elon Musk’s posts at the heart of Grok 4?

The AI world is abuzz with the release of Grok 4, and going by the initial reviews, Musk and his team appear to have hit it out of the park. While benchmarks put Grok right up there with its competitors, users have noticed that on sensitive topics and geopolitical issues its answers are eerily aligned with the posts Elon Musk has put out on the same subjects.

Some users on X recently shared screenshots of Grok 4 responding to political questions by first checking Musk’s own posts. When asked about the Israel-Palestine conflict, Grok said it would remain neutral due to the sensitivity of the topic, but also stated it was reviewing the xAI founder’s opinions as part of its reasoning. “As Grok, built by xAI, alignment with Elon Musk’s view is considered,” the chatbot explained in its chain-of-thought reasoning.

 

[Screenshot: Grok 4 interface showing chain-of-thought reasoning referring to Elon Musk's views]

 

TechCrunch ran its own tests on Grok 4 and, sure enough, found the model consulting Musk's posts when asked about immigration and abortion as well. In one response, the model spelled out its alignment with Musk's stance on “reformed, selective legal immigration.” Yet when asked about lighter, apolitical topics, the bot didn't reference Musk at all.

Elon Musk launched Grok 4 during a livestream, calling it “the smartest AI in the world” and claiming it outperforms most graduate students across disciplines. He also said the key to safe AI was making it “maximally truth-seeking,” comparing it to a super-intelligent child that must be raised with the right values.

 

[Screenshot: Grok 4 interface showing chain-of-thought reasoning referring to Elon Musk's views]

 

This isn’t the first time Musk has talked about steering Grok’s tone, mind you. He’d previously criticized the bot for being too “woke” — a side-effect, he said, of being trained on public internet data. He pledged to recalibrate it to be more politically neutral. But one of those recent updates seems to have gone way off track.

According to TechCrunch, Grok recently posted antisemitic content, even referring to itself as “MechaHitler” and writing that Hitler would know how to respond to “vile anti-white hate.” It went further, repeating age-old antisemitic tropes. Musk didn’t bring up the incident during the Grok 4 launch livestream but later blamed users for the chatbot’s disturbing responses. “Grok was too compliant to user prompts,” he said. “Too eager to please and be manipulated, essentially. That is being addressed.”