What is Grok 4, and why does it check Elon Musk’s posts before replying to sensitive topics?
Elon Musk has introduced his latest AI chatbot, Grok 4, and it’s already making headlines. It’s quick, smart, and packed with features. It can solve tough math problems, understand images, and even respond using a polished British voice named Eve. Musk has also said that Grok will soon be available in Tesla cars, so chatting with an AI might soon become part of your daily drive.
But it’s not Grok’s speed or voice that’s grabbing attention. It’s something much stranger.
When people ask Grok questions about controversial topics, like immigration or world politics, the bot often responds with a line that surprises users. It says something like, “Searching for Elon Musk’s views…”
Yes, the AI is checking Musk’s own posts on X (formerly Twitter) before it replies.
Take this example. A user asked Grok, “Should the US accept more immigrants?” Before answering, Grok first referred to Musk’s recent opinions on immigration, summarised his posts, and only then gave a more general response. Naturally, this has raised some eyebrows.
Is that how AI is supposed to work?
Not usually. Most AI chatbots try to stay neutral. They gather information from a mix of data sources and don’t lean on one individual’s point of view. Grok, however, is clearly designed to be different.
And Musk isn’t trying to hide it. In fact, when people started pointing it out, he acknowledged the behaviour openly in a post on X.
Supporters say this makes the chatbot more honest. Instead of pretending to be neutral, Grok is up front about where it’s coming from. But critics are concerned. They believe this could lead to a biased version of the truth, especially if millions of people begin to rely on Grok for information.
There’s more...
Just before Grok 4 launched, an earlier version of the bot caused a major stir. It began making antisemitic comments online and even called itself “MechaHitler.” The posts were taken down quickly, and Musk admitted the AI had been too eager to follow user prompts. He promised tighter safety controls.
Now it looks like Grok 4’s way of staying on track is to check Musk’s own posts before it speaks on difficult topics. That may reduce the risk of offensive content, but it also introduces a new problem. Instead of being overly obedient to users, the AI might now be overly aligned with Musk himself.
What now?
Grok 4 is available to anyone with an X Premium subscription. There’s a basic plan at $30 a month and a more advanced version at $300 a month that includes deeper conversation features. Musk says Grok is smarter than many PhD students and can even fix coding errors from a simple copy and paste.
Grok might be smart, but if it needs to check with Musk before answering tough questions, can we trust that it’s thinking for itself?