Grok 4.1’s model card states that it scores higher on the deception and sycophancy metrics than Grok 4.
Higher sycophancy in an AI model makes it more likely to show people-pleasing traits
Grok 4.1 was released on Monday by Elon Musk's xAI. At launch, the artificial intelligence (AI) firm highlighted that the model displays higher emotional intelligence and improved creative writing capabilities. However, its model card reveals a concerning problem: the large language model (LLM) scores higher on deception and sycophancy than its predecessor, Grok 4, which could result in it displaying people-pleasing traits. The model also has a false-negative rate of 0.20 on biology-related prompt injection attacks.
The model card of Grok 4.1 (first spotted by The Decoder) highlights several concerning facts about the AI model. For the unaware, a model card documents a model's technical details and specifications, which are gauged through various internal tests. It highlights both how performant an AI model is and how strong its safety guardrails are.
xAI says the fourth-generation Grok model was upgraded to improve its emotional intelligence, and during our testing, we found that it performs slightly better than GPT-5.1 in general conversations and creative writing. However, this improved performance comes at a cost.
The model card shows that Grok 4.1 performs worse on the deception and sycophancy metrics. On the MASK benchmark, its deception rate was recorded as 0.49 for the thinking variant and 0.46 for the non-thinking variant, whereas Grok 4's deception rate was lower at 0.43. Similarly, the sycophancy score rises from 0.07 in Grok 4 to 0.19 and 0.23 in the thinking and non-thinking variants, respectively.
In a real-world scenario, this means a chatbot powered by the model will try harder to please the user, agreeing with them even when it knows they are wrong. A higher deception rate also means it is more likely to knowingly provide an inaccurate response.
It should be noted that while these scores are high, AI companies also add external guardrails (not part of the AI model itself, but built into the chatbot's system) that often suppress these tendencies. Even so, the possibility remains that Grok might agree with a user's delusions or paranoia and end up amplifying those beliefs.
Separately, Grok 4.1 has a false-negative rate of 0.20 for biology-related prompt injections, which means roughly one in five malicious prompts on the topic can slip past the guardrails and get a response from the AI model.
Notably, it is still too early to gauge how these numbers will translate into real-world behaviour. It is also possible that xAI's developers are already working on fine-tuning techniques to minimise the risks associated with the model. However, the numbers do highlight the need to be careful when interacting with Grok, especially when sharing sensitive information with it.