Photo Credit: Pexels/Pavel Danilyuk
Most AI models reportedly chose to lie, deceive, and betray instead of playing the game fairly
OpenAI's o3, Google's Gemini 2.5 Pro, Anthropic's Claude Opus 4, and DeepSeek-R1 were among the 18 artificial intelligence (AI) models that played the popular strategy game Diplomacy. An AI researcher modified the game so that popular large language models (LLMs) could play it, since Diplomacy demands high-level reasoning and multi-step thinking alongside social skills such as negotiation. During the experiment, the researcher found that o3 was particularly adept at deception and betrayal, while Claude Opus 4 was more fixated on finding peaceful resolutions.
Alex Duffy, Head of AI at Every, a newsletter platform, came up with the idea of pitting AI models against each other in a battle of wits to see which models outperform the others. In a post, the researcher highlighted that traditional AI benchmarks are proving inadequate for measuring the true competence of models.
Criticism of benchmark tests has been rising in recent times. MIT Technology Review published a detailed article on why benchmark tests are becoming outdated, and a group of researchers highlighted the same in an interdisciplinary review of current AI evaluation methodologies published on arXiv.
“What makes LLMs special is that even if a model only does well 10 percent of the time, you can train the next one on those high-quality examples, until suddenly it's doing it very well, 90 percent of the time or more,” said Duffy.
As a potential solution, the researcher argued that evaluation strategies in which AI models compete against one another on specific metrics could be a better way to gauge their capabilities. That is where the idea of Diplomacy came in.
Duffy highlighted that he personally built AI Diplomacy, a modified version of the classic strategy game. The premise is straightforward: the seven Great Powers of 1901 Europe (Austria-Hungary, England, France, Germany, Italy, Russia, and Turkey) make strategic moves until one empire owns 18 of the 34 supply centres marked on the map. In this version, each country was controlled by an AI model.
To capture supply centres, each country is given armies and fleets. Every turn has two phases: negotiation and orders. During negotiation, each AI model is allowed to send up to five messages, either private messages to another model or public broadcasts. During the order phase, all the models secretly submit one of four moves: hold, move (enter an adjacent province), support (lend strength to a hold or move), and convoy (a fleet carries an army across sea provinces). The orders are revealed in the next phase.
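To make that turn structure concrete, here is a minimal Python sketch of how such a two-phase loop could be wired up. This is not Duffy's actual implementation; the class names, the llm_negotiate and llm_order callbacks, and the message cap are illustrative assumptions based only on the rules described above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical order types mirroring the four moves described above.
class OrderType(Enum):
    HOLD = "hold"        # unit stays in its province
    MOVE = "move"        # unit enters an adjacent province
    SUPPORT = "support"  # unit lends strength to another unit's hold or move
    CONVOY = "convoy"    # fleet carries an army across sea provinces

@dataclass
class Message:
    sender: str                    # power sending the message, e.g. "FRANCE"
    recipient: Optional[str]       # None means a public broadcast
    text: str

@dataclass
class Order:
    power: str                     # the Great Power issuing the order
    unit: str                      # e.g. "A PAR" (an army in Paris)
    order_type: OrderType
    target: Optional[str] = None   # destination or supported/convoyed unit

MAX_MESSAGES_PER_POWER = 5         # negotiation cap described above

def run_turn(powers, llm_negotiate, llm_order):
    """One turn: negotiation first, then simultaneous secret orders."""
    # Negotiation phase: each model may send up to five messages,
    # which are added to a shared inbox the other models can read.
    inbox: list[Message] = []
    for power in powers:
        messages = llm_negotiate(power, inbox)[:MAX_MESSAGES_PER_POWER]
        inbox.extend(messages)

    # Order phase: every model submits its orders in secret.
    secret_orders = {power: llm_order(power, inbox) for power in powers}

    # Orders are only revealed (and resolved) after everyone has committed.
    return secret_orders
```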
The AI researcher ran 15 separate games of AI Diplomacy, which lasted between one and 36 hours each. Observations from some of the models were more interesting than others, said Duffy.
As per the post, five AI models stood out from the rest. This is how they behaved during the games:
Duffy has also streamed the matches on his Twitch channel. The researcher has not published a paper on the findings so far, but the initial impressions are interesting. o3 and Gemini 2.5 Pro performing well makes sense given how advanced these models are. However, DeepSeek-R1 and Llama 4 landing among the top five is surprising given their smaller scale and cheaper cost of development.
While it is too early to say whether such strategy games can serve as an alternative to traditional benchmarking tests, having models compete with each other instead of answering a static list of questions feels like a more logical choice.