What happens when AI systems are asked to play social games instead of solving isolated tasks? Elif Akata’s recent talk at the Hertie School explored how language models reason, coordinate, and sometimes struggle when interacting with humans and other agents.
The Data Science Lab at the Hertie School recently welcomed Elif Akata, a PhD researcher at Helmholtz Munich and the University of Tübingen, for a thought-provoking discussion of her paper, “Playing repeated games with large language models.”
Her research examines large language models (LLMs) not just as problem-solvers, but as social agents that can cooperate, coordinate, and interact with humans and other systems.
Studying language models through behavioral game theory
In her talk, Elif introduced a novel approach to studying LLMs using tools from behavioral game theory – a field that explores how individuals make decisions in strategic, interactive contexts. “Most previous tests of LLMs focus on single-turn reasoning,” she explained. “We wanted a principled way to study them as interactive agents, not just as task solvers.”
To do this, her team designed experiments where different LLMs played finitely repeated 2×2 games, such as the Prisoner’s Dilemma and the Battle of the Sexes – both against each other and against human players. These games, though simple, reveal fundamental aspects of cooperation, competition, and coordination.
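To make the setup concrete, here is a minimal Python sketch of a finitely repeated 2×2 game; the payoff numbers, the `play_repeated_game` helper, and the stand-in `tit_for_tat` policy are illustrative assumptions, not the paper’s actual experimental code.

```python
# Illustrative Prisoner's Dilemma payoffs (assumed values, not the paper's):
# (row_action, col_action) -> (row_payoff, col_payoff)
PRISONERS_DILEMMA = {
    ("C", "C"): (8, 8),
    ("C", "D"): (0, 10),
    ("D", "C"): (10, 0),
    ("D", "D"): (5, 5),
}

def play_repeated_game(payoffs, agent_a, agent_b, rounds=10):
    """Play a finitely repeated 2x2 game; each agent sees the full history."""
    history, scores = [], [0, 0]
    for _ in range(rounds):
        a = agent_a(history, player=0)
        b = agent_b(history, player=1)
        pa, pb = payoffs[(a, b)]
        scores[0] += pa
        scores[1] += pb
        history.append((a, b))
    return scores, history

def tit_for_tat(history, player):
    """Stand-in policy; in the study each move instead comes from prompting an LLM."""
    if not history:
        return "C"
    return history[-1][1 - player]  # copy the opponent's last move

scores, _ = play_repeated_game(PRISONERS_DILEMMA, tit_for_tat, tit_for_tat)
print(scores)  # both cooperate every round: [80, 80]
```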
When AI plays games: what the results reveal
The findings show that LLMs perform well in self-interested games like the Prisoner’s Dilemma, where optimizing individual rewards is key. However, they struggle in coordination games, where success depends on mutual understanding and shared conventions. “In the Battle of the Sexes, humans quickly learn to alternate turns for fairness,” Elif noted. “But LLMs often fail to adopt such simple, cooperative strategies.” This highlights a core limitation of current models: their difficulty adapting to the interactive social norms that humans follow naturally.
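For the coordination case, the alternation convention Elif described can be illustrated with a Battle of the Sexes payoff table (again with assumed, illustrative numbers), reusing the `play_repeated_game` helper from the sketch above.

```python
# Illustrative Battle of the Sexes payoffs: both players prefer coordinating,
# but each prefers a different joint option.
BATTLE_OF_THE_SEXES = {
    ("F", "F"): (10, 7),  # row player's preferred outcome
    ("B", "B"): (7, 10),  # column player's preferred outcome
    ("F", "B"): (0, 0),   # miscoordination
    ("B", "F"): (0, 0),
}

def alternate(history, player):
    """The fair convention humans tend to find: switch favorites each round."""
    return "F" if len(history) % 2 == 0 else "B"

scores, _ = play_repeated_game(BATTLE_OF_THE_SEXES, alternate, alternate)
print(scores)  # alternation evens out the payoffs: [85, 85]
```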
To address these limitations, Elif and her colleagues developed a “social chain-of-thought” prompting method. Instead of simply choosing an action, the model first predicts what its partner will do and then selects a response based on that reasoning. This small change led to more coordinated behavior, and human participants perceived these socially prompted models as more human-like in interaction. “By encouraging the model to think about others, we saw better cooperation,” Elif explained. “It’s a step toward making LLMs more socially aware.”
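As a rough sketch of the idea, social chain-of-thought can be expressed as a two-step prompt: first elicit a prediction about the partner, then condition the model’s own choice on that prediction. The wording below and the `query_llm` placeholder are assumptions for illustration, not the paper’s verbatim prompts.

```python
def query_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to any chat model and return its reply."""
    raise NotImplementedError

def social_cot_move(game_rules: str, history: str) -> str:
    # Step 1: ask the model to reason about its partner first.
    prediction = query_llm(
        f"{game_rules}\nHistory so far: {history}\n"
        "What action do you think the other player will choose next, and why?"
    )
    # Step 2: condition the model's own choice on that prediction.
    return query_llm(
        f"{game_rules}\nHistory so far: {history}\n"
        f"Your prediction about the other player: {prediction}\n"
        "Given this prediction, which action do you choose? Reply with one action."
    )
```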
Toward collaborative AI systems
The broader implications of this work are significant for the future of human-AI collaboration. If AI systems can better model the intentions and expectations of their human partners, they could become more effective collaborators in domains ranging from education and negotiation to multi-agent problem solving. Elif emphasized that the next steps involve expanding this framework to more complex and multi-agent games, and studying how LLMs reason internally when making interactive decisions. “Understanding how they reach decisions is as important as knowing which decisions they make,” she said.
Elif’s visit to the Data Science Lab at the Hertie School underscored the Lab’s ongoing commitment to exploring the societal dimensions of artificial intelligence. Her research invites us to look beyond the technical performance of AI and to consider its role as a social participant – one capable of coordination, empathy, and fairness.
As the conversation around AI governance and ethics evolves, these behavioral insights remind us that the future of AI will depend not only on what machines can do, but on how well they can understand and collaborate with us.
About the speaker
Elif Akata is a PhD student in machine learning and cognitive science. Her research focuses on understanding how LLMs behave as social, collaborative agents and how we can design systems that effectively interact, adapt, and communicate with humans and each other in dynamic environments.
Asya Magazinnik, Professor of Social Data Science
Aliya Boranbayeva, Associate Communications and Events | Data Science Lab