Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning
Abstract
Communicating in natural language is a powerful tool in multi-agent settings, as it enables independent agents to share information in partially observable settings and allows zero-shot coordination with humans. However, most prior works are limited as they either rely on training with large amounts of human demonstrations or lack the ability to generate natural and useful communication strategies. In this work, we train language models to have productive discussions about their environment in natural language without any human demonstrations. We decompose the communication problem into listening and speaking. Our key idea is to leverage the agent's goal of predicting useful information about the world as a dense reward signal that guides communication. Specifically, we improve a model's listening skills by training it to predict information about the environment based on discussions, and we simultaneously improve a model's speaking skills with multi-agent reinforcement learning by rewarding messages based on their influence on other agents. To investigate the role and necessity of communication in complex social settings, we study an embodied social deduction game based on Among Us, where the key question to answer is the identity of an adversarial imposter. We analyze emergent behaviors that arise from our technique, such as accusing suspects and providing evidence, and find that it enables strong discussions, doubling the win rates compared to standard RL. We release our code and models at https://socialdeductionllm.github.io/
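The listening/speaking decomposition described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the representation of beliefs as per-agent probability dictionaries, and the averaging scheme are all illustrative assumptions. The listening reward scores how much probability an agent assigns to the true imposter after a discussion, and the speaking reward credits a message by how much it shifts the other agents' beliefs toward the true imposter.

```python
import math

def listening_reward(pred_probs, true_impostor):
    """Dense listening reward (illustrative): log-likelihood that the
    listener's prediction assigns to the true imposter's identity."""
    return math.log(pred_probs[true_impostor])

def speaking_reward(beliefs_before, beliefs_after, true_impostor):
    """Illustrative speaking reward: average increase, across the other
    agents, in the probability they assign to the true imposter after
    hearing the speaker's message."""
    gain = 0.0
    for before, after in zip(beliefs_before, beliefs_after):
        gain += after[true_impostor] - before[true_impostor]
    return gain / len(beliefs_before)

# Hypothetical usage: two listeners update their suspicions after a message.
before = [{"red": 0.2, "blue": 0.8}, {"red": 0.3, "blue": 0.7}]
after = [{"red": 0.6, "blue": 0.4}, {"red": 0.5, "blue": 0.5}]
msg_reward = speaking_reward(before, after, "red")  # mean belief gain: 0.3
```

In this sketch the speaking reward is computed from other agents' belief shifts, which is what makes it a multi-agent signal: a message is only rewarded if it actually changes listeners' predictions toward the truth.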
Community
Training RWKV models with Multi-Agent RL to play Among Us, with no human demonstrations. See https://socialdeductionllm.github.io/
thanks, looks very interesting.
Using natural language as the communication protocol always leaves room to quickly add a human as an agent in a working multi-agent system.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Speaking the Language of Teamwork: LLM-Guided Credit Assignment in Multi-Agent Reinforcement Learning (2025)
- Multi-agent KTO: Reinforcing Strategic Interactions of Large Language Model in Language Game (2025)
- QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search (2025)
- Improving Multi-Step Reasoning Abilities of Large Language Models with Direct Advantage Policy Optimization (2024)
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling (2025)
- VLM-RL: A Unified Vision Language Models and Reinforcement Learning Framework for Safe Autonomous Driving (2024)
- Learning Strategic Language Agents in the Werewolf Game with Iterative Latent Space Policy Optimization (2025)