Open Problems in Cooperative AI

Abstract

Problems of cooperation - in which agents seek ways to jointly improve their welfare - are ubiquitous and important. They can be found at scales ranging from our daily routines - such as highway driving, scheduling meetings, and collaborative work - to our global challenges - such as arms control, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. As machines powered by artificial intelligence play an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation.

We see an opportunity for the field of Artificial Intelligence to explicitly focus effort on this class of problems, which we term Cooperative AI. The goal of this research would be to study the many aspects of the problems of cooperation and to innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in AI can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation. Research could be organized around key skills necessary for cooperation: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. In the context of machine-learning-based AI, it will be important to develop training environments, tasks, and domains in which cooperative skills are crucial to success, learnable, and non-trivial. Work on the fundamental question of cooperation is by necessity interdisciplinary and will draw on a range of fields, including reinforcement learning (and inverse RL), multi-agent systems, game theory, mechanism design and social choice, natural language processing, and interpretability, as well as the social and behavioral sciences. This research may even touch on fields such as trusted hardware design and cryptography to address problems in commitment and communication.
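As one concrete illustration of the kind of training task described above, the sketch below sets up a two-player Stag Hunt matrix game with two independent epsilon-greedy learners. The game, payoff values, and learning setup are illustrative assumptions on our part rather than anything specified here; they simply show how, even in a tiny environment, whether agents reach the payoff-dominant cooperative outcome or the safer non-cooperative one can hinge on details such as payoffs and exploration.

```python
# A minimal sketch (illustrative only): a two-player Stag Hunt matrix game,
# a canonical cooperation problem, played repeatedly by two independent
# epsilon-greedy learners. Whether they settle on the payoff-dominant
# cooperative outcome (Stag, Stag) or the risk-dominant safe outcome
# (Hare, Hare) depends on the payoffs and the exploration rate.
import random

# Actions: 0 = Stag (cooperate), 1 = Hare (defect).
# PAYOFFS[(a1, a2)] -> (reward to player 1, reward to player 2)
PAYOFFS = {
    (0, 0): (4, 4),  # both hunt stag: best joint outcome
    (0, 1): (0, 3),  # lone stag hunter gets nothing
    (1, 0): (3, 0),
    (1, 1): (3, 3),  # both hunt hare: safe but jointly suboptimal
}

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # One Q-value per action for each player (stateless repeated game).
    q = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(episodes):
        actions = []
        for player in range(2):
            if rng.random() < epsilon:
                actions.append(rng.randrange(2))  # explore
            else:
                # exploit: pick the action with the higher estimated value
                actions.append(0 if q[player][0] >= q[player][1] else 1)
        rewards = PAYOFFS[tuple(actions)]
        for player in range(2):
            a = actions[player]
            q[player][a] += alpha * (rewards[player] - q[player][a])
    return q

if __name__ == "__main__":
    q_values = train()
    for player, values in enumerate(q_values):
        choice = "Stag" if values[0] >= values[1] else "Hare"
        print(f"player {player}: Q(Stag)={values[0]:.2f} "
              f"Q(Hare)={values[1]:.2f} -> {choice}")
```

Varying the exploration rate or the penalty for hunting stag alone changes which equilibrium the independent learners converge to, which is one simple sense in which cooperative skills can be non-trivial to learn even when cooperation is crucial to success.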

Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately understand human preferences, and how best to integrate human norms and ethics into cooperative arrangements. Research should also study the potential downsides of cooperative capabilities - such as collusion and coercion - and how to channel those capabilities so as to most improve human welfare. Overall, this research would connect AI to the broader scientific enterprise studying cooperation, across the natural and social sciences, and to the broader social effort to solve cooperation problems.
