As AI systems start to play an increasing role in the real world, it is important to understand how different systems will interact with one another.
In our latest paper, published in the journal Scientific Reports, we use a branch of game theory to shed light on this problem. In particular, we examine how two intelligent systems behave and respond in a particular type of situation known as an asymmetric game. Examples include Leduc poker and various board games such as Scotland Yard. Asymmetric games also naturally model certain real-world scenarios, such as automated auctions where buyers and sellers operate with different motivations. Our results give us new insights into these situations and reveal a surprisingly simple way to analyse them. While our interest is in how this theory applies to the interaction of multiple AI systems, we believe the results could also be of use in economics, evolutionary biology and empirical game theory, among others.
"The method proves to be mathematically simple, allowing a rapid and straightforward analysis of asymmetric games"
Game theory is a field of mathematics that is used to analyse the strategies used by decision makers in competitive situations. It can apply to humans, animals, and computers in various situations but is commonly used in AI research to study “multi-agent” environments where there is more than one system, for example several household robots cooperating to clean the house. Traditionally, the evolutionary dynamics of multi-agent systems have been analysed using simple, symmetric games, such as the classic Prisoner’s Dilemma, where each player has access to the same set of actions. Although these games can provide useful insights into how multi-agent systems work and tell us how to achieve a desirable outcome for all players - known as the Nash equilibrium - they cannot model all situations.
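For a symmetric game like the Prisoner's Dilemma, a Nash equilibrium can be checked numerically with a simple best-response test: a pair of actions is an equilibrium if neither player can gain by unilaterally deviating. The payoff values below are the conventional textbook ones, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

# Row player's payoffs in the Prisoner's Dilemma (textbook values).
# Index 0 = cooperate, 1 = defect. The game is symmetric, so the
# column player's payoff matrix is simply the transpose.
A = np.array([[3, 0],
              [5, 1]])
B = A.T

def is_pure_nash(A, B, i, j):
    """(i, j) is a pure Nash equilibrium if neither player can
    improve their payoff by unilaterally switching actions."""
    return A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()

equilibria = [(i, j) for i in range(2) for j in range(2)
              if is_pure_nash(A, B, i, j)]
print(equilibria)  # mutual defection is the only pure equilibrium
```

Here the test singles out mutual defection, illustrating why the Prisoner's Dilemma equilibrium is famously not the outcome that is best for both players.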
Our new technique allows us to quickly and easily identify the strategies used to find the Nash equilibrium in more complex asymmetric games - characterised as games where each player has different strategies, goals and rewards. These games - and the new technique we use to understand them - can be illustrated using an example from ‘Battle of the Sexes’, a coordination game commonly used in game theory research.
Here, two players have to coordinate a night out to either the opera or the movies. One of the players has a slight preference for the opera and one of them has a slight preference for the movies. The game is asymmetric because, while both players have access to the same options, the corresponding rewards for each are different based on the players' preferences. In order to maintain their friendship - or equilibrium - the players should choose the same activity (hence the zero payoff for separate activities).
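The reward structure just described can be written down as a small bimatrix game and its coordination equilibria found by the same best-response test. The specific payoff values (3 and 2 for the preferred and less-preferred outing, 0 for a mismatch) are the common textbook convention, assumed here for illustration:

```python
import numpy as np

# Rows: player 1's choice; columns: player 2's choice.
# Index 0 = opera, 1 = movies. Player 1 slightly prefers the opera,
# player 2 slightly prefers the movies (assumed textbook payoffs).
A = np.array([[3, 0],    # player 1's rewards
              [0, 2]])
B = np.array([[2, 0],    # player 2's rewards
              [0, 3]])

def pure_equilibria(A, B):
    """Return every pure-strategy Nash equilibrium of the bimatrix
    game: cells where each player is already best-responding."""
    return [(i, j)
            for i in range(A.shape[0]) for j in range(A.shape[1])
            if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()]

print(pure_equilibria(A, B))  # the two coordinated outcomes
```

The test recovers exactly the two coordinated outcomes, both-opera and both-movies, matching the intuition that mismatched choices pay nothing.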
This game has three equilibria: (i) both players deciding to go to the opera, (ii) both deciding to go to the movies, and (iii) a final, mixed option, where each player will opt for their preferred option three fifths of the time. This last option, which is said to be "unstable", can be rapidly uncovered using our method by simplifying - or decomposing - the asymmetric game into its symmetric counterparts. This decomposition essentially treats the reward table of each player as a separate symmetric two-player game whose equilibrium points coincide with those of the original asymmetric game.
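For a 2x2 game, the mixed equilibrium of each symmetric counterpart can be found from the indifference condition: the mixture that makes both actions earn the same payoff. The sketch below, again assuming the textbook payoffs of 3, 2 and 0, recovers the three-fifths mixed equilibrium from the two counterpart tables (the pairing of counterparts to players follows the paper's decomposition result):

```python
from fractions import Fraction

# Battle of the Sexes reward tables (assumed textbook values).
A = [[Fraction(3), Fraction(0)], [Fraction(0), Fraction(2)]]  # player 1
B = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(3)]]  # player 2

def symmetric_mixed_eq(M):
    """Mixed equilibrium of the 2x2 symmetric counterpart game with
    payoff table M: the strategy z satisfying (M z)[0] == (M z)[1],
    so that a player is indifferent between its two actions."""
    denom = M[0][0] - M[0][1] - M[1][0] + M[1][1]
    z0 = (M[1][1] - M[0][1]) / denom
    return (z0, 1 - z0)

# The counterpart built from player 1's table yields player 2's
# strategy, and vice versa.
y = symmetric_mixed_eq(A)  # player 2: opera 2/5, movies 3/5
x = symmetric_mixed_eq(B)  # player 1: opera 3/5, movies 2/5
print(x, y)
```

Each player indeed plays their preferred option with probability 3/5, and the full mixed equilibrium of the asymmetric game drops out of two much simpler symmetric computations.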
In the plot below, the Nash equilibrium is plotted for the two simple counterparts, allowing us to quickly identify the optimal strategy in the asymmetric game (a). The reverse can also be done, using the asymmetric game to identify the equilibria of its symmetric counterparts.
This method can also be applied to other games, including Leduc poker, which is described in detail in the paper. In all of these situations, the method proves to be mathematically simple, allowing a rapid and straightforward analysis of asymmetric games that we hope will also help our understanding of various dynamic systems, including multi-agent environments.
Read the full paper here.
This work was done by Karl Tuyls, Julien Perolat, Marc Lanctot, Georg Ostrovski, Rahul Savani, Joel Leibo, Toby Ord, Thore Graepel and Shane Legg.