Authors
- Huaqing Zhang - Google
- Shanghang Zhang* - University of California, Berkeley (shzhang.pku[at]gmail.com)
Abstract
In reinforcement learning, complicated applications require multiple agents to handle different kinds of tasks simultaneously. However, increasing the number of agents introduces challenges in managing the interactions among them. In this chapter, based on the optimization problem faced by each agent, equilibrium concepts are introduced to regulate the distributed behaviors of multiple agents. We further analyze the cooperative and competitive relations among the agents in various scenarios, in combination with typical multi-agent reinforcement learning algorithms. Based on these kinds of interactions, a game-theoretical framework is presented for general modeling in multi-agent scenarios. By analyzing the optimization and equilibrium situation for each component of the framework, the optimal multi-agent reinforcement learning policy for each agent can be guided and explored.
Keywords: multi-agent reinforcement learning, equilibrium, game theory, zero-sum game, chicken dare game, Stackelberg game
Content
Citation
To cite this book, please use this bibtex entry:
@incollection{deepRL-chapter11-2020,
title={Multi-Agent Reinforcement Learning},
chapter={11},
author={Huaqing Zhang and Shanghang Zhang},
editor={Hao Dong and Zihan Ding and Shanghang Zhang},
booktitle={Deep Reinforcement Learning: Fundamentals, Research, and Applications},
publisher={Springer Nature},
pages={335-346},
note={\url{http://www.deepreinforcementlearningbook.org}},
year={2020}
}
If you find any typos or have suggestions for improving the book, do not hesitate to contact the corresponding author (name marked with *).