• Zihan Ding* - Princeton University (zhding[at]
  • Hao Dong - Peking University


This chapter introduces the main open challenges in deep reinforcement learning research and applications, including: (1) the sample efficiency problem; (2) the stability of training; (3) the catastrophic interference problem; (4) the exploration problem in certain tasks; (5) meta-learning and representation learning for the generality of reinforcement learning methods across tasks; (6) multi-agent reinforcement learning, where other agents form part of the environment; (7) sim-to-real transfer for bridging the reality gap in real-world reinforcement learning applications; and (8) large-scale reinforcement learning with parallel training frameworks to shorten wall-clock training time in practice. The chapter discusses these challenges together with potential solutions and research directions, serving as a primer for the advanced topics in the second main part of the book (Chapters 8 to 12), so as to give readers a relatively comprehensive understanding of the deficiencies of present methods, recent developments, and future directions in deep reinforcement learning.

Keywords: sample efficiency, stability, catastrophic interference, exploration, meta-learning, representation learning, generality, multi-agent reinforcement learning, sim2real, scalability


To cite this chapter, please use the following BibTeX entry:

 title={Challenges of Reinforcement Learning},
 author={Zihan Ding and Hao Dong},
 editor={Hao Dong and Zihan Ding and Shanghang Zhang},
 booktitle={Deep Reinforcement Learning: Fundamentals, Research, and Applications},
 publisher={Springer Nature},

If you find any typos or have suggestions for improving the book, do not hesitate to contact the corresponding author (marked with *).