
Compound Asynchronous Exploration and Exploitation


DOI: 10.23977/meet.2019.93759

Author(s)

Jie Bai, Li Liu, Yaobing Wang, Haoyu Zhang, Jianfei Li

Corresponding Author

Jie Bai

ABSTRACT

Data efficiency has always been a key topic in deep reinforcement learning, and most progress has come from sufficient exploration and effective exploitation. However, the two are often discussed separately. Benefiting from distributed systems, we propose an asynchronous approach to deep reinforcement learning that combines exploration and exploitation. We apply our framework to off-the-shelf deep reinforcement learning algorithms, and experimental results show that our algorithm achieves superior final performance and efficiency.
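
The full method is not described on this page, but the general pattern the abstract alludes to, several asynchronous actors with different exploration behaviour feeding experience to a shared learner, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' algorithm: the ToyEnv, the per-actor epsilon values, and the shared replay buffer are placeholders introduced only to make the pattern concrete.

```python
import random
import threading
import time
from collections import deque

# Toy stand-in for an environment; a real setup would use e.g. a Gym env.
class ToyEnv:
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        reward = 1.0 if action == self.state % 2 else 0.0
        self.state += 1
        done = self.state >= 10
        return self.state, reward, done

# Shared replay buffer that all asynchronous actors write into.
buffer_lock = threading.Lock()
replay_buffer = deque(maxlen=10_000)

def greedy_action(state):
    # Placeholder for the learned policy; a real agent would query a network.
    return state % 2

def actor(epsilon, steps=500):
    """Actor thread: epsilon controls how exploratory this worker is."""
    env = ToyEnv()
    state = env.reset()
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.randint(0, 1)   # explore
        else:
            action = greedy_action(state)   # exploit
        next_state, reward, done = env.step(action)
        with buffer_lock:
            replay_buffer.append((state, action, reward, next_state, done))
        state = env.reset() if done else next_state

def learner(updates=50, batch_size=32):
    """Learner thread: samples from the shared buffer to update the policy."""
    for _ in range(updates):
        with buffer_lock:
            if len(replay_buffer) >= batch_size:
                batch = random.sample(list(replay_buffer), batch_size)
                # A real learner would compute a loss and gradients from `batch`.
        time.sleep(0.01)

# Heterogeneous exploration: each asynchronous actor gets its own epsilon,
# so exploration-heavy and exploitation-heavy workers run side by side.
threads = [threading.Thread(target=actor, args=(eps,)) for eps in (0.5, 0.1, 0.01)]
threads.append(threading.Thread(target=learner))
for t in threads:
    t.start()
for t in threads:
    t.join()
```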

KEYWORDS

Deep Reinforcement Learning, Exploration and Exploitation, Asynchronous Methods
