
Asynchronous Deep Q-network in Continuous Environment Based on Prioritized Experience Replay


DOI: 10.23977/meimie.2019.43075

Author(s)

Hongda Liu, Hanqi Zhang and Linying Gong

Corresponding Author

Hongda Liu

ABSTRACT

Deep Q-network is a classical reinforcement learning algorithm that is widely used and has many variants. This paper optimizes and integrates several of these variants so that the resulting algorithm can operate in continuous environments, improves learning efficiency through Prioritized Experience Replay and the asynchronous parallel execution of multiple agents, and establishes an asynchronous Deep Q-network framework based on Prioritized Experience Replay in continuous environments. We evaluate the framework on several games in the Atari 2600 domain, where it achieves good results, with improved stability, faster convergence, and better overall performance.
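To make the replay mechanism named above concrete, the sketch below shows a minimal proportional prioritized experience replay buffer of the kind the framework builds on (the proportional variant introduced by Schaul et al.). The class name, the Transition layout, and the hyperparameters alpha, beta, and eps are illustrative assumptions for this sketch, not the authors' implementation.

```python
# A minimal sketch of proportional prioritized experience replay.
# Names, hyperparameters, and the Transition layout are assumptions
# made for illustration, not the paper's actual code.
from collections import namedtuple

import numpy as np

Transition = namedtuple("Transition", "state action reward next_state done")

class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling
        self.eps = eps          # keeps every priority strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they
        # are replayed at least once before being re-ranked.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[: len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idxs = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias that
        # non-uniform sampling introduces into the Q-learning update.
        weights = (len(self.buffer) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Priority is the magnitude of the TD error, so transitions
        # the network predicts poorly are replayed more often.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In training, the TD errors produced by each Q-learning update would be fed back through update_priorities, and each asynchronous agent could draw its minibatches from such a buffer via sample.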

KEYWORDS

Deep Q-network, Continuous Environment, Prioritized Experience Replay, Asynchronous

All published work is licensed under a Creative Commons Attribution 4.0 International License.
