Pre-loaded Deep-Q Learning

Tristan Falck, Elize Ehlers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper explores the potential of pre-loading deep-Q learning agents' replay memory buffers with experiences generated by preceding agents, so as to improve their initial performance. The research shows that pre-loading previously generated experience replays does indeed improve the initial performance of new agents, provided that an appropriate degree of ostensibly undesirable activity was expressed in the preceding agent's behaviour.
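The mechanism described in the abstract can be sketched in a few lines: a fresh DQN agent's replay buffer is seeded with transitions recorded by an earlier agent, so mini-batch training can begin before the new agent has gathered any experience of its own. This is a minimal illustrative sketch, not the paper's implementation; the names `ReplayBuffer` and `preload`, and the toy transitions, are assumptions for demonstration.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience replay buffer for a DQN agent (illustrative)."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one transition tuple, evicting the oldest when full.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniformly sample a mini-batch of stored transitions.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

def preload(buffer, prior_experiences):
    """Seed a new agent's buffer with transitions from a preceding agent."""
    for transition in prior_experiences:
        buffer.push(*transition)

# Hypothetical transitions recorded by a preceding agent
# (state, action, reward, next_state, done).
prior = [((0,), 1, 0.0, (1,), False),
         ((1,), 0, 1.0, (0,), True)]

buf = ReplayBuffer(capacity=10_000)
preload(buf, prior)
# The new agent can sample mini-batches immediately, before it has
# collected any experience of its own.
batch = buf.sample(2)
```

Per the abstract, the benefit of such pre-loading depends on the preceding agent's behaviour containing an appropriate degree of ostensibly undesirable (e.g. exploratory) activity, rather than only near-optimal trajectories.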

Original language: English
Title of host publication: Intelligent Information Processing XI - 12th IFIP TC 12 International Conference, IIP 2022, Proceedings
Editors: Zhongzhi Shi, Jean-Daniel Zucker, Bo An
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 159-172
Number of pages: 14
ISBN (Print): 9783031039478
Publication status: Published - 2022
Event: 12th IFIP TC 12 International Conference on Intelligent Information Processing, IIP 2022 - Qingdao, China
Duration: 27 May 2022 - 30 May 2022

Publication series

Name: IFIP Advances in Information and Communication Technology
Volume: 643 IFIP
ISSN (Print): 1868-4238
ISSN (Electronic): 1868-422X

Conference

Conference: 12th IFIP TC 12 International Conference on Intelligent Information Processing, IIP 2022
Country/Territory: China
City: Qingdao
Period: 27/05/22 - 30/05/22

Keywords

  • Deep-Q learning
  • Experience replay
  • Neural networks
  • Q-learning
  • Reinforcement learning

ASJC Scopus subject areas

  • Information Systems and Management
