Compute and Memory Efficient Reinforcement Learning with Latent Experience Replay

Latent Vector Experience Replay (LeVER) is a simple modification of existing off-policy deep reinforcement learning methods. LeVER is useful for computation-efficient transfer learning in RL because the lower layers of CNNs extract generalizable features that can be reused across different tasks and domains. In experiments, the authors show that LeVER does not degrade the performance of RL agents while significantly reducing computation and memory across a diverse set of DeepMind Control environments and Atari games.
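The core idea can be sketched in a few lines: pass each observation through the frozen lower encoder layers once at insertion time and store only the compact latent vector, rather than the raw pixels. The following is a minimal illustrative sketch, not the authors' implementation; the `LatentReplayBuffer` class and the random-projection stand-in for frozen CNN layers are hypothetical names chosen for this example.

```python
import numpy as np

class LatentReplayBuffer:
    """Hypothetical sketch of LeVER-style replay: observations are encoded
    once by frozen lower layers, and only the latent vectors are stored."""

    def __init__(self, capacity, latent_dim, encoder):
        self.encoder = encoder  # frozen: never updated after freezing
        self.latents = np.empty((capacity, latent_dim), dtype=np.float32)
        self.actions = np.empty(capacity, dtype=np.int64)
        self.rewards = np.empty(capacity, dtype=np.float32)
        self.capacity = capacity
        self.size = 0
        self.ptr = 0

    def add(self, obs, action, reward):
        # Encode at insertion time; later gradient updates reuse the stored
        # latent, so the frozen lower layers are never re-run on this sample.
        self.latents[self.ptr] = self.encoder(obs)
        self.actions[self.ptr] = action
        self.rewards[self.ptr] = reward
        self.ptr = (self.ptr + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size, rng):
        idx = rng.integers(0, self.size, size=batch_size)
        return self.latents[idx], self.actions[idx], self.rewards[idx]


# Stand-in for frozen lower CNN layers: a fixed random projection
# from 84x84 pixel observations to a 50-dimensional latent.
rng = np.random.default_rng(0)
W = rng.standard_normal((84 * 84, 50)).astype(np.float32)
encoder = lambda obs: obs.reshape(-1) @ W

buf = LatentReplayBuffer(capacity=1000, latent_dim=50, encoder=encoder)
obs = rng.random((84, 84), dtype=np.float32)
buf.add(obs, action=3, reward=1.0)

# Memory per transition: 50 floats instead of 84*84 = 7056 pixels.
print(obs.size // 50)  # compression factor: 141
```

The memory saving comes from the latent being far smaller than the raw observation, and the compute saving from never re-running the frozen lower layers during replay, only the trainable upper layers.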

