Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model
Alex X. Lee, Anusha Nagabandi, Pieter Abbeel, Sergey Levine
University of California, Berkeley
Code: [GitHub]    Paper: [arXiv]


Abstract

Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these high-dimensional observation spaces present a number of challenges in practice, since the policy must now solve two problems: representation learning and task learning. In this work, we tackle these two problems separately, by explicitly learning latent representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC provides a novel and principled approach for unifying stochastic sequential models and RL into a single method, by learning a compact latent representation and then performing RL in the model's learned latent space. Our experimental evaluation demonstrates that our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks.
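To make the structure described above concrete, the following is a minimal sketch, in PyTorch, of the two components the abstract refers to: a stochastic sequential latent variable model trained by maximizing an evidence lower bound (ELBO), and a critic trained on samples drawn from the learned latent space. This is an illustrative simplification under stated assumptions, not the released implementation: it collapses SLAC's two-level autoregressive latent into a single Gaussian transition, replaces the convolutional encoders and decoders with fully connected layers, and omits the soft actor-critic policy update, entropy terms, and target networks (in SLAC, the actor conditions on a short window of raw observations and actions rather than on latent samples). All dimensions, network sizes, and names are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

class LatentModel(nn.Module):
    """Simplified stochastic sequential latent model: prior p(z_{t+1} | z_t, a_t),
    filtering posterior q(z_{t+1} | z_t, a_t, x_{t+1}), and decoder p(x | z)."""
    def __init__(self, obs_dim, act_dim, latent_dim):
        super().__init__()
        self.prior_net = nn.Linear(latent_dim + act_dim, 2 * latent_dim)
        self.post_net = nn.Linear(latent_dim + act_dim + obs_dim, 2 * latent_dim)
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def step(self, z, a, x_next):
        prior_mu, prior_logstd = self.prior_net(torch.cat([z, a], -1)).chunk(2, -1)
        post_mu, post_logstd = self.post_net(torch.cat([z, a, x_next], -1)).chunk(2, -1)
        prior = Normal(prior_mu, prior_logstd.exp())
        post = Normal(post_mu, post_logstd.exp())
        z_next = post.rsample()  # reparameterized sample from the filtering posterior
        recon_nll = F.mse_loss(self.decoder(z_next), x_next, reduction="none").sum(-1)
        kl = kl_divergence(post, prior).sum(-1)
        return z_next, recon_nll + kl  # per-sample negative ELBO terms for this step

class LatentCritic(nn.Module):
    """Q-function defined on latent samples rather than raw image observations."""
    def __init__(self, latent_dim, act_dim):
        super().__init__()
        self.q = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, z, a):
        return self.q(torch.cat([z, a], -1)).squeeze(-1)

if __name__ == "__main__":
    obs_dim, act_dim, latent_dim, T, B = 32, 4, 16, 8, 10
    model = LatentModel(obs_dim, act_dim, latent_dim)
    critic = LatentCritic(latent_dim, act_dim)
    opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
    opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)

    # Placeholder replay-buffer batch of sequences: observations x, actions a, rewards r.
    x = torch.randn(B, T + 1, obs_dim)
    a = torch.randn(B, T, act_dim)
    r = torch.randn(B, T)

    # 1) Model update: minimize the negative ELBO accumulated over the sequence.
    z = torch.zeros(B, latent_dim)
    neg_elbo, latents = 0.0, []
    for t in range(T):
        z, loss_t = model.step(z, a[:, t], x[:, t + 1])
        latents.append(z)
        neg_elbo = neg_elbo + loss_t.mean()
    opt_model.zero_grad()
    neg_elbo.backward()
    opt_model.step()

    # 2) Critic update: a Bellman backup computed on latent samples, so the critic
    #    never sees raw observations. Next actions are taken from the buffer here for
    #    brevity; soft actor-critic would sample them from the policy instead.
    z_t = torch.stack(latents[:-1], dim=1).detach()
    z_tp1 = torch.stack(latents[1:], dim=1).detach()
    with torch.no_grad():
        target_q = r[:, 1:] + 0.99 * critic(z_tp1, a[:, 1:])
    critic_loss = F.mse_loss(critic(z_t, a[:, 1:]), target_q)
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

Note the division of labor visible in this sketch: the latent variable model is trained only through the ELBO, while the critic operates on detached latent samples, so the RL loss does not shape the representation.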

Paper

A. X. Lee, A. Nagabandi, P. Abbeel, S. Levine
Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model.
In Advances in Neural Information Processing Systems (NeurIPS), 2020. [arXiv]

[Bibtex]


DeepMind Control Suite Results


[Videos] Example image sequences and samples from the model: ground truth observations, posterior samples, conditional prior samples, and prior samples.

OpenAI Gym Results


[Videos] Example image sequences and samples from the model: ground truth observations, posterior samples, conditional prior samples, and prior samples.

Manipulation Results

[Videos] Example image sequences and samples from the model: ground truth observations, posterior samples, conditional prior samples, and prior samples.

Acknowledgments

We thank Marvin Zhang, Abhishek Gupta, and Chelsea Finn for useful discussions and feedback, and we thank Kristian Hartikainen, Danijar Hafner, and Maximilian Igl for providing timely assistance with SAC, PlaNet, and DVRL, respectively. We also thank Deirdre Quillen, Tianhe Yu, and Chelsea Finn for providing us with their suite of Sawyer manipulation tasks. This research was supported by the National Science Foundation through IIS-1651843 and IIS-1700697, as well as ARL DCIST CRA W911NF-17-2-0181 and the Office of Naval Research. Compute support was provided by NVIDIA.