
Marc Bellemare: Distributional Reinforcement Learning

On progress, benchmarking, and doing impactful research in reinforcement learning.

Have suggestions for future podcast guests (or other feedback)? Let us know here!

In episode 52 of The Gradient Podcast, Daniel Bashir speaks to Professor Marc Bellemare.

Professor Bellemare leads the reinforcement learning efforts at Google Brain Montréal and is a core industry member at Mila, where he also holds a Canada CIFAR AI Chair. His PhD work, completed at the University of Alberta, proposed the use of Atari 2600 video games to benchmark progress in reinforcement learning (RL). He was a research scientist at DeepMind from 2013 to 2017; his Arcade Learning Environment was very influential in DeepMind’s early RL research and remains one of the most widely used RL benchmarks today. More recently, he collaborated with Loon to deploy deep reinforcement learning for navigating stratospheric balloons. His book on distributional reinforcement learning, published by MIT Press, will be available in Spring 2023.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

  • (00:00) Intro

  • (03:10) Marc’s intro to AI and RL

  • (07:00) Cross-pollination of deep learning research and RL at McGill and UdeM

  • (09:50) PhD work at U Alberta, continual learning, origins of the Arcade Learning Environment (ALE)

  • (14:40) Challenges in the ALE, how the ALE drove RL research

  • (23:10) Marc’s thoughts on the Avalon benchmark and what makes a good RL benchmark

  • (28:00) Opinions on “Reward is Enough” and whether RL gets us to AGI

  • (32:10) How Marc thinks about priors in learning, “reincarnating RL”

  • (36:00) Distributional Reinforcement Learning and the problem of distribution estimation

  • (43:00) GFlowNets and distributional RL

  • (45:05) Contraction in RL and distributional RL, theory-practice gaps

  • (52:45) Representation learning for RL

  • (55:50) Structure of the value function space

  • (1:00:00) Connections to open-endedness / evolutionary algorithms / curiosity

  • (1:03:30) RL for stratospheric balloon navigation with Loon

  • (1:07:30) New ideas for applying RL in the real world

  • (1:10:15) Marc’s advice for young researchers

  • (1:12:37) Outro
