
Alex Tamkin on Self-Supervised Learning and Large Language Models

An interview with Stanford PhD candidate Alex Tamkin, whose research focuses on understanding, building, and controlling pretrained models, especially in domain-general or multimodal settings.

In episode 15 of The Gradient Podcast, we talk to Stanford PhD candidate Alex Tamkin.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Review on Apple Podcasts

Alex Tamkin is a fourth-year PhD student in Computer Science at Stanford, advised by Noah Goodman and a member of the Stanford NLP Group. His research focuses on understanding, building, and controlling pretrained models, especially in domain-general or multimodal settings.

We discuss:

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"
