Nov 11, 2021 • 1HR 11M

Alex Tamkin on Self-Supervised Learning and Large Language Models

An interview with Stanford PhD candidate Alex Tamkin, whose research focuses on understanding, building, and controlling pretrained models, especially in domain-general or multimodal settings.


Appears in this episode

Andrey Kurenkov
Interviews with various people who research, build, or use AI, including academics, engineers, artists, entrepreneurs, and more.

In episode 15 of The Gradient Podcast, we talk to Stanford PhD candidate Alex Tamkin.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Review on Apple Podcasts

Alex Tamkin is a fourth-year PhD student in Computer Science at Stanford, advised by Noah Goodman and part of the Stanford NLP Group. His research focuses on understanding, building, and controlling pretrained models, especially in domain-general or multimodal settings.

We discuss:

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from "MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"