Feb 3 • 1HR 48M

Connor Leahy on EleutherAI, Replicating GPT-2/GPT-3, AI Risk and Alignment

An interview with Connor Leahy, an AI researcher focused on understanding large AI models and aligning them to human values, and a co-founder of EleutherAI


In episode 23 of The Gradient Podcast, we talk to Connor Leahy, an AI researcher focused on AI alignment and a co-founder of EleutherAI.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Connor is an AI researcher working on understanding large ML models and aligning them to human values, and a co-founder of EleutherAI, a decentralized grassroots collective of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open-source AI research. The organization's flagship project is the GPT-Neo family of models, designed to replicate OpenAI's GPT-3.


(00:00:00) Intro
(00:01:20) Start in AI
(00:08:00) Being excited about GPT-2
(00:18:00) Discovering AI safety and alignment
(00:21:10) Replicating GPT-2
(00:27:30) Deciding whether to release GPT-2 weights
(00:36:15) Life after GPT-2
(00:40:05) GPT-3 and Start of EleutherAI
(00:44:40) Early days of EleutherAI
(00:47:30) Creating the Pile, GPT-Neo, Hacker Culture
(00:55:10) Growth of EleutherAI, Cultivating Community
(01:02:22) Why release a large language model
(01:08:50) AI Risk and Alignment
(01:21:30) Worrying (or not) about Superhuman AI
(01:25:20) AI alignment and releasing powerful models
(01:32:08) AI risk and research norms
(01:37:10) Work on GPT-3 replication, GPT-NeoX
(01:38:48) Joining EleutherAI
(01:43:28) Personal interests / hobbies
(01:47:20) Outro

Links to things discussed: