The Gradient Podcast: 2022 Roundup
A look back at an exciting year of episodes!
Happy holidays! We are not releasing a new episode over the holidays, but we thought it’d be fun to take a look back at what went on with The Gradient Podcast this past year. We appreciate your tuning in this year, and your feedback has meant a lot. We’re looking forward to another year of fascinating discussions and can’t wait to share them with you!
Here are some fun facts:
Number of episodes: 35
Average listens per episode: 2,283
Average length: 66.7 minutes
Max length: 100 minutes
Most popular episode: Connor Leahy on EleutherAI, Replicating GPT-2/GPT-3, AI Risk and Alignment
Least popular episode: Ben Green: "Tech for Social Good" Needs to Do More
Longest episode: Zachary Lipton: Where Machine Learning Falls Short
Who we interviewed: 12 professors, 8 research scientists, 5 founders, and more!
Favorite Episodes
Daniel: I’d have a really hard time picking a single favorite episode, so I’ll point to some ideas/notes from episodes this year that I find interesting:
The most immediate one that comes to mind is from my final podcast of the year with Prof Melanie Mitchell—at one point we discussed the idea of concepts as analogies, and I asked her where this “bottomed out” (if a concept is defined/conceived with respect to another concept, what does the whole space of concepts look like?). Melanie’s answer was that one possible way to ground all of this is in experience. If you’re familiar with synthetic a priori knowledge then you might have realized what I was getting at with the question, but I’ll leave this open. What do you think about how concepts/ideas/knowledge arise in the mind?
My conversation with Prof Marc Bellemare got me thinking a lot about the parallels between RL systems and humans in the realm of curiosity. There were some shared themes with my interview with Joel Lehman, another conversation I really enjoyed.
Nathan Benaich has some very interesting thoughts about second-order applications of some of the technologies we’re seeing today—diffusion models are great for text-to-image, but what about for biological problems?
Matt Sheehan is continuing to write fantastic analyses on China’s algorithmic governance.
Andrey:
I am really fond of my interview with Jeff Clune. Professor Clune has done a lot of exciting work in several different areas over the last two decades, and we managed to cover a surprisingly large proportion of it. It’s possibly the densest episode I’ve recorded, and it’s definitely worth a listen.
I am also quite fond of my interviews with Eric Jang and Max Braun. Google is doing some of the most exciting research in robotics today, and these interviews cover quite a lot of their more recent achievements.
Lastly, I’ll mention a couple of interviews with people who are not academics: Lukas Biewald and Nick Walton. It’s a lot of fun to hear about the journeys people take to get where they are, and these two had quite fun journeys to cover—a nice change of pace from mainly listening to discussions of papers.
Got any thoughts on what you’d like to hear from us next year? Let us know! We’d also really appreciate it if you left us a review on Apple Podcasts.