Podcast Break: Some Favorites
Daniel and Andrey select some favorite episodes
Hi friends of The Gradient and listeners—we’re on a break this week! Instead of a new episode, we have some favorites selected from our past episodes along with short reflections. We’ll resume our normally scheduled interviews next week :)
Chris Manning is an incredible researcher, and he has spent so much time in the field of NLP that you are unlikely to find many people with the wealth of perspective and insight he brings to the table. I particularly enjoyed hearing his perspectives on grounding language in the physical world—more and more work is demonstrating the range of things we can do with language alone as input (see Text Is the Universal Interface by roon). One little connection that I thought was cool but didn’t mention in the interview is the parallel between the rationalist/empiricist debate in linguistics and rationalism/empiricism in epistemology. Chris mentions this in his textbook on statistical NLP, while this paper defines the distinction in linguistics:
while rationalists believe that linguistic knowledge is triggered by the structure of the mind, which is internal, the empiricists are of the view that [the] ultimate source of knowledge is external, and is triggered by environmental input.
I read this as a narrow application of rationalism and empiricism simpliciter: the two camps dispute the extent to which our knowledge of the external world is gained through experience. Going fully in one direction or the other can lead you to some really interesting conclusions—for instance, if you’re an empiricist like Hume or Berkeley, you might conclude that you have no “self” above and beyond the contents of your experience, while if you’re a hard rationalist you can arrive at idealism. It’s fun to think about how this maps onto ideas of linguistic knowledge.
Joel is just a wonderful human being to converse with, and I thoroughly enjoyed speaking with him. Open-endedness and evolutionary algorithms seem to be gaining steam, and I hope this area gets more attention in the future. Beyond the interestingness of the research itself, I think it naturally raises lots of philosophical questions, as Joel and I began to explore. What does it mean for something to be interesting? How should we think about the role of novelty in our own lives?
I’ve always enjoyed doing interviews that go beyond talking about research to more broadly discuss AI as a field, the paths that researchers take, and the unique approaches people can contribute to AI research. This conversation with Rosanne certainly had a lot of that! The entire ML Collective initiative is quite interesting, so I would definitely recommend checking this one out if you’d like to hear about ways to get into AI research.
How about an older episode? I also love covering topics in AI research that are more obscure or just less hyped. Research on generative art and creativity is fascinating but rarely sees the spotlight. Devi Parikh is one of the foremost researchers on this topic, not to mention a really fun person to speak to. So check this one out!