
Stevan Harnad: AI's Symbol Grounding Problem
On Symbol Grounding in AI systems, the Turing Test, the Chinese Room Argument, and category formation.
In episode 88 of The Gradient Podcast, Daniel Bashir speaks to Professor Stevan Harnad.
Stevan Harnad is professor of psychology and cognitive science at Université du Québec à Montréal, adjunct professor of cognitive science at McGill University, and professor emeritus of cognitive science at the University of Southampton. His research is on category learning, categorical perception, symbol grounding, the evolution of language, and animal and human sentience (otherwise known as “consciousness”). He is also an advocate for open access and an activist for animal rights.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
(00:00) Intro
(05:20) Professor Harnad’s background: interests in cognitive psychobiology, editing Behavioral and Brain Sciences
(07:40) John Searle submits the Chinese Room article
(09:20) Early reactions to Searle and Prof. Harnad’s role
(13:38) The core of Searle’s argument and “strong AI” as the generator of the Symbol Grounding Problem
(19:00) Ways to ground symbols
(20:26) The acquisition of categories
(25:00) Pantomiming, non-linguistic category formation
(27:45) Mathematics, abstraction, and grounding
(36:20) Symbol manipulation and interpretation language
(40:40) On the Whorf Hypothesis
(48:39) Defining “grounding” and introducing the “T3” Turing Test
(53:22) Turing’s concerns, AI and reverse-engineering cognition
(59:25) Other Minds, T4 and zombies
(1:05:48) Degrees of freedom in solutions to the Turing Test, the easy and hard problems of cognition
(1:14:33) Over-interpretation of AI systems’ behavior, sentience concerns, T3 and evidence of sentience
(1:24:35) Prof. Harnad’s commentary on claims in The Vector Grounding Problem
(1:28:05) RLHF and grounding, LLMs’ (ungrounded) capabilities, syntactic structure and propositions
(1:35:30) Multimodal AI systems (image-text and robotic) and grounding, compositionality
(1:42:50) Chomsky’s Universal Grammar, LLMs and T2
(1:50:55) T3 and cognitive simulation
(1:57:34) Outro
I'm going to listen with interest. But my first thought is that it's not a binary choice between intrinsic meaning and pushing around arbitrary symbols (I think that's what you're saying). Instead, the later Wittgenstein can provide a better way.
I think people are going to return to this interview several decades from now. Something amazing is happening in this conversation. I love when Harnad breaks down the distinction between the easy and hard problems after so much conversation about the meaning of the word "grounding." That distinction, at that moment, lands in an incredibly powerful, almost emotional way. I think it is around 1:12:00 or so. But you really need to listen all the way through to get the full effect.
I also love the moment later on when Harnad is remarking with something close to wonder about the amazing functioning of GPT LLMs. He says something like, "Without passing T3, they are doing incredible things." Then Harnad drops the bomb: "There must be something about language..." This leads him to think about Chomsky and universal grammar, while running some parallel thought paths about the way language systems mirror grounding processes through syntactic constructions. This had me thinking of some similar pathways in your interview with Winograd. Very cool stuff happening near the end...
I haven't quite wrapped my mind around it all. But perhaps that is the point!