19 Comments
Aug 31, 2023 · Liked by daniel bashir

I'm going to listen with interest. But my first thought is that it's not a binary choice between intrinsic meaning and pushing around arbitrary symbols (I think that's what you're saying). Instead, the later Wittgenstein can provide a better way.

Oct 12, 2023 · Liked by daniel bashir

Working in AI in Montréal, and having graduated from Université de Montréal and UQAM, I knew Dr. Harnad by reputation but had never had the opportunity to hear him. Thanks to you, I discovered a man who has thought at length about fundamental questions in cognitive science and artificial intelligence, like symbol grounding and categorization. I must say that I am more of a practitioner than a theoretician of NLP, but I appreciate intelligent comments like those of Dr. Harnad. I can also appreciate his activism on animal welfare and the environment.


I think people are going to return to this interview several decades from now. Something amazing is happening in this conversation. I love when Harnad breaks down the distinction between the hard and soft problems after so much conversation about the meaning of the word "grounding." That distinction, at that moment, lands in an incredibly powerful, almost emotional way. I think it is around 1:12:00 or so. But you really need to listen all the way through to get the full effect.

I also love the moment later on when Harnad is remarking, with something close to wonder, on the amazing function of GPT LLMs. He says something like, "Without passing T3, they are doing incredible things." Then Harnad drops the bomb: "There must be something about language..." This leads him to think about Chomsky and universal grammar, while running some parallel thought paths about the way language systems mirror grounding processes through syntactic constructions. It had me thinking of some similar pathways in your interview with Winograd. Very cool stuff happening near the end...

I haven't quite wrapped my mind around it all. But perhaps that is the point!

Sep 4, 2023 · Liked by daniel bashir

Thanks a lot, Daniel, very useful! I'll re-listen to the whole episode.

Sep 4, 2023 · Liked by daniel bashir

Thanks a lot. I indeed noticed the first mention of T3 at the time you mention, but it seems to come out of nowhere. I am guessing this is some sort of hierarchy of capabilities (?), but it feels like I've missed some prerequisite reading. :) Is there a T1? What does the T stand for? Who came up with these terms? I do research in AI and have never heard of them.

Sep 1, 2023 · edited Sep 1, 2023 · Liked by daniel bashir

I found Harnad's remarks on dictionaries interesting (and I've taken a quick look at the linked paper). When I was young I would read my way through the dictionary and the encyclopedia in the following way: 1) I'd look up an entry and read through it. Inevitably I would find one or more words I didn't understand. So, 2) I'd look each of those up and read through its entry. If I found a word THERE that I didn't understand, then 3) I'd look that up. And so forth.
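Just to make the regress concrete, here's a toy sketch of that childhood procedure; the code and the dictionary entries are my own invention, not anything from the episode or the linked paper. Every definition is itself made of words that need looking up, so the chain never bottoms out inside the dictionary.

```python
# Toy sketch of the dictionary regress: follow definitions recursively and
# collect every word that has to be looked up along the way. The entries
# below are invented for illustration.

toy_dictionary = {
    "charity": "kindness shown freely to another person",
    "kindness": "the quality of being friendly and generous",
    "generous": "willing to give more than is strictly needed",
    "friendly": "kind and pleasant toward others",
}

def lookup_chain(word, dictionary, seen=None):
    """Recursively look up a word and then the words in its definition."""
    if seen is None:
        seen = set()
    if word in seen or word not in dictionary:
        return seen
    seen.add(word)
    for token in dictionary[word].lower().split():
        lookup_chain(token, dictionary, seen)
    return seen

print(lookup_chain("charity", toy_dictionary))
# {'charity', 'kindness', 'friendly', 'generous'}: every chain either cycles
# back on itself or exits the dictionary into words that are never defined there.
```

Either you cycle or you run out of the dictionary; as I understand it, that is the regress Harnad's dictionary argument turns on.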

Early in my career I worked closely with the late David Hays, a first generation computational linguist who headed the RAND project on machine translation, wrote the first textbook in computational linguistics, and so forth. He'd come up with the idea that the meaning of abstract terms is "grounded" – not a word he used, this was the early 1970s – in a story. His paradigmatic example: "Charity" is when "someone does something nice for someone else without thought of reward." Any story that matches that pattern of meaning would be an example of charity. One of his students, Brian Philips, developed a computational model that identified a pattern of tragedy in drowning man stories. Another student, Mary White, used the idea to analyze the belief structure of a millenarian community her sister was in. I used the idea in analyzing a Shakespeare sonnet: Cognitive Networks and Literary Semantics, https://www.academia.edu/235111/Cognitive_Networks_and_Literary_Semantics.
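For what it's worth, here is how I'd caricature Hays's idea in code. This is purely my own toy illustration of the pattern-matching notion, not his or Philips's actual model, and all the names and fields are invented.

```python
# Toy illustration of grounding an abstract term in a story pattern.
# "Charity": someone does something nice for someone else without
# thought of reward.

from dataclasses import dataclass

@dataclass
class Event:
    agent: str
    action: str
    beneficiary: str
    nice: bool              # did the act benefit the beneficiary?
    reward_expected: bool   # did the agent expect something in return?

def is_charity(event: Event) -> bool:
    """Match the 'charity' pattern against a single story event."""
    return event.nice and not event.reward_expected and event.agent != event.beneficiary

story = [
    Event("Anna", "donates blankets", "the shelter", nice=True, reward_expected=False),
    Event("Bob", "sells blankets", "the shelter", nice=True, reward_expected=True),
]

print([e.agent for e in story if is_charity(e)])  # ['Anna']
```

Any story whose events match the pattern counts as an instance of charity; the abstract term is "defined" by the story pattern rather than by a verbal definition.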

I've also played around with this kind of thing in ChatGPT. Here are two blog posts in which I explore it with ChatGPT:

1) ChatGPT on justice and charity, https://new-savanna.blogspot.com/2022/12/abstract-concepts-and-metalingual.html

2) ChatGPT more generally on abstract definition, https://new-savanna.blogspot.com/2023/03/exploring-metalingual-definition-with.html


Would have been useful to introduce what T2/T3/T4 mean. I listened to the whole thing and still don't know what these are.
