19 Comments
Aug 31, 2023 · Liked by daniel bashir

I'm going to listen with interest. But my first thought is that it's not a binary choice between intrinsic meaning and pushing around arbitrary symbols (I think that's what you're saying). Instead, the later Wittgenstein can provide a better way.


Working in AI in Montréal, and having graduated from Université de Montréal and UQAM, I knew Dr. Harnad by reputation but had never had the opportunity to hear him. Thanks to you, I discovered a man who has thought at length about fundamental questions in cognitive science and artificial intelligence, like symbol grounding and categorization. I must say that I am more of a practitioner than a theoretician of NLP, but I appreciate intelligent comments like those of Dr. Harnad. I can also appreciate his activism on animal welfare and the environment.

author

I’m so glad you enjoyed the episode! Yes, Prof Harnad’s work is a deep, deep rabbit hole 😅


I think people are going to return to this interview several decades from now. Something amazing is happening in this conversation. I love when Harnad breaks down the distinction between the hard and soft problems after so much conversation about the meaning of the word "grounding." That distinction, at that moment, lands in an incredibly powerful, almost emotional way. I think it is around 1:12:00 or so. But you really need to listen all the way through to get the full effect.

I also love the moment later on when Harnad is remarking, with something close to wonder, about the amazing function of GPT LLMs. He says something like, "Without passing T3, they are doing incredible things." Then Harnad drops the bomb: "There must be something about language... " This leads him to think about Chomsky and universal grammar, while running some parallel thought paths about the way language systems mirror grounding processes through syntactical constructions. This had me thinking of some similar pathways in your interview with Winograd. Very cool stuff happening near the end...

I haven't quite wrapped my mind around it all. But perhaps that is the point!

Sep 7, 2023 · Liked by daniel bashir

I liked the point where he said that the question isn't whether or not LLMs get meaning. Rather, it's how they can do so much given that they don't really get meaning.

And he's obviously delighted to play around with ChatGPT or whatever. As I am.

author

I did too! The "given" is definitely something many might contest, but I'm sympathetic. A former philosophy TA of mine put it in a way I really liked: these are systems that aren't _in the business of referring_. That's more a function of what they are and how they interact with the world (as Harnad would focus on) than an attribute of what they say and whether their statements are directionally correct, etc.

author

This is too kind—I’m glad you liked it! And yes, it was very interesting to consider his thoughts on possibly-T2-passing systems

Sep 4, 2023 · Liked by daniel bashir

Thanks a lot Daniel, very useful! I'll re-listen to the whole episode.

author

Thanks for asking about this! We definitely could’ve introduced things a little more clearly, but hope this helps 🙂

Sep 4, 2023 · Liked by daniel bashir

Thanks a lot. I did indeed notice the first mention of T3 at the time you mention, but it seems to come out of nowhere. I'm guessing this is some sort of hierarchy of capabilities(?), but it feels like I've missed some prerequisite reading. :) Is there a T1? What is the T for? Who came up with these terms? I do research in AI and have never heard of them.

author

Ha, fair enough. Harnad coined them himself! I think the T is just for "Turing Test" 😆 Yeah, it's a hierarchy: T2 is what Harnad interprets as the original Turing test, e.g. a lifelong test through conversation; T3 is a lifelong test of robotic capacities (walk, talk, etc.); T4 is T3 + all internal structure looks the same. I think he mentions a T0 somewhere in one of his writings? I can't precisely remember its definition, or whether I'm right about it, but it's not super important for all this.

author

I guess, to be a little more precise on the hierarchy question: by T3 you've exhausted all behavioral capacities, so at T4 you're dealing with all the remaining empirical/observable things.
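If it helps to see it all in one place, here's the hierarchy written out as data, in a quick Python sketch (my own paraphrase of the levels as discussed in the episode, not Harnad's exact wording):

```python
# Harnad's Turing-test hierarchy, paraphrased from this thread.
TURING_HIERARCHY = {
    "T2": "lifelong indistinguishability in verbal capacity "
          "(the original Turing test, via conversation)",
    "T3": "T2 plus lifelong indistinguishability in robotic/sensorimotor "
          "capacity (walks, talks, acts like us)",
    "T4": "T3 plus indistinguishability in everything observable inside "
          "(e.g. internal microstructure)",
}

for level, criterion in TURING_HIERARCHY.items():
    print(f"{level}: {criterion}")
```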

Sep 1, 2023 · edited Sep 1, 2023 · Liked by daniel bashir

I found Harnad's remarks on dictionaries interesting (and I've taken a quick look at the linked paper). When I was young I would read my way through the dictionary and the encyclopedia in the following way: 1) I'd look up an entry and read through it. Inevitably I would find one or more words I didn't understand. So, 2) I'd look that word up and read through its entry. If I found a word THERE that I didn't understand, then 3) I'd look that up. And so forth.
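(In code, that childhood routine is just a graph traversal. A minimal sketch, with a made-up five-word dictionary rather than anything from Harnad's paper:)

```python
from collections import deque

# Toy dictionary: each word maps to the set of words used to define it.
# (Entries invented for illustration.)
toy_dict = {
    "charity":  {"kindness", "reward"},
    "kindness": {"act", "good"},
    "reward":   {"act", "good"},
    "act":      {"good"},
    "good":     {"act"},
}

def lookup_closure(word, dictionary):
    """Chase definitions breadth-first, collecting every word you'd have to
    look up before the chain stops producing new words."""
    seen, queue = {word}, deque([word])
    while queue:
        for w in dictionary.get(queue.popleft(), ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

print(lookup_closure("charity", toy_dict))
# -> {'charity', 'kindness', 'reward', 'act', 'good'}
# The chase always bottoms out in a residue of words defined in terms of
# each other, which is roughly what the dictionary-graph paper is about.
```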

Early in my career I worked closely with the late David Hays, a first generation computational linguist who headed the RAND project on machine translation, wrote the first textbook in computational linguistics, and so forth. He'd come up with the idea that the meaning of abstract terms is "grounded" – not a word he used, this was the early 1970s – in a story. His paradigmatic example: "Charity" is when "someone does something nice for someone else without thought of reward." Any story that matches that pattern of meaning would be an example of charity. One of his students, Brian Philips, developed a computational model that identified a pattern of tragedy in drowning man stories. Another student, Mary White, used the idea to analyze the belief structure of a millenarian community her sister was in. I used the idea in analyzing a Shakespeare sonnet: Cognitive Networks and Literary Semantics, https://www.academia.edu/235111/Cognitive_Networks_and_Literary_Semantics.
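(To make Hays's idea concrete: you could caricature his "charity" pattern as a predicate over story events. A deliberately crude Python sketch; the event representation is my own invention, not Hays's notation:)

```python
from dataclasses import dataclass

@dataclass
class Event:
    agent: str
    action: str          # e.g. "donate", "help"
    beneficiary: str
    expects_reward: bool

NICE_ACTIONS = {"donate", "help", "share"}  # stand-in for real semantics

def is_charity(event: Event) -> bool:
    """Hays's paradigm: someone does something nice for someone else
    without thought of reward."""
    return (event.action in NICE_ACTIONS
            and event.agent != event.beneficiary
            and not event.expects_reward)

story = Event("Ann", "donate", "the shelter", expects_reward=False)
print(is_charity(story))  # True: any story matching the pattern counts
```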

I've also played around with this kind of thing in ChatGPT. Here are two blog posts where I explore it:

1) ChatGPT on justice and charity, https://new-savanna.blogspot.com/2022/12/abstract-concepts-and-metalingual.html

2) ChatGPT more generally on abstract definition, https://new-savanna.blogspot.com/2023/03/exploring-metalingual-definition-with.html

author

So cool that you worked with David Hays! And I'll take a look at your blog posts.

The idea of literary semantics and grounding meaning in stories is very interesting. I haven't read all of your linked paper on this yet (I plan to!), but I imagine a more "literary" grounding as something like: I have experienced this sort of situation and a collection of feelings/associations, so if a word has an associated story, that story evokes the relevant feelings/experiences I'd had. This raises some other interesting points for me, because I'm curious about how ineffability interacts with grounding. Tolstoy spent a very long tract arguing for "feeling" as the central demarcation of art from non-art, and that feeling, for Tolstoy, is something you can't just state simply (else you wouldn't be doing art in the first place). Word-story association and its related evocations seem to let in something similar: a story is much more complicated than the direct experience of the referent of some noun, for instance, and it encapsulates a set of feelings/associations that go beyond the words themselves (I also think of Proust's involuntary memory).

Sep 1, 2023 · edited Sep 1, 2023 · Liked by daniel bashir

Good to hear from you, Daniel. I'm a big fan of The Gradient.

On grounding and the ineffable, I believe Peli Grietzer has something to say about that, no? I'm happy with that. I wouldn't expect to get that sort of thing from a (purely) symbolic system. They're too 'hard-edged.'

Neither Hays nor I believed that symbols were the rock-bottom cognitive medium. I don't know whether or not you're familiar with the neuroscience of the late Karl Pribram. But starting back in the late 1960s he championed the idea that cortical processes were realized in a neural mesh that was holographic in nature, and I rather liked that. I kept that sort of thing in the back of my mind while working with Hays. Somewhat later we incorporated that into our thinking and published an article, in effect, on the neural holography of (deep) metaphor.

Then, at about the turn of the millennium, I entered into extensive correspondence with the late Walter Freeman, a neurobiologist who had been a student of Pribram's. He was a pioneer in using complex dynamics to analyze and model cortical behavior, mostly the olfactory cortex. That was of course perfectly consistent with Pribram's neural holography.

Finally, at the time I was working with Hays he had become fond of a 1973 book by William Powers, Behavior: The Control of Perception. Powers was trained as an electrical engineer and had developed a model of brain function in terms of more or less classical control theory. We adopted Powers's account as a way of thinking about sensorimotor behavior and "grounded" (the term was not in use back then) our network model in it. I talk about that in the paper I linked.

BTW, late in this post there's a bit on ChatGPT and free association: https://new-savanna.blogspot.com/2023/02/the-fluid-mind-of-chatgpt.html

author

Thanks for the kind words and your thoughtful comments! Yes, I’m well aware of Peli’s work (I had him on for a conversation recently), though I think there’s much more there. Agreed that getting such things from a symbolic system doesn’t seem possible.

I’m not familiar with Pribram, but after reading your note I’ll have to look into his work as well as your article! And thank you for sharing the blog posts!

Sep 1, 2023 · Liked by daniel bashir

I listened to the Peli interview. And the somewhat different Ted Underwood interview earlier. I've been following Ted's work for over a decade and have corresponded with him a bit.

Here's a link to Pribram's 1969 article in Scientific American (which published more substantive articles in those days): https://www.jstor.org/stable/24927611


Would have been useful to introduce what T2/T3/T4 mean. I listened to the whole thing and still don't know what these are.

author

We introduced them all in the episode, but in the course of the conversation, so it takes a bit of careful listening. You can find one definition of T3 around 50:00. It's re-defined in multiple places, e.g. again at 1:02:00 (walks, talks, and acts like us). See 1:03:00 for T3 vs T4 (T3 = observable behavioral capacities; T4 = everything observable, including "the stuff inside," e.g. microstructure, is the same). T2 is explained (if briefly) in a few places.
