Nice way to point out that AIs are cold machines and don't have the experiential intelligence we living beings have. We invented these machines to outsource some of our repetitive tasks and figured out a way to let AI mimic our use of language.
The word embeddings, those black boxes of statistical models, are just a mechanism to act as plausible surrogates for meaning, just as the streams of characters and n-grams that came before them were. We will probably find even cleverer ways of modeling the meaning of words, but they are still not the real, experience-based meanings we living beings know.
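The "surrogate" idea can be made concrete with a toy sketch: in an embedding space, "similarity" is just geometry learned from co-occurrence statistics, not anything experienced. The three-dimensional vectors below are invented for illustration; real models learn hundreds of dimensions from text.

```python
import math

# Toy "embeddings" (made-up numbers, purely illustrative):
# real models derive these from co-occurrence statistics, not experience.
emb = {
    "cold":   [0.9, 0.1, 0.3],
    "chilly": [0.8, 0.2, 0.4],
    "hot":    [0.1, 0.9, 0.3],
}

def cosine(u, v):
    """Similarity here is just the angle between vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "cold" sits nearer to "chilly" than to "hot" in this space,
# yet nothing in this program has ever felt a temperature.
print(cosine(emb["cold"], emb["chilly"]) > cosine(emb["cold"], emb["hot"]))  # True
```

The geometry reproduces a plausible pattern of word use, which is exactly what makes it a convincing surrogate, and exactly why it is not the same as knowing what cold feels like.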
AI is just a tool. We built those machines.
Thanks for the comment! I like how you phrased it: "surrogates for meaning" points right at the central problem. We can keep building cleverer models, but as you said, they're still not the real thing. It seems the only way we could ever be 100% sure a machine truly "understands" would be to solve the puzzle of how to create genuine experience itself. Until then, we're just guessing.
I loved this, great lines of thought!
Wow! There's a real moral dilemma for you! This was really fun to read. I'll be pondering it for a while to come!
Thank you!
Beautifully written. It makes me rethink how often I confuse knowing with feeling.
This was fire 🔥
Thank you!
Thank you very much for this very well-written analysis of the limitations of AI! It's very interesting that some of the points you make are not just true for AI, but also for the use of different (human) languages. I speak several languages fluently, but I am also very conscious of the fact that I will never be able to "FEEL" the meaning of a word the way a native speaker does. This doesn't only have to do with the meaning of words, but especially with - mostly unconscious - connotations. This becomes especially obvious when you read poetry in a language other than your own - you can only partially grasp the layers of meaning. And that makes it almost impossible to adequately translate poetry. It takes a poet, and they will create a new work of art.
Thank you! I never considered how it relates to speaking multiple languages; that is a really interesting point. Do you think it is possible to ever learn to "feel" a language like a native speaker, with enough practice?
I think that over time, you start to "feel" another language, but never quite as you feel your own. In your mother tongue, every word is connected to so many experiences and stories and impressions from childhood on - as an adult, I think, you cannot catch up on it. But it really IS an interesting question and I would like to explore it a little further.
AI knows but not feels. My simplistic understanding.
Basically it!
Excellent!
But is it better to use the word "transcends" to point towards how a mind connects and relates experiences without language (or self-conscious awareness)? Or is it better to use a word like "underlies" to describe how vaster systems unconsciously (and prelinguistically) make certain types of associations and predictions?
Maybe it doesn't make any difference, but perhaps I have not read this piece carefully enough to dispel the impression that it centers language as the means for creating meaning and understanding? I could have written "human" language in the previous sentence, because I am indeed thinking about how other life forms (especially the "higher" animals) incorporate experiences and relate them to each other and to new or recurring sensations.
Our best evolutionary knowledge points to the notion that human (syntactically structured, recursive) language is unique to our species and is indeed an addition (perhaps an "overlay" in certain senses) to primordial, but complex, neural, limbic, and physiological systems maintaining inner homeostasis in many ways, including reacting to (and sometimes anticipating) environmental threats, opportunities, or changes.
An interesting idea about the origin of human language involves its relationship to tool building, where primitive million-year-old procedures and their artefacts (I'm thinking of the Acheulean "hand axe") seem to have acquired symbolic/emotional significance beyond any "practical" utility. Then there is the idea of a late mutation involving the "successor" or "merge" functions, which may underlie syntactic structures, mathematics, and other ritual/dance behaviors. But all of that would be built on top of (or into) mirror neurons, which have been found in birds as well as primates and probably date back to certain very early vertebrates.
Animals have feelings and emotions that they cannot describe. Many different species also seem to exhibit "moods" where an affective state persists long after its physiological or environmental trigger. Observations of this in animals can be related to measurable levels of stress hormones (etc) which also function in humans. Humans can name and describe their emotions and use this linguistic (self consciously aware) feedback to interrupt or prolong a mood.
I don't know what would happen if an AI system were programmed to mimic such an affective feedback system. Would it be more or less "real" if such a feedback system were only linguistic and not connected to a limbic system? Also, an AI system may not have mirror neurons, but large language models could be connected (and may already be) to other AI learning programs tied to machines, robots, and vehicles in ways that prioritize the functions and existence of those hardware and software amalgams.
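The affective feedback loop described above, a mood that outlasts its trigger and can be dampened by naming it, could be mocked up in a toy model. Everything below (the decay rate, the dampening factor, the class itself) is invented for illustration; it is a sketch of the idea, not any real affective-computing system.

```python
class Mood:
    """Toy affect model: a trigger sets a mood level that decays slowly,
    persisting after the trigger is gone (purely illustrative numbers)."""
    def __init__(self, decay=0.8):
        self.level = 0.0
        self.decay = decay

    def tick(self, trigger=0.0):
        # The mood fades a little each step, but a new trigger tops it up.
        self.level = self.level * self.decay + trigger
        return self.level

    def name_it(self):
        # The "linguistic feedback" step: naming the state dampens it here,
        # standing in for how humans can interrupt a mood by describing it.
        self.level *= 0.5
        return "stressed" if self.level > 0.5 else "calm"

m = Mood()
m.tick(trigger=2.0)                              # stressful event
after = [round(m.tick(), 2) for _ in range(3)]   # trigger gone, mood lingers
print(after)  # [1.6, 1.28, 1.02]
```

Whether running such a loop would make the affect any more "real" is, of course, exactly the open question.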
Thank you for this thoughtful comment! You raise a good point about the choice between "transcends" and "underlies," and I'd argue that experience likely does both: it's the raw material that underlies all language, but it also contains a richness that transcends what words can ever fully capture, so the best word choice really just depends on your perspective. You are right to ground this in our evolutionary history; while our specific, syntactic language may be a recent "overlay," the fundamental drive to communicate is ancient. The unanswerable question you hint at is whether a sufficiently advanced AI is merely a sophisticated mimic or possesses genuine understanding. It's a fascinating challenge because it forces us to admit that we may not even have a clear, objective line to draw between those two concepts, especially when we can't know the internal, subjective experience of another intelligence.
I love your point that words don’t hold meaning, they just point us toward it. That explains why we can talk past each other even when using the same words.
This was such a fun, mind-stretching read. Thanks for making me see something I thought I understood in a new way.
Thank you for the comment! Great point; it does help explain how we can talk past each other so often.
I never really considered whether or not a machine would really understand words, since we understand them through experience. This is pretty earth-shattering. Since they cannot experience the cold as we do (or, even if they do, their experience might be different and not relatable to ours), they really don't get it. I usually go back to thinking that machines are actually dumb. They must be told what to do, whereas humans and most animals do not have to be told; they will act on their own. I think that's my biggest question regarding them. Can they simply go through their routines without having to be told to, and furthermore, can they spot a problem and solve it without being told to do so? So many questions, and now I have more. Hehe. Excellent post, though. This really made me think. Thank you for sharing, Robert.
I think you've pinpointed the fundamental difference between a tool and an agent. Most AI systems today operate like a highly sophisticated tool in a request-response loop. The system waits for a prompt, executes the task, and then goes dormant. Its goal is temporary and externally provided.
What you're describing in humans and animals is a state of continuous goal pursuit. An animal's goal to "survive" is always active. It doesn't wait to be "told" to find food or avoid danger; it is constantly, proactively working to maintain its state against an unpredictable world.
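The tool-versus-agent distinction can be sketched in a few lines. This is a deliberately simplified caricature (the energy numbers and rules are invented), not a model of any real AI architecture: the tool acts only when called, while the agent runs its own loop to maintain its state.

```python
def tool(prompt):
    """Request-response: acts only when asked, then goes dormant."""
    return f"answer to: {prompt}"

class Agent:
    """Continuous goal pursuit: the 'survive' goal is always active."""
    def __init__(self):
        self.energy = 5

    def step(self, world_event):
        # No prompt needed: the agent acts to maintain its own state.
        self.energy -= 1                    # existing costs energy each step
        if world_event == "food" or self.energy < 3:
            self.energy += 2                # proactively replenishes itself
        return self.energy

agent = Agent()
for event in ["nothing", "nothing", "food", "nothing"]:
    agent.step(event)
print(agent.energy)  # the agent kept itself going without being "told" to
```

The tool produces nothing until `tool(...)` is called; the agent keeps acting on every tick of the world whether or not anyone asks it anything, which is the distinction the comment above draws.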
Exactly. Great breakdown, Robert.
Masterful writing right here! Thank you for sharing this!! <3
Thank you! I hope to share more.
Very well written!
OMG, another fantastic piece. I would love to jump off this and address the parallels in how we do/don't do therapy (not about AI per se, but just the importance of actual experience in the room). I'll message you.
Thank you! I will look out for your message.