What Do Words Really Mean?
An Exploration of AI, Language, and Consciousness
It was an unusually cold morning for May. The kind that reminds you how quickly the seasons slip by—the aroma of winter mixed with the pale sun of summer. I sat on the porch, breathing slowly, as thoughts drifted through my mind. For a moment, words were just sounds, detached from meaning. I only felt the cold breeze.
It's in these moments of stillness, when the mind is most clear, that deep realizations can surface. And so, as the world of thought returned, it contracted into a question:
What does “cold” really mean? And if its meaning is rooted in this feeling, how could an AI ever understand it?
The Symbol Grounding Problem
I’ve worked with AI models for years, both as a designer and as a user fascinated by them. I thought I understood them well. But that morning on the porch, my introspection revealed something I had overlooked—not just about AI, but about the nature of language itself.
When I ask myself what the word “cold” means, my mind doesn’t retrieve a dictionary definition. It retrieves a feeling—the memory of numbness in my toes walking to a friend's house in a blizzard, or the shock of jumping into a pool on a hot day. This is the true semantics of language: the meaning, not the pronunciation or lexical form.
This idea, of course, is not new. It’s at the heart of what philosophers and AI researchers call the symbol grounding problem: how can words have meaning if they aren’t tied to real-world experience? Can a machine with no body, no senses, and no nervous system ever truly “know” what cold is, no matter how many sentences it has processed?
Before that moment of meditative clarity, I had assumed meaning could be extracted from text if you simply had enough examples and context. But as I sat there, observing how words could dissolve into pure sensation, the gap became obvious. Language doesn’t contain meaning—it only points to it.
Why Dictionary Definitions Are Circular
Consider the circularity of a dictionary. If you look up a word, it's defined by other words, which are defined by still other words. Follow the chain far enough and it either circles back on itself or regresses forever. What grounds the words?
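To see the regress concretely, here is a minimal sketch in Python. The mini-dictionary is entirely invented for illustration; the point is only that chasing definitions leads to nothing but more words.

    # A toy dictionary: every word is "defined" only in terms of other words.
    # (Entries are invented for illustration; no real lexicon is consulted.)
    toy_dictionary = {
        "cold": ["lacking", "heat"],
        "heat": ["warmth", "energy"],
        "warmth": ["mild", "heat"],
        "lacking": ["without"],
        "without": ["lacking"],
    }

    def chase_definition(word, max_steps=10):
        """Follow definitions word to word and report when the chain loops."""
        seen = []
        current = word
        for _ in range(max_steps):
            if current not in toy_dictionary:
                print(f"'{current}' has no entry; the chain just stops at another symbol.")
                return
            if current in seen:
                print("Loop detected:", " -> ".join(seen + [current]))
                return
            seen.append(current)
            current = toy_dictionary[current][0]  # step to the first word of the definition
        print("Still chasing words after", max_steps, "steps:", " -> ".join(seen))

    chase_definition("cold")  # cold -> lacking -> without -> lacking: a loop, never a feeling

Every path stays inside language; nothing in the structure ever touches the sensation itself.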
The answer is simple yet profound: experience. We understand words not through definitions alone, but as abstractions built upon a foundation of sensory and emotional experience. Definitions simply help connect the dots between concepts we've already grounded.
This becomes clearer with abstract words like “anxiety.” You might think we all mean the same thing, but try to pin it down and the concept shifts. For one person, it's a jittery rush; for another, a slow sense of dread. We can use more specific words—“dread,” “worry,” “jittery”—but each of those still relies on a personal, experiential library for its meaning.
So how does language work at all? It functions because we share a similar-enough world and a similar-enough biology. Each word becomes a tool—like a pair of pliers—prodding the listener's brain to access a specific memory or sensation. Perfect communication would be the direct sharing of experience; words are just the closest we can get.
At some point, however, language bottoms out. We assume others understand us because their reactions mirror ours—they laugh, respond, or nod. But there’s no guarantee that the word “cold,” or “grief,” or “joy,” evokes the same internal experience in someone else. At best, our words get us close enough to align our behaviors, but the internal meanings remain profoundly subjective.
Amy’s Room
To illustrate this gap between knowledge and experience, we can adapt philosopher Frank Jackson's famous “Mary's Room” thought experiment. Let's call it “Amy's Room”:
Amy is a brilliant neuroscientist and physicist who is the world’s foremost expert on the physics and biology of temperature. She knows every law of thermodynamics and can trace every nerve signal involved in the sensation of being cold. However, she has lived her entire life in a perfectly climate-controlled room. One day, she steps outside on a freezing morning. Does she learn something new?
If you believe she does—if you believe she learns what it feels like to be cold—then you accept that knowledge alone can never capture meaning. Words and data only point to the experiences behind them.
The AI Question
This brings us back to AI. ChatGPT can describe “cold” with an eloquence that might surpass my own. But does it understand the word in the way Amy finally did?
In simple terms—no linear algebra required—an AI learns by recognizing statistical patterns in vast amounts of text, predicting the next most likely word. It's a master of mimicry, but it has no body to feel a chill, no lived experience to ground the symbols it manipulates. It can achieve correctness, but correctness is not understanding.
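To make “statistical patterns” concrete, here is a deliberately crude sketch in Python: a bigram counter over a tiny corpus I invented for this example. A real language model uses a neural network trained on billions of sentences, but the underlying task, predicting a likely next word from past co-occurrence, is the same in spirit.

    from collections import Counter, defaultdict

    # A tiny invented corpus, standing in for the web-scale text an LLM trains on.
    corpus = (
        "the morning was cold . the wind was cold . "
        "the coffee was hot . the morning was quiet ."
    ).split()

    # Count how often each word follows each other word (a bigram model:
    # the crudest possible version of "predict the next most likely word").
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the statistically most likely continuation; no feeling involved."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("was"))  # 'cold', chosen only because it was the most frequent follower

“Cold” wins purely on frequency; nothing in the program has ever shivered.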
A calculator can execute the steps to solve a complex equation, but no one would claim it understands mathematics the way a mathematician does. LLMs are vastly more sophisticated, but their foundation remains statistical prediction, not embodied experience.
Where This Leaves Us
This distinction has profound practical and ethical implications. There are domains—pioneering scientific discovery, novel philosophical insights, the writing of a truly great novel—that may require a form of understanding that transcends data analysis. AI might excel at remixing existing ideas, but it could struggle to generate genuinely new paradigms built from deeply human experiences, especially in domains where data is scarce. In particular, AI may lack the capacity for introspection I exercised that cold morning on the porch: the ability to gain experiential understanding of oneself.
Furthermore, if we ever build an AI that appears to have experiential understanding, we face a disturbing paradox: we may never be able to distinguish advanced mimicry from genuine consciousness.
This leaves us in an unsettling position. Either AI doesn’t understand what “cold” means in the way we do, and we risk anthropomorphizing it and granting it undue moral weight, or it does, and we have created something capable of suffering.
Both possibilities are staggering. They force us to rethink not just the future of AI, but the nature of understanding itself. This isn't just a machine's problem; it's a mirror to our own. We face the same limitation with each other daily, never truly knowing if our internal experiences align. We bridge that gap with an assumption of shared humanity, with a compassionate leap of faith. The question is, is that a leap we are prepared to make for a machine?




Nice way to point out that AIs are cold machines and don't have the experiential intelligence we living beings have. We invented these machines to outsource some of our repetitive tasks and figured out a way to let AI mimic our use of language.
Word embeddings, those black boxes of statistical models, are just a mechanism that acts as a plausible surrogate for meaning, just like the streams of characters and n-grams that came before them. We will probably find even cleverer ways of modeling the meaning of words, but they still won't be the real, experience-based meanings we living beings know.
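To show what I mean with a toy example (the three-number vectors below are hand-picked, nothing like real learned embeddings, which have hundreds of dimensions): two words can count as “similar” purely by geometry, with no experience behind either of them.

    import math

    # Hand-picked stand-ins for learned word embeddings (invented numbers).
    embedding = {
        "cold": [0.9, 0.1, 0.3],
        "chilly": [0.8, 0.2, 0.3],
        "poetry": [0.1, 0.9, 0.7],
    }

    def cosine(a, b):
        """Cosine similarity: the usual geometric proxy for 'relatedness'."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    print(round(cosine(embedding["cold"], embedding["chilly"]), 2))  # ~0.99: "close" by geometry
    print(round(cosine(embedding["cold"], embedding["poetry"]), 2))  # ~0.36: "far" by geometry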
AI is just a tool. We built those machines.
I loved this, great lines of thought!