On Consciousness
The Hard Problem and Self-Awareness
When I first started studying computer science in high school, I was fascinated by the idea that you could build small-scale intelligence from simple commands. The idea of universality, that all machines speak one language defined by Turing, was amazing to me. However, one thing bothered me: I didn't feel that humans were anything like these machines. I had been taught that humans are just very complex machines, but I was disturbed by this idea because, although I couldn't put my finger on why, I felt different. It wasn't until I got to college that I discovered the articulation of what I was trying to point at: the one thing that makes humans different from machines is subjective experience.
Consciousness is one of the biggest mysteries in both science and philosophy, touching every area of inquiry because of how deep it goes. Yet I think we often talk past each other in conversations about consciousness. In this essay I want to open up a way to reconcile the increasingly divided views on the subject. The topic is relevant to AI because it is crucial to know whether we are building, or can build, conscious machines.
Two Families of Views
Before addressing this confusion, it's important to examine the different understandings of consciousness. The first is the reductionist, materialist side, probably the most common view. Simply put, this view holds that consciousness can be fully explained by understanding the brain. It is a broad family of views that disagree on details. For example, Daniel Dennett believed consciousness is an illusion that emerges from an internal model of the world, while physicist Sean Carroll thinks it's not an illusion but is still fully explainable through brain processes. This is the common view among scientists because it fits neatly within physical laws. David Chalmers has argued that if you want to be a materialist, the only explanation that really makes sense is an illusionist one, which would mean we are mistaken when we claim to be conscious.
I think the strongest argument for this view actually comes from Chalmers himself, even though he argues against it in The Conscious Mind. He presents this challenge: if the laws governing consciousness aren't based on the physical structure of the brain, then it would be possible for one's qualia to change from moment to moment without the physical structure changing, and therefore your brain wouldn't notice. That would be deeply strange. The further implication is that our spoken language has no connection to our internal experience, so our saying "we are conscious" would be a mere coincidence with the fact that we actually are conscious. This doesn't mean you must be a reductionist, but it does force you to acknowledge that the laws governing consciousness must be connected to physical structure in some way, which makes the claim that consciousness simply arises from those physical laws the most parsimonious one.
This brings us to the second family of views: those that hold consciousness to be fundamental in some sense. There are many such views; here are a few:
Dualism: Consciousness and physical objects are governed by two separate sets of rules
Idealism: Consciousness gives rise to the laws of physics, or is prior to them
Panpsychism: Consciousness is a fundamental property of matter
The classic "hard problem" argument supports these views. Descriptions of the brain can explain how things work at a complex level, but they never capture what it's like to be that machine. For example, you could understand the full visual process of seeing red, yet if you were an alien, you might conclude it was just an unconscious process. No physical description seems to convey "this is what it's like to be that machine." I've studied computer science and AI and have never seen how any amount of information processing would somehow make it feel like something to be that thing.
Hofstadter’s Insight
I recently read Douglas Hofstadter's Gödel, Escher, Bach, where he suggests that qualia are best described as the meaning of the current brain state to the brain itself. Imagine that the brain is a machine whose job is to represent the world; part of the world it represents would be itself. There would be no need to give that system knowledge about neurons. I think a system like this is what we call the self, and it answers the question of self-awareness. But it doesn't explain why it is like anything to be the brain, even though it helps explain what it is like if you assume it is like something.
One thing this has made me realize is that maybe my initial assumption was wrong. No matter which of these two sides one takes, following it deeply enough seems to point to the idea that the construction of conscious machines isn't fundamentally out of reach. The materialist view would say that if you build the right information processing system, even if that requires building a brain, in theory it could be done. A fundamentalist view, meanwhile, would say that consciousness comes along with any system you build; make the system sufficiently similar to a human and a human-like consciousness comes along for the ride.
Two Problems, Not One
I think these two competing views are talking past each other, because both are right and wrong; it depends on what they're talking about. This brings me to my central claim: there are actually two distinct problems being discussed. One is the question of why we are conscious at all, the famous hard problem that seems unanswerable within materialism. However, most of the time those who think a better theory of the brain will answer this question are actually explaining self-consciousness, knowledge about oneself, or even how we describe qualia.
Hofstadter can explain why qualia feel disconnected from brain structure: the brain has no evolutionary reason to communicate that information to itself. Evolutionary explanations also do much of the heavy lifting. For example, why does sugar taste good? Because we needed it to taste good for survival. Furthermore, the specific way it tastes can probably be explained by specific evolutionary pressures.
So, if all that can be explained, what’s left? The only thing that’s left is the fact that it’s like something to be that machine—that there is an internal experience. That’s what the hard problem is. The issue is that both sides often mistake the structure of experience (qualia) for the mystery itself. I think the structure of experience can be explained by science. The only thing that will remain is the fact that it’s like something to be that thing.
Now, it's important to mention that these thinkers genuinely believe that all that needs explaining is self-consciousness. This is a fair view, but if you are going to claim to answer the question of consciousness, the answer should not rely solely on some representation of ourselves within the brain without further explaining why it would be like something to be that part of the brain. Many seem not to want to bite the bullet and admit they are illusionists. This problem appears across a wide family of views, from Hofstadter's account to Global Workspace Theory.
What the Non-Materialist Side Gets Wrong
I also want to address what the other side gets wrong, and I think it has to do with failing to differentiate what is actually part of the hard problem from what is explained by the brain's structure. Mary's Room, a popular thought experiment, demonstrates this well.
Mary is in a black-and-white room but is said to know every physical fact about color and how the brain processes it. If that's really true, then when she leaves the room and sees color, she doesn't learn a new fact; she just has the experience she already fully understood. The only reason it seems like she would learn something new is that real humans can't actually grasp all that information at once, so we mistake having the experience for gaining new knowledge. I think this sounds impossible to humans because how could you separate the fact that it is something to be from what it is like to be? You can't, and I think the two are fundamentally related, just explained by different things. This, I think, is the mistake of those who believe consciousness is fundamental: they treat more than the bare fact of experience as fundamental.
Consciousness as Existence Itself
Now, for what I find most compelling, I'd like to present what I think best explains everything I have mentioned. This is purely my opinion.
I think that when we say something is conscious, it is actually a tautological claim—it is repeating the same thing twice. That’s because I think to say something “is” is to say it is conscious.
My argument goes like this: Imagine a universe in which nothing exists except a single photon. This universe contains nothing else. What makes this universe different from one that contains nothing at all? People will be tempted to say you can describe it by its charge or position, but all of those things are defined by relationships. In order for us to know about any property, it must exist relationally; otherwise, we couldn’t know about it.
There are only two options: either that thing is equal to the empty set—which doesn’t make sense, because then everything would be equal to the empty set—or there is something about that thing that separates it from the empty set but is not knowable from the outside or relationally. It is intrinsic. Some would call this “being.” But I think that doesn’t explain what the property is—it’s basically saying that what separates it from the empty set is that it’s not the empty set.
No, I think the obvious answer is the one thing we already know is not observable from the outside, is intrinsic, and is not dependent on anything external: consciousness. Everything else, including qualia, is governed by the laws of what exists, mainly the laws of physics.
Originally I believed there was something special about humans, and that was consciousness. But the more I come to understand it, the more I think it's a feature of the universe itself. This doesn't mean I think machines today are conscious in the way we are. Even if consciousness is fundamental, the unified and rich consciousness we experience is something far more complex than anything an LLM has. That richness is really what separates humans from other things. Love is not special because it's experienced; it's special because it's experienced deeply.