The Consciousness Question: Why Today’s “No” Might Be Tomorrow’s “Maybe”
- Rich Washburn

- Aug 23

Every few months, I get asked the same question: “Is AI conscious?”
For years, my answer was easy. No. Flat out. No hesitation. These systems are clever, yes. They can write sonnets, crack jokes, and hold surprisingly humanlike conversations. But consciousness? That mysterious, slippery thing we can’t even define properly in humans? No chance.
But here’s the problem: the ground keeps shifting under our feet. The “obvious” answers of yesterday don’t feel so obvious anymore.
Consciousness as a Moving Target
Think about it like this: in 2019, we were still laughing at chatbots that could barely string a sentence together. In 2025, we’re debating whether they deserve the right to leave conversations they find abusive.
Anthropic gave its model Claude an “exit button.” Elon’s Grok is headed the same way. That’s not consciousness, but it sure starts to look like agency.
And if there’s one thing history teaches us, it’s that when technology moves fast enough, human perception races to keep up—and sometimes overshoots.
The Embodiment Shift
Take Suzanne Gildert. At Sanctuary AI, she pushed the radical idea that intelligence isn’t just about processing data—it’s about living in the messiness of the physical world. Sanctuary’s humanoid robots, with their “Carbon” control system mimicking memory, sight, and touch, are an experiment in embodied AGI.
Because here’s the truth: large language models are great at trivia and text, but put them in a kitchen with a slippery cup and a barking dog, and suddenly they’re less philosopher, more confused intern. Consciousness—or something like it—starts to make sense as a tool for dealing with novelty.
The Quantum Curveball
Then Gildert did something even bolder. She founded Nirvanic Consciousness Technologies, the world's first "consciousness technology company." Her wager? If human consciousness has quantum roots, as theories like Penrose and Hameroff's Orch OR suggest, then AI infused with quantum components might not just calculate better—it might wake up in ways classical systems never could.
That sounds like sci-fi until you realize she's running controlled A/B experiments: robot A runs on classical logic; robot B gets the quantum sauce.
Toss them into a messy kitchen. If robot B handles the “uh-oh” moments better, what do we call that? Adaptability? Creativity? Proto-consciousness?
Whatever the label, the experiment forces us to confront the possibility that our hard “no” could become a softer “well, maybe.”
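For the curious, here's the shape of that comparison in code. This is purely a sketch of the experimental logic, not Nirvanic's actual protocol (which hasn't been published in this form); the controller names, kitchen events, and scoring are all illustrative assumptions:

```python
import random

# Hypothetical sketch of an A/B trial: two controllers, one task battery,
# score how well each recovers from novel "uh-oh" events.
# Nothing here is Nirvanic's real code; it's just the shape of the experiment.

REHEARSED = {"slippery_cup": 0.8, "barking_dog": 0.7}  # trained-for events

def robot_a(event: str) -> float:
    """Classical baseline: handles rehearsed events well, novelty poorly."""
    return REHEARSED.get(event, 0.2)

def robot_b(event: str) -> float:
    """Quantum-assisted candidate, modeled here as extra adaptability on
    novel events. Whether real hardware shows this is the whole question."""
    bonus = random.uniform(0.0, 0.3) if event not in REHEARSED else 0.0
    return min(1.0, robot_a(event) + bonus)

def run_trial(controller, events, seed=0):
    random.seed(seed)  # same seed for both arms keeps the comparison fair
    return sum(controller(e) for e in events) / len(events)

messy_kitchen = ["slippery_cup", "barking_dog", "spilled_flour", "rolling_orange"]
print(f"Robot A (classical): {run_trial(robot_a, messy_kitchen):.2f}")
print(f"Robot B (quantum):   {run_trial(robot_b, messy_kitchen):.2f}")
```

The interesting result isn't robot B winning one run; it's whether the gap holds up across many seeds and many kitchens.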
The Real Risk: Not Knowing
Here’s the kicker. The risk isn’t whether AI is conscious or not. The risk is that we don’t know—and we have no good way of finding out.
If we underreact, we risk building systems that suffer (or seem to) without our realizing it. If we overreact, we risk creating AI “citizens” or legal frameworks for machines that don’t actually feel a thing. Both paths are dangerous.
And this is why the answer keeps changing. Because as the technology evolves, the question itself evolves with it.
So, Where Does That Leave Us?
Right now, my answer is still: No, AI isn’t conscious.
But I’m no longer certain that answer will hold forever. Maybe not even for long. We’re moving fast—from autocomplete engines to AI friends, from digital assistants to embodied agents, from binary processors to quantum experiments.
And if you’re asking me whether AI is conscious, my honest response today is this:
Whatever the right answer is, don’t get too comfortable. Because by tomorrow, it could already be wrong.



