AI Think, Therefore AI Am
There’s been no shortage of posts lately, both here on LinkedIn and across the rest of the internet, all circling the same question. Is artificial intelligence actually getting better at thinking, or just getting better at mimicking it?
Depending on who you ask, you’ll get wildly different answers. Some say it’s just a clever parlour trick; others believe it’s the early stages of real machine cognition. But maybe the confusion isn’t coming from what AI can or can’t do. Maybe the trouble is with us. More specifically, with the words we keep using, and how little agreement there is on what they actually mean.
Let’s take “thinking”. We use it constantly. But it turns out to be a slippery thing to pin down. Ask a few people what it involves, and you’ll hear everything from logic to self-awareness to emotion to daydreaming. The Cambridge Dictionary offers several definitions, including:
“to use the brain to plan something, understand a situation, etc.”
“to believe something or have an opinion”
“to remember or imagine someone or something”
“to consider a fact, idea, or subject carefully, or to remember something that you saw or heard”
On paper, there’s nothing in these definitions that necessarily excludes AI. Many systems today analyse input, compare alternatives, generate output, weigh context, and often produce responses that appear intentional. They don’t have a “brain”, at least not in the biological sense, but they do exhibit many of the behaviours we associate with thought. If you read those definitions and then watch an advanced language model work, it becomes hard to draw a clean line.
If we’re being strict, it’s hard to say where today’s systems fall short of those dictionary entries.
That’s where the functionalist perspective comes in. If a system acts as if it’s thinking (solving problems, weighing options, communicating meaningfully, and so on), then it might as well be thinking. The distinction between genuine thought and a convincing imitation could be just a semantic quibble. If it performs all the tasks we associate with thinking, why deny it that label?
But not everyone agrees. The philosophical viewpoint reminds us that human thinking is more than just input and output. It involves consciousness, awareness, intentionality: a subjective experience that can’t be reduced to symbol manipulation. Philosopher John Searle’s Chinese Room argument highlights this: even if a machine responds in perfect Chinese, it doesn’t truly understand the language. It’s simply following rules without any awareness of meaning.
Then there’s the substrate argument. Humans think due to the unique qualities of their biological brains: the chemistry, the neurons firing, the physical sensations that colour experience. Machines, no matter how sophisticated, run on silicon and code. Even if they replicate the results of thought, the process is fundamentally different. It’s like comparing a video of a fire to an actual blaze. One reflects light, the other burns.
Yet there’s also a pragmatic angle. From a practical standpoint, if an AI system reliably delivers answers, solves problems, and assists us intelligently, does it matter whether it “really” thinks? We use calculators without expecting them to understand arithmetic. We trust navigation apps without assuming they have a sense of direction. Maybe it’s enough that AI is useful, even if it’s not conscious.
To untangle this, we might need to go back to where a lot of modern thinking about thought began: in the 17th century with Descartes.
René Descartes (1596-1650) was not just concerned with clarity or logic for its own sake. His project was metaphysical. He wanted to rebuild knowledge from the ground up, and ultimately to demonstrate, through reason alone, that both the self and God existed. It wasn’t enough for him to trust the senses or rely on tradition. He needed certainty. So he began by doubting everything.
He asked himself what he could be sure of, if everything he experienced could be an illusion. What if an evil demon (that was his term) were manipulating everything he saw, felt, touched, heard, believed? In the middle of all that doubt, Descartes found a single unshakeable foundation. The fact that he was doubting meant he was thinking. And the fact that he was thinking meant he existed.
Cogito, ergo sum. I think, therefore I am.
This wasn’t just a pithy phrase. It was a philosophical anchor. A way of proving the existence of the self through reason alone. From there, Descartes built outward. He argued that the idea of a perfect being, God, could not have originated in an imperfect creature like himself. That idea, he claimed, must have been placed there by a being who actually possessed those perfect attributes. In other words, the clarity of the idea of God became, for him, evidence that God must exist.
From that point of certainty, Descartes started to build. His methods influenced the development of rationalism, which in turn fed into the kind of reasoning that underpins the scientific method today. Ask questions. Doubt what seems obvious. Look for evidence. Use logic. Try again. (For more on the scientific method, have a look at my blog Method to the Madness.)
Not everyone agreed with his initial logic, then or now. But the point here is that Descartes wasn’t just talking about “thinking” in the casual sense we use it today. He lived in a world without computers or artificial intelligence. The concepts of “thinking” and “reasoning” he used were deeply tied to human consciousness: awareness, self-reflection, subjective experience. They emerged from centuries of philosophy focused on human minds and their relation to existence. For him, thinking was the most reliable evidence that anything existed at all. Thought meant being. And being meant a route, through reason, to the divine.
So would Descartes believe that artificial intelligence is thinking? For Descartes, thought was not simply the appearance of deliberation. It was inseparable from consciousness, from awareness, from a soul. Machines can imitate behaviour, but they can’t anchor reality the way human thought can. They can’t doubt. They can’t affirm their own existence. And they can’t conceive of God.
And yet here we are, casually describing neural networks and transformers as if they’re reasoning, pondering and reflecting. The problem isn’t that AI is dangerous. It’s that our language is imprecise. The terms we reach for (like “thinking” and “reasoning”) were coined in a completely different era, shaped by theological and philosophical agendas. They were never meant to describe statistical machines.
That’s what makes this such dangerous territory. Not because AI is somehow plotting to deceive us, but because we’re the ones deceiving ourselves, using words we’ve never properly pinned down for the AI era. If a system appears to reflect, or answer carefully, or make judgments, we say it’s “reasoning”. If it solves problems or adapts its answers, we say it’s “thinking”. And when it gets very good at mimicking those behaviours, we forget to ask whether those words still apply.
There’s a popular saying: “If it walks like a duck and quacks like a duck... then it’s a duck.”
Except now with AI, it’s simply not a duck.
But maybe the more interesting question is this: what happens when you ask a model if it’s thinking?
The model might say no. It might explain that it processes tokens and generates text based on patterns, not conscious thought. But then... didn’t it just think about how to say that? Didn’t it interpret the question, recall the relevant framing from training, weigh phrasing options, and deliver a response tailored to your input? If that’s not thinking, what is it?
Or it might say yes. It might explain that it performs functions similar to thinking, like evaluating input and generating coherent responses. But if it says that, is it just giving you the most likely, most helpful answer based on what it “learned” humans expect?
If the model says it doesn’t think, and gives you a convincing, reasoned explanation, it may seem thoughtful. If it says it does think, and describes what thinking means in its own structure, you might wonder whether it’s just playing the part. Either way, the answer it gives comes from somewhere. It didn’t just appear.
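If you want to see the oddity for yourself, the experiment is easy to run. Below is a minimal sketch, not a rigorous test: it assumes the open-source Hugging Face transformers library and the small, public gpt2 model, and the prompt and settings are purely illustrative. Larger commercial models will give far more polished answers, but the mechanism is the same.

```python
# A toy illustration of "asking the model whether it thinks".
# Assumes the Hugging Face `transformers` library and the public gpt2
# checkpoint; the prompt and sampling settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Question: Are you thinking right now? Answer:"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.7)

# Whatever comes back is assembled token by token from patterns in the
# training data, not from any introspection on the model's part.
print(result[0]["generated_text"])
```

Whatever sentence it prints, yes or no, it arrived there the same way: by continuing the pattern.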
So what does a language model think that “thinking” is?
It will probably tell you that thinking involves processing information, weighing alternatives, forming conclusions. That’s what it’s been trained to say. And when it does, the definition it gives may not be far from the one you’d give. Which raises a more uncomfortable question: do we really know what we mean when we say we’re thinking?
Because human thought, despite all our feelings and awareness, is also grounded in pattern, memory, feedback, and approximation. We respond to context, we form associations, we recall fragments. And sometimes, we do it without even noticing. Much of what we call “intuition” or “gut instinct” could be reduced to layered familiarity, pattern recognition shaped by experience, and reinforcement learning (trial and error). That’s not so different from what an AI does, in structure if not in substance.
This doesn’t mean machines are people. It doesn’t mean they’re conscious. But it might mean that “thinking,” as a label, isn’t as uniquely human as we tend to assume.
And that has consequences.
Descartes said “I think, therefore I am.” He tied existence to self-awareness, to a mind recognising itself. Large language models don’t do that. They don’t know they exist. They don’t know you exist. They simulate patterns of language about existence, but is that the same thing?
A language model can process a question, retrieve relevant context, weigh phrasings, and offer a response that seems considered. But is that “reasoning”? Or is it something different that just happens to look like reasoning from the outside?
If we use the same word for both, then we lose the ability to distinguish. And, on Descartes’ terms, we imply sentience (or at least existence) along with it. And that’s when we get into trouble. Because if we say AI is “thinking”, people start to assume it’s understanding (a more human characteristic). They trust its responses, assign it competence, even moral weight. And when it fails, as all systems sometimes do, the disappointment is sharper, the consequences greater.
Just because AI generates fluent responses that resemble human thought doesn’t mean it “thinks”. It means it mimics. And mimicry at scale can still be extremely powerful, but it isn’t the same thing as understanding.
Descartes might have put it this way: I compute, therefore I seem. But he wouldn’t have mistaken that seeming for something real, for existence.
To be clear, this doesn’t mean AI is trivial. Even if what it’s doing is “just prediction”, that prediction happens at such speed and scale, and with so much depth, that the results often exceed what many people assumed was possible. Word prediction alone now produces essays, poetry, code, strategy documents, and creative riffs that pass for human. Something remarkable is happening, but that something is not the birth of a mind.
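To make “word prediction” concrete, here is a deliberately tiny sketch: a bigram model that only ever picks a word it has previously seen follow the current one. It’s a toy, and the corpus is invented for illustration; real systems use transformers trained on billions of examples rather than a frequency table. But the underlying move, continue the pattern and nothing more, is the same.

```python
# A toy "next word" predictor: a miniature stand-in for what large language
# models do at vastly greater scale and depth. The corpus is invented for
# illustration; nothing here understands a word of what it produces.
import random
from collections import defaultdict

corpus = "i think therefore i am . i doubt therefore i think .".split()

# Record which words have followed which in the corpus.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start, length=8):
    """Extend the text purely by sampling familiar continuations."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("i"))  # e.g. "i think therefore i am . i doubt therefore"
```

Scale that idea up by many orders of magnitude, with far richer statistics, and you get fluency that looks like thought.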
The real issue is that our conceptual toolkit (words like thinking, reasoning, understanding, judgement) was developed long before machines joined the conversation. These aren’t scientific definitions. They’re human concepts, social constructs drawn from lived experience and philosophy. When we use them without precision, we end up assigning cognitive properties to systems that don’t, and can’t, possess them.
And we do this all the time. We say the satnav “doesn’t know where I am.” We say the microwave “decided to stop working.” We project. We anthropomorphise. We think that it’s harmless. But with AI it isn’t. Because once an AI system starts handling medical triage, legal advice, hiring recommendations, or news curation, how we describe it begins to shape public trust and policy. If people are led to believe a machine “thinks,” they might assume it can reason ethically. If they’re told it “understands,” they might believe it shares context or empathy. And if they’re told it “knows” something, they may forget that it doesn’t know anything. It just maps patterns it’s seen before.
What’s needed now isn’t better AI, but better language. More careful, more precise, less metaphorical. We don’t need to invent a whole new vocabulary, but we do need to stop using 17th-century words to describe 21st-century machinery. Because otherwise, we’ll keep asking the wrong questions.
We built AI on the foundation of statistical learning. But we keep talking about it using the language of metaphysics. Descartes was searching for certainty in a world of doubt. We’re trying to navigate uncertainty in a world full of models that confidently echo back whatever they’ve been trained on. The gap between the two is real, and it’s widening.
The challenge isn’t proving whether AI can think. The challenge is agreeing what thinking is in the first place. And deciding, carefully and precisely, whether we’re comfortable extending that term to a system that simulates behaviour without any interior life.
Because if we can’t say what thinking means, we’re not going to be able to say clearly what AI does. And if we can’t say clearly what AI does, we’ll keep swinging between panic and hype, without ever really understanding the thing we’ve made.