This article is part of the From Code to Consequence series, which explores the practical realities of artificial intelligence in organisational settings. It draws on the July 2025 panel discussion, where experts in data governance, healthcare analytics, and public sector transformation tackled a deceptively simple question: What do we really mean by “AI”? The answer, it turns out, is anything but simple. And the consequences of misunderstanding it are already playing out across sectors.

The term “AI” has become so diluted by marketing and hype that it now functions more as a branding tool than a technical descriptor. One panellist described the absurdity of this trend with a wry observation:

“I was looking at an AI-enabled toothbrush the other day. I’ve got no idea how that could possibly work. Does it brush your teeth automatically? What does it do?”

This wasn’t just a joke; it was a critique of how the term has been stretched to cover everything from basic automation to complex decision-making systems. The problem is not just semantic. When organisations deploy “AI” without understanding what kind of system they’re dealing with, they risk applying inappropriate governance, misjudging the system’s capabilities, and failing to anticipate its limitations. A rule-based scheduling tool and a generative model trained on billions of documents are not the same thing, yet both are routinely labelled “AI.” This lack of precision undermines oversight and makes it difficult to assess risk, assign accountability, or even explain what the system is doing. The result is a landscape where the term “AI” obscures more than it reveals, and where organisations are left guessing about the nature of the tools they are using.


The panel offered a helpful framing: AI is not a monolith. It spans a wide spectrum of approaches, from deterministic systems that follow explicit rules to probabilistic models that generate outputs based on statistical likelihoods. At one end are symbolic systems: those built on “if this, then that” logic. These are often used in diagnostics, smart devices, and basic decision support. They are predictable, auditable, and relatively easy to test. At the other end are generative systems, which rely on statistical pattern matching. These systems do not “understand” anything in the human sense. They operate by calculating what is most likely to come next, based on vast amounts of training data. As one panellist explained,

“Most of what we use is what we call generative AI… it’s all based around probabilistic pattern matching. The clue is in the word probabilistic. It’s not about certainties.”

This matters because probabilistic systems can produce different outputs for the same input. They can invent facts, misinterpret queries, and fail in ways that are difficult to detect. Unlike rule-based systems, they cannot be fully predicted or exhaustively tested. This makes oversight more complex and more urgent. For example, a chatbot trained on customer service transcripts might respond accurately most of the time, but occasionally fabricate a policy or misquote a regulation. Without a mechanism to detect and correct these errors, the system becomes a liability.
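
To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The rule-based function always returns the same answer for the same input, so it can be tested exhaustively; the toy “generative” stand-in samples its answer from a probability distribution, so identical inputs can produce different outputs. The triage scenario, threshold, and probabilities are illustrative assumptions, not anything described by the panel.

```python
import random

# Deterministic, rule-based ("if this, then that") system:
# the same input always produces the same output, so it can be
# audited and tested exhaustively.
def rule_based_triage(temperature_c: float) -> str:
    if temperature_c >= 38.0:
        return "flag for clinical review"
    return "no action"

# Toy stand-in for a probabilistic, generative system:
# the answer is sampled from a distribution, so the same input
# can yield different outputs on different runs.
def probabilistic_triage(temperature_c: float) -> str:
    plausible = "flag for clinical review" if temperature_c >= 38.0 else "no action"
    implausible = "no action" if plausible == "flag for clinical review" else "flag for clinical review"
    # Mostly returns the plausible answer, but occasionally does not.
    return random.choices([plausible, implausible], weights=[0.95, 0.05])[0]

print(rule_based_triage(38.5))                             # always the same answer
print({probabilistic_triage(38.5) for _ in range(1_000)})  # usually both answers appear
```

The point is not that real generative models flip a weighted coin, but that their outputs are sampled rather than determined, which is why identical queries can yield different, and occasionally wrong, answers.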

One of the most dangerous misconceptions about AI is that it “understands” the world. It doesn’t. As the panel made clear,

“It has no idea of what it’s just said. It’s got no idea of the context. It’s got no idea of who you are. It’s got no idea of the world it’s operating in. It’s just a statistical model.”

This lack of awareness is not a flaw; it is a feature. These systems are designed to generate plausible outputs, not to reason about their correctness. Yet their responses often appear polished, confident, and authoritative. This creates a veneer of reliability that can be deeply misleading. In high-stakes environments such as healthcare, finance, or criminal justice, this illusion of certainty can lead to serious harm. “You only have to be wrong once,” the panel warned.

“It’s not that it’s right nine million times. It’s the fact that it’s wrong once.”

That single error, if unchallenged, can result in a misdiagnosis, a wrongful arrest, or a denied mortgage. And if no one is watching, it may not be caught until it is too late. Consider a triage system that misclassifies a patient’s symptoms due to a subtle shift in data patterns. If clinicians rely on the output without question, the consequences could be fatal.


During the discussion, one participant asked, “Isn’t that just the flip side of human decision-making? There’s no certainty there either.” It’s a fair point. Humans make mistakes. We misjudge, misremember, and miscalculate. But we also have something AI lacks: context.

“You use your brain,” one panellist said. “You look at results and go, ‘Nah, that’s just not relevant.’ Or, ‘That was 10 years ago.’ Or, ‘That’s written by someone who talks nonsense.’”

Humans can evaluate sources, weigh evidence, and adjust their thinking. AI cannot. It will confidently produce an answer, even if that answer is wrong. And unless someone is watching, that error may go unnoticed until it causes harm. This difference is not academic. It’s operational. It determines whether a system can be trusted, and under what conditions. For example, a doctor might use AI to support diagnosis, but they will also consider the patient’s history, symptoms, and context. AI cannot do that. It can only calculate probabilities based on patterns it has seen before. The human ability to apply judgment, experience, and ethical reasoning remains irreplaceable.

Without a clear definition of what AI is, organisations cannot govern it. They cannot test it. They cannot explain it. And they cannot be accountable for it.

“Responsibility doesn’t lie with the AI itself,” the panel stressed. “AI is only a tool. It can’t think, feel, or make ethical judgments.”

Accountability must be assigned before deployment. If it’s not written down, it doesn’t exist. This is not just a philosophical point. It’s a practical one. If a system produces an error, who is responsible? The developer? The implementer? The user? Every organisation must ask: What’s the worst that could happen? This mindset encourages proactive thinking about risk and forces teams to consider the consequences of failure. It also helps clarify who is responsible when things go wrong. For example, if an AI system used in recruitment filters out candidates based on biased data, is the fault with the model, the data, or the organisation that deployed it? Without a clear definition of what the system is and how it works, these questions remain unanswered and accountability remains diffuse. The panel likened this to hiring a new employee.

“You wouldn’t just say, ‘Go and make decisions.’ You’d check. You’d train. You’d supervise. AI should be treated no differently.”


The panel warned against the hype that surrounds AI.

“There’s so much hype around that says AI will do this, AI will do that, AI will do the other. But people are just jumping in and using it without understanding the implications.”

This is not a call to slow down. It’s a call to wise up. Organisations are rushing to adopt AI tools without understanding the underlying models, the data they rely on, or the risks they introduce. This is especially dangerous when AI is used to make decisions that affect people’s lives.

“If you’re taking results and saying that’s the answer and you’re not checking it, the responsibility then lies with you.”

Yet in many organisations, AI is deployed without oversight, without testing, and without a clear understanding of its limitations. This is not innovation. It’s recklessness. Consider a financial institution using AI to assess loan applications. If the model is trained on biased historical data, it may systematically exclude certain groups. Without proper governance, the institution may not even realise the harm it is causing until it faces legal action or public backlash.
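
As a hedged illustration of what “proper governance” might involve here, the sketch below compares approval rates across two groups in a model’s decisions and flags a large gap for human review. The data, group labels, and 80% threshold are hypothetical assumptions chosen for illustration; this is one simple monitoring check, not a complete fairness audit or a regulatory standard.

```python
# Hypothetical log of the model's decisions; in practice this would
# come from the institution's own records.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    # Share of applications in this group that the model approved.
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

rate_a, rate_b = approval_rate("A"), approval_rate("B")

# Illustrative threshold: flag for human review if one group's approval
# rate falls below 80% of the other's.
if min(rate_a, rate_b) < 0.8 * max(rate_a, rate_b):
    print(f"Review needed: approval rates diverge (A={rate_a:.0%}, B={rate_b:.0%})")
```

A check like this does not explain why the gap exists, but it turns “we may not even realise the harm” into a question someone is accountable for answering.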

The panel was unanimous: the first step is education.

“We talked about the real importance of data literacy five, ten years ago,” one speaker said. “We’re now in a world where AI literacy is important as well. The two go hand in hand.”

This means training people not just to use AI, but to understand it. To know its limits. To question its outputs. To recognise when it is wrong and to act accordingly. Because AI is not magic. It is not sentient. It is not a replacement for human judgment. It is a tool. And like any tool, it must be used with care. Organisations that invest in literacy will be better equipped to assess risk, assign accountability, and deploy systems responsibly. Those that don’t will be flying blind. For example, a procurement team choosing an AI vendor must understand the difference between rule-based and generative systems. A clinical team using AI for triage must know how to interpret its outputs. Without this literacy, governance becomes reactive and often too late. The panel emphasised that literacy is not just for technical staff; it must extend to leadership, operations, and anyone involved in decision-making.


Before we can govern AI, we must define it. Not in abstract terms, but in practical ones. What kind of system are we using? What data does it rely on? What decisions does it influence? And what happens when it gets things wrong? AI without oversight is a loaded weapon in the hands of the unaware. But AI without definition is something even worse: a system we cannot see, cannot test, and cannot trust. The panel made it clear: governance begins with understanding. And understanding begins with asking the right question: not “What can AI do?” but “What do we really mean by AI?” Until we move beyond the buzzwords and answer that question, oversight will remain superficial, and organisations will continue to operate in the dark.

In the next article, we’ll explore another question raised during the From Code to Consequence panel: Where does accountability for AI sit?