AI Compliance Frameworks: Fact or Fiction

This article builds on the From Code to Consequence panel discussion held in July 2025, where experts from healthcare, public policy, and data governance came together to explore the real-world implications of AI.

Artificial intelligence is no longer a distant concept. It is embedded in decision-making, operations, and customer interactions across industries. But as adoption accelerates, a critical question remains:

Are AI regulatory compliance frameworks already in place?

That question was posed early in the From Code to Consequence discussion. The debate that followed framed the topic the right way, because the law follows the use, not the other way round. To answer whether rules are “already in place,” the panel first unpacked what we are even talking about. One speaker said, “No one really knows what AI is.” That wasn’t a joke. Some tools simply follow clear rules. Others, like text or image generators, guess the next word or pixel based on patterns in data. These systems don’t “know” facts; they work on likelihoods.

That is why the panel kept repeating a simple warning: “You only have to be wrong once.” A polished paragraph, a confident score, or a neat label can still be wrong. When it affects hiring, loans, or clinical triage, one bad output is enough to cause real harm.

This matters for law. If a tool sits quietly in the background and nudges hiring, grading or triage, the legal duties track the impact on people. If the tool is used in advertising or on websites, cookie and tracking rules may be the pinch point. The conversation repeatedly returned to who owns the decision. “AI is a tool… the responsibility lies usually with the organization that’s actually put that AI in place.”

One panellist summed it up as a mindset problem. “Governance is a good way to unlock opportunities but do it in a safe, well managed fashion.” Another warned that real-world harm can end careers and close programmes, because “you only have to be wrong once.”

With that in mind, where do rules apply today? There are three layers most teams will meet. The first is Regulation (EU) 2024/1689, better known as the EU AI Act: a new set of EU-wide rules focused on how certain AI uses affect people. These rules group uses by risk, ban some outright, and set extra duties for areas like employment, education and health. The law is already in force, with bans and duties phasing in across 2025, 2026 and 2027. Serious breaches can mean penalties of up to €35 million or 7% of worldwide turnover, whichever is higher. If a system is sold into the EU or its outputs are used there, these rules can apply even if the maker is based in the UK. Examples of banned practices include social scoring and certain live facial recognition in public places for policing. For listed higher-risk uses, a human must be able to check and override, the maker must keep records of how the system is intended to be used, and it must be testable in practice.
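
To make the risk-tier idea concrete, here is a minimal sketch of an internal AI use-case register, loosely modelled on that risk-based structure. The tier names, example uses and listed controls are illustrative assumptions for this article, not a legal classification of any real system.

```python
# A minimal sketch of an internal AI use-case register, loosely modelled on the
# risk-based structure described above. Tier names, example uses and controls
# are illustrative assumptions, not a legal classification.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned practices, e.g. social scoring
    HIGH_RISK = "high_risk"     # listed uses such as employment, education, health
    LIMITED = "limited"         # lighter transparency duties
    MINIMAL = "minimal"         # everything else


@dataclass
class AIUseCase:
    name: str
    description: str
    tier: RiskTier
    controls: list[str] = field(default_factory=list)


register = [
    AIUseCase(
        name="cv-screening-assist",
        description="Ranks applications before a recruiter reviews them",
        tier=RiskTier.HIGH_RISK,
        controls=["human review and override", "record of intended use", "task-level testing"],
    ),
    AIUseCase(
        name="internal-policy-chatbot",
        description="Answers staff questions about internal policies",
        tier=RiskTier.LIMITED,
        controls=["label output as AI-generated"],
    ),
]

for use_case in register:
    if use_case.tier is RiskTier.PROHIBITED:
        raise ValueError(f"{use_case.name}: prohibited practice, do not deploy")
    print(f"{use_case.name}: {use_case.tier.value} | controls: {', '.join(use_case.controls)}")
```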

The second layer is about personal data, which is often used within, or by, AI models. Inside the EU you have the EU GDPR. In the UK you have the UK GDPR working together with the Data Protection Act 2018. These rules say you need a solid reason to use someone’s data, you must tell people in plain terms what you are doing, you must keep data safe, and you must offer a route to challenge certain automated decisions and get a human involved. The UK’s regulator, the ICO, confirms these duties continue to apply after Brexit.
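
As a rough illustration of the record-keeping these duties imply, the sketch below captures, for one hypothetical use of personal data, the purpose, the lawful basis, where people are told what is happening, and the route for challenging an automated decision. The field names and values are assumptions made for this article, not legal advice.

```python
# A minimal sketch of a processing record for one hypothetical AI-assisted use
# of personal data. Field names and values are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProcessingRecord:
    purpose: str
    lawful_basis: str                 # e.g. "contract", "consent", "legitimate interests"
    notice_given: str                 # where people are told, in plain terms
    involves_automated_decision: bool
    challenge_route: Optional[str]    # how a person asks for a human to get involved


loan_prescreen = ProcessingRecord(
    purpose="Pre-screen loan applications with a scoring model",
    lawful_basis="legitimate interests (assessment documented and kept on file)",
    notice_given="Privacy notice shown at the application stage",
    involves_automated_decision=True,
    challenge_route="Applicant can ask the lending team for a human review of the score",
)
```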

The third layer lives in the websites and apps that may be integrated with AI. The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) deal with cookies and similar technologies. If your service stores or reads information on someone’s phone or computer, these rules apply. In general you need the person’s consent unless the tool is strictly necessary for the service they asked for.
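
In practice that consent rule usually comes down to a simple gate: nothing optional runs until the person has actively chosen it. The sketch below assumes a handful of illustrative cookie categories; the category names and the shape of the consent record are assumptions, not taken from the regulations.

```python
# A minimal sketch of the consent gate described above: nothing that stores or
# reads information on a device runs unless it is strictly necessary for the
# service, or the person has actively chosen it. Category names are assumptions.
STRICTLY_NECESSARY = {"session", "load-balancing", "consent-choice"}
OPTIONAL = {"analytics", "advertising", "personalisation"}


def allowed_categories(consent_choices: dict | None) -> set:
    """Return the cookie/tracker categories that may run for this visitor."""
    allowed = set(STRICTLY_NECESSARY)      # always permitted: needed for the service
    if consent_choices is None:            # no choice made yet: load nothing optional
        return allowed
    allowed |= {cat for cat in OPTIONAL if consent_choices.get(cat)}
    return allowed


# Before any banner interaction, only the strictly necessary set is active.
print(allowed_categories(None))
# After the visitor opts in to analytics only:
print(allowed_categories({"analytics": True, "advertising": False}))
```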

The panel kept pulling the conversation back to two ideas that make those legal layers real: clear decision rights and clear records. One speaker said it in a single line: “There can be no executive authority bestowed upon artificial intelligence.” Another connected this to documented use: for higher impact areas you need “human oversight and control,” plus “documentation and traceability” so people can follow the chain of reasoning at a process level. The focus is not on exposing source code. It is on showing how the tool is used, where mistakes might creep in, and where a person steps in when needed.
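
One way to make those two ideas operational is to require that every model recommendation passes through a named person before it becomes a decision, with both steps recorded. The sketch below is a minimal, hypothetical illustration of that pattern; the names and fields are assumptions, not a prescribed workflow.

```python
# A minimal, hypothetical sketch of "no executive authority for AI": the model
# can recommend, but nothing becomes a decision until a named person reviews it,
# and both the recommendation and the sign-off are recorded.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Recommendation:
    case_id: str
    suggestion: str
    model_version: str


@dataclass
class Decision:
    case_id: str
    outcome: str
    decided_by: str             # the accountable human, never the model
    based_on: Recommendation    # documentation and traceability at process level
    decided_at: str


def finalise(rec: Recommendation, reviewer: str, accept: bool, override: str = "") -> Decision:
    """A decision exists only once a human has reviewed the recommendation."""
    outcome = rec.suggestion if accept else override
    if not outcome:
        raise ValueError("Rejecting a recommendation requires an alternative outcome")
    return Decision(
        case_id=rec.case_id,
        outcome=outcome,
        decided_by=reviewer,
        based_on=rec,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )


# Example: the tool suggests, a named reviewer decides, and the record keeps both.
rec = Recommendation(case_id="A-1042", suggestion="shortlist for interview", model_version="v3.1")
decision = finalise(rec, reviewer="recruitment lead", accept=True)
```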

The panel also described what goes wrong when teams treat rules as a last step. AI tools drift away from accuracy without anyone noticing. Processes slow or break. People over-trust neat outputs. Data use crosses a legal line. Trust with users is damaged and stays damaged. Security seams where systems connect become weak points. That is why one panellist said, “Governance is a good way to unlock opportunities but do it in a safe, well managed fashion.”

Real-world examples made this concrete. The panel talked about how bias in training data can influence decisions in finance, affecting loan approvals, and how clinical tools need stronger checks because patient impact is direct. They also pointed out that in health, AI-driven support can be classed as a medical product, so it must pass the same kind of safety and quality checks people already know in that sector.

Another thread from the discussion was the mix of old and new rules that already apply. One speaker listed the familiar names that keep turning up on project teams: domain experts, data scientists, product managers, people covering ethics and compliance, and security specialists. The punchline wasn’t about job titles. It was that each of those roles needs to understand where the law actually bites on their piece of the work. For example, the person responsible for customer-facing design should know that the cookie banner must not load optional trackers before a choice is made. The person writing up a hiring or grading support tool should know when and how a person can ask for human review, and how that review point is recorded.

At this point it helps to lay out, in plain terms, what the EU AI rulebook expects for certain uses and who it can reach. If you sell a tool into the EU, or your outputs are used there, you should assume these rules may apply. Banned practices are off the table. For listed higher-risk areas, expect duties around design records, a named person who can answer “how does this work here,” testing that fits the real task, and a clear human check at the right point in the flow. Some parts already apply, and more arrive through 2026 and 2027.

The discussion noted that the UK is not automatically covered by the EU’s AI rulebook. If a tool stays within the UK market, those EU specific duties do not apply. But the moment a system is placed on the EU market or its outputs are used there, the EU rulebook can apply, even to a UK supplier. That is how the law is written.

For UK-only services, the UK laws on data and cookies continue to set the day-to-day guardrails. On top of that, there is a new UK Act that tweaks several areas. The Data (Use and Access) Act 2025 doesn’t replace the older laws; it updates them in places. It makes automated decision rules more flexible in the UK by allowing certain machine-only decisions that have a big effect on someone, so long as there are safety nets: you tell people, you give them a route to challenge, and a human can step in. It also plans to relax cookie consent for some low-risk uses like simple analytics and site features, though these changes only take effect once the government formally brings them into force. The UK-specific changes under the Act arrive in steps, and the UK regulator’s pages and the government plan set out the schedule in detail. Some early provisions started in August 2025. The main UK data changes, including the updates to machine-only decisions and some cookie relaxations, are planned for around December 2025, with further steps into early 2026. So always check the regulator’s pages for what is live now versus what is still to come.

A separate but related point is how personal data moves between the EU and the UK. At the moment there is an EU decision that allows data to keep flowing to the UK without extra paperwork. That decision was extended to 27 December 2025 while the EU checks that the UK’s new Act still meets the standard. For planning, that means you can keep moving data, but keep an eye on the review later in 2025.

The discussion also covered the idea of “explainability,” but not as an abstract buzzword. The panel used a very human example: a doctor talking to a patient while wearing smart glasses that record details. The tool might suggest a likely diagnosis. The doctor then says what they think, orders the medicine, and a pharmacist checks it. In other words, the tool supports a person who remains responsible. One speaker put it practically: the records you keep should tell people the inputs, the steps taken, where the tool is involved, and where checks happen. The goal is a clear story of use that someone can follow.
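
Based on the panel’s smart-glasses example, a minimal sketch of that “clear story of use” might look like the ordered log below. The step descriptions and field names are assumptions made for illustration.

```python
# A minimal sketch of the "clear story of use" described above: an ordered log
# of inputs, steps, where the tool was involved, and where human checks happened.
# The workflow mirrors the panel's smart-glasses example; details are assumptions.
audit_trail = [
    {"step": 1, "actor": "smart-glasses tool", "action": "recorded consultation notes",
     "inputs": "patient-reported symptoms", "human_check": False},
    {"step": 2, "actor": "smart-glasses tool", "action": "suggested a likely diagnosis",
     "inputs": "consultation notes", "human_check": False},
    {"step": 3, "actor": "doctor", "action": "reviewed the suggestion and recorded own diagnosis",
     "inputs": "tool suggestion plus clinical judgement", "human_check": True},
    {"step": 4, "actor": "doctor", "action": "prescribed medication",
     "inputs": "diagnosis", "human_check": True},
    {"step": 5, "actor": "pharmacist", "action": "checked the prescription before dispensing",
     "inputs": "prescription", "human_check": True},
]

for entry in audit_trail:
    marker = "HUMAN CHECK" if entry["human_check"] else "tool step"
    print(f"{entry['step']}. [{marker}] {entry['actor']}: {entry['action']}")
```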

At this point, the panel made an important observation: “If you explain it in that way then anybody who’s looking at it will go… AI here is being used as a decision support mechanism… It’s a medical device and therefore it has to go via the MHRA to be checked etc. to ensure that it meets all these particular criteria of a medical device.”

That means that if smart glasses do more than record, and instead analyse symptoms and suggest diagnoses, they need to be treated in law as a medical device. This triggers strict rules in every major market. In the UK, that means meeting the MHRA’s medical device requirements before the product can be used. In the EU, it means compliance with the Medical Device Regulation and the new AI law, which classifies such tools as “high-risk.” In the US, it means clearance or approval by the FDA under its medical device framework. All three systems require evidence that the device is safe, performs as intended, and includes human oversight. They also require ongoing monitoring after launch.

So, while the panel’s example made explainability sound like common sense, in reality it is often a legal requirement. If you build or deploy such a tool, you are stepping into one of the most heavily regulated spaces in tech. The rules are there for a reason: when a device influences diagnosis or treatment, the margin for error is zero. And as the panel kept reminding us, “You only have to be wrong once.”

The most sobering part of the conversation was about testing. A lot of teams still lean on the same checks they used for regular software: unit tests, user tests, maybe a security scan. The panel called this out as too thin for AI. You need to test not only “does it do what we expect,” but also “what happens when inputs are odd or drift over time,” and “what happens when people start using the output in ways we didn’t plan.” It is not enough to test once and sign off. You keep an eye on real-world behaviour and adjust.
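
One concrete way to keep that eye on real-world behaviour is to compare the live distribution of model scores against a baseline captured at sign-off, and flag drift when the shift gets too large. The sketch below uses the population stability index with a common rule-of-thumb threshold of 0.2; the threshold, the synthetic scores and the check itself are illustrative assumptions, not something the panel prescribed.

```python
# A minimal sketch of ongoing monitoring, beyond one-off unit tests: compare the
# live distribution of model scores against a baseline captured at sign-off and
# flag drift. The 0.2 PSI threshold is a common rule of thumb, used here as an
# assumption rather than a regulatory requirement.
import numpy as np


def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Rough PSI between two score samples; higher means more drift."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], live.min()) - 1e-9    # widen outer edges so all values are counted
    edges[-1] = max(edges[-1], live.max()) + 1e-9
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    base_pct = np.clip(base_pct, 1e-6, None)       # avoid log(0) and division by zero
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))


rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.5, 0.1, 5_000)      # scores captured at sign-off
live_scores = rng.normal(0.58, 0.12, 5_000)        # scores observed this month

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:
    print(f"PSI {psi:.2f}: investigate drift before trusting new outputs")
else:
    print(f"PSI {psi:.2f}: within tolerance, keep monitoring")
```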

Rules also need a place to live in your organisation. One of the panellists gave a memorable shorthand for this, the “Dr Pepper” test: “what’s the worst that could happen?” The point wasn’t to scare people. It was to force clear answers about who owns the decision, who watches for changes over time, and what exact steps are taken when something looks off. It turns the big idea of accountability into names, dates, and actions that can be checked later.
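
Turned into something checkable, that test might produce a record like the sketch below: a worst case written down, a named owner, a named monitor, and exact escalation steps. Every field and value here is an illustrative assumption about how one team might record it.

```python
# A minimal sketch of turning the "Dr Pepper" question into names, dates and
# actions that can be checked later. Every field and value is an illustrative
# assumption, not a prescribed template.
from dataclasses import dataclass


@dataclass
class AccountabilityRecord:
    system: str
    worst_case: str              # the honest answer to "what's the worst that could happen?"
    decision_owner: str          # the named person who owns the outcome
    monitor: str                 # who watches for drift and odd behaviour
    review_cadence: str
    escalation_steps: list       # exactly what happens when something looks off


triage_assist = AccountabilityRecord(
    system="clinical-triage-assist",
    worst_case="A patient is deprioritised because of a wrong suggestion",
    decision_owner="Head of Clinical Operations",
    monitor="Clinical safety officer, weekly output review",
    review_cadence="Monthly governance board",
    escalation_steps=[
        "Pause the tool for the affected pathway",
        "Revert to the manual triage checklist",
        "Log the incident and notify the decision owner the same day",
    ],
)
```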

So back to the question the panel were confronted with: Are AI compliance rules already in place? Yes, but you have to look in the right places and match them to the use in front of you. The panel’s message was steady throughout. Keep a person in charge of the outcome. Make sure there is a clear point where that person can step in. Keep records that explain what the tool does, where it sits in the task, and how you know it still behaves as intended. Test for the expected and the unexpected. And remember the one liner that kept coming back: “You only have to be wrong once.”