Avoiding the Light Touch
The question of whether light-touch AI regulation in some countries undermines the value of stricter frameworks elsewhere was a central theme of the recent From Code to Consequence panel, hosted by In the Know Limited. Over the space of an hour, the experienced panel explored a fundamental tension in how AI is being developed and deployed across the world. On one hand, some countries are pushing ahead with detailed, risk-based regulation. On the other, some are deliberately holding back, choosing to prioritise innovation and market growth over formal oversight. The question is whether those lighter approaches undermine the value of stricter ones. If a company can simply move its operations to a country with fewer rules, does that make the effort to build strong governance frameworks pointless?
The panel didn’t think so. In fact, they saw this as one of the most urgent issues facing organisations today. One speaker compared the situation to vaccination. The idea was that for regulation to be effective, it needs to be adopted widely. If only a few countries take it seriously, the benefits are limited. Risks don’t disappear. They just shift to places where oversight is weaker. Another speaker used a different analogy. They said, “There’s no point saying we’re only going to release it in this field because sooner or later the genetically modified code is going to be everything.” That comment captured the reality that AI systems don’t stay local. A tool developed in one country can be deployed in another. The data it uses might come from anywhere. And the people affected by its decisions could be on the other side of the world. In short, regulation based on geography doesn’t work when the technology itself is borderless.
This is why the panel kept returning to the idea that regulation must follow impact, not location. One speaker explained that the EU’s new AI law doesn’t just apply to companies based in Europe. It applies to any system that is used in Europe or affects European citizens. They said, “If you want to have customers overseas, you’re then going to have to fall in line with the EU AI Act because it covers citizens rather than borders.” That means even if a company is based in the UK, the US, or anywhere else, it still has to follow the EU’s rules if its AI tools are used within the EU or if they process data about EU citizens. This approach reflects a growing recognition that the consequences of AI decisions are what matter most, not where the company is headquartered.
The EU AI Act itself is not a light-touch framework. It introduces a tiered system that categorises AI systems based on the level of risk they pose to people. Some uses are banned outright. For example, real-time biometric identification in publicly accessible spaces is prohibited except in a small number of narrowly defined situations, such as specific law-enforcement uses. Other uses are classed as high-risk, including systems used in areas like employment, education, and healthcare. For these, the law requires organisations to meet additional obligations: keeping detailed records, ensuring that a human can override the system’s decisions, and being able to explain how the system works in practice. One speaker summed it up clearly: “You must have human oversight and control.” This isn’t just a suggestion. It’s a legal requirement.
The reason these rules matter is because AI systems can go wrong in ways that are subtle and hard to detect. One common issue is performance drift. This happens when a system that was accurate at launch gradually becomes less reliable as the data it uses changes over time. Without regular monitoring, these shifts can go unnoticed until they cause real harm. Another issue is the appearance of authority. AI outputs often look polished and confident. They can give the impression of certainty even when they are based on probabilities. One speaker described this as a “veneer of certainty.” It’s the illusion that just because something looks right, it must be right. That’s a dangerous assumption, especially when decisions affect real people. Without proper oversight, there’s no way to challenge or verify those outputs. And when things go wrong, the damage can be hard to undo.
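To make the idea of performance drift more tangible, here is a minimal monitoring sketch a team might run periodically. It uses the population stability index to compare the scores a model produced at launch with the scores it produces today. The function, the simulated data, and the 0.25 threshold are illustrative assumptions, not figures drawn from the panel or from any regulation.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare the distribution of a model score (or input feature) at launch
    with its distribution today. Larger values mean more drift."""
    # Bin edges come from the baseline period so both samples are
    # measured against the same reference.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    expected = np.histogram(baseline, bins=edges)[0] / len(baseline)
    actual = np.histogram(current, bins=edges)[0] / len(current)

    # Avoid division by zero / log of zero in sparse bins.
    expected = np.clip(expected, 1e-6, None)
    actual = np.clip(actual, 1e-6, None)

    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Illustrative check: the 0.25 threshold is a common rule of thumb,
# not a regulatory figure.
baseline_scores = np.random.normal(0.0, 1.0, 5_000)  # scores at launch
current_scores = np.random.normal(0.4, 1.2, 5_000)   # scores this month
if population_stability_index(baseline_scores, current_scores) > 0.25:
    print("Significant drift detected - trigger a human review of the model.")
```

The point is not the specific statistic: it is that drift only gets caught if someone has decided in advance what to measure, how often, and who acts on the alert.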
Some organisations might think they can avoid these problems by operating in countries with lighter rules. But the panel warned that this is a risky strategy. First, because the EU rules apply based on use, not location. And second, because the risks don’t go away just because the law is quieter. One speaker said, “The risks of AI are far bigger than the risks of bad data.” That’s a strong statement, but it reflects the reality that AI systems can make decisions that have serious consequences. Another speaker pointed out that fines under the EU AI Act can reach €35 million or 7 percent of global turnover. These aren’t theoretical penalties: the obligations behind them are already coming into force. Organisations that ignore them do so at their own peril.
The panel also discussed the practical side of regulation. It’s not just about having policies on paper. It’s about making sure those policies are understood and followed by the people building and using AI. That includes data scientists, product managers, compliance officers, and developers. Everyone involved needs to know where the law applies to their part of the work. For example, the person designing a customer-facing interface needs to understand when a cookie banner is required and what it must do. The person writing a hiring algorithm needs to know when and how a candidate can ask for human review. These are not abstract ideas. They are real-world requirements that affect how systems are designed, tested, and deployed.
To make this more concrete, the panel gave an example from healthcare. Imagine a doctor wearing smart glasses that record details during a consultation. The tool might suggest a likely diagnosis based on what it hears and sees. The doctor then decides what to do, and a pharmacist checks the prescription. In that case, the AI is supporting the doctor, not replacing them. But if the tool starts making decisions on its own, it becomes a medical device. And that triggers strict rules. One speaker explained, “If you explain it in that way then anybody who’s looking at it will go, oh well AI here is being used as a decision support mechanism… It’s a medical device and therefore it has to go via the MHRA to be checked etc to ensure that it meets all these particular criteria of a medical device.” That means the tool must be approved before it can be used, and it must meet safety and quality standards. This applies not just in the UK, but also in the EU and the US. All three regions have frameworks for regulating medical devices, and AI tools that influence diagnosis or treatment fall within their scope.
This example also highlights the importance of explainability. It’s not enough to say that a system works. You need to be able to show how it works, what data it uses, what steps it takes, and where a human can intervene. The panel stressed that the goal is a clear story of use that someone can follow. That means keeping records of the inputs, the steps taken, and the checks that happen along the way. It’s not about exposing the source code. It’s about showing how the tool is used and where mistakes might creep in. This kind of documentation is essential for accountability, and it’s increasingly becoming a legal requirement.
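As a rough illustration of what that “clear story of use” might look like in practice, the sketch below records a single decision: the inputs, the model version, the output, and whether a human reviewed or overrode it. The field names and the CV-screening example are hypothetical assumptions for this sketch; a real audit trail would be shaped by the organisation’s own systems and legal advice.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class DecisionRecord:
    """One entry in the 'clear story of use': what went in, what came out,
    and where a human could step in. Field names are illustrative, not taken
    from the panel or from any specific regulation."""
    system_name: str
    model_version: str
    inputs_summary: dict           # the features or prompt used, not raw personal data
    output: str                    # what the system recommended
    human_reviewer: Optional[str]  # who checked or overrode it, if anyone
    overridden: bool = False
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical example: a CV-screening tool recommends rejection and a recruiter overrides it.
record = DecisionRecord(
    system_name="cv-screening-assistant",
    model_version="2024-11-rc3",
    inputs_summary={"years_experience": 7, "role": "data engineer"},
    output="recommend reject",
    human_reviewer="recruiter_042",
    overridden=True,
)
print(json.dumps(asdict(record), indent=2))  # in practice, append to a tamper-evident audit log
```

Even a record this simple answers the questions regulators and affected individuals are likely to ask: what did the system see, what did it say, and who was in a position to intervene.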
Testing was another major theme. The panel warned that many teams are still using the same checks they used for regular software: unit tests, user tests, maybe a security scan. But that’s not enough for AI. You need to test not just whether it works as expected, but what happens when the inputs are unusual, or when people start using the output in ways you didn’t plan. You need to keep watching how it behaves in the real world and be ready to adjust. That’s part of governance too. It’s not a one-time task. It’s an ongoing responsibility.
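A minimal sketch of what that extra layer of testing could look like is shown below. The score_applicant function is a hypothetical stand-in for a deployed model, and the two checks (a valid output range on unusual inputs, and a decision that does not move when an irrelevant attribute changes) are illustrative examples rather than a complete test strategy.

```python
import pytest

def score_applicant(features: dict) -> float:
    """Hypothetical stand-in for the real model call, so the tests below can
    run on their own. In practice this would invoke the deployed system."""
    years = features.get("years_experience", 0)
    years = max(0, min(years, 40))  # clamp implausible values
    return years / 40               # crude score in the range 0-1

@pytest.mark.parametrize("unusual_input", [
    {"years_experience": 0, "role": ""},            # empty or minimal data
    {"years_experience": 150, "role": "engineer"},  # implausible value
    {"years_experience": -3, "role": "engineer"},   # corrupted upstream data
])
def test_output_stays_in_valid_range(unusual_input):
    # The system should degrade gracefully on inputs it never saw in training,
    # not crash or return something outside its documented range.
    score = score_applicant(unusual_input)
    assert 0.0 <= score <= 1.0

def test_irrelevant_attribute_does_not_change_the_decision():
    # A basic behavioural check: an attribute the model should ignore must not
    # move the score. Real fairness and robustness testing goes much further.
    base = {"years_experience": 7, "role": "data engineer", "name": "Alex"}
    variant = dict(base, name="Aleksandra")
    assert score_applicant(base) == score_applicant(variant)
```

Tests like these sit alongside, not instead of, the usual unit and security checks, and they only stay meaningful if they are rerun as the model and its data change.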
And governance isn’t just about avoiding risk. It’s also about unlocking opportunity. One speaker explained that if you understand your data and have proper oversight in place, “your path to innovation is faster.” You’re not wasting time fixing problems after the fact. You’re not stuck trying to reverse-engineer how a system works. And you’re not risking reputational damage that could take years to recover from. “You’re not wasting money. You’re not reverse engineering. You’re not stuck in rework.” That’s a powerful argument for investing in governance early, rather than treating it as a final step.
The panel didn’t see strict regulation as a barrier to innovation. They saw it as a way to enable it. “Governance is a good way to unlock opportunities but do it in a safe, well-managed fashion.” That means building systems that are reliable, explainable, and accountable. It means making sure someone is responsible for what the AI does. And it means thinking about the worst-case scenario before it happens. One speaker called this the Dr Pepper scenario, asking “what’s the worst that could happen?” before you deploy a system. That kind of thinking helps organisations prepare for problems before they occur and ensures that someone is ready to step in when things go wrong.
The panel also predicted that over time, more countries will move toward stricter rules. They pointed to GDPR as an example. That started as a European law but quickly became a global standard. Companies around the world adopted similar practices just to keep doing business with Europe. The same thing is likely to happen with AI. “Eventually, everyone else will fall in line with the EU.” That doesn’t mean every country will copy the EU AI Act exactly. But it does mean that the core ideas (like human oversight, transparency, and risk-based classification) will become the norm.
So to come back to the original question: does light-touch AI regulation elsewhere make strict rules pointless? The panel’s answer was clear. No, it doesn’t. In fact, it makes them more important. Because in a world where AI systems cross borders, the only way to protect people is to make sure the rules follow the impact. Strict regulation isn’t about slowing things down. It’s about making sure that when things go wrong, and they most certainly will, someone is watching, someone is accountable, and someone can step in.
In a world where AI systems operate across borders, industries, and lives, the idea that light-touch regulation elsewhere makes strict rules pointless is not just flawed, it’s dangerous. The From Code to Consequence panel made it clear: governance is not a bureaucratic burden, but a strategic necessity. Without accountability, explainability, and oversight, AI becomes a black box with the power to amplify bias, erode trust, and cause real harm. The EU AI Act’s risk-based approach is not a barrier to innovation but a blueprint for responsible progress. As global markets converge and citizen data becomes the currency of AI, organisations that ignore regulation today will face reputational collapse and legal consequences tomorrow. The choice is stark: lead with integrity, or be led by algorithms you no longer control.
In the next article, we’ll explore another foundational question discussed by the “From Code to Consequence” panel: What do we really mean by “AI”?