The AI-Way Code

Scroll through LinkedIn and you will see plenty of people claiming to be AI experts. Some insist they alone understand AI, while others argue that no one outside their narrow speciality should have a say. This attitude is risky. AI is too complex and far-reaching for any one group to have all the answers. Its safe and effective use needs many voices to be heard.

AI is not just a technical subject. It affects people in different roles and contexts. If we want AI to be used responsibly, we need to equip people with tools that match their role, and we need to make space for different kinds of expertise. Who the “expert” is depends entirely on what question is being asked and in what setting. It is not about who has the loudest voice, but about knowing who to listen to, and when.

The model builder who understands the technical details is not always the best person to explain how to use AI safely. The policymaker who writes the rules may not be able to fix a faulty algorithm. And the everyday user may spot problems that others miss. All of them have valid contributions to make, and each can be “the expert” in the right context.

To understand this better, let’s use an analogy: think of AI as a car, and of the people involved in AI as everyone who builds, fixes, teaches, sells, drives, and regulates that car.

The key takeaway from this analogy is that no single role controls the whole car journey. The manufacturer builds the car but does not teach you how to drive. The mechanic fixes it but may not know the best route. The driving instructor guides new drivers but cannot repair the engine. The salesperson markets the car but cannot ensure safety. Drivers need to know their limits and respect the expertise of others on the road. Highway authorities and insurers provide the rules and safety nets to keep everyone accountable.

The same applies to AI. No single person or group holds all the answers. The system only works well when every role is recognised and respected. Claiming to be the only AI expert ignores the complexity involved and risks serious blind spots. Collaboration and mutual understanding are the best way to keep AI safe and useful.

Every road user in the UK also relies on the Highway Code. It sets out the rules of the road: when to stop, who has right of way, how to handle roundabouts, what road signs mean, and how to treat other road users, including pedestrians and cyclists. It does not matter whether you drive a brand-new EV or a second-hand Fiesta; the Highway Code applies to you just the same.

Then there is the user manual. This is specific to your car. It tells you how to adjust the seats, what all the buttons on the dashboard mean, how to use the air conditioning, what kind of oil your engine takes, and what to do if a warning light comes on. Without it, you can still drive but you risk using the car badly, missing early signs of trouble, or never unlocking half its features.

You need both. The Highway Code makes sure you behave safely around others. The user manual makes sure your car is roadworthy and running at its best.

With AI, we often have neither.

Most people using AI tools today (whether they are writing documents, planning lessons, or helping customers) are handed the keys without a manual. They have no idea what the dashboard lights mean, what fuel the system runs on, or what problems to look out for.

This creates two problems. First, people don’t get the best out of the tools. Second, they may not realise when something has gone wrong. In a car, you can see a red warning light and pull over. With AI, you might keep driving straight into trouble.

A user manual for an AI system should help people understand:

- what the tool is designed to do, and what it is not
- what it runs on, including what data it uses and where that data goes
- its known limits and the problems to look out for
- what the warning signs of trouble look like, and what to do when they appear

This information doesn’t need to be technical. It just needs to be accurate, honest, and specific to the tool at hand.

Beyond the individual system, we also need something like the Highway Code: shared rules and guidance that help AI users interact safely with others. It should cover things like:

- when and how to disclose that AI has been used
- who is responsible when an AI-assisted decision goes wrong
- how to treat the people affected by AI fairly
- what counts as acceptable and unacceptable use

Just like the Highway Code, this wouldn’t be about coding or technical details. It would be about norms, responsibilities, and shared expectations. For example, it might detail how to disclose when AI is used in recruitment or writing, or how to treat people fairly when AI is involved.

Let’s return to the road for a moment.

A mechanic knows how your car works under the bonnet. They can strip it down, fix faults, and put it back together again. They do need to know a bit about how the car behaves on the road (enough to test-drive it and confirm it’s working, or to pass its MOT), but their focus is on the internals. And just because they’re a good mechanic doesn’t make them a good driver.

A driving instructor, on the other hand, needs to know how to use a car safely and teach that to others. They need to understand the dashboard, the pedals, the visibility, the feel of the car on the road. But they don’t need to know the difference between two types of fuel injection. They are teaching people how to operate the vehicle, not rebuild it. A good driving instructor has to be a good driver, but that doesn’t mean they can fix the car.

And the driver? They need to follow the Highway Code and the user manual for their particular vehicle. They don’t need to be an expert in mechanics or teaching.

It’s the same with AI. Developers and machine learning specialists may be brilliant with the model architecture, but that doesn’t automatically make them good at teaching others how to use it. Instructors and trainers are experts in a different way: they can explain how to use AI tools safely and effectively, often to people with little background knowledge. And everyday users are the ones who need both the user manual for their tool and the Highway Code to stay safe and respectful among others.

We must also remember those without a steering wheel: pedestrians, cyclists, motorcyclists, and other road users including passengers. They share the space and are often the most affected by poor decisions made behind the wheel, so their voices matter; safe roads depend on considering everyone who uses or is touched by them. In an AI context, these are the people who don’t use AI themselves but are affected by its decisions: someone turned down for a job on the strength of an AI-screened CV, or whose loan application is flagged by a model they’ve never seen. They do not control the systems, yet their safety, privacy, and well-being depend on how AI is designed, deployed, and governed. Their perspective reminds us that technology’s impact reaches beyond those who build or use it, and they deserve to know what rules are in place to protect them.

So we really need two things:

- a user manual for each AI system, specific to the tool and honest about what it can and cannot do
- a Highway Code for AI, setting out shared rules for using it safely and fairly around others

These two guides serve different purposes. One keeps your “vehicle” running well. The other makes sure everyone using the “road” is doing so safely.

You don’t have to be a mechanic to drive a car. You don’t need to be a driving instructor to spot bad behaviour on the road. And you don’t need to be a software engineer to talk about how AI should be used in your life or work.

So if you’re working with or using AI, ask yourself: which lane am I in? Am I trying to be the mechanic and the driver and the salesperson all at once? Am I ignoring other lanes entirely?

If someone tells you that only one group should be allowed to speak about AI, ask them which part of the road they’re standing on. Are they under the bonnet? In the driver’s seat? On the pavement?

Safe AI use depends on recognising and respecting all roles and the people affected by the technology. Different kinds of knowledge apply in different situations. No one expert has all the answers. It takes a community working together, each knowing their lane, listening carefully, and driving with awareness of others. We need all of them. And we need a way to bring them together so we can build an AI future that works for everyone.