AI gets talked about like it’s one clever system. One neat thing you can just plug in and watch go. But if you’ve spent any real time around it, you’ll know that’s just not how it works. AI isn’t one brain. It’s more like a whole family. They don’t always get along, and they’re not always the sort you want showing up uninvited.
Think of it like this. You’re hosting a big family gathering. Everyone’s invited. Cousin Steve. Mum and Dad. Your older sibling who thinks they're basically in charge now. Uncle Rob, who never stops talking. Grandad with his paper folders and his suspicious glances at the thermostat. It’s all happening at your place. And if you’re not ready for what’s about to unfold, you’ll be ankle-deep in chaos by 7pm. So before you have them round, you need to do some planning, and be realistic about what you’ll have to clear up afterwards.
Same goes for using AI. Different types of systems behave differently. You can’t treat them all the same. And if you invite them in without knowing who they really are, you’ll spend most of your time trying to clean up the mess.
Steve’s enthusiastic. He shows up wearing a vintage bomber jacket and claims he once sold mushrooms to a Hollywood actor on a ferry. He’s got a lot of energy. He’ll write you a poem, paint your cat in the style of Monet, and explain 17 conspiracy theories before pudding. Unfortunately he makes stuff up. He doesn’t mean to lie. He just gets carried away. He once swore blind he met Cher in Aldi. He didn’t.
That’s generative AI. It creates content: text, images, audio, video. It can sound creative, articulate, and sometimes downright inspired. But Steve doesn’t check facts. Neither does this AI. It doesn’t know things. It’s mimicking patterns. If the patterns are dodgy, so is the output. It can be brilliant. Or it can confidently serve you nonsense in a three-piece suit.
Decide what Steve’s allowed to help with. Let him sketch ideas, not send press releases.
Have a process in place for checking what Steve creates.
Make it crystal clear to others that Steve needs supervision, and that he’s probably still influenced by the mushrooms. He’s not a serious researcher. He’s the enthusiast friend who read one blog post and now thinks he’s an expert.
Ensure your legal and comms teams know where and how Steve’s information is being used, because the last thing you want is a hallucinated quote in a board report.
Entire documents or designs that feel convincing but turn out to be wrong or plagiarised.
People saying, “But Steve (the AI) said it was true.”
Complaints from people whose names have been invented, misused, or assigned opinions they never held.
Trust issues when someone realises the beautifully written response they got from “you” wasn’t written by you at all, but by Steve.
Mum’s been watching you since birth. She knows your tells. “You’ve gone quiet. Something’s wrong.” Bring someone home and Mum will whisper, “She’s just like the one from college with the long fringe. I give it three weeks,” or “I knew you’d go for someone who plays the drums.” Dad quietly writes it all down in a spreadsheet. They’re always trying to guess what’s next, based on what’s happened before. They’re not usually wrong, but they’re not always right either.
That’s predictive AI. It uses past data to forecast future outcomes. It underpins everything from weather models to hiring tools. It studies the past. Tries to spot patterns. Makes a guess about what comes next. And often, it’s useful. But it’s still just a guess, dressed up in statistical confidence.
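If you want to see the guessing machinery with the metaphor stripped away, here’s a minimal Python sketch using scikit-learn. The “historical data” is invented purely for illustration; the point is that the model only ever extrapolates from what it has already seen, and that you can, and should, ask it how confident it is.

```python
# A toy "predictive AI": learn from past family gatherings, then guess the next one.
# The data is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Past gatherings: [hours_of_karaoke, glasses_of_wine] -> did someone cry before 9pm?
history = [[0, 1], [2, 4], [1, 2], [3, 5], [0, 0], [2, 3]]
cried = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(history, cried)

# Ask for the prediction AND how confident the model is; never just take the answer.
next_gathering = [[1, 4]]
prediction = model.predict(next_gathering)[0]
confidence = model.predict_proba(next_gathering)[0].max()
print(f"Tears before 9pm: {'yes' if prediction else 'no'} (confidence {confidence:.0%})")
```

Swap the toy features for real ones and the shape stays the same: past in, guess out, confidence attached.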
Define what the AI is actually trying to predict, and more importantly, why. Vague questions lead to vague models.
Check the training data. Mum’s forecasts only make sense if she’s basing them on reality, not teenage gossip. Same for AI. Biased data in, biased predictions out.
Decide which decisions can be based on predictions and which still need a human to review. Don’t always believe your parents!
Understand the limitations. Ask, “How confident am I in this?” rather than just assuming it’s right.
Decisions that weren’t fair or transparent. That's parents for you!
Misunderstood predictions being treated as certainties and becoming self-fulfilling prophecies.
People asking why the system recommended X when it clearly should have gone with Y.
The fallout from things that were predicted to go wrong but didn’t. And vice versa.
Uncle Rob and Cousin Steve are closely related. Ask Rob anything, whether it’s about climate change or how to unblock a sink with a lemonade bottle, and he’ll give you a polished, structured answer full of metaphors, anecdotes, and weirdly specific facts about 19th-century trade routes. Rob’s been waiting for a microphone his whole life. He can talk fluently about almost anything, whether he knows it or not. Often helpful. Sometimes weirdly off-base. He’s not lying, he just learned how to sound convincing before he learned how to check his facts.
That’s a large language model, an LLM. LLMs are great at stringing words together. That’s their job. They’ve read a lot and they remember the patterns. They don’t know what’s true, or what they’re saying. They’re just very good at predicting the most likely next word in a sentence. But the result can be dazzling. You think, “Wow, this guy gets it.” Until you look closer.
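Underneath the fluency it really is next-word guessing. Here’s a deliberately tiny Python sketch of the idea, a bigram model built from two made-up sentences; real LLMs do the same trick with vastly more data and parameters, and with just as little idea of whether any of it is true.

```python
# A toy version of what an LLM does at heart: predict the most likely next word.
# Real models use billions of parameters; this uses two made-up sentences.
from collections import Counter, defaultdict

corpus = (
    "rob says the sink needs a lemonade bottle . "
    "rob says the trade routes were long and profitable ."
).split()

# Count which word tends to follow which.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def most_likely_next(word: str) -> str:
    # Pick the statistically most common follower; no idea whether it's true.
    return followers[word].most_common(1)[0][0]

print(most_likely_next("rob"))   # -> "says"
print(most_likely_next("the"))   # -> "sink" or "trade", whichever pattern won
```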
Set clear boundaries. What’s Rob allowed to talk about? Don't let him handle medical advice or anything to do with regulation.
Try to spot when he’s bluffing. Confidence is not the same as correctness.
Combine LLM outputs with grounded data or checks. Use it for style, not substance.
Include guardrails in your system. That means prompt engineering, restrictions, and escalation paths for anything even slightly sensitive. There’s a rough sketch of what that can look like just below.
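What that guardrail layer can look like, sketched very roughly in Python: a topic blocklist, a constrained prompt, and an escalation path. The call_llm() function is a made-up stand-in for whatever model API you actually use, and the blocked topics are examples, not a recommended list.

```python
# A rough guardrail sketch: restrict topics and escalate anything sensitive to a human.
# call_llm() is a made-up stand-in for whatever model API is actually in use.

BLOCKED_TOPICS = {"medical", "diagnosis", "regulation", "privacy notice"}

def call_llm(prompt: str) -> str:
    return f"A confident, fluent answer about: {prompt}"  # placeholder for a real model call

def answer_with_guardrails(question: str) -> str:
    lowered = question.lower()
    # Restriction: anything on the blocklist goes to a person, not to Rob.
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Escalated to a human reviewer."
    # Prompt engineering: constrain tone and tell the model to admit uncertainty.
    prompt = f"Answer briefly, and say 'I am not sure' if you are not sure: {question}"
    return call_llm(prompt)

print(answer_with_guardrails("How do I unblock a sink with a lemonade bottle?"))
print(answer_with_guardrails("Can you draft our privacy notice?"))
```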
Errors that snuck through because they “sounded right.”
People quoting Rob to justify decisions, without verifying the info.
Hallucinated facts, made-up references, or customer-facing content that includes misleading or incorrect claims.
Complaints from legal when Rob writes a privacy notice using 40 percent imagination.
Your older sibling has just returned from a retreat in the Lake District and now wants to run your household like a productivity app. They want to optimise your kitchen, automate your reminders, and rearrange your finances. They mean well. They’re proactive. But they don’t always ask first. You leave them alone for ten minutes and they’ve alphabetised your spice rack and accidentally cancelled your dentist appointment. They’ll start your laundry, reschedule your meetings, and send flowers to your neighbour. All before breakfast. Sometimes it’s helpful. Sometimes it’s like letting a Shih Tzu do your tax return.
Agentic AI doesn’t just respond. It acts. It plans. It adjusts. It takes initiative. And yes, it’s the most promising and the most dangerous member of the whole family. It can be amazing. Or it can turn into chaos if it misunderstands the task.
Define the scope of its autonomy. What decisions can it make, and what needs approval?
Limit access. Don’t let it roam free across your systems.
Build in checkpoints. If it takes three steps toward a goal, a human should confirm before it takes the fourth (there’s a rough sketch of this after the list).
Train your people to spot when something’s going sideways early. Agentic AI moves fast.
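Here’s that checkpoint idea as a rough Python sketch. The plan and the tasks are invented; the point is the approval gate, a human has to say yes before the agent goes past step three.

```python
# A sketch of the "three steps, then a human confirms" checkpoint for an agentic system.
# The plan is invented; the point is the approval gate, not the tasks themselves.

plan = [
    "reorder the printer paper",
    "reschedule Tuesday's meeting",
    "email the supplier a revised forecast",
    "cancel the old supplier contract",  # step four: irreversible, so pause here
]

CHECKPOINT_AFTER = 3  # the agent may take this many steps before a human must confirm

def human_approves(step: str) -> bool:
    answer = input(f"Agent wants to: {step!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

for number, step in enumerate(plan, start=1):
    if number > CHECKPOINT_AFTER and not human_approves(step):
        print(f"Stopped before step {number}: {step}")
        break
    print(f"Step {number} done: {step}")
```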
Accidental system updates, misfired messages, or irreversible actions taken without enough context.
Loss of trust when staff realise the AI "went rogue" and nobody noticed in time.
Rebuilding processes to be more resilient to automation that got overexcited.
Picking through audit logs to figure out what it thought it was doing.
Grandad has a system. He’s got files. Indexes. Boxes labelled by year. Rules for how things should be done. He doesn’t trust apps. His worldview is built solely on rules. If it says rain on the calendar, you bring your coat, even if it’s sunny. He’s stuck in his ways and he doesn’t like new challenges. But if you need to know how to do a tax return from 1983, he’s your guy.
Symbolic AI is built on logic and rules. No guesswork. Every decision can be traced back to a rule. That makes it stable, but not very flexible. It’s solid. Predictable. And a nightmare when life doesn't play by the rules, when things get messy or ambiguous.
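A toy Python version of the Grandad approach: every decision comes from an explicit rule and arrives with its reason attached, and anything the rules didn’t anticipate simply falls through. The claims-handling scenario and thresholds are made up for illustration.

```python
# A toy symbolic system: explicit rules, fully explainable decisions, and a shrug
# whenever a case doesn't match anything the rules were written for.

RULES = [
    (lambda claim: claim["amount"] <= 100, "approve", "small claims are auto-approved"),
    (lambda claim: claim["amount"] > 10_000, "refer", "large claims need a manager"),
    (lambda claim: not claim["receipts"], "reject", "no receipts, no payout"),
]

def decide(claim: dict) -> tuple[str, str]:
    for condition, outcome, reason in RULES:
        if condition(claim):
            return outcome, reason  # every decision arrives with its rule attached
    return "unknown", "Grandad wasn't expecting that"  # the messy, ambiguous case

print(decide({"amount": 80, "receipts": True}))     # ('approve', 'small claims are auto-approved')
print(decide({"amount": 2_500, "receipts": True}))  # ('unknown', "Grandad wasn't expecting that")
```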
Be clear where it works best: predictable environments with well-understood rules.
Don’t rely on it to cope with nuance or improvisation.
Build in flexibility if the world changes.
Pair it with human oversight when dealing with cases that aren't clear cut. Grandad doesn’t like grey areas.
Hard fails when the system can’t handle exceptions.
Bottlenecks caused by inflexible logic.
Staff frustration when they have to explain something obvious because the system “wasn’t expecting that.”
Long rework cycles every time a rule changes in the real world.
This is where it gets messy. Because most systems don’t live in isolation anymore. You want prediction, but you also want creativity. You want structure, but you want flexibility too.
You invite the whole family round, thinking it'll be helpful if they all talk to each other. But it’s loud. Things are being said. Steve’s painting on the wall with pesto and writing song lyrics about the chicken casserole. Rob’s translating these lyrics into legalese, while also rewriting the family history. Mum’s reading out your diary from 1998 and trying to forecast who’ll cry before the night’s over. Your sibling’s in the kitchen renaming your Wi-Fi and reprogramming the oven. And Grandad’s looking for a fuse box that hasn’t existed since 1972.
This is hybrid AI. Most real-world setups are now a mix. Generative plus predictive. Large language models guided by rules. Agentic systems with symbolic constraints. You’ve got to manage the whole crowd.
If you're not prepared, it’ll end in a headache. Possibly a fire.
Decide who talks to whom. Set clear permissions, connections, and boundaries.
Decide which systems are allowed to influence or override each other.
Have a plan for how you’ll monitor, explain, and update each piece of the puzzle.
Keep humans firmly in the loop, especially for anything touching people, money, safety, or trust.
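To make that human-in-the-loop point concrete, here’s a rough Python sketch of a hybrid pipeline: a predictive score, a generative draft, and a simple rule deciding when a person has to sign off. churn_risk() and draft_reply() are invented stand-ins for real components, and the threshold is arbitrary.

```python
# A sketch of a hybrid pipeline: a predictive score, a generative draft,
# and a rule layer deciding whether a human must sign off.
# churn_risk() and draft_reply() are invented stand-ins for real components.

def churn_risk(customer_id: str) -> float:
    return 0.82  # pretend output from a predictive model (Mum and Dad)

def draft_reply(customer_id: str) -> str:
    return f"Dear customer {customer_id}, we'd hate to lose you..."  # pretend LLM draft (Rob)

SENSITIVE_THRESHOLD = 0.7  # the rule (Grandad): high-risk cases always go via a person

def handle(customer_id: str) -> str:
    risk = churn_risk(customer_id)
    draft = draft_reply(customer_id)
    if risk >= SENSITIVE_THRESHOLD:
        return f"HOLD FOR HUMAN REVIEW (risk {risk:.0%}): {draft}"
    return f"SEND: {draft}"

print(handle("C-1042"))
```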
Conflicting outputs from different systems. ("Steve said one thing, Rob said another.")
Incomprehensible results that make sense to the machines, but not to you.
Unexpected feedback loops. One AI feeding off the errors of another.
Culture shifts where people stop asking questions because “the system knows best.”
Plan like you’re hosting that party. Know who you’ve invited. Set expectations. Limit access. Watch who’s influencing whom. Don’t leave them alone with the sound system or your credit card.
And after it’s done? Debrief. Check what went wrong, what got broken, and who needs to be told they’re not allowed to write emails unsupervised anymore.
AI can be useful. It can also be confusing. And exhausting. And if you don’t prepare for what each type brings to the table, it’ll take over your party, your business, and possibly your entire week.
Invite them in, yes. But plan properly. Supervise. Limit their reach. Know what they’re good at, what they’re terrible at, and when they’re just winging it.
And whatever you do, don’t let Steve near your LinkedIn account. He once told someone you were a cardiologist in Denmark. It still hasn’t been sorted.