I’ve been posting a lot about AI lately. It’s not really surprising, given how it keeps coming up: work discussions, podcasts, panels, newsfeeds. Everyone’s got a take on it. Some are thrilled by it, others look worried. Some just stick their fingers in their ears. And then there are the people who keep showing up with serious warnings, the ones convinced we’re on the verge of something catastrophic.
So let’s talk about the AI doomsayers.
You’ve seen them. They’re the modern equivalent of the apocalyptic prophets you find in the Old Testament: Elijah calling down fire, Jeremiah tearing his robes, warning Jerusalem of ruin. Only now it’s not fire or plague they’re warning us about, but algorithmic collapse and synthetic minds becoming sentient. The desert has been replaced by TED stages and LinkedIn, but the impulse is the same: sound the alarm before it’s too late. Otherwise “we’re all doomed”.
Like those ancient prophets, they are the outsiders. The voice crying out from the wilderness while the rest of us, they believe, sleepwalk into something irreversible. And whether or not you agree with them, you get the sense they’ve already made peace with the idea that they’ll be remembered either as visionaries or as madmen (or madwomen) screaming at the stars.
You’ve probably read one of their long threads, or watched a clip where someone explains, with a straight face, how AI might wipe us all out. It starts off sounding reasonable, then picks up speed and goes somewhere much darker. They speak like everything’s already set in stone. That there’s no turning back. They tell us that AI is going to take control of everything; of work, of institutions, of life itself. People will be sidelined, and we’ll all be stuck in a world we no longer understand, with robotic overlords.
And they don’t all come from the same place. Some are academics. Some used to be tech insiders. Some are influencers who have probably figured out that fear gets more clicks than nuance. It doesn’t really matter where they started. The script tends to look the same: stark predictions, emotional language, a lot of talk about the end of humanity as we know it.
Of course, it’d be easy to laugh them off, or get overly annoyed by their endless warnings. A lot of it feels extreme. And some of it clearly is. But not all of it. And that’s the tricky bit. You can’t just shrug your shoulders and say that they’re wrong across the board. You have to look at what’s real, and what’s just noise.
The first thing they usually get wrong is the idea that AI will erase work altogether. Not change it. Not challenge it. Remove it entirely. Every job, gone. That’s the tone. Yes, some tasks are being taken over, especially the predictable, repetitive stuff. Machines are doing that better, or at least faster, than before. But that’s not the same as replacement.
We’ve lived through this sort of thing before. The printing press didn’t kill writing. Photoshop didn’t kill off the graphic designer. The calculator didn’t put an end to mental arithmetic. Excel spreadsheets didn’t get rid of accountants. Each of those changed how people worked, but they didn’t wipe the slate clean. They just moved things around a bit. We adjusted and moved on.
The same goes for AI. What’s really happening is a change in focus, in what people spend time on. Of course, certain roles might disappear. But others will evolve. New kinds of work will appear that we haven’t quite figured out yet. That’s disruption, not ruin. And while it’s messy, and occasionally unfair, it isn’t the major collapse it’s made out to be.
Then there’s the usual claim that AI will somehow “wake up”, become sentient and turn on humanity. The idea of a self-aware system deciding to get rid of its creators. It sounds dramatic and fits well into sci-fi, but it doesn’t line up with the technology we’ve actually built. Believe it or not, we’ve really not advanced that far. Nowhere near. Today’s systems can process, predict, and mimic. But they don’t understand the world. They don’t think. They don’t care. They just run on clever algorithms.
It’s not sentience. It’s pattern matching.
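If you want to see what “pattern matching” actually means, here’s a deliberately tiny sketch in Python. It’s my own toy example, not how any real system is built: a “model” that predicts the next word purely by counting which words followed which in its training text. Scale that idea up by a few billion parameters and you’re closer to today’s AI than to a mind.

```python
# A toy illustration (not any real model): "predicting the next word"
# is just statistics over patterns seen in training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which. That's the entire "knowledge".
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat', the most common pattern, nothing more
print(predict_next("cat"))  # 'sat'
```

There’s no intent anywhere in that code. Just counting.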
Expecting a model to rebel against humans is like expecting your calculator to turn on you because it’s had enough of your calculations. The fear is misplaced, at least for the foreseeable future. The real issues, the issues to worry about, sit elsewhere.
This is where the doomsayers start to make more sense; when they shift focus from runaway robots to harm that’s actually happening now. Not fantasy. Not ten steps ahead. But things already unfolding in quiet ways.
You see, the way these systems behave depends entirely on the data and instructions they’ve been trained on. If that’s messy, so is the output. And because these tools sound polished even when they’re wrong, it’s hard to notice when something has gone wrong.
Unfortunately, with AI, mistakes are often more than just technical hiccups. They affect real people. An inaccurate risk model can lock someone out of a loan. A flawed medical tool might miss an early diagnosis. AI used in policing or recruitment might carry forward all sorts of historical bias. These things are happening already. Not in headlines, but behind the scenes, in ways most people don’t see.
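To make that concrete, here’s a hypothetical, stripped-down sketch. The postcodes and numbers are invented for illustration, and no real lender works this crudely: a “loan model” that simply learns approval rates from past decisions. If the history was biased, the model reproduces the bias, confidently and at scale.

```python
# A hypothetical, stripped-down example: a "loan model" that learns
# the approval rate per postcode from historical decisions. If the
# history was biased, the model reproduces the bias automatically.

historical_decisions = [
    ("SE15", "approved"), ("SE15", "denied"), ("SE15", "denied"),
    ("SW1A", "approved"), ("SW1A", "approved"), ("SW1A", "approved"),
]

def train(decisions):
    """Learn the approval rate per postcode from past decisions, bias included."""
    stats = {}
    for postcode, outcome in decisions:
        approved, total = stats.get(postcode, (0, 0))
        stats[postcode] = (approved + (outcome == "approved"), total + 1)
    return {pc: approved / total for pc, (approved, total) in stats.items()}

model = train(historical_decisions)
print(model)  # {'SE15': 0.33..., 'SW1A': 1.0}: yesterday's bias, automated
```

Nothing in that code is malicious. It’s faithfully doing what it was trained to do, which is exactly the problem.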
Then there’s the visible part. The stuff that’s hard to miss now. Deepfakes. Voice clones. Fraud powered by generative AI models. This is where the doomsayers have a stronger case. It’s not just about people being tricked. It’s money lost, reputations damaged, even democracy being eroded. These tools are cheap, easy to use, and convincing enough to fool just about anyone.
That’s not paranoia. That’s reality. And it’s happening fast.
So how do we deal with all this? I don’t know why we think we can just throw AI into the mix and assume everything else stays the same. AI’s not a software add-on. It fundamentally changes how choices are made, how services are run, how people are treated. And if you’re using it, you have to think differently: about how it fits, who’s responsible, and what happens when it doesn’t work as expected.
Treating AI like just another IT upgrade is flawed. It’s too big for that. The ones who’ll handle it best aren’t the ones chasing every shiny new tool, or the ones backing away in fear. It’ll be the organisations asking the hard questions. The ones that actually slow down enough to think it through. Where are the risks? Who gets hurt? How do you keep things honest?
That needs to be built in from the start. Not as an afterthought. People need to be able to challenge outcomes. There has to be a clear way to appeal when things go wrong. And someone independent has to be watching closely, not just companies marking their own homework.
Trusting firms to police themselves on this? History suggests that’s naive at best.
And the doomsayers? No, they shouldn’t be steering the ship. But pretending they’ve got nothing useful to say is a mistake too. Their value isn’t really in the forecasts, because they get a lot wrong. But they keep the pressure on. They ask the awkward questions. And sometimes, we need that.
Take away the panic and the theatre, and what’s left is still worth hearing. They push us to look harder, think clearer, and act sooner.
The advancement and use of AI is a test. One we’ll have to pass not by overreacting, but by doing the slow, awkward, necessary work of getting it right. If we manage that, the doomsayers won’t go quiet because we silenced them. They’ll go quiet because we listened, made better calls, and left them with less to warn us about.