When Seeing is No Longer Believing

The phrase “perception is reality” has been around so long it’s basically furniture in certain conversations. People say it when they want to shrug something off or give an opinion more weight than it deserves. The idea behind it sounds clever enough. What we believe shapes how we respond, so how we see things becomes more important than how they are. Except it doesn’t hold up. Just because someone perceives something a certain way doesn’t mean it is factually correct. Perception is influenced by bias, misinformation, emotional state, and cognitive limitations.

Perception isn’t neutral. It bends under pressure. People see things differently based on fear, mood, background, even how much they slept. A Chelsea fan and a Liverpool fan will watch the same football match and come away arguing about whether the referee favoured one side over the other. And, of course, both will swear they’re right. Similarly, a witness to a hit and run might be absolutely certain the car was red when the traffic-cam later shows it was silver. Confidence doesn’t equal accuracy. It never has.

That’s why we’ve always needed anchors. A photo. A recording. A signed statement. Something that doesn’t wobble. That worked for a while. But even those are on shaky ground now.

For over a century, visual evidence, such as photographs and video recordings, has been the gold standard of truth. If you could see it with your own eyes, if it was captured on film, then it must have happened. That belief is now dangerously outdated.

Deepfake technology, powered by AI, allows for the seamless manipulation of video and audio. This is no longer a gimmick confined to Hollywood special effects; it is accessible to anyone with a decent computer and the right software. The image accompanying this article makes the point: Charles Dickens never rode a Harley!

AI can now:

- generate video of real people saying things they never said
- clone a person's voice from just a few minutes of recorded speech
- fabricate photorealistic images and entire news articles about events that never happened

This means that photos and videos, once regarded as irrefutable proof, are no longer enough. Seeing is no longer believing.

Imagine watching a news broadcast where a world leader appears to declare war, only to find out later that the entire clip was AI-generated. Or a courtroom playing what seems like a damning piece of video evidence, only for forensic analysts to discover it was fabricated.

With the rise of AI-generated content, our traditional ways of verifying reality are under threat. Deepfake technology can produce ultra-realistic fake videos. AI-generated voices can mimic anyone with eerie precision. Entire news articles can be created by AI, filled with fabricated events that never happened.

The world has reached a tipping point: if perception was already unreliable, AI is now making reality itself uncertain.

If we can no longer trust what we see, how do we prove what is real?

In 2018, a video circulated of Barack Obama saying things that were wildly out of character. Turns out, it wasn’t him. It was a deepfake, made by researchers to show how easily footage could be manipulated. It looked real. It sounded real. It fooled people. That was seven years ago. The technology has only gotten better, and cheaper, since then.

It’s not just about fooling people for fun. In India, during a local election, a politician used AI to make himself appear to speak multiple regional dialects. He hadn’t actually learned them. He’d used a tool to make it seem like he had. His team saw it as clever campaigning. Critics saw it as deception. Either way, the line between performance and truth blurred, and no one stepped in to redraw it.

Videos used to settle arguments. You’d say, “Here, watch this, and you’ll understand.” Now, even that’s risky. If someone shows you footage of a public figure saying something inflammatory, there’s a chance it never happened. Or that the person’s face was lifted and stitched into a different context. It could even be a complete fabrication, assembled by AI and given a voice that sounds eerily natural. If a courtroom shows a video of a confession, do we believe it instantly? Should we?

AI is not only altering how we perceive the reality of politics or crime. It’s reshaping the way we consume music, film, and literature.

In 2023, Spotify removed thousands of songs suspected to be AI-generated. Some had racked up millions of listens. Listeners thought they were hearing real musicians. And maybe they didn’t mind. But others felt cheated. They’d connected with something they thought came from a person.

Artificial intelligence can now:

- compose original songs in the style of well-known artists
- generate paintings that imitate the brushwork of the masters
- write prose and poetry that mimics a particular author's voice

While this technology has fascinating creative possibilities, it also blurs the line between human artistry and machine-generated imitation.

If an AI can paint like Van Gogh, compose like Mozart, and write like Shakespeare, does it diminish the value of human creativity? When we listen to a new song, will we need to check whether it was written by a person or assembled by an algorithm? And when AI can generate entire fake interviews, speeches, or books, how will we distinguish an authentic source from a manufactured one?

More importantly, as AI content becomes indistinguishable from human-made works, how do we verify authorship? Will we need digital certificates proving whether a song, film, or article was created by a real person?

The more AI imitates reality, the more we will need systems to verify originality.

There was also the case of an AI-generated interview with the late chef Anthony Bourdain, pieced together from existing recordings and scripted lines. It was used in a documentary and sparked a wave of discomfort. Did Bourdain “say” those words? Not exactly. But it sounded like he did. It sounded enough like him that people started asking where the ethical lines were.

This raises a bigger issue. If machines can create content that feels genuine, how do we assign credit or responsibility? Who gets praise? Who gets sued? If an AI paints something that sells for thousands, is that art? Is the creator a coder, a curator, or the algorithm itself? And if an AI writes a fake news article that sparks outrage, who’s accountable?

People are starting to look for fixes. Some companies are working on tools that can detect deepfakes by spotting glitches in lighting or unnatural eye movement. It’s promising but far from perfect. Others suggest digital watermarks or certificates of authenticity. If an image or song is created by a person, maybe it comes with a stamp saying so. Like a signature, but cryptographic. But even that needs oversight. And trust.
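The "signature, but cryptographic" idea can be made concrete with a minimal sketch. Real provenance systems use public-key signatures issued by a trusted authority; here, purely for illustration, a keyed hash (HMAC) with a hypothetical publisher key stands in for that machinery. The point is the property, not the implementation: any alteration to the content, however small, breaks the stamp.

```python
import hashlib
import hmac

# Hypothetical publisher key. A real provenance scheme would use a
# public/private key pair, not a shared secret like this.
SECRET_KEY = b"publisher-secret"

def stamp(content: bytes) -> str:
    """Produce an authenticity stamp for a piece of content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check that the content still matches its stamp."""
    return hmac.compare_digest(stamp(content), signature)

photo = b"original pixel data"
sig = stamp(photo)

print(verify(photo, sig))                 # True: content is untouched
print(verify(b"deepfaked pixels", sig))   # False: content was altered
```

Even a toy version shows why such schemes need oversight and trust: the stamp only proves the content hasn't changed since signing, and only the key holder can issue it, so everything rests on who controls the keys.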

This isn’t just about sorting truth from lies. It’s about attention and how quickly falsehoods spread. During the early stages of the Russia-Ukraine war, fake videos flooded social media. Some were old clips reused out of context. Others were outright fabrications. They got shared anyway. People panicked, took sides, or disengaged completely, unsure what to believe.

With so much AI-generated content flooding the internet, it’s easy to feel like truth itself is slipping away. If AI can fabricate videos, write fake news, and even generate artificial voices, we must shift our focus from what seems real to what can be verified. This means:

- prioritising sources whose claims can be independently checked
- supporting provenance tools, such as watermarks and certificates of authenticity
- treating realism alone as no guarantee that something is genuine

If "perception is reality" was already a problematic phrase, AI has now shattered its foundation. Perception has never been a perfect reflection of reality, and now, reality itself is under siege by artificial intelligence.

But while AI complicates our ability to trust what we see and hear, it also forces us to rethink how we validate truth. If we can no longer rely on perception alone, we must turn to evidence, verification, and digital transparency.

We need to adjust the way we interact with information:

- ask where a clip or article came from before sharing it
- cross-check striking claims against multiple independent sources
- remember that confidence, polish, and realism are not evidence

The challenge is immense, but the solution isn’t to reject technology. It’s to develop new tools and critical thinking skills that allow us to navigate an AI-shaped world.

Reality hasn’t gone away. But finding it might take more work. The information is still out there. The facts haven’t disappeared. What’s changed is the fog that sits on top of everything. Thick enough now that even obvious things get second-guessed.

Maybe that’s not entirely bad. A little more doubt. A little more caution. Asking for receipts. Slowing down before reposting that too-good-to-be-true clip.

Reality is not about what we perceive. It is about what we can evidence. And in an AI-driven world, that distinction has never been more important.