Utopia and Economics

(This is the fifth article in a series on the human cost of artificial intelligence)

The story we keep hearing is full of promise. It sounds like Utopia. Artificial Intelligence will lift the weight of repetitive work, freeing people to focus on creative, complex, and interpersonal tasks. Machines will handle the routine so humans can concentrate on the meaningful. Businesses will become more productive, workers will become more fulfilled, and society will become more equitable.

It’s a nice story. It’s not even far-fetched.

The technology already exists to support much of this vision. AI systems can triage customer queries, draft legal documents, analyse images, schedule appointments, build training plans, write code, simulate scenarios, and offer insights from patterns buried deep in data. What was once futuristic now feels strangely normal.

In principle, this could be the beginning of a better world of work. One where doctors are freed from endless form-filling and get more time with patients. One where teachers spend less time creating worksheets and more time actually teaching. One where customer service is improved, not by removing humans, but by removing the tedium that wears them down. One where businesses do more with less effort and choose to share that success with those who helped build it.

There is nothing physically stopping this from happening. But that’s not the same as saying it will. And right now, there’s nothing to show that it is.

The real constraint isn’t the technology. The algorithms are improving, the hardware is getting faster, and the use cases are multiplying. In many areas, the capabilities of AI now exceed what most organisations know how to use effectively. That’s not the bottleneck.

The real constraint is the economic and institutional framework we’re dropping AI into. A framework that was already skewed before the first chatbot launched or the first line of generative code was written. AI is entering a system that has, for decades, rewarded companies for extracting maximum output with minimum input. That means fewer people, tighter margins, shorter timeframes, and decisions shaped less by public interest than by private return.

The system is deeply unequal. It favours scale. It rewards those who already have data, capital, infrastructure, and market share. And it does not easily accommodate the idea of reinvestment in people, especially if that investment takes time to show a financial return.

So when AI enters this environment, it does not rewrite the rules. It adapts to them.

The logic of the system bends AI toward certain outcomes. Toward replacing jobs rather than redesigning them. Toward shrinking the workforce rather than retraining it. Toward using efficiency as a justification for cuts, not a foundation for improvement. AI amplifies what is already there, because most of the people deploying it are not being asked, or incentivised, to do anything else.

This is how the promise of AI gets reshaped. Instead of making work more human, it makes it more extractable. Instead of freeing time for care, reflection, and creativity, it enables tighter deadlines and higher output with fewer people. Instead of opening up opportunities, it concentrates them further.

And the strange part is that this can all happen while the language around AI stays positive. We can talk about progress, innovation, productivity, and transformation, while underneath, the human experience of work deteriorates. More surveillance. Less security. Fewer pathways to advancement.

We need to recognise that efficiency on its own does not create justice. It doesn’t share wealth. It doesn’t build resilience. It doesn’t cushion those affected by change. Efficiency is a force multiplier. And if the system is built to concentrate wealth, then AI will help it do that faster.

We’re often told that AI is just a tool, and that it’s neutral. That what matters is how we choose to use it. And in abstract terms, that’s true. AI doesn’t have values or intentions. It doesn’t have a worldview. But tools don’t exist in empty space. A hammer is one thing in the hands of a carpenter building a table, but it’s another thing entirely in a hand raised to destroy. The object itself is the same. Yet the outcome is different because of who holds it, what they want, and what the context allows.

AI is no different. It can support, or it can displace. It can reveal insights that lead to better care, better teaching, better decision-making. Or it can be used to cut costs, strip out teams, and deepen the pressure on those left behind.

And that’s why the claim that “it’s just a tool” is only half the story. The rest depends on ownership. On governance. On public voice. On the accountability mechanisms that exist outside the technology itself. If we don’t get those right, then AI will behave exactly as it is being shaped to behave. And it is being shaped not by abstract ethics, but by institutional priorities.

This is where governments should come in. Not just to regulate safety risks or fund research, but to help reshape the system itself. To ensure that AI operates in a context where efficiency is not the only goal, and where long-term human wellbeing is part of how success is measured. At the moment, that conversation is being delayed. Or diluted. Or passed on to advisory panels. To be blunt, if governments do nothing, here is what we can expect.

Companies will continue to automate wherever it increases margins. Headcounts will shrink, but prices will not necessarily fall. Profit will rise, but that surplus will flow mainly to investors and shareholders. Workers in repetitive or process-driven roles will be the first to go. The burden to retrain, reskill, and re-enter the workforce will fall entirely on individuals. Some will make it through. Many will not.

Those already in insecure jobs will be hit the hardest. People like retail assistants, contact centre staff, warehouse workers, junior analysts, and clerical teams. Of course, some roles will evolve, but unfortunately many will disappear entirely. And they won’t return in a different form. The factory jobs of the 1970s did not reappear as coding jobs in the 2000s. The shift was real. It changed lives. It emptied towns.

The same could happen again, but a lot faster.

And at the same time, those who own the platforms and technologies that drive automation will find themselves in a position of even greater power. The data they hold will be proprietary. The systems will be trained on user behaviour at scale. The intellectual property will be locked down. These companies will not just be successful. They will become a necessary part of the infrastructure.

And as they do, the gap between the winners and everyone else will widen. We can already see it happening.

Of course, it’s easy to say that governments must act. But what does that look like in practice?

Firstly, we need a clear picture of what’s happening. Right now, most governments are flying blind. There is not enough granular data about how AI is affecting different sectors, different regions, different types of workers. If you can’t measure it, you can’t manage it. And if you’re just guessing, policy quickly becomes reactive, not strategic.

We need real-time tracking of displacement patterns. We need proper mapping of where automation is happening, which roles are evolving, where pressure is building in the labour market, and which groups are being left behind. And we need it as quickly as possible.

We also need regulatory reform. Employment law is still based on a model of stable, full-time work. That model is no longer dominant. Gig platforms, zero-hours contracts, freelancers, hybrid setups. These are not the exception anymore. Governments need to make sure rights and protections follow workers, even if their jobs don’t follow a traditional path. That means sick pay, holiday entitlement, union access, grievance procedures, and access to retraining regardless of contract type.

Then there’s retraining and transition support. At the moment, much of what passes for retraining is shallow and disconnected from real demand. Online courses are fine, but they don’t guarantee employment. And telling a fifty-year-old administrative assistant to become a coder overnight isn’t just naïve. It’s cruel.

Retraining needs to be longer term, properly funded, and locally tailored. It needs to involve actual employers and lead to real jobs. And it needs wraparound support: childcare, transport, advice, mentoring. Otherwise, people fall through the cracks. They already are.

Governments also need to think about redistribution mechanisms. When firms automate and save millions in labour costs, there is a social cost being paid elsewhere. One response could be to introduce a form of automation levy. Another could be to tax more effectively the profits of firms that scale on public data and infrastructure while giving little back. Those funds could be ringfenced for social protection, regional development, or even universal basic income pilots in the places hardest hit by displacement.
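To make the arithmetic concrete, here is a minimal sketch of how such a levy might be calculated. Everything in it is hypothetical: the 15 percent rate, the wage figures, and the assumption that labour-cost savings can be measured cleanly at all.

```python
# A deliberately simple sketch of an automation levy (all figures hypothetical).
# A real scheme would need a far more careful definition of "savings".

LEVY_RATE = 0.15  # hypothetical rate: 15% of labour-cost savings

def automation_levy(labour_cost_before: float, labour_cost_after: float) -> float:
    """Return the levy owed on labour costs saved through automation."""
    savings = max(0.0, labour_cost_before - labour_cost_after)
    return savings * LEVY_RATE

# Example: a firm's annual wage bill falls from £40m to £30m after automating.
levy = automation_levy(40_000_000, 30_000_000)
print(f"Levy owed: £{levy:,.0f}")  # £1,500,000, ringfenced for transition support
```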

There is also room for public sector leadership. Governments don’t have to sit back and hope the private sector behaves responsibly. They can set standards through procurement, requiring ethical use of AI, transparency in algorithms, and shared productivity gains. They can invest in public-interest AI: systems designed not for profit but for public value, particularly in areas like health, education, social care, and transport.

And at a higher level, governments need to rethink what counts as progress. If AI increases GDP by five percent but leaves thirty percent of workers worse off, is that success? If productivity rises but work becomes more precarious, who is benefiting?
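The arithmetic behind that question is worth spelling out. Here is a toy calculation, with numbers invented purely for illustration, in which total income rises by exactly five percent while three in ten workers end up earning less.

```python
# A toy illustration (all numbers invented) of how aggregate growth
# can coexist with widespread individual loss.

# Incomes for a stylised ten-person workforce, before and after automation.
before = [20, 22, 25, 28, 30, 35, 40, 50, 70, 100]  # arbitrary units
after  = [16, 18, 20, 28, 31, 36, 42, 53, 77, 120]  # bottom three lose ground

growth = (sum(after) - sum(before)) / sum(before)
worse_off = sum(1 for b, a in zip(before, after) if a < b) / len(before)

print(f"Total income growth: {growth:.0%}")            # 5%
print(f"Share of workers worse off: {worse_off:.0%}")  # 30%
```

On those numbers, the headline figure looks like success while the lived experience of a third of the workforce is decline.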

We need to stop treating economic growth as the sole yardstick. We need to build a broader picture of wellbeing, opportunity, security, and dignity. Without that, the shiny charts mean nothing.

Let’s be realistic, though. None of this is easy. These are not simple fixes. They will meet resistance. Powerful lobbies will push back against AI regulation. Budget constraints will be cited as reasons to delay support for affected workers. Political attention will be diverted. And by the time action finally happens, the damage may already be done.

But if we wait until the cracks show up in the labour market, it will be too late. Social cohesion is hard to rebuild once trust has gone. Economic insecurity spreads fast. And you can’t automate your way out of unrest.

The point here is not to fear AI, but to plan for it. To govern it and make our decisions with eyes open.

Because a better future really is possible. One where AI improves lives rather than disrupts them. One where businesses succeed, but not by squeezing people out. One where progress is shared rather than concentrated.

But that future won’t emerge on its own. It needs design. It needs pressure. It needs courage.

And it needs to start now.