The Missing Mirror

Most organisations claim to value learning, but few are willing to face what that really means. Reports talk about reflection and accountability, yet real scrutiny is routinely avoided. Projects are launched with energy, celebrated in press releases, and quietly forgotten when results fail to appear. Across government, business, and the charity sector, the pattern is the same: confident beginnings, vague endings, and little appetite to look back. The habit of honest evaluation has slipped away, leaving organisations without a clear view of what actually happened. That absence is the missing mirror.

You can see it almost everywhere. A new structure, project, or system is launched with conviction. The launch promises change and efficiency, but by the time the first results emerge, enthusiasm has waned. The follow-up that might confirm or challenge those promises either never appears or is reduced to a short internal note that few people ever read. Without that mirror, organisations move forward without ever checking the view behind them.

Nowhere is this more visible than in the public sector. The NHS, for instance, has gone through successive waves of reform over the past two decades. Health authorities became trusts, merged, split, and merged again. Each new configuration was described as the key to better care, greater efficiency, or clearer accountability. Yet there is little evidence that any of these reorganisations delivered what was promised. And you can't really blame the NHS for this. When the government insists on unevidenced change or reorganisation, there seems little point in evaluation.

Instead, it is the government that should ensure evaluation is built in from the start and that results are reviewed honestly to see whether change has been effective. That evidence could then help break the endless cycle of reorganisations that we see. Anyone who has spent a long time in the NHS knows that it tends to come full circle every ten to fifteen years, reintroducing structures previously deemed ineffective. The problem lies in the lack of long-term memory in government, not in the NHS’s ability to learn or evaluate its own work. Yet independent audits have long observed that formal evaluations after implementation are often inconsistent or missing altogether.

When the latest reforms brought Integrated Care Systems into being, the official ambition was to improve coordination between health and social care. The early reviews found that while progress had been made on partnership working, many areas began without fully defined baselines or agreed measures of success. That absence made it difficult to know whether the changes were delivering what had been promised. The reforms went ahead regardless, justified largely on the strength of intention rather than measurable outcomes.

The same pattern has appeared before. Large-scale NHS technology programmes have repeatedly been launched with sweeping ambitions but only partial evaluation. The national effort to create a shared electronic patient record was once one of the biggest public-sector IT projects in the world. Billions were spent before the scheme was eventually dismantled, and the subsequent reviews concluded that the opportunity for systematic learning had been lost. With no robust framework for judging effectiveness built in from the start, lessons could only be drawn after the fact and in fragments.

Local government tells a similar story on a smaller scale. Councils across the country have alternated between outsourcing and insourcing services, from waste collection to housing management. Each change has been announced as a step toward efficiency or better service, but detailed public evaluations of those shifts are rare. In some cases, councils have reversed earlier outsourcing decisions after concerns about performance, yet published analysis of what went wrong (or right) has remained limited.

Several authorities have repeated the same experiments in different policy areas, each time promising improvement but leaving little in the way of comparative evidence. When external auditors or local journalists ask for the data, they are often told that reviews are still “in progress” or “not yet completed”. That vagueness has become its own tradition.

The reasons are not hard to find. Honest evaluation carries risk. A review that concludes a project failed can bring political embarrassment, professional difficulty, or awkward questions about public money. For senior leaders, there is seldom a personal incentive to reopen decisions that have already been publicly celebrated.

The tendency is not confined to the public realm. Private companies are just as likely to skip genuine evaluation once a product launch or transformation programme is declared complete. Success is often defined by delivery rather than outcome: the new platform is live; the restructure has taken place; the timeline was met. Whether those changes improved things for customers or staff is seldom tested rigorously.

Independent research into large corporate IT projects has repeatedly shown that a significant proportion overrun on budget, deliver less than intended, or fail to generate the expected value. Yet even when projects stumble, the lessons are rarely applied to the next one. Evaluation, when it happens, is treated as a compliance step rather than a learning process.

Charities face a different pressure. Funders expect success stories, and future grants can depend on how well results are presented. This encourages a kind of “friendly evaluation”: collecting positive feedback, highlighting personal stories, and avoiding questions about broader impact. Regulators and independent evaluators have noted that voluntary organisations frequently report activity (what was done) rather than effectiveness (what actually changed). When projects end, they are often succeeded by new initiatives without anyone knowing how much difference the last one made.

When evaluation becomes optional, organisations fall into the habit of believing their own summaries. Reports include “lessons learned” sections that sound thoughtful but rarely translate into action. The same phrases reappear: communication could have been better; timelines were tight; external factors intervened. Nothing changes because nothing uncomfortable is recorded.

Over time this shapes the internal culture. Staff learn that asking whether outcomes were achieved is risky. Analysts who point out missing data are told to stay positive. Evaluators who push too hard for evidence are quietly excluded from the next round of discussions. The culture evolves toward polite agreement, where failure is reframed as partial success and the past becomes something to move on from, not learn from.

Even official strategies often carry the same pattern. Plans talk about “building on lessons learned” from previous reforms, yet the evidence behind those lessons is often buried or incomplete. Without accessible evaluation, the same weaknesses (unclear objectives, poor baseline data, over-optimistic forecasts) reappear under new names.

In the corporate world, the equivalent is the endless cycle of reinvention. A business introduces a new transformation strategy every few years, each one claiming to build on the success of the last. Staff, remembering the disruption, are rarely convinced. But without independent evaluation, the narrative of success goes unchallenged. The illusion of progress becomes part of the brand story.

The consequences are practical as well as cultural. Policies based on untested assumptions consume resources that could have been better used elsewhere. Projects that quietly underperform are replicated by others because no one documented their limitations. A lack of honest feedback means waste becomes systemic.

Major infrastructure schemes show this clearly. Evaluators have pointed out that some of the early cost assumptions for large transport projects remained untested for years after approval. When later scrutiny arrived, the scale of over-runs and revisions was no surprise. The problem was not unforeseen events but the absence of early, independent evaluation. Overconfidence filled the space where evidence should have been.

Smaller versions play out across local authorities. A council outsources a service promising savings, only to find costs rising again as contract variations multiply. A later administration brings the work back in-house, speaking of “lessons learned”, but no clear record exists of what those lessons were. Without structured evaluation, knowledge dies when the people involved move on.

In the charity sector, the cost of weak evaluation often falls on the people programmes are designed to help. A scheme to reduce loneliness may engage hundreds of participants but produce little lasting change. Without proper follow-up, the shortfall remains invisible, and the same design is used again. Resources drift toward activity rather than impact.

Evaluation is not a bureaucratic ritual; it is the means by which organisations distinguish between what happened and what they hoped would happen. Without it, even well-intentioned projects become acts of faith.

Good evaluation depends on three things: clear objectives, independent review, and transparency. The first makes outcomes measurable. The second prevents self-congratulation. The third allows others to learn from the results. Where all three are in place, learning becomes cumulative rather than cyclical.

There are examples of this being done properly. In education, independent foundations publish detailed assessments of classroom interventions, including those that show no measurable effect. By treating null results as useful rather than shameful, they build a collective understanding of what does and doesn’t work. In health research, national institutes now make all funded trial results public for the same reason: honesty strengthens credibility.

Some councils have begun to adopt similar principles. After a period of serious financial strain, one London borough introduced an external assurance panel whose reports on progress and performance are published online. The process is not comfortable, but it has helped rebuild confidence by showing that scrutiny is no longer optional.

The biggest obstacle remains fear. Leaders fear that uncertainty will look like incompetence. Teams fear that unfavourable findings will be used to justify cuts. Funders fear that frankness will discourage future investment. Every actor in the system has a reason to prefer selective learning.

Timing also undermines the process. Evaluations often arrive only at the end, when staff have dispersed and the data are incomplete. By then, the review can only tell a partial story. It becomes a document of justification rather than discovery.

The solution is to embed evaluation from the outset. Objectives should be measurable before the work begins. Plans for review should be built into the business case, not bolted on later. Independent reviewers should be appointed early and given freedom to publish. That approach takes more time, but the cost of skipping it is far greater.

Learning in private is easy; learning in public builds trust. When an organisation shares full results (including where things failed), it signals seriousness. When it hides behind selective summaries, it signals insecurity.

Independent think tanks and evaluators have argued for years that open scrutiny leads to better decisions. In the charity world, several groups now promote the idea of “learning transparency”: publishing both achievements and disappointments so others can learn. The trend is slow, but funders increasingly recognise honesty as a marker of maturity, not weakness.

In the corporate world, some companies have started publishing independent reviews of their environmental or social programmes. Large consumer brands have commissioned external audits of issues such as food waste, carbon targets, and labour standards. The methods vary, but the willingness to expose results to outside judgment marks a change from the reflex of self-approval.

The absence of evaluation does more than waste money. It corrodes purpose. People join public service, charities, and responsible businesses to make a difference. When they see the same projects recycled under new names, with the same untested claims, faith in leadership fades. Citizens become sceptical, donors cautious, staff disillusioned.

When institutions stop testing their own results, they lose the ability to see themselves clearly. Decisions become stories. Assumptions harden into dogma. That is why evaluation is the missing mirror. Without it, every organisation sees only what it wants to see.

The remedy is straightforward: make evaluation routine, independent, and public. Treat unfavourable findings as information, not failure. Build reflection into the structure of every major project, not as decoration at the end.

There are encouraging signs, though. A cross-government task force established in recent years has begun to promote consistent standards for evaluation and to strengthen the evidence base behind public spending. Its influence is still limited, but it signals that learning is once again being treated as part of accountability, not an afterthought.

True accountability depends on reflection. Every institution that spends public money, raises donations, or affects lives should be able to show what happened after the announcement. Anything less is performance without learning.

If public money, private investment, or charitable goodwill is being spent in the name of progress, the people paying for it have the right to know what actually happened. That requires more than slogans about learning and improvement. It demands the courage to publish results, to admit when things have not worked, and to change course before mistakes harden into habit. Every board, cabinet, and executive team should ask one blunt question before approving a new idea: how will we know if it worked? Until that becomes routine, organisations will keep mistaking motion for progress and confidence for truth. It is time to hold up the missing mirror.