Everyone likes to believe their choices are guided by evidence. It sounds calm and responsible. Reports and speeches are now full of words like “data” and “evidence-based policy”. They promise a world where reason drives decisions, not instinct or convenience. A world of data-based decision making.

In practice though, the order is often reversed. Decisions are made first and the supporting evidence comes later. The result is a story that looks analytical but starts with certainty and works backwards. The reality is often decision-based data making.

You can see this pattern almost anywhere. In the NHS, new structures are introduced with confidence before anyone has checked whether they are workable.

Restructures are often sold as solutions. They arrive with confident language: efficiency, modernisation, streamlined delivery. The promise is that a new shape will fix old problems. In reality, the shape often comes first and the evidence later.

I saw this pattern repeatedly while working in a particular NHS organisation. One proposal was to replace multidisciplinary teams, which oversaw the full end-to-end data process, with a “factory model”. This would act like a conveyor belt where work passed from one specialist group to another. New teams were set up: data collection, data management, data access, data analysis and so on. Senior leaders believed this would boost productivity. I argued it would de-skill individuals and create inevitable bottlenecks. It was clear that the blame for delays would fall on the final team, my analysts and statisticians, who produced the reports.
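
For what it is worth, the bottleneck argument is easy to sketch. The toy tandem-queue simulation below is my own illustration, not anything built for the organisation: every figure in it (how often reports arrive, how long each team takes, how much each handoff costs) is an assumed placeholder. It simply shows that once work queues between specialist teams, end-to-end turnaround grows well beyond the hands-on time, and the only thing visible from outside is that reports leave the final team late.

```python
import random

random.seed(1)

# Hypothetical conveyor-belt teams from the proposed "factory model".
STAGES = ["collection", "management", "access", "analysis"]

def simulate_factory_model(n_reports=5000, mean_interarrival=1.8,
                           mean_service=1.0, handoff_delay=0.5):
    """Toy tandem-queue sketch of the conveyor-belt restructure.

    All figures are illustrative assumptions: each stage takes about a day
    of work, each handoff adds half a day, and reports arrive roughly every
    1.8 days. Nothing here is the organisation's real data.
    """
    free_at = {stage: 0.0 for stage in STAGES}     # when each team next becomes free
    queue_wait = {stage: 0.0 for stage in STAGES}  # time spent waiting before each team
    total_turnaround = 0.0
    arrival = 0.0
    for _ in range(n_reports):
        arrival += random.expovariate(1.0 / mean_interarrival)
        t = arrival
        for stage in STAGES:
            start = max(t, free_at[stage])         # queue until the specialist team is free
            queue_wait[stage] += start - t
            t = start + random.expovariate(1.0 / mean_service) + handoff_delay
            free_at[stage] = t
        total_turnaround += t - arrival
    print(f"mean end-to-end turnaround: {total_turnaround / n_reports:.1f} days")
    for stage in STAGES:
        print(f"  mean queue before {stage}: {queue_wait[stage] / n_reports:.1f} days")
    # From the outside, only the final handoff is visible: the report leaves
    # the analysis team late, so the accumulated delay is attributed to them.

simulate_factory_model()
```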

As someone experienced in research and evaluation, I offered to design metrics to test whether the restructure worked. The offer was rejected. Instead, I was labelled obstructive, “not on board”. The change went ahead and was declared a success. No evaluation followed. The only numbers discussed were complaints that my team was slow, even though the process itself had introduced the delays.

This was not an isolated case. Each time a new structure or approach appeared, I volunteered to set up a proper evaluation. Each time, the offer was refused. The story was always the same: confident claims, no metrics, and growing suspicion of anyone who asked awkward questions.

The lesson is clear. In these types of environments, evidence is welcome only when it agrees. Anything else feels like dissent. And so the cycle repeats: decisions first, data later, and analysts asked to make the numbers fit the plan rather than test it.

In local councils, services are outsourced or brought back in-house with strong claims of efficiency and improvement, but little honest evaluation afterwards. In large charities, digital transformation programmes are announced before staff or beneficiaries have been asked what will actually help. Private companies are no different. They roll out new systems or reorganisations and then look for figures that suggest the change was inevitable and successful.

The same behaviour repeats because it feels decisive. A senior team gathers, a plan is agreed, and once it is made public, no one wants to reopen it. Teams are then sent to “gather the data”. The evidence that should shape the decision becomes evidence that protects it.

I once worked on the analysis for a new hospital that was to be built locally. The number of beds had already been fixed before the analysis began. My task was to demonstrate that the chosen figure was the right one. Working alongside respected universities, I helped to build simulation models and ran statistical tests using admissions, length of stay, population trends and seasonal pressures. The models kept producing different, higher numbers. Each time, I was asked to adjust the assumptions until the model produced the figure that matched the decision. It was clear the analysis was never meant to guide planning; it was there to defend what had already been decided.
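
To make that concrete, here is a minimal sketch of the kind of bed-occupancy model involved. It is not the model we built with those universities: simulate_peak_beds is a hypothetical helper, and the admission rate, length of stay and winter uplift are illustrative placeholders rather than real local data. What it does show is how sensitive the “required” number of beds is to the assumptions fed in; shave a day off the assumed length of stay and the answer falls towards whatever figure has already been announced.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_peak_beds(daily_admissions, mean_length_of_stay,
                       winter_uplift=1.15, days=365, runs=100):
    """Monte Carlo sketch of peak bed occupancy over one year.

    Illustrative assumptions only: Poisson admissions, exponential lengths
    of stay, and a flat uplift for the first three months to mimic winter
    pressure. A real capacity model would use local admissions data,
    casemix, population trends and observed seasonality.
    """
    peaks = []
    for _ in range(runs):
        occupied = np.zeros(days)
        for day in range(days):
            uplift = winter_uplift if day < 90 else 1.0   # crude seasonal pressure
            admissions = rng.poisson(daily_admissions * uplift)
            stays = np.ceil(rng.exponential(mean_length_of_stay, admissions)).astype(int)
            for stay in stays:
                occupied[day:day + stay] += 1             # patient occupies a bed until discharge
        peaks.append(occupied.max())
    # Plan for the 95th percentile of simulated peaks, not the average day.
    return np.percentile(peaks, 95)

# The "evidence" moves with the assumptions: a one-day cut to the assumed
# length of stay quietly removes dozens of beds from the requirement.
print(simulate_peak_beds(daily_admissions=30, mean_length_of_stay=5.0))
print(simulate_peak_beds(daily_admissions=30, mean_length_of_stay=4.0))
```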

This experience was not unusual. On another occasion I was contacted by the CEO of a local NHS Trust and asked to provide data showing that heart disease wasn’t a problem in their area, presumably because they hadn’t adequately funded the services that dealt with it. I refused, because the data was clear that heart disease, including the incidence of myocardial infarction, was significantly higher in their area than in others.

Across the public sector, similar stories appear. The NHS has been through a series of reorganisations over the last two decades. Trust mergers, the creation of Integrated Care Systems, and hospital IT rollouts have all been introduced with confident claims of improvement. Yet, as reported by the National Audit Office, robust independent evaluation is rare.

Local authorities have followed a similar route. Councils have repeatedly shifted services between private contractors and direct provision. A particular council's decision to bring its homelessness assessment service back in-house in 2023 was one of several examples where earlier outsourcing arrangements failed to meet expectations. Reports presented to committees spoke of “improvements” and “efficiencies”, but comprehensive, independent follow-up studies were scarce.

At the national level, HS2 has become the most visible example of what happens when large projects move faster than their evidence. Early projections promised affordability and on-time delivery. Years later, both claims collapsed under public scrutiny, with costs escalating far beyond the figures originally presented to Parliament. The project has been examined repeatedly by the National Audit Office and the Public Accounts Committee, both of which highlighted the absence of transparent evaluation at key stages.

The charity sector shows similar habits. Many organisations have embraced “digital transformation” in recent years, encouraged by funders and by a desire to modernise. The annual Charity Digital Skills Report tracks this trend, noting that while most charities report progress, many also admit they have little concrete evidence that the changes improve outcomes for service users. Once again, the claims of success are often made before any evaluation has begun.

Private industry is not immune. Retailers, banks and energy companies announce restructures and product overhauls with striking confidence. Press releases speak of growth and modernisation. Internal teams are then asked to provide the numbers that make those claims look sound. Profit margins, market share, staff turnover and customer satisfaction are presented selectively, favouring the version of reality that matches the chosen narrative.

Why does this happen so often? Part of the answer is pressure. Leaders are expected to act quickly and to project certainty. In politics, confidence wins headlines. In business, it reassures investors. In charities, it attracts funders. Waiting for full evidence feels risky. It takes time, and time is scarce.

Another part is fear. Once a decision has been announced, changing it looks like failure. People who raise doubts or highlight awkward numbers are seen as obstructive. The natural instinct is to defend the plan, not to question it. That creates an unspoken rule: find data that fits, and quietly forget the rest.

The problem isn’t that everyone is dishonest. It’s that the current environment rewards confidence and penalises hesitation. Over time, organisations learn to value the appearance of progress over the messy reality of learning. Analysts produce neat graphs that comfort their audience but say little about what is actually happening. Managers come to believe their own summaries. Staff stop asking awkward questions.

The consequences can be serious. In health, misjudged capacity planning leads to chronic pressure on wards and emergency departments. In local government, short-term “savings” vanish once the real costs of contract management and service recovery appear. In national infrastructure, billions are spent correcting optimistic assumptions. In charities and companies, morale drops when staff realise that the success being claimed on paper does not match their daily experience.

Yet there are examples of organisations that resist the pattern. Some hospital trusts have insisted that evaluation plans are written before any major change begins. The National Audit Office and the King’s Fund have both published guidance on embedding evaluation into programme design, showing that projects built this way tend to adapt more quickly when problems appear. In the charity sector, groups such as NPC (New Philanthropy Capital) have campaigned for open reporting of both successful and failed initiatives, arguing that learning is impossible without the full picture. These examples are still rare but they show that the cycle can be broken.

There is also a lesson here about personal integrity. It is easy to believe that numbers are neutral, that analysis is pure. But analysis is shaped by the questions people are allowed to ask. When analysts are told, as I once was, to keep adjusting the model until the result looks right, the line between evidence and endorsement disappears. The more this happens, the more everyone involved begins to accept that reality should fit the story, rather than the other way around.

If evidence is to mean anything, it has to be allowed to say something uncomfortable. That means gathering it before decisions are finalised, publishing it even when it contradicts expectations, and treating disagreement as part of learning rather than a threat. It also means leaders must resist the temptation to make public commitments too early. Announcing a neat number may feel strong in the moment, but it is far weaker than admitting uncertainty and inviting scrutiny.

Good decision-making is rarely quick and never tidy. It involves false starts, conflicting data, and the humility to be wrong. The organisations that manage to stay honest are those that build that humility into their process. They allow evidence to change their minds. They treat evaluation not as a formality but as an ongoing conversation with the people affected by their choices.

That is what it means to let evidence lead. Because when decisions come first, the data that follows can only ever play catch-up.