How Guesses Become Gospel

This article follows When Decisions Come First, which showed how policy choices are often made before the evidence exists to justify them. What follows is the next step in that story: how guesses become gospel.

It starts quietly, with a number in a slide deck or a line in a report. A cautious estimate turns into a confident statement, then a headline fact. By the time it reaches a minister’s desk, nobody remembers where it came from or how uncertain it was to begin with.

This transformation happens so smoothly that most people never notice. A claim that began with caveats such as “roughly” or “subject to revision” becomes an accepted truth through repetition. Each retelling gives it more authority. Reports cite other reports, journalists quote headlines, and before long everyone agrees on something that was never properly tested.

That is how organisational myths begin. They grow from half-true statements that are repeated until they sound natural.

You see, the human brain loves consistency. Once a number has been heard enough times, it feels familiar, and familiarity feels true. Psychologists call it the “illusory truth effect”. Organisations fall for it constantly.

One of the clearest public examples is the long-standing claim that the NHS could save billions through digital transformation and shared records. That figure has appeared in countless reports since the early 2000s. Early estimates were drawn from pilots and best guesses, including figures quoted during the National Programme for IT (NPfIT) which began in 2002. By 2011, the Public Accounts Committee had described many of those early claims as “unproven” and “optimistic”. Yet the numbers survived. They were quoted in later strategies, and still appear, in slightly altered form, in policy papers today.

The same thing happens with targets for hospital waiting times. When the four-hour target for A&E departments was introduced, it was backed by strong logic and good intentions. Over time, though, the target itself became the story. Reports focused on the percentage achieved rather than on what the figures actually meant for patients. Hospitals learned how to record attendance to show improvement, even when patients were still waiting elsewhere. The target turned from a measure into a myth, a symbol of success rather than a reflection of experience.

This is not unique to health. Local authorities do it with recycling rates. A percentage appears in a strategy paper, perhaps 50 per cent household recycling by a certain year, and it soon becomes a public promise. When the target proves unrealistic, definitions are quietly adjusted. What counts as “recycled” expands to include incinerated waste or exported material. The published rate looks good, the press release reads well, and the public hears that progress is on track. The assumption that the target was ever realistic remains unchallenged.

Once a figure or claim gains authority, it begins to generate more evidence in its favour. Analysts and communications teams look for supporting data, and because most large datasets contain noise as well as signal, something useful can usually be found.

The process is rarely dishonest in intent. People want to show progress. They interpret the data through the lens of what has already been declared. For example, if a charity announces that its new programme reduces isolation among older people by 30 per cent, staff will look for survey results that confirm it. A small improvement in one region becomes the headline figure. The less flattering findings from other areas are quietly sidelined.

The 2023 Charity Digital Skills Report highlighted this tendency. Many organisations described their digital projects as successful, yet the same report found that only a small proportion had formal evaluation processes or metrics to measure impact. Success was being declared largely on perception and selective examples. Those early self-assessments then appeared in funding bids, presentations, and public statements, taking on the weight of fact.

Private companies follow the same pattern. A retailer might publish a report claiming that its environmental initiatives cut emissions by 25 per cent. Look closely at the fine print, though, and you may find the reduction refers only to a specific subset of operations, or uses a baseline year chosen for convenience. The figure is technically true, but once repeated in advertisements, media articles, and shareholder statements, it sounds far more comprehensive than it is.

The pattern is the same each time. A tentative estimate becomes a confident statement, then an unquestioned truth.

Organisations crave certainty. It calms investors, reassures staff, and gives journalists something to print. But certainty has a habit of outliving accuracy.

Once a figure appears in a glossy report, it gains a permanence that is hard to undo. Few people have the time or inclination to track its origin. Even internal teams lose sight of where the numbers came from. I have seen departments cite figures from old reports without realising they were citing their own earlier drafts.

That loop of self-reference builds momentum. An internal presentation quotes a figure from last year’s strategy. This year’s strategy quotes the presentation. A consultant’s report cites both. Before long, the same statistic has been referenced five times in five documents, none of which contain the original evidence. It feels robust simply because it exists in print.

Once an organisation has built a story around a fact, challenging it becomes difficult. Staff who question the figures risk being labelled negative or unhelpful. Managers who built careers on the success of a project are unlikely to reopen old debates. The myth becomes part of institutional memory.

In health, one of the longest-running examples is the assumption that centralising specialist services automatically improves outcomes. The logic behind this idea is sound in some cases. Concentrating expertise can lead to better results. But when applied broadly, without local evidence, it can also lead to longer travel times, reduced access, and new bottlenecks. Despite mixed findings in evaluations, the assumption has taken root. The phrase “evidence shows centralisation improves quality” still appears routinely in consultation documents.

Charities and private companies show the same reluctance to revisit their own narratives. A social enterprise might promote its training programme as a success story based on early participant feedback. Years later, when follow-up data shows limited long-term impact, the original claim remains on the website because no one wants to rewrite the story. Similarly, a bank might continue to claim it is improving financial literacy through outreach campaigns even when independent studies show little measurable effect. Once success is declared publicly, the reputational cost of admitting uncertainty outweighs the value of truth.

The danger of hardened assumptions is that they become foundations for new decisions. A shaky figure at the start of one policy becomes the baseline for the next.

Take the case of Universal Credit. Early government statements claimed that it would save billions through efficiency and encourage more people into work. Those projections were based on pilot data and economic modelling with heavy caveats. The National Audit Office later noted that the estimates were “highly uncertain” and dependent on assumptions about behaviour that had not been tested at scale. Despite this, the savings figures continued to appear in political speeches and media briefings long after internal documents had cast doubt on them.

In such cases, the repetition of early optimism shapes future spending, staffing, and legislation. Policy evolves on the back of a story rather than evidence. When outcomes fail to match expectations, the narrative often shifts to execution, blaming local implementation or external factors rather than revisiting the original data.

The solution is straightforward but rarely followed. Every time a number or claim appears in a document, its origin should be cited clearly. That citation should link back to the original evidence, not to another summary. If the figure is based on modelling or limited trials, the uncertainty should remain visible.

Several organisations have begun to make their use of evidence more transparent. The Institute for Government, working with Sense about Science and the Alliance for Useful Evidence, has promoted the idea of “showing your workings” — setting out clearly how policy claims are supported by underlying research and data. Their Transparency of Evidence reports (2015–2016) describe this as a way of helping the public trace how conclusions were reached and assess whether the evidence justifies them. The What Works Network and the Office for National Statistics have also encouraged this practice, arguing that it improves trust and accountability, even if it slows the process of publication.

Independent evaluation also helps. When the National Audit Office reviews large projects, it routinely finds discrepancies between the evidence cited in early business cases and the later data. Publishing those reviews is crucial. They remind everyone that assumptions can be wrong and that evidence is not a fixed asset but a process of discovery.

The same principle applies in charities and the private sector. Funders and boards should ask not only for results but for documentation of the evidence behind those results. When a claim of success relies on selective data, the organisation should be expected to show the rest of the picture.

At first glance, the hardening of assumptions might seem harmless. After all, what difference does it make if a number is slightly off? The answer lies in trust. When public bodies, charities, and companies repeat figures that later prove shaky, confidence erodes. Staff grow cynical. The public stops believing official claims. Donors and investors start to suspect that every success story is half true.

More practically, decisions built on false certainty waste resources. A service planned on inflated demand data ends up underused. A project justified by overstated savings drains budgets from better options. A charity that repeats untested claims risks spending years on work that helps fewer people than it believes.

The cost of these errors is not just financial. It is cultural. Organisations that cannot admit uncertainty lose the ability to learn. The habit of repetition replaces the discipline of inquiry.

There are signs of progress. The government’s Evaluation Task Force, launched in 2021, aims to embed evaluation and evidence standards across departments. It promotes clarity about where figures come from and how they should be used. Independent organisations like the King’s Fund, the Institute for Government, and NPC continue to publish detailed assessments of how evidence is distorted in practice. These efforts may not eliminate the problem, but they make it harder for myths to survive unchallenged.

Ultimately, the issue is not that people lie. It is that the truth becomes simplified, polished, and repeated until it loses its shape. A provisional claim becomes a comfortable story, and the story becomes policy. The pattern is familiar and corrosive. Early optimism becomes settled truth, and repetition replaces verification.

Yet it doesn’t have to stay that way. Every organisation, from government departments to charities and private firms, can choose to make its evidence trail visible. Keep the caveats. Publish the sources. Question the figures that sound too neat. Each time a number is used, ask who first said it and why.

The call to action is simple: stop rewarding certainty and start rewarding honesty. The real truth may be awkward and uneven, but it is the only thing that can be trusted to stand up over time.