On Tuesday, 24 August last, I emailed the Doherty Institute to ask if they had performed any earlier work which would justify the confidence reposed by our government in their Covid modelling. You can see the ensuing exchange here. As I wrote then, their initial response was not encouraging, and I resolved not to hold my breath waiting for a reply. It was a wise decision, as I would by now have turned blue.
The readiness of governments to use the output of computer models to inform policy, without regard to their proven predictive skill, ought to worry us more than it does. Not only was the Covid scare started by a hopelessly inept model touted by the Imperial College team led by the serial catastrophist Neil Ferguson, but the climate catastrophe exists, not in observed data, but entirely in the scary output of computer models which have repeatedly failed to predict real-world outcomes successfully.
In both cases, despite this substantial history of failed forecasting, the catastrophists have succeeded in commanding the attention – and the obedience – of policy-makers. Ferguson’s failures include epidemiological forecasts for BSE, FMD, ‘bird flu’ and ‘swine flu’, but successive governments have treated his eructations like holy writ, and thrown the kitchen sink in the direction they dictate.
When it comes to climate, I have been hearing computer-modelled predictions of catastrophic anthropogenic climate change all of my adult life – going right back to the 70s, when the scary prediction was global cooling, for God’s sake. If I had a dollar for every time I’ve heard that we have ‘x years’ to save the planet, I’d have enough money to afford the ‘renewables surcharge’ on my electricity bills. Yet ‘x years’ keep on passing, and here we are, in a world no warmer than we should expect for a climate emerging – thank goodness – from the Little Ice Age, and getting greener by the minute, thanks to all the plant food we spew out of our exhaust pipes.
What is it about the two words ‘computer model’ that turns otherwise sane adults into credulous morons? Models are treated, by people who claim to be ‘following the science’, as if they were observational experiments, whose outcomes have the force of observational evidence. In fact, they are ways of testing the modellers’ hypotheses about related phenomena in the real world. If the outcomes they forecast disagree with real-world observations, then we must conclude that at least one of those hypotheses is false, and that quite possibly all of them are.
Ferguson’s forecasts having failed, repeatedly, to predict real-world outcomes, we must conclude that the hypotheses they embody are disconfirmed, and reject their use for policy-making. Similarly, climate models which predict ‘runaway warming’ (or cooling – I’m 70, remember) on a given time-scale must, when that period passes without the warming being observed, be rejected. And yet it never seems to happen.
For all I know, the Doherty Institute models have impeccable skill, and the government is right to trust them. But if so, why not publish, on their website, the case studies that demonstrate that success?