A more specific and credible assessment of climate models comes from physicist Freeman Dyson:
I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in.

Still, such models, and more importantly, the recommendations and forecasts of the scientists who create such models, are commonly used by policymakers, and feature prominently in IPCC reports. This is a disconcerting thought -- that billions of dollars and lives hinge upon such activities -- but nevertheless, forecasting is one of the cornerstones of climate policy, and the subject of much media attention. When the choice of what to cover is a relatively simple statement of what may happen in 20 years (especially if it claims looming disaster) versus detailed charts and graphs of temperature proxies that require pages of technical explanation, it's not at all surprising that we regularly read predictions of doom in the newspapers.
The science behind the forecasts is far from settled, as the battle rages over the validity of this or that proxy series and this or that model. But what of the accuracy and validity of the forecasting itself? Is there a way to empirically assess what makes a good forecast?
The answer is: yes.
Authors KC Green and JS Armstrong describe the basic principles of proper forecasting, and how they apply to climate science, in their 2007 paper Global Warming: Forecasts by Scientists versus Scientific Forecasts. The principles are ranked by the strength of the evidence supporting them: "for example some principles are based on common sense or received wisdom. Such principles are included when there is no contrary evidence. Other principles have some empirical support, while 31 are strongly supported by empirical evidence."
They go on to say that some principles are even counter-intuitive, and that "those who forecast in ignorance of the forecasting research literature are unlikely to produce useful predictions." They then name "some well-established principles that apply to long-term forecasts for complex situations where the causal factors are subject to uncertainty (as with climate)."
The application of these principles to our post-Climategate world should be obvious, especially the first two. Both unaided forecasts by experts and the amount of agreement between those experts have little to no relation to the accuracy of their opinions. Green and Armstrong address this directly by questioning basic assumptions:
- Unaided judgmental forecasts by experts have no value. This applies whether the opinions are expressed in words, spreadsheets, or mathematical models. It applies regardless of how much scientific evidence is possessed by the experts.
Among the reasons for this are:
a) Complexity: People cannot assess complex relationships through unaided observations.
b) Coincidence: People confuse correlation with causation.
c) Feedback: People making judgmental predictions typically do not receive unambiguous feedback they can use to improve their forecasting.
d) Bias: People have difficulty in obtaining or using evidence that contradicts their initial beliefs. This problem is especially serious for people who view themselves as experts.
- Agreement among experts is weakly related to accuracy. This is especially true when the experts communicate with one another and when they work together to solve problems, as is the case with the IPCC process.
- Complex models (those involving nonlinearities and interactions) harm accuracy because their errors multiply. ... (The first sketch after this list illustrates the point, along with the ballooning prediction intervals of the next principle.)
- Given even modest uncertainty, prediction intervals are enormous. Prediction intervals (ranges outside which outcomes are unlikely to fall) expand rapidly as time horizons increase, for example, so that one is faced with enormous intervals even when trying to forecast a straightforward thing such as automobile sales for General Motors over the next five years.
- When there is uncertainty in forecasting, forecasts should be conservative. Uncertainty arises when data contain measurement errors, when the series are unstable, when knowledge about the direction of relationships is uncertain, and when a forecast depends upon forecasts of related (causal) variables. For example, forecasts of no change were found to be more accurate than trend forecasts for annual sales when there was substantial uncertainty in the trend lines (Schnaars and Bavuso 1986). (The second sketch below compares such a no-change forecast with trend extrapolation.)
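To make the "errors multiply" and "enormous intervals" principles concrete, here is a minimal sketch in Python. It uses a toy chaotic logistic map, not any model from the paper or the IPCC; the map, its parameter, and the error sizes are all assumptions chosen purely for illustration. A tiny measurement error in the starting value multiplies through the nonlinear iteration until, on a typical run, the 95% prediction interval spans nearly the whole range of the system:

```python
# A minimal sketch, assuming a toy chaotic logistic map (r = 3.9) in place of a
# real climate model: tiny input errors multiply through a nonlinear iteration,
# and the prediction interval balloons as the forecast horizon grows.
import random

random.seed(0)

def iterate(x0, horizon, r=3.9):
    """Iterate the logistic map x -> r * x * (1 - x) for `horizon` steps."""
    x = x0
    for _ in range(horizon):
        x = r * x * (1.0 - x)
    return x

samples = 10_000
for h in (1, 5, 10, 20, 40):
    # Start near 0.3 with a small measurement error of about +/- 0.0001.
    outcomes = sorted(
        iterate(0.3 + random.uniform(-1e-4, 1e-4), h) for _ in range(samples)
    )
    lo, hi = outcomes[samples // 40], outcomes[-(samples // 40)]  # ~95% interval
    print(f"horizon {h:>2}: 95% prediction interval width ~ {hi - lo:.5f}")
```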
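And a second sketch, in the same spirit, for the conservatism principle. The data here are a simulated random walk (an "unstable series," where any fitted trend is mostly noise), and the history length, horizon, and trial count are arbitrary assumptions. A conservative no-change forecast from the last observed value is compared with extrapolating a least-squares trend line, echoing the Schnaars and Bavuso (1986) finding quoted above:

```python
# A minimal sketch (simulated data; all settings are illustrative assumptions)
# of the "be conservative" principle: on an unstable series, a naive no-change
# forecast tends to beat extrapolating a fitted trend.
import random

random.seed(1)

def fit_trend(ys):
    """Least-squares slope and intercept for y against x = 0, 1, ..., n-1."""
    n = len(ys)
    xbar = (n - 1) / 2.0
    ybar = sum(ys) / n
    sxy = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sxy / sxx
    return slope, ybar - slope * xbar

trials, history, horizon = 2_000, 10, 5
err_no_change = err_trend = 0.0

for _ in range(trials):
    # An unstable series: a random walk, so any apparent trend is mostly noise.
    series, y = [], 0.0
    for _ in range(history + horizon):
        y += random.gauss(0.0, 1.0)
        series.append(y)
    past, future = series[:history], series[history:]

    slope, intercept = fit_trend(past)
    for k, actual in enumerate(future):
        err_no_change += abs(past[-1] - actual)                       # "no change"
        err_trend += abs(intercept + slope * (history + k) - actual)  # trend line

n = trials * horizon
print(f"mean abs error, no-change forecast:  {err_no_change / n:.2f}")
print(f"mean abs error, trend extrapolation: {err_trend / n:.2f}")
```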
But is it necessary to use scientific forecasting methods? In other words, to use methods that have been shown by empirical validation to be relevant to the types of problems involved in climate forecasting? Or is it sufficient to have leading scientists examine the evidence and make forecasts?

Why is this an important question? As noted before, potentially disastrous public policy is at stake here, and "Many public policy decisions are based on forecasts by experts. Research on persuasion has shown that people have substantial faith in the value of such forecasts. Faith increases when experts agree with one another."
After listing some examples of expert forecasting gone horribly wrong and describing studies in the field of forecasting science, they state the conclusion succinctly:
Comparative empirical studies have routinely concluded that judgmental forecasting by experts is the least accurate of the methods available to make forecasts. [bold added]

What does this have to do with computer models currently used in climate science?
The methodology for climate forecasting used in the past few decades has shifted from surveys of experts’ opinions to the use of computer models. Reid Bryson, the world’s most cited climatologist, wrote in a 1993 article that a model is "nothing more than a formal statement of how the modeler believes that the part of the world of his concern actually works" (pp. 790-798). Based on the explanations of climate models that we have seen, we concur. ... Climate models are, in effect, mathematical ways for the experts to express their opinions. [bold added]

In this context, the emails between prominent proponents of AGW take on a particularly sinister quality. A "trick" may truly mean that the scientist came up with a novel hack to deal with a problem, but when it is used to "hide the decline," one has to wonder whether science has become the handmaiden to a political agenda. Couple that with the attempt to silence dissent, expel nonconformists, and blackball journals (and this is just scratching the surface of what Climategate shows us), as well as the original Opinion-generating Engine -- the climate model behind the infamous Hockey Stick Graph -- and it's difficult to see how any of this meshugas is taken seriously at all.
Sadly, it still is, because as the authors note repeatedly, "people have substantial faith in the value of such forecasts." This is why it is important for rational people to be able to question the very nature of the forecasts we hear every day, rather than simply quibbling about the validity of one proxy series over another, or going through a he said/she said about whether polar bear populations are decreasing or not.
I can't recommend reading this paper highly enough, but let me conclude by excerpting its abstract for those who just want the soundbite version:
In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme, issued its Fourth Assessment Report. The Report included predictions of dramatic increases in average world temperatures over the next 92 years and serious harm resulting from the predicted temperature increases. Using forecasting principles as our guide we asked: Are these forecasts a good basis for developing public policy? Our answer is “no”.
To provide forecasts of climate change that are useful for policy-making, one would need to forecast (1) global temperature, (2) the effects of any temperature changes, and (3) the effects of feasible alternative policies. Proper forecasts of all three are necessary for rational policy making.
...We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. ... The forecasting procedures that were described violated [many] principles. Many of the violations were, by themselves, critical.
The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder. [bold added]
1 comment:
I remember reading an "American Scientist" article back in the late 1980s where the admitted margin of error was about 3 orders of magnitude larger than what they were trying to measure. Now what they were doing was explaining their methods rather than endorsing their specific findings, but you would not have figured that out from the abstract.
On a more populist level there was a National Geographic article (sorry, don't even have a rough time scale on this one) where the climate modeler admitted that when the model "didn't come out right" they just "tweaked the programming until it did." Right, of course, being in accord with his hypothesis going in.
C. Andrew