WEBCommentary Guest

Date: March 7, 2009

Climate change forecasts are useless for policymaking
By Kesten C. Green, J. Scott Armstrong, and Willie Soon


Almost every day, media outlets quote “experts” who predict that soaring temperatures, rising sea levels, increasing storms, prolonged droughts and other disasters will result from human activity. Many of these “forecasts” and “predictions” are the product of climate change computer models that produce a variety of “worst-case scenarios.” Others are simply speculations, expectations, scenarios, probabilities or supposed certainties. However they might be described by their creators, the question is: Are these projections valid? Are they a sound basis for policy decisions that will have incalculable, far-reaching impacts on our energy security, economy, living standards and lives? Read this article, and decide for yourself. Better yet, post it on your website, and let people all over the world decide for themselves.

Even as we struggle with serious global financial and economic difficulties, some people believe manmade global warming is a real problem of urgent concern. Perhaps this is because, almost every day, media outlets quote “experts” who predict that soaring temperatures, rising sea levels, increasing storms, prolonged droughts and other disasters will result from human activity.

NASA scientist James Hansen claims “death trains” carrying coal are putting our planet “in peril.” If we continue using hydrocarbon energy, he predicts, “…one ecological collapse will lead to another, in amplifying feedbacks.” He further forecasts that only by eliminating coal-fired power plants and other sources of carbon dioxide can we prevent the collapse.

The situation recalls a 1974 CIA report that concluded there was “growing consensus among leading climatologists that the world is undergoing a cooling trend”… one likely to cause a food production crisis. Dr. Hansen would probably appreciate the frustration those CIA experts must have felt when Congress ignored their forecasts and recommendations.

If it makes sense to enact measures to reduce CO2 emissions when experts forecast warming, then surely it also makes sense to emit extra CO2 when experts forecast cooling. Or perhaps not.

Perhaps any link between climate change and carbon dioxide is not so strong or important. Consider the historical record.

The tiny fraction of carbon dioxide in the atmosphere increased through the twentieth century. And yet, during that time, global average temperatures rose till about 1940, fell till about 1975, rose again till 1998, and then dropped away again. It is not surprising, then, that despite claims “the science is settled,” thousands of scientists disagree with forecasts of dangerous manmade global warming.

History again provides useful guidance.

Back in 1860, scientists used observations and mathematical modeling to predict the existence of planet Vulcan in an orbit 13 million miles from the Sun. More observations of the planet and extensive debate followed. Finally, the science was settled. The model was wrong. Planet Vulcan does not exist.

Climate change is a complex problem that has generated a similarly heated debate. Reliable data exist only for the last three decades, whereas climate changes occur over decades and centuries. Not surprisingly, there are rival theories.

What is the status of experts’ forecasts in such a situation? Scientific forecasting research has shown that experts aren’t able to provide accurate predictions in this kind of complex and uncertain situation. It doesn’t matter whether experts present their forecasts as certain outcomes, detailed scenarios, expectations, likelihoods or probabilities. Or that the forecasts are the product of hard thinking by many highly qualified experts, or even of mathematics or computer simulations. The expert forecasts are nonetheless worthless.

This lack of credible climate forecasts matters, because proposed policies – including taxing carbon emissions and cap-and-trade regimes – will increase energy prices, cause major wealth transfers, and cost jobs. It would be immoral to impose such punishing policies on the basis of dodgy forecasts.

Fortunately, proper forecasters know how to do better.

Global average temperatures vary up and down over short and long periods, without apparent pattern -- and our current knowledge about what causes temperature and other climate changes is speculative and incomplete. Thus, the first question a bona fide forecaster would ask is: Can we do better than assume future temperatures will be the same as current temperatures?

The forecasting model based on this assumption is called the “no-change” model, and studies have shown it is often difficult to beat. The model predicts that global average temperatures in each of the next 100 years will be the same as the previous year’s temperature.

When this model is applied, starting in the year 1850, the differences between the forecasts and the measured global temperatures turn out to be quite small. For example, for forecasts 20 years into the future, the average difference is 0.18°C (0.32°F). For forecasts 50 years into the future, the average difference is 0.24°C (0.43°F).
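For readers curious how such a benchmark is scored, the sketch below (in Python) shows one way to do it. It is a minimal illustration only, not the authors' analysis: the anomaly values are invented, and the forecast horizons are arbitrary.

```python
# Minimal sketch (not the authors' analysis) of scoring a "no-change" benchmark.
# The anomaly values below are invented for illustration; a real evaluation
# would use a published annual global-average temperature record.

anomalies = [0.02, -0.05, 0.01, 0.10, 0.04, -0.03, 0.08, 0.12, 0.07, 0.15,
             0.11, 0.09, 0.18, 0.14, 0.20, 0.16, 0.22, 0.19, 0.25, 0.21]

def no_change_error(series, horizon):
    """Mean absolute error when the forecast for `horizon` years ahead
    is simply the last observed value (the no-change model)."""
    errors = [abs(series[t + horizon] - series[t])
              for t in range(len(series) - horizon)]
    return sum(errors) / len(errors)

for h in (1, 5, 10):
    print(f"{h}-year-ahead mean absolute error: {no_change_error(anomalies, h):.3f} C")
```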

These are temperature differences that a normal human being would have trouble detecting and are well within the range of natural variation. The evidence clearly suggests that the no-change model is the obvious one for public policy makers to use.

Policymakers, however, have tended to defer to the projections of the United Nations’ Intergovernmental Panel on Climate Change. Perhaps it isn’t surprising that they should prefer projections that governments have paid billions for, over forecasts from a free and simple model. But how do the IPCC projections perform?

In 1992, the IPCC first projected a global warming rate of 0.03°C per year. Over the years 1992 to 2008, the errors from that projection were little different from the errors from the no-change model, when both are compared with actual measured temperatures. When the IPCC's warming rate is applied to a historical period of exponential CO2 growth, from 1851 to 1975, the errors are more than seven times greater than the errors from the no-change model.
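The same kind of scoring can be extended to a fixed-trend projection. The sketch below is again purely illustrative: it reuses the invented anomaly series from the earlier sketch and assumes a constant 0.03°C-per-year trend; it is not the authors' or the IPCC's calculation.

```python
# Illustrative comparison (not the authors' or the IPCC's calculation) of a
# fixed-trend projection against the no-change benchmark, on invented data.

anomalies = [0.02, -0.05, 0.01, 0.10, 0.04, -0.03, 0.08, 0.12, 0.07, 0.15,
             0.11, 0.09, 0.18, 0.14, 0.20, 0.16, 0.22, 0.19, 0.25, 0.21]
TREND = 0.03  # assumed warming rate in degrees C per year

def mean_abs_error(series, horizon, trend=0.0):
    """Average absolute error when the forecast equals the last observed value
    plus trend * horizon; trend = 0.0 reproduces the no-change model."""
    errors = [abs(series[t + horizon] - (series[t] + trend * horizon))
              for t in range(len(series) - horizon)]
    return sum(errors) / len(errors)

for h in (5, 10):
    print(f"horizon {h:2d} years: trend model {mean_abs_error(anomalies, h, TREND):.3f} C, "
          f"no-change {mean_abs_error(anomalies, h):.3f} C")
```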

The models employed by James Hansen and the IPCC are not based on scientific forecasting principles. There is no empirical evidence that they provide long-term forecasts as accurate as simply forecasting that global average temperatures will not change. Hansen's and the IPCC's forecasts, and the recommendations based on them, should be ignored.

It would be irresponsible and immoral of policymakers to impose the heavy burden of costly policies against carbon-based energy in the absence of any credible evidence that those burdens will result in net benefits to man, beast or tree.



Notes: 

Dr. Kesten Green is a Senior Research Fellow with the Business and Economic Forecasting Unit of Monash University in Australia. Dr. Scott Armstrong is a Professor at The Wharton School, University of Pennsylvania. Dr. Willie Soon is a scientist at the Harvard-Smithsonian Center for Astrophysics.

Dr. Green will discuss the use of scientific methods for forecasting climate change at the International Conference on Climate Change in New York City on March 9. The team’s paper on this subject will be published later this year in the International Journal of Forecasting and is available from http://publicpolicyforecasting.com.



