The Limits of Climate Modeling

As the public seeks answers about the future impacts of climate change, some climatologists are growing increasingly uneasy about the localized predictions they are being asked to make.

Now that the world largely accepts our climate is changing, and that humans are to blame, we all want to know what the future holds for our own backyard. How bad will it get? Flood or drought? Feast or famine? Super-hurricane or Mediterranean balm?

The statisticians and climatologists who brought us the big picture are now under huge pressure to get local. But they are growing increasingly concerned about whether their existing models and computers are up to the job. They organized a summit in Reading, England, in May to discuss their concerns. As Brian Hoskins of Reading University, one of the British government’s top climate advisers, put it: “We’ve worked out the global scale. But that’s the easy problem. We don’t yet understand the smaller scale. The pressure is on for answers, and we can’t wait around for decades.”

Already, policymakers are starting to take at face value model predictions of — to take a few examples — warming of 18 degrees Fahrenheit (10 degrees Celsius) or more in Alaska, and super-droughts in the southwestern United States, but little warming at all in the central states.

But is the task doable? Some climate modelers say that even with the extraordinary supercomputing power now available, the answer is no. They worry that, by being lured into offering local forecasts for decades ahead, they are setting themselves up for a fall that could undermine the credibility of the climate models.

Lenny Smith, an American statistician now working on climate modeling at the London School of Economics in the United Kingdom, is fearful. “Our models are being over-interpreted and misinterpreted,” he says. “They are getting better; I don’t want to trash them. But policy-makers think we know much more than we actually know. We need to drop the pretense that they are nearly perfect.”

There are two areas of concern. First, how accurate are the global models at mimicking atmospheric processes? And second, are they capable of zooming in on particular areas to give reliable pictures of the future where you live?

Nobody much doubts that greenhouse gases like carbon dioxide accumulating in the atmosphere will cause warming. It would be a contradiction of 200 years of physics if they did not. But exactly how much warming will occur — and how it will be distributed across the globe and impact other climatic features like rainfall — depends on feedbacks in the climate system, the oceans, and the natural carbon cycle. The influence of some of these feedbacks is much less clear.

One big issue is the influence of clouds. The models are pretty hopeless at predicting future cloud cover. And we can’t even be sure whether, overall, extra clouds would warm or cool the planet. (Clouds may cool us in the day, but will usually keep us warm at night.) In the language of Donald Rumsfeld, we would call this problem a “known unknown.”

And there may also be “unknown unknowns.” For instance, a paper in Earth and Planetary Science Letters in March reported finding fossilized ferns in central Siberia suggesting that in the late Cretaceous period, temperatures there were like those of modern-day Florida. Yet current climate models predict that the area should have had average temperatures around zero Celsius. The British climate modeler involved in the study, Paul Valdes of Bristol University, says this snapshot from the era of the dinosaurs could mean that “the internal physics of our climate models are wrong,” and that the models may be drastically underestimating likely warming in the 21st century.

This uncertainty at the heart of the models may seem surprising, given that the predictions of most global climate models broadly agree. For more than a decade they have estimated that a doubling of carbon dioxide in the air would warm the world by between 1.5 and 4.5 degrees Celsius.
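To get a rough feel for where that canonical range comes from, consider a back-of-the-envelope sketch. It uses the widely cited logarithmic approximation for the radiative forcing of carbon dioxide (about 5.35 times the natural logarithm of the concentration ratio, in watts per square meter); the two sensitivity values below are illustrative assumptions chosen to bracket the range, not outputs of any particular model.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m^2) from a change in CO2,
    using the widely cited logarithmic fit F = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Climate sensitivity parameter (degrees C of warming per W/m^2 of forcing).
# These two values are illustrative assumptions picked to bracket the
# canonical 1.5-4.5 degrees C range for a doubling of CO2, not values
# taken from any real climate model.
for sensitivity in (0.4, 1.2):
    warming = sensitivity * co2_forcing(560.0)  # doubling: 280 -> 560 ppm
    print(f"{sensitivity:.1f} C per W/m^2  ->  {warming:.1f} C of warming")
```

Run, this prints roughly 1.5 and 4.5 degrees Celsius for the low and high sensitivities. Real models, of course, derive the sensitivity from the physics of feedbacks rather than assuming it, which is precisely where the uncertainty lies.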

Some experts think the consensus of the models is bogus. “The modelers tend to tweak them to align them. The process is very incestuous,” one leading British analyst on uncertainty in models told me.

Another, Jerry Ravetz, a fellow at Oxford University’s James Martin Institute for Science and Civilisation, says: “The modelers are trained to be very myopic, and not to understand the uncertainty within their models. They become very skilled at solving the little problems and ignoring the big ones.”

These are serious charges. But the custodians of the big models say this is really a communications problem between them and the outside world. Gavin Schmidt of NASA’s Goddard Institute for Space Studies, which runs one of the world’s top climate models, says modelers themselves have a “tacit knowledge” of the uncertainties inherent in their work. But this knowledge rarely surfaces in public discussions, resulting in “an aura of exactitude that can be misleading.”

Steve Rayner, director of the James Martin Institute, says, “What climate models do well is give a broad picture. What they are absolutely lousy at is giving specific forecasts for particular places or times.” And yet that is what modelers are increasingly doing.

At a meeting at Cambridge University in Britain last summer, Lenny Smith singled out for criticism the British government’s Met Office Hadley Centre for Climate Prediction and Research, which runs another of the world’s premier climate models. He accuses the Centre of making detailed climate projections for regions of Britain even though global climate models disagree strongly about how climate change will affect the islands.

James Murphy, Hadley’s head of climate prediction, says: “I find it far-fetched that a planner is going to rush off with a climate scientist’s probability distribution and make an erroneous decision because they assumed they could trust some percentile of the distribution to its second decimal point.”

But some say the Hadley Centre invites just such a response in some of its leaflets. One of its reports, “New Science for Managing Climate Risks,” distributed to policymakers at the Bali climate conference last December, included “climate model predictions” forecasting that by 2030 the flow of the River Orinoco will have declined by 18.7 percent, the Zambezi’s by 34.9 percent, and the Amazon’s by 13.4 percent.

Many in the modeling community are growing wary of such spurious certainty. Last year, a panel on climate modeling assembled by the UN’s World Climate Research Programme under the chairmanship of Jagadish Shukla of George Mason University in Calverton, Maryland, concluded that current models “have serious limitations in simulating regional features, for example rainfall, mid-latitude storms, organized tropical convection, ocean mixing, and ecosystem dynamics.”

Regional projections, the panel said, “are sufficiently uncertain to compromise the goal of providing society with reliable predictions of regional climate change.” Many of the predictions were “laughable,” according to the panel. Concern is greatest about predicting climate in the tropics, including hurricane formation. That seriously undermines the credence that can be placed in a headline-grabbing prediction, made in May, that the future might see fewer (albeit sometimes more intense) Atlantic hurricanes.

This might not matter too much if politicians and policymakers had a healthily skeptical view of climate models. But most do not, as a meeting of modelers held in Oxford in February heard. Policymakers often hide behind models and modelers, using them to claim scientific probity for their actions. One speaker likened modern climate modelers to the ancient oracles: “They are part of the tradition of goats’ entrails and tea leaves. They are a way of objectifying advice, cloaking sensible ideas in a false aura of scientific certainty.”

We saw that when European governments at the recent Bali climate conference cited the UN’s Intergovernmental Panel on Climate Change as reporting that keeping carbon dioxide concentrations in the atmosphere below 450 parts per million would prevent warming above 2 degrees Celsius, and that this was a “safe” level of warming. Neither statement appears in any IPCC report, and the panel’s scientists have repeatedly stated that what might be regarded as a safe degree of warming is ultimately a political, not a scientific, question.

None of this should be taken to suggest either that climate models are not valuable tools, or that they exaggerate the significance of man-made climate change. In fact, they may well inadvertently be underestimating the pace of change. Most models suggest that climate change in the coming decades will be gradual — a smooth line on a graph. But our growing knowledge of the history of natural climate change suggests that change often happens in sudden leaps.

For instance, there was a huge step-change in the world’s climate 4,200 years ago. Catastrophic droughts simultaneously shattered human societies across the Middle East, India, China, and the interior of North America. “Models have great difficulty in predicting such sudden events, and in explaining them,” says Euan Nisbet of Royal Holloway, University of London. “But geology tells us that catastrophe has happened in the past, and is likely to happen again.”

As Pasky Pascual of the U.S. Environmental Protection Agency put it at the Oxford meeting: “All models are wrong; some are wronger.” But they are our best handle on likely climate change in the coming decades.

Acting on the findings of Shukla’s panel, modelers from around the world met at the summit in Reading in May to “prepare a blueprint to launch a revolution in climate prediction.” They said that to meet the demand for reliable local forecasts they would need more than a thousand times the computing power currently at their disposal, and they called for the establishment of a new billion-dollar global climate modeling center, a “Manhattan Project for climate,” to deliver the goods.