18 Jan 2011: Analysis

Can We Trust Climate Models?
Increasingly, the Answer is ‘Yes’

Forecasting what the Earth’s climate might look like a century from now has long presented a huge challenge to climate scientists. But better understanding of the climate system, improved observations of the current climate, and rapidly improving computing power are slowly leading to more reliable methods.

by michael d. lemonick

A chart appears on page 45 of the 2007 Synthesis Report of the Intergovernmental Panel on Climate Change (IPCC), laying out projections for what global temperature and sea level should look like by the end of this century. Both are projected to rise, which will come as no surprise to anyone who’s been paying even the slightest attention to the headlines over the past decade or so. In both cases, however, the projections span a wide range of possibilities. The temperature, for example, is likely to rise anywhere from 1.8°C to 6.4°C (3.2°F to 11.5°F), while sea level could increase by as little as 7 inches or by as much as 23 inches — or anywhere in between.

It all sounds appallingly vague, and the fact that it’s all based on computer models probably doesn’t reassure the general public all that much. For many people, “model” is just another way of saying “not the real world.” In fairness, the wide range of possibilities in part reflects uncertainty about human behavior: The chart lays out different possible scenarios based on how much CO2 and other greenhouse gases humans might emit over the coming century. Whether the world adopts strict emissions controls or decides to ignore the climate problem entirely will make a huge difference to how much warming is likely to happen.

But even when you factor out the vagaries of politics and economics, and assume future emissions are known perfectly, the projections from climate models still cover a range of temperatures, sea levels, and other manifestations of climate change. And while there’s just one climate, there’s more than one way to simulate it. The IPCC’s numbers come from averaging nearly two dozen individual models produced by institutions including the National Center for Atmospheric Research (NCAR), the Geophysical Fluid Dynamics Laboratory (GFDL), the U.K.’s Met Office, and more. All of these models have features in common, but they’re constructed differently — and all of them leave some potentially important climate processes out entirely. So the question remains: How much can we really trust climate models to tell us about the future?

The answer, says Keith Dixon, a modeler at GFDL, is that it all depends on the questions you’re asking. “If you want to know ‘is climate change something that should be on my radar screen?’” he says, “then you end up with some very solid results. The climate is warming, and we can say why. Looking to the 21st century, all reasonable projections of what humans will be doing suggest that not only will the climate continue to warm, you have a good chance of it accelerating. Those are global-scale issues, and they’re very solid.”

The reason they’re solid is that, right from the emergence of the first crude versions back in the 1960s, models have been at their heart a series of equations that describe airflow, radiation and energy balance as the Sun
warms the Earth and the Earth sends some of that warmth back out into space. “It literally comes down to mathematics,” says Peter Gleckler, a research scientist with the Program for Climate Model Diagnosis and Intercomparison at Lawrence Livermore National Laboratory, and the basic equations are identical from one model to another. “Global climate models,” he says, echoing Dixon, “are designed to deal with large-scale flow of the atmosphere, and they do very well with that.”

The problem is that warming causes all sorts of changes — in the amount of ice in the Arctic, in the kind of vegetation on land, in ocean currents, in permafrost and cloud cover and more — that in turn can either cause more warming, or cool things off. To model the climate accurately, you have to account for all of these factors. Unfortunately, says James Hurrell, who led the NCAR’s most recent effort to upgrade its own climate model, you can’t. “Sometimes you don’t include processes simply because you don’t understand them well enough,” he says. “Sometimes it’s because they haven’t even been discovered yet.”

A good example of the former, says Dixon, is the global carbon cycle — the complex interchange of carbon between oceans, atmosphere, and biosphere. Since atmospheric carbon dioxide is driving climate change, it’s obviously important, but until about 15 years ago, it was too poorly understood to be included in the models. “Now,” says Dixon, “we’re including it — we’re simulating life, not just physics.” Equations representing ocean dynamics and sea ice also have been added to climate models as scientists have understood these crucial processes better.

Other important phenomena, such as changes in clouds, are still too complex to model accurately. “We can’t simulate individual cumulus clouds,” says Dixon, because they’re much smaller than the 200-kilometer grid boxes that make up climate models’ representation of the world. The same applies to aerosols — tiny particles, including natural dust and manmade soot — that float around in the atmosphere and can cool or warm the planet, depending on their size and composition.

But there’s no one right way to model these small-scale phenomena. “We don’t have the observations and don’t have the theory,” says Gleckler. The best they can do on this point is to simulate the net effect of all the clouds or aerosols in a grid box, a process known as “parameterization.” Different
modeling centers go about it in different ways, which, unsurprisingly, leads to varying results. “It’s not a science for which everything is known, by definition,” says Gleckler. “Many groups around the world are pursuing their own research pathways to develop improved models.” If the past is any guide, modelers will be able to abandon parameterizations one by one, replacing them with mathematical representations of real physical processes.
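The idea behind a parameterization can be made concrete with a toy example. The sketch below (purely illustrative, and not drawn from any of the models discussed here) uses a Sundqvist-style relationship, a classic scheme in which the cloud fraction of an entire grid box is estimated from its grid-mean relative humidity rather than from individual clouds; the critical-humidity constant shown is a placeholder, as real schemes tune it carefully and use many more inputs:

```python
import math

def cloud_fraction(rel_humidity, rh_critical=0.75):
    """Fraction of the grid box assumed cloudy, given grid-mean relative
    humidity (0-1). Sundqvist-style scheme: below a critical humidity no
    cloud forms; above it, cover rises smoothly toward 1 at saturation."""
    if rel_humidity <= rh_critical:
        return 0.0
    return 1.0 - math.sqrt((1.0 - rel_humidity) / (1.0 - rh_critical))

# The grid box as a whole is assigned a single cloud fraction,
# standing in for all the unresolved clouds inside it.
for rh in (0.5, 0.8, 0.9, 0.99):
    print(f"grid-mean RH = {rh:.2f} -> cloud fraction = {cloud_fraction(rh):.2f}")
```

The point is not the particular formula but the strategy: one bulk relationship replaces physics the model cannot resolve, and different modeling centers choose different relationships.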

Sometimes, modelers don’t understand a process well enough to include it at all, even if they know it could be important. One example is a caveat that appears on that 2007 IPCC chart. The projected range of sea-level rise, it warns, explicitly excludes “future rapid dynamical changes in ice flow.” In other words, if land-based ice in Greenland and Antarctica starts moving more quickly toward the sea than it has in the past — something glaciologists knew was possible, but hadn’t yet been documented — these estimates would be incorrect. And sure enough, satellites have now detected such movements. “The last generation of NCAR models,” says Hurrell, “had no ice sheet dynamics at all. The model we just released last summer does, but the representation is relatively crude. In a year or two, we’ll have a more sophisticated update.”

Sophistication only counts, however, if the models end up doing a reasonable job of representing the real world. It’s not especially useful to wait until 2100 to find out, so modelers do the next best thing: They perform “hindcasts,” which are the inverse of forecasts. “We start the models from the middle of the 1800s,” says Dixon, “and let them run through the present.” If a model reproduces the overall characteristics of the real-world climate record reasonably well, that’s a good sign.

What the models don’t try to do is to match the timing of short-term climate variations we’ve experienced. A model might produce a Dust Bowl like that of the 1930s, but in the model it might happen in the 1950s. It should produce the ups and downs of El Niño and La Niña in the Pacific with about the right frequency and intensity, but not necessarily at the same times as they happen in the real Pacific. Models should show slowdowns and accelerations in the overall warming trend, the result of natural fluctuations, at about the rate they happen in the real climate. But they won’t necessarily show the specific flattening of global warming we’ve observed during the past decade — a temporary slowdown that had skeptics declaring the end of climate change.

It’s also important to realize that climate represents what modelers call a boundary condition. Blizzards in the Sahara are outside the boundaries of our current climate, and so are stands of palm trees in Greenland next year. But within those boundaries, things can bounce around a great deal from year to year or decade to decade. What modelers aim to produce is a virtual climate that resembles the real one in a statistical sense, with El Niños, say, appearing about as often as they do in reality, or hundred-year storms coming once every hundred years or so.

This is one essential difference between weather forecasting and climate projection. Both use computer models, and in some cases, even the very same models. But weather forecasts start out with the observed state of the
atmosphere and oceans at this very moment, then project it forward. It’s not useful for our day-to-day lives to know that September has this average high or that average low; we want to know what the actual temperature will be tomorrow, and the day after, and next week. Because the atmosphere is chaotic, anything less than perfect knowledge of today’s conditions (which is impossible, given that observations are always imperfect) will make the forecast useless after about two weeks.
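That two-week limit follows from sensitive dependence on initial conditions, and the effect is easy to demonstrate with a toy chaotic system. The sketch below uses the logistic map, a standard stand-in for chaos (it is not a weather model): two starting states differing by one part in a million soon diverge completely, just as two weather forecasts started from slightly different observations do:

```python
def logistic_trajectory(x0, steps, r=3.9):
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial states that differ by a tiny "observation error".
a = logistic_trajectory(0.200000, 50)
b = logistic_trajectory(0.200001, 50)

# Early on the trajectories track each other; later they bear
# no resemblance, no matter how small the initial error was.
for t in (0, 5, 15, 25, 40):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.6f}")
```

Shrinking the initial error only postpones the divergence; it never eliminates it, which is why specific weather forecasts lose skill after roughly two weeks while statistical climate properties remain predictable.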

Since climate projections go out not days or weeks, but decades, modelers don’t even try to make specific forecasts. Instead, they look for changes in averages — in boundary conditions. They want to know if Septembers in 2050 will be generally warmer than Septembers in 2010, or whether extreme weather events — droughts, torrential rains, floods — will become more or less frequent. Indeed, that’s the definition of climate: the average conditions in a particular place.

“Because models are put together by different scientists using different codes, each one has its strengths and weaknesses,” says Dixon. “Sometimes one [modeling] group ends up with too much or too little sea ice but does very well with El Niño and precipitation in the continental U.S., for example,” while another nails the ice but falls down on sea-level rise. When you average many models together, however, the errors tend to cancel.
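Why averaging helps can be shown with a toy numerical experiment (illustrative only; the "models" here are just a noisy trend, not GCMs). Each fake model gets its own systematic bias and its own internal variability, and the multi-model mean comes out closer to the truth than a typical individual model:

```python
import math
import random

random.seed(42)

# Toy "truth": a steady warming trend over 100 years (degrees C).
years = range(100)
truth = [0.02 * t for t in years]

def toy_model(bias, noise_sd):
    """Stand-in for one climate model: the true trend plus a
    systematic bias and its own internal variability."""
    return [x + bias + random.gauss(0.0, noise_sd) for x in truth]

# An "ensemble" of 20 models, each with a different systematic error.
ensemble = [toy_model(bias=random.uniform(-0.3, 0.3), noise_sd=0.1)
            for _ in range(20)]

def rmse(series):
    """Root-mean-square error of a series against the toy truth."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(series, truth)) / len(truth))

# Multi-model mean: average the 20 runs year by year.
mmm = [sum(run[t] for run in ensemble) / len(ensemble) for t in years]

individual_errors = [rmse(run) for run in ensemble]
print(f"mean RMSE of individual models: {sum(individual_errors) / len(individual_errors):.3f}")
print(f"RMSE of the multi-model mean:   {rmse(mmm):.3f}")
```

Because the biases are independent, they partially cancel in the average; this is the statistical intuition behind the IPCC's use of a multi-model ensemble, though real model errors are not fully independent, so the cancellation is weaker in practice.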

Even when models reproduce the past reasonably well, however, it doesn’t guarantee that they’re equally reliable at projecting the future. That’s in part because some changes in climate are non-linear, which is to say that a small nudge can produce an unexpectedly large result. Again, ice sheets are a good example: If you look at melting alone, it’s pretty straightforward to calculate how much extra water will enter the sea for every degree of temperature rise. But because meltwater can percolate down to lubricate the undersides of glaciers, and because warmer oceans can lift the ends of glaciers up off the sea floor and remove a natural brake, the ice itself can end up getting dumped into the sea, unmelted. A relatively small temperature rise can thus lead to an unexpectedly large increase in sea level. That particular non-linearity was already suspected, if not fully understood, but there could be others lurking in the climate system.

Beyond that, says Dixon, if three-fourths of the models project that the Sahel (the area just south of the Sahara) will get wetter, for example, and a fourth says it will dry out, “there’s a tendency to go with the majority. But we can’t rule out without a whole lot of investigation whether the minority is doing something right. Maybe they have a better representation of rainfall patterns.” Even so, he says, if you have the vast majority coming up with similar results, and you go back to the underlying theory, and it makes physical sense, that tends to give you more confidence they’re right. The best confidence-builder of all, of course, is when a trend projected by models shows up in observations — warmer springs and earlier snowmelt in the Western U.S., for example, which not only makes physical sense in a warming world, but which is clearly happening.

And the models are constantly being improved. Climate scientists are already using modified versions to try to predict the actual timing of El Niños and La Niñas over the next few years. They’re just beginning to wrestle with periods of 10, 20, and even 30 years in the future, the so-called decadal time span, where both changing boundary conditions and natural variations within the boundaries have an influence on climate. “We’ve had a modest amount of skill with El Niños,” says Hurrell, “where 15-20 years ago we weren’t so skillful. That’s where we are with decadal predictions right now. It’s going to improve significantly.”

After two decades of evaluating climate models, Gleckler doesn’t want to downplay the shortcomings that remain in existing models. “But we have better observations as of late,” he says, “more people starting to focus on these things, and better funding. I think we have better prospects for making some real progress from now on.”



While this article is informative, especially considering what modelers look for in outputs from the computer in 'hindcasting', the basic premise present in the title is over-simplified and misleading.

What aspects of the model outputs are we assessing? If we are looking to support the idea of human-induced increasing of the greenhouse effect, it seems as though we can have some confidence from computer simulations of climate models. One of the author's sources makes that clear. Other than that aspect of climate, there is no clear evidence, either in this article or elsewhere, that model outputs are trustworthy.

It would be interesting if the author scoured the climate science literature from 20 to 25 years ago to see what types of predictions were made then and how well the climate models could produce the climate we've experienced in that time. The fact that this point is never brought up in these pieces makes me think such predictions were not good.

And while some will point to changes in our understanding since 20 years ago, I'll point out that the equations cited in the above piece have been the same for over 100 years. We have not made substantial strides in the theoretical side of this science to the point where we have a completely different set of equations modeling the balance of energy and conservation of momentum in the climate system. The first and second editions of Washington and Parkinson's 'Introduction to Three-Dimensional Climate Modeling' (spanning 20 years in publication) have almost the exact same chapter on the theory of climate models. So a direct comparison might be flawed on the hardware side of the issue, but not the actual model side.

If we are going to understand how to move forward in a policy discussion concerning the nature of the 'climate problem', then nuance is necessary. To make blanket statements about the level with which we can trust 'models' shows a lack of determination in painting the most complete picture of our situation currently. If we are going to most effectively tackle this situation, we ought to make sure we aren't cutting corners for the sake of narrative.

Posted by maxwell on 18 Jan 2011

RE: maxwell

fwiw, the relatively crude predictions models made in the 70's have largely panned out. One thing they didn't anticipate was such a large increase in human-injected CO2 into the atmosphere. Ironically, that is still one of the things we haven't been able to anticipate, as we are increasing CO2 emissions toward the high end of the A1 emissions scenarios.

The reason the basic equations haven't changed is that the basic principles of large-scale airflow, radiation and energy balance are well understood and described by mathematics. Where modelers are trying to make substantial strides is in the smaller, more chaotic processes like cloud formation and aerosol feedbacks. This is where more observations, better theory and more powerful computing are making models more accurate.

Posted by Sc0tt on 18 Jan 2011

Interesting but disappointing article.

None of these GCMs have been validated, therefore they are totally useless for policy making.

The statement that errors in one GCM cancel out errors in another GCM is not science, it's not even science fiction.

Posted by Baa Humbug on 19 Jan 2011

Let me get this right......

What's being said here is that, based on the limited knowledge we have of the variables and processes which influence climate, the models are trustworthy. When we understand the processes better, or even identify additional processes that influence climate, the models may be more trustworthy — or, on the other hand, they may be useless.

By the way, error + error = 2 errors

Posted by Invicta on 20 Jan 2011

Ask Lehman Brothers what happens when people put too much trust in computer models incorporating multiple non-linear equations.

Anybody who puts much faith in computer models requiring simultaneous solution of a host of non-linear equations attempting to describe an immensely complex, possibly chaotic system not fully understood by humans either knows nothing about computer modeling or is incredibly gullible.

The now infamous "HARRY_READ_ME.txt" file:

Help yourself:

Posted by John G. on 20 Jan 2011


'fwiw, the relatively crude predictions models made in the 70's have largely panned out to hold true.'

Again, since you're not specifically defining which predictions you are referring to, it is almost impossible to either substantiate or refute this claim. I will point you to my comment in which I agree that the human-induced greenhouse effect seems to be modeled well, but we don't know how 'trustworthy' GCMs are with respect to almost all other predictions. The predictions aren't made available to the interested public in any transparent way.

The continuation of broad, meaningless statements concerning the ability of models to correctly predict climate, lacking any specificity, is beginning to erode public trust in models in general. We already see comments here pointing to some kind of relation between climate models and economic models, even though, as you say, the basic physics that go into GCMs are very, very well understood. So much better understood than the closed-form equations that go into economic models. There really shouldn't be any comparison. But because climate scientists are making noise in the media about outcomes of their models that they refuse to substantiate with real data in a meaningful way, the public is leaning more and more toward the view that climate science is pseudo-science, as much as economic modeling is pseudo-science. More and more people believe catastrophic claims are 'overblown' or 'exaggerated'.

Because the narrative, highlighted by the above article, continues to lack any specific message with respect to the ability of these models to predict current climate systems, people don't trust science anymore. Does that seem like a good method for making policy decisions? The real life data says 'no'.

Posted by maxwell on 21 Jan 2011

There is one thing, not exactly a minor thing really in the great scheme of things, which may have completely escaped the attention of modelers.

And that is "when" we are.

The probability is quite high that we are at the end of the Holocene, the Holocene being the interglacial we live in and the one in which all of human civilization has occurred.

Five of the past 6 such post Mid Pleistocene interglacials have each lasted about half of a precession cycle. The precession cycle oscillates between 19 and 23 thousand years, and we are at the 23kyr point now, which makes half 11,500 years, or the present age of the Holocene.

Which is what makes such discussion quite relevant.

The ends of the post-MPT interglacials have been quite the wild climate ride. The most recent one, the Eemian interglacial, posted at least two strong thermal excursions, and quite rapidly; the final one scored a +6 meter rise in sea level above present, accompanying something like a 4-5C temperature excursion. Some fairly credible research suggests this may have been more like +20 meters. MIS-11, the Holsteinian interglacial, scored a +21.3 meter rise. One might be tempted to think of this as the natural climate "noise" within which we are challenged to recognize our anthropogenic "signal."

The 2007 IPCC AR4 report worst case AGW prognostication is 0.59 meters, which we will round to 0.6 meters for comparison.

In order to recognize a signal from background noise you need to at least equal the noise, and the AR4 0.6 meter "signal" comes in at just 10% of the low end of the last end interglacial's "noise".

How do the new, improved models do with this signal to noise ratio?

Posted by sentient on 21 Jan 2011

Who and what benchmarks the models? In physics the benchmark should be the experiment. For climate models the benchmark should be the observation. A proposal: Let the most important models predict the climate change 20 years from now, write their predictions down and then compare the outcome with the prediction 20 years later. To make it more diagnostically conclusive, choose different prediction categories: temperature, moisture, precipitation, occurrence of droughts, floods ...

Furthermore make use of the latest earth observation satellites and compare the model values against the values observed by satellite. This eliminates bad models.

Posted by Benchmarking models on 22 Jan 2011

ABOUT THE AUTHOR
Michael D. Lemonick is the senior writer at Climate Central, a nonpartisan organization whose mission is to communicate climate science to the public. Prior to joining Climate Central, he was a senior writer at Time magazine, where he covered science and the environment for more than 20 years. He has also written four books on astronomical topics and has taught science journalism at Princeton University for the past decade. In other articles for Yale Environment 360, Lemonick has written about the impacts of climate change in the U.S. and how satellite technology is used to track melting ice.


