18 Jan 2011: Analysis

Can We Trust Climate Models?
Increasingly, the Answer is ‘Yes’

Forecasting what the Earth’s climate might look like a century from now has long presented a huge challenge to climate scientists. But better understanding of the climate system, improved observations of the current climate, and rapidly improving computing power are slowly leading to more reliable methods.

by michael d. lemonick

A chart appears on page 45 of the 2007 Synthesis Report of the Intergovernmental Panel on Climate Change (IPCC), laying out projections for what global temperature and sea level should look like by the end of this century. Both are projected to rise, which will come as no surprise to anyone who’s been paying even the slightest attention to the headlines over the past decade or so. In both cases, however, the projections span a wide range of possibilities. The temperature, for example, is likely to rise anywhere from 1.8 C to 6.4 C (3.2 F to 11.5 F), while sea level could increase by as little as 7 inches or by as much as 23 — or anywhere in between.

It all sounds appallingly vague, and the fact that it’s all based on computer models probably doesn’t reassure the general public all that much. For many people, “model” is just another way of saying “not the real world.” In fairness, the wide range of possibilities in part reflects uncertainty about human behavior: The chart lays out different possible scenarios based on how much CO2 and other greenhouse gases humans might emit over the coming century. Whether the world adopts strict emissions controls or decides to ignore the climate problem entirely will make a huge difference to how much warming is likely to happen.

But even when you factor out the vagaries of politics and economics, and assume future emissions are known perfectly, the projections from climate models still cover a range of temperatures, sea levels, and other manifestations of climate change. And while there’s just one climate, there’s more than one way to simulate it. The IPCC’s numbers come from averaging nearly two dozen individual models produced by institutions including the National Center for Atmospheric Research (NCAR), the Geophysical Fluid Dynamics Laboratory (GFDL), the U.K.’s Met Office, and more. All of these models have features in common, but they’re constructed differently — and all of them leave some potentially important climate processes out entirely. So the question remains: How much can we really trust climate models to tell us about the future?

The answer, says Keith Dixon, a modeler at GFDL, is that it all depends on the questions you’re asking. “If you want to know ‘is climate change something that should be on my radar screen?’” he says, “then you end up with some very solid results. The climate is warming, and we can say why. Looking to the 21st century, all reasonable projections of what humans will be doing suggest that not only will the climate continue to warm, you have a good chance of it accelerating. Those are global-scale issues, and they’re very solid.”

The reason they’re solid is that, right from the emergence of the first crude versions back in the 1960s, models have been at their heart a series of equations that describe airflow, radiation, and energy balance as the Sun warms the Earth and the Earth sends some of that warmth back out into space. “It literally comes down to mathematics,” says Peter Gleckler, a research scientist with the Program for Climate Model Diagnosis and Intercomparison at Lawrence Livermore National Laboratory, and the basic equations are identical from one model to another. “Global climate models,” he says, echoing Dixon, “are designed to deal with large-scale flow of the atmosphere, and they do very well with that.”

The problem is that warming causes all sorts of changes — in the amount of ice in the Arctic, in the kind of vegetation on land, in ocean currents, in permafrost and cloud cover and more — that in turn can either cause more warming, or cool things off. To model the climate accurately, you have to account for all of these factors. Unfortunately, says James Hurrell, who led the NCAR’s most recent effort to upgrade its own climate model, you can’t. “Sometimes you don’t include processes simply because you don’t understand them well enough,” he says. “Sometimes it’s because they haven’t even been discovered yet.”

A good example of the former, says Dixon, is the global carbon cycle — the complex interchange of carbon between oceans, atmosphere, and biosphere. Since atmospheric carbon dioxide is driving climate change, it’s obviously important, but until about 15 years ago, it was too poorly understood to be included in the models. “Now,” says Dixon, “we’re including it — we’re simulating life, not just physics.” Equations representing ocean dynamics and sea ice also have been added to climate models as scientists have understood these crucial processes better.

Other important phenomena, such as changes in clouds, are still too complex to model accurately. “We can’t simulate individual cumulus clouds,” says Dixon, because they’re much smaller than the 200-kilometer grid boxes that make up climate models’ representation of the world. The same applies to aerosols — tiny particles, including natural dust and manmade soot — that float around in the atmosphere and can cool or warm the planet, depending on their size and composition.

But there’s no one right way to model these small-scale phenomena. “We don’t have the observations and don’t have the theory,” says Gleckler. The best they can do on this point is to simulate the net effect of all the clouds or aerosols in a grid box, a process known as “parameterization.” Different modeling centers go about it in different ways, which, unsurprisingly, leads to varying results. “It’s not a science for which everything is known, by definition,” says Gleckler. “Many groups around the world are pursuing their own research pathways to develop improved models.” If the past is any guide, modelers will be able to abandon parameterizations one by one, replacing them with mathematical representations of real physical processes.
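The idea behind a parameterization can be sketched in a few lines of code. The scheme below is a toy, loosely in the spirit of widely used relative-humidity cloud schemes, and not any modeling center's actual code: it boils the unresolvable physics of individual clouds down to a single grid-box number.

```python
import math

def cloud_fraction(rh, rh_crit=0.8):
    """Toy diagnostic cloud scheme: estimate the cloud cover of an
    entire grid box from its mean relative humidity. Below a critical
    humidity the box is assumed cloud-free; above it, the fraction
    rises smoothly toward 1 at saturation."""
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))

# The scheme never resolves individual cumulus clouds; it only
# supplies the grid-box average that the rest of the model needs.
for rh in (0.5, 0.85, 0.95, 1.0):
    print(f"RH={rh:.2f} -> cloud fraction {cloud_fraction(rh):.2f}")
```

A real scheme would also account for temperature, convection, and aerosols; the point is only that one number stands in for physics the grid cannot resolve.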

Sometimes, modelers don’t understand a process well enough to include it at all, even if they know it could be important. One example is a caveat that appears on that 2007 IPCC chart. The projected range of sea-level rise, it warns, explicitly excludes “future rapid dynamical changes in ice flow.” In other words, if land-based ice in Greenland and Antarctica starts moving more quickly toward the sea than it has in the past — something glaciologists knew was possible, but hadn’t yet been documented — these estimates would be incorrect. And sure enough, satellites have now detected such movements. “The last generation of NCAR models,” says Hurrell, “had no ice sheet dynamics at all. The model we just released last summer does, but the representation is relatively crude. In a year or two, we’ll have a more sophisticated update.”

Sophistication only counts, however, if the models end up doing a reasonable job of representing the real world. It’s not especially useful to wait until 2100 to find out, so modelers do the next best thing: They perform “hindcasts,” which are the inverse of forecasts. “We start the models from the middle of the 1800s,” says Dixon, “and let them run through the present.” If a model reproduces the overall characteristics of the real-world climate record reasonably well, that’s a good sign.
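The spirit of a hindcast comparison can be sketched as follows. The decadal anomaly numbers here are hypothetical stand-ins, and real evaluations use far richer diagnostics than these two:

```python
import math

def linear_trend(series):
    """Least-squares slope of a series, per time step."""
    n = len(series)
    t_mean = (n - 1) / 2
    x_mean = sum(series) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def hindcast_skill(simulated, observed):
    """Two illustrative hindcast metrics: root-mean-square error of the
    anomalies, and the gap between simulated and observed warming
    trends. A good hindcast matches the overall character of the
    record, not its year-by-year wiggles."""
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(simulated))
    return rmse, linear_trend(simulated) - linear_trend(observed)

# Hypothetical decadal-mean temperature anomalies (degrees C),
# 1850s through 2000s -- invented for illustration only:
obs = [-0.3, -0.25, -0.3, -0.2, -0.25, -0.2, -0.1, -0.15, 0.0, 0.0,
       -0.05, 0.1, 0.0, 0.1, 0.3, 0.45]
sim = [-0.25, -0.3, -0.25, -0.2, -0.2, -0.25, -0.15, -0.1, 0.05, -0.05,
       0.0, 0.05, 0.05, 0.15, 0.25, 0.5]
rmse, trend_err = hindcast_skill(sim, obs)
print(f"RMSE {rmse:.3f} C, trend error {trend_err:.4f} C per decade")
```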

What the models don’t try to do is to match the timing of short-term climate variations we’ve experienced. A model might produce a Dust Bowl like that of the 1930s, but in the model it might happen in the 1950s. It should produce the ups and downs of El Niño and La Niña in the Pacific with about the right frequency and intensity, but not necessarily at the same times as they happen in the real Pacific. Models should show slowdowns and accelerations in the overall warming trend, the result of natural fluctuations, at about the rate they happen in the real climate. But they won’t necessarily show the specific flattening of global warming we’ve observed during the past decade — a temporary slowdown that had skeptics declaring the end of climate change.
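That behavior is easy to reproduce with a caricature of a model run, using made-up numbers rather than any real model's output: a steady warming trend plus persistent random "weather" noise routinely produces decade-long flat stretches even though the underlying trend never pauses.

```python
import random

random.seed(0)

def simulated_run(years=100, trend=0.02, noise=0.1, persistence=0.6):
    """A caricature of one model run: steady warming of 0.2 C per
    decade plus red (persistent) natural noise, nothing more."""
    temps, wiggle = [], 0.0
    for year in range(years):
        wiggle = persistence * wiggle + random.gauss(0.0, noise)
        temps.append(trend * year + wiggle)
    return temps

def flat_decades(temps):
    """Count 10-year windows over which temperature did not rise."""
    return sum(1 for start in range(len(temps) - 10)
               if temps[start + 10] <= temps[start])

run = simulated_run()
print(f"{flat_decades(run)} of 90 ten-year windows show no warming, "
      f"despite a steady underlying trend")
```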

It’s also important to realize that climate represents what modelers call a boundary condition. Blizzards in the Sahara are outside the boundaries of our current climate, and so are stands of palm trees in Greenland next year. But within those boundaries, things can bounce around a great deal from year to year or decade to decade. What modelers aim to produce is a virtual climate that resembles the real one in a statistical sense, with El Niños, say, appearing about as often as they do in reality, or hundred-year storms coming once every hundred years or so.

This is one essential difference between weather forecasting and climate projection. Both use computer models, and in some cases, even the very same models. But weather forecasts start out with the observed state of the atmosphere and oceans at this very moment, then project it forward. It’s not useful for our day-to-day lives to know that September has this average high or that average low; we want to know what the actual temperature will be tomorrow, and the day after, and next week. Because the atmosphere is chaotic, anything less than perfect knowledge of today’s conditions (which is impossible, given that observations are always imperfect) will make the forecast useless after about two weeks.
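That two-week limit is a consequence of chaos, which the classic Lorenz-63 toy system demonstrates in a few lines: two "forecasts" whose starting points differ by one part in a hundred million end up in completely different states. (The crude Euler integration below is for illustration, not how real models step forward in time.)

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler time step of the Lorenz-63 system, the classic toy
    model of atmospheric chaos."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Two runs started from almost identical initial conditions:
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # "observation error" of one part in 10^8
for step in range(3001):
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t={step * 0.01:5.1f}  separation {gap:.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation grows by many orders of magnitude, which is exactly why point forecasts fail while statistical (boundary-condition) properties remain predictable.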

Since climate projections go out not days or weeks, but decades, modelers don’t even try to make specific forecasts. Instead, they look for changes in averages — in boundary conditions. They want to know if Septembers in 2050 will be generally warmer than Septembers in 2010, or whether extreme weather events — droughts, torrential rains, floods — will become more or less frequent. Indeed, that’s the definition of climate: the average conditions in a particular place.

“Because models are put together by different scientists using different codes, each one has its strengths and weaknesses,” says Dixon. “Sometimes one [modeling] group ends up with too much or too little sea ice but does very well with El Niño and precipitation in the continental U.S., for example,” while another nails the ice but falls down on sea-level rise. When you average many models together, however, the errors tend to cancel.
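A back-of-the-envelope simulation shows why averaging helps, at least when model errors are independent; real models share some errors, so the cancellation in practice is weaker than this sketch suggests. All numbers here are invented for illustration:

```python
import random

random.seed(42)

def ensemble_mean(n_models=24, truth=3.0, bias_sd=0.6):
    """Each synthetic "model" projects the true warming plus its own
    independent bias. Return the ensemble-mean error and the typical
    single-model error; averaging shrinks the former."""
    projections = [truth + random.gauss(0.0, bias_sd) for _ in range(n_models)]
    mean = sum(projections) / n_models
    typical_single_error = sum(abs(p - truth) for p in projections) / n_models
    return abs(mean - truth), typical_single_error

mean_error, single_error = ensemble_mean()
print(f"ensemble-mean error {mean_error:.2f} C vs "
      f"typical single-model error {single_error:.2f} C")
```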

Even when models reproduce the past reasonably well, however, it doesn’t guarantee that they’re equally reliable at projecting the future. That’s in part because some changes in climate are non-linear, which is to say that a small nudge can produce an unexpectedly large result. Again, ice sheets are a good example: If you look at melting alone, it’s pretty straightforward to calculate how much extra water will enter the sea for every degree of temperature rise. But because meltwater can percolate down to lubricate the undersides of glaciers, and because warmer oceans can lift the ends of glaciers up off the sea floor and remove a natural brake, the ice itself can end up getting dumped into the sea, unmelted. A relatively small temperature rise can thus lead to an unexpectedly large increase in sea level. That particular non-linearity was already suspected, if not fully understood, but there could be others lurking in the climate system.

Beyond that, says Dixon, if three-fourths of the models project that the Sahel (the area just south of the Sahara) will get wetter, for example, and a fourth says it will dry out, “there’s a tendency to go with the majority. But we can’t rule out without a whole lot of investigation whether the minority is doing something right. Maybe they have a better representation of rainfall patterns.” Even so, he says, if you have the vast majority coming up with similar results, and you go back to the underlying theory, and it makes physical sense, that tends to give you more confidence they’re right. The best confidence-builder of all, of course, is when a trend projected by models shows up in observations — warmer springs and earlier snowmelt in the Western U.S., for example, which not only makes physical sense in a warming world, but which is clearly happening.

And the models are constantly being improved. Climate scientists are already using modified versions to try to predict the actual timing of El Niños and La Niñas over the next few years. They’re just beginning to wrestle with periods of 10, 20 and even 30 years in the future, the so-called decadal time span where both changing boundary conditions and natural variations within the boundaries have an influence on climate. “We’ve had a modest amount of skill with El Niños,” says Hurrell, “where 15-20 years ago we weren’t so skillful. That’s where we are with decadal predictions right now. It’s going to improve significantly.”

After two decades of evaluating climate models, Gleckler doesn’t want to downplay the shortcomings that remain in existing models. “But we have better observations as of late,” he says, “more people starting to focus on these things, and better funding. I think we have better prospects for making some real progress from now on.”

POSTED ON 18 Jan 2011 IN Business & Innovation Climate Energy Science & Technology North America

COMMENTS


While this article is informative, especially regarding what modelers look for in outputs from the computer in 'hindcasting', the basic premise presented in the title is oversimplified and misleading.

What aspects of the model outputs are we assessing? If we are looking to support the idea of a human-induced strengthening of the greenhouse effect, it seems as though we can have some confidence from computer simulations of climate models. One of the author's sources makes that clear. Other than that aspect of climate, there is no clear evidence, either in this article or elsewhere, that model outputs are trustworthy.

It would be interesting if the author scoured the climate science literature from 20 to 25 years ago to see what types of predictions were made then and how well the climate models could produce the climate we've experienced in that time. The fact that this point is never brought up in these pieces makes me think such predictions were not good.

And while some will point to changes in our understanding since 20 years ago, I'll point out that the equations cited in the above piece have been the same for over 100 years. We have not made substantial strides in the theoretical side of this science to the point where we have a completely different set of equations modeling the balance of energy and conservation of momentum in the climate system. The first and second editions of Washington and Parkinson's 'Introduction to Three-Dimensional Climate Modeling' (spanning 20 years in publication) have almost the exact same chapter on the theory of climate models. So a direct comparison might be flawed on the hardware side of the issue, but not the actual model side.

If we are going to understand how to move forward in a policy discussion concerning the nature of the 'climate problem', then nuance is necessary. To make blanket statements about the level with which we can trust 'models' shows a lack of determination in painting the most complete picture of our situation currently. If we are going to most effectively tackle this situation, we ought to make sure we aren't cutting corners for the sake of narrative.

Posted by maxwell on 18 Jan 2011


RE: maxwell

fwiw, the relatively crude predictions models made in the 70's have largely held true. One thing they didn't anticipate was such a large increase of human-injected CO2 into the atmosphere. Ironically, that is still one of the things we haven't been able to anticipate, as we are increasing CO2 emissions toward the high end of the A1 emissions scenarios.

The reason the basic equations haven't changed is because the basic principles of large scale airflow, radiation and energy balance are well understood and described by mathematics. Where they're trying to make substantial strides are in the smaller more chaotic processes like cloud formation and aerosol feedbacks. This is where more observations, better theory and more powerful computing are making models more accurate.

Posted by Sc0tt on 18 Jan 2011


Interesting but disappointing article.

None of these GCMs have been validated, therefore they are totally useless for policy making.

The statement that errors in one GCM cancel out errors in another GCM is not science, it's not even science fiction.

Posted by Baa Humbug on 19 Jan 2011


Let me get this right......

What's being said here is that, based on the limited knowledge we have of the variables and processes which influence climate, the models are trustworthy. When we understand the processes better, or even identify additional processes that influence climate, the models may become more trustworthy; on the other hand, they may turn out to be useless.

By the way, error + error = 2 errors

Posted by Invicta on 20 Jan 2011


Ask Lehman Brothers what happens when people put too much trust in computer models incorporating multiple non-linear equations.

Anybody who puts much faith in computer models requiring simultaneous solution of a host of non-linear equations attempting to describe an immensely complex, possibly chaotic system not fully understood by humans either knows nothing about computer modeling or is incredibly gullible.

The now infamous "HARRY_READ_ME.txt" file:
http://www.anenglishmanscastle.com/HARRY_READ_ME.txt

Help yourself:
http://www.giss.nasa.gov/tools/modelE/modelEsrc/


Posted by John G. on 20 Jan 2011


Sc0tt,

'fwiw, the relatively crude predictions models made in the 70's have largely panned out to hold true.'

Again, since you're not specifically defining which predictions you are referring to, it is almost impossible to either substantiate or refute this claim. I will point you to my comment in which I agree that the human-induced greenhouse effect seems to be modeled well, but we don't know how 'trustworthy' GCMs are with respect to almost all other predictions. The predictions aren't made available to the interested public in any transparent way.

The continuation of broad, meaningless statements concerning the ability of models to correctly predict climate, lacking any specificity, is beginning to erode public trust in models in general. We already see comments here pointing to some kind of relation between climate models and economic models, even though, as you say, the basic physics that go into GCMs are very, very well understood, and far better understood than the closed-form equations that go into economic models. There really shouldn't be any comparison. But because climate scientists are making noise in the media about outcomes of their models that they refuse to substantiate with real data in a meaningful way, the public is leaning more and more toward the view that climate science is pseudo-science, just as economic modeling is. More and more people believe catastrophic claims are 'overblown' or 'exaggerated'.

Because the narrative, highlighted by the above article, continues to lack any specific message with respect to the ability of these models to predict current climate systems, people don't trust science anymore. Does that seem like a good method for making policy decisions? The real life data says 'no'.

Posted by maxwell on 21 Jan 2011


There is one thing, not exactly a minor thing really in the great scheme of things, which may have completely escaped the attention of modelers.

And that is "when" we are.

The probability is quite high that we are at the end of the Holocene, the Holocene being the interglacial we live in and the one in which all of human civilization has occurred.

Five of the past 6 such post Mid Pleistocene interglacials have each lasted about half of a precession cycle. The precession cycle oscillates between 19 and 23 thousand years, and we are at the 23kyr point now, which makes half 11,500 years, or the present age of the Holocene.

Which is what makes such discussion quite relevant.

The ends of the post-MPT interglacials have been quite the wild climate ride. The most recent one, the Eemian interglacial, posted at least two strong thermal excursions, and quite rapidly, the final one scoring a +6 meter rise in sea level above present, accompanying something like a 4-5C temperature excursion. Some fairly credible research suggests this may have been more like +20 meters. MIS-11, the Holsteinian interglacial, scored a +21.3 meter rise. One might be tempted to think of this as the natural climate "noise" within which we are challenged to recognize our anthropogenic "signal".

The 2007 IPCC AR4 report worst case AGW prognostication is 0.59 meters, which we will round to 0.6 meters for comparison.

In order to recognize a signal from background noise you need to at least equal the noise, and the AR4 0.6 meter "signal" comes in at just 10% of the low end of the last end interglacial's "noise".

How do the new, improved models do with this signal to noise ratio?

Posted by sentient on 21 Jan 2011


Who and what benchmarks the models? In physics the benchmark should be the experiment. For climate models the benchmark should be the observation. A proposal: Let the most important models predict the climate change 20 years from now, write their predictions down, and then compare the outcome with the prediction 20 years later. To make it more diagnostically conclusive, choose different prediction categories: temperature, moisture, precipitation, occurrence of droughts, floods ...

Furthermore make use of the latest earth observation satellites and compare the model values against the values observed by satellite. This eliminates bad models.

Posted by Benchmarking models on 22 Jan 2011


ABOUT THE AUTHOR
Michael D. Lemonick is the senior writer at Climate Central, a nonpartisan organization whose mission is to communicate climate science to the public. Prior to joining Climate Central, he was a senior writer at Time magazine, where he covered science and the environment for more than 20 years. He has also written four books on astronomical topics and has taught science journalism at Princeton University for the past decade. In other articles for Yale Environment 360, Lemonick has written about the impacts of climate change in the U.S. and how satellite technology is used to track melting ice.