Snow + Ice

NCAR develops method to predict sea ice changes years in advance

BOULDER – Climate scientists at the National Center for Atmospheric Research (NCAR) present evidence in a new study that they can predict whether the Arctic sea ice that forms in the winter will grow, shrink, or hold its own over the next several years. The team has found that changes in North Atlantic ocean circulation could allow overall winter sea ice extent to remain steady in the near future, with continued loss in some regions balanced by possible growth in others, including the Barents Sea.

"We know that over the long term, winter sea ice will continue to retreat," said NCAR scientist Stephen Yeager, lead author of the new study, which appears in the journal Geophysical Research Letters. "But we are predicting that the rate will taper off for several years in the future before resuming. We are not implying some kind of recovery from the effects of human-caused global warming; it's really just a slowdown in winter sea ice loss."

The research was funded largely by the National Science Foundation, NCAR's sponsor, with additional support from the National Oceanic and Atmospheric Administration and the U.S. Department of Energy.

The researchers tested how well they were able to predict winter sea ice changes by "hindcasting" past decades and then comparing their retrospective predictions to observations of what really happened.

(Image: How the model stacked up to real life for the period 1997–2007. ©UCAR. This image is freely available for media & nonprofit use.)

Yeager is among a growing number of scientists trying to predict how the climate may change over a few years to a few decades, instead of the more typical span of many decades or even centuries. This type of "decadal prediction" provides information over a timeframe that is useful for policy makers, regional stakeholders, and others.
Decadal prediction relies on the idea that some natural variations in the climate system, such as changes in the strength of ocean currents, unfold predictably over several years. At times, their impacts can overwhelm the general warming trend caused by greenhouse gases released into the atmosphere by humans. Yeager's past work in this area has focused on decadal prediction of sea surface temperatures. A number of recent studies linking changes in the North Atlantic ocean circulation to sea ice extent led Yeager to think that it would also be possible to make decadal predictions for Arctic winter sea ice cover using the NCAR-based Community Earth System Model.

Linking ocean circulation and sea ice

The key is accurately representing the Atlantic Meridional Overturning Circulation (AMOC) in the model. AMOC sweeps warm surface waters from the tropics toward the North Atlantic, where they cool and then sink before making a return south in deep ocean currents.

AMOC can vary in intensity. When it's strong, more warm water is carried farther toward the North Atlantic and Arctic oceans, accelerating sea ice loss. When it's weak, the warm water largely stays farther south, and its effects on sea ice are reversed. The variations in AMOC's vigor, from weak to strong or vice versa, occur over multiple years to decades, giving scientists some ability to predict in advance how it will affect winter sea ice in particular.

(Image: For the decade of 2007‑2017, the research team predicts that there may be some growth of winter sea ice in the Arctic Ocean, particularly on the Atlantic side, where scientists have the most confidence in the model's ability. The image also shows possible sea ice loss in the North Pacific. ©UCAR. This image is freely available for media & nonprofit use.)

AMOC now appears to be weakening.
Yeager and his co-authors, NCAR scientists Alicia Karspeck and Gokhan Danabasoglu, found in their new study that this change in the ocean is likely to be enough to temporarily mask the impacts of human-caused climate change and stall the associated downward trend in winter sea ice extent in the Arctic, especially on the Atlantic side, where AMOC has the most influence.

The limits of a short satellite record

The amount of sea ice covering the Arctic typically grows to its maximum in late February, after the long, dark winter. The sea ice minimum typically occurs at the end of the summer season, in late September. The new study addresses only winter sea ice, which is less vulnerable than summer ice to variations in weather activity that cannot be predicted years in advance, such as storms capable of breaking up the ice crust.

Despite their success incorporating AMOC conditions into winter sea ice "hindcasts," the scientists are cautious about their predictions of future conditions. Because satellite images of sea ice extend back only to 1979, the scientists had a relatively short data record for verifying decadal-scale predictions against actual conditions. Additionally, AMOC itself has been measured directly only since 2004, though observations of other variables that are thought to change in tandem with AMOC, such as sea surface height and ocean density in the Labrador Sea, as well as sea surface temperature in the far North Atlantic, go back much farther.

"The sea ice record is so short that it's difficult to use statistics alone to build confidence in our predictions," Yeager said. "Much of our confidence stems from the fact that our model does well at predicting slow changes in ocean heat transport and sea surface temperature in the subpolar North Atlantic, and these appear to impact the rate of sea ice loss. So, we think that we understand the mechanisms underpinning our sea ice prediction skill."
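As a rough illustration of the "hindcast" verification idea described above, prediction skill is often summarized by correlating retrospective predictions with what was actually observed over the same years. The sketch below uses invented numbers, not data from the study:

```python
# Hedged sketch: quantify hindcast skill as the correlation between
# retrospective predictions and observations. All values are invented
# for illustration; they are not from the Yeager et al. study.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical winter sea ice extent anomalies (million sq km), one per year
observed = [0.3, 0.1, -0.2, -0.1, -0.4, -0.3, -0.6, -0.5, -0.9, -0.8, -1.1]
hindcast = [0.2, 0.2, -0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.8, -0.9, -1.0]

print(f"hindcast skill (r) = {pearson_r(observed, hindcast):.2f}")
```

A correlation near 1 would indicate the hindcasts track the observed ups and downs well; the article notes the short satellite record makes such statistics alone insufficient to build confidence.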
About the article

Title: Predicted slow-down in the rate of Atlantic sea ice loss
Authors: Stephen G. Yeager, Alicia Karspeck, and Gokhan Danabasoglu
Publication: Geophysical Research Letters

Writer: Laura Snider, Senior Science Writer and Public Information Officer

Shrinking sea ice: Modeling the Arctic's future

September 16, 2015 | The National Snow and Ice Data Center announced yesterday that sea ice in the Arctic had dwindled to the fourth lowest extent since satellites began capturing images of the area in 1979. All nine of the lowest sea ice extents have occurred in the last nine years, according to the NSIDC. News of another lean year for end-of-summer sea ice has people wondering anew just how long we have left before Arctic Septembers become ice free.

NCAR climate modelers Marika Holland and David Bailey took a crack at answering that question a few years ago, and their results are illustrated in the animation above. Created by Tim Scheitlin of NCAR's Visualization Lab, the video shows that all the ice could disappear in some Septembers as early as mid-century if human-caused climate change continues unabated.

The degree to which sea ice melts during any particular sun-filled Arctic summer depends on a range of factors, from the temperature of the ocean surface to how many storms hit the region to how much ice accumulated during the preceding winter. Regardless of how much melt eventually occurs, the process typically wraps up in mid-to-late September, just as the Sun prepares to set on the North Pole for the first time since rising the previous March. With the return of darkness, the ice begins to regrow.

In the last several decades, global climate models have become increasingly sophisticated in the way they handle these changes in ice from season to season and year to year. The NCAR-based Community Climate System Model (now the Community Earth System Model, or CESM) was one of the first to take the varying nature of ice thickness into account, as well as the way sea ice moves across the ocean surface, for example.
And though it is now recognized as one of the best in the world for simulating sea ice, NCAR scientists and their collaborators are continuing to refine the model by adding additional detail, such as how pools of melt water or layers of snow on the ice surface may affect the rate of melting.

Global climate models like CESM are typically used to show trends over multiple decades, as in the visualization above. But researchers at NCAR and elsewhere are now working to see if the models can be used to make shorter-term decadal predictions of sea ice changes, a timeline that is important to policy makers, residents of the region, and others.

More about NCAR's sea ice research:
• Arctic ice melt could pause in near future, then resume again
• Study may answer longstanding questions about little ice age
• Arctic ice more vulnerable to sunny weather, new study shows
• Permafrost threatened by rapid retreat of Arctic sea ice, NCAR study finds

Writer/Contact: Laura Snider

Snowfall measurement: a flaky history

Matt Kelsch • January 28, 2015 | As this week’s blizzard rumbled toward the U.S. Northeast, many media outlets posted the top-10 snow events for major cities. An unusual number of snowfalls on those top-10 lists have occurred within the last 20 years, even in cities with records going back to the 1800s. Why is that? Could it be climate change? Are other factors involved?

(Photo: Matt Kelsch has taken 6-hourly snow readings at the official weather station for Boulder, Colorado, many times during more than 25 years of volunteer work as the NOAA/National Weather Service cooperative climate observer for Boulder. Photo courtesy Matt Kelsch, UCAR.)

As a hydrometeorological instructor in UCAR’s COMET program and a weather observer for the National Weather Service, I am keenly interested in weather trends. In this case, climate change is an important factor to explore, since we know that the heaviest precipitation events have intensified in many parts of the world (see related story: Torrents and droughts and twisters - oh my!). But when we turn to snowstorms in the Northeast, or elsewhere in the U.S., there is an additional factor at work when comparing modern numbers with historical ones. Quite simply, our measuring techniques have changed, and we are not necessarily comparing apples to apples. In fact, the apparent trend toward bigger snowfalls is at least partially the result of new, and more accurate, ways of measuring snowfall totals. Climate studies carefully select a subset of stations with consistent snow records, or avoid the snowfall variable altogether.

Official measurement of snowfall these days uses a flat, usually white, surface called a snowboard (which pre-dates the popular winter sport equipment of the same name). The snowboard depth measurement is ideally done every 6 hours, but not more frequently, and the snow is cleared after each measurement. At the end of the snowfall, all of the measurements are added up for the storm total.
NOAA’s cooperative climate observers and thousands of volunteers with the Community Collaborative Rain, Hail and Snow Network (CoCoRaHS), a nationwide observer network, are trained in this method. This practice first became standard at airports starting in the 1950s, and only later at other official climate reporting sites, such as Manhattan’s Central Park, where 6-hourly measurements did not become routine until the 1990s.

Earlier in our weather history, the standard practice was to record snowfall amounts less frequently, such as every 12 or 24 hours, or even to take just one measurement of depth on the ground at the end of the storm. You might think that one or two measurements per day should add up to pretty much the same as measurements taken every 6 hours during the storm. It’s a logical assumption, but you would be mistaken. Snow on the ground gets compacted as additional snow falls. Therefore, multiple measurements during a storm typically result in a higher total than if snowfall is derived from just one or two measurements per day.

That can make quite a significant difference. It turns out that it’s not uncommon for the snow on the ground at the end of a storm to be 15 to 20 percent less than the total that would be derived from multiple snowboard measurements. As the cooperative climate observer for Boulder, Colorado, I examined the 15 biggest snowfalls of the last two decades, all measured at the NOAA campus in Boulder. The sum of the snowboard measurements averaged 17 percent greater than the maximum depth on the ground at the end of the storm. For a 20-inch snowfall, that would be a boost of 3.4 inches—enough to dethrone many close rivals on the top-10 snowstorm list that were not necessarily lesser storms!

Another common practice at cooperative observing stations prior to 1950 did not involve measuring snow at all: observers instead took the liquid derived from the snow and applied a 10:1 ratio (every inch of liquid equals ten inches of snow).
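The arithmetic behind both measurement effects is simple enough to sketch. The numbers below are illustrative only, mirroring the roughly 17 percent figure and the 10:1 ratio described above:

```python
# Illustrative arithmetic only; the percentage and ratios come from the
# article's examples, not from any official dataset.

# Effect 1: frequent snowboard readings vs. a single end-of-storm depth.
# The sum of 6-hourly measurements averaged ~17% more than the final depth.
depth_on_ground = 20.0                  # inches, one end-of-storm reading
board_total = depth_on_ground * 1.17    # what 6-hourly clearing would yield
print(f"6-hourly total: {board_total:.1f} in "
      f"(+{board_total - depth_on_ground:.1f} in over a single reading)")

# Effect 2: the pre-1950 practice of converting melted liquid with a fixed
# 10:1 ratio understates snow wherever the true ratio is higher (say, 13:1).
liquid = 2.0                            # inches of melted water (hypothetical)
print(f"10:1 estimate: {liquid * 10:.0f} in; 13:1 estimate: {liquid * 13:.0f} in")
```

Either effect alone can shift a storm by several inches, which is enough to reorder a city's top-10 list.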
This is no longer the official practice and has become increasingly less common since 1950. But it too introduces a potential low bias in historic snowfalls, because in most parts of the country (and in the recent blizzard in the Northeast) one inch of liquid produces more than 10 inches of snow. This means that many of the storms from the 1980s or earlier would probably appear in the record as bigger storms if the observers had used the currently accepted methodology.

Now, for those of you northeasterners with aching backs from shoveling, I am not saying that your recent storm wasn’t big in places like Boston, Portland, or Long Island. But I am saying that some of the past greats—the February Blizzard of 1978, the Knickerbocker storm of January 1922, and the great Blizzard of March 1888—are probably underestimated. So keep in mind when viewing those lists of snowy greats: the older ones are not directly comparable with those in recent decades. It’s not as bad as comparing apples to oranges, but it may be like comparing apples to crabapples.

Going forward, we can look for increasingly accurate snow totals. Researchers at NCAR and other organizations are studying new approaches for measuring snow more accurately (see related story: Snowfall, inch by inch). But we can’t apply those techniques to the past. For now, all we can say is that snowfall measurements taken more than about 20 or 30 years ago may be unsuitable for detecting trends, and perhaps snowfall records from the past should not be melting away quite as quickly as it appears.

Update • January 29, 2015 | Thanks to thoughtful feedback from several colleagues, this article has been updated. Paragraph 3 now includes a description of how climate studies handle the data inconsistencies. Paragraph 9 was added to describe the pre-1950s practice, no longer in wide use, of recording liquid water content only, and not snow depth.

Matt Kelsch is a hydrometeorologist in UCAR's COMET Program.
He specializes in weather and climate events involving water, such as floods, droughts, rain, hail, or snow. Kelsch develops and delivers educational materials designed for both domestic and international groups including National Weather Service forecasters, the military, the World Meteorological Organization, university students and faculty, government agencies, and private industry.      

Emperor penguins on the decline?

July 30, 2014 | Emperor penguins, the large, charismatic birds known for their frequent film and TV appearances, are in danger. A collaborative research project is drawing attention to the impending plight of the emperors. By 2100, according to a new study, their numbers will have fallen by around 19% and will continue to decline, qualifying the species for endangered status.

Emperor penguin colonies ring the entire continent of Antarctica. Of the 45 known colonies, only one has been extensively studied for decades, and most of the others have never been visited by humans, nor are they likely to be. Emperors live on sea ice off the coast of the continent, and the amount of ice plays a major role in determining the health of a colony. Too much ice and the penguins have a long, debilitating walk to the sea and food; too little ice and the colony is more exposed and vulnerable to predation.

(Photo: A colony of emperor penguins at Australia's Snow Hill Island, October 2009. Photo by Jenny Varley, Wikimedia Commons.)

Stéphanie Jenouvrier (Woods Hole Oceanographic Institution) is an expert on penguin life, and she wanted to project the size of emperor populations into the future as Earth’s climate warms. The problem, she says, was her “limited background on climate science.” Meanwhile, at NCAR, senior scientist Marika Holland is a climate scientist with a longstanding specialty in modeling sea ice changes, although she has never been to Antarctica and has never seen an emperor penguin. Aware of Holland’s previous work, Jenouvrier contacted her and Julienne Stroeve at the University of Colorado’s Cooperative Institute for Research in Environmental Sciences. The three of them collaborated on preliminary studies published in 2009.
Jenouvrier received a fellowship from CIRES and worked in Boulder for almost a year, collaborating closely with Holland, Stroeve, Mark Serreze at the CIRES National Snow and Ice Data Center, and other scientists on a follow-up study, published in 2012, and on their most extensive update, recently published in Nature Climate Change. The biologists used long-term data from the one well-studied emperor colony, off the coast of Terre Adélie, to estimate the relationship between sea ice and rates of breeding success and survival of chicks. They used the record of penguin population and sea ice concentrations at Terre Adélie to estimate vital rates (births/deaths) and population dynamics at each colony.

Learning each other’s languages: biology and climate

The next challenge was to project sea ice changes over the rest of the 21st century and relate them to the health of each penguin community. Sea ice off Antarctica does not behave uniformly: although the total area of sea ice around the continent has increased somewhat in recent years, the trends vary by region during the course of a year and over longer periods. Sea ice must therefore be studied in the relatively small segments that host individual colonies in order to assess the viability of penguin populations.

At first, says Holland, the biologists and climate modelers spoke two different languages, which was “a bit of a barrier.” The frequent interchanges during Jenouvrier’s year in Boulder helped bridge that gap, she adds. The sea ice scientists began with a group of 20 or so climate models and settled on a widely used midrange emissions scenario produced for the Intergovernmental Panel on Climate Change, called SRES A1B. Once the penguin population models and sea ice change models were set, climate projections were fed into the penguin population models. Due to inherent uncertainties in the models, Jenouvrier ran tens of thousands of computer simulations to achieve the results that the team published.
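The Monte Carlo strategy described above can be sketched in miniature. This toy example is purely hypothetical: the ice-to-growth relationship and all numbers are invented for illustration, whereas the actual study used detailed demographic (vital-rate) models driven by projected sea ice concentrations:

```python
# Toy Monte Carlo sketch of feeding uncertain climate projections into a
# population model. The coupling constant and anomaly distribution below
# are invented assumptions, not values from the study.
import random

def project_population(n0, years, rng):
    """Multiplicative growth loosely tied to a random sea ice anomaly."""
    pop = n0
    for _ in range(years):
        ice_anomaly = rng.gauss(-0.2, 0.3)   # simulated sea ice anomaly
        pop *= 1.0 + 0.05 * ice_anomaly      # invented ice-to-growth link
    return pop

# Run many simulations to characterize the spread of outcomes, as the
# study did (at far greater scale and realism).
rng = random.Random(0)
finals = sorted(project_population(1000, 85, rng) for _ in range(10_000))
median = finals[len(finals) // 2]
print(f"median projected population after 85 years: {median:.0f} (started at 1000)")
```

The point of running thousands of trials is that a single simulation says little; the distribution of outcomes is what supports conclusions such as "quasi-extinction" for particular colonies.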
They found that sea ice will generally decline and its variability will increase by the end of this century. As a result, the simulations indicate that emperor populations will increase by around 10% through midcentury, but then decline to 19% below current levels by 2100. One group of 7 colonies facing the Ross Sea will still be non-threatened by that time, although with a reduced population. On the other side of the continent, facing the Indian Ocean and Weddell Sea, 10 colonies will face quasi-extinction. Most of the rest will qualify as endangered.

The collaborative nature of a study like this, Holland says, allows the expertise of NCAR scientists to inform other fields, such as biology and economics, to better understand the global system. The researchers conclude that the emperor penguin is “fully deserving of Endangered status due to climate change, and can act as an iconic example of a new global conservation paradigm for species threatened by future climate change.”

Writer: Harvey Leifert
Contact: David Hosansky, NCAR & UCAR Communications

Collaborating institutions:
National Center for Atmospheric Research
University of Amsterdam
University College London
University of Colorado/Cooperative Institute for Research in Environmental Sciences
University of La Rochelle
Woods Hole Oceanographic Institution

Funders:
Alexander von Humboldt Foundation
European Research Council
Grayce B. Kerr Fund
National Oceanic and Atmospheric Administration
National Science Foundation
Penzance Endowed Fund in Support of Assistant Scientists
Woods Hole Oceanographic Institution

Dive deeper

Stéphanie Jenouvrier, Marika Holland, Julienne Stroeve, Mark Serreze, Christophe Barbraud, Henri Weimerskirch, and Hal Caswell, Projected continent-wide declines of the emperor penguin under climate change, Nature Climate Change (2014), doi:10.1038/nclimate2280

In Graphic Terms

(Figure: Annual mean change of sea ice concentrations (SIC) between the twentieth and twenty-first centuries and conservation status of emperor penguin colonies by 2100. SIC projections were obtained from a subset of atmosphere-ocean general circulation models. Dot numbers refer to each colony evaluated, with dot color showing conservation status: red = quasi-extinct, orange = endangered, yellow = vulnerable, green = not threatened. Figure 1 from Jenouvrier et al., doi:10.1038/nclimate2280; image courtesy Nature Climate Change.)

Snowfall, inch by inch

Bob Henson • March 18, 2013 | The past month has seen a remarkable run of challenging snow forecasts across the United States. From Denver to the Washington-Boston corridor, a series of winter storms brought millions of people either much more or much less snow than they expected—sometimes depending on which side of town they were on. But if predicting snow is a tough business (as we explored in our post of March 4), measuring it is no piece of cake either.

(Photo: Ethan Gutmann uses old-fashioned technology—a ruler—to verify measurements from a laser-based snow measuring system at NCAR’s Marshall Field Site. ©UCAR. Photo by Carlye Calvin. This image is freely available for media & nonprofit use.)

We took a close look at the pitfalls of snow measurement and some promising technologies in this 2011 AtmosNews story. Here, I’ll provide a few updates. You can dive deeper into the variables that shape snow accumulation in our interactive feature “Accumulated wisdom: How to measure snow.”

A fine line between snowy and sloppy

One reason why winter-storm forecasting and observing has been such a struggle lately is borderline temperatures. Sometimes significant amounts of snow will fall into surface air that’s close to, or even above, freezing. The common result: trees and lawns plastered with snow, but sidewalks and roads merely wet or slushy. Given a patchwork like this, what exactly does it mean to say that “ten inches of snow fell today,” especially if there’s little or no snow in sight where you are?

Every six hours, hundreds of U.S. observing stations report the amount of snow that’s fallen on a flat snowboard over that time span. The board, typically a plank of wood painted white, is then cleared for the next accumulation period. (Some experts are shifting toward the name “snow measuring board,” to avoid confusion with the winter recreation device.) Also, at least every 24 hours, stations report the total amount of snow on the ground.
This is usually done by measuring snow depth with a yardstick at several points within a few feet of a reporting station, then averaging the results. It’s easy to see why the total snowpack on the ground probably won’t correspond precisely to the amounts measured on snowboards every six hours. For example:

• As snow accumulates, the weight of upper layers compacts lower ones, which reduces the height of layers toward the bottom of the snowpack.
• Snow typically melts less quickly on grass (and more quickly on pavement) than on a snowboard.

It’s also easy to see why consistency in snow measurement is critical. When the pace of checking and clearing a snowboard goes up, the total snow measured in a given storm will go up as well, since there’s less time for the snow on the board to be compacted or to melt in between measurements.

(Image: Engaging animations help the 15,000-plus volunteers with the Community Collaborative Rain, Hail, and Snow Network sharpen their snow-measuring skills. From “How to Measure New Snow Depth,” by Noah Besser/Parker Street, courtesy CoCoRaHS.)

Unfortunately, as we discussed in 2011, our nation’s snow climatology is checkered with inconsistency. Measurements have varied over the years by location, frequency, and technique, which can make it difficult for scientists to pin down long-term trends in snowfall. Efforts such as the U.S. Climate Reference Network—a set of more than 100 gold-standard reporting stations across the nation—will be a substantial help going forward. More than 20,000 Americans are also pitching in through the National Weather Service’s Cooperative Observer Program and the Community Collaborative Rain, Hail and Snow Network, which uses animations and other materials to train volunteers in measuring snow depth and liquid content. Still, when using snowfall data from years past, it can take careful research to unravel and correct for biases and errors.
It’s also important to keep in mind that a record snowfall might be achieved more readily now than in the past because of the higher frequency of snow measurements in many locations.

Automation’s promise

There’s also been major progress in technologies that allow snow depth to be observed automatically. NCAR’s Ethan Gutmann is among the scientists using lasers and GPS signals to assess snowpack in great detail over areas roughly 300 feet wide, or about the size of a football field. Within this area, the system can measure snow at more than 1,000 points in an hour’s time, assessing depths of up to ten feet with an accuracy of 0.5 inch or better. Gutmann is hoping to find funding for a laser-based system that could provide such measurements over a much larger area, roughly a square mile. In the meantime, he’s investigating options for research in Antarctica and Colorado with a mid-sized laser system that could collect thousands of measurements per second over an area equivalent to a few city blocks, about 1,000 feet wide.

(Photo: Scott Landolt checks one of several automated precipitation gauges at NCAR’s Marshall Field Site. The circular metallic fence helps keep wind from blowing snow away from the gauge and distorting the measurements. ©UCAR. Photo by Carlye Calvin. This image is freely available for media & nonprofit use.)

Other automated systems can measure snowfall, and the amount of liquid within it, in situ (at a single point). Many of these gauges are put through their paces at NCAR’s Marshall Field Site. Here, various types of in-situ precipitation gauges are thoroughly tested, and customized fences and other windshields are evaluated for their ability to minimize wind effects on gauge measurements. Roy Rasmussen and colleagues summarized their ongoing research (sponsored by NOAA, the Federal Aviation Administration, and NCAR) in a 2012 overview paper for the Bulletin of the American Meteorological Society.
“While some progress has been made, measuring snow remains a significant challenge,” notes the article. Among the hurdles:

• finding a windshield design that’s compact and affordable yet effective
• preventing snow from bridging over and blocking the gauge opening
• creating automated sensors that can detect the lightest snowfall rates

This winter the World Meteorological Organization has been taking a closer look at automated snow measurement through its Solid Precipitation Intercomparison Experiment (WMO-SPICE). Participating locations include the NCAR Marshall Field Site (including a NOAA Climate Reference Network station there) as well as sites in Australia, Canada, Chile, Finland, Norway, Poland, Russia, Switzerland, and New Zealand. The focus in WMO-SPICE is on accurate detection of precipitation amount, intensity, and type, as well as the amount of snow actually on the ground—which, as noted, can be a very different animal.

Snow observing, 1960s-style

John Cahir knows how difficult it can be to measure snow. He’s a retired meteorology professor from Pennsylvania State University who spends part of each year working on forecaster training modules with UCAR’s COMET Program. Cahir served as a weather observer at sea for the U.S. Navy and at Worcester, Massachusetts, in the 1960s for what was then the U.S. Weather Bureau (now the National Weather Service).

“Worcester was a really interesting place to observe snow,” says Cahir. The city’s official elevation is 480 feet, but the airport is at 1,000 feet. “It’s a windy site, and it tends to be colder than Boston, both because it’s at higher elevation and it’s about 40 miles inland.” Worcester’s long-term snowfall average is about 70 inches per year, compared to Boston’s 44 inches.

Snow measuring boards weren’t part of the observing process in Cahir’s day. “We took measurements strictly by going around with a ruler and sticking it into a representative number of spots,” he says.
(Though no longer standard for measuring how much snow has fallen in a given period, this technique remains in use for measuring total snow cover, as noted in the main article above). Drifting only made things worse, says Cahir: “There were times when it was very, very difficult to estimate what the snowfall really was.” One fellow observer confessed to Cahir that he would sometimes call his wife in town—hundreds of feet below the airport—and ask her to take a more readily obtainable report from their own yard.

Predicting the snows that matter most

Bob Henson • March 4, 2013 | Snow in February isn’t exactly stop-the-press news. But last month delivered some memorable accumulations and blizzard conditions to several parts of the United States, including New England and the Great Plains. Did these onslaughts catch people off guard?

On the whole, last month’s major snowfalls were amply forecast, including the one that walloped New England on February 6–9 and the twin hits to Kansas, Missouri, Oklahoma, and Texas in late February. The National Weather Service (NWS), private firms, and broadcast meteorologists all emphasized that life-threatening conditions could occur and that snow amounts might approach or even top all-time records in some spots. Which, in fact, they did. Some examples:

Heaviest snowfall ever recorded
• Portland, Maine (31.9”, February 8–9)

Second heaviest
• Concord, New Hampshire (24.0”, February 8–9)
• Wichita, Kansas (14.2”, February 20–21)

Third heaviest
• Amarillo, Texas (19.0”, February 25)

Fifth heaviest
• Boston, Massachusetts (24.9”, February 9)

One reason why these eye-popping snowfall totals didn’t come as a surprise is the growth of ensemble prediction. Little more than a decade ago, U.S. forecasters had access to only a handful of fresh runs of computer models every few hours to guide their snow forecasts. Today, there’s not only a broader range of models, but some of these models are run multiple times, side by side, with small changes in the starting-point conditions that mimic the gaps in our less-than-perfect weather observing network. Such ensembles are helping forecasters deal with high-impact threats like the “Snowquester” winter storm expected to strike the Washington, D.C., area this week.

Epic snow or epic fail? Let’s check the plumes

The 23 members of the Short Range Ensemble Forecast system (SREF) predicted a widely varying range of outcomes for Boston three days before the blizzard of February 8–9.
The individual model runs produced snowfall amounts that ranged from 3.5 to 51 inches. However, the mean, or average, of an entire ensemble can serve as useful forecast guidance. The observed total in Boston of 24.9 inches ended up very close to the ensemble mean. (Graphic courtesy NOAA.)

When an entire ensemble is plotted on one graph, it’s easy to see the dangers of relying on a single model run. At right is experimental “plume” output from the Short Range Ensemble Forecast system, operated by NOAA’s Storm Prediction Center. SREF carries out 22 model runs every six hours. All are based on variations of the Weather Research and Forecasting model (WRF), a multiagency effort in which NCAR has played a major role. Click on “SREF Info” on the plumes page for more about the plumes.

Several days before Boston’s big snow, SREF showed the potential for massive accumulations—as well as for a bust. As shown in the graphic above, the predictions varied from a modest 3.5 inches to a snow lover’s dream of 51 inches. What’s a forecaster to do? Sometimes—though not always—the average of a large ensemble is the best route. The assumption here is that errors will tend to be equally distributed on either side of the eventual outcome. In the case of Boston, the ensemble mean shown in the graphic (24 inches) came less than an inch short of the final total, an impressive forecast indeed. It’s also clear that relying on a handful of individual model runs, which was par for the course until recent years, can produce a forecast destined to be spectacularly wrong.

Ensembles aren’t magic, though. Sometimes a subtle twist in unfolding weather, whose importance grows over time, goes unseen by all ensemble members. As a result, most or all of the members may err in the same direction. Or a weather situation may play into the known biases of the particular models that make up a given ensemble.
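The ensemble-mean logic is easy to sketch. The member values below are hypothetical, chosen only to span the 3.5-to-51-inch range cited for the Boston plume:

```python
# Hedged sketch of ensemble averaging: rather than trusting any single
# model run, average all members and track the spread. These member
# forecasts are invented, not actual SREF output.

member_forecasts = [3.5, 8.0, 12.5, 17.0, 21.0, 24.0, 27.5, 31.0,
                    36.0, 42.0, 51.0]   # snowfall, inches

ensemble_mean = sum(member_forecasts) / len(member_forecasts)
spread = max(member_forecasts) - min(member_forecasts)

print(f"ensemble mean: {ensemble_mean:.1f} in, spread: {spread:.1f} in")
```

A wide spread is itself useful information: it tells the forecaster the atmosphere's behavior for this storm is inherently uncertain, as the next paragraph describes.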
A savvy forecaster will keep such circumstances in mind while watching to see how the spread of ensemble solutions behaves over successive model runs as a weather event gets closer in time. If the spread stays wide, there may be more inherent uncertainty in the atmosphere’s behavior for this storm. If the spread begins to shift toward the high or low end, then the current ensemble mean might not be the best forecast.

Getting edgy about snow totals

One complication in snow forecasting is that accumulations depend critically on the snow-to-liquid ratio, or the amount of snowfall per unit of moisture. This value is shaped by the temperature and moisture amounts at the heights and locations where the snow is being made. A commonly cited average for lower elevations in the United States is 12-to-1 (1 inch of water yielding 12 inches of snow). But the ratios vary strongly by region, and even from storm to storm in a given place. Fluffy dendrites—what we picture as classically shaped snowflakes—may have ratios of 20-to-1 or higher. Moreover, the edges of a big snowstorm are still notoriously difficult areas to forecast. Sometimes only a few miles can separate heavy rain from heavy snow, with a highly dynamic transition zone in between. This is a familiar phenomenon near the New England coast, but it can also occur well inland. Five days in advance of the February 25 blizzard, forecasters at the Oklahoma City NWS office provided a heads-up that major snow could be expected across the northwest half of the state (top graphic). The forecast amounts rose as the storm took shape, with final accumulations (bottom graphic) topping 18 inches in parts of far northwest Oklahoma and the Texas Panhandle. (Top graphic courtesy NWS/Norman; bottom graphic courtesy NWS/Amarillo.) Oklahoma City found itself on the wet side of such a transition zone last week.
Forecasters correctly pegged a dramatic transition from extremely heavy snow and gale-force winds in far northwest Oklahoma to lighter but still hazardous amounts toward the center of the state (see top graphic at left), with rain predominating farther to the southeast. The night before the storm, the SREF plume (not shown) called for 1 to 10 inches in Oklahoma City, with an ensemble mean of about 4 inches. But the rain/snow line stayed a few tens of miles farther west than expected during the height of the storm (see bottom graphic at left), which left Oklahoma City with less than an inch of snowfall, outside the range of all SREF ensemble members from the night before. Even though the regional forecast was excellent, weathercasters who’d emphasized the local risks (more or less in line with NWS forecasts and warnings) found themselves subjected to biting critiques. Similar issues may occur in this week’s Snowquester event, with the rain/snow transition line expected to set up near or just east of Washington, D.C. That could give the city’s western suburbs 6 inches or more of snow while the eastern suburbs get little or nothing. Not surprisingly, the most recent SREF plume at this writing (produced late Monday morning, March 4) shows predicted amounts at Washington’s Reagan National Airport ranging wildly—from around 5 to 23 inches. In fairness, predicting snowfall amounts has never been for the faint of heart. If a forecast calls for 3 to 6 inches of snow, and people wake up to find a foot on the ground, the mistake is painfully obvious. With rainfall, the forecast isn’t so easy for a layperson to verify. If a deluge of 5 inches follows a prediction of 2 to 3 inches of rain, it takes more than a glance out the window to confirm that the rainfall was heavier than expected. It’s also worth noting that measuring snow accurately is a challenge of its own, as discussed in this AtmosNews post on March 18.
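The snow-to-liquid ratio discussed earlier is the simplest lever in any snowfall forecast: multiply the expected liquid-equivalent precipitation by the ratio. A toy sketch in Python (the function name and numbers are illustrative, not from any operational system):

```python
def snowfall_from_liquid(liquid_inches: float, ratio: float = 12.0) -> float:
    """Estimate snowfall depth (inches) from liquid-equivalent precipitation.

    ratio is the snow-to-liquid ratio: 12.0 for the commonly cited
    lower-elevation U.S. average; fluffy dendrites can run 20-to-1 or more.
    """
    return liquid_inches * ratio

# One inch of liquid at the 12-to-1 average vs. a fluffy 20-to-1 storm
print(snowfall_from_liquid(1.0))        # 12.0 inches of snow
print(snowfall_from_liquid(1.0, 20.0))  # 20.0 inches of snow
```

The hard part, of course, is that the ratio itself varies with temperature and moisture aloft, region to region and storm to storm, so the multiplier is a forecast in its own right.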
More resolution, more often

Higher-resolution, high-frequency models may soon provide more help to pin down features like the ones that robbed Oklahoma City of its expected snowfall. In 2011 the NWS introduced the Rapid Refresh model (NOAA/NCEP RAP). Based on the research-oriented WRF model, it’s run every hour and extends out to 18 hours. (See graphics produced by NCAR’s Real-time Weather Data service.) The High-Resolution Rapid Refresh (HRRR) brings the RAP’s 13-kilometer resolution down to 3 km, which is sharp enough to depict and track individual thunderstorms and other similarly sized features. Though still classified as experimental until its scheduled 2015 adoption by NWS, the HRRR is available to forecasters and run hourly by NOAA’s Earth System Research Laboratory. Northwest of Boston, the town of Billerica, Massachusetts, was hammered by more than two feet of snow on February 8–9, 2013. (Wikimedia Commons photo by Game Freak2600.) All of the big winter events noted above were captured several days in advance by longer-range forecast models. However, there was some noteworthy international disagreement beyond three or so days. The flagship model of the European Centre for Medium-Range Weather Forecasts (ECMWF) was the first to consistently project that the February 8–9 storm would be close enough to the Northeast coast to cause major problems. The ECMWF called for a major New England storm as much as a week in advance, while the NWS’s Global Forecast System model (GFS) took several more days to lock into a big-snow forecast. This echoes Hurricane Sandy, when the ECMWF landed on a consistent forecast for major Northeast impacts several days ahead of the GFS. Interestingly, both the Sandy superstorm and the blizzard involved a moisture-laden system from low latitudes getting swept up by an upper-level storm diving into the eastern U.S.
“Separately, the two waves that preceded the blizzard were not that impressive,” says Ed Szoke (Cooperative Institute for Research in the Atmosphere). “But the southern wave carried a lot of subtropical moisture. It’s possible that latent heat released from that moisture was an important factor in how the blizzard developed.” Cliff Mass (University of Washington) recently took a closer look at ECMWF-versus-GFS performance during the February 8–9 blizzard, as well as the broader implications of the two models’ contrasting behavior. The European model also sounded the earliest alarm for the impending Snowquester, according to Szoke. “The ECMWF was all over this as a big storm for the mid-Atlantic, focused in the North Carolina/Virginia area, while the GFS and other models tended to take the storm out to sea before deepening it. Eventually the GFS came on board, but all of the models have shifted north a bit toward the D.C. area.”

Scientists deploy lasers, GPS technology to improve snow measurements

BOULDER—Equipped with specialized lasers and GPS technology, scientists at the National Center for Atmospheric Research (NCAR) are working with colleagues to solve a critical wintertime weather mystery: how to accurately measure the amount of snow on the ground. Ethan Gutmann examines a laser instrument for measuring snow. (©UCAR, Photo by Carlye Calvin. This image is freely available for media & nonprofit use.*) Transportation crews, water managers, and others who make vital safety decisions need precise measurements of how snow depth varies across wide areas. But traditional measuring devices such as snow gauges or yardsticks often are inadequate for capturing snow totals that can vary even within a single field or neighborhood. Now scientists are finding that prototype devices that use light pulses, satellite signals, and other technologies offer the potential to almost instantly measure large areas of snow. In time, such devices might even provide a global picture of snow depth. “We’ve been measuring rain accurately for centuries, but snow is much harder because of the way it’s affected by wind and sun and other factors,” says NCAR scientist Ethan Gutmann. “It looks like new technology will finally give us the ability to say exactly how much snow is on the ground.” NCAR is conducting the research with several collaborating organizations, including the National Oceanic and Atmospheric Administration (NOAA) and the University of Colorado Boulder. The work is supported by NCAR’s sponsor, the National Science Foundation.

Uncertain depths

Emergency managers rely on snowfall measurements when mobilizing snow plows or deciding whether to shut down highways and airports during major storms. They also use snow totals when determining whether a region qualifies for disaster assistance.
In mountainous areas, officials need accurate reports of snowpack depth to assess the threat of avalanches or floods, and to anticipate the amount of water available from spring and summer runoff. More accurate measurements can also help meteorologists and hydrologists better understand snow physics and hydrological processes. But traditional approaches to measuring snow can greatly underreport or overreport snow totals, especially in severe conditions. Snow gauges may miss almost a third of the snow in a windy storm, even when they are protected by specialized fencing designed to cut down on the wind’s impacts. Snow probes or yardsticks can reveal snow depth within limited areas. But such tools require numerous in-person measurements at different locations, a method that may not keep up with totals during heavy snowfalls. Snow dunes. The three-dimensional features of a snow field above treeline are revealed by laser measurements. The laser, installed by NCAR at a test site in the Rocky Mountains, measures snow at more than 1,000 points across an area almost the size of a football field. (©UCAR, Image by Ethan Gutmann, NCAR. This image is freely available for media & nonprofit use.*) Weather experts also sometimes monitor the amount of snow that collects on flat, white pieces of wood known as snow boards, but this is a time-intensive approach that requires people to check the boards and clear them off every few hours. The nation’s two largest volunteer efforts—the National Weather Service’s Cooperative Observer Program and the Community Collaborative Rain, Hail, and Snow Network (CoCoRaHS)—each involve thousands of participants nationwide using snow boards, but their reports are usually filed just once a day. More recently, ultrasonic devices have been deployed in some of the world’s most wintry regions.
Much like radar, these devices measure the length of time needed for a pulse of ultrasonic energy to bounce off the surface of the snow and return to the transmitter. However, the signal can be affected by shifting atmospheric conditions, including temperature, humidity, and winds.

Testing new approaches

The specialized laser instruments under development at NCAR can correct for such problems. Once set up at a location, they can automatically measure snow depth across large areas. Unlike ultrasonic instruments, lasers rely on light pulses that are not affected by atmospheric conditions. New tests by Gutmann indicate that a laser instrument installed high above treeline in the Rocky Mountains west of Boulder can measure 10 feet or more of snow with an accuracy of half an inch or better. In a little over an hour, the instrument measures snow at more than 1,000 points across an area almost the size of a football field to produce a three-dimensional image of the snowpack and its variations in depth. Gutmann’s next step, if he can secure the needed funding, will be to build and test a laser instrument that can measure snow over several square miles. Measuring such a large area would require a new instrument capable of taking over 12,000 measurements per second. Ethan Gutmann walks past a snow gauge at a research site in Colorado. (©UCAR, Photo by Carlye Calvin. This image is freely available for media & nonprofit use.*) “If we’re successful, all of a sudden these types of instruments will reveal a continually updated picture of snow across an entire basin,” he says. One limitation for the lasers, however, is that the light pulses cannot pass through objects such as trees and buildings. This could require development of networks of low-cost laser installations that would each record snow depths within a confined area. Alternatively, future satellites equipped with such lasers might be capable of mapping the entire world from above.
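The principle behind the laser measurements is plain time-of-flight geometry: a light pulse’s round trip gives the range to the surface, and snow depth is simply the difference from a bare-ground baseline. A simplified sketch with invented numbers (real instruments like Gutmann’s also handle beam geometry, scan angles, and thousands of points):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_pulse(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface from a pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

def snow_depth(bare_ground_range_m: float, snow_surface_range_m: float) -> float:
    """Snow depth is how much closer the surface sits than the bare-ground baseline."""
    return bare_ground_range_m - snow_surface_range_m

# Invented numbers: a point surveyed at 150 m over bare ground returns
# from 147 m after a storm, implying 3 m of snow at that point.
baseline = range_from_pulse(2 * 150.0 / C)  # recover the 150 m baseline
print(snow_depth(baseline, 147.0))
```

Repeating this differencing at every scanned point is what turns a scatter of ranges into the three-dimensional snowpack image described above.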
Gutmann and Kristine Larson, a colleague at the University of Colorado, are also exploring how to use GPS sensors for snowfall measurements. GPS sensors record both satellite signals that reach them directly and signals that bounce off the ground. When there is snow on the ground, the reflected signal interferes with the direct signal at a different frequency than it does over bare soil, enabling scientists to determine how high the surface of the snow is above the ground. Such units could be a cost-efficient way of measuring snow totals because meteorologists could tap into the existing global network of ground-based GPS receivers. However, researchers are seeking to fully understand how both the density of the snow and the roughness of its surface alter GPS signals. “Our hope is to develop a set of high-tech tools that will enable officials to continually monitor snow depth, even during an intense storm,” Larson says. “While we still have our work cut out for us, the technology is very promising.” “I think this technology has great potential to benefit emergency managers and other decision makers, as well as forecasters,” Gutmann says.
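The GPS approach can be sketched the same way. In the interference technique Larson and colleagues use, the reflected signal beats against the direct one at a frequency proportional to the antenna’s height above the reflecting surface; snow raises that surface and lowers the apparent height. A rough illustration with assumed values (real retrievals fit this frequency from noisy signal-to-noise data):

```python
L1_WAVELENGTH = 0.1903  # meters, GPS L1 carrier wavelength

def reflector_height(freq_cycles_per_sin_elev: float,
                     wavelength_m: float = L1_WAVELENGTH) -> float:
    """Antenna height above the reflecting surface.

    The direct and reflected signals interfere, and the signal-to-noise
    ratio oscillates at a frequency of 2h / wavelength when plotted
    against the sine of the satellite elevation angle.
    """
    return freq_cycles_per_sin_elev * wavelength_m / 2.0

def snow_depth(bare_soil_height_m: float, current_height_m: float) -> float:
    """Snow raises the reflecting surface, shrinking the apparent height."""
    return bare_soil_height_m - current_height_m

# Assumed values: an antenna 2.0 m above bare soil observes an oscillation
# frequency implying a 1.4 m reflector height after a storm -> 0.6 m of snow.
h_now = reflector_height(2 * 1.4 / L1_WAVELENGTH)
print(round(snow_depth(2.0, h_now), 2))
```

The appeal, as the article notes, is that the hardware already exists: a worldwide network of geodetic GPS receivers could double as snow sensors.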

Study may answer longstanding questions about Little Ice Age

BOULDER—A new international study may answer contentious questions about the onset and persistence of Earth’s Little Ice Age, a period of widespread cooling that lasted for hundreds of years until the late 19th century. Gifford Miller collects vegetation samples on Baffin Island. (Photo courtesy University of Colorado Boulder.) The study, led by the University of Colorado Boulder with co-authors at the National Center for Atmospheric Research (NCAR) and other organizations, suggests that an unusual, 50-year-long episode of four massive tropical volcanic eruptions triggered the Little Ice Age between 1275 and 1300 A.D. The persistence of cold summers following the eruptions is best explained by a subsequent expansion of sea ice and a related weakening of Atlantic currents, according to computer simulations conducted for the study. The study, which used analyses of patterns of dead vegetation, ice and sediment core data, and powerful computer climate models, provides new evidence in a longstanding scientific debate over the onset of the Little Ice Age. Scientists have theorized that the Little Ice Age was caused by decreased summer solar radiation, erupting volcanoes that cooled the planet by ejecting sulfates and other aerosol particles that reflected sunlight back into space, or a combination of the two. “This is the first time anyone has clearly identified the specific onset of the cold times marking the start of the Little Ice Age,” says lead author Gifford Miller of the University of Colorado Boulder. “We also have provided an understandable climate feedback system that explains how this cold period could be sustained for a long period of time. 
If the climate system is hit again and again by cold conditions over a relatively short period—in this case, from volcanic eruptions—there appears to be a cumulative cooling effect.” “Our simulations showed that the volcanic eruptions may have had a profound cooling effect,” says NCAR scientist Bette Otto-Bliesner, a co-author of the study. “The eruptions could have triggered a chain reaction, affecting sea ice and ocean currents in a way that lowered temperatures for centuries.” The study appears this week in Geophysical Research Letters. The research team includes co-authors from the University of Iceland, the University of California Irvine, and the University of Edinburgh in Scotland. The study was funded in part by the National Science Foundation, NCAR’s sponsor, and the Icelandic Science Foundation.

Far-flung regions of ice

Scientific estimates regarding the onset of the Little Ice Age range from the 13th century to the 16th century, but there is little consensus, Miller says. Although the cooling temperatures may have affected places as far away as South America and China, they were particularly evident in northern Europe. Advancing glaciers in mountain valleys destroyed towns, and paintings from the period depict people ice-skating on the Thames River in London and canals in the Netherlands, places that were ice-free before and after the Little Ice Age. “The dominant way scientists have defined the Little Ice Age is by the expansion of big valley glaciers in the Alps and in Norway,” says Miller, a fellow at CU’s Institute of Arctic and Alpine Research. “But the time in which European glaciers advanced far enough to demolish villages would have been long after the onset of the cold period.” Miller and his colleagues radiocarbon-dated roughly 150 samples of dead plant material with roots intact, collected from beneath receding margins of ice caps on Baffin Island in the Canadian Arctic.
They found a large cluster of “kill dates” between 1275 and 1300 A.D., indicating the plants had been frozen and engulfed by ice during a relatively sudden event. The team saw a second spike in plant kill dates at about 1450 A.D., indicating the quick onset of a second major cooling event. To broaden the study, the researchers analyzed sediment cores from a glacial lake linked to the 367-square-mile Langjökull ice cap in the central highlands of Iceland that reaches nearly a mile high. The annual layers in the cores—which can be reliably dated by using tephra deposits from known historic volcanic eruptions on Iceland going back more than 1,000 years—suddenly became thicker in the late 13th century and again in the 15th century due to increased erosion caused by the expansion of the ice cap as the climate cooled. “That showed us the signal we got from Baffin Island was not just a local signal, it was a North Atlantic signal,” Miller says. “This gave us a great deal more confidence that there was a major perturbation to the Northern Hemisphere climate near the end of the 13th century.” The team used the Community Climate System Model, which was developed by scientists at NCAR and the Department of Energy with colleagues at other organizations, to test the effects of volcanic cooling on Arctic sea ice extent and mass. The model, which simulated various sea ice conditions from about 1150 to 1700 A.D., showed several large, closely spaced eruptions could have cooled the Northern Hemisphere enough to trigger the expansion of Arctic sea ice. The model showed that sustained cooling from volcanoes would have sent some of the expanding Arctic sea ice down along the eastern coast of Greenland until it eventually melted in the North Atlantic. Since sea ice contains almost no salt, when it melted the surface water became less dense, preventing it from mixing with deeper North Atlantic water. 
This weakened heat transport back to the Arctic and created a self-sustaining feedback on the sea ice long after the effects of the volcanic aerosols subsided, according to the simulations. The researchers set solar radiation at a constant level in the climate models. The simulations indicated that the Little Ice Age likely would have occurred without decreased summer solar radiation at the time, Miller says. About the article Title: Abrupt onset of the Little Ice Age triggered by volcanism and sustained by sea-ice/ocean feedbacks Authors: Gifford Miller, Áslaug Geirsdóttir, Yafang Zhong, Darren J. Larsen, Bette L. Otto-Bliesner, Marika M. Holland, David A. Bailey, Kurt A. Refsnider, Scott J. Lehman, John R. Southon, Chance Anderson, Helgi Bjornsson, Thorvaldur Thordarson Publication: Geophysical Research Letters

Melting Greenland ice sheets may threaten northeast United States, Canada

Boulder—Melting of the Greenland ice sheet this century may drive more water than previously thought toward the already threatened coastlines of New York, Boston, Halifax, and other cities in the northeastern United States and Canada, according to new research led by the National Center for Atmospheric Research (NCAR). The study, which is being published Friday in Geophysical Research Letters, finds that if Greenland's ice melts at moderate to high rates, ocean circulation by 2100 may shift and cause sea levels off the northeast coast of North America to rise by about 12 to 20 inches (about 30 to 50 centimeters) more than in other coastal areas. The research builds on recent reports that have found that sea level rise associated with global warming could adversely affect North America, and its findings suggest that the situation is more threatening than previously believed. This visualization, based on new computer modeling, shows that sea level rise may be an additional 10 centimeters (4 inches) higher by populated areas in northeastern North America than previously thought. Extreme northeastern North America and Greenland may experience even higher sea level rise. [ENLARGE] (Graphic courtesy Geophysical Research Letters, modified by UCAR.) News media terms of use* "If the Greenland melt continues to accelerate, we could see significant impacts this century on the northeast U.S. coast from the resulting sea level rise," says NCAR scientist Aixue Hu, the lead author. "Major northeastern cities are directly in the path of the greatest rise." A study in Nature Geoscience in March warned that warmer water temperatures could shift ocean currents in a way that would raise sea levels off the Northeast by about 8 inches (20 cm) more than the average global sea level rise. 
But it did not include the additional impact of Greenland's ice, which at moderate to high melt rates would further accelerate changes in ocean circulation and drive an additional 4 to 12 inches (about 10 to 30 cm) of water toward heavily populated areas of northeastern North America on top of average global sea level rise. More remote areas in extreme northeastern Canada and Greenland could see even higher sea level rise. Scientists have been cautious about estimating average sea level rise this century in part because of complex processes within ice sheets. The 2007 assessment of the Intergovernmental Panel on Climate Change projected that sea levels worldwide could rise by an average of 7 to 23 inches (18 to 59 cm) this century, but many researchers believe the rise will be greater because of dynamic factors in ice sheets that appear to have accelerated the melting rate in recent years. The new research was funded by the U.S. Department of Energy and by NCAR's sponsor, the National Science Foundation. It was conducted by scientists at NCAR, the University of Colorado at Boulder, and Florida State University.

How much meltwater?

To assess the impact of Greenland ice melt on ocean circulation, Hu and his coauthors used the Community Climate System Model, an NCAR-based computer model that simulates global climate. They considered three scenarios: the melt rate continuing to increase by 7 percent per year, as has been the case in recent years, or the melt rate slowing down to an increase of either 1 or 3 percent per year. If Greenland's melt rate slows down to a 3 percent annual increase, the study team's computer simulations indicate that the runoff from its ice sheet could alter ocean circulation in a way that would direct about a foot of water toward the northeast coast of North America by 2100. This would be on top of the average global sea level rise expected as a result of global warming.
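The difference between the study's three scenarios comes down to compounding: a melt rate that grows 7 percent per year doubles roughly every decade. A quick back-of-envelope sketch in Python (the initial rate is arbitrary; only the relative growth matters):

```python
def melt_rate_after(years: int, annual_increase: float,
                    initial_rate: float = 1.0) -> float:
    """Melt rate after compounding growth, relative to an arbitrary starting rate."""
    return initial_rate * (1.0 + annual_increase) ** years

# Compare the study's three scenarios over five decades
for pct in (0.01, 0.03, 0.07):
    growth = melt_rate_after(50, pct)
    print(f"{pct:.0%} per year -> {growth:.1f}x the starting rate after 50 years")
```

Run out to mid-century, the 7 percent scenario yields a melt rate tens of times larger than today's, which is why even the modelers who used it cautioned that it is considered unlikely.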
Although the study team did not try to estimate that mean global sea level rise, their simulations indicated that melt from Greenland alone under the 3 percent scenario could raise worldwide sea levels by an average of 21 inches (54 cm). Aixue Hu. [ENLARGE] (©UCAR, photo by Carlye Calvin.) If the annual increase in the melt rate dropped to 1 percent, the runoff would not raise northeastern sea levels by more than the 8 inches (20 cm) found in the earlier study in Nature Geoscience. But if the melt rate continued at its present 7 percent increase per year through 2050 and then leveled off, the study suggests that the northeast coast could see as much as 20 inches (50 cm) of sea level rise above a global average that could be several feet. However, Hu cautioned that other modeling studies have indicated that the 7 percent scenario is unlikely. In addition to sea level rise, Hu and his co-authors found that if the Greenland melt rate were to defy expectations and continue its 7 percent increase, this would drain enough fresh water into the North Atlantic to weaken the oceanic circulation that pumps warm water to the Arctic. Ironically, this weakening of the meridional overturning circulation would help the Arctic avoid some of the impacts of global warming and lead to at least the temporary recovery of Arctic sea ice by the end of the century.

Why the Northeast?

The northeast coast of North America is especially vulnerable to the effects of Greenland ice melt because of the way the meridional overturning circulation acts like a conveyor belt transporting water through the Atlantic Ocean. The circulation carries warm Atlantic water from the tropics to the north, where it cools and descends to create a dense layer of cold water. As a result, sea level is currently about 28 inches (71 cm) lower in the North Atlantic than the North Pacific, which lacks such a dense layer.
If the melting of the Greenland Ice Sheet were to increase by 3 percent or 7 percent yearly, the additional fresh water could partially disrupt the northward conveyor belt. This would reduce the accumulation of deep, dense water. Instead, the deep water would be slightly warmer, expanding and elevating the surface across portions of the North Atlantic. Unlike water in a bathtub, water in the oceans does not spread out evenly. Sea level can vary by several feet from one region to another, depending on such factors as ocean circulation and the extent to which water at lower depths is compressed. "The oceans will not rise uniformly as the world warms," says NCAR scientist Gerald Meehl, a co-author of the paper. "Ocean dynamics will push water in certain directions, so some locations will experience sea level rise that is larger than the global average." About the article Title: "Transient Response of the MOC and Climate to Potential Melting of the Greenland Ice Sheet in the 21st Century" Authors: Aixue Hu, Gerald Meehl, Weiqing Han, and Jianjun Yin Publication: Geophysical Research Letters

Global warming: Cuts in greenhouse gas emissions would save Arctic ice, reduce sea level rise

BOULDER—The threat of global warming can still be greatly diminished if nations cut emissions of heat-trapping greenhouse gases by 70 percent this century, according to a new analysis. While global temperatures would rise, the most dangerous potential aspects of climate change, including massive losses of Arctic sea ice and permafrost and significant sea level rise, could be partially avoided. The study, led by scientists at the National Center for Atmospheric Research (NCAR), will be published next week in Geophysical Research Letters. It was funded by the Department of Energy and the National Science Foundation, NCAR's sponsor. "This research indicates that we can no longer avoid significant warming during this century," says NCAR scientist Warren Washington, the lead author. "But if the world were to implement this level of emission cuts, we could stabilize the threat of climate change and avoid catastrophe." Warren Washington. [ENLARGE] (©UCAR, photo by Carlye Calvin.)

Avoiding dangerous climate change

Average global temperatures have warmed by close to 1 degree Celsius (almost 1.8 degrees Fahrenheit) since the pre-industrial era. Much of the warming is due to human-produced emissions of greenhouse gases, predominantly carbon dioxide. This heat-trapping gas has increased from a pre-industrial level of about 284 parts per million (ppm) in the atmosphere to more than 380 ppm today. With research showing that additional warming of about 1 degree C (1.8 degrees F) may be the threshold for dangerous climate change, the European Union has called for dramatic cuts in emissions of carbon dioxide and other greenhouse gases. The U.S. Congress is also debating the issue. To examine the impact of such cuts on the world's climate, Washington and his colleagues ran a series of global supercomputer studies with the NCAR-based Community Climate System Model. They assumed that carbon dioxide levels could be held to 450 ppm at the end of this century.
That figure comes from the U.S. Climate Change Science Program, which has cited 450 ppm as an attainable target if the world quickly adopts conservation practices and new green technologies to cut emissions dramatically. In contrast, carbon dioxide levels are now on track to reach about 750 ppm by 2100 if emissions go unchecked. The team's results showed that if carbon dioxide were held to 450 ppm, global temperatures would increase by 0.6 degrees C (about 1 degree F) above current readings by the end of the century. In contrast, the study showed that temperatures would rise by almost four times that amount, to 2.2 degrees C (4 degrees F) above current readings, if emissions were allowed to continue on their present course. New computer simulations show the extent that average air temperatures at Earth's surface could warm by 2080-2099 compared to 1980-1999, if (top) greenhouse gas emissions continue to climb at current rates, or if (bottom) society cuts emissions by 70 percent. In the latter case, temperatures rise by less than 2°C (3.6°F) across nearly all of Earth's populated areas. However, unchecked emissions could lead to warming of 3°C (5.4°F) or more across parts of Europe, Asia, North America, and Australia. [ENLARGE] (Graphic courtesy Geophysical Research Letters, modified by UCAR.) Holding carbon dioxide levels to 450 ppm would have other impacts, according to the climate modeling study:

•  Sea level rise due to thermal expansion as water temperatures warmed would be 14 centimeters (about 5.5 inches) instead of 22 centimeters (8.7 inches). Significant additional sea level rise would be expected in either scenario from melting ice sheets and glaciers.

•  Arctic ice in the summertime would shrink by about a quarter in volume and stabilize by 2100, as opposed to shrinking at least three-quarters and continuing to melt. Some research has suggested the summertime ice will disappear altogether this century if emissions continue on their current trajectory.

•  Arctic warming would be reduced by almost half, helping preserve fisheries and populations of sea birds and Arctic mammals in such regions as the northern Bering Sea.

•  Significant regional changes in precipitation, including decreased precipitation in the U.S. Southwest and an increase in the U.S. Northeast and Canada, would be cut in half if emissions were kept to 450 ppm.

•  The climate system would stabilize by about 2100, instead of continuing to warm.

The research team used supercomputer simulations to compare a business-as-usual scenario to one with dramatic cuts in carbon dioxide emissions beginning in about a decade. The authors stressed that they were not studying how such cuts could be achieved nor advocating a particular policy. "Our goal is to provide policymakers with appropriate research so they can make informed decisions," Washington says. "This study provides some hope that we can avoid the worst impacts of climate change—if society can cut emissions substantially over the next several decades and continue major cuts through the century." About the article Title: "How Much Climate Change Can Be Avoided by Mitigation?" Authors: Warren Washington, Reto Knutti, Gerald Meehl, Haiyan Teng, Claudia Tebaldi, David Lawrence, Lawrence Buja, Gary Strand Publication: Geophysical Research Letters


Subscribe to Snow + Ice