Computer Modeling

UCAR collaboration with The Weather Company to improve weather forecasts worldwide

BOULDER, Colo. — The University Corporation for Atmospheric Research (UCAR) today announced a new collaboration with The Weather Company, an IBM business, to improve global weather forecasting. The partnership brings together cutting-edge computer modeling developed at the National Center for Atmospheric Research (NCAR) with The Weather Company's meteorological science and IBM's advanced computing hardware.

"This is a major public-private partnership that will advance weather prediction and generate significant benefits for businesses making critical decisions based on weather forecasts," said UCAR President Antonio J. Busalacchi. "We are gratified that taxpayer investments in the development of weather models are now helping U.S. industries compete in the global marketplace."

UCAR, a nonprofit consortium of 110 universities focused on research and training in the atmospheric and related Earth system sciences, manages NCAR on behalf of the National Science Foundation.

Under the new agreement, The Weather Company will develop a global forecast model based on the Model for Prediction Across Scales (MPAS), an innovative software platform developed by NCAR and Los Alamos National Laboratory.

The Model for Prediction Across Scales (MPAS) enables forecasters to combine a global view of the atmosphere with a higher-resolution view of a particular region, such as North America. (©UCAR. This image is freely available for media & nonprofit use.)

MPAS offers a unique way of simulating the global atmosphere while giving users more flexibility to focus on specific regions of interest. Unlike traditional three-dimensional models that calculate atmospheric conditions at multiple points within a block-shaped grid, MPAS uses a hexagonal mesh, resembling a honeycomb, that can be stretched wide in some regions and compressed for higher resolution in others. This enables forecasters to simultaneously capture far-flung atmospheric conditions that can influence local weather, as well as small-scale features, such as vertical wind shear, that can affect thunderstorms and other severe weather.
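The variable-resolution idea can be pictured in a few lines of code. The toy function below is not MPAS (which generates its honeycomb meshes on the sphere with its own tools); it is a minimal Python sketch of the underlying concept, in which a target cell spacing tightens smoothly over a hypothetical region of interest. All names and numbers are illustrative assumptions.

```python
# Toy illustration of variable resolution, not MPAS code: cell spacing
# tightens smoothly near a hypothetical region of interest and relaxes
# to a coarse value over the rest of the globe.
import math

def cell_spacing_km(lat, lon, center=(40.0, -105.0),
                    fine_km=15.0, coarse_km=60.0, radius_deg=20.0):
    # Crude flat-Earth distance in degrees; real meshes use spherical geometry.
    d = math.hypot(lat - center[0], lon - center[1])
    # Smooth tanh blend from fine spacing inside the region to coarse outside.
    blend = 0.5 * (1.0 + math.tanh((d - radius_deg) / 5.0))
    return fine_km + blend * (coarse_km - fine_km)

for place, (lat, lon) in {"Boulder": (40.0, -105.2),
                          "Miami": (25.8, -80.2),
                          "Tokyo": (35.7, 139.7)}.items():
    print(f"{place}: target cell spacing ~{cell_spacing_km(lat, lon):.0f} km")
```

Run on these sample points, the spacing is finest near the region of interest and coarsens smoothly with distance, which is the property that lets one mesh serve both regional detail and global coverage.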
Drawing on the computational power of GPUs — graphics processing units — such as those being used in a powerful new generation of IBM supercomputers, and on the expertise of NCAR and The Weather Company, the collaboration is designed to push the capabilities of MPAS to yield more accurate forecasts with longer lead times. The results of NCAR's work will be freely available to the meteorological community. Businesses from airlines to retailers, as well as the general public, stand to benefit.

Mary Glackin, head of weather science and operations for The Weather Company, said, "As strong advocates for science, we embrace strong public-private collaborations that understand the value science brings to society, such as our continued efforts with UCAR to advance atmospheric and computational sciences."

"As this partnership shows, society is on the cusp of a new era in weather prediction, with more precise short-range forecasts as well as longer-term forecasts of seasonal weather patterns," Busalacchi said. "These forecasts are important for public health and safety, as well as for enabling companies to leverage economic opportunities in ways that were never possible before."

About The Weather Company

The Weather Company, an IBM Business, helps people make informed decisions and take action in the face of weather. The company offers weather data and insights to millions of consumers, as well as thousands of marketers and businesses, via Weather's API, its business solutions division, and its own digital products from The Weather Channel (weather.com) and Weather Underground (wunderground.com).

High-resolution regional modeling (no supercomputer needed)

Annual precipitation over Colorado as modeled by the low-resolution, global Community Earth System Model (top) compared to the high-resolution, regional Weather Research and Forecasting model (below). (Images courtesy Ethan Gutmann, NCAR.)

February 13, 2017 | In global climate models, the hulking, jagged Rocky Mountains are often reduced to smooth, blurry bumps. It's a practical reality: these models, which depict the entire planet, typically must be run at relatively low resolution because of constraints on supercomputing resources. But the result, a virtual morphing of peaks into hills, limits the ability of climate models to accurately project how precipitation in mountainous regions may change in the future — information that is critically important to water managers.

To address the problem, hydrologists have typically relied on two methods to "downscale" climate model data to make them more useful. The first, which uses statistical techniques, is fast and doesn't require a supercomputer, but it makes many unrealistic assumptions. The second, which uses a high-resolution weather model like the Weather Research and Forecasting model (WRF), is much more realistic but requires vast amounts of computing resources.

Now hydrologists at the National Center for Atmospheric Research (NCAR) are developing an in-between option: the Intermediate Complexity Atmospheric Research Model (ICAR), which gives researchers increased accuracy using only a tiny fraction of the computing resources.

"ICAR is about 80 percent as accurate as WRF in the mountainous areas we studied," said NCAR scientist Ethan Gutmann, who is leading the development of ICAR. "But it only uses 1 percent of the computing resources. I can run it on my laptop."

Drier mountains, wetter plains

How much precipitation falls in the mountains — and when — is vitally important for communities in the American West and elsewhere that rely on snowpack to act as a frozen reservoir of sorts. Water managers in these areas are extremely interested in how a changing climate might affect snowfall and temperature, and therefore snowpack.

But global climate models, with their low resolution, cannot accurately represent the complex topography of mountain ranges, so they are ill suited to answering these questions. For example, as air flows into Colorado from the west, the Rocky Mountains force that air to rise, cooling it and causing moisture to condense and fall to the ground as snow or rain. Once these air masses clear the mountains, they are drier than they otherwise would have been, so less moisture is available to fall across Colorado's eastern plains.

Low-resolution climate models cannot capture this mechanism — the lifting of air over the mountains — so in Colorado, for example, they often simulate mountains that are drier than they should be and plains that are wetter. For a regional water manager, these small shifts could mean the difference between full reservoirs and water shortages.

"Climate models are useful for predicting large-scale circulation patterns around the whole globe, not for predicting precipitation in the mountains or in your backyard," Gutmann said.
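The mechanism Gutmann describes, and the one ICAR is built to capture, lends itself to a back-of-the-envelope demonstration. The Python sketch below is not ICAR (the real model applies analytical mountain-wave solutions in three dimensions); it is a minimal toy, under assumed values for wind, moisture, and terrain, showing that forced ascent alone produces a wet windward slope and a dry lee side.

```python
# Toy rain-shadow calculation, not ICAR itself: wind forced up a ridge
# gives vertical motion w = U * dh/dx, and only rising moist air rains out.
# All values below are assumed for illustration.
import numpy as np

U = 10.0          # background westerly wind, m/s (assumed)
q = 5e-3          # available moisture, kg/kg (assumed)
efficiency = 0.7  # fraction of condensate reaching the ground (assumed)

x = np.linspace(0.0, 200e3, 401)                 # 200-km west-east transect, m
h = 2000.0 * np.exp(-((x - 100e3) / 25e3) ** 2)  # idealized 2-km-high ridge

w = U * np.gradient(h, x)                     # vertical motion from forced lifting
precip = efficiency * q * np.maximum(w, 0.0)  # rain only where air rises

print(f"windward total: {precip[x < 100e3].sum():.3f}")
print(f"leeward total:  {precip[x > 100e3].sum():.3f}")
```

All the toy precipitation falls on the windward side of the ridge and none in its lee, which is the wet-mountains, dry-plains pattern the article describes coarse models getting backwards.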
Precipitation in millimeters over Colorado between Oct. 1 and May 1 as simulated by the Weather Research and Forecasting model (WRF), the Intermediate Complexity Atmospheric Research model (ICAR), and the observation-based Parameter-Elevation Regressions on Independent Slopes Model. (Images courtesy Ethan Gutmann.)

A modeling middle ground

A simple statistical fix for these known problems might be to adjust precipitation data, drying out areas known to be too wet and moistening areas known to be too dry. The problem is that such statistical downscaling adjustments don't capture the physical mechanisms responsible for the errors. This means that any impact of a warming climate on the mechanisms themselves would not be accurately portrayed by a statistical technique.

That's why using a model like WRF to dynamically downscale the climate data produces more reliable results — the model is actually solving the complex mathematical equations that describe the dynamics of the atmosphere. But all those incredibly detailed calculations also take an incredible amount of computing.

A few years ago, Gutmann began to wonder if there was a middle ground. Could he make a model that would solve the equations for just the small portion of atmospheric dynamics that matters most to hydrologists — in this case, the lifting of air masses over the mountains — while leaving out others that are less relevant?

"I was studying statistical downscaling techniques, which are widely used in hydrology, and I thought, 'We should be able to do better than this,'" he said. "'We know what happens when you lift air up over a mountain range, so why don't we just do that?'"

Gutmann wrote the original code for the model that would become ICAR in just a few months, but he spent the next four years refining it, a process that's still ongoing.

100 times as fast

Last year, Gutmann and his colleagues — Martyn Clark and Roy Rasmussen, also of NCAR; Idar Barstad, of Uni Research Computing in Bergen, Norway; and Jeffrey Arnold, of the U.S. Army Corps of Engineers — published a study comparing simulations of Colorado created by ICAR and WRF against observations.

The authors found that ICAR and WRF results were generally in good agreement with the observations, especially in the mountains and during the winter. One of ICAR's weaknesses, however, is in simulating storms that build over the plains in the summertime. Unlike WRF, which actually allows storms to form and build in the model, ICAR estimates the number of storms likely to form given the atmospheric conditions, a method called parameterization.

Even so, ICAR, which is freely available to anyone who wants to use it, is already being run by teams in Norway, Austria, France, Chile, and New Zealand.

"ICAR is not perfect; it's a simple model," Gutmann said. "But in the mountains, ICAR can get you 80 to 90 percent of the way there at 100 times the speed of WRF. And if you choose to simplify some of the physics in ICAR, you can get it close to 1,000 times faster."

About the article

Title: The Intermediate Complexity Atmospheric Research Model (ICAR)
Authors: Ethan Gutmann, Idar Barstad, Martyn Clark, Jeffrey Arnold, and Roy Rasmussen
Journal: Journal of Hydrometeorology, DOI: 10.1175/JHM-D-15-0155.1
Funders: U.S. Army Corps of Engineers; U.S. Bureau of Reclamation
Collaborators: Uni Research Computing in Norway; U.S. Army Corps of Engineers

Writer/contact: Laura Snider, Senior Science Writer

NCAR-based climate model joins seasonal forecasting effort

January 12, 2017 | An NCAR-based computer model known for projecting global climate decades into the future recently joined a suite of other world-class models being used to forecast what may lie just a few months ahead.

The Community Earth System Model (CESM) has long been an invaluable tool for scientists investigating how the climate may change in the long term — decades or even centuries into the future. Last summer, CESM became the newest member of the North American Multi-Model Ensemble (NMME), an innovative effort that combines techniques typically used in weather forecasting with those used in climate modeling to predict temperature and precipitation seasons in advance. The result is a bridge that helps span the gap between two-week forecasts and decades-long projections.

The forecasted temperature anomalies (departures from average) over North America made by the entire NMME suite (top) and by CESM (middle). Observed temperature anomalies for the same period (bottom). (Images courtesy NOAA.)

But NMME also builds another bridge: this one between researchers and the operational forecasters who issue the forecasts society depends on. Now a collection of nine climate models, the NMME has proven that it produces more accurate seasonal forecasts than any one model alone. It was adopted in May by the National Oceanic and Atmospheric Administration (NOAA) as one of the agency's official seasonal forecasting tools.

"What is so important about NMME is that it's bringing research to bear on operational forecasts," said Ben Kirtman, a professor of atmospheric sciences at the University of Miami who leads the NMME project. "The marriage between real-time prediction and research has fostered new understandings, identified new problems that we hadn't thought about before, and really opened up new lines of research."

A new way to start a climate model run

Weather models and climate models have a lot in common; for one, they both use mathematical equations to represent the physical processes going on in the atmosphere. Weather models, which are concerned with what's likely to happen in the immediate future, depend on being fed accurate initial conditions to produce good forecasts. Even if a weather model could perfectly mimic how the atmosphere works, it would still need to know what the atmosphere actually looks like now — the temperature and pressure at points across the country, for example — to determine what the atmosphere will look like tomorrow.

Climate modelers, on the other hand, are often interested in broad changes over many decades, so the exact weather conditions at the beginning of a simulation are usually not as important. In fact, their impact is quickly drowned out by larger-scale trends that unfold over long time periods.

In recent years, however, scientists have become interested in whether climate models — which simulate changes in ocean circulation patterns, sea surface temperatures, and other large-scale phenomena that have lingering impacts on weather patterns — could be initialized with accurate starting conditions and then used to make skillful seasonal forecasts.

The NMME project is exploring this question. The global climate models that make up the NMME are all being initialized monthly to create multiple forecasts that stretch a year in advance. Along with CESM, those models include the NCAR-based Community Climate System Model, Version 4, which is being initialized by Kirtman's team at the University of Miami.
(See a full list of models below.)

Taken together, the individual model forecasts tell forecasters something about the amount of uncertainty in the seasonal forecast. If the individual forecasts vary substantially, the future is less certain. If they agree, forecasters can have more confidence.

The forecasted precipitation anomalies (departures from average) over North America made by the entire NMME suite (top) and by CESM (middle). Observed precipitation anomalies for the same period (bottom). (Images courtesy NOAA.)

A valuable collection of data

CESM's first seasonal forecast as part of NMME, issued for July, August, and September 2016, was perhaps the most accurate of any in the ensemble. The forecast — which called for conditions to be warmer and drier than average across most of the United States — came after more than a year of work by NCAR scientists Joseph Tribbia and Julie Caron.

All of the models in the NMME suite must be calibrated by running "hindcasts." By comparing a model's prediction of a historical season with what actually happened, scientists can identify whether the model is consistently off in some areas. For example, the model might generally predict seasons that are wetter or cooler than they actually are in certain regions of the country. These tendencies can then be statistically corrected in future forecasts.

"We ran 10 predictions every month for a 33-year period and ran each prediction out for one year," Tribbia said. "You can learn a lot about how your model performs when you have so many runs."
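The correction step can be illustrated with a toy calculation. The sketch below assumes the simplest possible form, removing the mean error measured over a hindcast record; operational NMME calibration is more sophisticated, and every number here is synthetic.

```python
# Toy version of hindcast calibration: estimate a model's mean seasonal
# bias from a 33-year, 10-member hindcast record and remove it from new
# forecasts. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)

truth = rng.normal(0.0, 1.0, size=33)  # observed anomalies, 33 years (synthetic)
# Hypothetical hindcasts: truth plus a built-in +0.8 degree warm bias plus noise.
hindcasts = truth[:, None] + 0.8 + rng.normal(0.0, 0.5, size=(33, 10))

bias = (hindcasts.mean(axis=1) - truth).mean()  # mean error over the record
print(f"estimated bias: {bias:+.2f} degrees")

raw_forecast = np.array([1.5, 0.2, -0.4])  # new model output (hypothetical)
print("calibrated forecast:", np.round(raw_forecast - bias, 2))
```

Because the bias is estimated from many years and many members, the noise largely averages out, which is why a long hindcast record like the 33-year, 10-member one Tribbia describes is so valuable.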
"We get to start looking at the data to see how we're doing, and what we might change in the future to make our seasonal forecasts better."Models that make up NMME:NCEP CFSv2: National Centers for Environmental Prediction Climate Forecast System Version 2 (NOAA)CMC1 CanCM3: Canadian Meteorological Centre/Canadian Centre for Climate Modeling and AnalysisCMC2 CanCM4: Canadian Meteorological Centre/Canadian Centre for Climate Modeling and AnalysisGFDL FLOR: Geophysical Fluid Dynamics Laboratory Forecast-oriented Low Ocean Resolution (NOAA)GFDL CM2.1: Geophysical Fluid Dynamics Laboratory Coupled Climate Model Version 2.1 (NOAA)NCAR CCSM4: National Center for Atmospheric Research Community Climate System Model Version 4NASA GEOS5: NASA Goddard Earth Observing System Model Version 5NCAR CESM: National Center for Atmospheric Research Community Earth System ModelIMME: National Centers for Environmental Prediction International Multi-Model Ensemble (NOAA)Writer/contact:Laura Snider, Senior Science Writer  

Two NCAR scientists honored by American Geophysical Union

BOULDER, Colo. — Martyn Clark, senior scientist at the National Center for Atmospheric Research (NCAR), will be honored next week as a Fellow of the American Geophysical Union (AGU) for his exceptional contribution to Earth science.

Clark is an expert in the numerical modeling and prediction of hydrologic processes. His current research includes developing new modeling methods to improve streamflow forecasts and to better understand climate change impacts on regional water resources. Clark, who grew up in Christchurch, New Zealand, has authored or co-authored 135 journal articles since receiving his Ph.D. from the University of Colorado in 1998.

NCAR Senior Scientist Martyn Clark (©UCAR. Photo by Carlye Calvin. This image is freely available for media & nonprofit use.)

"This well-deserved honor reflects Martyn's eminent work in the increasingly critical area of water-resource prediction and management," said NCAR Director James W. Hurrell.

Clark said he was delighted to see NCAR's hydrologic modeling recognized. "Hydrology is beginning to play a much stronger role in addressing important interdisciplinary science questions about Earth system change, such as how changes in the terrestrial water cycle affect biological productivity and how groundwater can buffer water stress in ecosystems and human societies. It's exciting to advance modeling capabilities in these areas."

Clark is among 60 individuals from eight countries recognized as Fellows this year; only one in a thousand AGU members receives this recognition in any given year. Nearly 40 percent of this year's Fellows are from the 110 member colleges and universities of the University Corporation for Atmospheric Research (UCAR), which manages NCAR. This year's class will be honored next Wednesday at the 2016 AGU Fall Meeting in San Francisco.

NCAR Senior Scientist Bette Otto-Bliesner (©UCAR. Photo by Carlye Calvin. This image is freely available for media & nonprofit use.)

NCAR Senior Scientist Bette Otto-Bliesner, who was named an AGU Fellow last year, is being honored by her peers in the Paleoceanography and Paleoclimatology Focus Group and the Ocean Sciences Section, who have invited her to give the 2016 Emiliani Lecture. She will speak next Wednesday at the AGU Fall Meeting on the topic of "Resolving Some Puzzles of Climate Evolution Since the Last Glacial Maximum: A Melding of Paleoclimate Modeling and Data."

AGU, dedicated to advancing Earth and space sciences for the benefit of society, is a not-for-profit professional organization representing 60,000 members in more than 140 countries.

High-res model captures explosive increase in hurricane strength

Nov. 1, 2016 | Last fall, Hurricane Patricia exploded from a Category 1 to a record-breaking Category 5 storm in just 24 hours.

Patricia's rapid intensification off the coast of Mexico blindsided forecasters, whose models vastly underestimated how strong the hurricane would become. Patricia — and more recently Hurricane Matthew, which also jumped from Category 1 to Category 5 in less than a day — highlight a weakness in predictive capabilities: while we've made great strides in forecasting a hurricane's track, forecasting its intensity remains a challenge.

New research using a sophisticated weather model based at the National Center for Atmospheric Research (NCAR) offers some clues about how these forecasts can be improved. The scientists — Ryder Fox, an undergraduate researcher at the New Mexico Institute of Mining and Technology, and Falko Judt, an NCAR postdoctoral researcher — found that an advanced version of the Weather Research and Forecasting model (WRF-ARW) could accurately forecast Hurricane Patricia's rapid intensification when run at a high enough resolution.

"Because Patricia was so out of bounds — the hurricane broke records for high wind speed and low pressure — we didn't think our model would actually be able to capture its peak intensity," Judt said. "The fact that the model nailed it took us by surprise."

Hurricane Patricia approaches the west coast of Mexico on Oct. 23, 2015. (Image courtesy NASA.)

Judt and Fox think the model's resolution was one important key to its success. The scientists ran WRF-ARW at a 1-kilometer (0.6-mile) resolution on the Yellowstone system at the NCAR-Wyoming Supercomputing Center. The models being used to actually forecast Patricia at the time had resolutions between 3 and 15 kilometers.

"Going to 1-kilometer resolution may be especially important for very strong storms, because they tend to have an eyewall that's really small," Judt said. "Patricia's eye was just 13 kilometers across at its most intense."
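Judt's point about eyewall size comes down to simple arithmetic: a model can only represent a feature that spans at least a few grid cells. Using the numbers from the story:

```python
# How many grid cells span Patricia's 13-km eye at each of the
# resolutions mentioned above?
eye_km = 13.0
for dx_km in (1.0, 3.0, 15.0):
    print(f"{dx_km:4.0f}-km grid: ~{eye_km / dx_km:.0f} cell(s) across the eye")
```

At 15 kilometers the eye occupies roughly a single cell and is effectively invisible to the model; at 1 kilometer it spans about 13 cells, enough for the model to resolve its structure.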
Still, the researchers caution that more simulations are needed to be sure the model's ability to capture Hurricane Patricia's intensity wasn't a fluke.

"We're not sure yet that, if we ran the same model for Hurricane Matthew, we would forecast that storm correctly," Judt said. "There are so many things that can go wrong with hurricane forecasting."

To address this uncertainty, Judt and Fox have begun running the model additional times, each with slightly tweaked starting conditions. The preliminary results show that while each model run is distinct, each one also captures the rapid intensification of the storm. This relative harmony among the ensemble of model runs suggests that WRF-ARW does a good job of reproducing the storm-friendly environmental conditions in which Patricia formed.

"The setup that nature created may have allowed for a storm to intensify no matter what," Judt said. "The sea surface was downright hot, the air was really moist, and the wind shear, at times, was virtually zero. It was a very ripe environment."

Fox began working with Judt through SOARS, the Significant Opportunities in Atmospheric Research program, which pairs young researchers with NCAR mentors. An undergraduate-to-graduate bridge program, SOARS is designed to broaden participation in the atmospheric and related sciences.

"The SOARS program means everything — not just to my ability to do this type of research, but also to grow as a scientist and to find my place within the scientific community," said Fox, who published the research results as an article in Physics Today.

Fox hopes the research on accurate modeling of Hurricane Patricia may lead to improved early warning systems that could help prevent loss of life. "My personal passion regarding severe weather research lies in improved early warning systems," Fox said, "which optimally lead to lower death counts."

Advanced computer model focuses on Hurricane Matthew

Oct. 6, 2016 | As Hurricane Matthew churns toward the southeastern U.S. coast, scientists at the National Center for Atmospheric Research (NCAR) are testing an advanced research computer model to see how well it can predict the powerful storm's track and intensity.

The Model for Prediction Across Scales (MPAS) uses an innovative software approach that allows scientists to focus on regional conditions while still capturing far-flung atmospheric processes that can influence the storm in question. This is a contrast to the forecast models typically used to track hurricanes today, which cannot simultaneously capture both global and local atmospheric processes.

The experimental MPAS model simulates Hurricane Matthew hitting the Southeast. To see a range of model output, visit the MPAS tropical cyclone website.

MPAS is able to do both because it uses a flexible mesh that allows it to zoom into higher resolution in some areas — over hurricane breeding grounds, for example — while zooming out over the rest of Earth. This ability to vary resolution across the globe requires a small fraction of the computer power needed to have high resolution everywhere.

By testing MPAS during hurricane season, the research team can determine the adjustments that need to be made to the model while gaining insights into how to improve hurricane forecasting in the future.

"This is an experimental effort," said Chris Davis, a senior scientist and director of NCAR's Mesoscale and Microscale Meteorology Laboratory. "We're doing this to see if we can find systematic biases in the model so we can improve simulations of the tropics in general and hurricanes in particular."

Davis and the other members of the research team, including NCAR scientists David Ahijevych, Sang-Hun Park, Bill Skamarock, and Wei Wang, are running MPAS once a day on NCAR's Yellowstone supercomputer, inputting various ocean and atmospheric conditions to see how it performs. The work is supported by the National Science Foundation and the Korea Institute of Science and Technology Information.

Even though they are just tests, Davis said, the MPAS simulations are often comparable with official forecast models such as those run by the National Hurricane Center and the European Centre for Medium-Range Weather Forecasts. As Matthew was in its early stages, in fact, MPAS did a better job than other models in simulating the northward movement of the storm from the Caribbean Sea toward the Florida coast.

The scientists will analyze how MPAS performed and share the results with colleagues in the meteorological community. It's a step in an ongoing research effort to better predict the formation and behavior of hurricanes.

"We run the model even when the tropics are quiet, but an event like Matthew gives us a special opportunity to see what contributes to errors in tropical cyclone prediction," Davis said. "While a major hurricane can have catastrophic impacts, we hope to learn from it and make computer models even better in the future."

Funders:
National Science Foundation
Korea Institute of Science and Technology Information

Writer/contact: David Hosansky, Manager of Media Relations

40 Earths: NCAR's Large Ensemble reveals staggering climate variability

Sept. 29, 2016 | Over the last century, Earth's climate has had its natural ups and downs. Against the backdrop of human-caused climate change, fluctuating atmosphere and ocean circulation patterns have caused the melting of Arctic sea ice to sometimes speed up and sometimes slow down, for example. And the back-and-forth formation of El Niño and La Niña events in the Pacific has caused some parts of the world to get wetter or drier while others get warmer or cooler, depending on the year.

But what if the sequence of variability that actually occurred over the last century was just one way that Earth's climate story could have plausibly unfolded? What if tiny — even imperceptible — changes in Earth's atmosphere had kicked off an entirely different sequence of naturally occurring climate events?

"It's the proverbial butterfly effect," said Clara Deser, a senior climate scientist at the National Center for Atmospheric Research (NCAR). "Could a butterfly flapping its wings in Mexico set off these little motions in the atmosphere that cascade into large-scale changes to atmospheric circulation?"

To explore the possible impact of minuscule perturbations to the climate — and gain a fuller understanding of the range of climate variability that could occur — Deser and her colleague Jennifer Kay, an assistant professor at the University of Colorado Boulder and an NCAR visiting scientist, led a project to run the NCAR-based Community Earth System Model (CESM) 40 times from 1920 forward to 2100. With each simulation, the scientists modified the model's starting conditions ever so slightly, adjusting the global atmospheric temperature by less than one-trillionth of one degree and touching off a unique and chaotic chain of climate events.

The result, called the CESM Large Ensemble, is a staggering display of Earth climates that could have been, along with a rich look at future climates that could potentially be.

"We gave the temperature in the atmosphere the tiniest tickle in the model — you could never measure it — and the resulting diversity of climate projections is astounding," Deser said. "It's been really eye-opening for people."

The dataset generated during the project, which is freely available, has already proven to be a tremendous resource for researchers across the globe who are interested in how natural climate variability and human-caused climate change interact. In a little over a year, about 100 peer-reviewed scientific journal articles have used data from the CESM Large Ensemble.

Winter temperature trends (in degrees Celsius) for North America between 1963 and 2012 for each of 30 members of the CESM Large Ensemble. The variations in warming and cooling among the 30 members illustrate the far-reaching effects of natural variability superimposed on human-induced climate change. The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change. The image at bottom right (OBS) shows actual observations from the same time period. By comparing the ensemble mean to the observations, the science team was able to parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change. Read the full study in the American Meteorological Society's Journal of Climate. (© 2016 AMS.)
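The design of the experiment can be imitated with a toy chaotic system. In the sketch below, a logistic map stands in for the atmosphere (purely an assumption for illustration; CESM solves the real equations of the Earth system), 40 copies start with differences of one part in a trillion, and a small prescribed drift plays the role of human-caused change. The copies diverge completely, yet the trend survives in the ensemble mean.

```python
# Toy analogue of the Large Ensemble, not CESM: a chaotic logistic map
# with 40 members whose starting states differ by only 1e-12, plus a
# small prescribed drift standing in for the forced climate trend.
import numpy as np

steps, members = 500, 40
forced_trend = 0.001                       # drift per step (illustrative)
states = 0.4 + 1e-12 * np.arange(members)  # imperceptibly different starts

ensemble = np.empty((steps, members))
for t in range(steps):
    states = 3.9 * states * (1.0 - states)   # chaotic "natural variability"
    ensemble[t] = states + forced_trend * t  # forced signal rides on top

print(f"member spread at the end: {ensemble[-1].std():.2f}")  # members fully diverged
slope = np.polyfit(np.arange(steps), ensemble.mean(axis=1), 1)[0]
print(f"imposed trend: {forced_trend}   trend in ensemble mean: {slope:.4f}")
```

The members end up scattered across the map's whole range, yet a fit to the ensemble mean recovers a slope close to the imposed trend: averaging over members cancels the internal variability while preserving the forced signal.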
A community effort

Running a complex climate model like CESM several dozen times takes a vast amount of computing resources, which makes such projects rare and difficult to pull off. With that in mind, Deser and Kay wanted to make sure that the data resulting from the Large Ensemble were as useful as possible. To do that, they queried scientists from across the community who might make use of the project results — oceanographers, geochemists, atmospheric scientists, biologists, socioeconomic researchers — about what they really wanted.

"It took a village to make this ensemble happen and for it to be useful to and usable by the broad climate community," Kay said. "The result is a large number of ensemble members, in a state-of-the-art climate model, with outputs asked for by the community, that is publicly available and relatively easy to access — it's no wonder it's getting so much use."

Scientists have so far relied on the CESM Large Ensemble to study everything from oxygen levels in the ocean to potential geoengineering scenarios to possible changes in the frequency of moisture-laden atmospheric rivers making landfall. In fact, so many researchers have found the Large Ensemble so useful that Kay and Deser were honored with the 2016 CESM Distinguished Achievement Award, which recognizes significant contributions to the climate modeling community.

The award citation noted the pair was chosen because "the Large Ensemble represents one of NCAR's most significant contributions to the U.S. climate research community. … At a scientific level, the utility of the Large Ensemble cannot be overstated."

The power of multiple runs: Looking forward — and backward

Clearly, the CESM Large Ensemble is useful for looking forward: What is the range of possible futures we might expect in the face of a changing climate? How much warmer will summers become? When will summer Arctic sea ice disappear? How will climate change affect ocean life?

But the Large Ensemble is also an extremely valuable tool for understanding our past. This vast storehouse of data helps scientists evaluate observations and put them in context: How unusual is a particular heat wave? Is a recent change in rainfall patterns the result of global warming, or could it be from solely natural causes?

With only a single model run, scientists are limited in what they can conclude when an observation doesn't match up with a model's projection. For example, if the Arctic sea ice extent were to expand even though the model projected a decline, what would that mean? Is the physics underlying the model wrong? Or does the model incorrectly capture the natural variability? In other words, if you ran the model more times, with slightly different starting conditions, would one of the runs correctly project the growth in sea ice?

The Large Ensemble helps answer that question. Armed with 40 different simulations, scientists can characterize the range of historic natural variability. With this information, they can determine whether observations fit within the envelope of natural variability outlined in the model, instead of comparing them to a single run.

Creating an envelope of what can be considered natural also makes it possible to see when the signal of human-caused climate change has pushed an observation beyond the natural variability. The Large Ensemble can also clarify the climate change "signal" in the model.
That's because averaging together the 40 ensemble members can effectively cancel out the natural variability — a La Niña in one model run might cancel out an El Niño in another, for example — leaving behind only the changes due to climate change.

"This new ability to separate natural internal variability from externally driven trends is absolutely critical for moving forward our understanding of climate and climate change," said Galen McKinley, a professor of atmospheric and oceanic sciences at the University of Wisconsin–Madison. McKinley used the Large Ensemble — which she called a "transformative tool" — to study changes in the ocean's ability to take up carbon dioxide in a warming climate.

The two components of the climate system

The CESM Large Ensemble is not the first ensemble of climate simulations, though it is perhaps the most comprehensive and widely used. Scientists have long understood that it makes sense to look at more than one model run. Frequently, however, they have done this by comparing simulations from different climate models, collectively called a multi-model ensemble.

This method gives a feel for the diversity of possible outcomes, but it doesn't allow researchers to determine why two model simulations differ: Is it because the models themselves represent the physics of the Earth system differently? Or is it because the models have different representations of natural variability or different sensitivities to changing carbon dioxide concentrations?

The Large Ensemble helps resolve this dilemma. Because each member is run using the same model, the differences between runs can be attributed to differences in natural variability alone. The Large Ensemble also offers context for comparing simulations in a multi-model ensemble. If the simulations appear to disagree about what the future may look like — but they still fit within the envelope of natural variability characterized by the Large Ensemble — that could be a clue that the models do not actually disagree on the fundamentals. Instead, they may just be representing different sequences of natural variability.

This ability to put model results in context is important not just for scientists but for policymakers, according to Noah Diffenbaugh, a climate scientist at Stanford University who has used the Large Ensemble in several studies, including one that looks at the contribution of climate change to the recent, severe California drought.

"It's pretty common for real-world decision makers to look at the different simulations from different models, and throw up their hands and say, 'These models don't agree so I can't make decisions,'" he said. "In reality, it may not be that the models are disagreeing. Instead, we may be seeing the actual uncertainty of the climate system. There is some amount of natural uncertainty that we can't reduce — that information is really important for making robust decisions, and the Large Ensemble is giving us a window that we haven't had before."

Deser agrees that it's important to communicate to the public that, in the climate system, there will always be this "irreducible" uncertainty.

"We're always going to have these two components to the climate system: human-induced changes and natural variability. You always have to take both into account," Deser said.
"In the future, it will all depend on how the human-induced component is either offset — or augmented — by the sequence of natural variability that unfolds."About the articleTitle: The Community Earth System Model (CESM) Large Ensemble Project: A Community Resource for Studying Climate Change in the Presence of Internal Climate VariabilityAuthors:  J. E. Kay, C. Deser, A. Phillips, A. Mai, C. Hannay, G. Strand, J. M. Arblaster, S. C. Bates, G. Danabasoglu, J. Edwards, M. Holland, P. Kushner, J.-F. Lamarque, D. Lawrence, K. Lindsay, A. Middleton, E. Munoz, R. Neale, K. Oleson, L. Polvani, and M. VertensteinJournal: Bulletin of the American Meteorological Society, DOI: 10.1175/BAMS-D-13-00255.1Funders: National Science FoundationU.S. Department of EnergyIn the news: Stories about research using the CESM Large EnsembleCauses of California drought linked to climate change, Stanford scientists sayStanford University (UCAR Member)The difficulty of predicting an ice-free ArcticUniversity of Colorado Boulder (UCAR Member)Widespread loss of ocean oxygen to become noticeable in 2030sNCARCornell Scientist Predicts Climate Change Will Prompt Earlier Spring Start DateCornell University (UCAR Member)The 2-degree goal and the question of geoengineeringNCAR New climate model better predicts changes to ocean-carbon sinkUniversity of Wisconsin Madison (UCAR Member)Future summers could regularly be hotter than the hottest on recordNCARExtreme-Weather Winters Becoming More CommonStanford (UCAR Member)More frequent extreme precipitation ahead for western North AmericaPacific Northwest National LaboratoryCloudy With A Chance of WarmingUniversity of Colorado Boulder (UCAR Member)Climate change already accelerating sea level rise, study finds NCARLess ice, more water in Arctic Ocean by 2050s, new CU-Boulder study findsUniversity of Colorado Boulder (UCAR Member)California 2100: More frequent and more severe droughts and floods likelyPacific Northwest National Laboratory Searing heat waves detailed in study of future climateNCAR Did climate change, El Nino make Texas floods worse?Utah State University (UCAR Member)Writer/contact:Laura Snider, Senior Science Writer and Public Information Officer

US taps NCAR technology for new water resources forecasts

BOULDER, Colo. — As the National Oceanic and Atmospheric Administration (NOAA) this month launches a comprehensive system for forecasting water resources in the United States, it is turning to technology developed by the National Center for Atmospheric Research (NCAR) and its university and agency collaborators.

WRF-Hydro, a powerful NCAR-based computer model, is the first nationwide operational system to provide continuous predictions of water levels and potential flooding in rivers and streams from coast to coast. NOAA's new Office of Water Prediction selected it last year as the core of the agency's new National Water Model.

"WRF-Hydro gives us a continuous picture of all of the waterways in the contiguous United States," said NCAR scientist David Gochis, who helped lead its development. "By generating detailed forecast guidance that is hours to weeks ahead, it will help officials make more informed decisions about reservoir levels and river navigation, as well as alerting them to dangerous events like flash floods."

WRF-Hydro (WRF stands for Weather Research and Forecasting) is part of a major Office of Water Prediction initiative to bolster U.S. capabilities in predicting and managing water resources. By teaming with NCAR and the research community, NOAA's National Water Center is developing a new national water intelligence capability, enabling better impacts-based forecasts for management and decision making.

The new WRF-Hydro computer model simulates streams and other aspects of the hydrologic system in far more detail than previously possible. (Image by NOAA Office of Water Prediction.)

Unlike past streamflow models, which provided forecasts every few hours and only for specific points along major river systems, WRF-Hydro provides continuous forecasts for millions of points along rivers, streams, and their tributaries across the contiguous United States. To accomplish this, it simulates the entire hydrologic system — including snowpack, soil moisture, local ponded water, and evapotranspiration — and rapidly generates output on some of the nation's most powerful supercomputers.

WRF-Hydro was developed in collaboration with NOAA and university and agency scientists through the Consortium of Universities for the Advancement of Hydrologic Science, the U.S. Geological Survey, the Israel Hydrologic Service, and Baron Advanced Meteorological Services. Funding came from NOAA, NASA, and the National Science Foundation, which is NCAR's sponsor.

"WRF-Hydro is a perfect example of the transition from research to operations," said Antonio (Tony) J. Busalacchi, president of the University Corporation for Atmospheric Research, which manages NCAR on behalf of the National Science Foundation (NSF). "It builds on the NSF investment in basic research in partnership with other agencies, helps to accelerate collaboration with the larger research community, and culminates in support of a mission agency such as NOAA. The use of WRF-Hydro in an operational setting will also allow for feedback from operations to research. In the end this is a win-win situation for all parties involved, chief among them the U.S. taxpayers."

"Through our partnership with NCAR and the academic and federal water community, we are bringing the state of the science in water forecasting and prediction to bear operationally," said Thomas Graziano, director of NOAA's new Office of Water Prediction at the National Weather Service.

Filling in the water picture

The continental United States has a vast network of rivers and streams, from major navigable waterways such as the Mississippi and Columbia to remote mountain brooks flowing from the high Adirondacks into the Hudson River. The levels and flow rates of these watercourses have far-reaching implications for water availability, water quality, and public safety.

Until now, however, it has not been possible to predict conditions at all points in the nation's waterways. Instead, computer models have produced a limited picture by incorporating observations from about 4,000 gauges, generally on the country's bigger rivers. Smaller streams and channels are largely left out of these forecast models, and stretches of major rivers tens of miles long are often not predicted — meaning that schools, bridges, and even entire towns can be vulnerable to unexpected changes in river levels.

To fill in the picture, NCAR scientists have worked for the past several years with their colleagues at NOAA, other federal agencies, and universities to combine a range of atmospheric, hydrologic, and soil data into a single forecasting system.

The resulting National Water Model, based on WRF-Hydro, simulates current and future conditions on rivers and streams at points two miles apart across the contiguous United States. Along with an hourly analysis of current hydrologic conditions, the National Water Model generates three predictions: an hourly 0- to 15-hour short-range forecast, a daily 0- to 10-day medium-range forecast, and a daily 0- to 30-day long-range water resource forecast.
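Collected into a small lookup table, the product suite described above looks like the following sketch (the structure and field names are our own illustration, not NOAA's actual configuration format):

```python
# The National Water Model's forecast products as described above;
# structure and field names are illustrative, not NOAA's own format.
national_water_model_products = {
    "analysis":     {"issued": "hourly", "coverage": "current conditions"},
    "short_range":  {"issued": "hourly", "coverage": "0 to 15 hours"},
    "medium_range": {"issued": "daily",  "coverage": "0 to 10 days"},
    "long_range":   {"issued": "daily",  "coverage": "0 to 30 days"},
}

for name, spec in national_water_model_products.items():
    print(f"{name:>12}: issued {spec['issued']}, covering {spec['coverage']}")
```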
The National Water Model predictions using WRF-Hydro offer a wide array of benefits for society. They will help local, state, and federal officials better manage reservoirs, improve navigation along major rivers, plan for droughts, anticipate water quality problems caused by lower flows, and monitor ecosystems for issues such as whether conditions are favorable for fish spawning. By providing a national view, the model will also help the Federal Emergency Management Agency deploy resources more effectively in cases of simultaneous emergencies, such as a hurricane in the Gulf Coast and flooding in California.

"We've never had such a comprehensive system before," Gochis said. "In some ways, the value of this is a blank page yet to be written."

A broad spectrum of observations

WRF-Hydro is a powerful forecasting system that incorporates advanced meteorological and streamflow observations, including data from nearly 8,000 U.S. Geological Survey streamflow gauges across the country. Using advanced mathematical techniques, the model simulates current and future conditions for millions of points on every significant river, stream, tributary, and catchment in the United States.

In time, scientists will add more observations to the model, including snowpack conditions, lake and reservoir levels, subsurface flows, soil moisture, and land-atmosphere interactions such as evapotranspiration, the process by which water in soil, plants, and other land surfaces evaporates into the atmosphere.

Over the last year, scientists have demonstrated the accuracy of WRF-Hydro by comparing its simulations to observations of streamflow, snowpack, and other variables. They will continue to assess and expand the system as the National Water Model begins operational forecasts.

NCAR scientists maintain and update the open-source code of WRF-Hydro, which is available to the academic community and others. WRF-Hydro is widely used by researchers, both to better understand water resources and floods in the United States and in other countries such as Norway, Germany, Romania, Turkey, and Israel, and to project the possible impacts of climate change.

"At any point in time, forecasts from the new National Water Model have the potential to impact 300 million people," Gochis said. "What NOAA and its collaborator community are doing is trying to usher in a new era of bringing better physics and better data into forecast models to improve situational awareness and hydrologic decision making."

Collaborators:
Baron Advanced Meteorological Services
Consortium of Universities for the Advancement of Hydrologic Science
Israel Hydrologic Service
National Center for Atmospheric Research
National Oceanic and Atmospheric Administration
U.S. Geological Survey

Funders:
National Science Foundation
National Aeronautics and Space Administration
National Oceanic and Atmospheric Administration

NCAR weather ensemble offers glimpse at forecasting's future

July 1, 2016 | Last spring, scientists at the National Center for Atmospheric Research (NCAR) flipped the switch on a first-of-its-kind weather forecasting system. For more than a year, NCAR's high-resolution, real-time ensemble forecasting system has been ingesting 50,000 to 70,000 observations every six hours and creating a whopping 90,000 weather maps each day.

The system has become a favorite among professional forecasters and casual weather wonks alike: typically more than 200 people check out the site each day, with more than a thousand coming during major weather events. During this experimental period, the NCAR ensemble has also become a popular source of guidance within the National Weather Service, where it has already been referenced several hundred times by forecasters at more than 50 different offices. But perhaps more important, the data accumulated from running the system daily — and there is lots of it — is being used by researchers at universities across the country to study a range of topics, from predicting hail size to anticipating power outages for utilities.

"We wanted to demonstrate that a real-time system of this scale was feasible," said NCAR scientist Craig Schwartz. "But it's also a research project that can help the community learn more about the predictability of different kinds of weather events."

Schwartz is a member of the team that designed and operates the system, along with NCAR colleagues Glen Romine, Ryan Sobash, and Kate Fossell.

This animation shows the forecast for accumulated snowfall made by each of the NCAR ensemble's 10 members for the 48-hour period beginning on Jan. 22, 2016. In the run-up to the blizzard, which ultimately dropped more than 30 inches of snow on parts of the Mid-Atlantic, more than 1,000 people visited the NCAR ensemble's website. (©UCAR. This animation is freely available for media & nonprofit use.)

Testing a unique tool

NCAR's high-resolution ensemble forecasting system is unique in the country for a couple of reasons, both of which are revealed in its name: it's an ensemble, and it's high resolution.

Instead of producing a single forecast, the system produces an "ensemble" of 10 forecasts, each with slightly different (but equally likely) starting conditions. The degree to which the forecasts look the same or different tells scientists something about the probability that a weather event, like rain, hail, or wind, will actually occur. By comparing the actual outcomes to the forecasted probabilities, scientists can study the predictability of particular weather events under different circumstances.
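The step from 10 forecasts to a probability is simple to sketch: count the members that predict the event. The snowfall values below are invented for illustration; only the 10-member structure mirrors the NCAR system.

```python
# Toy ensemble-to-probability step: the event probability is just the
# fraction of members forecasting it. Snowfall amounts are made up.
member_snowfall_in = [31.0, 28.5, 33.2, 24.1, 30.7, 27.9, 35.4, 22.8, 29.3, 32.6]

threshold_in = 30.0  # hypothetical heavy-snow criterion, inches
prob = sum(s >= threshold_in for s in member_snowfall_in) / len(member_snowfall_in)
print(f"P(snowfall >= {threshold_in:.0f} in) = {prob:.0%}")  # 5 of 10 members -> 50%
```

Tight agreement among members would push the probability toward 0 or 100 percent; a wide spread, as here, signals a more uncertain outcome.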
The forecasting system's high resolution (its grid points are just 3 kilometers apart) allows it to simulate small-scale weather phenomena, like the creation of individual storms from convection — the process of moist, warm air rising and then condensing into clouds. The combination of fine grid spacing and ensemble predictions in the NCAR system offers a sneak peek at what the future of weather forecasting might look like, and weather researchers across the country have noticed.

Cliff Mass, a professor of atmospheric sciences at the University of Washington whose specialty is forecasting, said: "It's extremely important for the United States to have a convection-allowing ensemble system to push our forecasting capabilities forward. We were delighted that NCAR demonstrated that this could be done."

'The cat's meow'

The treasure trove of accruing weather data generated by running the NCAR ensemble is already being used by researchers both at NCAR and in the broader community. Jim Steenburgh, for instance, is a researcher at the University of Utah who is using the system to understand the predictability of mountain snowstorms.

"NCAR's ensemble not only permits the 'formation' of clouds, it can also capture the topography of the western United States," he said. "The mountains control the weather to some degree, so you need to be able to resolve the mountains' effects on precipitation."

Steenburgh has also been using the ensemble with his students. "We're teaching the next generation of weather forecasters," he said. "In the future, these high-resolution ensemble forecasts will be the tools they need to use, and this gives them early, hands-on experience."

Like Steenburgh, Lance Bosart, an atmospheric researcher at the University at Albany, State University of New York, has used the ensemble both in his own research — studying the variability of convective events — and with his students. He said having 10 members in the ensemble forecast helps students easily see the great spread of possibilities, and the visual emphasis of the user interface makes it easy for students to absorb the information.

"What makes it an invaluable tool is the graphical display," he said. "It's visually compelling. You don't have to take a lot of time to explain what you're looking at; you can get right into explaining the science. I like to say it's the cat's meow."

Setting an example

The NCAR ensemble is also enabling the researchers running it to further their own research. "We're collecting statistics on the misfit between the model predictions and observations, and then we're trying to use that to improve our model physics," Romine said.

The ensemble project is also teaching the team about the strengths and weaknesses of the way they've chosen to kick off, or "initialize," each of the ensemble members. "The NCAR ensemble happens to produce a pretty good forecast, but we realize there are some shortcomings," Schwartz said. "For example, if we were trying to make the best forecast in the world, we would probably not be initializing the model the way we are. But then we wouldn't learn as much from a research perspective."

The NCAR ensemble began as a yearlong trial, but the project is continuing to run for now. The team would like to keep the system online until next summer, but they don't yet have the computing resources they need to run it past September.

If the system does continue to run, the researchers who are using it say there's still more that they and their students can learn from the project. And if not, there are loads of data already collected that are still waiting to be mined. In any case, Mass says NCAR's ensemble has been a valuable project. "It set a really good example for the nation," he said.

Community members interested in collaborating or helping support the NCAR ensemble project are encouraged to contact the team at ensemble@ucar.edu.

Writer/contact: Laura Snider, Senior Science Writer and Public Information Officer

Climate modeling 101: Explanations without equations

A new book breaks down climate models into easy-to-understand concepts. (Photo courtesy Springer.)

June 21, 2016 | Climate scientists tell us it's going to get hotter. How much it rains, and where, is likely to shift. Sea level rise is apt to accelerate. Oceans are on their way to becoming more acidic and less oxygenated. Floods, droughts, storms, and other extreme weather events are projected to change in frequency or intensity.

But how do they know what they know? For climate scientists, numerical models are the tools of the trade. But for the layperson — and even for scientists in other fields — climate models can seem mysterious. What does "numerical" even mean? Do climate models take other things besides the atmosphere into account? How do scientists know if a model is any good?*

Two experts in climate modeling, Andrew Gettelman of the National Center for Atmospheric Research and Richard Rood of the University of Michigan, have your answers and more, free of charge. In a new open-access book, "Demystifying Climate Models," the pair lay out the fundamentals. In 282 pages, the scientists explain the basics of climate science, how that science is translated into a climate model, and what those models can tell us (as well as what they can't) — all without using a single equation.

*Find the answers on pages 8, 13, and 161, respectively, of the book.

AtmosNews sat down with Gettelman to learn more about the book, which anyone can download at http://www.demystifyingclimate.org.

NCAR scientist Andrew Gettelman has written a new book on climate modeling with Richard Rood of the University of Michigan. (Courtesy photo. This image is freely available for media & nonprofit use.)

What was the motivation to write this book?

There isn't really another book that sets out the philosophy and structure of models. There are textbooks, but inside you'll find a lot of physics and chemistry: information about momentum equations, turbulent fluxes — which is useful if you want to build your own model. And then there are books on climate change for the layperson, and they devote maybe a paragraph to climate modeling. There's not much in the middle. This book provides an introduction for the beginning grad student, or someone in another field who is interested in using model output, or anyone who is just curious how climate works and how we simulate it.

What are some of the biggest misperceptions about climate models that you hear?

One is that people say climate models are based on uncertain science. But that's not true at all. If we didn't know the science, my cellphone wouldn't work. Radios wouldn't work. GPS wouldn't work. That's because the energy that warms the Earth, which radiates from the Sun and is absorbed and re-emitted by Earth's surface — and also by greenhouse gases in the atmosphere — is part of the same spectrum of radiation that makes up radio waves. If we didn't understand electromagnetic waves, we couldn't have created the technology we rely on today. The same is true for the science that underlies other aspects of climate models. (Learn more on page 38 of the book.)

But we don't understand everything, right?

We have understood the basic physics for hundreds of years. The last piece of it, the discovery that carbon dioxide warms the atmosphere, was put in place in the late 19th and early 20th centuries. Everything else — the laws of motion, the laws of thermodynamics — was all worked out between the 17th and 19th centuries. (Learn more on page 39 of the book.)
We do still have uncertainty in our modeling systems. A big part of this book is about how scientists understand that uncertainty and actually embrace it as part of their work. If you know what you don't know, and why, you can use that to better understand the whole climate system.

Can we ever eliminate the uncertainty?

Not entirely. In our book, we break uncertainty down into three categories: model uncertainty (How good are the models at reflecting how the Earth really works?), initial condition uncertainty (How well do we understand what the Earth system looks like right now?), and scenario uncertainty (What will future emissions look like?).

To better understand these, it might help to think about the uncertainty that would be involved if you had a computer model that could simulate making a pizza. Instead of trying to figure out what Earth's climate would look like in 50 or 100 years, this model would predict what your pizza would look like when it was done.

The first thing you want to know is how well the model reflects the reality of how a pizza is made. For example, does the model take into account all the ingredients you need to make the pizza, and how they will each evolve? The cheese melts, the dough rises, and the pepperoni shrinks. How well can the model approximate each of those processes? This is model uncertainty.

The second thing you'd want to know is whether you can input all the pizza's "initial conditions" into the model. Some initial conditions — like how many pepperoni slices are on the pizza and where — are easy to observe, but others are not. For example, kneading the pizza dough creates small pockets of air, but you don't know exactly where they are. When the dough is heated, the air expands and forms big bubbles in the crust. If you can't tell the model where the air pockets are, it can't accurately predict where the crust bubbles will form when the pizza is baked. The same is true for a climate model. Some parts of the Earth, like the deep oceans and the polar regions, are not easy to observe in enough detail, leaving scientists to estimate what the conditions there are like and leading to the second type of uncertainty in the model results.

Finally, the pizza-baking model also has to deal with "scenario uncertainty," because it doesn't know how long the person baking the pizza will keep it in the oven, or at what temperature. Without understanding the choices the human will make, the model can't say for sure whether the dough will be soft, crispy, or burnt. With climate models, over long periods of time, like a century, we've found that this scenario uncertainty is actually the dominant one. In other words, we don't know how much carbon dioxide humans around the world are going to emit in the years and decades to come, and it turns out that that's what matters most. (Learn more about uncertainty on page 10 of the book.)

Any other misperceptions you frequently hear?

People always say, "If we can't predict the weather next week, how can we know what the climate will be like in 50 years?" Generally speaking, we can't perfectly predict the weather because we don't have a full understanding of all the current conditions. We don't have observations for every grid point on a weather model, or for large parts of the ocean, for example. But climate is not concerned with the exact weather on a particular day 50 or 100 years from now. Climate is the statistical distribution of weather, not a particular point on that distribution. Climate prediction is focused on the statistics of this distribution, which are governed by conservation of energy and mass on long time scales — something we do understand. (Learn more on page 6 of the book. Read more common misperceptions at http://www.demystifyingclimate.org/misperceptions.)
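That distinction between a single weather trajectory and the statistics of many days can be demonstrated with any chaotic system. In this toy sketch (a logistic map standing in for the atmosphere, an assumption made purely for illustration), two runs that begin almost identically disagree step by step within a few dozen iterations, yet their long-run statistics, the "climate," agree closely.

```python
# Toy weather-vs-climate demo: chaotic day-to-day values are unpredictable,
# but their long-run statistics are stable. Not a climate model.
import statistics

def run(x0, n=100_000, r=3.9):
    x, xs = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)  # chaotic logistic map update
        xs.append(x)
    return xs

a, b = run(0.5), run(0.5 + 1e-9)  # nearly identical starting conditions
diverged = next(i for i, (u, v) in enumerate(zip(a, b)) if abs(u - v) > 0.5)
print(f"'weather' (step-by-step values) differs sharply by step {diverged}")
print(f"'climate' (long-run means): {statistics.fmean(a):.3f} vs {statistics.fmean(b):.3f}")
```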
Did you learn anything about climate modeling while working on the book?

My background is the atmosphere. I sat down and wrote the whole section on the atmosphere in practically one sitting. But I had to learn about the other aspects of models, the ocean and the land, which work really differently. The atmosphere has only one boundary, a bottom boundary. We just have to worry about how it interacts with mountains and other bumps on the surface. But the ocean has three hard boundaries: the bottom and the sides, like a giant rough bathtub. It also has a boundary with the atmosphere on the top. Those boundaries really change how the ocean moves. And the land is completely different because it doesn't move at all. Writing this book really gave me a new appreciation for some of the subtleties of other parts of the Earth system and the ways my colleagues model them. (Learn more on page 13 of the book.)

What was the most fun part of writing the book for you?

Having to force myself to think in terms of analogies that are understandable to a variety of people. I can describe a model using a whole bunch of words most people don't use every day, like "flux." It was a fun challenge to come up with words that would accurately describe the models and the science but that were accessible to everyone.

Pages

Subscribe to Computer Modeling