Computer Modeling

Two NCAR scientists honored by American Geophysical Union

BOULDER, Colo. — Martyn Clark, senior scientist at the National Center for Atmospheric Research (NCAR), will be honored next week as a Fellow of the American Geophysical Union (AGU) for his exceptional contributions to Earth science.

Clark is an expert in the numerical modeling and prediction of hydrologic processes. His current research includes developing new modeling methods to improve streamflow forecasts and better understand climate change impacts on regional water resources. Clark, who grew up in Christchurch, New Zealand, has authored or co-authored 135 journal articles since receiving his Ph.D. from the University of Colorado in 1998.

(Photo: NCAR Senior Scientist Martyn Clark. ©UCAR. Photo by Carlye Calvin. This image is freely available for media & nonprofit use.)

"This well-deserved honor reflects Martyn's eminent work in the increasingly critical area of water-resource prediction and management," said NCAR Director James W. Hurrell.

Clark said he was delighted to see NCAR's hydrologic modeling recognized. "Hydrology is beginning to play a much stronger role in addressing important interdisciplinary science questions about Earth system change, such as how changes in the terrestrial water cycle affect biological productivity and how groundwater can buffer water stress in ecosystems and human societies. It's exciting to advance modeling capabilities in these areas."

Clark is among 60 individuals from eight countries recognized as Fellows this year; only about one in a thousand AGU members receives this recognition in any given year. Nearly 40 percent of this year's Fellows are from the 110 member colleges and universities of the University Corporation for Atmospheric Research (UCAR), which manages NCAR. This year's class will be honored next Wednesday at the 2016 AGU Fall Meeting in San Francisco.

(Photo: NCAR Senior Scientist Bette Otto-Bliesner. ©UCAR. Photo by Carlye Calvin. This image is freely available for media & nonprofit use.)

NCAR Senior Scientist Bette Otto-Bliesner, who was named an AGU Fellow last year, is being honored by her peers in the Paleoceanography and Paleoclimatology Focus Group and the Ocean Sciences Section, which have asked her to give the 2016 Emiliani Lecture. She will deliver the lecture next Wednesday at the AGU Fall Meeting on the topic of "Resolving Some Puzzles of Climate Evolution Since the Last Glacial Maximum: A Melding of Paleoclimate Modeling and Data."

The AGU, dedicated to advancing Earth and space sciences for the benefit of society, is a not-for-profit professional organization representing 60,000 members in more than 140 countries.

High-res model captures explosive increase in hurricane strength

Nov. 1, 2016 | Last fall, Hurricane Patricia exploded from a Category 1 to a record-breaking Category 5 storm in just 24 hours.

Patricia's rapid intensification off the coast of Mexico blindsided forecasters, whose models vastly underestimated how strong the hurricane would become. Patricia — and more recently Hurricane Matthew, which also jumped from Category 1 to Category 5 in less than a day — highlight a weakness in predictive capabilities: While we've made great strides in forecasting a hurricane's track, forecasting its intensity remains a challenge.

New research using a sophisticated weather model based at the National Center for Atmospheric Research (NCAR) offers some clues about how these forecasts can be improved. The scientists — Ryder Fox, an undergraduate researcher at the New Mexico Institute of Mining and Technology, and Falko Judt, an NCAR postdoctoral researcher — found that an advanced version of the Weather Research and Forecasting model (WRF-ARW) could accurately forecast Hurricane Patricia's rapid intensification when run at a high enough resolution.

"Because Patricia was so out of bounds — the hurricane broke records for high wind speed and low pressure — we didn't think our model would actually be able to capture its peak intensity," Judt said. "The fact that the model nailed it took us by surprise."

(Image: Hurricane Patricia approaches the west coast of Mexico on Oct. 23, 2015. Image courtesy NASA.)

Judt and Fox think that the model's resolution was one important key to its success. The scientists ran WRF-ARW with a 1-kilometer (0.6-mile) resolution on the Yellowstone system at the NCAR-Wyoming Supercomputing Center. The models being used to actually forecast Patricia at the time had resolutions between 3 and 15 kilometers.

"Going to 1-kilometer resolution may be especially important for very strong storms, because they tend to have an eyewall that's really small," Judt said. "Patricia's eye was just 13 kilometers across at its most intense."

Still, the researchers caution that more simulations are needed to be sure that the model's ability to capture Hurricane Patricia's intensity wasn't a fluke. "We're not sure yet that, if we ran the same model for Hurricane Matthew, we would forecast that storm correctly," Judt said. "There are so many things that can go wrong with hurricane forecasting."

To address this uncertainty, Judt and Fox have begun running the model additional times, each with slightly tweaked starting conditions. The preliminary results show that while each model run is distinct, each one also captures the rapid intensification of the storm. This relative harmony among the ensemble of model runs suggests that WRF-ARW does a good job of reproducing the storm-friendly environmental conditions in which Patricia formed.

"The setup that nature created may have allowed for a storm to intensify no matter what," Judt said. "The sea surface was downright hot, the air was really moist, and the wind shear, at times, was virtually zero. It was a very ripe environment."

Fox began working with Judt through SOARS, the Significant Opportunities in Atmospheric Research program, which pairs young researchers with NCAR mentors.
An undergraduate-to-graduate bridge program, SOARS is designed to broaden participation in the atmospheric and related sciences.

"The SOARS program means everything — not just to my ability to do this type of research, but also to grow as a scientist and to find my place within the scientific community," said Fox, who published the research results as an article in Physics Today.

Fox hopes the research on accurate modeling of Hurricane Patricia may lead to improved early warning systems that could help prevent loss of life. "My personal passion regarding severe weather research lies in improved early warning systems," Fox said, "which optimally lead to lower death counts."
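The ensemble experiment described above (rerunning the model with slightly tweaked starting conditions and checking whether each run still captures the rapid intensification) can be illustrated with a short sketch. The 30-knot-in-24-hours threshold below is the conventional forecasters' definition of rapid intensification; the wind time series, helper function, and member count are hypothetical stand-ins, not output from the actual WRF-ARW runs.

```python
import numpy as np

RI_THRESHOLD_KT = 30.0   # conventional rapid-intensification threshold: +30 kt within 24 h
WINDOW_HOURS = 24

def rapid_intensification(max_wind_kt, dt_hours=3):
    """Return True if max sustained wind rises by >= 30 kt over any 24-hour window."""
    steps = int(WINDOW_HOURS / dt_hours)
    wind = np.asarray(max_wind_kt, dtype=float)
    gains = wind[steps:] - wind[:-steps]          # wind change over each 24-h window
    return bool(np.any(gains >= RI_THRESHOLD_KT))

# Hypothetical ensemble: each member is a max-wind time series (kt) every 3 hours,
# produced by perturbing the starting state slightly (mimicked here with random noise).
rng = np.random.default_rng(0)
base = np.linspace(75, 185, 25)                    # Category 1 to Category 5 over ~3 days
members = [base + rng.normal(0, 5, base.size) for _ in range(10)]

hits = sum(rapid_intensification(m) for m in members)
print(f"{hits} of {len(members)} members capture rapid intensification")
```

If every member intensifies rapidly despite the perturbed starting conditions, that points, as the article describes, toward an environment primed for intensification rather than a lucky single run.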

Advanced computer model focuses on Hurricane Matthew

Oct. 6, 2016 | As Hurricane Matthew churns toward the southeastern U.S. coast, scientists at the National Center for Atmospheric Research (NCAR) are testing an advanced research computer model to see how well it can predict the powerful storm's track and intensity.

The Model for Prediction Across Scales (MPAS) uses an innovative software approach that allows scientists to focus on regional conditions while still capturing far-flung atmospheric processes that can influence the storm in question. This is a contrast to the forecast models typically used to track hurricanes today, which cannot simultaneously capture both global and local atmospheric processes.

(Image: The experimental MPAS model simulates Hurricane Matthew hitting the Southeast. To see a range of model output, visit the MPAS tropical cyclone website.)

MPAS is able to do both because it uses a flexible mesh that allows it to zoom into higher resolution in some areas — over hurricane breeding grounds, for example — while zooming out over the rest of Earth. This ability to vary resolution across the globe requires a small fraction of the computer power needed to run at high resolution everywhere.

By testing MPAS during hurricane season, the research team can determine the adjustments that need to be made to the model while gaining insights into how to improve hurricane forecasting in the future. "This is an experimental effort," said Chris Davis, a senior scientist and director of NCAR's Mesoscale and Microscale Meteorology Laboratory. "We're doing this to see if we can find systematic biases in the model so we can improve simulations of the tropics in general and hurricanes in particular."

Davis and the other members of the research team, including NCAR scientists David Ahijevych, Sang-Hun Park, Bill Skamarock, and Wei Wang, are running MPAS once a day on NCAR's Yellowstone supercomputer, inputting various ocean and atmospheric conditions to see how it performs. The work is supported by the National Science Foundation and the Korea Institute of Science and Technology Information.

Even though they are just tests, Davis said the MPAS simulations are often comparable with official forecast models such as those run by the National Hurricane Center and the European Centre for Medium-Range Weather Forecasts. As Matthew was in its early stages, in fact, MPAS did a better job than other models in simulating the northward movement of the storm from the Caribbean Sea toward the Florida coast.

The scientists will analyze how MPAS performed and share the results with colleagues in the meteorological community. It's a step in an ongoing research effort to better predict the formation and behavior of hurricanes. "We run the model even when the tropics are quiet, but an event like Matthew gives us a special opportunity to see what contributes to errors in tropical cyclone prediction," Davis said. "While a major hurricane can have catastrophic impacts, we hope to learn from it and make computer models even better in the future."

Funders:
National Science Foundation
Korea Institute of Science and Technology Information

Writer/contact:
David Hosansky, Manager of Media Relations
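A rough back-of-the-envelope calculation suggests why a variable-resolution mesh needs only a small fraction of the computing power of uniform high resolution. The numbers below (a 3-kilometer mesh over an assumed refined region roughly the size of a hurricane basin, 15-kilometer cells elsewhere, and cell count used as a crude proxy for cost) are illustrative assumptions, not the configuration the NCAR team actually ran.

```python
# Illustrative cell-count comparison: uniform high-resolution mesh vs. variable-resolution mesh.
# All numbers are assumptions made for the sake of the estimate.

EARTH_AREA_KM2 = 5.1e8          # total surface area of Earth
REFINED_AREA_KM2 = 2.0e7        # assumed refined region (roughly a hurricane basin)
FINE_KM, COARSE_KM = 3.0, 15.0  # assumed fine and coarse cell spacings

def cells(area_km2, spacing_km):
    """Approximate number of mesh cells needed to tile an area at a given spacing."""
    return area_km2 / spacing_km**2

uniform_fine = cells(EARTH_AREA_KM2, FINE_KM)
variable = cells(REFINED_AREA_KM2, FINE_KM) + cells(EARTH_AREA_KM2 - REFINED_AREA_KM2, COARSE_KM)

print(f"uniform 3-km mesh : {uniform_fine:,.0f} cells")
print(f"variable-res mesh : {variable:,.0f} cells "
      f"(~{variable / uniform_fine:.0%} of the uniform cell count)")
```

Under these assumptions the variable mesh needs well under a tenth of the cells of a globally uniform 3-kilometer mesh, which is the basic reason the approach is affordable for daily experimental runs.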

40 Earths: NCAR's Large Ensemble reveals staggering climate variability

Sept. 29, 2016 | Over the last century, Earth's climate has had its natural ups and downs. Against the backdrop of human-caused climate change, fluctuating atmosphere and ocean circulation patterns have caused the melting of Arctic sea ice to sometimes speed up and sometimes slow down, for example. And the back-and-forth formation of El Niño and La Niña events in the Pacific has caused some parts of the world to get wetter or drier while some parts get warmer or cooler, depending on the year.

But what if the sequence of variability that actually occurred over the last century was just one way that Earth's climate story could have plausibly unfolded? What if tiny — even imperceptible — changes in Earth's atmosphere had kicked off an entirely different sequence of naturally occurring climate events?

"It's the proverbial butterfly effect," said Clara Deser, a senior climate scientist at the National Center for Atmospheric Research (NCAR). "Could a butterfly flapping its wings in Mexico set off these little motions in the atmosphere that cascade into large-scale changes to atmospheric circulation?"

To explore the possible impact of minuscule perturbations to the climate — and gain a fuller understanding of the range of climate variability that could occur — Deser and her colleague Jennifer Kay, an assistant professor at the University of Colorado Boulder and an NCAR visiting scientist, led a project to run the NCAR-based Community Earth System Model (CESM) 40 times from 1920 forward to 2100. With each simulation, the scientists modified the model's starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree, touching off a unique and chaotic chain of climate events.

The result, called the CESM Large Ensemble, is a staggering display of Earth climates that could have been, along with a rich look at future climates that could potentially be. "We gave the temperature in the atmosphere the tiniest tickle in the model — you could never measure it — and the resulting diversity of climate projections is astounding," Deser said. "It's been really eye-opening for people."

The dataset generated during the project, which is freely available, has already proven to be a tremendous resource for researchers across the globe who are interested in how natural climate variability and human-caused climate change interact. In a little over a year, about 100 peer-reviewed scientific journal articles have used data from the CESM Large Ensemble.

(Image: Winter temperature trends (in degrees Celsius) for North America between 1963 and 2012 for each of 30 members of the CESM Large Ensemble. The variations in warming and cooling among the 30 members illustrate the far-reaching effects of natural variability superimposed on human-induced climate change. The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change. The image at bottom right (OBS) shows actual observations from the same time period. By comparing the ensemble mean to the observations, the science team was able to parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change. Read the full study in the American Meteorological Society's Journal of Climate. © 2016 AMS.)
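The logic behind the figure above, averaging the members to isolate the forced warming trend and treating the spread of the members as an envelope of natural variability, can be sketched in a few lines. The arrays and numbers here are invented stand-ins, not actual CESM Large Ensemble output.

```python
import numpy as np

# Stand-in for Large Ensemble output: one 50-year winter temperature trend (deg C)
# per ensemble member. Real members share the same forced warming but differ in
# their sequence of natural variability; that is mimicked here with random noise.
rng = np.random.default_rng(42)
forced_trend = 1.0                                   # assumed forced warming (deg C per 50 yr)
member_trends = forced_trend + rng.normal(0.0, 0.6, size=40)

ensemble_mean = member_trends.mean()                 # natural variability largely cancels out
natural_envelope = np.percentile(member_trends, [2.5, 97.5])

observed_trend = 0.4                                 # stand-in "observed" trend for comparison
inside = natural_envelope[0] <= observed_trend <= natural_envelope[1]

print(f"forced signal (ensemble mean): {ensemble_mean:+.2f} deg C")
print(f"envelope of natural variability: {natural_envelope[0]:+.2f} to {natural_envelope[1]:+.2f} deg C")
print("observation falls within the envelope" if inside
      else "observation lies outside natural variability")
```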
A community effort

Running a complex climate model like the CESM several dozen times takes a vast amount of computing resources, which makes such projects rare and difficult to pull off. With that in mind, Deser and Kay wanted to make sure that the data resulting from the Large Ensemble were as useful as possible. To do that, they queried scientists from across the community who might make use of the project results — oceanographers, geochemists, atmospheric scientists, biologists, socioeconomic researchers — about what they really wanted.

"It took a village to make this ensemble happen and for it to be useful to and usable by the broad climate community," Kay said. "The result is a large number of ensemble members, in a state-of-the-art climate model, with outputs asked for by the community, that is publicly available and relatively easy to access — it's no wonder it's getting so much use."

Scientists have so far relied on the CESM Large Ensemble to study everything from oxygen levels in the ocean to potential geoengineering scenarios to possible changes in the frequency of moisture-laden atmospheric rivers making landfall. In fact, so many researchers have found the Large Ensemble so useful that Kay and Deser were honored with the 2016 CESM Distinguished Achievement Award, which recognizes significant contributions to the climate modeling community. The award citation noted the pair was chosen because "the Large Ensemble represents one of NCAR's most significant contributions to the U.S. climate research community. … At a scientific level, the utility of the Large Ensemble cannot be overstated."

The power of multiple runs: Looking forward — and backward

Clearly, the CESM Large Ensemble is useful for looking forward: What is the range of possible futures we might expect in the face of a changing climate? How much warmer will summers become? When will summer Arctic sea ice disappear? How will climate change affect ocean life?

But the Large Ensemble is also an extremely valuable tool for understanding our past. This vast storehouse of data helps scientists evaluate observations and put them in context: How unusual is a particular heat wave? Is a recent change in rainfall patterns the result of global warming, or could it be from solely natural causes?

With only a single model run, scientists are limited in what they can conclude when an observation doesn't match up with a model's projection. For example, if the Arctic sea ice extent were to expand, even though the model projected a decline, what would that mean? Is the physics underlying the model wrong? Or does the model incorrectly capture the natural variability? In other words, if you ran the model more times, with slightly different starting conditions, would one of the model runs correctly project the growth in sea ice?

The Large Ensemble helps answer that question. Armed with 40 different simulations, scientists can characterize the range of historic natural variability. With this information, they can determine whether observations fit within the envelope of natural variability outlined in the model, instead of comparing them to a single run. Creating an envelope of what can be considered natural also makes it possible to see when the signal of human-caused climate change has pushed an observation beyond the natural variability.

The Large Ensemble can also clarify the climate change "signal" in the model. That's because averaging together the 40 ensemble members can effectively cancel out the natural variability — a La Niña in one model run might cancel out an El Niño in another, for example — leaving behind only changes due to climate change.

"This new ability to separate natural internal variability from externally driven trends is absolutely critical for moving forward our understanding of climate and climate change," said Galen McKinley, a professor of atmospheric and oceanic sciences at the University of Wisconsin–Madison. McKinley used the Large Ensemble — which she called a "transformative tool" — to study changes in the ocean's ability to take up carbon dioxide in a warming climate.

The two components of the climate system

The CESM Large Ensemble is not the first ensemble of climate simulations, though it is perhaps the most comprehensive and widely used. Scientists have long understood that it makes sense to look at more than one model run. Frequently, however, scientists have done this by comparing simulations from different climate models, collectively called a multi-model ensemble. This method gives a feel for the diversity of possible outcomes, but it doesn't allow researchers to determine why two model simulations might differ: Is it because the models themselves represent the physics of the Earth system differently? Or is it because the models have different representations of the natural variability or different sensitivities to changing carbon dioxide concentrations?

The Large Ensemble helps resolve this dilemma. Because each member is run using the same model, the differences between runs can be attributed to differences in natural variability alone. The Large Ensemble also offers context for comparing simulations in a multi-model ensemble. If the simulations appear to disagree about what the future may look like — but they still fit within the envelope of natural variability characterized by the Large Ensemble — that could be a clue that the models do not actually disagree on the fundamentals. Instead, they may just be representing different sequences of natural variability.

This ability to put model results in context is important, not just for scientists but for policy makers, according to Noah Diffenbaugh, a climate scientist at Stanford University who has used the Large Ensemble in several studies, including one that looks at the contribution of climate change to the recent, severe California drought.

"It's pretty common for real-world decision makers to look at the different simulations from different models, and throw up their hands and say, 'These models don't agree, so I can't make decisions,'" he said. "In reality, it may not be that the models are disagreeing. Instead, we may be seeing the actual uncertainty of the climate system. There is some amount of natural uncertainty that we can't reduce — that information is really important for making robust decisions, and the Large Ensemble is giving us a window that we haven't had before."

Deser agrees that it's important to communicate to the public that, in the climate system, there will always be this "irreducible" uncertainty. "We're always going to have these two components to the climate system: human-induced changes and natural variability. You always have to take both into account," Deser said. "In the future, it will all depend on how the human-induced component is either offset — or augmented — by the sequence of natural variability that unfolds."

About the article

Title: The Community Earth System Model (CESM) Large Ensemble Project: A Community Resource for Studying Climate Change in the Presence of Internal Climate Variability

Authors: J. E. Kay, C. Deser, A. Phillips, A. Mai, C. Hannay, G. Strand, J. M. Arblaster, S. C. Bates, G. Danabasoglu, J. Edwards, M. Holland, P. Kushner, J.-F. Lamarque, D. Lawrence, K. Lindsay, A. Middleton, E. Munoz, R. Neale, K. Oleson, L. Polvani, and M. Vertenstein

Journal: Bulletin of the American Meteorological Society, DOI: 10.1175/BAMS-D-13-00255.1

Funders:
National Science Foundation
U.S. Department of Energy

In the news: Stories about research using the CESM Large Ensemble
Causes of California drought linked to climate change, Stanford scientists say (Stanford University, UCAR Member)
The difficulty of predicting an ice-free Arctic (University of Colorado Boulder, UCAR Member)
Widespread loss of ocean oxygen to become noticeable in 2030s (NCAR)
Cornell Scientist Predicts Climate Change Will Prompt Earlier Spring Start Date (Cornell University, UCAR Member)
The 2-degree goal and the question of geoengineering (NCAR)
New climate model better predicts changes to ocean-carbon sink (University of Wisconsin–Madison, UCAR Member)
Future summers could regularly be hotter than the hottest on record (NCAR)
Extreme-Weather Winters Becoming More Common (Stanford University, UCAR Member)
More frequent extreme precipitation ahead for western North America (Pacific Northwest National Laboratory)
Cloudy With A Chance of Warming (University of Colorado Boulder, UCAR Member)
Climate change already accelerating sea level rise, study finds (NCAR)
Less ice, more water in Arctic Ocean by 2050s, new CU-Boulder study finds (University of Colorado Boulder, UCAR Member)
California 2100: More frequent and more severe droughts and floods likely (Pacific Northwest National Laboratory)
Searing heat waves detailed in study of future climate (NCAR)
Did climate change, El Nino make Texas floods worse? (Utah State University, UCAR Member)

Writer/contact:
Laura Snider, Senior Science Writer and Public Information Officer

US taps NCAR technology for new water resources forecasts

BOULDER, Colo. — As the National Oceanic and Atmospheric Administration (NOAA) this month launches a comprehensive system for forecasting water resources in the United States, it is turning to technology developed by the National Center for Atmospheric Research (NCAR) and its university and agency collaborators.

WRF-Hydro, a powerful NCAR-based computer model, is the first nationwide operational system to provide continuous predictions of water levels and potential flooding in rivers and streams from coast to coast. NOAA's new Office of Water Prediction selected it last year as the core of the agency's new National Water Model.

"WRF-Hydro gives us a continuous picture of all of the waterways in the contiguous United States," said NCAR scientist David Gochis, who helped lead its development. "By generating detailed forecast guidance that is hours to weeks ahead, it will help officials make more informed decisions about reservoir levels and river navigation, as well as alerting them to dangerous events like flash floods."

WRF-Hydro (WRF stands for Weather Research and Forecasting) is part of a major Office of Water Prediction initiative to bolster U.S. capabilities in predicting and managing water resources. By teaming with NCAR and the research community, NOAA's National Water Center is developing a new national water intelligence capability, enabling better impacts-based forecasts for management and decision making.

(Image: The new WRF-Hydro computer model simulates streams and other aspects of the hydrologic system in far more detail than previously possible. Image by NOAA Office of Water Prediction.)

Unlike past streamflow models, which provided forecasts every few hours and only for specific points along major river systems, WRF-Hydro provides continuous forecasts for millions of points along rivers, streams, and their tributaries across the contiguous United States. To accomplish this, it simulates the entire hydrologic system — including snowpack, soil moisture, local ponded water, and evapotranspiration — and rapidly generates output on some of the nation's most powerful supercomputers.

WRF-Hydro was developed in collaboration with NOAA and university and agency scientists through the Consortium of Universities for the Advancement of Hydrologic Science, the U.S. Geological Survey, the Israel Hydrologic Service, and Baron Advanced Meteorological Services. Funding came from NOAA, NASA, and the National Science Foundation, which is NCAR's sponsor.

"WRF-Hydro is a perfect example of the transition from research to operations," said Antonio (Tony) J. Busalacchi, president of the University Corporation for Atmospheric Research, which manages NCAR on behalf of the National Science Foundation (NSF). "It builds on the NSF investment in basic research in partnership with other agencies, helps to accelerate collaboration with the larger research community, and culminates in support of a mission agency such as NOAA. The use of WRF-Hydro in an operational setting will also allow for feedback from operations to research. In the end this is a win-win situation for all parties involved, chief among them the U.S. taxpayers."

"Through our partnership with NCAR and the academic and federal water community, we are bringing the state of the science in water forecasting and prediction to bear operationally," said Thomas Graziano, director of NOAA's new Office of Water Prediction at the National Weather Service.

Filling in the water picture

The continental United States has a vast network of rivers and streams, from major navigable waterways such as the Mississippi and Columbia to the remote mountain brooks flowing from the high Adirondacks into the Hudson River. The levels and flow rates of these watercourses have far-reaching implications for water availability, water quality, and public safety.

Until now, however, it has not been possible to predict conditions at all points in the nation's waterways. Instead, computer models have produced a limited picture by incorporating observations from about 4,000 gauges, generally on the country's bigger rivers. Smaller streams and channels are largely left out of these forecast models, and stretches of major rivers for tens of miles are often not predicted — meaning that schools, bridges, and even entire towns can be vulnerable to unexpected changes in river levels.

To fill in the picture, NCAR scientists have worked for the past several years with their colleagues within NOAA, other federal agencies, and universities to combine a range of atmospheric, hydrologic, and soil data into a single forecasting system.

The resulting National Water Model, based on WRF-Hydro, simulates current and future conditions on rivers and streams at points two miles apart across the contiguous United States. Along with an hourly analysis of current hydrologic conditions, the National Water Model generates three predictions: an hourly 0- to 15-hour short-range forecast, a daily 0- to 10-day medium-range forecast, and a daily 0- to 30-day long-range water resource forecast.

The National Water Model predictions using WRF-Hydro offer a wide array of benefits for society. They will help local, state, and federal officials better manage reservoirs, improve navigation along major rivers, plan for droughts, anticipate water quality problems caused by lower flows, and monitor ecosystems for issues such as whether conditions are favorable for fish spawning. By providing a national view, the system will also help the Federal Emergency Management Agency deploy resources more effectively in cases of simultaneous emergencies, such as a hurricane on the Gulf Coast and flooding in California.

"We've never had such a comprehensive system before," Gochis said. "In some ways, the value of this is a blank page yet to be written."

A broad spectrum of observations

WRF-Hydro is a powerful forecasting system that incorporates advanced meteorological and streamflow observations, including data from nearly 8,000 U.S. Geological Survey streamflow gauges across the country.
Using advanced mathematical techniques, the model then simulates current and future conditions for millions of points on every significant river, stream, tributary, and catchment in the United States.

In time, scientists will add additional observations to the model, including snowpack conditions, lake and reservoir levels, subsurface flows, soil moisture, and land-atmosphere interactions such as evapotranspiration, the process by which water in soil, plants, and other land surfaces evaporates into the atmosphere.

Scientists over the last year have demonstrated the accuracy of WRF-Hydro by comparing its simulations to observations of streamflow, snowpack, and other variables. They will continue to assess and expand the system as the National Water Model begins operational forecasts.

NCAR scientists maintain and update the open-source code of WRF-Hydro, which is available to the academic community and others. WRF-Hydro is widely used by researchers, both to better understand water resources and floods in the United States and other countries such as Norway, Germany, Romania, Turkey, and Israel, and to project the possible impacts of climate change.

"At any point in time, forecasts from the new National Water Model have the potential to impact 300 million people," Gochis said. "What NOAA and its collaborator community are doing is trying to usher in a new era of bringing in better physics and better data into forecast models for improving situational awareness and hydrologic decision making."

Collaborators:
Baron Advanced Meteorological Services
Consortium of Universities for the Advancement of Hydrologic Science
Israel Hydrologic Service
National Center for Atmospheric Research
National Oceanic and Atmospheric Administration
U.S. Geological Survey

Funders:
National Science Foundation
National Aeronautics and Space Administration
National Oceanic and Atmospheric Administration

NCAR weather ensemble offers glimpse at forecasting's future

July 1, 2016 | Last spring, scientists at the National Center for Atmospheric Research (NCAR) flipped the switch on a first-of-its-kind weather forecasting system. For more than a year, NCAR's high-resolution, real-time ensemble forecasting system has been ingesting 50,000 to 70,000 observations every six hours and creating a whopping 90,000 weather maps each day.

The system has become a favorite among professional forecasters and casual weather wonks: Typically more than 200 people check out the site each day, with more than a thousand coming during major weather events. During this experimental period, the NCAR ensemble has also become a popular source of guidance within the National Weather Service, where it has already been referenced several hundred times by forecasters at more than 50 different offices. But perhaps more important, the data accumulated from running the system daily — and there is a lot of it — are being used by researchers at universities across the country to study a range of topics, from predicting hail size to anticipating power outages for utilities.

"We wanted to demonstrate that a real-time system of this scale was feasible," said NCAR scientist Craig Schwartz. "But it's also a research project that can help the community learn more about the predictability of different kinds of weather events." Schwartz is a member of the team that designed and operates the system, along with NCAR colleagues Glen Romine, Ryan Sobash, and Kate Fossell.

(Animation: This animation shows the forecast for accumulated snowfall made by each of the NCAR ensemble's 10 members for the 48-hour period beginning on Jan. 22, 2016. In the run-up to the blizzard, which ultimately dropped more than 30 inches of snow on parts of the Mid-Atlantic, more than 1,000 people visited the NCAR ensemble's website. ©UCAR. This animation is freely available for media & nonprofit use.)

Testing a unique tool

NCAR's high-resolution ensemble forecasting system is unique in the country for a couple of reasons, both of which are revealed in its name: It's an ensemble, and it's high resolution.

Instead of producing a single forecast, the system produces an "ensemble" of 10 forecasts, each with slightly different (but equally likely) starting conditions. The degree to which the forecasts look the same or different tells scientists something about the probability that a weather event, like rain, hail, or wind, will actually occur. By comparing the actual outcomes to the forecasted probabilities, scientists can study the predictability of particular weather events under different circumstances.

The forecasting system's high resolution (the grid points are just 3 kilometers apart) allows it to simulate small-scale weather phenomena, like the creation of individual storms from convection — the process of moist, warm air rising and then condensing into clouds.

The combination of fine grid spacing and ensemble predictions in the NCAR system offers a sneak peek at what the future of weather forecasting might look like, and weather researchers across the country have noticed. Cliff Mass, a professor of atmospheric sciences at the University of Washington whose specialty is forecasting, said: "It's extremely important for the United States to have a convection-allowing ensemble system to push our forecasting capabilities forward. We were delighted that NCAR demonstrated that this could be done."

'The cat's meow'

The treasure trove of accruing weather data generated by running the NCAR ensemble is already being used by researchers both at NCAR and in the broader community. Jim Steenburgh, for instance, is a researcher at the University of Utah who is using the system to understand the predictability of mountain snowstorms.

"NCAR's ensemble not only permits the 'formation' of clouds, it can also capture the topography of the western United States," he said. "The mountains control the weather to some degree, so you need to be able to resolve the mountains' effects on precipitation."

Steenburgh has also been using the ensemble with his students. "We're teaching the next generation of weather forecasters," he said. "In the future, these high-resolution ensemble forecasts will be the tools they need to use, and this gives them early, hands-on experience."

Like Steenburgh, Lance Bosart, an atmospheric researcher at the University at Albany, State University of New York, has used the ensemble both in his own research — studying the variability of convective events — and with his students. He said having 10 members in the ensemble forecast helps students easily see the great spread of possibilities, and the visual emphasis of the user interface makes it easy for students to absorb the information.

"What makes it an invaluable tool is the graphical display," he said. "It's visually compelling. You don't have to take a lot of time to explain what you're looking at; you can get right into explaining the science. I like to say it's the cat's meow."

Setting an example

The NCAR ensemble is also enabling the researchers running it to further their own research. "We're collecting statistics on the misfit between the model predictions and observations, and then we're trying to use that to improve our model physics," Romine said.

The ensemble project is also teaching the team about the strengths and weaknesses of the way they've chosen to kick off, or "initialize," each of the ensemble members. "The NCAR ensemble happens to produce a pretty good forecast, but we realize there are some shortcomings," Schwartz said. "For example, if we were trying to make the best forecast in the world, we would probably not be initializing the model the way we are. But then we wouldn't learn as much from a research perspective."

The NCAR ensemble began as a yearlong trial, but the project is continuing to run for now. The team would like to keep the system online until next summer, but they don't yet have the computing resources they need to run it past September. If the system does continue to run, the researchers who are using it say there's still more that they and their students can learn from the project. And if not, there's a wealth of data already collected that is still waiting to be mined.

In any case, Mass says NCAR's ensemble has been a valuable project. "It set a really good example for the nation," he said.

Community members interested in collaborating or helping support the NCAR ensemble project are encouraged to contact the team at ensemble@ucar.edu.

Writer/contact:
Laura Snider, Senior Science Writer and Public Information Officer
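For readers curious how ensemble agreement becomes a probability: because the members are treated as equally likely, the forecast probability of an event is simply the fraction of members that produce it. The snowfall amounts and the 6-inch threshold below are made up for illustration; they are not NCAR ensemble output.

```python
import numpy as np

# Hypothetical 48-hour snowfall forecasts (inches) at one location from a 10-member ensemble.
member_snowfall = np.array([4.2, 7.8, 6.5, 9.1, 3.0, 8.4, 7.2, 5.9, 10.3, 6.8])

THRESHOLD_IN = 6.0   # event of interest: at least 6 inches of snow

# Each member counts equally, so the event probability is the fraction of members
# that meet or exceed the threshold.
probability = np.mean(member_snowfall >= THRESHOLD_IN)
print(f"P(snowfall >= {THRESHOLD_IN:.0f} in) = {probability:.0%}")   # 7 of 10 members -> 70%
```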

Climate modeling 101: Explanations without equations

(Image: A new book breaks down climate models into easy-to-understand concepts. Photo courtesy Springer.)

June 21, 2016 | Climate scientists tell us it's going to get hotter. How much it rains and where it rains is likely to shift. Sea level rise is apt to accelerate. Oceans are on their way to becoming more acidic and less oxygenated. Floods, droughts, storms, and other extreme weather events are projected to change in frequency or intensity.

But how do they know what they know? For climate scientists, numerical models are the tools of the trade. But for the layperson — and even for scientists in other fields — climate models can seem mysterious. What does "numerical" even mean? Do climate models take other things besides the atmosphere into account? How do scientists know if a model is any good?*

Two experts in climate modeling, Andrew Gettelman of the National Center for Atmospheric Research and Richard Rood of the University of Michigan, have your answers and more, free of charge. In a new open-access book, "Demystifying Climate Models," the pair lay out the fundamentals. In 282 pages, the scientists explain the basics of climate science, how that science is translated into a climate model, and what those models can tell us (as well as what they can't) — all without using a single equation.

*Find the answers on pages 8, 13, and 161, respectively, of the book.

AtmosNews sat down with Gettelman to learn more about the book, which anyone can download at http://www.demystifyingclimate.org.

(Photo: NCAR scientist Andrew Gettelman has written a new book on climate modeling with Richard Rood of the University of Michigan. Courtesy photo. This image is freely available for media & nonprofit use.)

What was the motivation to write this book?

There isn't really another book that sets out the philosophy and structure of models. There are textbooks, but inside you'll find a lot of physics and chemistry: information about momentum equations, turbulent fluxes — which is useful if you want to build your own model. And then there are books on climate change for the layperson, and they devote maybe a paragraph to climate modeling. There's not much in the middle. This book provides an introduction for the beginning grad student, or someone in another field who is interested in using model output, or anyone who is just curious how climate works and how we simulate it.

What are some of the biggest misperceptions about climate models that you hear?

One is that people say climate models are based on uncertain science. But that's not true at all. If we didn't know the science, my cellphone wouldn't work. Radios wouldn't work. GPS wouldn't work. That's because the energy that warms the Earth, which radiates from the Sun and is absorbed and re-emitted by Earth's surface — and also by greenhouse gases in the atmosphere — is part of the same spectrum of radiation that makes up radio waves. If we didn't understand electromagnetic waves, we couldn't have created the technology we rely on today. The same is true for the science that underlies other aspects of climate models. (Learn more on page 38 of the book.)

But we don't understand everything, right?

We have understood the basic physics for hundreds of years. The last piece of it, the discovery that carbon dioxide warms the atmosphere, was put in place in the late 19th and early 20th centuries. Everything else — the laws of motion, the laws of thermodynamics — was all worked out between the 17th and 19th centuries. (Learn more on page 39 of the book.)
We do still have uncertainty in our modeling systems. A big part of this book is about how scientists understand that uncertainty and actually embrace it as part of their work. If you know what you don't know, and why, you can use that to better understand the whole climate system.

Can we ever eliminate the uncertainty?

Not entirely. In our book, we break down uncertainty into three categories: model uncertainty (How good are the models at reflecting how the Earth really works?), initial condition uncertainty (How well do we understand what the Earth system looks like right now?), and scenario uncertainty (What will future emissions look like?).

To better understand, it might help to think about the uncertainty that would be involved if you had a computer model that could simulate making a pizza. Instead of trying to figure out what Earth's climate would look like in 50 or 100 years, this model would predict what your pizza would look like when it was done.

The first thing you want to know is how well the model reflects the reality of how a pizza is made. For example, does the model take into account all the ingredients you need to make the pizza, and how they will each evolve? The cheese melts, the dough rises, and the pepperoni shrinks. How well can the model approximate each of those processes? This is model uncertainty.

The second thing you'd want to know is whether you can input all the pizza's "initial conditions" into the model. Some initial conditions — like how many pepperoni slices are on the pizza and where — are easy to observe, but others are not. For example, kneading the pizza dough creates small pockets of air, but you don't know exactly where they are. When the dough is heated, the air expands and forms big bubbles in the crust. If you can't tell the model where the air pockets are, it can't accurately predict where the crust bubbles will form when the pizza is baked. The same is true for a climate model. Some parts of the Earth, like the deep oceans and the polar regions, are not easy to observe with enough detail, leaving scientists to estimate what the conditions there are like and leading to the second type of uncertainty in the model results.

Finally, the pizza-baking model also has to deal with "scenario uncertainty," because it doesn't know how long the person baking the pizza will keep it in the oven, or at what temperature. Without understanding the choices the human will make, the model can't say for sure whether the dough will be soft, crispy, or burnt. With climate models, over long periods of time, like a century, we've found that this scenario uncertainty is actually the dominant one. In other words, we don't know how much carbon dioxide humans around the world are going to emit in the years and decades to come, and it turns out that that's what matters most. (Learn more about uncertainty on page 10 of the book.)

Any other misperceptions you frequently hear?

People always say, "If we can't predict the weather next week, how can we know what the climate will be like in 50 years?" Generally speaking, we can't perfectly predict the weather because we don't have a full understanding of all the current conditions. We don't have observations for every grid point on a weather model or for large parts of the ocean, for example. But climate is not concerned with the exact weather on a particular day 50 or 100 years from now. Climate is the statistical distribution of weather, not a particular point on that distribution. Climate prediction is focused on the statistics of this distribution, and that is governed by conservation of energy and mass on long time scales, something we do understand. (Learn more on page 6 of the book. Read more common misperceptions at http://www.demystifyingclimate.org/misperceptions.)

Did you learn anything about climate modeling while working on the book?

My background is the atmosphere. I sat down and wrote the whole section on the atmosphere in practically one sitting. But I had to learn about the other aspects of models, the ocean and the land, which work really differently. The atmosphere has only one boundary, a bottom boundary. We just have to worry about how it interacts with mountains and other bumps on the surface. But the ocean has three hard boundaries: the bottom and the sides, like a giant rough bathtub. It also has a boundary with the atmosphere on the top. Those boundaries really change how the ocean moves. And the land is completely different because it doesn't move at all. Writing this book really gave me a new appreciation for some of the subtleties of other parts of the Earth system and the ways my colleagues model them. (Learn more on page 13 of the book.)

What was the most fun part of writing the book for you?

I think having to force myself to think in terms of analogies that are understandable to a variety of people. I can describe a model using a whole bunch of words most people don't use every day, like "flux." It was a fun challenge to come up with words that would accurately describe the models and the science but that were accessible to everyone.
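Gettelman's point that climate is the statistical distribution of weather, not any single day drawn from it, can be made concrete with a toy calculation. Everything below is invented for illustration: individual "days" drawn from a distribution are effectively unpredictable far in advance, but the statistics of the distribution shift in a predictable way.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "weather": daily temperatures drawn from a distribution whose mean (the "climate")
# warms slowly over 50 years. All numbers are illustrative, not model output.
years = 50
baseline_mean, warming_per_year, daily_spread = 10.0, 0.03, 8.0   # deg C

daily_temps = [rng.normal(baseline_mean + warming_per_year * yr, daily_spread, size=365)
               for yr in range(years)]

# Any single day is essentially unpredictable this far ahead...
print(f"one random day in year 50: {daily_temps[-1][180]:.1f} deg C")

# ...but the statistics of the distribution (the climate) change in a predictable way.
print(f"mean of year  1: {daily_temps[0].mean():.2f} deg C")
print(f"mean of year 50: {daily_temps[-1].mean():.2f} deg C")
```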

UCAR to support EarthCube: Cyberinfrastructure will advance science

BOULDER – EarthCube, a landmark initiative to develop new technological and computational capabilities for geosciences research, will be supported by the University Corporation for Atmospheric Research (UCAR) under a new agreement with the National Science Foundation (NSF).

Created by NSF in 2011, EarthCube aims to help researchers across the geosciences, from meteorology to seismology, better understand our planet in ways that can strengthen societal resilience to natural events. More than 2,500 EarthCube contributors – including scientists, educators, and information professionals – work together on the creation of a common cyberinfrastructure for researchers to collect, access, analyze, share, and visualize all forms of data and related resources.

"EarthCube offers the promise to advance geoscience research by creating and delivering critical new capabilities," said UCAR scientist Mohan Ramamurthy, principal investigator and project director of the new EarthCube office at UCAR.

"This is a great opportunity for UCAR to leverage its successful track record in managing large scientific projects that advance our understanding of the planet," said Michael Thompson, interim UCAR president. "The EarthCube project offers the potential to significantly benefit society by helping scientists use the power of diverse big datasets to better understand and predict the natural events, from severe storms to solar disturbances, that affect all of us."

(Image: EarthCube is designed to foster collaborations across the geosciences. The technology helps scientists in different disciplines better understand the far-reaching influences of natural events, such as how major storms like Sandy, above, affect coastal and inland flooding. This unique view of Sandy was generated with NCAR's VAPOR visualization software, based on detailed computer modeling. ©UCAR. Visualization by Alan Norton, NCAR, based on research by NCAR scientists Mel Shapiro and Thomas Galarneau. This image is freely available for media & nonprofit use.)

UCAR will administer the day-to-day operations of EarthCube under the three-year, $2.8 million agreement with NSF. The EarthCube science support office, currently funded through an NSF grant to the Arizona Geological Survey in Tucson, Arizona, will move to UCAR's Boulder offices starting this month.

EarthCube is designed to help researchers across the geosciences address the challenges of understanding and predicting the complexity of the Earth system, from the geology and topography to the water cycle, atmosphere, and space environment of the planet. This approach is critical for improved understanding of the environment and for better safeguarding society. In order to better predict the potential effects of a landfalling hurricane on inland mudslides, for example, scientists from multiple disciplines, including meteorology, hydrology, geography, and geology, need a common platform to work together to collect observations, ingest them into advanced computer models of the Earth system, and analyze and interpret the resulting data.

"The EarthCube Science Support Office will help us find and share the data geoscientists collect and use to answer critical science questions about the Earth," said Eva Zanzerkia, program director in NSF's Division of Earth Sciences.

Ramamurthy said UCAR is well positioned to help EarthCube meet its goals, since UCAR provides technological support to the geosciences community, including its 109 member universities.
UCAR has been involved with EarthCube since NSF launched the initiative. "Currently, researchers are spending an enormous amount of time on routine tasks because there is no data system, database, or data infrastructure where they can get all the information they need in some kind of a uniform way from a single interface," Ramamurthy said. "If EarthCube can facilitate the integration of data from multiple domains in a way that is easier and faster, and if there is interoperability in terms of standards for data to be input into a common environment, then integration becomes more easily possible."

UCAR is a nonprofit consortium of more than 100 member colleges and universities focused on research and training in the atmospheric and related Earth system sciences. UCAR's primary activity is managing the National Center for Atmospheric Research (NCAR) on behalf of NSF, NCAR's sponsor. UCAR also oversees a variety of education and scientific support activities under the umbrella of the UCAR Community Programs, which will administer EarthCube.

A 3D window into a tornado

(Image: This simulation was created by NCAR scientist George Bryan to visualize what goes on inside a tornado. The animation is the "high swirl" version in a series that goes from low, to medium, to high. Click to enlarge. Courtesy George Bryan, NCAR. This image is freely available for media & nonprofit use.)

May 17, 2016 | What's really going on inside a tornado? How fast are the strongest winds, and what are the chances that any given location will experience them when a tornado passes by?

Due to the difficulties of measuring wind speeds in tornadoes, scientists don't have answers to these questions. However, a collaborative project between researchers at the University of Miami and NCAR has been seeking clues with new, highly detailed computer simulations of tornado wind fields. The simulations can be viewed in a series of animations, created by NCAR scientist George Bryan, that provide a 3D window into the evolving wind fields of idealized tornadoes at different rates of rotation.

In the high-swirl animation shown here, which depicts a powerful tornado with 200-plus mph winds, the purple tubelike structures depict the movements of rapidly rotating vortices. Near-surface winds are represented by colors ranging from light blue (less than 20 meters per second, or 45 mph) to deep red (more than 100 meters per second, or 224 mph). The vortices and winds are contained within a condensation cloud that rises more than 500 meters (1,640 feet) above the surface.

Such visualizations can help atmospheric scientists better understand the structures of tornadoes, as well as the shifting location and strength of maximum wind speeds. Bryan also uses them in presentations to meteorology students. "When you make these 3D visualizations and then animate them, they give you a sense of how the flow evolves and how the turbulence changes," Bryan said. "These are details you don't see by just looking at a photograph." For example, he learned from the visualization that the rotating tubes tilt backward against the flow at higher altitudes. These are the kinds of details that can eventually help scientists better understand these complex storms.

The information is also critical for public safety officials and engineers. "If you're an engineer and designing a building, you want to know details like how much greater is peak wind over average wind in a tornado," Bryan said. "We'll get questions from engineers asking about the details of wind gusts in those purple tubes."

Bryan is collaborating on the simulations with Dave Nolan, chair of Miami's Department of Atmospheric Sciences. To create the animation, Bryan used innovative NCAR software that enables researchers in the atmospheric and related sciences to analyze and interpret results from large computer models. VAPOR (Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers) is an interactive 3D visualization environment for both animations and still-frame images. The open-source software can be downloaded and used on personal computers. VAPOR was developed at NCAR in partnership with the University of California at Davis and Ohio State University. Funding comes from the National Science Foundation and the Korea Institute of Science and Technology Information.

Writer/contact:
David Hosansky

Funder:
National Science Foundation

Collaborator:
University of Miami

Wrangling observations into models

April 4, 2016 | If scientists could directly measure the properties of all the water throughout the world's oceans, they wouldn't need help from NCAR scientist Alicia Karspeck. But since large expanses of the oceans are beyond the reach of observing instruments, Karspeck's work is critical for those who want estimates of temperature, salinity, and other properties of water around the globe. Scientists need these estimates to better understand the world's climate system and how it is changing.

"It's painstaking work, but my hope is it will lead to major advances in climate modeling and long-term prediction," Karspeck said.

She is one of a dozen or so researchers at NCAR who spend their days on data assimilation, a field that is becoming increasingly important for the geosciences and other areas of research. Broadly speaking, data assimilation is any method of enabling computer models to utilize relevant observations. Part science and part art, it involves figuring out how to get available measurements — which may be sparse, tightly clustered, or irregularly scattered — into models that tend to simplify the world by breaking it into gridded boxes. Commonly used in weather forecasting, the technique can improve simulations and help scientists predict future events with more confidence. It can also identify deficiencies in both models and observations.

As models have become more powerful and observations more numerous, the technique has become so critical that NCAR last year launched a Data Assimilation Program to better leverage expertise across its seven labs. "Activities in data assimilation have grown well beyond traditional applications in numerical weather prediction for the atmosphere and now span across NCAR's laboratories," said NCAR Director Jim Hurrell. "The Data Assimilation Program is designed to enhance data assimilation research at NCAR, while at the same time serving the broader U.S. research community."

(Image: Scientists are using data assimilation techniques to input a range of North American observations into experimental, high-resolution U.S. forecasts. These real-time ensemble forecasts are publicly available while they're being tested. ©UCAR. This image is freely available for media & nonprofit use.)

Improving prediction

Created by the NCAR Directorate, the Data Assimilation Program is designed to advance prediction of events ranging from severe weather and floods to air pollution outbreaks and peaks in the solar cycle. One of its goals is to encourage collaborations among data assimilation experts at NCAR and the larger research community. For example, scientists in several labs are joining forces to apply data assimilation methods to satellite measurements to create a database of global winds and other atmospheric properties. This database will then be used for a broad range of climate and weather studies. The program also provides funding to hire postdocs at NCAR to focus on data assimilation projects, as well as for a software engineer to support such activities.

"By bringing money to the table, we're building up data assimilation capability across NCAR," said NCAR Senior Scientist Chris Snyder, who coordinates the Data Assimilation Program. "This is critical because data assimilation provides a framework to scientists throughout the atmospheric and related sciences who need to assess where the uncertainties are and how a given observation can help."

NCAR Senior Scientist Jeff Anderson, who oversees the Data Assimilation Research Testbed (DART), says that data assimilation has become central for the geosciences. DART is a software environment that helps researchers develop data assimilation methods and use observations with various computer models. "I think the Data Assimilation Program is a huge win for NCAR and the entire atmospheric sciences community," Anderson said. "The scientific method is about taking observations of the world and making sense of them, and data assimilation is fundamental for applying the scientific method to the geosciences as well as to other research areas."

From oceans to Sun

Here are examples of how data assimilation is advancing our understanding of atmospheric and related processes from ocean depths to the Sun's interior:

Oceans. Karspeck is using data assimilation to estimate water properties and currents throughout the world's oceans. This is a computationally demanding task that requires feeding observations into the NCAR-based Community Earth System Model, simulating several days of ocean conditions on the Yellowstone supercomputer, and using those results to update the conditions in the model and run another simulation. The good news: the resulting simulations match well with historical records, indicating that the data assimilation approach is working. "My goal is to turn this into a viable system for researchers," Karspeck said.

Air quality. Atmospheric chemists at NCAR are using data assimilation of satellite observations to improve air quality models that currently draw on limited surface observations of pollutants. For example, assimilating satellite observations would show the effect of emissions from a wildfire in Montana on downwind air quality, such as in Chicago. "We've done a lot of work to speed up the processing time, and the results are promising," said NCAR scientist Helen Worden. "The model simulations after assimilating satellite carbon monoxide data are much closer to actual air quality conditions."

Weather forecasting. Data assimilation is helping scientists diagnose problems with weather models. For example, why do models consistently overpredict or underpredict temperatures near the surface? Using data assimilation, NCAR scientist Josh Hacker discovered that models incorrectly simulate the transfer of heat from the ground into the atmosphere. "With data assimilation, you're repeatedly confronting the model with observations, so you can very quickly see how things go wrong," he said.

Solar cycle. Scientists believe the 11-year solar cycle is driven by mysterious processes deep below the Sun's surface, such as the movements of cells of plasma between the Sun's lower latitudes and poles. To understand the causes of the cycle and ultimately predict it, they are turning to data assimilation to augment observations of magnetic fields and plasma flow at the Sun's surface and feed the resulting information into a computer model of subsurface processes. "We are matching surface conditions to the model, such as the pattern and speed of the plasma flows and evolving magnetic fields," said NCAR scientist Mausumi Dikpati.

Capturing data. In addition to helping scientists improve models, the new Data Assimilation Program is also fostering discussions about observations.
NCAR Senior Scientist Wen-Chau Lee and colleagues who are experts in gathering observations are conferring with computer modelers over how to process the data so the models can readily ingest it. One challenge, for example, is that radars may take observations every 150 meters, whereas the models often have a resolution of 1-3 kilometers. Inputting the radar observations into the models requires advanced quality control techniques, including coordinate transformation (modifying coordinates from the observations to match the models) and data thinning (reducing the density of observations while retaining the basic information).

"We are modifying our quality control procedures to make sure that the flow of data is smooth," Lee said. "With data assimilation, the first word is 'data,'" he added. "Without data, without observations, there is no assimilation."

Writer/contact:
David Hosansky, Manager of Media Relations

Funders:
NCAR Directorate
National Science Foundation
Additional funding agencies for specific projects
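The data-thinning step Lee describes, reducing radar observations spaced every 150 meters to something a kilometers-wide model grid cell can use, often amounts to averaging all the observations that fall within each grid cell (sometimes called forming superobservations). The sketch below shows that idea in one dimension; the reflectivity profile, spacings, and function names are illustrative assumptions, not NCAR's actual quality-control code.

```python
import numpy as np

OBS_SPACING_M = 150.0     # radar gate spacing mentioned in the article
GRID_SPACING_M = 3000.0   # typical model grid spacing

def thin_to_grid(obs_positions_m, obs_values, grid_spacing_m=GRID_SPACING_M):
    """Average all observations falling inside each model grid cell (1-D superobbing)."""
    cell_index = (np.asarray(obs_positions_m) // grid_spacing_m).astype(int)
    binned = {}
    for idx, value in zip(cell_index, obs_values):
        binned.setdefault(idx, []).append(value)
    # One representative value per grid cell, located at the cell center.
    centers = np.array([(i + 0.5) * grid_spacing_m for i in sorted(binned)])
    values = np.array([np.mean(binned[i]) for i in sorted(binned)])
    return centers, values

# Hypothetical radial profile of radar reflectivity every 150 m along a 30-km beam.
rng = np.random.default_rng(7)
positions = np.arange(0.0, 30_000.0, OBS_SPACING_M)
reflectivity = (20 + 15 * np.exp(-((positions - 12_000) / 4_000) ** 2)
                + rng.normal(0, 1, positions.size))

centers, superobs = thin_to_grid(positions, reflectivity)
print(f"{positions.size} raw observations thinned to {superobs.size} superobservations")
```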
