Computer Modeling

Advanced computer model focuses on Hurricane Matthew

Oct. 6, 2016 | As Hurricane Matthew churns toward the southeastern U.S. coast, scientists at the National Center for Atmospheric Research (NCAR) are testing an advanced research computer model to see how well it can predict the powerful storm's track and intensity.

The Model for Prediction Across Scales (MPAS) uses an innovative software approach that allows scientists to focus on regional conditions while still capturing far-flung atmospheric processes that can influence the storm in question. This is a contrast to the forecast models typically used to track hurricanes today, which cannot simultaneously capture both global and local atmospheric processes.

The experimental MPAS model simulates Hurricane Matthew hitting the Southeast. To see a range of model output, visit the MPAS tropical cyclone website.

MPAS is able to do both because it uses a flexible mesh that allows it to zoom into higher resolution in some areas — over hurricane breeding grounds, for example — while zooming out over the rest of Earth. This ability to vary resolution across the globe requires a small fraction of the computer power needed to have high resolution everywhere.

By testing MPAS during hurricane season, the research team can determine the adjustments that need to be made to the model while gaining insights into how to improve hurricane forecasting in the future.

"This is an experimental effort," said Chris Davis, a senior scientist and director of NCAR's Mesoscale and Microscale Meteorology Laboratory. "We're doing this to see if we can find systematic biases in the model so we can improve simulations of the tropics in general and hurricanes in particular."

Davis and the other members of the research team, including NCAR scientists David Ahijevych, Sang-Hun Park, Bill Skamarock, and Wei Wang, are running MPAS once a day on NCAR's Yellowstone supercomputer, inputting various ocean and atmospheric conditions to see how it performs. The work is supported by the National Science Foundation and the Korea Institute of Science and Technology Information.

Even though they are just tests, Davis said the MPAS simulations are often comparable with official forecast models such as those run by the National Hurricane Center and the European Centre for Medium-Range Weather Forecasts. As Matthew was in its early stages, in fact, MPAS did a better job than other models in simulating the northward movement of the storm from the Caribbean Sea toward the Florida coast.

The scientists will analyze how MPAS performed and share the results with colleagues in the meteorological community. It's a step in an ongoing research effort to better predict the formation and behavior of hurricanes.

"We run the model even when the tropics are quiet, but an event like Matthew gives us a special opportunity to see what contributes to errors in tropical cyclone prediction," Davis said. "While a major hurricane can have catastrophic impacts, we hope to learn from it and make computer models even better in the future."

Funders:
National Science Foundation
Korea Institute of Science and Technology Information

Writer/contact:
David Hosansky, Manager of Media Relations
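To see why the variable-resolution approach described above is so much cheaper than uniform high resolution, a rough back-of-the-envelope estimate helps. The sketch below compares approximate cell counts for a uniform fine mesh and a mesh refined only over a region of interest; the spacings and the size of the refined region are illustrative assumptions, not actual MPAS settings.

```python
# Back-of-the-envelope comparison of horizontal cell counts for a uniform
# high-resolution global mesh versus a variable-resolution mesh refined only
# over a region of interest. All values are illustrative assumptions, not
# actual MPAS configuration settings.

EARTH_SURFACE_KM2 = 5.1e8      # approximate surface area of Earth
FINE_SPACING_KM = 3.0          # assumed fine mesh spacing
COARSE_SPACING_KM = 15.0       # assumed coarse mesh spacing
REFINED_REGION_KM2 = 1.0e7     # hypothetical refined region (roughly a hurricane basin)

def cell_count(area_km2, spacing_km):
    """Approximate number of cells: area divided by the nominal cell area."""
    return area_km2 / spacing_km**2

uniform_fine = cell_count(EARTH_SURFACE_KM2, FINE_SPACING_KM)
variable = (cell_count(REFINED_REGION_KM2, FINE_SPACING_KM)
            + cell_count(EARTH_SURFACE_KM2 - REFINED_REGION_KM2, COARSE_SPACING_KM))

print(f"Uniform 3-km global mesh:  ~{uniform_fine:,.0f} cells")
print(f"Variable-resolution mesh:  ~{variable:,.0f} cells")
print(f"The variable mesh needs roughly {uniform_fine / variable:.0f}x fewer cells each time step")
```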

40 Earths: NCAR's Large Ensemble reveals staggering climate variability

Sept. 29, 2016 | Over the last century, Earth's climate has had its natural ups and downs. Against the backdrop of human-caused climate change, fluctuating atmosphere and ocean circulation patterns have caused the melting of Arctic sea ice to sometimes speed up and sometimes slow down, for example. And the back-and-forth formation of El Niño and La Niña events in the Pacific has caused some parts of the world to get wetter or drier while some parts get warmer or cooler, depending on the year.

But what if the sequence of variability that actually occurred over the last century was just one way that Earth's climate story could have plausibly unfolded? What if tiny — even imperceptible — changes in Earth's atmosphere had kicked off an entirely different sequence of naturally occurring climate events?

"It's the proverbial butterfly effect," said Clara Deser, a senior climate scientist at the National Center for Atmospheric Research (NCAR). "Could a butterfly flapping its wings in Mexico set off these little motions in the atmosphere that cascade into large-scale changes to atmospheric circulation?"

To explore the possible impact of minuscule perturbations to the climate — and gain a fuller understanding of the range of climate variability that could occur — Deser and her colleague Jennifer Kay, an assistant professor at the University of Colorado Boulder and an NCAR visiting scientist, led a project to run the NCAR-based Community Earth System Model (CESM) 40 times from 1920 forward to 2100. With each simulation, the scientists modified the model's starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree, touching off a unique and chaotic chain of climate events.

The result, called the CESM Large Ensemble, is a staggering display of Earth climates that could have been, along with a rich look at future climates that could potentially be.

"We gave the temperature in the atmosphere the tiniest tickle in the model — you could never measure it — and the resulting diversity of climate projections is astounding," Deser said. "It's been really eye-opening for people."

The dataset generated during the project, which is freely available, has already proven to be a tremendous resource for researchers across the globe who are interested in how natural climate variability and human-caused climate change interact. In a little over a year, about 100 peer-reviewed scientific journal articles have used data from the CESM Large Ensemble.

Winter temperature trends (in degrees Celsius) for North America between 1963 and 2012 for each of 30 members of the CESM Large Ensemble. The variations in warming and cooling in the 30 members illustrate the far-reaching effects of natural variability superimposed on human-induced climate change. The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change. The image at bottom right (OBS) shows actual observations from the same time period. By comparing the ensemble mean to the observations, the science team was able to parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change. Read the full study in the American Meteorological Society's Journal of Climate. (© 2016 AMS.)
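The butterfly effect Deser describes can be illustrated with a toy chaotic system. The sketch below perturbs the starting value of the logistic map by an amount far too small to measure and shows the two trajectories eventually diverging completely; it is a stand-in illustration only, not CESM or its initialization procedure.

```python
# Toy illustration of the butterfly effect: a change in the starting state far
# too small to measure eventually produces a completely different sequence.
# The chaotic logistic map stands in for the climate system here; this is not
# CESM, and the numbers are purely illustrative.

def trajectory(x0, steps=60, r=3.9):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

base = trajectory(0.5)
perturbed = trajectory(0.5 + 1e-12)   # a perturbation no instrument could detect

for step in (0, 20, 40, 50, 60):
    diff = abs(base[step] - perturbed[step])
    print(f"step {step:2d}: base={base[step]:.6f}  perturbed={perturbed[step]:.6f}  diff={diff:.2e}")
```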
A community effort

Running a complex climate model like the CESM several dozen times takes a vast amount of computing resources, which makes such projects rare and difficult to pull off. With that in mind, Deser and Kay wanted to make sure that the data resulting from the Large Ensemble were as useful as possible. To do that, they queried scientists from across the community who might make use of the project results — oceanographers, geochemists, atmospheric scientists, biologists, socioeconomic researchers — about what they really wanted.

"It took a village to make this ensemble happen and for it to be useful to and usable by the broad climate community," Kay said. "The result is a large number of ensemble members, in a state-of-the-art climate model, with outputs asked for by the community, that is publicly available and relatively easy to access — it's no wonder it's getting so much use."

Scientists have so far relied on the CESM Large Ensemble to study everything from oxygen levels in the ocean to potential geoengineering scenarios to possible changes in the frequency of moisture-laden atmospheric rivers making landfall. In fact, so many researchers have found the Large Ensemble so useful that Kay and Deser were honored with the 2016 CESM Distinguished Achievement Award, which recognizes significant contributions to the climate modeling community.

The award citation noted the pair was chosen because "the Large Ensemble represents one of NCAR's most significant contributions to the U.S. climate research community. … At a scientific level, the utility of the Large Ensemble cannot be overstated."

The power of multiple runs: Looking forward — and backward

Clearly, the CESM Large Ensemble is useful for looking forward: What is the range of possible futures we might expect in the face of a changing climate? How much warmer will summers become? When will summer Arctic sea ice disappear? How will climate change affect ocean life?

But the Large Ensemble is also an extremely valuable tool for understanding our past. This vast storehouse of data helps scientists evaluate observations and put them in context: How unusual is a particular heat wave? Is a recent change in rainfall patterns the result of global warming or could it be from solely natural causes?

With only a single model run, scientists are limited in what they can conclude when an observation doesn't match up with a model's projection. For example, if the Arctic sea ice extent were to expand, even though the model projected a decline, what would that mean? Is the physics underlying the model wrong? Or does the model incorrectly capture the natural variability? In other words, if you ran the model more times, with slightly different starting conditions, would one of the model runs correctly project the growth in sea ice?

The Large Ensemble helps answer that question. Armed with 40 different simulations, scientists can characterize the range of historic natural variability. With this information, they can determine if observations fit within the envelope of natural variability outlined in the model, instead of comparing them to a single run.

Creating an envelope of what can be considered natural also makes it possible to see when the signal of human-caused climate change has pushed an observation beyond the natural variability. The Large Ensemble can also clarify the climate change "signal" in the model. That's because averaging together the 40 ensemble members can effectively cancel out the natural variability — a La Niña in one model run might cancel out an El Niño in another, for example — leaving behind only changes due to climate change.

"This new ability to separate natural internal variability from externally driven trends is absolutely critical for moving forward our understanding of climate and climate change," said Galen McKinley, a professor of atmospheric and oceanic sciences at the University of Wisconsin–Madison.

McKinley used the Large Ensemble — which she called a "transformative tool" — to study changes in the ocean's ability to take up carbon dioxide in a warming climate.
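A minimal sketch of how averaging an ensemble isolates the forced signal, using synthetic data in place of CESM output: each member below shares one assumed warming trend plus its own random internal variability, so the ensemble mean recovers the signal and the member spread outlines the envelope described above. All numbers are invented for illustration.

```python
# Synthetic stand-in for the Large Ensemble: every member shares the same
# assumed forced warming trend but carries its own sequence of internal
# variability. Averaging the members suppresses the variability, and the
# member spread outlines the envelope of natural variability. This is not
# CESM output; all numbers are invented.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1920, 2101)
forced_trend = 0.01 * (years - 1920)        # assumed forced warming: 0.01 deg C per year

n_members = 40
members = np.array([
    forced_trend + rng.normal(0.0, 0.15, size=years.size)   # trend plus internal variability
    for _ in range(n_members)
])

ensemble_mean = members.mean(axis=0)        # estimate of the forced signal
envelope_lo = members.min(axis=0)           # envelope of natural variability
envelope_hi = members.max(axis=0)

i = -1  # the year 2100
print(f"True forced anomaly in 2100:     {forced_trend[i]:.2f} C")
print(f"Ensemble-mean estimate in 2100:  {ensemble_mean[i]:.2f} C")
print(f"Member envelope in 2100:         {envelope_lo[i]:.2f} to {envelope_hi[i]:.2f} C")
```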
The two components of the climate system

The CESM Large Ensemble is not the first ensemble of climate simulations, though it is perhaps the most comprehensive and widely used. Scientists have long understood that it makes sense to look at more than one model run. Frequently, however, scientists have done this by comparing simulations from different climate models, collectively called a multi-model ensemble.

This method gives a feel for the diversity of possible outcomes, but it doesn't allow researchers to determine why two model simulations might differ: Is it because the models themselves represent the physics of the Earth system differently? Or is it because the models have different representations of the natural variability or different sensitivities to changing carbon dioxide concentrations?

The Large Ensemble helps resolve this dilemma. Because each member is run using the same model, the differences between runs can be attributed to differences in natural variability alone. The Large Ensemble also offers context for comparing simulations in a multi-model ensemble. If the simulations appear to disagree about what the future may look like—but they still fit within the envelope of natural variability characterized by the Large Ensemble—that could be a clue that the models do not actually disagree on the fundamentals. Instead, they may just be representing different sequences of natural variability.

This ability to put model results in context is important, not just for scientists but for policy makers, according to Noah Diffenbaugh, a climate scientist at Stanford University who has used the Large Ensemble in several studies, including one that looks at the contribution of climate change to the recent, severe California drought.

"It's pretty common for real-world decision makers to look at the different simulations from different models, and throw up their hands and say, 'These models don't agree so I can't make decisions,'" he said. "In reality, it may not be that the models are disagreeing. Instead, we may be seeing the actual uncertainty of the climate system. There is some amount of natural uncertainty that we can't reduce — that information is really important for making robust decisions, and the Large Ensemble is giving us a window that we haven't had before."

Deser agrees that it's important to communicate to the public that, in the climate system, there will always be this "irreducible" uncertainty.

"We're always going to have these two components to the climate system: human-induced changes and natural variability. You always have to take both into account," Deser said. "In the future, it will all depend on how the human-induced component is either offset — or augmented — by the sequence of natural variability that unfolds."

About the article

Title: The Community Earth System Model (CESM) Large Ensemble Project: A Community Resource for Studying Climate Change in the Presence of Internal Climate Variability

Authors: J. E. Kay, C. Deser, A. Phillips, A. Mai, C. Hannay, G. Strand, J. M. Arblaster, S. C. Bates, G. Danabasoglu, J. Edwards, M. Holland, P. Kushner, J.-F. Lamarque, D. Lawrence, K. Lindsay, A. Middleton, E. Munoz, R. Neale, K. Oleson, L. Polvani, and M. Vertenstein

Journal: Bulletin of the American Meteorological Society, DOI: 10.1175/BAMS-D-13-00255.1

Funders:
National Science Foundation
U.S. Department of Energy

In the news: Stories about research using the CESM Large Ensemble
Causes of California drought linked to climate change, Stanford scientists say (Stanford University, UCAR Member)
The difficulty of predicting an ice-free Arctic (University of Colorado Boulder, UCAR Member)
Widespread loss of ocean oxygen to become noticeable in 2030s (NCAR)
Cornell Scientist Predicts Climate Change Will Prompt Earlier Spring Start Date (Cornell University, UCAR Member)
The 2-degree goal and the question of geoengineering (NCAR)
New climate model better predicts changes to ocean-carbon sink (University of Wisconsin–Madison, UCAR Member)
Future summers could regularly be hotter than the hottest on record (NCAR)
Extreme-Weather Winters Becoming More Common (Stanford University, UCAR Member)
More frequent extreme precipitation ahead for western North America (Pacific Northwest National Laboratory)
Cloudy With A Chance of Warming (University of Colorado Boulder, UCAR Member)
Climate change already accelerating sea level rise, study finds (NCAR)
Less ice, more water in Arctic Ocean by 2050s, new CU-Boulder study finds (University of Colorado Boulder, UCAR Member)
California 2100: More frequent and more severe droughts and floods likely (Pacific Northwest National Laboratory)
Searing heat waves detailed in study of future climate (NCAR)
Did climate change, El Nino make Texas floods worse? (Utah State University, UCAR Member)

Writer/contact:
Laura Snider, Senior Science Writer and Public Information Officer

US taps NCAR technology for new water resources forecasts

BOULDER, Colo. — As the National Oceanic and Atmospheric Administration (NOAA) this month launches a comprehensive system for forecasting water resources in the United States, it is turning to technology developed by the National Center for Atmospheric Research (NCAR) and its university and agency collaborators.

WRF-Hydro, a powerful NCAR-based computer model, is the first nationwide operational system to provide continuous predictions of water levels and potential flooding in rivers and streams from coast to coast. NOAA's new Office of Water Prediction selected it last year as the core of the agency's new National Water Model.

"WRF-Hydro gives us a continuous picture of all of the waterways in the contiguous United States," said NCAR scientist David Gochis, who helped lead its development. "By generating detailed forecast guidance that is hours to weeks ahead, it will help officials make more informed decisions about reservoir levels and river navigation, as well as alerting them to dangerous events like flash floods."

WRF-Hydro (WRF stands for Weather Research and Forecasting) is part of a major Office of Water Prediction initiative to bolster U.S. capabilities in predicting and managing water resources. By teaming with NCAR and the research community, NOAA's National Water Center is developing a new national water intelligence capability, enabling better impacts-based forecasts for management and decision making.

The new WRF-Hydro computer model simulates streams and other aspects of the hydrologic system in far more detail than previously possible. (Image by NOAA Office of Water Prediction.)

Unlike past streamflow models, which provided forecasts every few hours and only for specific points along major river systems, WRF-Hydro provides continuous forecasts for millions of points along rivers, streams, and their tributaries across the contiguous United States. To accomplish this, it simulates the entire hydrologic system — including snowpack, soil moisture, local ponded water, and evapotranspiration — and rapidly generates output on some of the nation's most powerful supercomputers.
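To make the hydrologic bookkeeping above concrete, here is a toy single-bucket water balance: precipitation adds to soil storage, evapotranspiration removes water, and anything beyond the bucket's capacity becomes runoff. It is a teaching sketch with invented numbers, not WRF-Hydro's physics or code.

```python
# Toy single-bucket water balance, to illustrate the kind of accounting a
# hydrologic model performs (precipitation in; evapotranspiration, runoff,
# and storage change out). Invented numbers; not WRF-Hydro.

def step_bucket(storage_mm, precip_mm, pet_mm, capacity_mm=150.0):
    """Advance soil-water storage by one day; return (new_storage, runoff, et)."""
    et = min(pet_mm, storage_mm + precip_mm)     # cannot evaporate more water than is available
    storage = storage_mm + precip_mm - et
    runoff = max(0.0, storage - capacity_mm)     # overflow beyond capacity becomes runoff
    storage = min(storage, capacity_mm)
    return storage, runoff, et

storage = 100.0                                  # mm of water in the "soil bucket"
daily_precip = [0, 12, 30, 55, 5, 0, 0]          # mm/day, invented
daily_pet = [4, 3, 2, 2, 4, 5, 5]                # mm/day potential evapotranspiration, invented

for day, (p, pet) in enumerate(zip(daily_precip, daily_pet), start=1):
    storage, runoff, et = step_bucket(storage, p, pet)
    print(f"day {day}: precip={p:4.1f}  ET={et:4.1f}  runoff={runoff:5.1f}  storage={storage:6.1f} mm")
```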
WRF-Hydro was developed in collaboration with NOAA and university and agency scientists through the Consortium of Universities for the Advancement of Hydrologic Science, the U.S. Geological Survey, Israel Hydrologic Service, and Baron Advanced Meteorological Services. Funding came from NOAA, NASA, and the National Science Foundation, which is NCAR's sponsor.

"WRF-Hydro is a perfect example of the transition from research to operations," said Antonio (Tony) J. Busalacchi, president of the University Corporation for Atmospheric Research, which manages NCAR on behalf of the National Science Foundation (NSF). "It builds on the NSF investment in basic research in partnership with other agencies, helps to accelerate collaboration with the larger research community, and culminates in support of a mission agency such as NOAA. The use of WRF-Hydro in an operational setting will also allow for feedback from operations to research. In the end this is a win-win situation for all parties involved, chief among them the U.S. taxpayers."

"Through our partnership with NCAR and the academic and federal water community, we are bringing the state of the science in water forecasting and prediction to bear operationally," said Thomas Graziano, director of NOAA's new Office of Water Prediction at the National Weather Service.

Filling in the water picture

The continental United States has a vast network of rivers and streams, from major navigable waterways such as the Mississippi and Columbia to the remote mountain brooks flowing from the high Adirondacks into the Hudson River. The levels and flow rates of these watercourses have far-reaching implications for water availability, water quality, and public safety.

Until now, however, it has not been possible to predict conditions at all points in the nation's waterways. Instead, computer models have produced a limited picture by incorporating observations from about 4,000 gauges, generally on the country's bigger rivers. Smaller streams and channels are largely left out of these forecast models, and stretches of major rivers for tens of miles are often not predicted — meaning that schools, bridges, and even entire towns can be vulnerable to unexpected changes in river levels.

To fill in the picture, NCAR scientists have worked for the past several years with their colleagues within NOAA, other federal agencies, and universities to combine a range of atmospheric, hydrologic, and soil data into a single forecasting system.

The resulting National Water Model, based on WRF-Hydro, simulates current and future conditions on rivers and streams at points two miles apart across the contiguous United States. Along with an hourly analysis of current hydrologic conditions, the National Water Model generates three predictions: an hourly 0- to 15-hour short-range forecast, a daily 0- to 10-day medium-range forecast, and a daily 0- to 30-day long-range water resource forecast.

The National Water Model predictions using WRF-Hydro offer a wide array of benefits for society. They will help local, state, and federal officials better manage reservoirs, improve navigation along major rivers, plan for droughts, anticipate water quality problems caused by lower flows, and monitor ecosystems for issues such as whether conditions are favorable for fish spawning. By providing a national view, this will also help the Federal Emergency Management Agency deploy resources more effectively in cases of simultaneous emergencies, such as a hurricane in the Gulf Coast and flooding in California.

"We've never had such a comprehensive system before," Gochis said. "In some ways, the value of this is a blank page yet to be written."
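The National Water Model forecast cycles listed above can be summarized in a small configuration table. The ranges and cadences come from the text; the Python structure holding them is just one illustrative way to organize the information.

```python
# The forecast products described in the article, gathered into a simple
# configuration table. The structure is illustrative; only the ranges and
# cadences are taken from the text.

NATIONAL_WATER_MODEL_PRODUCTS = {
    "analysis":     {"cadence": "hourly", "range": "current conditions"},
    "short_range":  {"cadence": "hourly", "range": "0-15 hours"},
    "medium_range": {"cadence": "daily",  "range": "0-10 days"},
    "long_range":   {"cadence": "daily",  "range": "0-30 days"},
}

for name, cfg in NATIONAL_WATER_MODEL_PRODUCTS.items():
    print(f"{name:12s} issued {cfg['cadence']:6s} covering {cfg['range']}")
```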
A broad spectrum of observations

WRF-Hydro is a powerful forecasting system that incorporates advanced meteorological and streamflow observations, including data from nearly 8,000 U.S. Geological Survey streamflow gauges across the country. Using advanced mathematical techniques, the model then simulates current and future conditions for millions of points on every significant river, stream, tributary, and catchment in the United States.

In time, scientists will add additional observations to the model, including snowpack conditions, lake and reservoir levels, subsurface flows, soil moisture, and land-atmosphere interactions such as evapotranspiration, the process by which water in soil, plants, and other land surfaces evaporates into the atmosphere.

Scientists over the last year have demonstrated the accuracy of WRF-Hydro by comparing its simulations to observations of streamflow, snowpack, and other variables. They will continue to assess and expand the system as the National Water Model begins operational forecasts.

NCAR scientists maintain and update the open-source code of WRF-Hydro, which is available to the academic community and others. WRF-Hydro is widely used by researchers, both to better understand water resources and floods in the United States and other countries such as Norway, Germany, Romania, Turkey, and Israel, and to project the possible impacts of climate change.

"At any point in time, forecasts from the new National Water Model have the potential to impact 300 million people," Gochis said. "What NOAA and its collaborator community are doing is trying to usher in a new era of bringing in better physics and better data into forecast models for improving situational awareness and hydrologic decision making."

Collaborators:
Baron Advanced Meteorological Services
Consortium of Universities for the Advancement of Hydrologic Science
Israel Hydrologic Service
National Center for Atmospheric Research
National Oceanic and Atmospheric Administration
U.S. Geological Survey

Funders:
National Science Foundation
National Aeronautics and Space Administration
National Oceanic and Atmospheric Administration

NCAR weather ensemble offers glimpse at forecasting's future

July 1, 2016 | Last spring, scientists at the National Center for Atmospheric Research (NCAR) flipped the switch on a first-of-its-kind weather forecasting system. For more than a year, NCAR's high-resolution, real-time ensemble forecasting system has been ingesting 50,000 to 70,000 observations every six hours and creating a whopping 90,000 weather maps each day.

The system has become a favorite among professional forecasters and casual weather wonks: Typically more than 200 people check out the site each day with more than a thousand coming during major weather events. During this experimental period, the NCAR ensemble has also become a popular source of guidance within the National Weather Service, where it has already been referenced several hundred times by forecasters at more than 50 different offices. But perhaps more important, the data accumulated from running the system daily — and there is lots of it — is being used by researchers at universities across the country to study a range of topics, from predicting hail size to anticipating power outages for utilities.

"We wanted to demonstrate that a real-time system of this scale was feasible," said NCAR scientist Craig Schwartz. "But it's also a research project that can help the community learn more about the predictability of different kinds of weather events."

Schwartz is a member of the team that designed and operates the system, along with NCAR colleagues Glen Romine, Ryan Sobash, and Kate Fossell.

This animation shows the forecast for accumulated snowfall made by each of the NCAR ensemble's 10 members for the 48-hour period beginning on Jan. 22, 2016. In the run-up to the blizzard, which ultimately dropped more than 30 inches of snow on parts of the Mid-Atlantic, more than 1,000 people visited the NCAR ensemble's website. (©UCAR. This animation is freely available for media & nonprofit use.)

Testing a unique tool

NCAR's high-resolution ensemble forecasting system is unique in the country for a couple of reasons, both of which are revealed in its name: It's an ensemble, and it's high resolution.

Instead of producing a single forecast, the system produces an "ensemble" of 10 forecasts, each with slightly different (but equally likely) starting conditions. The degree to which the forecasts look the same or different tells scientists something about the probability that a weather event, like rain, hail, or wind, will actually occur. By comparing the actual outcomes to the forecasted probabilities, scientists can study the predictability of particular weather events under different circumstances.

The forecasting system's high resolution (the grid points are just 3 kilometers apart) allows it to simulate small-scale weather phenomena, like the creation of individual storms from convection — the process of moist, warm air rising and then condensing into clouds. The combination of fine grid spacing and ensemble predictions in the NCAR system offers a sneak peek at what the future of weather forecasting might look like, and weather researchers across the country have noticed.
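A quick sketch of how an ensemble turns 10 equally likely forecasts into a probability: the chance of an event at a grid point can be estimated as the fraction of members that predict it. The rainfall amounts below are invented, and this is the general idea rather than the NCAR ensemble's actual post-processing.

```python
# Estimate an event probability at one grid point from 10 ensemble members:
# the fraction of members predicting the event. Rainfall amounts are invented.

member_precip_mm = [2.5, 0.0, 12.1, 8.4, 0.3, 15.0, 6.7, 0.0, 9.9, 11.2]  # one point, 10 members
threshold_mm = 5.0   # event of interest: more than 5 mm of rain

exceedances = sum(1 for amount in member_precip_mm if amount > threshold_mm)
probability = exceedances / len(member_precip_mm)

print(f"{exceedances} of {len(member_precip_mm)} members exceed {threshold_mm} mm "
      f"-> estimated probability {probability:.0%}")
```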
Cliff Mass, a professor of atmospheric sciences at the University of Washington whose specialty is forecasting, said: "It's extremely important for the United States to have a convection-allowing ensemble system to push our forecasting capabilities forward. We were delighted that NCAR demonstrated that this could be done."

'The cat's meow'

The treasure trove of accruing weather data generated by running the NCAR ensemble is already being used by researchers both at NCAR and in the broader community. Jim Steenburgh, for instance, is a researcher at the University of Utah who is using the system to understand the predictability of mountain snowstorms.

"NCAR's ensemble not only permits the 'formation' of clouds, it can also capture the topography of the western United States," he said. "The mountains control the weather to some degree, so you need to be able to resolve the mountains' effects on precipitation."

Steenburgh has also been using the ensemble with his students. "We're teaching the next generation of weather forecasters," he said. "In the future, these high-resolution ensemble forecasts will be the tools they need to use, and this gives them early, hands-on experience."

Like Steenburgh, Lance Bosart, an atmospheric researcher at the University at Albany, State University of New York, has used the ensemble both in his own research — studying the variability of convective events — and with his students. He said having 10 members in the ensemble forecast helps students easily see the great spread of possibilities, and the visual emphasis of the user interface makes it easy for students to absorb the information.

"What makes it an invaluable tool is the graphical display," he said. "It's visually compelling. You don't have to take a lot of time to explain what you're looking at; you can get right into explaining the science. I like to say it's the cat's meow."

Setting an example

The NCAR ensemble is also enabling the researchers running it to further their own research. "We're collecting statistics on the misfit between the model predictions and observations and then we're trying to use that to improve our model physics," Romine said.

The ensemble project is also teaching the team about the strengths and weaknesses of the way they've chosen to kick off, or "initialize," each of the ensemble members. "The NCAR ensemble happens to produce a pretty good forecast, but we realize there are some shortcomings," Schwartz said. "For example, if we were trying to make the best forecast in the world, we would probably not be initializing the model the way we are. But then we wouldn't learn as much from a research perspective."

The NCAR ensemble began as a yearlong trial, but the project is continuing to run for now. The team would like to keep the system online until next summer, but they don't yet have the computing resources they need to run it past September.

If the system does continue to run, the researchers who are using it say there's still more that they and their students can learn from the project. And if not, there's loads of data already collected that are still waiting to be mined. In any case, Mass says NCAR's ensemble has been a valuable project. "It set a really good example for the nation," he said.

Community members interested in collaborating or helping support the NCAR ensemble project are encouraged to contact the team.

Writer/contact:
Laura Snider, Senior Science Writer and Public Information Officer

Climate modeling 101: Explanations without equations

A new book breaks down climate models into easy-to-understand concepts. (Photo courtesy Springer.)

June 21, 2016 | Climate scientists tell us it's going to get hotter. How much it rains and where it rains is likely to shift. Sea level rise is apt to accelerate. Oceans are on their way to becoming more acidic and less oxygenated. Floods, droughts, storms, and other extreme weather events are projected to change in frequency or intensity.

But how do they know what they know? For climate scientists, numerical models are the tools of the trade. But for the layperson — and even for scientists in other fields — climate models can seem mysterious. What does "numerical" even mean? Do climate models take other things besides the atmosphere into account? How do scientists know if a model is any good?*

Two experts in climate modeling, Andrew Gettelman of the National Center for Atmospheric Research and Richard Rood of the University of Michigan, have your answers and more, free of charge. In a new open-access book, "Demystifying Climate Models," the pair lay out the fundamentals. In 282 pages, the scientists explain the basics of climate science, how that science is translated into a climate model, and what those models can tell us (as well as what they can't) — all without using a single equation.

*Find the answers on pages 8, 13, and 161, respectively, of the book.

AtmosNews sat down with Gettelman to learn more about the book, which anyone can download.

NCAR scientist Andrew Gettelman has written a new book on climate modeling with Richard Rood of the University of Michigan. (Courtesy photo. This image is freely available for media & nonprofit use.)

What was the motivation to write this book?

There isn't really another book that sets out the philosophy and structure of models. There are textbooks, but inside you'll find a lot of physics and chemistry: information about momentum equations, turbulent fluxes — which is useful if you want to build your own model. And then there are books on climate change for the layperson, and they devote maybe a paragraph to climate modeling. There's not much in the middle. This book provides an introduction for the beginning grad student, or someone in another field who is interested in using model output, or anyone who is just curious how climate works and how we simulate it.

What are some of the biggest misperceptions about climate models that you hear?

One is that people say climate models are based on uncertain science. But that's not true at all. If we didn't know the science, my cellphone wouldn't work. Radios wouldn't work. GPS wouldn't work. That's because the energy that warms the Earth, which radiates from the Sun, and is absorbed and re-emitted by Earth's surface — and also by greenhouse gases in the atmosphere — is part of the same spectrum of radiation that makes up radio waves. If we didn't understand electromagnetic waves, we couldn't have created the technology we rely on today. The same is true for the science that underlies other aspects of climate models. (Learn more on page 38 of the book.)

But we don't understand everything, right?

We have understood the basic physics for hundreds of years. The last piece of it, the discovery that carbon dioxide warms the atmosphere, was put in place in the late 19th, early 20th century. Everything else — the laws of motion, the laws of thermodynamics — was all worked out between the 17th and 19th centuries. (Learn more on page 39 of the book.)

We do still have uncertainty in our modeling systems.
A big part of this book is about how scientists understand that uncertainty and actually embrace it as part of their work. If you know what you don't know and why, you can use that to better understand the whole climate system.

Can we ever eliminate the uncertainty?

Not entirely. In our book, we break down uncertainty into three categories: model uncertainty (How good are the models at reflecting how the Earth really works?), initial condition uncertainty (How well do we understand what the Earth system looks like right now?), and scenario uncertainty (What will future emissions look like?)

To better understand, it might help to think about the uncertainty that would be involved if you had a computer model that could simulate making a pizza. Instead of trying to figure out what Earth's climate would look like in 50 or 100 years, this model would predict what your pizza would look like when it was done.

The first thing you want to know is how well the model reflects the reality of how a pizza is made. For example, does the model take into account all the ingredients you need to make the pizza, and how they will each evolve? The cheese melts, the dough rises, and the pepperoni shrinks. How well can the model approximate each of those processes? This is model uncertainty.

The second thing you'd want to know is if you can input all the pizza's "initial conditions" into the model. Some initial conditions — like how many pepperoni slices are on the pizza and where — are easy to observe, but others are not. For example, kneading the pizza dough creates small pockets of air, but you don't know exactly where they are. When the dough is heated, the air expands and forms big bubbles in the crust. If you can't tell the model where the air pockets are, it can't accurately predict where the crust bubbles will form when the pizza is baked. The same is true for a climate model. Some parts of the Earth, like the deep oceans and the polar regions, are not easy to observe with enough detail, leaving scientists to estimate what the conditions there are like and leading to the second type of uncertainty in the model results.

Finally, the pizza-baking model also has to deal with "scenario uncertainty," because it doesn't know how long the person baking the pizza will keep it in the oven, or at what temperature. Without understanding the choices the human will make, the model can't say for sure if the dough will be soft, crispy, or burnt. With climate models, over long periods of time, like a century, we've found that this scenario uncertainty is actually the dominant one. In other words, we don't know how much carbon dioxide humans around the world are going to emit in the years and decades to come, and it turns out that that's what matters most. (Learn more about uncertainty on page 10 of the book.)

Any other misperceptions you frequently hear?

People always say, "If we can't predict the weather next week, how can we know what the climate will be like in 50 years?" Generally speaking, we can't perfectly predict the weather because we don't have a full understanding of all the current conditions. We don't have observations for every grid point on a weather model or for large parts of the ocean, for example. But climate is not concerned about the exact weather on a particular day 50 or 100 years from now. Climate is the statistical distribution of weather, not a particular point on that distribution.
Climate prediction is focused on the statistics of this distribution, and that is governed by conservation of energy and mass on long time scales, something we do understand. (Learn more on page 6 of the book.)

Did you learn anything about climate modeling while working on the book?

My background is the atmosphere. I sat down and wrote the whole section on the atmosphere in practically one sitting. But I had to learn about the other aspects of models, the ocean and the land, which work really differently. The atmosphere has only one boundary, a bottom boundary. We just have to worry about how it interacts with mountains and other bumps on the surface. But the ocean has three hard boundaries: the bottom and the sides, like a giant rough bathtub. It also has a boundary with the atmosphere on the top. Those boundaries really change how the ocean moves. And the land is completely different because it doesn't move at all. Writing this book really gave me a new appreciation for some of the subtleties of other parts of the Earth system and the ways my colleagues model them. (Learn more on page 13 of the book.)

What was the most fun part of writing the book for you?

I think having to force myself to think in terms of analogies that are understandable to a variety of people. I can describe a model using a whole bunch of words most people don't use every day, like "flux." It was a fun challenge to come up with words that would accurately describe the models and the science but that were accessible to everyone.
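The weather-versus-climate distinction Gettelman describes can be illustrated with a toy example: two synthetic seasons of daily temperatures differ day by day, yet their seasonal statistics agree closely. The numbers are invented and the model is deliberately simplistic.

```python
# Toy illustration of weather versus climate: individual "daily" values from a
# noisy process are unpredictable in detail, but their long-run statistics are
# stable. All numbers are invented for illustration.

import random

def simulate_summer(seed, days=92, mean_temp=25.0, variability=4.0):
    """One synthetic summer of daily temperatures (deg C)."""
    rng = random.Random(seed)
    return [rng.gauss(mean_temp, variability) for _ in range(days)]

summer_a = simulate_summer(seed=1)
summer_b = simulate_summer(seed=2)

# The two "weather" sequences differ day by day...
print(f"Day 46 temperature, run A: {summer_a[45]:.1f} C, run B: {summer_b[45]:.1f} C")

# ...but the seasonal statistics (the "climate") are nearly the same.
print(f"Season mean, run A: {sum(summer_a)/len(summer_a):.2f} C, "
      f"run B: {sum(summer_b)/len(summer_b):.2f} C")
```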

UCAR to support EarthCube: Cyberinfrastructure will advance science

BOULDER – EarthCube, a landmark initiative to develop new technological and computational capabilities for geosciences research, will be supported by the University Corporation for Atmospheric Research (UCAR) under a new agreement with the National Science Foundation (NSF).

Created by NSF in 2011, EarthCube aims to help researchers across the geosciences, from meteorology to seismology, better understand our planet in ways that can strengthen societal resilience to natural events. More than 2,500 EarthCube contributors – including scientists, educators, and information professionals – work together on the creation of a common cyberinfrastructure for researchers to collect, access, analyze, share, and visualize all forms of data and related resources.

"EarthCube offers the promise to advance geoscience research by creating and delivering critical new capabilities," said UCAR scientist Mohan Ramamurthy, principal investigator and project director of the new EarthCube office at UCAR.

"This is a great opportunity for UCAR to leverage its successful track record in managing large scientific projects that advance our understanding of the planet," said Michael Thompson, interim UCAR president. "The EarthCube project offers the potential to significantly benefit society by helping scientists use the power of diverse big datasets to better understand and predict the natural events, from severe storms to solar disturbances, that affect all of us."

EarthCube is designed to foster collaborations across the geosciences. The technology helps scientists in different disciplines better understand the far-reaching influences of natural events, such as how major storms like Sandy (above) affect coastal and inland flooding. This unique view of Sandy was generated with NCAR's VAPOR visualization software, based on detailed computer modeling. (©UCAR. Visualization by Alan Norton, NCAR, based on research by NCAR scientists Mel Shapiro and Thomas Galarneau. This image is freely available for media & nonprofit use.)

UCAR will administer the day-to-day operations of EarthCube under the three-year, $2.8 million agreement with NSF. The EarthCube science support office, currently funded through an NSF grant to the Arizona Geological Survey in Tucson, Arizona, will move to UCAR's Boulder offices starting this month.

EarthCube is designed to help researchers across the geosciences address the challenges of understanding and predicting the complexity of the Earth system, from the geology and topography to the water cycle, atmosphere, and space environment of the planet. This approach is critical for improved understanding of the environment and better safeguarding society. In order to better predict the potential effects of a landfalling hurricane on inland mudslides, for example, scientists from multiple disciplines, including meteorology, hydrology, geography, and geology, need a common platform to work together to collect observations, ingest them into advanced computer models of the Earth system, and analyze and interpret the resulting data.

"The EarthCube Science Support Office will help us find and share the data geoscientists collect and use to answer critical science questions about the Earth," said Eva Zanzerkia, program director in NSF's Division of Earth Sciences.

Ramamurthy said UCAR is well positioned to help EarthCube meet its goals, since UCAR provides technological support to the geosciences community, including its 109 member universities.
UCAR has been involved with EarthCube since NSF launched the initiative.

"Currently researchers are spending an enormous amount of time on routine tasks because there is no data system, database, or data infrastructure where they can get all the information they need in some kind of a uniform way from a single interface," Ramamurthy said. "If EarthCube can facilitate the integration of data from multiple domains in a way that is easier and faster, and if there is interoperability in terms of standards for data to be input into a common environment, then integration becomes more easily possible."

UCAR is a nonprofit consortium of more than 100 member colleges and universities focused on research and training in the atmospheric and related Earth system sciences. UCAR's primary activity is managing the National Center for Atmospheric Research (NCAR) on behalf of NSF, NCAR's sponsor. UCAR also oversees a variety of education and scientific support activities under the umbrella of the UCAR Community Programs, which will administer EarthCube.

A 3D window into a tornado

This simulation was created by NCAR scientist George Bryan to visualize what goes on inside a tornado. The animation is the "high swirl" version in a series that goes from low, to medium, to high. (Courtesy George Bryan, NCAR. This image is freely available for media & nonprofit use.)

May 17, 2016 | What's really going on inside a tornado? How fast are the strongest winds, and what are the chances that any given location will experience them when a tornado passes by?

Due to the difficulties of measuring wind speeds in tornadoes, scientists don't have answers to these questions. However, a collaborative project between researchers at the University of Miami and NCAR has been seeking clues with new, highly detailed computer simulations of tornado wind fields.

The simulations can be viewed in a series of animations, created by NCAR scientist George Bryan, that provide a 3D window into the evolving wind fields of idealized tornadoes at different rates of rotation. In the "high-swirl" animation shown here, which depicts a powerful tornado with 200-plus mph winds, the purple tubelike structures depict the movements of rapidly rotating vortices. Near-surface winds are represented by colors ranging from light blue (less than 20 meters per second, or 45 mph) to deep red (more than 100 meters per second, or 224 miles per hour). The vortices and winds are contained within a condensation cloud that rises more than 500 meters (1,640 feet) above the surface.

Such visualizations can help atmospheric scientists better understand the structures of tornadoes, as well as the shifting location and strength of maximum wind speeds. Bryan also uses them in presentations to meteorology students.

"When you make these 3D visualizations and then animate them, they give you a sense of how the flow evolves and how the turbulence changes," Bryan said. "These are details you don't see by just looking at a photograph."

For example, he learned from the visualization that the rotating tubes tilt backward against the flow at higher altitudes. These are the kinds of details that can eventually help scientists better understand these complex storms. The information is also critical for public safety officials and engineers.

"If you're an engineer and designing a building, you want to know details like how much greater is peak wind over average wind in a tornado," Bryan said. "We'll get questions from engineers asking about the details of wind gusts in those purple tubes."

Bryan is collaborating on the simulations with Dave Nolan, chair of Miami's Department of Atmospheric Sciences.

To create the animation, Bryan used innovative NCAR software that enables researchers in the atmospheric and related sciences to analyze and interpret results from large computer models. VAPOR (Visualization and Analysis Platform for Ocean, Atmosphere, and Solar Researchers) is an interactive 3D visualization environment for both animations and still-frame images. The open-source software can be downloaded and used on personal computers. VAPOR was developed at NCAR in partnership with the University of California at Davis and Ohio State University. Funding comes from the National Science Foundation and the Korea Institute of Science and Technology Information.

Writer/contact:
David Hosansky

Funder:
National Science Foundation

Collaborator:
University of Miami

Wrangling observations into models

April 4, 2016 | If scientists could directly measure the properties of all the water throughout the world's oceans, they wouldn't need help from NCAR scientist Alicia Karspeck. But since large expanses of the oceans are beyond the reach of observing instruments, Karspeck's work is critical for those who want estimates of temperature, salinity, and other properties of water around the globe. Scientists need these estimates to better understand the world's climate system and how it is changing.

"It's painstaking work, but my hope is it will lead to major advances in climate modeling and long-term prediction," Karspeck said.

She is one of a dozen or so researchers at NCAR who spend their days on data assimilation, a field that is becoming increasingly important for the geosciences and other areas of research. Broadly speaking, data assimilation is any method of enabling computer models to utilize relevant observations. Part science and part art, it involves figuring out how to get available measurements — which may be sparse, tightly clustered, or irregularly scattered — into models that tend to simplify the world by breaking it into gridded boxes.

Commonly used in weather forecasting, the technique can improve simulations and help scientists predict future events with more confidence. It can also identify deficiencies in both models and observations. As models have become more powerful and observations more numerous, the technique has become so critical that NCAR last year launched a Data Assimilation Program to better leverage expertise across its seven labs.

"Activities in data assimilation have grown well beyond traditional applications in numerical weather prediction for the atmosphere and now span across NCAR's laboratories," said NCAR Director Jim Hurrell. "The Data Assimilation program is designed to enhance data assimilation research at NCAR, while at the same time serving the broader U.S. research community."

Scientists are using data assimilation techniques to input a range of North American observations into experimental, high-resolution U.S. forecasts. These real-time ensemble forecasts are publicly available while they're being tested. (©UCAR. This image is freely available for media & nonprofit use.)

Improving prediction

Created by the NCAR Directorate, the Data Assimilation Program is designed to advance prediction of events ranging from severe weather and floods to air pollution outbreaks and peaks in the solar cycle. One of its goals is to encourage collaborations among data assimilation experts at NCAR and the larger research community. For example, scientists in several labs are joining forces to apply data assimilation methods to satellite measurements to create a database of global winds and other atmospheric properties. This database will then be used for a broad range of climate and weather studies.

The program also provides funding to hire postdocs at NCAR to focus on data assimilation projects as well as for a software engineer to support such activities.

"By bringing money to the table, we're building up data assimilation capability across NCAR," said NCAR Senior Scientist Chris Snyder, who coordinates the Data Assimilation Program.
"This is critical because data assimilation provides a framework to scientists throughout the atmospheric and related sciences who need to assess where the uncertainties are and how a given observation can help."

NCAR Senior Scientist Jeff Anderson, who oversees the Data Assimilation Research Testbed (DART), says that data assimilation has become central for the geosciences. DART is a software environment that helps researchers develop data assimilation methods and observations with various computer models.

"I think the Data Assimilation Program is a huge win for NCAR and the entire atmospheric sciences community," Anderson said. "The scientific method is about taking observations of the world and making sense of them, and data assimilation is fundamental for applying the scientific method to the geosciences as well as to other research areas."

From oceans to Sun

Here are examples of how data assimilation is advancing our understanding of atmospheric and related processes from ocean depths to the Sun's interior:

Oceans. Karspeck is using data assimilation to estimate water properties and currents throughout the world's oceans. This is a computationally demanding task that requires feeding observations into the NCAR-based Community Earth System Model, simulating several days of ocean conditions on the Yellowstone supercomputer, and using those results to update the conditions in the model and run another simulation. The good news: the resulting simulations match well with historical records, indicating that the data assimilation approach is working. "My goal is to turn this into a viable system for researchers," Karspeck said.

Air quality. Atmospheric chemists at NCAR are using data assimilation of satellite observations to improve air quality models that currently draw on limited surface observations of pollutants. For example, assimilating satellite observations would show the effect of emissions from a wildfire in Montana on downwind air quality, such as in Chicago. "We've done a lot of work to speed up the processing time and the results are promising," said NCAR scientist Helen Worden. "The model simulations after assimilating satellite carbon monoxide data are much closer to actual air quality conditions."

Weather forecasting. Data assimilation is helping scientists diagnose problems with weather models. For example, why do models consistently overpredict or underpredict temperatures near the surface? Using data assimilation, NCAR scientist Josh Hacker discovered that models incorrectly simulate the transfer of heat from the ground into the atmosphere. "With data assimilation, you're repeatedly confronting the model with observations so you can very quickly see how things go wrong," he said.

Solar cycle. Scientists believe the 11-year solar cycle is driven by mysterious processes deep below the Sun's surface, such as the movements of cells of plasma between the Sun's lower latitudes and poles. To understand the causes of the cycle and ultimately predict it, they are turning to data assimilation to augment observations of magnetic fields and plasma flow at the Sun's surface and feed the resulting information into a computer model of subsurface processes. "We are matching surface conditions to the model, such as the pattern and speed of the plasma flows and evolving magnetic fields," said NCAR scientist Mausumi Dikpati.
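At the heart of all of these applications is the same basic step: blending a model forecast with an observation, weighting each by its uncertainty. The single-variable sketch below shows that step in its simplest form; real systems such as DART apply far more sophisticated methods to millions of variables, and the values here are invented.

```python
# Simplest form of the analysis step shared by these applications: combine a
# model forecast with an observation, weighted by their uncertainties. DART
# and operational systems do this for millions of variables with much more
# sophisticated algorithms; this single-variable version only shows the idea.

def assimilate(forecast, forecast_var, observation, obs_var):
    """Blend forecast and observation; return (analysis, analysis_variance)."""
    gain = forecast_var / (forecast_var + obs_var)   # weight given to the observation
    analysis = forecast + gain * (observation - forecast)
    analysis_var = (1.0 - gain) * forecast_var       # smaller than either input variance
    return analysis, analysis_var

# Invented example: the model guesses a sea surface temperature of 18.0 C with
# variance 1.0; a ship reports 19.2 C with variance 0.25.
analysis, analysis_var = assimilate(forecast=18.0, forecast_var=1.0,
                                    observation=19.2, obs_var=0.25)
print(f"analysis = {analysis:.2f} C, analysis variance = {analysis_var:.2f}")
```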
Capturing data. In addition to helping scientists improve models, the new Data Assimilation Program is also fostering discussions about observations. NCAR senior scientist Wen-Chau Lee and colleagues who are experts in gathering observations are conferring with computer modelers over how to process the data so the models can readily ingest it. One challenge, for example, is that radars may take observations every 150 meters, whereas the models often have a resolution of 1-3 kilometers. Inputting the radar observations into the models requires advanced quality control techniques, including coordinate transformation (modifying coordinates from observations to the models) and data thinning (reducing the density of observations while retaining the basic information).

"We are modifying our quality control procedures to make sure that the flow of data is smooth," Lee said. "With data assimilation, the first word is 'data,'" he added. "Without data, without observations, there is no assimilation."

Writer/contact:
David Hosansky, Manager of Media Relations

Funders:
NCAR Directorate
National Science Foundation
Additional funding agencies for specific projects
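As a footnote to the data-thinning step Lee describes, here is a minimal sketch of averaging densely spaced radar gates into one value per model grid cell. The velocities are invented and the averaging is deliberately simple; it is not NCAR's actual quality control code.

```python
# Sketch of data thinning: radar gates every 150 m are far denser than a
# 3-km model grid, so nearby gates are averaged into one value per model
# cell before assimilation. Invented values; deliberately simple averaging.

import random

RADAR_GATE_SPACING_M = 150.0
MODEL_GRID_SPACING_M = 3000.0
GATES_PER_CELL = int(MODEL_GRID_SPACING_M / RADAR_GATE_SPACING_M)   # 20 gates per model cell

rng = random.Random(42)
gate_velocities = [10.0 + 0.002 * i + rng.gauss(0.0, 0.5) for i in range(100)]  # fake radial winds, m/s

thinned = [
    sum(gate_velocities[i:i + GATES_PER_CELL]) / GATES_PER_CELL
    for i in range(0, len(gate_velocities), GATES_PER_CELL)
]

print(f"{len(gate_velocities)} raw gates -> {len(thinned)} thinned observations")
print("thinned values (m/s):", [round(v, 2) for v in thinned])
```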

The quest to predict severe weather sooner

January 26, 2016 | Weather forecasts have become increasingly reliable thanks to improvements over the past several decades in computer modeling and observational equipment. However, when it comes to severe weather, that reliability typically begins to deteriorate beyond a two-day forecast.

To provide an accurate severe weather outlook three or more days in advance, forecasters need to capture the fine-scale behavior of clouds, vertical wind shear and other local processes, as well as the global atmospheric conditions surrounding the local region of interest. Regional models examine fine-scale conditions at high resolution, but they have a difficult time with accuracy between the area of interest and the surrounding region. Errors in these so-called boundary regions can distort the results for the target area. Simulating the entire globe in high resolution would help, but that takes an exorbitant amount of computing time.

MPAS's variable mesh enables smooth transitions from higher resolution (over North America in this example) to coarser resolution over the rest of the globe. (©UCAR. This image is freely available for media & nonprofit use.)

A global software platform called Model for Prediction Across Scales, or MPAS, aims to resolve those issues. It offers a new way of simulating the atmosphere while providing scientists with more flexibility when focusing on regional conditions. Its development comes at a time when the U.S. National Weather Service wants to increase the lead time and accuracy of forecasts of severe storms, including hurricanes, tornadoes and flash floods, so communities can be better prepared.

Unlike traditional three-dimensional models that calculate atmospheric conditions at multiple points within a block-shaped grid, MPAS uses a hexagonal mesh resembling a soccer ball or honeycomb that can be stretched wide or compressed for higher resolution as needed. "The mesh allows for a smooth transition between areas of coarse and fine resolution, with the goal of eliminating boundary distortions," said NCAR Senior Scientist William Skamarock, one of the developers of MPAS.

Look globally as well as locally

Vertical wind shear, or the change of winds with height, is a critical factor in determining thunderstorm severity. MPAS is able to simulate vertical wind shear at higher resolutions over local areas of interest, as well as cloud behavior and other processes vital to severe weather prediction.
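The bulk vertical wind shear forecasters look at is simply the vector difference between the wind at an upper level and the wind near the surface. The short sketch below computes that quantity for invented wind values; it illustrates the definition rather than anything specific to MPAS.

```python
# Bulk vertical wind shear: magnitude of the vector wind difference between an
# upper level and the near-surface level. Wind values are invented.

import math

def bulk_shear(u_sfc, v_sfc, u_upper, v_upper):
    """Magnitude (m/s) of the vector wind difference between two levels."""
    du = u_upper - u_sfc
    dv = v_upper - v_sfc
    return math.hypot(du, dv)

# Near-surface wind: 5 m/s from the south; 6-km wind: 25 m/s from the west.
shear = bulk_shear(u_sfc=0.0, v_sfc=5.0, u_upper=25.0, v_upper=0.0)
print(f"0-6 km bulk shear: {shear:.1f} m/s")   # strong shear, supportive of organized storms
```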
In tests, MPAS has performed well in predicting springtime thunderstorms and other severe weather over the Great Plains. It also has produced realistic simulations of certain tropical cyclones, including Hurricane Sandy of 2012. However, along with other U.S. models, it missed the mark on 2015's Hurricane Joaquin.

Longer lead times ahead

NOAA has reported that MPAS provided realistic, fine-scale detail for Hurricane Sandy in 2012 and for 2013 springtime weather over the continental U.S., including the tornado that struck Moore, Okla. "MPAS also did reasonably well in providing five-day forecasts during a NOAA hazardous weather experiment last May," Skamarock said.

MPAS's 48-hour forecast for July 8, 2015, accurately predicted heavy rain for northern Texas and much of Oklahoma. Abilene wound up getting 8.25 inches, its wettest day since record keeping started in 1885. (©UCAR. This image is freely available for media & nonprofit use.)

In spring 2015, MPAS also won high marks for the accuracy of its three-day forecasts, which helped guide research aircraft missions during PECAN (Plains Elevated Convection at Night), a major field campaign to study nighttime thunderstorms on the Great Plains. NCAR Project Scientist Stan Trier, who worked as a forecaster on the PECAN campaign, said the MPAS forecasts were usually the first he would look at for planning purposes, because MPAS was the only model with the resolution to indicate possible storm structures beyond 48 hours. Then, as the time to make decisions on overnight field operations approached, he would update these earlier forecasts with new information from shorter-range, high-resolution models.

"There were multiple situations where MPAS did quite well at these longer time ranges," Trier said. "Forecasts with two to three days of lead time are less accurate than one-day forecasts. This is expected. But overall, I would definitely say that MPAS was a useful part of the PECAN forecasting process."

Most recently, MPAS has been tested in Antarctica and during the 2015 tropical cyclone season in the Atlantic and Pacific oceans. It also is being used as a component within the NCAR-based Community Earth System Model for long-term climate prediction, and has been tested at the Taiwan Typhoon and Flood Research Institute to predict severe weather events in that country.

Even if MPAS emerges as the National Weather Service’s next-generation weather model, there will still be a role for the Weather Research and Forecasting (WRF) model hosted by NCAR. WRF, an open-source model used widely around the world, is especially well suited to local and regional weather prediction in the midlatitudes. And while MPAS's variable-mesh design reduces computing requirements, as a global model it still uses more computing resources than WRF. "With MPAS, we want to predict severe thunderstorms with a mesh spacing of a few kilometers," Skamarock said. "That takes a lot of computer power."

Writer/contact: Jeff Smith, Science Writer and Public Information Officer

NCAR announces powerful new supercomputer for scientific discovery

BOULDER—The National Center for Atmospheric Research (NCAR) announced today that it has selected its next supercomputer for advancing atmospheric and Earth science, following a competitive open procurement process. The new machine will help scientists lay the groundwork for improved predictions of a range of phenomena, from hour-by-hour risks associated with thunderstorm outbreaks to the timing of the 11-year solar cycle and its potential impacts on GPS and other sensitive technologies.

The new system, named Cheyenne, will be installed this year at the NCAR-Wyoming Supercomputing Center (NWSC) and become operational at the beginning of 2017. Cheyenne will be built by Silicon Graphics International Corp. (SGI) in conjunction with centralized file system and data storage components provided by DataDirect Networks (DDN). The SGI high-performance computer will be a 5.34-petaflop system, meaning it can carry out 5.34 quadrillion calculations per second. It will be capable of more than 2.5 times the amount of scientific computing performed by Yellowstone, the current NCAR supercomputer.

Funded by the National Science Foundation and the state of Wyoming through an appropriation to the University of Wyoming, Cheyenne will be a critical tool for researchers across the country studying climate change, severe weather, geomagnetic storms, seismic activity, air quality, wildfires, and other important geoscience topics. Since the supercomputing facility in Wyoming opened its doors in 2012, more than 2,200 scientists from more than 300 universities and federal labs have used its resources.

Six clips of scientific visualizations created with the help of the Yellowstone supercomputer.

“We’re excited to bring more supercomputing power to the scientific community,” said Anke Kamrath, director of operations and services at NCAR’s Computational and Information Systems Laboratory. “Whether it’s the threat of solar storms or a heightened risk in certain severe weather events, this new system will help lead to improved predictions and strengthen society’s resilience to potential disasters.”

“Researchers at the University of Wyoming will make great use of the new system as they continue their work into better understanding such areas as the surface and subsurface flows of water and other liquids, cloud processes, and the design of wind energy plants,” said William Gern, vice president of research and economic development at the University of Wyoming. “UW’s relationship with NCAR through the NWSC has greatly strengthened our scientific computing and data-centric research. It’s helping us introduce the next generation of scientists and engineers to these endeavors.”

The NWSC is located in Cheyenne, and the name of the new system was chosen to honor the support it has received from the people of that city. It also commemorates the upcoming 150th anniversary of the city, which was founded in 1867 and named for the American Indian Cheyenne nation.

Increased power, greater efficiency

The new data storage system for Cheyenne will be integrated with NCAR’s existing GLADE file system. The DDN storage will provide an initial capacity of 20 petabytes, expandable to 40 petabytes with the addition of extra drives. This, combined with the current 16 petabytes of GLADE, will total 36 petabytes of high-speed storage.
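As a quick check on how these capacity and speed figures combine, here is a minimal arithmetic sketch using only numbers quoted in this article; the 56-petabyte expanded total is implied by those numbers rather than stated directly.

```python
# Arithmetic check on the figures quoted above. All inputs come from the
# article; the expanded combined total is simply the implied sum.
peak_petaflops = 5.34
print(f"{peak_petaflops * 1e15:.3g} calculations per second")  # 5.34 quadrillion

ddn_initial_pb, ddn_expanded_pb, glade_pb = 20, 40, 16
print(ddn_initial_pb + glade_pb)   # 36 PB of combined high-speed storage, as stated
print(ddn_expanded_pb + glade_pb)  # 56 PB if the DDN system is later fully expanded
```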
The new DDN system also will transfer data at a rate of 200 gigabytes per second, more than twice as fast as the current file system’s rate of 90 gigabytes per second. The system will include powerful Intel Xeon processors, whose performance will be augmented through optimization work done by NCAR and the University of Colorado Boulder. NCAR and the university performed this work through their participation in the Intel Parallel Computing Centers program.

Even with its increased power, Cheyenne will be three times more energy efficient (in floating-point operations per second, or flops, per watt) than Yellowstone, its predecessor, which is itself highly efficient. “The new system will have a peak computation rate of over 3 billion calculations per second for every watt of power consumed," said NCAR’s Irfan Elahi, project manager of Cheyenne and section manager for high-end supercomputing services.

Scientists used the Yellowstone supercomputer to develop this 3-D rendering of a major thunderstorm in July 2011 that caused flooding in Fourmile Canyon west of Boulder. The colors show conditions in the clouds, including ice particles (light blue), graupel (orange), snow (pink), rain (blue), and water (grey). (Image by David Gochis, NCAR. This image is freely available for media & nonprofit use.)

More detailed predictions

High-performance computers such as Cheyenne allow researchers to run increasingly detailed models that simulate complex processes and how they might unfold in the future. These predictions give resource managers and policy experts valuable information for planning ahead and mitigating risk. Some of the areas in which Cheyenne is expected to accelerate research include the following:

Streamflow. Year-ahead predictions of streamflows and associated reservoir levels at a greater level of detail will provide water managers, farmers, and other decision makers with vital information about likely water availability and the potential for drought or flood impacts.

Severe weather. By conducting multiple simultaneous runs (or ensembles) of high-resolution forecast models, scientists will lay the groundwork for more specific predictions of severe weather events, such as the probability that a cluster of intense thunderstorms with the risk of hail or flooding will strike a county at a particular hour.

Solar energy. Specialized models of solar irradiance and cloud cover will be run more frequently and at higher resolution, producing research that will help utilities predict how much energy will be generated by major solar arrays hours to days in advance.

Regional climate change. Scientists will conduct multiple simulations with detailed climate models, predicting how particular regions around the world will experience changing patterns of precipitation and temperature, along with potential impacts from sea level rise, streamflow, and runoff.

Decadal prediction. Ensembles of detailed climate models will also help scientists predict the likelihood of certain climate patterns over a 10-year period, such as the risk of drought for a certain region or changes in Arctic sea ice extent.

Air quality. Scientists will be able to simulate the movement and evolution of air pollutants in far more detail, thereby better understanding the potential health effects of particular types of emissions and working toward improved forecasts of air quality.

Subsurface flows. More accurate and detailed models will enable researchers to better simulate the subsurface flows of water, oil, and gas, leading to a greater understanding of these resources.

Solar storms. Innovative, three-dimensional models of the Sun will lay the groundwork for predictions of the timing and strength of the Sun’s 11-year cycle, as well as for days-ahead forecasts of solar disturbances that can generate geomagnetic storms in Earth’s upper atmosphere.

“Supercomputing is vital to NCAR’s scientific research and applications, giving us a virtual laboratory in which we run experiments that would otherwise be impractical or impossible to do,” said NCAR Director James Hurrell. “Cheyenne will be a key component of the research infrastructure of the United States through its provision of supercomputing specifically tailored for the atmospheric, geospace, and related sciences. The capabilities of this new system will be central to the continued improvement of our ability to understand and predict changes in weather, climate, air quality, and space weather, as well as their impacts on people, ecosystems, and society.”

This series of images, based on a research project run on the Yellowstone supercomputer, shows order and chaos in the Sun's interior dynamo. Turbulent plasma motions (image a) generate a tangled web of magnetic field lines, with opposing "wreaths" of magnetism pointing east (red) or west (blue). Images b and c provide a better look at the magnetic wreaths. (Images by Kyle Augustson, NCAR. This image is freely available for media & nonprofit use.)

Cheyenne Quick Facts

Key features of the new Cheyenne supercomputer system:
5.34-petaflop SGI ICE XA cluster with Intel “Broadwell” processors
More than 4,000 compute nodes
20% of the compute nodes have 128 GB of memory; the remaining ~80% have 64 GB
313 terabytes (TB) of total memory
Mellanox EDR InfiniBand high-speed interconnect
Partial 9D Enhanced Hypercube interconnect topology
SUSE Linux Enterprise Server operating system
Altair PBS Professional workload manager
Intel Parallel Studio XE compiler suite
SGI Management Center and SGI Development Suite
Mellanox Unified Fabric Manager

The new Cheyenne supercomputer and the existing GLADE file system will be complemented by a new centralized parallel file system and data storage components. Key features of the new data storage system:
Four DDN SFA14KX systems
20 petabytes of usable file system space (expandable to 40 petabytes by adding drives)
200 GB per second aggregate I/O bandwidth
3,360 × 8-TB NL SAS drives
48 × 800-GB mixed-use SSD drives for metadata
24 NSD (Network Shared Disk) servers
Red Hat Enterprise Linux operating system
IBM GPFS (General Parallel File System)
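The compute-system quick facts above also hang together arithmetically. The sketch below reproduces the headline peak rate, total memory, and implied power draw from an assumed node configuration; the exact node split, cores per node, clock speed, and flops-per-cycle figures are illustrative assumptions (the article itself states only the node count as "more than 4,000" and the memory percentages), so treat the result as a plausibility check rather than a specification.

```python
# A sketch reproducing Cheyenne's headline figures from an assumed node
# configuration. The node split (864 nodes with 128 GB, 3,168 with 64 GB),
# the 36 cores per node, the 2.3 GHz clock, and the 16 double-precision
# flops per core per cycle are assumptions for illustration only.

nodes_128gb, nodes_64gb = 864, 3_168          # assumed split (~20% / ~80%)
nodes = nodes_128gb + nodes_64gb              # more than 4,000 compute nodes
cores_per_node = 36                           # assumed dual 18-core "Broadwell" CPUs
clock_hz = 2.3e9                              # assumed clock speed
flops_per_cycle = 16                          # assumed AVX2 FMA double-precision rate

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"peak: {peak_flops / 1e15:.2f} petaflops")        # ~5.34, matching the article

total_memory_tb = (nodes_128gb * 128 + nodes_64gb * 64) / 1_000
print(f"memory: {total_memory_tb:.0f} TB")               # ~313 TB, matching the article

# The quoted efficiency of over 3 gigaflops per watt implies a total power
# draw on the order of:
print(f"implied power: {peak_flops / 3e9 / 1e6:.1f} MW")  # roughly 1.8 megawatts
```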

