History of numerical weather prediction
The history of numerical weather prediction considers how the use of current weather conditions as input to mathematical models of the atmosphere and oceans, in order to predict the weather and future sea state, has changed over the years. Though first attempted manually in the 1920s, it was not until the advent of the computer and computer simulation that computation time was reduced to less than the forecast period itself. ENIAC was used to create the first forecasts via computer in 1950, and over the years more powerful computers have been used to increase the size of initial datasets as well as to include more complicated versions of the equations of motion. The development of global forecasting models led to the first climate models. The development of limited-area models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s.
Because the output of forecast models based on atmospheric dynamics requires corrections near ground level, model output statistics (MOS) were developed in the 1970s and 1980s for individual forecast points. MOS apply statistical techniques to post-process the output of dynamical models with the most recent surface observations and the forecast point's climatology. This technique can correct for model resolution as well as model biases. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends only to about two weeks into the future, since the density and quality of observations, together with the chaotic nature of the partial differential equations used to calculate the forecast, introduce errors which double every five days. The use of model ensemble forecasts since the 1990s helps to define the forecast uncertainty and to extend weather forecasting farther into the future than otherwise possible.
Background
Until the end of the 19th century, weather prediction was entirely subjective and based on empirical rules, with only limited understanding of the physical mechanisms behind weather processes. In 1901 Cleveland Abbe, founder of the United States Weather Bureau, proposed that the atmosphere is governed by the same principles of thermodynamics and hydrodynamics that were studied in the previous century. In 1904, Vilhelm Bjerknes derived a two-step procedure for model-based weather forecasting. First, a diagnostic step is used to process data to generate initial conditions, which are then advanced in time by a prognostic step that solves the initial value problem. He also identified seven variables that defined the state of the atmosphere at a given point: pressure, temperature, density, humidity, and the three components of the flow velocity vector. Bjerknes pointed out that equations based on mass continuity, conservation of momentum, the first and second laws of thermodynamics, and the ideal gas law could be used to estimate the state of the atmosphere in the future through numerical methods. With the exception of the second law of thermodynamics, these equations form the basis of the primitive equations used in present-day weather models.
In 1922, Lewis Fry Richardson published the first attempt at forecasting the weather numerically. Using a hydrostatic variation of Bjerknes's primitive equations, Richardson produced by hand a 6-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. His forecast calculated that the change in surface pressure would be 145 millibars (4.3 inHg), an unrealistic value incorrect by two orders of magnitude. The large error was caused by an imbalance in the pressure and wind velocity fields used as the initial conditions in his analysis.
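In modern notation (a schematic summary rather than Bjerknes's or Richardson's original formulation), the laws listed above correspond to a coupled system of roughly the following form, where $\mathbf{u}$ is the three-dimensional flow velocity, $\rho$ the density, $p$ the pressure, $T$ the temperature, $\boldsymbol{\Omega}$ the Earth's rotation vector, $\mathbf{g}$ gravity, $\mathbf{F}$ friction, and $Q$ the diabatic heating rate:

$$
\underbrace{\frac{D\mathbf{u}}{Dt} = -\frac{1}{\rho}\nabla p - 2\boldsymbol{\Omega}\times\mathbf{u} + \mathbf{g} + \mathbf{F}}_{\text{conservation of momentum}},\qquad
\underbrace{\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0}_{\text{mass continuity}},
$$
$$
\underbrace{c_p\frac{DT}{Dt} - \frac{1}{\rho}\frac{Dp}{Dt} = Q}_{\text{first law of thermodynamics}},\qquad
\underbrace{p = \rho R T}_{\text{ideal gas law}}.
$$

Together with a conservation equation for water vapour, this closes the system for Bjerknes's seven variables. In the hydrostatic variation Richardson used, the vertical component of the momentum equation is replaced by the hydrostatic balance $\partial p/\partial z = -\rho g$.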
The first successful numerical prediction was performed using the ENIAC digital computer in 1950 by a team composed of the American meteorologists Jule Charney, Philip Thompson, and Larry Gates, the Norwegian meteorologist Ragnar Fjørtoft, the applied mathematician John von Neumann, and the computer programmer Klara Dan von Neumann. They used a simplified form of atmospheric dynamics based on solving the barotropic vorticity equation over a single layer of the atmosphere, by computing the geopotential height of the atmosphere's 500-millibar pressure surface. This simplification greatly reduced demands on computer time and memory, so the computations could be performed on the relatively primitive computers of the day. When news of the first weather forecast by ENIAC reached Richardson in 1950, he remarked that the results were an "enormous scientific advance." The first calculations for a 24-hour forecast took ENIAC nearly 24 hours to produce, but Charney's group noted that most of that time was spent in "manual operations", and expressed hope that forecasting the weather before it occurs would soon be realized.
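In a commonly written form (given here for illustration; the ENIAC team's exact numerical formulation is not reproduced), the barotropic vorticity equation states that absolute vorticity is conserved following the nondivergent flow:

$$
\frac{\partial \zeta}{\partial t} = -\mathbf{v}_\psi\cdot\nabla\,(\zeta + f),\qquad
\zeta = \nabla^2\psi,\qquad
\mathbf{v}_\psi = \hat{\mathbf{k}}\times\nabla\psi,
$$

where $\psi$ is the streamfunction of the flow on the forecast pressure surface, $\zeta$ the relative vorticity, and $f$ the Coriolis parameter. Because it involves a single prognostic variable on a single level, the problem was tractable on the hardware of 1950.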
In the United Kingdom, the Meteorological Office's first numerical weather prediction was completed by F. H. Bushby and Mavis Hinds in 1952 under the guidance of John Sawyer. These experimental forecasts were generated using a 12 × 8 grid with a grid spacing of 260 km and a one-hour time step, and required four hours of computing time for a 24-hour forecast on the EDSAC computer at the University of Cambridge and the LEO computer developed by J. Lyons and Co. Following these initial experiments, work moved to the Ferranti Mark 1 computer at the Manchester University Department of Electrical Engineering, and in 1959 a Ferranti Mercury computer, known as 'Meteor', was installed at the Met Office.
Early years
In September 1954, Carl-Gustaf Rossby assembled an international group of meteorologists in Stockholm and produced the first operational forecast based on the barotropic equation. Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy, and Weather Bureau. The JNWPU model was originally a three-layer barotropic model, also developed by Charney, and it modeled only the atmosphere in the Northern Hemisphere. In 1956, the JNWPU switched to a two-layer thermotropic model developed by Thompson and Gates. The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated using two geopotential height surfaces and the average thermal wind between them. However, due to the low skill shown by the thermotropic model, the JNWPU reverted to the single-layer barotropic model in 1958. The Japan Meteorological Agency became the third organization to initiate operational numerical weather prediction, in 1959. The first real-time forecasts made by Australia's Bureau of Meteorology in 1969 for portions of the Southern Hemisphere were also based on the single-layer barotropic model.
Later models used more complete equations for atmospheric dynamics and thermodynamics. In 1959, Karl-Heinz Hinkelmann produced the first reasonable primitive-equation forecast, 37 years after Richardson's failed attempt. Hinkelmann did so by removing small oscillations from the numerical model during initialization. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. Later additions to primitive-equation models allowed additional insight into different weather phenomena. In the United States, solar radiation effects were added to the primitive-equation model in 1967; moisture effects and latent heat were added in 1968; and feedback effects from rain on convection were incorporated in 1971. Three years later, the first global forecast model was introduced. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.
Global forecast models
A global forecast model is a weather forecasting model which initializes and forecasts the weather throughout the Earth's troposphere. It is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations, along with the ideal gas law, are used to evolve the density, pressure, and potential temperature scalar fields and the flow velocity vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some high-resolution primitive-equation models as well. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods are used to obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite-difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical.
The National Meteorological Center's Global Spectral Model was introduced during August 1980. The European Centre for Medium-Range Weather Forecasts (ECMWF) model debuted on May 1, 1985. The United Kingdom Met Office has been running their global model since the late 1980s, adding a 3D-Var data assimilation scheme in mid-1999. The Canadian Meteorological Centre has been running a global model since 1991. The United States ran the Nested Grid Model (NGM) from 1987 to 2000, with some features lasting as late as 2009. Between 2000 and 2002, the Environmental Modeling Center ran the Aviation (AVN) model for shorter-range forecasts and the Medium Range Forecast (MRF) model at longer time ranges. During this time, the AVN model was extended to the end of the forecast period, eliminating the need for the MRF and thereby replacing it. In late 2002, the AVN model was renamed the Global Forecast System (GFS). The German Weather Service has been running their global hydrostatic model, the GME, using a hexagonal icosahedral grid since 2002. The GFS is slated to eventually be supplanted in the mid-2010s by the Flow-following, finite-volume Icosahedral Model, which, like the GME, is gridded on a truncated icosahedron.
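As a minimal sketch of the grid-point (finite-difference) approach described above, the following toy example in Python advances a one-dimensional advection equation with a first-order upwind scheme; it is illustrative only, and the grid spacing, wind speed, and initial field are arbitrary choices rather than settings from any operational model.

```python
import numpy as np

def upwind_step(u, c, dx, dt):
    """One upwind finite-difference step for du/dt + c*du/dx = 0 (c > 0)."""
    # np.roll provides periodic boundary conditions; (u - np.roll(u, 1)) is the
    # backward difference u[i] - u[i-1].
    return u - c * dt / dx * (u - np.roll(u, 1))

nx, c, dx = 100, 10.0, 1.0e5        # 100 grid points, 10 m/s flow, 100 km spacing
dt = 0.5 * dx / c                   # time step chosen to satisfy the CFL condition
x = np.arange(nx) * dx
u = np.exp(-((x - 5.0e6) ** 2) / (1.0e6) ** 2)   # smooth initial feature

for _ in range(200):                # march the solution forward in time
    u = upwind_step(u, c, dx, dt)
```

Spectral models take a different route for the horizontal dimensions, expanding the fields in spherical harmonics so that horizontal derivatives can be evaluated analytically in spectral space.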
Global climate models
In 1956, Norman A. Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. Following Phillips's work, several groups began working to create general circulation models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined into the 2000s. In 1986, efforts began to initialize and model soil and vegetation types, which led to more realistic forecasts. For example, the Center for Ocean-Land Atmosphere Studies model showed a warm temperature bias of 2–4 °C and a low precipitation bias due to incorrect parameterization of crop and vegetation type across the central United States. Coupled ocean-atmosphere climate models such as the Hadley Centre for Climate Prediction and Research's HadCM3 model are currently being used as inputs for climate change studies. The importance of gravity waves was neglected within these models until the mid-1980s. Now, gravity waves are required within global climate models in order to properly simulate regional and global scale circulations, though their broad spectrum makes their incorporation complicated. The Climate System Model was developed at the National Center for Atmospheric Research in January 1994.
Limited-area models
The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to explicitly resolve smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to supply conditions at the edge of their domain (boundary conditions) so that systems from outside the regional model domain can move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions at the edge of the regional model, as well as by errors attributable to the regional model itself.
In the United States, the first operational regional model, the Limited-area Fine Mesh model, was introduced in 1971. Its development was halted, or frozen, in 1986. The NGM debuted in 1987 and was also used to create model output statistics for the United States. Its development was frozen in 1991. The Eta model was implemented for the United States in 1993 and was in turn upgraded to the North American Mesoscale model (NAM) in 2006. The U.S. also offers the Rapid Refresh for short-range and high-resolution applications; both the Rapid Refresh and the NAM are built on the same framework, the Weather Research and Forecasting model (WRF). Météo-France has been running their Action de Recherche Petite Échelle Grande Échelle mesoscale model for France, based upon the ECMWF global model, since 1995. In July 1996, the Bureau of Meteorology implemented the Limited Area Prediction System. The Canadian Regional Finite-Elements model went into operational use on April 22, 1986. It was followed by the Canadian Global Environmental Multiscale mesoscale model on February 24, 1997.
The German Weather Service developed the High Resolution Regional Model in 1999; it is widely run within the operational and research meteorological communities and is based on hydrostatic assumptions. The Antarctic Mesoscale Prediction System was developed for the southernmost continent in 2000 by the United States Antarctic Program. The German non-hydrostatic Lokal-Modell for Europe has been run since 2002, and an increase in its areal domain became operational on September 28, 2005. The Japan Meteorological Agency has run a high-resolution, non-hydrostatic mesoscale model since September 2004.
Air quality models
The technical literature on air pollution dispersion is quite extensive and dates back to the 1930s and earlier. One of the early air pollutant plume dispersion equations was derived by Bosanquet and Pearson. Their equation did not assume a Gaussian distribution, nor did it include the effect of ground reflection of the pollutant plume. Sir Graham Sutton derived an air pollutant plume dispersion equation in 1947 which did include the assumption of Gaussian distribution for the vertical and crosswind dispersion of the plume, and which also included the effect of ground reflection of the plume. Under the stimulus provided by the advent of stringent environmental control regulations, there was an immense growth in the use of air pollutant plume dispersion calculations between the late 1960s and today. A great many computer programs for calculating the dispersion of air pollutant emissions were developed during that period of time, and they were called "air dispersion models". The basis for most of those models was the Complete Equation for Gaussian Dispersion Modeling of Continuous, Buoyant Air Pollution Plumes. The Gaussian air pollutant dispersion equation requires the input of H, the pollutant plume's centerline height above ground level, which is the sum of Hs (the height of the plume's emission source point) and ΔH (the plume rise due to the plume's buoyancy).
To determine ΔH, many if not most of the air dispersion models developed between the late 1960s and the early 2000s used what are known as "the Briggs equations." G. A. Briggs first published his plume rise observations and comparisons in 1965. In 1968, at a symposium sponsored by the Conservation of Clean Air and Water in Europe, he compared many of the plume rise models then available in the literature. In that same year, Briggs also wrote the section of the publication edited by Slade dealing with the comparative analyses of plume rise models. That was followed in 1969 by his classical critical review of the entire plume rise literature, in which he proposed a set of plume rise equations which have become widely known as "the Briggs equations". Subsequently, Briggs modified his 1969 plume rise equations in 1971 and in 1972.
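One common textbook form of the Gaussian dispersion equation for a continuous, elevated point source (shown here for illustration; operational models add stability-dependent coefficients and further correction terms) is

$$
C(x,y,z) = \frac{Q}{2\pi\, u\, \sigma_y \sigma_z}\,
\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
\left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
      + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right],
$$

where $C$ is the pollutant concentration, $Q$ the emission rate, $u$ the wind speed, $\sigma_y$ and $\sigma_z$ the crosswind and vertical dispersion coefficients, and the second exponential inside the brackets represents reflection of the plume from the ground; $H = H_s + \Delta H$ as described above.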
The Urban Airshed Model, a regional forecast model for the effects of air pollution and acid rain, was developed by a private company in the United States in 1970. Development of this model was taken over by the Environmental Protection Agency, and it was improved in the mid to late 1970s using results from a regional air pollution study. While developed in California, this model was later used in other areas of North America, Europe, and Asia during the 1980s. The Community Multiscale Air Quality model is an open-source air quality model that has been run within the United States in conjunction with the NAM mesoscale model since 2004. The first operational air quality model in Canada, the Canadian Hemispheric and Regional Ozone and NOx System, began to be run in 2001. It was replaced with the Global Environmental Multiscale model – Modelling Air quality and Chemistry model in November 2009.
Tropical cyclone models
During 1972, the first model to forecast storm surge along the continental shelf was developed, known as the Special Program to List the Amplitude of Surges from Hurricanes. In 1978, the first hurricane-tracking model based on atmospheric dynamics – the movable fine-mesh model – began operating. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the 1980s that numerical weather prediction showed skill, and not until the 1990s that it consistently outperformed statistical or simple dynamical models. In the early 1980s, the assimilation of satellite-derived winds from water vapor, infrared, and visible satellite imagery was found to improve tropical cyclone track forecasting. The Geophysical Fluid Dynamics Laboratory hurricane model was used for research purposes between 1973 and the mid-1980s. Once it was determined that it could show skill in hurricane prediction, a multi-year transition transformed the research model into an operational model which could be used by the National Weather Service in 1995.
The Hurricane Weather Research and Forecasting model is a specialized version of the Weather Research and Forecasting model and is used to forecast the track and intensity of tropical cyclones. The model was developed by the National Oceanic and Atmospheric Administration, the U.S. Naval Research Laboratory, the University of Rhode Island, and Florida State University. It became operational in 2007. Despite improvements in track forecasting, predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill over dynamical guidance.
Ocean models
The first ocean wave models were developed in the 1960s and 1970s. These models tended to overestimate the role of wind in wave development and to underplay wave-wave interactions. A lack of knowledge concerning how waves interacted with each other, assumptions regarding a maximum wave height, and deficiencies in computer power limited the performance of the models. After experiments were performed in 1968, 1969, and 1973, wind input from the Earth's atmosphere was weighted more accurately in the predictions. A second generation of models was developed in the 1980s, but they could not realistically model swell nor depict wind-driven waves caused by rapidly changing wind fields, such as those within tropical cyclones. This led to the development of a third generation of wave models from 1988 onward.
Within this third generation of models, the spectral wave transport equation is used to describe the change in the wave spectrum over changing topography. It simulates wave generation, wave movement, wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface.
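In schematic form (the exact formulation differs between third-generation models), the spectral transport, or energy balance, equation solved by these models can be written

$$
\frac{\partial F}{\partial t} + \nabla\cdot\left(\mathbf{c}_g F\right)
  = S_{\mathrm{in}} + S_{\mathrm{nl}} + S_{\mathrm{ds}},
$$

where $F$ is the wave energy spectrum as a function of frequency and direction, $\mathbf{c}_g$ the group velocity, and the source terms on the right represent wind input, nonlinear energy transfer between waves, and dissipation (for example through whitecapping), matching the processes listed above.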
Model output statistics
Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions near the ground, statistical corrections were developed to attempt to resolve this problem. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations, and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models by 1976. The United States Air Force developed its own set of MOS based upon its dynamical weather model by 1983.
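A minimal sketch of the MOS idea is shown below, using synthetic data and an ordinary least-squares fit; the predictors (model 2 m temperature, model wind speed, and station climatology) and all coefficients are illustrative assumptions, not the National Weather Service's actual predictor set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic archive of past model forecasts and observations at one station.
n = 500
day = rng.integers(0, 365, n)
climo = 12.0 + 10.0 * np.sin(2.0 * np.pi * day / 365.0)   # station climatology (deg C)
model_t2m = climo + rng.normal(0.0, 4.0, n)               # raw model 2 m temperature
model_wind = rng.normal(5.0, 2.0, n)                      # raw model 10 m wind speed
# Pretend the raw model runs 1.5 deg C too warm and damps part of the signal.
obs = 0.9 * model_t2m + 0.1 * climo + 0.2 * model_wind - 1.5 + rng.normal(0.0, 1.0, n)

# Fit the MOS regression equation by least squares on the archived pairs.
A = np.column_stack([np.ones(n), model_t2m, model_wind, climo])
coeffs, *_ = np.linalg.lstsq(A, obs, rcond=None)

# Apply the fitted equation to post-process a new raw model forecast.
new_raw = np.array([1.0, 20.0, 6.0, 14.0])   # [1, t2m, wind, climatology] for one day
print(f"MOS-corrected 2 m temperature: {new_raw @ coeffs:.1f} C")
```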
Ensembles
As proposed by Edward Lorenz in 1963, it is impossible for long-range forecasts (those made more than two weeks in advance) to predict the state of the atmosphere with any degree of skill, owing to the chaotic nature of the fluid dynamics equations involved. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days. Furthermore, existing observation networks have limited spatial and temporal resolution, which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real time, even with the use of supercomputers. These uncertainties limit forecast model accuracy to about six days into the future.
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed a stochastic dynamic model that produced means and variances for the state of the atmosphere. While these Monte Carlo simulations showed skill, in 1974 Cecil Leith revealed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere. It was not until 1992 that ensemble forecasts began being prepared by the European Centre for Medium-Range Weather Forecasts, the Canadian Meteorological Centre, and the National Centers for Environmental Prediction. The ECMWF model, the Ensemble Prediction System, uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding.
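The behaviour Lorenz described, and the way an ensemble characterizes it, can be illustrated with a toy experiment: integrate his 1963 three-variable system from a set of very slightly perturbed initial states and measure how the members spread. The sketch below is illustrative only; the member count, perturbation size, and integration length are arbitrary choices, and operational centres perturb a full model state using methods such as the singular vectors and bred vectors mentioned above.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivative of the Lorenz (1963) three-variable system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s, dt=0.01, steps=1500):
    """Advance a state with a standard fourth-order Runge-Kutta scheme."""
    for _ in range(steps):
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return s

rng = np.random.default_rng(1)
control = np.array([1.0, 1.0, 1.0])
# Perturb the initial state very slightly to build a 20-member ensemble.
members = [integrate(control + rng.normal(0.0, 1.0e-4, 3)) for _ in range(20)]
spread = np.std(members, axis=0)
print("Ensemble spread after integration:", spread)   # tiny initial errors become large
```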