Ensemble Prediction System
Selected Abstracts

Reanalysis and reforecast of three major European storms of the twentieth century using the ECMWF forecasting system. Part II: Ensemble forecasts
METEOROLOGICAL APPLICATIONS, Issue 2 2005
In Part II of this study the ECMWF Ensemble Prediction System (EPS) is used to study the probabilistic predictability of three major European storms of the twentieth century. The storms considered are the Dutch storm of 1 February 1953, the Hamburg storm of 17 February 1962, and the British/French storm of October 1987 (Great October storm). Common to all these storms is their severity, which caused large loss of life and widespread damage. In Part I of this study it has been found that the deterministic predictability of the Dutch and Hamburg storms amounts to 48 and 84 hours, respectively. Here, it is shown that the ensemble forecasts supplement the deterministic forecasts. The large number of members in the 48- and 84-hour ensemble forecasts of the Dutch and Hamburg storms, respectively, suggests that at this forecast range and for these storms the sensitivity of the forecasts to analysis and model uncertainties is rather small. From these results, therefore, it is argued that reliable warnings (i.e. low probability for the occurrence of a forecast failure) for the Dutch and Hamburg storms could have been issued 48 and 84 hours, respectively, in advance, had the current ECMWF EPS been available. For the Great October storm it has been found in Part I of this study that short-range and medium-range forecasts of the intensity and track of the storm were very skilful with a high-resolution version of the ECMWF model. The actual timing of the storm, however, was difficult to predict. Here, it is shown that the EPS is capable of predicting large forecast uncertainties associated with the timing of the Great October storm up to 4 days in advance. It is argued that reliable warnings could have been issued at least 96 hours in advance had the ECMWF EPS been available. From the results presented in this study it is concluded that an Ensemble Prediction System is an important component of every early warning system, for it allows an a priori quantification of the probability of the occurrence of severe wind storms. Copyright © 2005 Royal Meteorological Society [source]
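As a concrete illustration of the probabilistic warnings discussed in the entry above: in practice the warning probability is simply the fraction of ensemble members exceeding a severe-weather threshold. The short Python sketch below shows this step; the gust values, threshold and warning level are invented for illustration and are not taken from the reanalysis experiments.

```python
import numpy as np

# Hypothetical ensemble of peak 10 m wind gusts (m/s) at one location,
# one value per ensemble member at a fixed lead time.
gusts = np.array([28.1, 31.4, 25.9, 33.0, 30.2, 29.7, 34.5, 27.3,
                  32.8, 30.9, 26.5, 31.1, 29.0, 33.7, 28.8, 30.4])

threshold = 30.0      # severe-storm gust threshold (assumed)
warning_level = 0.6   # issue a warning if P(event) reaches this (assumed)

# Forecast probability = fraction of members exceeding the threshold.
prob = np.mean(gusts > threshold)

print(f"P(gust > {threshold} m/s) = {prob:.2f}")
if prob >= warning_level:
    print("Issue severe-storm warning")
```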
Use of medium-range ensembles at the Met Office 2: Applications for medium-range forecasting
METEOROLOGICAL APPLICATIONS, Issue 3 2002. M V Young
The term 'medium range' is taken to refer to forecasts for lead times ranging from about 2 or 3 days ahead up to about 10 days ahead. A wide variety of numerical model products are available to the forecaster nowadays, and one of the most important of these is the ECMWF Ensemble Prediction System (EPS). This paper shows how forecasters at the Met Office use these products, in particular the EPS, in an operational environment in the production of medium-range forecasts for a variety of customers, and illustrates some of the techniques involved. Particular reference is made to the PREVIN post-processing system for the EPS, which is described in the companion paper by Legg et al. (2002). Forecast products illustrated take the form of synoptic charts (produced primarily via Field Modification software), text guidance and other graphical formats. The probabilistic approach to forecasting is discussed with reference to various examples, in particular the application of the EPS in providing early warnings of severe weather, for which risk assessment is increasingly important. A central theme of this paper is the vital role played by forecasters in interpreting the output from the models in terms of the likely weather elements, and in using the EPS to help assess confidence levels for a particular forecast as well as possible alternative synoptic evolutions. Verification statistics are presented which demonstrate how the EPS helps the forecaster to add value to the wide range of individual deterministic model products and that, furthermore, the forecaster can improve upon many probabilistic products derived directly from the ensemble. Copyright © 2002 Royal Meteorological Society. [source]

Storm prediction over Europe using the ECMWF Ensemble Prediction System
METEOROLOGICAL APPLICATIONS, Issue 3 2002. Roberto Buizza
Three severe storms caused great damage in Europe in December 1999. The first storm hit Denmark and Germany on 3 and 4 December, and the other two storms crossed France and Germany on 26 and 28 December. In this study, the performance of the Ensemble Prediction System (EPS) at the European Centre for Medium-Range Weather Forecasts (ECMWF) in predicting these intense storms is investigated. Results indicate that the EPS gave early indications of possible severe storm occurrence, and was especially useful when the deterministic TL319L60 forecasts issued on successive days were highly inconsistent. These results indicate that the EPS is a valuable tool for assessing quantitatively the risk of severe weather and issuing early warnings of possible disruptions. The impact of an increase of the ensemble system horizontal resolution (TL255 integration from a TL511 analysis instead of the operational TL159 integration from a TL319 analysis) on the system performance is also investigated. Results show that the resolution increase enhances the ensemble performance in predicting the position and the intensity of intense storms. Copyright © 2002 Royal Meteorological Society. [source]

STEPS: A probabilistic precipitation forecasting scheme which merges an extrapolation nowcast with downscaled NWP
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 620 2006. Neill E. Bowler
An ensemble-based probabilistic precipitation forecasting scheme has been developed that blends an extrapolation nowcast with a downscaled NWP forecast, known as STEPS: Short-Term Ensemble Prediction System. The uncertainties in the motion and evolution of radar-inferred precipitation fields are quantified, and the uncertainty in the evolution of the precipitation pattern is shown to be the more important. The use of ensembles allows the scheme to be used for applications that require forecasts of the probability density function of areal and temporal averages of precipitation, such as fluvial flood forecasting, a capability that has not been provided by previous probabilistic precipitation nowcast schemes. The output from a NWP forecast model is downscaled so that the small scales not represented accurately by the model are injected into the forecast using stochastic noise. This allows the scheme to better represent the distribution of precipitation rate at spatial scales finer than those adequately resolved by operational NWP. The performance of the scheme has been assessed over the month of March 2003. Performance evaluation statistics show that the scheme possesses predictive skill at lead times in excess of six hours. © Crown copyright, 2006. [source]
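The key idea behind STEPS, blending a nowcast with downscaled NWP and injecting stochastic noise at the small scales the NWP model does not resolve, can be sketched in a few lines. This is a strongly simplified illustration, not the STEPS cascade algorithm itself; the grid, the rain fields, the weighting function and the noise amplitude are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 128, 128          # toy grid; all fields and parameters are illustrative

nowcast = rng.gamma(2.0, 1.0, (ny, nx))   # stand-in radar-extrapolation rain field
nwp     = rng.gamma(2.0, 1.0, (ny, nx))   # stand-in downscaled NWP rain field

def high_pass_noise(shape, cutoff, rng):
    """White noise with only the high (small-scale) wavenumbers retained."""
    noise = rng.standard_normal(shape)
    f = np.fft.fft2(noise)
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    k = np.sqrt(kx**2 + ky**2)
    f[k < cutoff] = 0.0                     # zero out the large scales
    return np.real(np.fft.ifft2(f))

lead_hours = 4
w = max(0.0, 1.0 - lead_hours / 6.0)        # nowcast weight decays with lead time (assumed form)

blend = w * nowcast + (1.0 - w) * nwp
noise = high_pass_noise((ny, nx), cutoff=0.1, rng=rng)
noise *= 0.3 * blend.std() / noise.std()    # assumed amplitude of the injected small scales

member = np.clip(blend + noise, 0.0, None)  # one stochastic ensemble member, rain >= 0
```

Repeating the last three lines with independent noise draws yields an ensemble whose members share the resolved scales but differ at the unresolved ones, which is the effect the abstract describes.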
Evaluation of probabilistic prediction systems for a scalar variable
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 609 2005. G. Candille
A systematic study is performed of a number of scores that can be used for objective validation of probabilistic prediction of scalar variables: Rank Histograms, Discrete and Continuous Ranked Probability Scores (DRPS and CRPS, respectively). The reliability-resolution-uncertainty decomposition, defined by Murphy for the DRPS, and extended here to the CRPS, is studied in detail. The decomposition is applied to the results of the Ensemble Prediction Systems of the European Centre for Medium-range Weather Forecasts and the National Centers for Environmental Prediction. Comparison is made with the decomposition of the CRPS defined by Hersbach. The possibility of determining an accurate reliability-resolution decomposition of the RPSs is severely limited by the unavoidably (relatively) small number of available realizations of the prediction system. The Hersbach decomposition may be an appropriate compromise between the competing needs for accuracy and practical computability. Copyright © 2005 Royal Meteorological Society. [source]

Probabilistic temperature forecast by using ground station measurements and ECMWF ensemble prediction system
METEOROLOGICAL APPLICATIONS, Issue 4 2004. P. Boi
The ECMWF Ensemble Prediction System 2-metre temperature forecasts are affected by systematic errors due mainly to resolution inadequacies. Moreover, other error sources are present: differences in height above sea level between the station and the corresponding grid point, boundary layer parameterisation, and description of the land surface. These errors are more marked in regions of complex orography. A recursive statistical procedure to adapt ECMWF EPS 2-metre temperature fields to 58 meteorological stations on the Mediterranean island of Sardinia is presented. The correction has been made in three steps: (1) bias correction of systematic errors; (2) calibration to adapt the EPS temperature distribution to the station temperature distribution; and (3) doubling the ensemble size with the aim of taking into account the analysis errors. Two years of probabilistic forecasts of freezing are tested by Brier Score, reliability diagram, rank histogram and Brier Skill Score with respect to the climatological forecast. The score analysis shows much better performance in comparison with the climatological forecast and direct model output, for all forecast times, even after the first step (bias correction). Further gains in skill are obtained by calibration and by doubling the ensemble size. Copyright © 2004 Royal Meteorological Society. [source]
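The CRPS and rank histogram used in the Candille entry above can be estimated directly from an ensemble and a verifying observation. A minimal sketch with invented temperature values (the empirical-CDF estimator shown is a standard one, not the specific decomposition studied in the paper):

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of an ensemble forecast for a scalar observation
    (empirical-CDF estimator: E|X - y| - 0.5 E|X - X'|)."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Hypothetical 2 m temperature ensemble (degC) and verifying observation.
ens = np.array([1.2, -0.5, 0.3, 2.1, 0.8, -1.0, 1.5, 0.1, 0.9, 1.8])
obs = -0.2

print(f"CRPS = {crps_ensemble(ens, obs):.3f} degC")

# Rank histogram entry for this single case: rank of the observation
# within the ensemble (accumulated over many cases it should be flat
# for a reliable system).
rank = int(np.sum(ens < obs))   # 0 .. len(ens)
print(f"rank of observation = {rank}")
```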
The local ETKF and SKEB: Upgrades to the MOGREPS short-range ensemble prediction system
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 640 2009. Neill E. Bowler
The Met Office has been routinely running a short-range global and regional ensemble prediction system (EPS) since the summer of 2005. This article describes a major upgrade to the global ensemble, which affected both the initial condition and model uncertainty perturbations applied in that ensemble. The change to the initial condition perturbations is to allow localization within the ensemble transform Kalman filter (ETKF). This enables better specification of the ensemble spread as a function of location around the globe. The change to the model uncertainty perturbations is the addition of a stochastic kinetic energy backscatter scheme (SKEB). This adds vorticity perturbations to the forecast in order to counteract the damping of small-scale features introduced by the semi-Lagrangian advection scheme. Verification of ensemble forecasts is presented for the global ensemble system. It is shown that the localization of the ETKF gives a distribution of the spread as a function of latitude that better matches the forecast error of the ensemble mean. The SKEB scheme has a substantial effect on the power spectrum of the kinetic energy, and with the scheme a shallowing of the spectral slope is seen in the tail. A k^(-5/3) slope is seen at wavelengths shorter than 1000 km and this better agrees with the observed spectrum. The local ETKF significantly improves forecasts at all lead times over a number of variables. The SKEB scheme increases the rate of growth of ensemble spread in some variables, and improves forecast skill at short lead times. © Crown Copyright 2009. Reproduced with the permission of HMSO. Published by John Wiley & Sons Ltd. [source]

Scale-dependent verification of ensemble forecasts
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 633 2008. Thomas Jung
A scale-dependent verification of the ECMWF ensemble prediction system (EPS) in the Northern Hemisphere is presented. The relationship between spread and skill is investigated alongside probabilistic forecast skill for planetary, synoptic and subsynoptic spectral bands. Since the ECMWF model is a spectral model, the three spectral bands have been isolated using total and zonal wavenumber filters. Diagnosed overdispersiveness of the ECMWF EPS in the short range is primarily due to excessive amounts of spread on synoptic scales. Diagnosed underdispersiveness of the ensemble beyond day 5 of the forecast can be explained by too little spread on both synoptic and planetary scales. Copyright © 2008 Royal Meteorological Society [source]
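Both entries above compare ensemble spread with the forecast error of the ensemble mean. The following sketch shows that basic spread-skill diagnostic on synthetic data; the ensemble construction, sizes and error magnitudes are assumptions chosen to mimic an under-dispersive system, not the MOGREPS or ECMWF configurations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_members = 200, 24        # toy dimensions, not a real EPS configuration

truth = rng.standard_normal(n_cases)
# Synthetic ensemble: members spread around a perturbed centre.
centre_err = rng.normal(0.0, 1.0, n_cases)
ensemble = (truth + centre_err)[:, None] + rng.normal(0.0, 0.8, (n_cases, n_members))

spread = np.sqrt(np.mean(np.var(ensemble, axis=1, ddof=1)))      # mean ensemble spread
rmse   = np.sqrt(np.mean((ensemble.mean(axis=1) - truth) ** 2))  # error of the ensemble mean

print(f"spread = {spread:.2f}, RMSE of ensemble mean = {rmse:.2f}")
# For a well-dispersive ensemble these two numbers should be comparable;
# spread < RMSE indicates under-dispersion, spread > RMSE over-dispersion.
```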
Can multi-model combination really enhance the prediction skill of probabilistic ensemble forecasts?
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 630 2008. A. P. Weigel
The success of multi-model ensemble combination has been demonstrated in many studies. Given that a multi-model contains information from all participating models, including the less skilful ones, the question remains as to why, and under what conditions, a multi-model can outperform the best participating single model. It is the aim of this paper to resolve this apparent paradox. The study is based on a synthetic forecast generator, allowing the generation of perfectly-calibrated single-model ensembles of any size and skill. Additionally, the degree of ensemble under-dispersion (or overconfidence) can be prescribed. Multi-model ensembles are then constructed from both weighted and unweighted averages of these single-model ensembles. Applying this toy model, we carry out systematic model-combination experiments. We evaluate how multi-model performance depends on the skill and overconfidence of the participating single models. It turns out that multi-model ensembles can indeed locally outperform a 'best-model' approach, but only if the single-model ensembles are overconfident. The reason is that multi-model combination reduces overconfidence, i.e. ensemble spread is widened while average ensemble-mean error is reduced. This implies a net gain in prediction skill, because probabilistic skill scores penalize overconfidence. Under these conditions, even the addition of an objectively-poor model can improve multi-model skill. It seems that simple ensemble inflation methods cannot yield the same skill improvement. Using seasonal near-surface temperature forecasts from the DEMETER dataset, we show that the conclusions drawn from the toy-model experiments hold equally in a real multi-model ensemble prediction system. Copyright © 2008 Royal Meteorological Society [source]

Limited-area ensemble predictions at the Norwegian Meteorological Institute
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 621 2006. Inger-Lise Frogner
This study aims at improving 0-3 day probabilistic forecasts of precipitation events in Norway. For this purpose a limited-area ensemble prediction system (LAMEPS) is tested. The horizontal resolution of LAMEPS is 28 km, and there are 31 levels in the vertical. The state variables provided as initial and lateral boundary conditions for the limited-area forecasts are perturbed using a dedicated version of the European Centre for Medium-Range Weather Forecasts (ECMWF) global ensemble prediction system, TEPS. These are constructed by combining initial and evolved singular vectors that at final time (48 h) are targeted to maximize the total energy in a domain containing northern Europe and adjacent sea areas. The resolution of TEPS is T255 with 40 levels. The test period includes 45 cases with 21 ensemble members in each case. We focus on 24 h accumulated precipitation rates with special emphasis on intense events. We also investigate a combination of TEPS and LAMEPS resulting in a system (NORLAMEPS) with 42 ensemble members. NORLAMEPS is compared with the 21-member LAMEPS and TEPS as well as the regular 51-member EPS run at ECMWF. The benefit of using targeted singular vectors is seen by comparing the 21-member TEPS with the 51-member operational EPS, as TEPS has considerably larger spread between ensemble members. For other measures, such as Brier Skill Score (BSS) and Relative Operating Characteristic (ROC) curves, the scores of the two systems are for most cases comparable, despite the difference in ensemble size. NORLAMEPS has the largest ensemble spread of all four ensemble systems studied in this paper, while EPS has the smallest spread. Nevertheless, EPS has higher BSS, with NORLAMEPS approaching for the highest precipitation thresholds. For the area under the ROC curve, NORLAMEPS is comparable with or better than EPS for medium to large thresholds. Copyright © 2006 Royal Meteorological Society [source]
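In the spirit of the synthetic forecast generator used in the Weigel entry above, a few lines of numpy are enough to see why pooling overconfident single-model ensembles widens spread and reduces ensemble-mean error. All numbers below are illustrative assumptions, not DEMETER data or the paper's actual generator.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cases, n_members = 500, 20

truth = rng.standard_normal(n_cases)

def overconfident_ensemble(truth, mean_err_sd, spread_sd, rng):
    """Single-model ensemble whose spread (spread_sd) is smaller than its
    ensemble-mean error (mean_err_sd), i.e. an overconfident model."""
    centre = truth + rng.normal(0.0, mean_err_sd, truth.shape)
    return centre[:, None] + rng.normal(0.0, spread_sd, (truth.size, n_members))

model_a = overconfident_ensemble(truth, mean_err_sd=1.0, spread_sd=0.5, rng=rng)
model_b = overconfident_ensemble(truth, mean_err_sd=1.2, spread_sd=0.5, rng=rng)

multi = np.concatenate([model_a, model_b], axis=1)   # unweighted multi-model pool

def spread_and_rmse(ens, truth):
    spread = np.sqrt(np.mean(np.var(ens, axis=1, ddof=1)))
    rmse = np.sqrt(np.mean((ens.mean(axis=1) - truth) ** 2))
    return spread, rmse

for name, ens in [("model A", model_a), ("model B", model_b), ("multi-model", multi)]:
    s, r = spread_and_rmse(ens, truth)
    print(f"{name:12s} spread = {s:.2f}  RMSE of mean = {r:.2f}")
```

Because the two models have independent centre errors, the pooled ensemble has a wider spread and a smaller ensemble-mean error than either member model, which is the overconfidence-reduction mechanism the abstract describes.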
Probabilistic forecasting from ensemble prediction systems: Improving upon the best-member method by using a different weight and dressing kernel for each member
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 617 2006. Vincent Fortin
Ensembles of meteorological forecasts can both provide more accurate long-term forecasts and help assess the uncertainty of these forecasts. No single method has however emerged to obtain large numbers of equiprobable scenarios from such ensembles. A simple resampling scheme, the 'best member' method, has recently been proposed to this effect: individual members of an ensemble are 'dressed' with error patterns drawn from a database of past errors made by the 'best' member of the ensemble at each time step. It has been shown that the best-member method can lead to both underdispersive and overdispersive ensembles. The error patterns can be rescaled so as to obtain ensembles which display the desired variance. However, this approach fails in cases where the undressed ensemble members are already overdispersive. Furthermore, we show in this paper that it can also lead to an overestimation of the probability of extreme events. We propose to overcome both difficulties by dressing and weighting each member differently, using a different error distribution for each order statistic of the ensemble. We show on a synthetic example and using an operational ensemble prediction system that this new method leads to improved probabilistic forecasts, when the undressed ensemble members are both underdispersive and overdispersive. Copyright © 2006 Royal Meteorological Society. [source]
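A minimal sketch of the basic best-member dressing step described above; the paper's refinement, a separate weight and error kernel per order statistic, is not reproduced here. The error archive and raw member values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical archive of past "best member" errors (observation minus best member).
error_archive = rng.normal(0.0, 1.5, size=2000)

def dress_ensemble(members, error_archive, n_draws, rng):
    """Basic best-member dressing: each member is replicated n_draws times and
    perturbed with errors resampled from the archive, yielding a larger
    pseudo-ensemble. Fortin et al. go further and use a different error
    distribution and weight for each order statistic of the ensemble."""
    members = np.asarray(members, dtype=float)
    draws = rng.choice(error_archive, size=(members.size, n_draws))
    return (members[:, None] + draws).ravel()

raw = np.array([3.1, 4.2, 2.8, 5.0, 3.7, 4.6, 3.3, 4.9])   # assumed raw members
dressed = dress_ensemble(raw, error_archive, n_draws=50, rng=rng)

print(f"raw spread = {raw.std(ddof=1):.2f}, dressed spread = {dressed.std(ddof=1):.2f}")
```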
Measures of skill and value of ensemble prediction systems, their interrelationship and the effect of ensemble size
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 577 2001. David S. Richardson
Ensemble forecasts provide probabilistic predictions for the future state of the atmosphere. Usually the probability of a given event E is determined from the fraction of ensemble members which predict the event. Hence there is a degree of sampling error inherent in the predictions. In this paper a theoretical study is made of the effect of ensemble size on forecast performance as measured by a reliability diagram and Brier (skill) score, and on users by using a simple cost-loss decision model. The relationship between skill and value, and a generalized skill score, dependent on the distribution of users, are discussed. The Brier skill score is reduced from its potential level for all finite-sized ensembles. The impact is most significant for small ensembles, especially when the variance of forecast probabilities is also small. The Brier score for a set of deterministic forecasts is a measure of potential predictability, assuming the forecasts are representative selections from a reliable ensemble prediction system (EPS). There is a consistent effect of finite ensemble size on the reliability diagram. Even if the underlying distribution is perfectly reliable, sampling this using only a small number of ensemble members introduces considerable unreliability. There is a consistent over-forecasting which appears as a clockwise tilt of the reliability diagram. It is important to be aware of the expected effect of ensemble size to avoid misinterpreting results. An ensemble of ten or so members should not be expected to provide reliable probability forecasts. Equally, when comparing the performance of different ensemble systems, any difference in ensemble size should be considered before attributing performance differences to other differences between the systems. The usefulness of an EPS to individual users cannot be deduced from the Brier skill score (nor even directly from the reliability diagram). An EPS with minimal Brier skill may nevertheless be of substantial value to some users, while small differences in skill may hide substantial variation in value. Using a simple cost-loss decision model, the sensitivity of users to differences in ensemble size is shown to depend on the predictability and frequency of the event and on the cost-loss ratio of the user. For an extreme event with low predictability, users with low cost-loss ratio will gain significant benefits from increasing ensemble size from 50 to 100 members, with potential for substantial additional value from further increases in number of members. This sensitivity to large ensemble size is not evident in the Brier skill score. A generalized skill score, dependent on the distribution of users, allows a summary performance measure to be tuned to a particular aspect of EPS performance. [source]
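The Brier skill score and the cost-loss relative value that the Richardson entry relies on can be computed as follows. The synthetic forecasts, event base rate and cost-loss ratios below are assumptions for illustration; the value formula is the standard static cost-loss expression (expense saved relative to climatology, normalized by the saving of a perfect forecast).

```python
import numpy as np

rng = np.random.default_rng(4)
n_cases, n_members = 1000, 50

# Synthetic reliable probability forecasts and matching binary outcomes.
true_prob = rng.beta(1.0, 4.0, n_cases)              # event climatology around 0.2 (assumed)
outcome = (rng.random(n_cases) < true_prob).astype(float)
members = rng.random((n_cases, n_members)) < true_prob[:, None]
fprob = members.mean(axis=1)                          # ensemble-based event probability

# Brier score and Brier skill score against climatology.
bs = np.mean((fprob - outcome) ** 2)
s = outcome.mean()
bss = 1.0 - bs / (s * (1.0 - s))

def relative_value(fprob, outcome, alpha):
    """Relative economic value for a user with cost/loss ratio alpha,
    who protects whenever the forecast probability reaches alpha."""
    protect = fprob >= alpha
    expense_fc = np.mean(np.where(protect, alpha, outcome))   # expenses in units of the loss L
    expense_clim = min(alpha, outcome.mean())                  # best of always / never protecting
    expense_perfect = alpha * outcome.mean()                   # protect only when the event occurs
    return (expense_clim - expense_fc) / (expense_clim - expense_perfect)

print(f"BSS = {bss:.2f}")
for alpha in (0.05, 0.2, 0.5):
    print(f"alpha = {alpha:.2f}  value = {relative_value(fprob, outcome, alpha):.2f}")
```

Rerunning this with a smaller n_members shows the sampling-error effect the abstract discusses: the same underlying probabilities verify with a lower BSS and lower value for users with small cost-loss ratios.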
A strategy for perturbing surface initial conditions in LAMEPS
ATMOSPHERIC SCIENCE LETTERS, Issue 2 2010. Yong Wang
The lack or inadequate representation of uncertainties in the surface initial conditions (ICs) affects the quality of ensemble forecasts, in particular of near-surface temperature and precipitation. In this paper, a strategy for perturbing surface ICs in a limited-area model ensemble prediction system, noncycling surface breeding (NCSB), is proposed. The strategy combines short-range surface forecasts driven by perturbed atmospheric forcing with the breeding method for generating the perturbations to the surface ICs. NCSB is implemented and tested in Aire Limitée Adaptation dynamique Développement InterNational-limited area ensemble forecasting (ALADIN-LAEF). Statistical verification demonstrates that the application of NCSB improves the ALADIN-LAEF 2 m temperature and precipitation forecasts. Positive impacts are also obtained for temperature and specific humidity in the lower atmosphere. Copyright © 2010 Royal Meteorological Society [source]

Comparing the scores of hydrological ensemble forecasts issued by two different hydrological models
ATMOSPHERIC SCIENCE LETTERS, Issue 2 2010. A. Randrianasolo
A comparative analysis is conducted to assess the quality of streamflow forecasts issued by two different modeling conceptualizations of catchment response, both driven by the same weather ensemble prediction system (PEARP, Météo-France). The two hydrological modeling approaches are the physically based and distributed hydrometeorological model SIM (Météo-France) and the lumped soil-moisture-accounting type rainfall-runoff model GRP (Cemagref). Discharges are simulated at 211 catchments in France over 17 months. Skill scores are computed for the first 2 days of forecast range. The results suggest good performance of both hydrological models and illustrate the benefit of streamflow data assimilation for ensemble short-term forecasting. Copyright © 2010 Royal Meteorological Society [source]

Hydrological ensemble prediction and verification for the Meuse and Scheldt basins
ATMOSPHERIC SCIENCE LETTERS, Issue 2 2010. Joris Van den Bergh
We present the hydrological ensemble prediction system developed at the Royal Meteorological Institute (RMI) of Belgium to study the Meuse and Scheldt basins. An overview is presented of the hydrological model and the operational setup of the forecasting system. We present some results of a 3-year hindcast that was performed to verify the quality of the probabilistic forecasting system. The raw precipitation forecasts and streamflow forecasts are considered: we provide skill scores and relative economic value for various subcatchments of the Meuse and Scheldt basins. Copyright © 2010 Royal Meteorological Society [source]

Can ensemble forecasts improve the reliability of flood alerts?
JOURNAL OF FLOOD RISK MANAGEMENT, Issue 4 2009. J. Dietrich
A probabilistic evaluation of ensemble forecasts can be used to communicate uncertainty to decision makers. We present a flood forecast scheme which combines forecasts from the European COSMO-LEPS, SRNWP-PEPS and COSMO-DE (lagged average) ensemble prediction systems with a rainfall-runoff model. The methodology was demonstrated with a case study for the Central European Mulde River basin. In this paper, we summarize results from hindcast simulations for seven events from 2002 to 2008. The ensemble spread resulting from uncertainty in the rainfall forecast was very high at 2-5 days lead time. The median of the medium- and short-range forecasts and a lagged average ensemble of the very short-range forecasts proved to be reliable regarding the probability of exceeding flood alert levels. However, the limited number of observed events does not allow for the postulation of prescriptive binary decision rules. Flood managers have to adapt their decisions when new information becomes available. [source]

Retracted and replaced: Impact of observational error on the validation of ensemble prediction systems
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 631 2008. G. Candille
This article has been retracted and replaced. Please see the retraction notice, DOI:10.1002/qj.273. The replacement article, DOI:10.1002/qj.268, was published in QJ 134:633 Part B. [source]
Aims, challenges and progress of the Hydrological Ensemble Prediction Experiment (HEPEX) following the third HEPEX workshop held in Stresa 27 to 29 June 2007
ATMOSPHERIC SCIENCE LETTERS, Issue 2 2008. Jutta Thielen
In recent years, users of weather forecasts have begun to realize the benefit of quantifying the uncertainty associated with forecasts rather than relying on single-value forecasts. At the same time, hydrologists and water managers have begun to explore the potential benefit of ensemble prediction systems (EPS) for hydrological applications. The Hydrologic Ensemble Prediction Experiment (HEPEX) is an international project that aims to foster the development of probabilistic hydrological forecasting and corresponding decision-making tools. Since 2004, HEPEX has provided discussion opportunities for hydrological and meteorological scientists involved in the development, testing, and operational management of forecasting systems, and for end users. Copyright © 2008 Royal Meteorological Society [source]

A Bayesian hierarchical approach to ensemble weather forecasting
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2010. A. F. Di Narzo
In meteorology, the traditional approach to forecasting employs deterministic models mimicking atmospheric dynamics. Forecast uncertainty due to partial knowledge of the initial conditions is tackled by ensemble prediction systems. Probabilistic forecasting is a relatively new approach which may properly account for all sources of uncertainty. We propose a hierarchical Bayesian model which develops this idea and makes it possible to deal with ensemble prediction systems with non-identifiable members by using a suitable definition of the second level of the model. An application to Italian small-scale temperature data is shown. [source]
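The hierarchical model of this last entry cannot be reproduced in a few lines, but the underlying Bayesian mechanics, updating a post-processing parameter from past forecast-observation pairs and carrying its residual uncertainty into the predictive distribution, can be sketched with a single-level conjugate model. The data, the known error variance and the prior width below are all assumptions, and this is a drastically simplified stand-in for the model of Di Narzo et al., not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical training data: past ensemble-mean temperature forecasts and observations.
ens_mean = rng.normal(15.0, 5.0, 60)
obs = ens_mean + 1.5 + rng.normal(0.0, 1.0, 60)     # true station bias of +1.5 degC (assumed)

sigma = 1.0     # assumed known forecast-error standard deviation
tau = 2.0       # prior standard deviation of the station bias b (prior mean 0)

# Conjugate normal-normal update for the bias b in  obs = ens_mean + b + eps.
resid = obs - ens_mean
post_prec = 1.0 / tau**2 + resid.size / sigma**2
post_var = 1.0 / post_prec
post_mean = post_var * resid.sum() / sigma**2

# Predictive distribution for a new case: shift the ensemble mean by the bias
# and widen the spread by the remaining uncertainty in b.
new_ens_mean = 12.0
pred_mean = new_ens_mean + post_mean
pred_sd = np.sqrt(sigma**2 + post_var)

print(f"posterior bias = {post_mean:.2f} +/- {np.sqrt(post_var):.2f}")
print(f"predictive 2 m temperature ~ N({pred_mean:.2f}, {pred_sd:.2f}^2)")
```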