Probabilistic Predictions

Selected Abstracts


Exit polling in a cold climate: the BBC–ITV experience in Britain in 2005

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 3 2008
John Curtice
Summary. Conducting an exit poll to forecast the outcome of a national election in terms of both votes and seats is particularly difficult in Britain. No official information is available on how individual polling stations voted in the past, the use of the single-member plurality system means that there is no consistent relationship between votes and seats, electors can choose to vote by post, and most of those who vote in person do so late in the day. In addition, around one in every six intended exit poll respondents refuses to participate. Methods that were developed to overcome these problems, and their use in the successful 2005 British Broadcasting Corporation–Independent Television exit poll, are described and evaluated. The methodology included a panel design to allow the estimation of electoral change at local level, coherent multiple-regression modelling of multiparty electoral change to capture systematic patterns of variation, probabilistic prediction of constituency winners to account for uncertainty in projected constituency-level shares, collection of information about the voting intentions of postal voters before polling day, and access to interviewer guesses on the voting behaviour of refusals. The coverage and accuracy of the exit poll data are critically examined, the effect of key aspects of the statistical modelling of the data is assessed and some general lessons are drawn for the design and analysis of electoral exit polls. [source]
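
The constituency-winner step described in the abstract can be illustrated with a toy Monte Carlo sketch. This is not the authors' actual model: the party shares, the 3% standard error, and the independence of the share perturbations are all illustrative assumptions.

```python
import random

def win_probabilities(projected_shares, se=0.03, n_sims=10_000, seed=0):
    """Monte Carlo win probabilities for one constituency.

    projected_shares: dict mapping party -> projected vote share.
    se: assumed standard error on each projected share (illustrative).
    """
    rng = random.Random(seed)
    parties = list(projected_shares)
    wins = {p: 0 for p in parties}
    for _ in range(n_sims):
        # perturb each projected share by independent Gaussian noise
        draw = {p: projected_shares[p] + rng.gauss(0, se) for p in parties}
        winner = max(draw, key=draw.get)
        wins[winner] += 1
    return {p: wins[p] / n_sims for p in parties}

# hypothetical three-party constituency projection
probs = win_probabilities({"Lab": 0.38, "Con": 0.36, "LD": 0.22})
```

A near tie in projected shares then translates into a split win probability rather than a hard call, which is what "probabilistic prediction of constituency winners" buys over simply awarding each seat to the leading party.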


Evaluation of probabilistic prediction systems for a scalar variable

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 609 2005
G. Candille
Abstract A systematic study is performed of a number of scores that can be used for objective validation of probabilistic prediction of scalar variables: Rank Histograms, Discrete and Continuous Ranked Probability Scores (DRPS and CRPS, respectively). The reliability-resolution-uncertainty decomposition, defined by Murphy for the DRPS, and extended here to the CRPS, is studied in detail. The decomposition is applied to the results of the Ensemble Prediction Systems of the European Centre for Medium-Range Weather Forecasts and the National Centers for Environmental Prediction. Comparison is made with the decomposition of the CRPS defined by Hersbach. The possibility of determining an accurate reliability-resolution decomposition of the RPSs is severely limited by the unavoidably (relatively) small number of available realizations of the prediction system. The Hersbach decomposition may be an appropriate compromise between the competing needs for accuracy and practical computability. Copyright © 2005 Royal Meteorological Society. [source]
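
For reference, the CRPS of an ensemble forecast for a single scalar observation can be computed directly from the members using the standard identity CRPS = E|X - y| - 0.5 E|X - X'|, where the expectations are taken over the ensemble's empirical distribution. A minimal sketch (this computes the CRPS of the raw empirical ensemble CDF, not any of the paper's decompositions):

```python
def crps_ensemble(members, obs):
    """CRPS of the empirical ensemble distribution for one scalar obs.

    Uses CRPS = E|X - obs| - 0.5 * E|X - X'|, with both expectations
    taken over the ensemble members.
    """
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (2 * m * m)
    return term1 - term2
```

A perfect one-member "ensemble" scores zero, and a wider ensemble centred on the observation is penalised only through its spread, which is the sharpness-rewarding behaviour that makes the CRPS a proper score.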


Calculation of Posterior Probabilities for Bayesian Model Class Assessment and Averaging from Posterior Samples Based on Dynamic System Data

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2010
Sai Hung Cheung
Because of modeling uncertainty, a set of competing candidate model classes may be available to represent a system and it is then desirable to assess the plausibility of each model class based on system data. Bayesian model class assessment may then be used, which is based on the posterior probability of the different candidates for representing the system. If more than one model class has significant posterior probability, then Bayesian model class averaging provides a coherent mechanism to incorporate all of these model classes in making probabilistic predictions for the system response. This Bayesian model assessment and averaging requires calculation of the evidence for each model class based on the system data, which requires the evaluation of a multi-dimensional integral involving the product of the likelihood and prior defined by the model class. In this article, a general method for calculating the evidence is proposed based on using posterior samples from any Markov Chain Monte Carlo algorithm. The effectiveness of the proposed method is illustrated by Bayesian model updating and assessment using simulated earthquake data from a ten-story nonclassically damped building responding linearly and a four-story building responding inelastically. [source]
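
Once the evidence p(D|M_j) of each candidate model class has been computed (by the paper's posterior-sample method or otherwise), the posterior model probabilities and the model-averaged prediction follow from Bayes' theorem. A minimal sketch, with hypothetical log-evidence values and a log-sum-exp for numerical stability:

```python
import math

def model_posteriors(log_evidences, priors=None):
    """Posterior probability of each model class:
    P(M_j | D) proportional to p(D | M_j) * P(M_j)."""
    k = len(log_evidences)
    if priors is None:
        priors = [1.0 / k] * k  # uniform prior over model classes
    logs = [le + math.log(p) for le, p in zip(log_evidences, priors)]
    m = max(logs)  # log-sum-exp shift to avoid underflow
    w = [math.exp(v - m) for v in logs]
    s = sum(w)
    return [x / s for x in w]

def model_averaged_prediction(predictions, posteriors):
    """Bayesian model averaging of per-model predictive means."""
    return sum(p * q for p, q in zip(predictions, posteriors))

# hypothetical log-evidences for three candidate model classes
posteriors = model_posteriors([-120.3, -118.7, -125.1])
```

Working in log-evidences matters in practice: the evidences themselves are typically far below floating-point range for realistic data sizes.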


Point process methodology for on-line spatio-temporal disease surveillance

ENVIRONMETRICS, Issue 5 2005
Peter Diggle
Abstract We formulate the problem of on-line spatio-temporal disease surveillance in terms of predicting spatially and temporally localised excursions over a pre-specified threshold value for the spatially and temporally varying intensity of a point process in which each point represents an individual case of the disease in question. Our point process model is a non-stationary log-Gaussian Cox process in which the spatio-temporal intensity, λ(x,t), has a multiplicative decomposition into two deterministic components, one describing purely spatial and the other purely temporal variation in the normal disease incidence pattern, and an unobserved stochastic component representing spatially and temporally localised departures from the normal pattern. We give methods for estimating the parameters of the model, and for making probabilistic predictions of the current intensity. We describe an application to on-line spatio-temporal surveillance of non-specific gastroenteric disease in the county of Hampshire, UK. The results are presented as maps of exceedance probabilities, P{R(x,t) > c | data}, where R(x,t) is the current realisation of the unobserved stochastic component of λ(x,t) and c is a pre-specified threshold. These maps are updated automatically in response to each day's incident data using a web-based reporting system. Copyright © 2005 John Wiley & Sons, Ltd. [source]
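
Given posterior samples of the stochastic component R(x,t) at each grid cell (e.g. from MCMC), the exceedance probabilities that make up such a surveillance map reduce to sample fractions. A sketch, with illustrative cell samples and threshold:

```python
def exceedance_probability(samples, c):
    """Estimate P{R(x,t) > c | data} as the fraction of posterior
    samples of R(x,t) that exceed the threshold c."""
    return sum(1 for r in samples if r > c) / len(samples)

# hypothetical posterior samples for two grid cells; a real map applies
# the same estimator to every cell of the surveillance region
cell_samples = {(0, 0): [0.2, 1.1, 1.4], (0, 1): [0.1, 0.3, 0.2]}
exceedance_map = {cell: exceedance_probability(s, c=1.0)
                  for cell, s in cell_samples.items()}
```

Mapping these fractions, rather than point estimates of R(x,t), is what lets the system flag only cells where an elevated intensity is probable, not merely possible.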


Communicating the value of probabilistic forecasts with weather roulette

METEOROLOGICAL APPLICATIONS, Issue 2 2009
Renate Hagedorn
Abstract In times of ever increasing financial constraints on public weather services it is of growing importance to communicate the value of their forecasts and products. While many diagnostic tools exist to evaluate forecast systems, intuitive diagnostics for communicating the skill of probabilistic forecasts are few. When the goal is communication with a non-expert audience it can be helpful to compare performance in more everyday terms than 'bits of information'. Ideally, of course, the method of presentation will be directly related to specific skill scores with known strengths and weaknesses. This paper introduces Weather Roulette, a conceptual framework for evaluating probabilistic predictions where skill is quantified using an effective daily interest rate; it is straightforward to deploy, comes with a simple storyline and, importantly, is comprehensible and plausible for a non-expert audience. Two variants of Weather Roulette are presented, one of which directly reflects proper local skill scores. Weather Roulette contrasts the performance of two forecasting systems, one of which may be climatology. Several examples of its application to ECMWF forecasts are discussed, illustrating this new tool as a useful addition to the suite of available probabilistic scoring metrics. Copyright © 2008 Royal Meteorological Society [source]
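
One plausible reading of the gambling framework can be sketched as follows: each day the gambler's wealth is multiplied by the ratio of the forecast probability to the reference (e.g. climatological) probability of the realised outcome, and the effective daily interest rate is the geometric mean of these multipliers minus one. This is an illustrative reconstruction in the spirit of the log-score-based variant, not necessarily the paper's exact formulation:

```python
import math

def effective_daily_interest(forecast_probs, reference_probs, outcomes):
    """Effective daily interest rate of betting with the forecast
    against a house whose odds reflect the reference system.

    forecast_probs, reference_probs: one dict of outcome probabilities
    per day; outcomes: the realised outcome label for each day.
    """
    log_growth = sum(
        math.log(f[o] / r[o])
        for f, r, o in zip(forecast_probs, reference_probs, outcomes)
    )
    return math.exp(log_growth / len(outcomes)) - 1.0

# one hypothetical day: forecast gave 0.8 to the rain that occurred,
# climatology gave 0.5, so wealth grows by a factor 1.6
rate = effective_daily_interest(
    [{"rain": 0.8, "dry": 0.2}], [{"rain": 0.5, "dry": 0.5}], ["rain"]
)
```

A forecast no better than the reference yields a rate of zero, which is the intuitive "no edge over the house" baseline a non-expert audience can grasp immediately.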


How much does simplification of probability forecasts reduce forecast quality?

METEOROLOGICAL APPLICATIONS, Issue 1 2008
F. J. Doblas-Reyes
Abstract Probability forecasts from an ensemble are often discretized into a small set of categories before being distributed to the users. This study investigates how such simplification can affect the forecast quality of probabilistic predictions as measured by the Brier score (BS). An example from the European Centre for Medium-Range Weather Forecasts (ECMWF) operational seasonal ensemble forecast system is used to show that the simplification of the forecast probabilities reduces the Brier skill score (BSS) by as much as 57% with respect to the skill score obtained with the full set of probabilities issued from the ensemble. This is more obvious for a small number of probability categories and is mainly due to a decrease in forecast resolution of up to 36%. The impact of the simplification as a function of the ensemble size is also discussed. The results suggest that forecast quality should be made available for the set of probabilities that the forecast user has access to as well as for the complete set of probabilities issued by the ensemble forecasting system. Copyright © 2008 Royal Meteorological Society [source]
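
The effect can be demonstrated with a synthetic experiment rather than the ECMWF data of the paper: reliable forecast probabilities are scored by the Brier score before and after being collapsed to three equally wide categories (the sample size, seed, and binning-to-midpoints scheme are all illustrative assumptions):

```python
import random

def brier(probs, outcomes):
    """Brier score: mean squared difference between forecast
    probability and binary (0/1) outcome."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def discretize(p, n_categories):
    """Replace p by the midpoint of one of n equally wide probability bins."""
    i = min(int(p * n_categories), n_categories - 1)
    return (i + 0.5) / n_categories

rng = random.Random(1)
true_p = [rng.random() for _ in range(20000)]       # reliable forecast probs
outcomes = [1 if rng.random() < p else 0 for p in true_p]

bs_full = brier(true_p, outcomes)                   # full probabilities
bs_cat = brier([discretize(p, 3) for p in true_p], outcomes)  # 3 categories
```

Rounding each probability to a bin midpoint blurs distinct forecast situations together, so the categorised score is worse; this loss of resolution is the mechanism the abstract quantifies on the operational seasonal system.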


Measures of skill and value of ensemble prediction systems, their interrelationship and the effect of ensemble size

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 577 2001
David S. Richardson
Abstract Ensemble forecasts provide probabilistic predictions for the future state of the atmosphere. Usually the probability of a given event E is determined from the fraction of ensemble members which predict the event. Hence there is a degree of sampling error inherent in the predictions. In this paper a theoretical study is made of the effect of ensemble size on forecast performance as measured by a reliability diagram and Brier (skill) score, and on users by using a simple cost-loss decision model. The relationship between skill and value, and a generalized skill score, dependent on the distribution of users, are discussed. The Brier skill score is reduced from its potential level for all finite-sized ensembles. The impact is most significant for small ensembles, especially when the variance of forecast probabilities is also small. The Brier score for a set of deterministic forecasts is a measure of potential predictability, assuming the forecasts are representative selections from a reliable ensemble prediction system (EPS). There is a consistent effect of finite ensemble size on the reliability diagram. Even if the underlying distribution is perfectly reliable, sampling this using only a small number of ensemble members introduces considerable unreliability. There is a consistent over-forecasting which appears as a clockwise tilt of the reliability diagram. It is important to be aware of the expected effect of ensemble size to avoid misinterpreting results. An ensemble of ten or so members should not be expected to provide reliable probability forecasts. Equally, when comparing the performance of different ensemble systems, any difference in ensemble size should be considered before attributing performance differences to other differences between the systems. The usefulness of an EPS to individual users cannot be deduced from the Brier skill score (nor even directly from the reliability diagram). 
An EPS with minimal Brier skill may nevertheless be of substantial value to some users, while small differences in skill may hide substantial variation in value. Using a simple cost-loss decision model, the sensitivity of users to differences in ensemble size is shown to depend on the predictability and frequency of the event and on the cost-loss ratio of the user. For an extreme event with low predictability, users with low cost-loss ratio will gain significant benefits from increasing ensemble size from 50 to 100 members, with potential for substantial additional value from further increases in number of members. This sensitivity to large ensemble size is not evident in the Brier skill score. A generalized skill score, dependent on the distribution of users, allows a summary performance measure to be tuned to a particular aspect of EPS performance. [source]
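
The sampling effect of finite ensemble size on the Brier score can be reproduced with a small synthetic experiment (illustrative, not the paper's theoretical setup): a perfectly reliable system is sampled with 10 and with 200 members, and the forecast probability is taken as the fraction of members predicting the event.

```python
import random

def brier(probs, outcomes):
    """Brier score for binary outcomes."""
    return sum((q - y) ** 2 for q, y in zip(probs, outcomes)) / len(probs)

rng = random.Random(0)
n = 8000
true_p = [rng.random() for _ in range(n)]       # reliable underlying probs
outcomes = [1 if rng.random() < p else 0 for p in true_p]

def ensemble_probs(m):
    # forecast probability = fraction of m members with the event,
    # members drawn from the (perfectly reliable) true probability
    return [sum(rng.random() < p for _ in range(m)) / m for p in true_p]

bs_small = brier(ensemble_probs(10), outcomes)
bs_large = brier(ensemble_probs(200), outcomes)
```

Even though both ensembles sample the same reliable distribution, the 10-member score is visibly inflated by sampling error alone, which is exactly why comparisons between systems of different ensemble size can mislead.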