Prediction Intervals

Selected Abstracts


Prediction intervals in linear regression taking into account errors on both axes

JOURNAL OF CHEMOMETRICS, Issue 10 2001
F. Javier del Río
Abstract This study reports the expressions for the variances in the prediction of the response and predictor variables calculated with the bivariate least squares (BLS) regression technique. This technique takes into account the errors on both axes. Our results are compared with those of a simulation process based on six different real data sets. The mean error in the results from the new expressions is between 4% and 5%. With weighted least squares, ordinary least squares, the constant variance ratio approach and orthogonal regression, on the other hand, mean errors can be as high as 85%, 277%, 637% and 1697% respectively. An important property of the prediction intervals calculated with BLS is that the results are not affected when the axes are switched. Copyright © 2001 John Wiley & Sons, Ltd. [source]
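
The BLS variance expressions themselves are derived in the paper and are not reproduced in the abstract; for orientation, here is a minimal sketch of the classical OLS prediction interval (errors on the response axis only) that serves as the benchmark, assuming a simple straight-line fit:

```python
import numpy as np
from scipy import stats

def ols_prediction_interval(x, y, x0, alpha=0.05):
    """Interval for a single new response observed at x0 (default 95%)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                 # slope, intercept
    resid = y - (b0 + b1 * x)
    s2 = resid @ resid / (n - 2)                 # residual variance
    sxx = np.sum((x - x.mean()) ** 2)
    # Variance of a new observation at x0 (error on the y axis only)
    var_pred = s2 * (1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    y0 = b0 + b1 * x0
    half = t * np.sqrt(var_pred)
    return y0 - half, y0 + half
```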


Natural variation in baseline data: when do we call a new sample 'resistant'?

PEST MANAGEMENT SCIENCE (FORMERLY: PESTICIDE SCIENCE), Issue 9 2002
Lukas Schaub
Abstract Mortality of pear psylla exposed to amitraz was studied by means of bioassays. Variation between samples, temporal variation within the season in one orchard, and spatial variation between Swiss regions were considered. Variation between samples was large enough to produce different probit functions and LC50 values. Temporal and spatial variations were too small to indicate resistance. Prediction intervals for the pooled functions were calculated using bootstrapping to determine whether future samples come from a population with decreased sensitivity. Probabilistic criteria at the population level were proposed for declaring resistance. © 2002 Society of Chemical Industry [source]
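
A sketch of the kind of bootstrap computation described above, assuming a probit model on log10 dose and a parametric bootstrap; the function and its inputs are illustrative placeholders, not the authors' code:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

def lc50_bootstrap_interval(dose, n_total, n_dead, B=2000, alpha=0.05):
    """Parametric bootstrap interval for the LC50 of a pooled probit fit."""
    X = sm.add_constant(np.log10(dose))
    fam = sm.families.Binomial(sm.families.links.Probit())
    endog = np.column_stack([n_dead, n_total - n_dead])
    p = sm.GLM(endog, X, family=fam).fit().fittedvalues   # fitted mortality
    lc50 = np.empty(B)
    for b in range(B):
        y_b = rng.binomial(n_total, p)                    # resample deaths
        fit_b = sm.GLM(np.column_stack([y_b, n_total - y_b]), X,
                       family=fam).fit()
        b0, b1 = fit_b.params
        lc50[b] = 10 ** (-b0 / b1)                        # probit(0.5) = 0
    return np.quantile(lc50, [alpha / 2, 1 - alpha / 2])
```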


A re-evaluation of random-effects meta-analysis

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2009
Julian P. T. Higgins
Summary Meta-analysis in the presence of unexplained heterogeneity is frequently undertaken by using a random-effects model, in which the effects underlying different studies are assumed to be drawn from a normal distribution. Here we discuss the justification and interpretation of such models, by addressing in turn the aims of estimation, prediction and hypothesis testing. A particular issue that we consider is the distinction between inference on the mean of the random-effects distribution and inference on the whole distribution. We suggest that random-effects meta-analyses as currently conducted often fail to provide the key results, and we investigate the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods. We conclude that the Bayesian approach has the advantage of naturally allowing for full uncertainty, especially for prediction. However, it is not without problems, including computational intensity and sensitivity to a priori judgements. We propose a simple prediction interval for classical meta-analysis and offer extensions to standard practice of Bayesian meta-analysis, making use of an example of studies of 'set shifting' ability in people with eating disorders. [source]
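
The simple classical prediction interval proposed in the paper takes the form μ̂ ± t(k−2) · √(τ̂² + SE(μ̂)²) for the effect in a new study; a sketch using DerSimonian–Laird estimates (the choice of τ² estimator here is an assumption, and at least three studies are required):

```python
import numpy as np
from scipy import stats

def random_effects_prediction_interval(y, v, alpha=0.05):
    """y: study effect estimates; v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)                                   # needs k >= 3
    w = 1 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    # DerSimonian-Laird between-study variance estimate
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se_mu = np.sqrt(1 / np.sum(w_star))
    t = stats.t.ppf(1 - alpha / 2, k - 2)
    half = t * np.sqrt(tau2 + se_mu**2)
    return mu - half, mu + half
```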


Using form analysis techniques to improve photogrammetric mass-estimation methods

MARINE MAMMAL SCIENCE, Issue 1 2008
Kelly M. Proffitt
Abstract Numerical characterization of animal body forms using elliptical Fourier decomposition may be a useful analytic technique in a variety of marine mammal investigations. Using data collected from the Weddell seal (Leptonychotes weddellii), we describe the method of body form characterization using elliptical Fourier analysis and demonstrate the usefulness of the technique in photogrammetric mass-estimation modeling. We compared photogrammetric mass-estimation models developed from (1) standard morphometric measurement covariates, (2) elliptical Fourier coefficient covariates, and (3) a combination of morphometric and Fourier coefficient covariates and found that mass-estimation models employing a combination of morphometric measurements and Fourier coefficients outperformed models containing only one covariate type. Inclusion of Fourier coefficients in photogrammetric mass-estimation models employing standard morphometric measurements reduced the width of the prediction interval by 24.4%. Increased precision of photogrammetric mass-estimation models employing Fourier coefficients as model covariates may expand the range of ecological questions that can be addressed with estimated mass measurements. [source]
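
A sketch of the elliptical Fourier decomposition step (Kuhl–Giardina coefficients for a closed outline); the downstream mass-estimation regression is not reproduced, and the function assumes distinct, ordered contour points:

```python
import numpy as np

def elliptic_fourier_coeffs(xy, order=10):
    """xy: (K, 2) points along a closed outline; returns (order, 4) array
    with rows [a_n, b_n, c_n, d_n]."""
    xy = np.asarray(xy, float)
    d = np.diff(np.vstack([xy, xy[:1]]), axis=0)      # close the contour
    dt = np.hypot(d[:, 0], d[:, 1])                   # chord lengths
    t = np.concatenate([[0.0], np.cumsum(dt)])
    T = t[-1]
    coeffs = np.zeros((order, 4))
    for n in range(1, order + 1):
        phi = 2 * np.pi * n * t / T
        dcos = np.cos(phi[1:]) - np.cos(phi[:-1])
        dsin = np.sin(phi[1:]) - np.sin(phi[:-1])
        k = T / (2 * n**2 * np.pi**2)
        coeffs[n - 1] = k * np.array([
            np.sum(d[:, 0] / dt * dcos), np.sum(d[:, 0] / dt * dsin),
            np.sum(d[:, 1] / dt * dcos), np.sum(d[:, 1] / dt * dsin)])
    return coeffs
```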


Bayesian inference for Rayleigh distribution under progressive censored sample

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2006
Shuo-Jye Wu
Abstract It is often the case that some information is available on the parameter of failure time distributions from previous experiments or analyses of failure time data. The Bayesian approach provides the methodology for incorporating previous information with the current data. In this paper, given a progressively type II censored sample from a Rayleigh distribution, Bayesian estimators and credible intervals are obtained for the parameter and the reliability function. We also derive the Bayes predictive estimator and the highest posterior density prediction interval for future observations. Two numerical examples are presented for illustration, and a simulation study with comparisons is performed. Copyright © 2006 John Wiley & Sons, Ltd. [source]
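
A hedged sketch of the conjugate updating involved, assuming the parameterization f(x) = 2λx·exp(−λx²) and a Gamma(a, b) prior on λ; the paper's exact prior and parameterization may differ:

```python
import numpy as np
from scipy import stats

def rayleigh_posterior(x, R, a=1.0, b=1.0):
    """x: observed failure times; R: items withdrawn at each failure.
    Under progressive type II censoring the likelihood is proportional to
    lam**m * exp(-lam * sum((1 + R_i) * x_i**2)), so Gamma(a, b) is conjugate."""
    x, R = np.asarray(x, float), np.asarray(R, float)
    a_post = a + len(x)
    b_post = b + np.sum((1 + R) * x**2)          # posterior rate
    return a_post, b_post

def credible_interval(a_post, b_post, alpha=0.05):
    lo = stats.gamma.ppf(alpha / 2, a_post, scale=1 / b_post)
    hi = stats.gamma.ppf(1 - alpha / 2, a_post, scale=1 / b_post)
    return lo, hi

def posterior_mean_reliability(t, a_post, b_post):
    # E[exp(-lam * t^2)] for lam ~ Gamma(a_post, rate=b_post)
    return (b_post / (b_post + t**2)) ** a_post
```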


Nonparametric prediction intervals for the future rainfall records

ENVIRONMETRICS, Issue 5 2006
Mohammad Z. Raqab
Abstract Prediction of records plays an important role in environmental applications, especially prediction of rainfall extremes, highest water levels, and record sea surface and air temperatures. In this paper, based on the observed records drawn from a sequence of independent and identically distributed random variables, we develop prediction intervals as well as upper and lower prediction bounds for records from another independent sequence. We extend the prediction problem to include prediction regions for joint upper records from a future sequence. Bonferroni's inequality is used to choose appropriate prediction coefficients for the joint prediction. A real data set representing the records of the annual (January 1–December 31) rainfall at the Los Angeles Civic Center is used to illustrate the proposed prediction procedures in environmental applications. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Identifying the Potential Loss of Monitoring Wells Using an Uncertainty Analysis

GROUND WATER, Issue 6 2005
Vicky L. Freedman
From the mid-1940s through the 1980s, large volumes of waste water were discharged at the Hanford Site in southeastern Washington State, causing a large-scale rise (>20 m) in the water table. When waste water discharges ceased in 1988, ground water mounds began to dissipate. This caused a large number of wells to go dry and has made it difficult to monitor contaminant plume migration. To identify monitoring wells that will need replacement, a methodology has been developed using a first-order uncertainty analysis with UCODE, a nonlinear parameter estimation code. Using a three-dimensional, finite-element ground water flow code, key parameters were identified by calibrating to historical hydraulic head data. Results from the calibration period were then used to check model predictions by comparing monitoring wells' wet/dry status with field data. This status was analyzed using a methodology that incorporated the 0.3 cumulative probability derived from the confidence and prediction intervals. For comparison, a nonphysically based trend model was also used as a predictor of wells' wet/dry status. Although the numerical model outperformed the trend model, for both models the central value of the intervals was a better predictor of wet well status. The prediction interval, however, was more successful at identifying dry wells. Predictions made through the year 2048 indicated that 46% of the wells in the monitoring well network are likely to go dry in areas near the river and where the ground water mound is dissipating. [source]


Time Series Based Errors and Empirical Errors in Fertility Forecasts in the Nordic Countries

INTERNATIONAL STATISTICAL REVIEW, Issue 1 2004
Nico Keilman
Summary We use ARCH time series models to derive model-based prediction intervals for the Total Fertility Rate (TFR) in Norway, Sweden, Finland, and Denmark up to 2050. For the short term (5–10 years), expected TFR errors are compared with empirical forecast errors observed in historical population forecasts prepared by the statistical agencies in these countries since 1969. Medium-term and long-term (up to 50 years) errors are compared with error patterns based on so-called naïve forecasts, i.e. forecasts that assume that recently observed TFR levels also apply in the future. In the short term, we find that the prediction intervals computed from the time series models and those derived from historical errors are of the same order of magnitude, although caution is warranted because the historical database is limited. Naïve errors provide useful information for both the short term and the long term: prediction intervals based on naïve errors 50 years ahead compare well with the time-series-based intervals, except for Denmark, for which the data do not allow naïve intervals beyond a 20-year horizon. In general, neither the historical errors nor the naïve errors indicate that prediction intervals based on ARCH time series models are excessively wide. The 67% intervals for the TFR have a width of about 0.5 children per woman at a 10-year horizon and approximately 0.85 children per woman at 50 years. [source]
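
A minimal sketch of how simulation turns an ARCH specification into prediction intervals; the ARCH(1) form, the parameter values, and the mapping onto TFR point forecasts are assumptions rather than the authors' fitted models:

```python
import numpy as np

rng = np.random.default_rng(1)

def arch1_intervals(point_fc, last_eps, omega, alpha1, n_sim=20000, level=0.67):
    """Simulate ARCH(1) error paths around given TFR point forecasts and
    return central prediction bounds (67% by default) per horizon."""
    point_fc = np.asarray(point_fc, float)
    H = len(point_fc)
    paths = np.empty((n_sim, H))
    eps = np.full(n_sim, last_eps)
    for h in range(H):
        sigma2 = omega + alpha1 * eps ** 2       # conditional variance
        eps = np.sqrt(sigma2) * rng.standard_normal(n_sim)
        paths[:, h] = point_fc[h] + eps
    q = (1 - level) / 2
    return np.quantile(paths, [q, 1 - q], axis=0)
```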


The relative influence of advice from human experts and statistical methods on forecast adjustments

JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 4 2009
Dilek Önkal
Abstract Decision makers and forecasters often receive advice from different sources, including human experts and statistical methods. This research examines, in the context of stock price forecasting, how the apparent source of the advice affects the attention that is paid to it when the mode of delivery of the advice is identical for both sources. In Study 1, two groups of participants were given the same advised point and interval forecasts. One group was told that these were the advice of a human expert and the other that they were generated by a statistical forecasting method. The participants were then asked to adjust forecasts they had previously made in light of this advice. While in both cases the advice led to improved point forecast accuracy and better calibration of the prediction intervals, the advice that apparently emanated from a statistical method was discounted much more severely. In Study 2, participants were provided with advice from two sources. When the participants were told that both sources were human experts or that both were statistical methods, the apparently statistical advice had the same influence on the adjusted estimates as the advice that appeared to come from a human expert. However, when the apparent sources of advice were different, much greater attention was paid to the advice that apparently came from a human expert. Theories of advice utilization are used to identify why the advice of a human expert is likely to be preferred to advice from a statistical method. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Noise propagation and error estimations in multivariate curve resolution alternating least squares using resampling methods

JOURNAL OF CHEMOMETRICS, Issue 7-8 2004
Joaquim Jaumot
Abstract Different approaches for the calculation of prediction intervals for the estimates obtained in multivariate curve resolution using alternating least squares optimization are explored and compared. These methods include Monte Carlo simulations, noise addition and jackknife resampling. The results obtained allow a preliminary investigation of noise effects and error propagation in the resolved profiles and in the parameters estimated from them. The effect of noise on the rotational ambiguities frequently found in curve resolution methods is discussed. This preliminary study is illustrated with the resolution of a three-component equilibrium system with overlapping concentration and spectral profiles. Copyright © 2004 John Wiley & Sons, Ltd. [source]
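
A sketch of the noise-addition idea: perturb the data at the estimated noise level, rerun the estimator, and take empirical percentiles of the recovered parameters. Here `estimator` stands in for the full MCR-ALS optimization, which is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(11)

def noise_addition_intervals(D, estimator, sigma, B=500, alpha=0.05):
    """D: data matrix; estimator: callable returning a parameter vector;
    sigma: assumed noise standard deviation. Returns percentile intervals."""
    params = np.array([estimator(D + sigma * rng.standard_normal(D.shape))
                       for _ in range(B)])
    return np.quantile(params, [alpha / 2, 1 - alpha / 2], axis=0)
```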


Simultaneous prediction intervals for ARMA processes with stable innovations

JOURNAL OF FORECASTING, Issue 3 2009
John P. Nolan
Abstract We describe a method for calculating simultaneous prediction intervals for ARMA time series with heavy-tailed stable innovations. The spectral measure of the vector of prediction errors is shown to be discrete. Direct computation of high-dimensional stable probabilities is not feasible, but we show that Monte Carlo estimation of the interval widths is practical. Copyright © 2008 John Wiley & Sons, Ltd. [source]
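
A sketch of the Monte Carlo calibration alluded to above: simulate stable innovation paths, form the h-step prediction errors from the model's ψ-weights, and inflate marginal bounds until they hold jointly. The ψ-weights and stable index are placeholders, not the paper's worked example:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

def simultaneous_intervals(psi, alpha_stable, H, level=0.95, n_sim=20000):
    """psi: MA(inf) weights psi_0..psi_{H-1}; returns half-widths per horizon
    such that all H prediction errors fall inside with probability ~level."""
    psi = np.asarray(psi, float)
    Z = levy_stable.rvs(alpha_stable, 0, size=(n_sim, H), random_state=rng)
    e = np.zeros((n_sim, H))
    for h in range(1, H + 1):
        # h-step error: e_h = sum_{j=0}^{h-1} psi_j * Z_{t+h-j}
        e[:, h - 1] = Z[:, :h] @ psi[:h][::-1]
    s = np.quantile(np.abs(e), level, axis=0)    # marginal scales
    m = np.max(np.abs(e) / s, axis=1)            # joint exceedance statistic
    c = np.quantile(m, level)                    # simultaneous calibration
    return c * s
```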


Bootstrap prediction intervals for autoregressive models of unknown or infinite lag order

JOURNAL OF FORECASTING, Issue 4 2002
Jae H. Kim
Abstract Recent studies on bootstrap prediction intervals for autoregressive (AR) models provide simulation findings when the lag order is known. In practical applications, however, the AR lag order is unknown or can even be infinite. This paper is concerned with prediction intervals for AR models of unknown or infinite lag order. Akaike's information criterion is used to estimate (approximate) the unknown (infinite) AR lag order. Small-sample properties of bootstrap and asymptotic prediction intervals are compared under both normal and non-normal innovations. Bootstrap prediction intervals are constructed based on the percentile and percentile-t methods, using the standard bootstrap as well as the bootstrap-after-bootstrap. It is found that bootstrap-after-bootstrap prediction intervals show substantially better small-sample properties than the alternatives, especially when the sample size is small and the model has a unit root or near-unit root. Copyright © 2002 John Wiley & Sons, Ltd. [source]
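
A sketch of a percentile bootstrap interval with an AIC-selected lag order, conditional on the observed last p values; the paper's procedures additionally re-estimate the model on each bootstrap series, and the bias-correcting bootstrap-after-bootstrap layer is omitted here for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_ar(y, p):
    """OLS fit of an AR(p) with intercept; returns coefficients and residuals."""
    X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef, y[p:] - X @ coef

def aic_order(y, pmax=8):
    # Rough AIC comparison; a careful version would use a common sample.
    aics = []
    for p in range(1, pmax + 1):
        _, resid = fit_ar(y, p)
        aics.append(len(y) * np.log(resid @ resid / len(resid)) + 2 * (p + 1))
    return int(np.argmin(aics)) + 1

def bootstrap_pi(y, h, B=1000, alpha=0.05):
    y = np.asarray(y, float)
    p = aic_order(y)
    coef, resid = fit_ar(y, p)
    resid = resid - resid.mean()
    fc = np.empty((B, h))
    for b in range(B):
        path = list(y[-p:])
        for _ in range(h):
            e = rng.choice(resid)                        # resample residuals
            path.append(coef[0] + coef[1:] @ np.array(path[-p:][::-1]) + e)
        fc[b] = path[p:]
    return np.quantile(fc, [alpha / 2, 1 - alpha / 2], axis=0)
```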


Guaranteed-content prediction intervals for non-linear autoregressions

JOURNAL OF FORECASTING, Issue 4 2001
Xavier de Luna
Abstract In this paper we present guaranteed-content prediction intervals for time series data. These intervals are such that their content (or coverage) is guaranteed with a given high probability. They are thus more relevant for the observed time series at hand than classical prediction intervals, whose content is guaranteed merely on average over hypothetical repetitions of the prediction process. This type of prediction inference has, however, been ignored in the time series context because of a lack of results. This gap is filled by deriving asymptotic results for a general family of autoregressive models, thereby extending existing results in non-linear regression. The actual construction of guaranteed-content prediction intervals directly follows from this theory. Simulated and real data are used to illustrate the practical difference between classical and guaranteed-content prediction intervals for ARCH models. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Identification of asymmetric prediction intervals through causal forces

JOURNAL OF FORECASTING, Issue 4 2001
J. Scott Armstrong
Abstract When causal forces are specified, the expected direction of the trend can be compared with the trend based on extrapolation. Series in which the expected trend conflicts with the extrapolated trend are called contrary series. We hypothesized that contrary series would have asymmetric forecast errors, with larger errors in the direction of the expected trend. Using annual series that contained minimal information about causality, we examined 671 contrary forecasts. As expected, most (81%) of the errors were in the direction of the causal forces. Also as expected, the asymmetries were more likely for longer forecast horizons; for six-year-ahead forecasts, 89% of the forecasts were in the expected direction. The asymmetries were often substantial. Contrary series should be flagged and treated separately when prediction intervals are estimated, perhaps by shifting the interval in the direction of the causal forces. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Accuracy and precision of radiostereometric analysis in the measurement of three-dimensional micromotion in a fracture model of the distal radius

JOURNAL OF ORTHOPAEDIC RESEARCH, Issue 2 2005
Rami Madanat
Abstract The purpose of the current study was to verify the feasibility of radiostereometric analysis (RSA) in monitoring three-dimensional fracture micromotion in fractures of the distal radius. The experimental set-up consisted of a simulated model of an extra-articular Colles' fracture, including metallic beads inserted into the bone on either side of the fracture site. The model was rigidly fixed to high-precision micrometer stages allowing controlled translation along three axes and rotation about the longitudinal and transverse axes. The whole construct was placed inside an RSA calibration cage with two perpendicular radiographic film cassettes. Accuracy was calculated as the 95% prediction intervals from the regression analyses between the micromotion measured by RSA and actual displacements measured by micrometers. Precision was determined as the standard deviation of five repeated measurements of a 200 µm displacement or a 0.5° rotation along a specific axis. Translations from 25 µm to 5 mm were measured with an accuracy of ±6 µm, and translations of 200 µm were measured with a precision of 2–6 µm. Rotations ranging from 1/6° to 2° were measured with an accuracy of ±0.073°, and rotations of 1/2° were measured with a precision of 0.025°–0.096°. The number of markers and their configuration had a greater impact on the accuracy and precision of rotation than on those of translation. Aside from the unknown rate of clinical marker loosening, the current results favor the use of at least four markers in each bone fragment in distal radius fractures. These results suggest a strong rationale for the use of RSA as an objective tool for comparing different treatment modalities and novel bone graft substitutes aimed at stabilization of fractures of the distal radius. © 2004 Orthopaedic Research Society. Published by Elsevier Ltd. All rights reserved. [source]


Bootstrap predictive inference for ARIMA processes

JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2004
Lorenzo Pascual
Abstract In this study, we propose a new bootstrap strategy to obtain prediction intervals for autoregressive integrated moving-average processes. Its main advantage over other bootstrap methods previously proposed for autoregressive integrated processes is that variability due to parameter estimation can be incorporated into prediction intervals without requiring the backward representation of the process. Consequently, the procedure is very flexible and can be extended to processes even if their backward representation is not available. Furthermore, its implementation is very simple. The asymptotic properties of the bootstrap prediction densities are obtained. Extensive finite-sample Monte Carlo experiments are carried out to compare the performance of the proposed strategy vs. alternative procedures. The behaviour of our proposal equals or outperforms the alternatives in most of the cases. Furthermore, our bootstrap strategy is also applied for the first time to obtain the prediction density of processes with moving-average components. [source]


The adjustment of prediction intervals to account for errors in parameter estimation

JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2004
Paul Kabaila
Abstract Standard approximate 1 − α prediction intervals (PIs) need to be adjusted to take account of the error in estimating the parameters. This adjustment may be aimed at setting the (unconditional) probability that the PI includes the value being predicted equal to 1 − α. Alternatively, this adjustment may be aimed at setting the probability that the PI includes the value being predicted equal to 1 − α, conditional on an appropriate statistic T. For an autoregressive process of order p, it has been suggested that T consist of the last p observations. We provide a new criterion by which both forms of adjustment can be compared on an equal footing. This new criterion of performance is the closeness of the coverage probability, conditional on all of the data, of the adjusted PI and 1 − α. In this paper, we measure this closeness by the mean square of the difference between this conditional coverage probability and 1 − α. We illustrate the application of this new criterion to a Gaussian zero-mean autoregressive process of order 1 and one-step-ahead prediction. For this example, this comparison shows that the adjustment which is aimed at setting the coverage probability equal to 1 − α conditional on the last observation is the better of the two adjustments. [source]


Under restrictive conditions, can the widths of linear enamel hypoplasias be used as relative indicators of stress episode duration?

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 2 2009
Amelia Hubbard
Abstract Linear enamel hypoplasia (LEH), a type of enamel defect reflecting nonspecific physiological stress, has traditionally been used by bioarchaeologists to assess human health. Initially, measurements of defect width were used to estimate the duration of stress episodes. More recently, methods of counting within-defect perikymata (enamel growth increments) were developed to more accurately assess duration. Because perikymata are often not continuously visible within defects, while widths can usually be measured, the primary purpose of this article was to determine if, under restrictive conditions, the widths of LEH defects might be used as relative indicators of stress episode duration. Using a set of dental replicas from the prehistoric Irene Mound (1150–1400 A.D.), this study also investigated potential sources of variation in defect widths and how often defect widths could be measured and within-defect perikymata counted. Of 120 defects, only 47 contained both measurable defect widths and total within-defect perikymata, while 79 had measurable defect widths. Regression analysis revealed that, for these 47 defects, defect widths were more strongly related to the total number of within-defect perikymata than they were to crown region or tooth type. Although wide prediction intervals indicated that a defect's width could not be used to predict the number of within-defect perikymata for an individual, narrower confidence intervals associated with hypothetical mean population widths suggested that mean defect widths might be used to rank populations in terms of relative average stress episode duration. Am J Phys Anthropol 2009. © 2008 Wiley-Liss, Inc. [source]
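
A sketch of the interval contrast drawn above: for a regression of within-defect perikymata counts on defect width, the prediction interval for an individual defect is far wider than the confidence interval for the mean response at the same width. Variable names are placeholders:

```python
import numpy as np
import statsmodels.api as sm

def pi_vs_ci(width, perikymata, width0, alpha=0.05):
    """Return (CI for mean count, PI for one new defect) at width0."""
    X = sm.add_constant(np.asarray(width, float))
    fit = sm.OLS(np.asarray(perikymata, float), X).fit()
    pred = fit.get_prediction(np.array([[1.0, width0]]))
    ci = pred.conf_int(alpha=alpha)              # mean response interval
    pi = pred.conf_int(obs=True, alpha=alpha)    # single-observation interval
    return ci, pi
```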


EXPONENTIAL SMOOTHING AND NON-NEGATIVE DATA

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2009
Muhammad Akram
Summary The most common forecasting methods in business are based on exponential smoothing, and the most common time series in business are inherently non-negative. Therefore it is of interest to consider the properties of the potential stochastic models underlying exponential smoothing when applied to non-negative data. We explore exponential smoothing state space models for non-negative data under various assumptions about the innovations, or error, process. We first demonstrate that prediction distributions from some commonly used state space models may have an infinite variance beyond a certain forecasting horizon. For multiplicative error models that do not have this flaw, we show that sample paths will converge almost surely to zero even when the error distribution is non-Gaussian. We propose a new model with similar properties to exponential smoothing, but which does not have these problems, and we develop some distributional properties for our new model. We then explore the implications of our results for inference, and compare the short-term forecasting performance of the various models using data on the weekly sales of over 300 items of costume jewelry. The main findings of the research are that the Gaussian approximation is adequate for estimation and one-step-ahead forecasting. However, as the forecasting horizon increases, the approximate prediction intervals become increasingly problematic. When the model is to be used for simulation purposes, a suitably specified scheme must be employed. [source]
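
A sketch of the kind of simulation scheme the summary points to, for a local-level multiplicative-error model y_t = l_{t−1}(1 + e_t), l_t = l_{t−1}(1 + αe_t); the parameter values and the truncation that keeps sample paths non-negative are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_etsmnn(l0, alpha, sigma, horizon, n_paths=20000):
    """Simulate future paths of a multiplicative-error local-level model."""
    level = np.full(n_paths, l0)
    paths = np.empty((n_paths, horizon))
    for h in range(horizon):
        # Truncate errors below -1 so simulated series stay non-negative
        e = np.clip(sigma * rng.standard_normal(n_paths), -0.99, None)
        paths[:, h] = level * (1 + e)
        level = level * (1 + alpha * e)
    return paths

paths = simulate_etsmnn(l0=100.0, alpha=0.3, sigma=0.2, horizon=52)
pi_90 = np.quantile(paths, [0.05, 0.95], axis=0)   # empirical 90% intervals
```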


Time-Varying Functional Regression for Predicting Remaining Lifetime Distributions from Longitudinal Trajectories

BIOMETRICS, Issue 4 2005
Hans-Georg Müller
Summary A recurring objective in longitudinal studies on aging and longevity has been the investigation of the relationship between age-at-death and current values of a longitudinal covariate trajectory that quantifies reproductive or other behavioral activity. We propose a novel technique for predicting age-at-death distributions for situations where an entire covariate history is included in the predictor. The predictor trajectories up to current time are represented by time-varying functional principal component scores, which are continuously updated as time progresses and are considered to be time-varying predictor variables that are entered into a class of time-varying functional regression models that we propose. We demonstrate for biodemographic data how these methods can be applied to obtain predictions for age-at-death and estimates of remaining lifetime distributions, including estimates of quantiles and of prediction intervals for remaining lifetime. Estimates and predictions are obtained for individual subjects, based on their observed behavioral trajectories, and include a dimension-reduction step that is implemented by projecting on a single index. The proposed techniques are illustrated with data on longitudinal daily egg-laying for female medflies, predicting remaining lifetime and age-at-death distributions from individual event histories observed up to current time. [source]


The Challenge of Predicting Demand for Emergency Department Services

ACADEMIC EMERGENCY MEDICINE, Issue 4 2008
Melissa L. McCarthy MS
Abstract Objectives: The objective was to develop methodology for predicting demand for emergency department (ED) services by characterizing ED arrivals. Methods: One year of ED arrival data from an academic ED were merged with local climate data. ED arrival patterns were described; Poisson regression was selected to represent the count of hourly ED arrivals as a function of temporal, climatic, and patient factors. The authors evaluated the appropriateness of prediction models by whether the data met key Poisson assumptions, including variance proportional to the mean, positive skewness, and absence of autocorrelation among hours. Model accuracy was assessed by comparing predicted and observed histograms of arrival counts and by how frequently the observed hourly count fell within the 50 and 90% prediction intervals. Results: Hourly ED arrivals were obtained for 8,760 study hours. Separate models were fit for high- versus low-acuity patients because of significant arrival pattern differences. The variance was approximately equal to the mean in the high- and low-acuity models. There was no residual autocorrelation (r = 0) present after controlling for temporal, climatic, and patient factors that influenced the arrival rate. The observed hourly count fell within the 50 and 90% prediction intervals 50 and 90% of the time, respectively. The observed histogram of arrival counts was nearly identical to the histogram predicted by a Poisson process. Conclusions: At this facility, demand for ED services was well approximated by a Poisson regression model. The expected arrival rate is characterized by a small number of factors and does not depend on recent numbers of arrivals. [source]
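
Given a fitted hourly mean μ (e.g., from a Poisson GLM on temporal, climatic, and patient covariates; the fitting step is not reproduced here), the 50% and 90% prediction intervals follow directly from Poisson quantiles. A minimal sketch:

```python
from scipy import stats

def poisson_prediction_intervals(mu):
    """Central 50% and 90% intervals for one hour with predicted mean mu."""
    pi50 = stats.poisson.ppf([0.25, 0.75], mu)
    pi90 = stats.poisson.ppf([0.05, 0.95], mu)
    return pi50, pi90

pi50, pi90 = poisson_prediction_intervals(6.2)   # e.g. mu = 6.2 arrivals/hour
```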


A Bayesian Sensitivity Analysis of Out-of-hospital 12-lead Electrocardiograms: Implications for Regionalization of Cardiac Care

ACADEMIC EMERGENCY MEDICINE, Issue 12 2007
Scott T. Youngquist MD
Background The effectiveness of out-of-hospital regionalization of ST-elevation myocardial infarction (STEMI) patients to hospitals providing primary percutaneous coronary intervention depends on the accuracy of the out-of-hospital 12-lead electrocardiogram (PHTL). Although estimates of sensitivity and specificity of PHTL for STEMI have been reported, the impact of out-of-hospital STEMI prevalence on positive predictive value (PPV) has not been evaluated. Objectives To describe the relationship between varying population STEMI prevalences and PHTL predictive values, using ranges of PHTL sensitivity and specificity. Methods The authors performed a Bayesian analysis using PHTL, where values for sensitivities (60%–70%), specificities (98%), and two prevalence ranges (0.5%–5% and 5%–20%) were derived from a literature review. PPV prediction intervals were compared with three months of prospective data from the Los Angeles County Emergency Medical Services Agency STEMI regionalization program. Results When the estimated prevalence of STEMI in the out-of-hospital population is 5%–20%, the median PPV of the PHTL is 83% (95% credible interval [CrI] = 53% to 97%). However, if the population prevalence of STEMI is between 0.5% and 5%, the median PPV is 43% (95% CrI = 12% to 86%). When the PPV prediction intervals were incorporated with the Los Angeles County Emergency Medical Services Agency data, the PPV was 66%. Conclusions Even when assuming high specificity for PHTL, the false-positive rate will be considerable if applied to a population at low risk for STEMI. Before broadening application of PHTL to low-risk patients, the implications of a high false-positive rate should be considered. [source]
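
A sketch reproducing the flavor of the analysis: draw sensitivity, specificity, and prevalence from the ranges quoted above, push them through Bayes' rule, and summarize the implied PPV distribution. The uniform priors are an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
sens = rng.uniform(0.60, 0.70, n)          # sensitivity range from the abstract
spec = np.full(n, 0.98)                    # fixed specificity
prev = rng.uniform(0.005, 0.05, n)         # low-prevalence scenario (0.5%-5%)

# Bayes' rule for the positive predictive value
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
print(np.median(ppv), np.quantile(ppv, [0.025, 0.975]))
```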