Prediction Error (prediction + error)

Kinds of Prediction Error

  • reward prediction error


  • Selected Abstracts


    When What You See Isn't What You Get: Alcohol Cues, Alcohol Administration, Prediction Error, and Human Striatal Dopamine

    ALCOHOLISM, Issue 1 2009
    Karmen K. Yoder
    Background: The mesolimbic dopamine (DA) system is implicated in the development and maintenance of alcohol drinking; however, the exact mechanisms by which DA regulates human alcohol consumption are unclear. This study assessed the distinct effects of alcohol-related cues and alcohol administration on striatal DA release in healthy humans. Methods: Subjects underwent 3 PET scans with [11C]raclopride (RAC). Subjects were informed that they would receive either an IV Ringer's lactate infusion or an alcohol (EtOH) infusion during scanning, with naturalistic visual and olfactory cues indicating which infusion would occur. Scans were acquired in the following sequence: (1) Baseline Scan: Neutral cues predicting a Ringer's lactate infusion, (2) CUES Scan: Alcohol-related cues predicting alcohol infusion in a Ringer's lactate solution, but with alcohol infusion after scanning to isolate the effects of cues, and (3) EtOH Scan: Neutral cues predicting Ringer's, but with alcohol infusion during scanning (to isolate the effects of alcohol without confounding expectation or craving). Results: Relative to baseline, striatal DA concentration decreased during CUES, but increased during EtOH. Conclusion: While the results appear inconsistent with some animal experiments showing dopaminergic responses to alcohol's conditioned cues, they can be understood in the context of the hypothesized role of the striatum in reward prediction error, and of animal studies showing that midbrain dopamine neurons decrease and increase firing rates during negative and positive prediction errors, respectively. We believe that our data are the first in humans to demonstrate such changes in striatal DA during reward prediction error. [source]


    On Estimating Conditional Mean-Squared Prediction Error in Autoregressive Models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2003
    CHING-KANG ING
    Abstract. Zhang and Shaman considered the problem of estimating the conditional mean-squared prediction error (CMSPE) for a Gaussian autoregressive (AR) process. They used the final prediction error (FPE) of Akaike to estimate CMSPE and proposed that FPE's effectiveness be judged by its asymptotic correlation with CMSPE. However, as pointed out by Kabaila and He, the derivation of this correlation by Zhang and Shaman is incomplete, and the performance of FPE in estimating CMSPE is also poor in Kabaila and He's simulation study. Kabaila and He further proposed an alternative estimator of CMSPE, V, in the stationary AR(1) model. They reported, through Monte Carlo simulation results, that V has a larger normalized correlation with CMSPE. In this paper, we propose a generalization of V to the higher-order AR model and obtain the asymptotic correlations of FPE and this generalized estimator with CMSPE. We show that the limit of the normalized correlation of the generalized estimator with CMSPE is larger than that of FPE with CMSPE, and hence Kabaila and He's finding is justified theoretically. In addition, the performances of the above estimators of CMSPE are re-examined in terms of mean-squared error (MSE). Our main conclusion is that, from the MSE point of view, the generalized estimator is the best choice among a family of asymptotically unbiased estimators of CMSPE that includes FPE and V as special cases. [source]
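
For readers who want the formula behind the FPE estimator discussed above, Akaike's final prediction error for an AR(p) model fitted to n observations takes the standard textbook form (background material, not an expression taken from this paper):

```latex
% Akaike's final prediction error for an AR(p) model fitted by least squares to
% n observations, where \hat{\sigma}^2 is the residual variance estimate.
\mathrm{FPE} \;=\; \hat{\sigma}^{2}\,\frac{n+p}{n-p}
```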


    Two-sample Comparison Based on Prediction Error, with Applications to Candidate Gene Association Studies

    ANNALS OF HUMAN GENETICS, Issue 1 2007
    K. Yu
    Summary To take advantage of the increasingly available high-density SNP maps across the genome, various tests that compare multilocus genotypes or estimated haplotypes between cases and controls have been developed for candidate gene association studies. Here we view this two-sample testing problem from the perspective of supervised machine learning and propose a new association test. The approach adopts the flexible and easy-to-understand classification tree model as the learning machine, and uses the estimated prediction error of the resulting prediction rule as the test statistic. This procedure not only provides an association test but also generates a prediction rule that can be useful in understanding the mechanisms underlying complex disease. Under the set-up of a haplotype-based transmission/disequilibrium test (TDT) type of analysis, we find through simulation studies that the proposed procedure has the correct type I error rates and is robust to population stratification. The power of the proposed procedure is sensitive to the chosen prediction error estimator. Among commonly used prediction error estimators, the .632+ estimator results in a test that has the best overall performance. We also find that the test using the .632+ estimator is more powerful than the standard single-point TDT analysis, Pearson's goodness-of-fit test based on estimated haplotype frequencies, and two haplotype-based global tests implemented in the genetic analysis package FBAT. To illustrate the application of the proposed method in population-based association studies, we use the procedure to study the association between non-Hodgkin lymphoma and the IL10 gene. [source]
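
The abstract singles out the .632+ bootstrap estimator of prediction error. The sketch below is a minimal NumPy/scikit-learn illustration of that estimator with 0-1 loss and a classification tree as the learner; it is a simplified reading of Efron and Tibshirani's definition, not the authors' code, and the tree depth and bootstrap count are arbitrary choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def err632plus(X, y, n_boot=200, random_state=0):
    """Rough .632+ bootstrap estimate of prediction error (0-1 loss) for a
    classification tree; X, y are NumPy arrays.  Illustrative only."""
    rng = np.random.default_rng(random_state)
    n = len(y)

    # Apparent (resubstitution) error of the rule fitted on all data.
    full_fit = DecisionTreeClassifier(max_depth=3).fit(X, y)
    resub_pred = full_fit.predict(X)
    err_bar = np.mean(resub_pred != y)

    # Leave-one-out bootstrap error: each point is scored only by trees
    # fitted on bootstrap samples that do not contain it.
    loss_sum = np.zeros(n)
    loss_cnt = np.zeros(n)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), idx)
        if oob.size == 0:
            continue
        fit = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
        loss_sum[oob] += (fit.predict(X[oob]) != y[oob])
        loss_cnt[oob] += 1
    err_boot = np.mean(loss_sum[loss_cnt > 0] / loss_cnt[loss_cnt > 0])

    # No-information error rate and relative overfitting rate.
    gamma = np.mean(y[:, None] != resub_pred[None, :])
    r_hat = (err_boot - err_bar) / (gamma - err_bar) if gamma > err_bar else 0.0
    r_hat = float(np.clip(r_hat, 0.0, 1.0))

    w = 0.632 / (1.0 - 0.368 * r_hat)
    return (1.0 - w) * err_bar + w * min(err_boot, gamma)
```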


    Efron-Type Measures of Prediction Error for Survival Analysis

    BIOMETRICS, Issue 4 2007
    Thomas A. Gerds
    Summary Estimates of the prediction error play an important role in the development of statistical methods and models, and in their applications. We adapt the resampling tools of Efron and Tibshirani (1997, Journal of the American Statistical Association, 92, 548-560) to survival analysis with right-censored event times. We find that flexible rules, like artificial neural nets, classification and regression trees, or regression splines, can be assessed and compared with less flexible rules on the same data in which they were developed. The methods are illustrated with data from a breast cancer trial. [source]
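
For orientation, a widely used prediction-error measure in this right-censored setting is the Brier score with inverse-probability-of-censoring weights. The estimator below is standard background notation (with pi-hat the model's predicted survival probability and G-hat a Kaplan-Meier estimate of the censoring distribution), not a formula quoted from the paper:

```latex
% Inverse-probability-of-censoring weighted Brier score at time t, with
% observed times \tilde T_i, event indicators \delta_i, predicted survival
% probabilities \hat\pi(t \mid X_i), and censoring survival estimate \hat G.
\widehat{\mathrm{BS}}(t) = \frac{1}{n}\sum_{i=1}^{n}
\left\{
 \frac{\mathbf{1}\{\tilde T_i \le t,\ \delta_i = 1\}\,\bigl(0-\hat\pi(t\mid X_i)\bigr)^{2}}{\hat G(\tilde T_i^{-})}
 + \frac{\mathbf{1}\{\tilde T_i > t\}\,\bigl(1-\hat\pi(t\mid X_i)\bigr)^{2}}{\hat G(t)}
\right\}
```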


    Prediction of human pharmacokinetics – gut-wall metabolism

    JOURNAL OF PHARMACY AND PHARMACOLOGY: AN INTERNATIONAL JOURNAL OF PHARMACEUTICAL SCIENCE, Issue 10 2007
    Urban Fagerholm
    Intestinal mucosal cells operate with different metabolic and transport activity, and not all of them are involved in drug absorption and metabolism. The fraction of these cells involved depends on the absorption characteristics of compounds and is difficult to predict (it is probably small). The cells also appear comparatively impermeable. This indicates a limited applicability of microsome intrinsic clearance (CLint) data for prediction of gut-wall metabolism, and the difficulty of predicting the gut-wall CL (CLGW) and extraction ratio (EGW). The objectives of this review were to evaluate determinants and methods for prediction of first-pass and systemic EGW and CLGW in man, and, if required and possible, to develop new simple prediction methodology. Animal gut-wall metabolism data do not appear reliable for scaling to man. In general, the systemic CLGW is low compared with the hepatic CL. For a moderately extracted CYP3A4 substrate with high permeability, midazolam, the gut-wall/hepatic CL ratio is only 1/35. This suggests (as a general rule) that systemic CLGW can be neglected when predicting the total CL. First-pass EGW could be of importance, especially for substrates of CYP3A4 and conjugating enzymes. For several reasons, including those presented above and the fact that blood-flow-based models are not applicable in the absorptive direction, first-pass EGW seems poorly predicted with available methodology. Prediction errors are large (several-fold on average; maximum 15-fold). A new simple first-pass EGW prediction method that compensates for regional and local differences in absorption and metabolic activity has been developed. It is based on human cell in-vitro CLint and fractional absorption from the small intestine for reference (including verapamil) and test substances, and on in-vivo first-pass EGW data for reference substances. First-pass EGW values for CYP3A4 substrates with various degrees of gastrointestinal uptake and CLint, and for a CYP2D6 substrate, were well predicted (negligible errors). More high-quality in-vitro CLint and in-vivo EGW data are required for further validation of the method. [source]


    Evaluation of the PESERA model in two contrasting environments

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2009
    F. Licciardello
    Abstract The performance of the Pan-European Soil Erosion Risk Assessment (PESERA) model was evaluated by comparison with existing soil erosion data collected in plots under different land uses and climate conditions in Europe. In order to identify the most important sources of error, the PESERA model was evaluated by comparing model output with measured values as well as by assessing the effect of the various model components on prediction accuracy through a multistep approach. First, the performance of the hydrological and erosion components of PESERA was evaluated separately by comparing both runoff and soil loss predictions with measured values. In order to assess the performance of the vegetation growth component of PESERA, predictions based on observed values of vegetation ground cover were also compared with predictions based on the simulated vegetation cover values. Finally, in order to evaluate the sediment transport model, predicted monthly erosion rates were also calculated using observed values of runoff and vegetation cover instead of simulated values. Moreover, in order to investigate the capability of PESERA to reproduce seasonal trends, the observed and simulated monthly runoff and erosion values were aggregated at different temporal scales, and we investigated to what extent the model prediction error could be reduced by output aggregation. PESERA showed promise in predicting annual average spatial variability quite well. In its present form, short-term temporal variations are not well captured, probably for various reasons. The multistep approach showed that this is not only due to unrealistic simulation of cover and runoff; erosion prediction is also an important source of error. Although variability between the investigated land uses and climate conditions is well captured, absolute rates are strongly underestimated. A calibration procedure, focused on a soil erodibility factor, is proposed to reduce the significant underestimation of soil erosion rates. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Consistency of dynamic site response at Port Island

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 6 2001
    Laurie G. Baise
    Abstract System identification (SI) methods are used to determine empirical Green's functions (EGF) for soil intervals at the Port Island site in Kobe, Japan, and in shake table model tests performed by the Port and Harbor Research Institute (PHRI) to emulate the site during the 17 January 1995 Hyogo-ken Nanbu earthquake. The model form for the EGFs is a parametric auto-regressive moving average (ARMA) model mapping the ground motions recorded at the base of a soil interval to the top of that interval, hence capturing the effect of the soil on the through-passing wave. The consistency of site response at Port Island before, during, and after the mainshock is examined by applying small-motion foreshock EGFs to incoming ground motions over these time intervals. The prediction errors (or misfits) for the foreshocks, the mainshock, and the aftershocks are assessed to determine the extent of altered soil response as a result of liquefaction of the ground during the mainshock. In addition, the consistency of soil response between field and model test is verified by applying EGFs calculated from the shake table test to the 17 January input data. The prediction error is then used to assess the consistency of behaviour between the two cases. Using EGFs developed for small-amplitude foreshock ground motions, ground motions were predicted with small error for all intervals of the vertical array except those that liquefied. Analysis of the post-liquefaction ground conditions implies that the site response gradually returns to a pre-earthquake state. Site behaviour is found to be consistent between foreshocks and the mainshock for the native ground (below 16 m in the field), with a normalized mean square error (NMSE) of 0.080 and a peak ground acceleration (PGA) of 0.5g. When the soil actually liquefies (change of state), recursive models are needed to track the variable soil behaviour for the remainder of the shaking. The recursive models are shown to demonstrate consistency between the shake table tests and the field, with an NMSE of 0.102 for the 16 m-to-surface interval that liquefied. The aftershock ground response was not modelled well with the foreshock EGF immediately after the mainshock (NMSE ranging from 0.37 to 0.92). One month after the mainshock, the prediction error from the foreshock model was back to the foreshock error level. Copyright © 2001 John Wiley & Sons, Ltd. [source]
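
As a rough illustration of the input-output modelling and misfit measure described above, the sketch below fits a simple least-squares ARX map from base motion to surface motion and computes a normalized mean square error; it is a stand-in for the paper's ARMA empirical Green's functions, with arbitrary model orders, and is not the authors' implementation.

```python
import numpy as np

def fit_arx(u, y, na=4, nb=4):
    """Least-squares fit of an ARX map from base motion u to surface motion y:
    y[t] ~ sum_i a_i*y[t-i] + sum_j b_j*u[t-j].  u, y are float arrays."""
    p = max(na, nb)
    rows, targets = [], []
    for t in range(p, len(y)):
        rows.append(np.concatenate([y[t-na:t][::-1], u[t-nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta, p

def predict_arx(theta, p, u, y, na=4, nb=4):
    """One-step-ahead predictions using the fitted coefficients."""
    yhat = np.zeros_like(y)
    yhat[:p] = y[:p]
    for t in range(p, len(y)):
        reg = np.concatenate([y[t-na:t][::-1], u[t-nb:t][::-1]])
        yhat[t] = reg @ theta
    return yhat

def nmse(y, yhat):
    """Normalized mean square error used to quantify prediction misfit."""
    return np.mean((y - yhat) ** 2) / np.var(y)
```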


    Separable approximations of space-time covariance matrices

    ENVIRONMETRICS, Issue 7 2007
    Marc G. Genton
    Abstract Statistical modeling of space-time data has often been based on separable covariance functions, that is, covariances that can be written as a product of a purely spatial covariance and a purely temporal covariance. The main reason is that the structure of separable covariances dramatically reduces the number of parameters in the covariance matrix and thus facilitates computational procedures for large space-time data sets. In this paper, we discuss separable approximations of nonseparable space-time covariance matrices. Specifically, we describe the nearest Kronecker product approximation, in the Frobenius norm, of a space-time covariance matrix. The algorithm is simple to implement and the solution preserves properties of the space-time covariance matrix, such as symmetry, positive definiteness, and other structures. The separable approximation allows for fast kriging of large space-time data sets. We present several illustrative examples based on an application to data of Irish wind speeds, showing that only small differences in prediction error arise while computational savings for large data sets can be obtained. Copyright © 2007 John Wiley & Sons, Ltd. [source]
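
A minimal NumPy sketch of the nearest Kronecker product approximation in the Frobenius norm (Van Loan-Pitsianis rearrangement followed by a rank-1 SVD) is given below; it illustrates the general technique only and is not the authors' code.

```python
import numpy as np

def nearest_kronecker(A, m, n, p, q):
    """Find B (m x n) and C (p x q) minimizing ||A - kron(B, C)||_F,
    where A is (m*p) x (n*q).  Rearrange A so that each row is one
    p-by-q block, then take the best rank-1 approximation."""
    R = np.empty((m * n, p * q))
    for i in range(m):
        for j in range(n):
            block = A[i*p:(i+1)*p, j*q:(j+1)*q]
            R[i*n + j] = block.ravel()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m, n)
    C = np.sqrt(s[0]) * Vt[0].reshape(p, q)
    return B, C

# For a space-time covariance with nt times and ns sites, one would take
# m = n = nt and p = q = ns, so B approximates the temporal covariance
# and C the spatial covariance (a separable approximation).
```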


    Prediction of sea surface temperature from the global historical climatology network data

    ENVIRONMETRICS, Issue 3 2004
    Samuel S. P. Shen
    Abstract This article describes a spatial prediction method that predicts the monthly sea surface temperature (SST) anomaly field from land-only data. The land data are from the Global Historical Climatology Network (GHCN). The prediction period is 1880-1999 and the prediction ocean domain extends from 60°S to 60°N with a spatial resolution of 5°×5°. The prediction method is a regression over the basis of empirical orthogonal functions (EOFs). The EOFs are computed from the following data sets: (a) the Climate Prediction Center's optimally interpolated sea surface temperature (OI/SST) data (1982-1999); (b) the National Climatic Data Center's blended product of land-surface air temperature (1992-1999) produced by combining the Special Satellite Microwave Imager and GHCN; and (c) the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis data (1982-1999). The optimal prediction method minimizes the first-M-mode mean square error between the true and predicted anomalies over both land and ocean. In the optimization process, the data errors of the GHCN boxes are used, and their contribution to the prediction error is taken into account. The area-averaged root mean square error of prediction is calculated. Numerical experiments demonstrate that this EOF prediction method can accurately recover the global SST anomalies during some circulation patterns and add value to the SST bias correction in the early history of SST observations and to the validation of general circulation models. Our results show that (i) the land-only data can accurately predict the SST anomaly in El Niño months, when the temperature anomaly structure has very large correlation scales, and (ii) the predictions for La Niña, neutral, or transient months require more EOF modes because of the presence of small-scale structures in the anomaly field. Copyright © 2004 John Wiley & Sons, Ltd. [source]
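
The sketch below illustrates the core EOF-regression idea with NumPy: compute EOFs of a training field by SVD, fit their amplitudes to land-only observations by least squares, and reconstruct the full field. It omits the paper's treatment of GHCN observational errors and mode selection, and the toy data are invented.

```python
import numpy as np

def eof_regression(field_train, land_mask, land_obs, n_modes=10):
    """Reconstruct a full anomaly field from observations at land points only,
    by regressing the land observations onto the leading EOFs of a training
    field (time x space).  Stripped-down illustration only."""
    U, s, Vt = np.linalg.svd(field_train - field_train.mean(axis=0),
                             full_matrices=False)
    eofs = Vt[:n_modes]                                   # (modes, space)
    # Least-squares fit of the EOF amplitudes to the land-only observations.
    amps, *_ = np.linalg.lstsq(eofs[:, land_mask].T, land_obs, rcond=None)
    return amps @ eofs                                    # predicted field everywhere

# Toy usage with random numbers standing in for gridded anomalies.
rng = np.random.default_rng(0)
train = rng.standard_normal((120, 50))                    # 120 months x 50 grid boxes
mask = np.zeros(50, dtype=bool)
mask[:20] = True                                          # first 20 boxes are "land"
print(eof_regression(train, mask, train[0, mask], n_modes=5).shape)
```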


    Short-term electric power load forecasting using feedforward neural networks

    EXPERT SYSTEMS, Issue 3 2004
    Heidar A. Malki
    Abstract: This paper presents the results of a study on short-term electric power load forecasting based on feedforward neural networks. The study investigates the design components that are critical in power load forecasting, which include the selection of the inputs and outputs from the data, the formation of the training and the testing sets, and the performance of the neural network models trained to forecast power load for the next hour and the next day. The experiments are used to identify the combination of the most significant parameters that can be used to form the inputs of the neural networks in order to reduce the prediction error. The prediction error is also reduced by predicting the difference between the power load of the next hour (day) and that of the present hour (day). This is a promising alternative to the commonly used approach of predicting the actual power load. The potential of the proposed method is revealed by its comparison with two existing approaches that utilize neural networks for electric power load forecasting. [source]


    GIS visualisation and analysis of mobile hydroacoustic fisheries data: a practical example

    FISHERIES MANAGEMENT & ECOLOGY, Issue 6 2005
    A. R. COLEY
    Abstract Hydroacoustic remote sensing of fish populations residing in large freshwater bodies has become a widely used and effective monitoring tool. However, easy visualisation of the data and effective analysis is more problematic. The use of GIS-based interpolations enables easy visualisation of survey data and provides an analysis tool for investigating fish populations. Three years of hydroacoustic surveys of Cardiff Bay in South Wales presented an opportunity to develop analysis and visualisation techniques. Inverse distance weighted (IDW) interpolation was used to show the potential of such techniques in analysing survey data both spatially (within a 1-year survey) and temporally (by looking at the spatial changes between years). IDW was fairly successful in visualising the hydroacoustic data for Cardiff Bay. However, other techniques may improve on this initial work and provide improved analysis, total density estimates and statistically derived estimations of prediction error. [source]
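
A minimal inverse distance weighting routine, for readers unfamiliar with the interpolation used here; this is a generic sketch, not the survey's GIS workflow.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighted interpolation of point samples (e.g. acoustic
    fish-density estimates) onto query points.  xy_known: (n, 2), z_known: (n,),
    xy_query: (m, 2)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)
```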


    Conidial dispersal of Gremmeniella abietina: climatic and microclimatic factors

    FOREST PATHOLOGY, Issue 6 2003
    R.-L. Petäistö
    Summary Conidial dispersal started in Suonenjoki, central Finland, in 1997-99 by the end of May or the beginning of June, and continued occasionally at least until the middle of September. The temperature sum, in day degrees (d.d., threshold temperature = 5°C), was between 100 and 165 d.d. at the beginning of dispersal. In 1997, 1998 and 1999, respectively, 80%, 94% and 82% of the dispersal had occurred by the end of July to the beginning of August, when the temperature sum reached 800 d.d. All spore data come from spore traps. The cumulative number of conidia increased linearly with the logarithm of the temperature sum. A binary logistic regression model with temperature sum and rainfall as explanatory variables predicted the date of the first spores in the spring accurately: the prediction error was at most 3 days. The model correctly classified 69% of all days in the analysis as spore-free days and, correspondingly, 74% as days with at least one spore caught. A regression model for the number of spores per day explains 21%, 5% and 51% of the within-season variation in 1997, 1998 and 1999, respectively (24%, 37% and 62% on a logarithmic scale). The explanatory weather variables in the model were d.d., rain and year. The very low coefficient of determination in 1998 results from one exceptionally high count of conidia. The between-year differences in the total number of spores were large and could not be explained by the measured weather variables. In the regression model, these differences were taken into account by adding a constant for each year. Rain increased conidial dispersal significantly, but conidia were also found on consecutive rainless days. [source]
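
A minimal sketch of the kind of model described above: a cumulative degree-day predictor (5°C threshold) plus rainfall feeding a binary logistic regression for spore days. The daily numbers are invented for illustration; this is not the study's data or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def degree_day_sum(daily_mean_temp, threshold=5.0):
    """Cumulative temperature sum (day degrees above a 5 degC threshold),
    the main explanatory variable in the dispersal models."""
    return np.cumsum(np.maximum(daily_mean_temp - threshold, 0.0))

# Hypothetical daily inputs (degC and mm) standing in for the 1997-99 records.
temp = np.array([4.0, 8.0, 12.0, 15.0, 14.0, 10.0, 18.0, 20.0])
rain = np.array([0.0, 2.5, 0.0, 6.0, 0.0, 0.0, 3.0, 1.0])
spores_caught = np.array([0, 0, 0, 1, 0, 1, 1, 1])   # 1 = at least one conidium trapped

X = np.column_stack([degree_day_sum(temp), rain])
model = LogisticRegression().fit(X, spores_caught)
print(model.predict_proba(X)[:, 1])                   # daily probability of a spore day
```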


    The biogeography of prediction error: why does the introduced range of the fire ant over-predict its native range?

    GLOBAL ECOLOGY, Issue 1 2007
    Matthew C. Fitzpatrick
    ABSTRACT Aim: The use of species distribution models (SDMs) to predict biological invasions is a rapidly developing area of ecology. However, most studies investigating SDMs typically ignore prediction errors and instead focus on regions where native distributions correctly predict invaded ranges. We investigated the ecological significance of prediction errors using reciprocal comparisons between the predicted invaded and native range of the red imported fire ant (Solenopsis invicta) (hereafter called the fire ant). We questioned whether fire ants occupy similar environments in their native and introduced range, how the environments that fire ants occupy in their introduced range changed through time relative to their native range, and where fire ant propagules are likely to have originated. Location: We developed models for South America and the conterminous United States of America (US). Methods: We developed models using the Genetic Algorithm for Rule-set Prediction (GARP) and 12 environmental layers. Occurrence data from the native range in South America were used to predict the introduced range in the US and vice versa. Further, time-series data recording the invasion of fire ants in the US were used to predict the native range. Results: Native range occurrences under-predicted the invasive potential of fire ants, whereas occurrence data from the US over-predicted the southern boundary of the native range. Secondly, introduced fire ants initially established in environments similar to those in their native range, but subsequently invaded harsher environments. Time-series data suggest that fire ant propagules originated near the southern limit of their native range. Conclusions: Our findings suggest that fire ants from a peripheral native population established in an environment similar to their native environment, and then ultimately expanded into environments in which they are not found in their native range. We argue that reciprocal comparisons between predicted native and invaded ranges will facilitate a better understanding of the biogeography of invasive and native species and of the role of SDMs in predicting future distributions. [source]


    Separate brain regions code for salience vs. valence during reward prediction in humans

    HUMAN BRAIN MAPPING, Issue 4 2007
    Jimmy Jensen
    Abstract Predicting rewards and avoiding aversive conditions is essential for survival. Recent studies using computational models of reward prediction implicate the ventral striatum in appetitive rewards. Whether the same system mediates an organism's response to aversive conditions is unclear. We examined the question using fMRI blood oxygen level-dependent measurements while healthy volunteers were conditioned using appetitive and aversive stimuli. The temporal difference learning algorithm was used to estimate reward prediction error. Activations in the ventral striatum were robustly correlated with prediction error, regardless of the valence of the stimuli, suggesting that the ventral striatum processes salience prediction error. In contrast, the orbitofrontal cortex and anterior insula coded for the differential valence of appetitive/aversive stimuli. Given its location at the interface of limbic and motor regions, the ventral striatum may be critical in learning about motivationally salient stimuli, regardless of valence, and using that information to bias selection of actions. Hum Brain Mapp, 2007. © 2006 Wiley-Liss, Inc. [source]
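
For readers unfamiliar with the temporal difference formulation mentioned above, the prediction error is simply the gap between the outcome actually received and the outcome predicted. A minimal sketch with illustrative parameters (not the study's estimation code):

```python
import numpy as np

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One temporal-difference update.  delta is the (reward) prediction error:
    positive when the outcome is better than predicted, negative when worse,
    mirroring the increase/decrease in phasic dopamine firing discussed above."""
    delta = r + gamma * V[s_next] - V[s]   # prediction error
    V[s] += alpha * delta                  # move the prediction toward the outcome
    return delta

V = np.zeros(3)                                # values of three hypothetical cue states
print(td_update(V, s=0, r=1.0, s_next=1))      # unexpected reward -> positive error
```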


    Frequency analysis for predicting 1% annual maximum water levels along Florida coast, US

    HYDROLOGICAL PROCESSES, Issue 23 2008
    Sudong Xu
    Abstract In the Coastal Flood Insurance Study by the Federal Emergency Management Agency (FEMA, 2005), 1% annual maximum coastal water levels are used in coastal flood hazard mitigation and engineering design in coastal areas of the USA. In this study, a frequency analysis method has been developed to provide more accurate predictions of 1% annual maximum water levels for Florida coastal waters. Using 82 and 94 years of annual maximum water level data at Pensacola and Fernandina, the performance of traditional frequency analysis methods, including the advanced Generalized Extreme Value distribution method, has been evaluated. Comparison with observations of annual maximum water levels with 83 and 95 years of return periods indicates that traditional methods are unable to provide satisfactory predictions of 1% annual maximum water levels that account for hurricane-induced extreme water levels. Based on the characteristics of the annual maximum water level distributions at the Pensacola and Fernandina stations, a new probability distribution method has been developed in this study. Comparison with observations indicates that the method presented in this study significantly improves the accuracy of predictions of 1% annual maximum water levels. For the Fernandina station, predictions of extreme water level match well with the general trend of observations. With a correlation coefficient of 0·98, the error for the maximum observed extreme water level of 3·11 m (National Geodetic Vertical Datum) with 95 years of return period is 0·92%. For the Pensacola station, the prediction error for the maximum observed extreme water level with a return period of 83 years is 5·5%, with a correlation value of 0·98. The frequency analysis has also been compared, with reasonable agreement, to the more costly Monte Carlo simulation method. Copyright © 2008 John Wiley & Sons, Ltd. [source]
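
For comparison, the conventional extreme-value approach that the study evaluates can be sketched in a few lines with SciPy: fit a GEV distribution to the annual maxima and read off the level with 1% annual exceedance probability. The sample values below are invented; this is not the authors' new method.

```python
import numpy as np
from scipy import stats

def one_percent_level(annual_maxima):
    """Fit a generalized extreme value distribution to annual maximum water
    levels and return the level exceeded with 1% annual probability
    (roughly the 100-year level)."""
    shape, loc, scale = stats.genextreme.fit(annual_maxima)
    return stats.genextreme.isf(0.01, shape, loc=loc, scale=scale)

# Hypothetical record (metres); the Pensacola and Fernandina gauges provide
# 82 and 94 years of annual maxima.
sample = np.array([0.8, 1.1, 0.9, 1.3, 2.9, 1.0, 1.2, 0.7, 1.6, 1.1])
print(one_percent_level(sample))
```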


    Generalized forgetting functions for on-line least-squares identification of time-varying systems

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 4 2001
    R. E. Mahony
    The problem of on-line identification of a parametric model for continuous-time, time-varying systems is considered via the minimization of a least-squares criterion with a forgetting function. The proposed forgetting function depends on two time-varying parameters which play crucial roles in the stability analysis of the method. The analysis leads to the consideration of a Lyapunov function for the identification algorithm that incorporates both prediction error and parameter convergence measures. A theorem is proved showing finite time convergence of the Lyapunov function to a neighbourhood of zero, the size of which depends on the evolution of the time-varying error terms in the parametric model representation. Copyright © 2001 John Wiley & Sons, Ltd. [source]
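
The classical special case of this identification scheme is recursive least squares with a constant exponential forgetting factor; a minimal sketch is below. The paper's generalized forgetting function, driven by two time-varying parameters, is not reproduced here.

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with a constant exponential forgetting factor lam.
    phi: (T, n_params) regressor matrix, y: (T,) measurements."""
    n_params = phi.shape[1]
    theta = np.zeros(n_params)          # parameter estimates
    P = delta * np.eye(n_params)        # covariance-like matrix
    for t in range(len(y)):
        x = phi[t]
        e = y[t] - x @ theta            # a priori prediction error
        k = P @ x / (lam + x @ P @ x)   # gain
        theta = theta + k * e
        P = (P - np.outer(k, x @ P)) / lam
    return theta
```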


    Designing predictors for MIMO switching supervisory control

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 3 2001
    Edoardo Mosca
    Abstract The paper studies the problem of inferring the behaviour of a linear feedback loop made up by an uncertain MIMO plant and a given candidate controller from data taken from the plant possibly driven by a different controller. In such a context, it is shown here that a convenient tool to work with is a quantity called normalized discrepancy. This is a measure of mismatch between the loop made up by the unknown plant in feedback with the candidate controller and the nominal 'tuned-loop' related to the same candidate controller. It is shown that discrepancy can in principle be obtained by resorting to the concept of a virtual reference, and conveniently computed in real time by suitably filtering an output prediction error. The latter result is of relevant practical value for on-line implementation and of paramount importance in switching supervisory control of uncertain plants, particularly in the case of a coarse distribution of candidate models. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    How do we tell which estimates of past climate change are correct?

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 10 2009
    Steven C. Sherwood
    Abstract Estimates of past climate change often involve teasing small signals from imperfect instrumental or proxy records. Success is often evaluated on the basis of the spatial or temporal consistency of the resulting reconstruction, or on the apparent prediction error on small space and time scales. However, inherent methodological trade-offs illustrated here can cause climate signal accuracy to be unrelated, or even inversely related, to such performance measures. This is a form of the classic conflict in statistics between minimum variance and unbiased estimators. Comprehensive statistical simulations based on climate model output are probably the best way to reliably assess whether methods of reconstructing climate from sparse records, such as radiosondes or paleoclimate proxies, actually work on longer time scales. Copyright © 2008 Royal Meteorological Society [source]


    Kinetic study of methacrylate copolymerization systems by thermoanalysis methods

    JOURNAL OF APPLIED POLYMER SCIENCE, Issue 5 2008
    Ali Habibi
    Abstract The free-radical solution copolymerization of isobutyl methacrylate with lauryl methacrylate in the presence of an inhibitor was studied with thermoanalysis methods. A set of inhibited polymerization experiments was designed. Four different levels of initial inhibitor/initiator molar ratios were considered. In situ polymerization experiments were carried out with differential scanning calorimetry. Furthermore, to determine the impact of the polymerization media on the rate of initiation, the kinetics of the initiator decomposition were followed with nonisothermal thermoanalysis methods, and the results were compared with in situ polymerization counterparts. The robust M-estimation method was used to retrieve the kinetic parameters of the copolymerization system. This estimation method led to a reasonable prediction error for the dataset with strong multicollinearity. The model-free isoconversional method was employed to find the variation of the Arrhenius activation energy with the conversion. It was found that robust M-estimation outperformed existing methods of estimation in terms of statistical precision and computational speed, while maintaining good robustness. © 2008 Wiley Periodicals, Inc. J Appl Polym Sci 2008 [source]


    Variable selection in random calibration of near-infrared instruments: ridge regression and partial least squares regression settings

    JOURNAL OF CHEMOMETRICS, Issue 3 2003
    Arief Gusnanto
    Abstract Standard methods for calibration of near-infrared instruments, such as partial least-squares (PLS) and ridge regression (RR), typically use the full set of wavelengths in the model. In this paper we investigate the effect of variable (wavelength) selection for these two methods on the model prediction. For RR the selection is optimized with respect to the ridge parameter, the number of variables and the configuration of the variables in the model. A fast iterative computational algorithm is developed for the purpose of this optimization. For PLS the selection is optimized with respect to the number of components, the number of variables and the configuration of the variables. We use three real data sets in this study: processed milk from the market, milk from a dairy farm and milk from the production line of a milk processing factory. The quantity of interest is the concentration of fat in the milk. The observations are randomly split into estimation and validation sets. Optimization is based on the mean square prediction error computed on the validation set. The results indicate that the wavelength selection will not always give better prediction than using all of the available wavelengths. Investigation of the information in the spectra is necessary to determine whether all of them are relevant to the objective of the model. Copyright © 2003 John Wiley & Sons, Ltd. [source]
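
A minimal scikit-learn sketch of the evaluation step described above: given a candidate subset of wavelengths, fit RR and PLS on the estimation set and score the mean square prediction error on the validation set. The subset search and the optimization of the ridge parameter and number of components are not shown; this is not the authors' algorithm.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

def validation_mse(X_est, y_est, X_val, y_val, wavelengths, ridge_alpha=1.0, n_comp=5):
    """Mean square prediction error on a validation set for ridge regression and
    PLS restricted to a chosen subset of wavelength (column) indices."""
    Xe, Xv = X_est[:, wavelengths], X_val[:, wavelengths]
    rr = Ridge(alpha=ridge_alpha).fit(Xe, y_est)
    pls = PLSRegression(n_components=min(n_comp, Xe.shape[1])).fit(Xe, y_est)
    return (mean_squared_error(y_val, rr.predict(Xv)),
            mean_squared_error(y_val, pls.predict(Xv).ravel()))
```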


    Effect of various estimates of renal function on prediction of vancomycin concentration by the population mean and Bayesian methods

    JOURNAL OF CLINICAL PHARMACY & THERAPEUTICS, Issue 4 2009
    Y. Tsuji BSc
    Summary Objective: Renal function was estimated in 129 elderly patients with methicillin-resistant Staphylococcus aureus (MRSA) who were treated with vancomycin (VCM). The estimation was performed by substituting serum creatinine (SCR) measured enzymatically, and a value converted using the Jaffe method, into the Cockcroft-Gault and Modification of Diet in Renal Disease (MDRD) equations. The serum trough level was predicted from three estimates of renal function by the population mean (PM) and Bayesian methods, and the predictability was assessed. Methods: Two-compartment model-based Japanese population parameters for VCM were used, and the mean prediction error (ME) and root mean squared error (RMSE) were calculated as indices of bias and accuracy, respectively, for predictions by the PM and Bayesian methods. Results: The PM method gave the highest correlation with the measured value when using the estimate of renal function obtained by substituting the Jaffe-converted SCR into the Cockcroft-Gault equation. There was no positive or negative bias in the ME, and the value was significantly smaller than for the other predicted data (P < 0·05). The RMSE was also the smallest, indicating that this method increases the predictability of the serum VCM trough level. In contrast, the ME showed a negative bias for all values predicted by the Bayesian method, but both the ME and RMSE were very small. Conclusion: In the application of the PM method for VCM treatment of elderly patients with MRSA, substitution of SCR based on the Jaffe method into the Cockcroft-Gault equation increases the predictability of the serum VCM trough level. The Bayesian method predicted the serum VCM trough level with high accuracy using any of the estimates of renal function. [source]
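
The bias and accuracy indices referred to above have the conventional definitions (standard formulas, with pred_i and obs_i the predicted and measured trough concentrations over N patients):

```latex
% Mean prediction error (bias) and root mean squared error (accuracy).
\mathrm{ME} = \frac{1}{N}\sum_{i=1}^{N}\bigl(\mathrm{pred}_i - \mathrm{obs}_i\bigr),
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(\mathrm{pred}_i - \mathrm{obs}_i\bigr)^{2}}
```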


    Forecasting with k-factor Gegenbauer Processes: Theory and Applications

    JOURNAL OF FORECASTING, Issue 8 2001
    L. Ferrara
    Abstract This paper deals with the k-factor extension of the long memory Gegenbauer process proposed by Gray et al. (1989). We give the analytic expression of the prediction function derived from this long memory process and provide the h-step-ahead prediction error when parameters are either known or estimated. We investigate the predictive ability of the k-factor Gegenbauer model on real data of urban transport traffic in the Paris area, in comparison with other short- and long-memory models. Copyright © 2001 John Wiley & Sons, Ltd. [source]
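
For context, the k-factor Gegenbauer process referred to above is usually written with k Gegenbauer filters applied to the series (standard form following Gray et al., 1989, not an expression reproduced from this paper):

```latex
% k-factor Gegenbauer (long-memory) process: B is the backshift operator,
% the nu_i locate the spectral poles (Gegenbauer frequencies lambda_i = arccos nu_i),
% and the d_i are the corresponding memory parameters.
\prod_{i=1}^{k}\bigl(1 - 2\nu_i B + B^{2}\bigr)^{d_i} X_t = \varepsilon_t,
\qquad |\nu_i| \le 1
```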


    A global examination of allometric scaling for predicting human drug clearance and the prediction of large vertical allometry

    JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 8 2006
    Huadong Tang
    Abstract Allometrically scaled data sets (138 compounds) used for predicting human clearance were obtained from the literature. Our analyses of these data have led to four observations. (1) The current data do not provide strong evidence that systemic clearance (CLs; n = 102) is more predictable than apparent oral clearance (CLpo; n = 24), but caution needs to be applied because of potential CLpo prediction error caused by differences in bioavailability across species. (2) CLs of proteins (n = 10) can be more accurately predicted than that of non-protein chemicals (n = 102). (3) CLs is more predictable for compounds eliminated by renal or biliary excretion (n = 33) than by metabolism (n = 57). (4) CLs predictability for hepatically eliminated compounds followed the order: high CL (n = 11) > intermediate CL (n = 17) > low CL (n = 29). All examples of large vertical allometry (% error of prediction greater than 1000%) occurred only when predicting human CLs of drugs having very low CLs. A qualitative analysis revealed the application of two potential rules for predicting the occurrence of large vertical allometry: (1) ratio of unbound fraction of drug in plasma (fu) between rats and humans greater than 5; (2) C log P greater than 2. Metabolic elimination could also serve as an additional indicator for expecting large vertical allometry. © 2006 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 95: 1783-1799, 2006 [source]
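
The simple allometric power law evaluated in such studies, CL = a·W^b, is usually fitted by regressing log CL on log body weight across species and extrapolating to a 70 kg human. The sketch below shows that standard calculation with invented preclinical values; it is not the authors' data or analysis.

```python
import numpy as np

def allometric_predict(body_weights, clearances, target_weight=70.0):
    """Fit CL = a * W**b by linear regression of log CL on log W across species,
    then extrapolate to a 70 kg human.  Returns the predicted clearance and the
    allometric exponent b."""
    b, log_a = np.polyfit(np.log(body_weights), np.log(clearances), 1)
    return np.exp(log_a) * target_weight ** b, b

# Hypothetical preclinical data: weights in kg, clearances in mL/min (assumed values).
w = np.array([0.02, 0.25, 2.5, 10.0])      # mouse, rat, rabbit, dog
cl = np.array([0.9, 6.0, 35.0, 110.0])
print(allometric_predict(w, cl))
```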


    Review of Urban Stormwater Quality Models: Deterministic, Stochastic, and Hybrid Approaches

    JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 6 2007
    Christopher C. Obropta
    Abstract: The growing impact of urban stormwater on surface-water quality has illuminated the need for more accurate modeling of stormwater pollution. Water quality based regulation and the movement towards integrated urban water management place a similar demand for improved stormwater quality model predictions. The physical, chemical, and biological processes that affect stormwater quality need to be better understood and simulated, while acknowledging the costs and benefits that such complex modeling entails. This paper reviews three approaches to stormwater quality modeling: deterministic, stochastic, and hybrid. Six deterministic, three stochastic, and three hybrid models are reviewed in detail. Hybrid approaches show strong potential for reducing stormwater quality model prediction error and uncertainty. Improved stormwater quality models will have wide-ranging benefits for combined sewer overflow management, total maximum daily load development, best management practice design, land use change impact assessment, water quality trading, and integrated modeling. [source]


    Fixed rank kriging for very large spatial data sets

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2008
    Noel Cressie
    Summary. Spatial statistics for very large spatial data sets is challenging. The size of the data set, n, causes problems in computing optimal spatial predictors such as kriging, since the computational cost is of order n³. In addition, a large data set is often defined on a large spatial domain, so the spatial process of interest typically exhibits non-stationary behaviour over that domain. A flexible family of non-stationary covariance functions is defined by using a set of basis functions that is fixed in number, which leads to a spatial prediction method that we call fixed rank kriging. Specifically, fixed rank kriging is kriging within this class of non-stationary covariance functions. It relies on computational simplifications when n is very large for obtaining the spatial best linear unbiased predictor and its mean-squared prediction error for a hidden spatial process. A method based on minimizing a weighted Frobenius norm yields best estimators of the covariance function parameters, which are then substituted into the fixed rank kriging equations. The new methodology is applied to a very large data set of total column ozone data, observed over the entire globe, where n is of the order of hundreds of thousands. [source]
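
A compact way to see why fixed rank kriging scales to very large n, sketched here from the general structure described in the abstract (the notation is ours, not quoted from the paper): with r fixed basis functions collected in an n × r matrix S, the modelled data covariance has the low-rank-plus-diagonal form below, and the Sherman-Morrison-Woodbury identity reduces its inversion to r × r systems, so the kriging predictor costs roughly O(n r²) rather than O(n³).

```latex
% Assumed low-rank covariance structure (S: n x r basis matrix, K: r x r,
% V diagonal); notation is illustrative, not taken from the paper.
\Sigma \;=\; S K S^{\top} + \sigma^{2} V,
\qquad
\Sigma^{-1} \;=\; \sigma^{-2}V^{-1}
 - \sigma^{-2}V^{-1} S\left(K^{-1} + \sigma^{-2} S^{\top} V^{-1} S\right)^{-1} S^{\top} V^{-1}\sigma^{-2}
```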


    Prediction Variance and Information Worth of Observations in Time Series

    JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2000
    Mohsen Pourahmadi
    The problem of developing measures of worth of observations in time series has not received much attention in the literature. Any meaningful measure of worth should naturally depend on the position of the observation as well as the objectives of the analysis, namely parameter estimation or prediction of future values. We introduce a measure that quantifies the worth of a set of observations for the purpose of prediction of outcomes of stationary processes. The worth is measured as the change in the information content of the entire past due to exclusion or inclusion of a set of observations. The information content is quantified by the mutual information, which is the information-theoretic measure of dependency. For Gaussian processes, the measure of worth turns out to be the relative change in the prediction error variance due to exclusion or inclusion of a set of observations. We provide formulae for computing the predictive worth of a set of observations for Gaussian autoregressive moving-average processes. For non-Gaussian processes, however, a simple function of the entropy provides a lower bound for the variance of the prediction error, in the same manner that Fisher information provides a lower bound for the variance of an unbiased estimator via the Cramer-Rao inequality. Statistical estimation of this lower bound requires estimation of the entropy of a stationary time series. [source]


    Interspecies allometric scaling: prediction of clearance in large animal species: Part II: mathematical considerations

    JOURNAL OF VETERINARY PHARMACOLOGY & THERAPEUTICS, Issue 5 2006
    M. MARTINEZ
    Interspecies scaling is a useful tool for the prediction of pharmacokinetic parameters from animals to humans, and it is often used for estimating a first-time-in-human dose. However, it is important to appreciate the mathematical underpinnings of this scaling procedure when using it to predict pharmacokinetic parameter values across animal species. When cautiously applied, allometry can be a tool for estimating clearance in veterinary species for the purpose of dosage selection. It is particularly valuable during the selection of dosages in large zoo animal species, such as elephants, large cats and camels, for which pharmacokinetic data are scant. In Part I, allometric predictions of clearance in large animal species were found to pose substantially greater risks of inaccuracies when compared with those observed for humans. In this report, we examine the factors influencing the accuracy of our clearance estimates from the perspective of the relationship between prediction error and such variables as the distribution of body weight values used in the regression analysis, the influence of a particular observation on the clearance estimate, and the 'goodness of fit' (R2) of the regression line. Ultimately, these considerations are used to generate recommendations regarding the data to be included in the allometric prediction of clearance in large animal species. [source]


    Hepatic venous congestion in living donor liver transplantation: Preoperative quantitative prediction and follow-up using computed tomography

    LIVER TRANSPLANTATION, Issue 6 2004
    Shin Hwang
    Hepatic venous congestion (HVC) has not been assessed quantitatively prior to hepatectomy and its resolving mechanism has not been fully analyzed. We devised and verified a new method to predict HVC, in which HVC was estimated from delineation of middle hepatic vein (MHV) tributaries in computed tomography (CT) images. The predicted HVC was transferred to the right hepatic lobes of 20 living donors using a paper scale, and it was compared with the actual observed HVC that occurred after parenchymal transection and arterial clamping. The evolution of HVC from its emergence to resolution was followed up with CT. Volume proportions of the predicted and observed HVC were 31.7 ± 6.3% and 31.3 ± 9.4% of right lobe volume (RLV) (P = .74), respectively, which resulted in a prediction error of 3.8 ± 3.7% of RLV. We observed the changes in the HVC area of the right lobes both in donors without the MHV trunk and in recipients with MHV reconstruction. After 7 days, the HVC of 33.5 ± 7.7% of RLV was changed to a computed tomography attenuation abnormality (CTAA) of 28.4 ± 5.3% of RLV in 12 donor remnant right lobes, and the HVC of 29.1 ± 11.5% of RLV was reduced to a CTAA of 9.3 ± 3.2% of RLV in 7 recipient right lobe grafts with MHV reconstruction. There was no parenchymal regeneration of the HVC area in donor remnant livers during the first 7 days. In conclusion, we believe that this CT-based method for HVC prediction deserves to be applied as an integral part of preoperative donor evaluation. The changes in CTAA observed in the right lobes of donors and recipients indicate that MHV reconstruction can effectively decrease the HVC area. (Liver Transpl 2004;10:763-770.) [source]