Parameter Estimation

Kinds of Parameter Estimation

  • kinetic parameter estimation
  • model parameter estimation

Terms modified by Parameter Estimation

  • parameter estimation method
  • parameter estimation methods
  • parameter estimation problem
  • parameter estimation procedure

Selected Abstracts


    Significance of Modeling Error in Structural Parameter Estimation

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2001
    Masoud Sanayei
    Structural health monitoring systems rely on algorithms to detect potential changes in structural parameters that may be indicative of damage. Parameter-estimation algorithms seek to identify changes in structural parameters by adjusting parameters of an a priori finite-element model of a structure to reconcile its response with a set of measured test data. Modeling error, represented as uncertainty in the parameters of a finite-element model of the structure, curtails the capability of parameter estimation to capture the physical behavior of the structure. The performance of four error functions, two stiffness-based and two flexibility-based, is compared in the presence of modeling error in terms of the propagation rate of the modeling error and the quality of the final parameter estimates. Three different types of parameters are used in the parameter estimation procedure: (1) unknown parameters that are to be estimated, (2) known parameters assumed to be accurate, and (3) uncertain parameters that manifest the modeling error and are assumed known and not to be estimated. The significance of modeling error is investigated with respect to excitation and measurement type and locations, the type of error function, location of the uncertain parameter, and the selection of unknown parameters to be estimated. It is illustrated in two examples that the stiffness-based error functions perform significantly better than the corresponding flexibility-based error functions in the presence of modeling error. Additionally, the topology of the structure, excitation and measurement type and locations, and location of the uncertain parameters with respect to the unknown parameters can have a significant impact on the quality of the parameter estimates. Insight into the significance of modeling error and its potential impact on the resulting parameter estimates is presented through analytical and numerical examples using static and modal data. [source]
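
    A toy version of the stiffness-based approach may help fix ideas. The sketch below (Python, hypothetical 2-DOF spring chain, not the paper's test structures or error functions) adjusts spring stiffnesses so that the stiffness-based force residual vanishes for measured static displacements.

```python
# Minimal sketch: stiffness-based output-error estimation on a
# hypothetical 2-DOF spring chain (not the paper's test structures).
import numpy as np
from scipy.optimize import least_squares

def stiffness(k):
    # Global stiffness matrix of two springs in series, fixed at one end.
    k1, k2 = k
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

# "Measured" static displacements generated from the true stiffnesses.
k_true = np.array([100.0, 50.0])
f = np.array([0.0, 10.0])                       # applied static loads
u_meas = np.linalg.solve(stiffness(k_true), f)

def error(k):
    # Stiffness-based error function: residual force K(k) u_meas - f.
    return stiffness(k) @ u_meas - f

k_hat = least_squares(error, x0=[80.0, 80.0]).x
print(k_hat)            # recovers [100., 50.] in this noise-free setting
```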


    Stationary-Increment Variance-Gamma and t Models: Simulation and Parameter Estimation

    INTERNATIONAL STATISTICAL REVIEW, Issue 2 2008
    Richard Finlay
    Summary We detail a method of simulating data from long-range dependent processes with variance-gamma or t-distributed increments, test various estimation procedures [method of moments (MOM), product-density maximum likelihood (PMLE), non-standard minimum χ², and empirical characteristic function estimation] on the data, and assess the performance of each. The investigation is motivated by the apparent poor performance of the MOM technique on real data (Tjetjep & Seneta, 2006), and by the need to assess the performance of PMLE for our dependent data models. In the simulations considered, the product-density method performs favourably. [source]
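
    The method-of-moments step is easy to illustrate in the i.i.d. symmetric case. The sketch below (Python) simulates symmetric variance-gamma increments and recovers the scale and shape parameters from the sample variance and excess kurtosis; it ignores the long-range dependence and the skewed and t-increment cases treated in the paper.

```python
# Minimal sketch: simulate i.i.d. symmetric variance-gamma increments
# and recover (sigma, nu) by the method of moments. The paper's models
# add long-range dependence, which this sketch omits.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
sigma, nu, n = 1.5, 0.4, 200_000

# X = sigma * sqrt(G) * Z with G ~ Gamma(1/nu, scale=nu), so E[G] = 1.
G = rng.gamma(shape=1.0 / nu, scale=nu, size=n)
X = sigma * np.sqrt(G) * rng.standard_normal(n)

# For this parameterization: Var(X) = sigma^2, excess kurtosis = 3 * nu.
sigma_hat = X.std()
nu_hat = kurtosis(X) / 3.0        # scipy returns excess kurtosis
print(sigma_hat, nu_hat)          # close to 1.5 and 0.4
```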


    Kinetic Parameter Estimation of Time-temperature Integrators Intended for Use with Packaged Fresh Seafood

    JOURNAL OF FOOD SCIENCE, Issue 3 2004
    T. F. Mendoza
    ABSTRACT: The United States Food and Drug Administration (USFDA) considers any hermetically sealed package containing fresh seafood a reduced oxygen package (ROP) if the oxygen transmission rate of the package is less than 10,000 cm³/m²/day. USFDA's recent Import Alert No. 16-125 effectively bans the use of ROP for fresh seafood in the United States unless adequate temperature control and thermal history monitoring are used. Time-temperature integrators (TTIs) were proposed as one potential method to satisfy this thermal monitoring requirement. Evaluation and selection of appropriate TTIs remain a difficult process for seafood manufacturers. Three commercially available TTIs (Vitsab M2-10, C2-10, and Fresh-Check TJ2) and 5 prototype TTIs (Avery Dennison) were evaluated for performance against the Skinner and Larkin (1998) botulinum toxin lag-time relationship. Isothermal treatments at 0°C, 5°C, 10°C, and 15°C were used to determine Arrhenius kinetic parameters of the TTIs. Computer models were used to predict and compare actual TTI performance under dynamic thermal conditions. Results suggest that the Vitsab M2-10 and Avery Dennison T126(2) and T126(4) TTIs may be used to predict the safety of fresh seafood in ROP. [source]
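
    The Arrhenius step reduces to a straight-line fit. The sketch below (Python) recovers the activation energy and pre-exponential factor from isothermal rate constants at the four study temperatures; the rate values themselves are made up for illustration.

```python
# Minimal sketch: extract Arrhenius parameters (A, Ea) from isothermal
# rate constants via ln k = ln A - (Ea/R) * (1/T). The k values below
# are hypothetical, not the measured TTI responses.
import numpy as np

R = 8.314                                            # J/(mol K)
T = np.array([273.15, 278.15, 283.15, 288.15])       # 0, 5, 10, 15 deg C
k = np.array([0.011, 0.019, 0.032, 0.052])           # hypothetical rates

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R
A = np.exp(intercept)
print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g}")
```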


    Parameter Estimation of Stochastic Processes with Long-range Dependence and Intermittency

    JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2001
    Jiti Gao
    This paper considers the case where a stochastic process may display both long-range dependence and second-order intermittency. The existence of such a process is established in Anh, Angulo and Ruiz-Medina (1999). We systematically study the estimation of parameters involved in the spectral density function of a process with long-range dependence and second-order intermittency. An estimation procedure for the parameters is given. Numerical results are presented to support the estimation procedure proposed in this paper. [source]
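
    For readers who want a concrete starting point, the sketch below (Python) implements the classical log-periodogram (GPH) regression for the long-memory parameter d; it is a standard spectral-domain estimator and stands in for, rather than reproduces, the procedure proposed in the paper.

```python
# Minimal sketch: GPH log-periodogram estimation of the long-memory
# parameter d, using log I(lambda_j) ~ c - d * log(4 sin^2(lambda_j/2)).
import numpy as np

def gph_estimate(x, m=None):
    n = len(x)
    m = m or int(n ** 0.5)                    # number of low frequencies
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]   # periodogram ordinates
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    reg = -np.log(4 * np.sin(freqs / 2) ** 2)
    return np.polyfit(reg, np.log(I), 1)[0]   # slope estimates d

# Sanity check on white noise, where d should be near 0.
rng = np.random.default_rng(1)
print(gph_estimate(rng.standard_normal(4096)))
```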


    Simultaneous Data Reconciliation and Parameter Estimation in Bulk Polypropylene Polymerizations in Real Time

    MACROMOLECULAR SYMPOSIA, Issue 1 2006
    Diego Martinez Prata
    Abstract This work presents the implementation of a methodology for dynamic data reconciliation and simultaneous estimation of quality and productivity parameters in real time, using data from an industrial bulk Ziegler-Natta propylene polymerization process. A phenomenological model of the real process, based on mass and energy balances, was developed and implemented for interpretation of actual plant data. The resulting nonlinear dynamic optimization problem was solved using a sequential approach on a time window specifically tuned for the studied process. Despite the essentially isothermal operating conditions, the results show that including energy balance constraints increases information redundancy and consequently yields better parameter estimates than those obtained when the energy balance constraints are not considered (Prata et al., 2005). Examples indicate that the proposed technique can be used very effectively for monitoring of polymer quality and identification of process malfunctions in real time, even when laboratory analyses are scarce. [source]
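
    The reconciliation idea has a closed form in the linear steady-state case, which the sketch below (Python) shows for a hypothetical stream splitter; the paper's reconciliation is dynamic and nonlinear and is solved as an optimization problem instead.

```python
# Minimal sketch: steady-state linear data reconciliation. Adjust noisy
# measurements y to satisfy the mass balance A x = 0, minimizing
# (y - x)' Sigma^-1 (y - x). The splitter below is hypothetical.
import numpy as np

A = np.array([[1.0, -1.0, -1.0]])            # stream 1 = stream 2 + 3
y = np.array([10.3, 6.1, 3.9])               # raw flow measurements
Sigma = np.diag([0.2, 0.1, 0.1]) ** 2        # measurement variances

W = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
x_hat = y - W @ (A @ y)                      # closed-form solution
print(x_hat, A @ x_hat)                      # balance now holds exactly
```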


    Reactive Flow Model Parameter Estimation Using Genetic Algorithms

    PROPELLANTS, EXPLOSIVES, PYROTECHNICS, Issue 3 2010
    Jose Baranda Ribeiro
    Abstract An original real-coded genetic algorithm methodology, developed for estimating the parameters of the Tarver reactive flow model of shock initiation and detonation of heterogeneous solid explosives, is described in detail. This methodology makes it possible, in a single optimisation procedure and without the need for a starting solution, to search for the 15 parameters of the reaction rate law of the reactive flow model that fit the numerical results to the experimental ones. The methodology was applied and tested on an experimental situation, described in detail in the literature, involving the acceleration of a tantalum metal plate by an LX-17 explosive charge. The estimated parameters describe the experimental results very well and are close to those originally used by Tarver and co-authors in their simulation of the phenomenon. [source]
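
    The sketch below (Python) shows the skeleton of a real-coded genetic algorithm of this general kind: truncation selection, arithmetic crossover, Gaussian mutation, and elitism, applied to a stand-in objective. The paper's operators and its 15-parameter hydrocode-based objective are substantially more elaborate.

```python
# Minimal sketch: real-coded genetic algorithm with a sphere function
# standing in for the paper's reactive-flow fitting objective.
import numpy as np

rng = np.random.default_rng(2)

def fitness(p):                       # stand-in objective to minimize
    return np.sum((p - 3.0) ** 2)

def ga(bounds, pop_size=40, gens=200, mut_sigma=0.1):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(f)[:pop_size // 2]]   # truncation selection
        # Arithmetic crossover between random parent pairs.
        i = rng.integers(0, len(parents), (pop_size, 2))
        w = rng.random((pop_size, 1))
        children = w * parents[i[:, 0]] + (1 - w) * parents[i[:, 1]]
        # Gaussian mutation, clipped back into the search box.
        children += rng.normal(0, mut_sigma, children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                            # elitism
    return min(pop, key=fitness)

best = ga((np.zeros(3), 10 * np.ones(3)))
print(best)                                            # near [3, 3, 3]
```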


    Parameter Estimation in the Error-in-Variables Models Using the Gibbs Sampler

    THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 1 2006
    Jessada J. Jitjareonchai
    Abstract Least squares and maximum likelihood techniques have long been used in parameter estimation problems. However, those techniques provide only point estimates with unknown or approximate uncertainty information. Bayesian inference coupled with the Gibbs Sampler is an approach to parameter estimation that exploits modern computing technology. The estimation results are complete with exact uncertainty information. The Error-in-Variables model (EVM) approach is investigated in this study. In it, both dependent and independent variables contain measurement errors, and the true values and uncertainties of all measurements are estimated. This EVM set-up leads to unusually large dimensionality in the estimation problem, which makes parameter estimation very difficult with classical techniques. In this paper, an innovative way of performing parameter estimation is introduced to chemical engineers. The paper shows that the method is simple and efficient, and that complete and accurate uncertainty information about parameter estimates is readily available. Two real-world EVM examples are demonstrated: a large-scale linear model and an epidemiological model. The former is simple enough for most readers to understand the new concepts without difficulty. The latter has very interesting features in that a Poisson distribution is assumed, and a parameter with known distribution is retained while other unknown parameters are estimated. The Gibbs Sampler results are compared with those of least squares. [source]
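
    The Gibbs Sampler draws each unknown in turn from its full conditional distribution, so the whole posterior, not just a point estimate, becomes available. The sketch below (Python) does this for a simple normal model with unknown mean and variance under standard noninformative priors; the EVM adds the true values of the measured variables as further sampling blocks of the same kind.

```python
# Minimal sketch: Gibbs sampler for a normal model, flat prior on the
# mean and a Jeffreys-type prior on the precision tau = 1/variance.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(5.0, 2.0, size=100)           # synthetic data
n = len(y)

mu, tau = 0.0, 1.0                           # initial values
draws = []
for it in range(5000):
    # mu | tau, y ~ Normal(ybar, 1/(n*tau))
    mu = rng.normal(y.mean(), 1.0 / np.sqrt(n * tau))
    # tau | mu, y ~ Gamma(n/2, rate = sum((y-mu)^2)/2)
    tau = rng.gamma(shape=n / 2.0, scale=2.0 / np.sum((y - mu) ** 2))
    draws.append((mu, 1.0 / np.sqrt(tau)))

mu_s, sigma_s = np.array(draws[500:]).T      # drop burn-in
print(mu_s.mean(), sigma_s.mean())           # posterior means; the full
                                             # samples carry the uncertainty
```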


    Data assimilation with regularized nonlinear instabilities

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 648 2010
    Henry D. I. Abarbanel
    Abstract In variational formulations of data assimilation, the estimation of parameters or initial state values by a search for a minimum of a cost function can be hindered by the numerous local minima in the dependence of the cost function on those quantities. We argue that this is a result of instability on the synchronization manifold where the observations are required to match the model outputs in the situation where the data and the model are chaotic. The solution to this impediment to estimation is given as controls moving the positive conditional Lyapunov exponents on the synchronization manifold to negative values, and adding to the cost function a penalty that drives those controls to zero as a result of the optimization process implementing the assimilation. This is seen as the solution to the proper size of 'nudging' terms: they are zero once the estimation has been completed, leaving only the physics of the problem to govern forecasts after the assimilation window. We show how this procedure, called Dynamical State and Parameter Estimation (DSPE), works in the case of the Lorenz96 model with nine dynamical variables. Using DSPE, we are able to accurately estimate the fixed parameter of this model and all of the state variables, observed and unobserved, over an assimilation time interval [0, T]. Using the state variables at T and the estimated fixed parameter, we are able to accurately forecast the state of the model for t > T, up to the times where the chaotic behaviour of the system interferes with forecast accuracy. Copyright © 2010 Royal Meteorological Society [source]
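
    The synchronization-by-nudging mechanism that DSPE regularizes can be seen in a few lines. The sketch below (Python, forward Euler, illustrative gain and forcing values) couples a Lorenz96 model to a "data" trajectory through a control term u*(x_data - x_model); DSPE additionally estimates the parameters and penalizes u so the control vanishes by the end of assimilation.

```python
# Minimal sketch: full-state nudging of a 9-variable Lorenz96 model
# onto a reference ("data") trajectory. Forward Euler and the gain
# value are illustrative choices, not the paper's.
import numpy as np

def lorenz96(x, F):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step(x, F, dt, nudge=0.0, data=None):
    dxdt = lorenz96(x, F)
    if data is not None:
        dxdt = dxdt + nudge * (data - x)   # control toward observations
    return x + dt * dxdt

F, dt, D = 8.0, 0.01, 9                    # nine variables as in the paper
rng = np.random.default_rng(4)
truth = rng.standard_normal(D)
model = rng.standard_normal(D)
for _ in range(5000):
    truth = step(truth, F, dt)
    model = step(model, F, dt, nudge=10.0, data=truth)
print(np.abs(model - truth).max())         # synchronized: error near 0
```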


    Parameter Estimation and Actuator Characteristics of Hybrid Magnetic Bearings for Axial Flow Blood Pump Applications

    ARTIFICIAL ORGANS, Issue 7 2009
    Tau Meng Lim
    Abstract Axial flow blood pumps are generally smaller than centrifugal pumps. This is very beneficial because they can provide a better anatomical fit in the chest cavity, as well as lower the risk of infection. This article discusses the design, levitated responses, and parameter estimation of the dynamic characteristics of a compact hybrid magnetic bearing (HMB) system for axial flow blood pump applications. The rotor/impeller of the pump is driven by a three-phase permanent magnet brushless and sensorless motor. It is levitated by two HMBs at both ends in five degrees of freedom with proportional-integral-derivative controllers, among which four radial directions are actively controlled and one axial direction is passively controlled. The frequency domain parameter estimation technique with statistical analysis is adopted to validate the stiffness and damping coefficients of the HMB system. A specially designed test rig facilitated the estimation of the bearing's coefficients in air, in both the radial and axial directions. Experimental estimation showed that the dynamic characteristics of the HMB system are dominated by the frequency-dependent stiffness coefficients. By injecting a multifrequency excitation force signal onto the rotor through the HMBs, the experimental results show that the maximum linear operating range of displacement is 20% of the static eccentricity with respect to the rotor and stator gap clearance. The actuator gain was also successfully calibrated, and the parameter estimation technique developed in this study may potentially be extended to identification and monitoring of the pump's dynamic properties under normal operating conditions with fluid. [source]


    Parameter Estimation and Goodness-of-Fit in Log Binomial Regression

    BIOMETRICAL JOURNAL, Issue 1 2006
    L. Blizzard
    Abstract An estimate of the risk, adjusted for confounders, can be obtained from a fitted logistic regression model, but it substantially over-estimates the risk when the outcome is not rare. The log binomial model (binomial errors and log link) is increasingly being used for this purpose. However, this model's performance, goodness-of-fit tests, and case-wise diagnostics have not been studied. Extensive simulations are used to compare the performance of the log binomial model, a logistic-regression-based method proposed by Schouten et al. (1993), and a Poisson regression approach proposed by Zou (2004) and Carter, Lipsitz, and Tilley (2005). Log binomial regression resulted in "failure" rates (non-convergence, out-of-bounds predicted probabilities) as high as 59%. Estimates by the method of Schouten et al. (1993) produced fitted log binomial probabilities greater than unity in up to 19% of samples to which a log binomial model had been successfully fit, and in up to 78% of samples when the log binomial model fit failed. Similar percentages were observed for the Poisson regression approach. Coefficient and standard error estimates from the three models were similar. Rejection rates for goodness-of-fit tests for log binomial fit were around 5%. Power of the goodness-of-fit tests was modest when an incorrect logistic regression model was fit. Examples demonstrate the use of the methods. Uncritical use of the log binomial regression model is not recommended. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
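
    In practice the log binomial model is fit as a GLM with binomial family and log link so that exponentiated coefficients are adjusted risk ratios, and, exactly as the abstract warns, the fit may fail to converge or produce out-of-bounds probabilities. The sketch below uses Python's statsmodels (API spelling as of recent versions; treat the exact link class as an assumption) and checks the fitted probabilities.

```python
# Minimal sketch: log binomial regression as a GLM (binomial errors,
# log link); exp(coefficient) is an adjusted risk ratio. Always check
# convergence and that fitted probabilities stay within [0, 1].
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000
x = rng.random(n)                                  # one covariate
p = np.minimum(np.exp(-1.5 + 0.8 * x), 1.0)        # true risk, log scale
y = rng.binomial(1, p)

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Binomial(
    link=sm.families.links.Log())).fit()
print(np.exp(fit.params[1]))                       # estimated risk ratio
print(fit.predict(X).max() <= 1.0)                 # bounds check
```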


    Parameter Estimation for Partially Complete Time and Type of Failure Data

    BIOMETRICAL JOURNAL, Issue 2 2004
    Debasis Kundu
    Abstract The theory of competing risks has been developed to assess a specific risk in the presence of other risk factors. In this paper we consider the parametric estimation of different failure modes under partially complete time and type of failure data, using latent failure time and cause-specific hazard function models. Uniformly minimum variance unbiased estimators and maximum likelihood estimators are obtained when the latent failure times and cause-specific hazard functions are exponentially distributed. We also consider the case when they follow Weibull distributions. One data set is used to illustrate the proposed techniques. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
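
    In the fully observed exponential case the maximum likelihood estimators have a simple closed form, which the sketch below (Python, simulated data) verifies: each cause-specific rate is the number of failures from that cause divided by the total time at risk. The paper's contribution is handling data where the time or the cause is only partially observed.

```python
# Minimal sketch: exponential competing risks with complete time and
# cause data. MLE of each cause-specific rate is
# (failures from that cause) / (total time at risk).
import numpy as np

rng = np.random.default_rng(6)
lam = np.array([0.02, 0.05])                 # true cause-specific rates

# Latent failure times: observe the minimum and which cause "won".
t = rng.exponential(1 / lam, size=(1000, 2))
time = t.min(axis=1)
cause = t.argmin(axis=1)

total_time = time.sum()
lam_hat = np.array([(cause == j).sum() / total_time for j in (0, 1)])
print(lam_hat)                               # close to [0.02, 0.05]
```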


    Parameter Estimation in a Gompertzian Stochastic Model for Tumor Growth

    BIOMETRICS, Issue 4 2000
    L. Ferrante
    Summary. The problem of estimating parameters in the drift coefficient when a diffusion process is observed continuously requires some specific assumptions. In this paper, we consider a stochastic version of the Gompertzian model that describes in vivo tumor growth and its sensitivity to treatment with antiangiogenic drugs. An explicit likelihood function is obtained, and we discuss some properties of the maximum likelihood estimator for the intrinsic growth rate of the stochastic Gompertzian model. Furthermore, we show some simulation results on the behavior of the corresponding discrete estimator. Finally, an application is given to illustrate the estimate of the model parameters using real data. [source]
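
    One common form of the stochastic Gompertz model (taken here as an assumption about the exact parameterization) is dX = (a - b ln X) X dt + s X dW. The sketch below (Python) simulates it by Euler-Maruyama; the substitution Y = ln X turns the model into an Ornstein-Uhlenbeck process, which is what makes an explicit likelihood for the growth parameter tractable.

```python
# Minimal sketch: Euler-Maruyama simulation of a stochastic Gompertz
# model dX = (a - b*ln X) X dt + s X dW (one common parameterization).
import numpy as np

rng = np.random.default_rng(10)
a, b, s = 1.0, 0.5, 0.1             # growth, decay, noise parameters
dt, n = 0.001, 20_000
x = np.empty(n)
x[0] = 0.1                          # initial tumor size

for i in range(n - 1):
    drift = (a - b * np.log(x[i])) * x[i]
    x[i + 1] = x[i] + drift * dt + s * x[i] * np.sqrt(dt) * rng.standard_normal()

# ln X is Ornstein-Uhlenbeck with stationary mean (a - s^2/2) / b.
print(x[-1], np.exp((a - s ** 2 / 2) / b))
```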


    Cell Population Modeling and Parameter Estimation for Continuous Cultures of Saccharomyces cerevisiae

    BIOTECHNOLOGY PROGRESS, Issue 5 2002
    Prashant Mhaskar
    Saccharomyces cerevisiae is known to exhibit sustained oscillations in chemostats operated under aerobic and glucose-limited growth conditions. The oscillations are reflected both in intracellular and extracellular measurements. Our recent work has shown that unstructured cell population balance models are capable of generating sustained oscillations over an experimentally meaningful range of dilution rates. A disadvantage of such unstructured models is that they lack variables that can be compared directly to easily measured extracellular variables. Thus far, most of our work in model development has been aimed at achieving qualitative agreement with experimental data. In this paper, a segregated model with a simple structured description of the extracellular environment is developed and evaluated. The model accounts for the three most important metabolic pathways involved in cell growth with glucose substrate. As compared to completely unstructured models, the major advantage of the proposed model is that predictions of extracellular variables can be compared directly to experimental data. Consequently, the model structure is well suited for the application of estimation techniques aimed at determining unknown model parameters from available extracellular measurements. A steady-state parameter selection method developed in our group is extended to oscillatory dynamics to determine the parameters that can be estimated most reliably. The chosen parameters are estimated by solving a nonlinear programming problem formulated to minimize the difference between predictions and measurements of the extracellular variables. The efficiency of the parameter estimation scheme is demonstrated using simulated and experimental data. [source]
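
    The final estimation step, minimizing the mismatch between simulated and measured extracellular variables, looks like the sketch below (Python). A one-substrate Monod growth model stands in for the paper's segregated cell population model, which is far more detailed.

```python
# Minimal sketch: fit ODE parameters by least squares on simulated
# "extracellular" measurements. Monod kinetics stand in for the
# paper's segregated yeast model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def monod(t, y, mu_max, Ks, Y):
    X, S = y                                  # biomass, substrate
    mu = mu_max * S / (Ks + S)
    return [mu * X, -mu * X / Y]

t_obs = np.linspace(0, 10, 25)
true = (0.4, 0.5, 0.5)
sol = solve_ivp(monod, (0, 10), [0.1, 5.0], t_eval=t_obs, args=true)
data = sol.y + np.random.default_rng(7).normal(0, 0.02, sol.y.shape)

def residuals(p):
    sim = solve_ivp(monod, (0, 10), [0.1, 5.0], t_eval=t_obs, args=tuple(p))
    return (sim.y - data).ravel()

p_hat = least_squares(residuals, x0=[0.2, 1.0, 0.3],
                      bounds=(1e-6, 5.0)).x
print(p_hat)                                  # near (0.4, 0.5, 0.5)
```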


    The Value of Subsidence Data in Ground Water Model Calibration

    GROUND WATER, Issue 4 2008
    Tingting Yan
    The accurate estimation of aquifer parameters such as transmissivity and specific storage is often an important objective during a ground water modeling investigation or aquifer resource evaluation. Parameter estimation is often accomplished with changes in hydraulic head data as the key and most abundant type of observation. The availability and accessibility of global positioning system and interferometric synthetic aperture radar data in heavily pumped alluvial basins can provide important subsidence observations that can greatly aid parameter estimation. The aim of this investigation is to evaluate the value of spatial and temporal subsidence data for automatically estimating parameters with and without observation error using UCODE-2005 and MODFLOW-2000. A synthetic conceptual model (24 separate cases) containing seven transmissivity zones and three zones each for elastic and inelastic skeletal specific storage was used to simulate subsidence and drawdown in an aquifer with variably thick interbeds with delayed drainage. Five pumping wells of variable rates were used to stress the system for up to 15 years. Calibration results indicate that (1) the inverse of the square of the observation values is a reasonable way to weight the observations, (2) spatially abundant subsidence data typically produce superior parameter estimates under constant pumping even with observation error, (3) only a small number of subsidence observations are required to achieve accurate parameter estimates, and (4) for seasonal pumping, accurate parameter estimates for elastic skeletal specific storage values are largely dependent on the quantity of temporal observational data and less on the quantity of available spatial data. [source]
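
    Finding (1) is worth making concrete: weighting each observation by the inverse square of its value turns the objective into a sum of squared relative errors, so small subsidence values and large drawdowns can inform the calibration on an equal footing. A minimal sketch in Python:

```python
# Minimal sketch: inverse-square-of-value weighting, i.e. minimize
# sum_i ((obs_i - sim_i) / obs_i)^2, a relative-error objective.
import numpy as np

def weighted_sse(obs, sim):
    w = 1.0 / obs ** 2
    return np.sum(w * (obs - sim) ** 2)

obs = np.array([0.02, 0.15, 1.3])        # e.g. subsidence values in meters
print(weighted_sse(obs, obs * 1.05))     # a 5% misfit contributes equally
                                         # at every observation magnitude
```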


    Maquiladora Employment Dynamics in Nuevo Laredo

    GROWTH AND CHANGE, Issue 1 2007
    JESÚS CAÑAS
    ABSTRACT The Nuevo Laredo maquiladora sector has grown enormously during the last two decades. The short-term time series characteristics of this portion of the regional economy are analyzed in an attempt to quantify the trends underlying this remarkable performance. Parameter estimation is accomplished via linear transfer function (LTF) analysis. Data are drawn from the January 1990–December 2000 sample period. Empirical results indicate that real wage rates, maquiladora plants, U.S. industrial activity, and the real exchange rate of the peso play significant roles in determining month-to-month fluctuations in maquiladora employment. Furthermore, sub-sample forecast simulation exercises are conducted as an additional means for verifying model reliability. Empirical results indicate that the forecasts generated with the LTF model are less accurate than those associated with a simple random walk procedure for twelve separate step-length periods. [source]


    Parameter estimation in semi-distributed hydrological catchment modelling using a multi-criteria objective function

    HYDROLOGICAL PROCESSES, Issue 22 2007
    Hamed Rouhani
    Abstract Output generated by hydrologic simulation models is traditionally calibrated and validated using split samples of observed time series of total water flow, measured at the drainage outlet of the river basin. Although this approach might yield an optimal set of model parameters capable of reproducing the total flow, it has been observed that the flow components making up the total flow are often poorly reproduced. Previous research suggests that, although the underlying physical processes are often poorly mimicked through calibration of a set of parameters, hydrologic models nevertheless estimate the total flow acceptably most of the time. The objective of this study was to calibrate and validate a computer-based hydrologic model with respect to the total and slow flow. The quick flow component used in this study was taken as the difference between the total and slow flow. Model calibrations were pursued on the basis of comparing the simulated output with the observed total and slow flow using qualitative (graphical) assessments and quantitative (statistical) indicators. The study was conducted using the Soil and Water Assessment Tool (SWAT) model and a 10-year historical record (1986–1995) of the daily flow components of the Grote Nete River basin (Belgium). The data of the period 1986–1989 were used for model calibration and data of the period 1990–1995 for model validation. The predicted daily average total flow matched the observed values with a Nash–Sutcliffe coefficient of 0.67 during calibration and 0.66 during validation. The Nash–Sutcliffe coefficient for slow flow was 0.72 during calibration and 0.61 during validation. Analysis of high and low flows indicated that the model is unbiased. A sensitivity analysis revealed that for the modelling of the daily total flow, accurate estimation of all 10 calibration parameters in the SWAT model is justified, while for the slow flow processes only 4 out of the set of 10 parameters were identified as most sensitive. Copyright © 2007 John Wiley & Sons, Ltd. [source]
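
    The Nash–Sutcliffe coefficient used above is one minus the ratio of the squared model errors to the variance of the observations about their mean: 1 is a perfect fit, 0 means the model is no better than predicting the mean flow. A minimal sketch in Python with made-up flows:

```python
# Minimal sketch: Nash-Sutcliffe efficiency,
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])   # hypothetical daily flows
sim = np.array([1.0, 3.0, 3.1, 4.8, 4.4])
print(nash_sutcliffe(obs, sim))
```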


    Hierarchical Bayesian modelling of wind and sea surface temperature from the Portuguese coast

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 9 2010
    Ricardo T. Lemos
    Abstract In this work, we revisit a recent analysis that pointed to an overall relaxation of the Portuguese coastal upwelling system, between 1941 and 2000, and apply more elaborate statistical techniques to assess that evidence. Our goal is to fit a model for environmental variables that accommodate seasonal cycles, long-term trends, short-term fluctuations with some degree of autocorrelation, and cross-correlations between measuring sites and variables. Reference cell coding is used to investigate similarities in behaviour among sites. Parameter estimation is performed in a single modelling step, thereby producing more reliable credibility intervals than previous studies. This is of special importance in the assessment of trend significance. We employ a Bayesian approach with a purposely developed Markov chain Monte Carlo method to explore the posterior distribution of the parameters. Our results substantiate most previous findings and provide new insight on the relationship between wind and sea surface temperature off the Portuguese coast. Copyright © 2009 Royal Meteorological Society [source]


    Parameter estimation in Bayesian reconstruction of SPECT images: An aid in nuclear medicine diagnosis

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2004
    Antonio López
    Abstract Despite the adequacy of Bayesian methods to reconstruct nuclear medicine SPECT (single-photon emission computed tomography) images, they are rarely used in everyday medical practice. This is primarily because of their computational cost and the need to appropriately select the prior model hyperparameters. We propose a simple procedure for the estimation of these hyperparameters and the reconstruction of the original image and test the procedure on both synthetic and real SPECT images. The experimental results demonstrate that the proposed hyperparameter estimation method produces satisfactory reconstructions. Although we have used generalized Gaussian Markov random fields (GGMRF) as prior models, the proposed estimation method can be applied to any priors with convex potential and tractable partition function with respect to the scale hyperparameter. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 14, 21–27, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20003 [source]


    Parameter estimation in selected populations with missing data

    JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 2 2009
    G. Yagüe-Utrilla
    Summary This study proposes a procedure to estimate genetic parameters in populations where a selection process results in the loss of an unknown number of observations. The method was developed under the Bayesian inference scope following the missing data theory approach. Its implementation requires slight modifications to the Gibbs sampler algorithm. In order to show the efficiency of this option, a simulation study was conducted. [source]


    Volatility forecasting with double Markov switching GARCH models

    JOURNAL OF FORECASTING, Issue 8 2009
    Cathy W. S. Chen
    Abstract This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat-tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first-order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value-at-risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd. [source]
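
    The regime-switching volatility structure the sampler must recover can be simulated in a few lines. The sketch below (Python) generates a two-regime Markov switching GARCH path; it omits the mean dynamics, asymmetry terms, and fat-tailed errors of the full model.

```python
# Minimal sketch: simulate a two-regime Markov switching GARCH(1,1)
# path with regime-specific constants (a simplification of the model
# the paper's Bayesian MCMC scheme estimates).
import numpy as np

rng = np.random.default_rng(14)
P = np.array([[0.98, 0.02],        # regime transition probabilities
              [0.05, 0.95]])
omega = np.array([0.05, 0.30])     # regime-specific GARCH constants
alpha, beta = 0.08, 0.90

T = 2000
s, h = 0, 1.0                      # current regime and variance
r = np.empty(T)
states = np.empty(T, dtype=int)
for t in range(T):
    s = rng.choice(2, p=P[s])                       # Markov regime switch
    h = omega[s] + alpha * (r[t - 1] ** 2 if t else 0.0) + beta * h
    r[t] = np.sqrt(h) * rng.standard_normal()
    states[t] = s
print(r.std(), states.mean())      # overall volatility, time in regime 1
```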


    Long-memory dynamic Tobit models

    JOURNAL OF FORECASTING, Issue 5 2006
    A. E. Brockwell
    Abstract We introduce a long-memory dynamic Tobit model, defining it as a censored version of a fractionally integrated Gaussian ARMA model, which may include seasonal components and/or additional regression variables. Parameter estimation for such a model using standard techniques is typically infeasible, since the model is not Markovian, cannot be expressed in a finite-dimensional state-space form, and includes censored observations. Furthermore, the long-memory property renders a standard Gibbs sampling scheme impractical. Therefore we introduce a new Markov chain Monte Carlo sampling scheme, which is orders of magnitude more efficient than the standard Gibbs sampler. The method is inherently capable of handling missing observations. In case studies, the model is fit to two time series: one consisting of volumes of requests to a hard disk over time, and the other consisting of hourly rainfall measurements in Edinburgh over a 2-year period. The resulting posterior distributions for the fractional differencing parameter demonstrate, for these two time series, the importance of the long-memory structure in the models. Copyright © 2006 John Wiley & Sons, Ltd. [source]
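
    The censoring structure at the heart of a dynamic Tobit model is easy to display. The sketch below (Python) uses a short-memory Gaussian AR(1) as a stand-in for the fractionally integrated ARMA latent process and observes it only through max(latent, 0).

```python
# Minimal sketch: the Tobit censoring mechanism on a latent Gaussian
# AR(1) (a stand-in for the paper's long-memory latent process).
import numpy as np

rng = np.random.default_rng(13)
T, phi = 500, 0.9
latent = np.zeros(T)
for t in range(1, T):
    latent[t] = phi * latent[t - 1] + rng.standard_normal()

observed = np.maximum(latent, 0.0)        # censoring at zero
print((observed == 0).mean())             # fraction of censored points
```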


    Parameter estimation for differential equations: a generalized smoothing approach

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2007
    J. O. Ramsay
    Summary. We propose a new method for estimating parameters in models that are defined by a system of non-linear differential equations. Such equations represent changes in system outputs by linking the behaviour of derivatives of a process to the behaviour of the process itself. Current methods for estimating parameters in differential equations from noisy data are computationally intensive and often poorly suited to the realization of statistical objectives such as inference and interval estimation. The paper describes a new method that uses noisy measurements on a subset of variables to estimate the parameters defining a system of non-linear differential equations. The approach is based on a modification of data smoothing methods along with a generalization of profiled estimation. We derive estimates and confidence intervals, and show that these have low bias and good coverage properties respectively for data that are simulated from models in chemical engineering and neurobiology. The performance of the method is demonstrated by using real world data from chemistry and from the progress of the autoimmune disease lupus. [source]
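
    A simplified relative of this approach is two-stage gradient matching: smooth the data, then choose the ODE parameters so the smoother's derivative tracks the right-hand side. The sketch below (Python, scalar decay model dx/dt = -theta*x as a stand-in) conveys the idea; the paper's profiled generalized smoothing estimates the smooth and the parameters jointly with an ODE-fidelity penalty.

```python
# Minimal sketch: gradient matching. Fit a smoothing spline to noisy
# data, then pick theta minimizing || x'(t) - f(x(t), theta) ||^2.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
t = np.linspace(0, 4, 60)
theta_true = 1.3
x = 5 * np.exp(-theta_true * t) + rng.normal(0, 0.05, t.size)

spline = UnivariateSpline(t, x, s=60 * 0.05 ** 2)   # smooth the data
xs, dxs = spline(t), spline.derivative()(t)

def ode_misfit(theta):
    return np.sum((dxs - (-theta * xs)) ** 2)

theta_hat = minimize_scalar(ode_misfit, bounds=(0.1, 5.0),
                            method="bounded").x
print(theta_hat)                        # near 1.3
```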


    AN EVALUATION OF NON-ITERATIVE METHODS FOR ESTIMATING THE LINEAR-BY-LINEAR PARAMETER OF ORDINAL LOG-LINEAR MODELS

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2009
    Eric J. Beh
    Summary Parameter estimation for association and log-linear models is an important aspect of the analysis of cross-classified categorical data. Classically, iterative procedures, including Newton's method and iterative scaling, have typically been used to calculate the maximum likelihood estimates of these parameters. An important special case occurs when the categorical variables are ordinal, and this has received a considerable amount of attention for more than 20 years. This is because models for such cases involve the estimation of a parameter that quantifies the linear-by-linear association and is directly linked with the natural logarithm of the common odds ratio. The past five years have seen the development of non-iterative procedures for estimating the linear-by-linear parameter for ordinal log-linear models. Such procedures have been shown to lead to numerically equivalent estimates when compared with iterative, maximum likelihood estimates. Such procedures also enable the researcher to avoid some of the computational difficulties that commonly arise with iterative algorithms. This paper investigates and evaluates the performance of three non-iterative procedures for estimating this parameter by considering 14 contingency tables that have appeared in the statistical and allied literature. The estimation of the standard error of the association parameter is also considered. [source]


    Discriminant Analysis for Longitudinal Data with Multiple Continuous Responses and Possibly Missing Data

    BIOMETRICS, Issue 1 2009
    Guillermo Marshall
    Summary Multiple outcomes are often used to properly characterize an effect of interest. This article discusses model-based statistical methods for the classification of units into one of two or more groups where, for each unit, repeated measurements over time are obtained on each outcome. We relate the observed outcomes using multivariate nonlinear mixed-effects models to describe evolutions in different groups. Due to its flexibility, the random-effects approach for the joint modeling of multiple outcomes can be used to estimate population parameters for a discriminant model that classifies units into distinct predefined groups or populations. Parameter estimation is done via the expectation-maximization algorithm with a linear approximation step. We conduct a simulation study that sheds light on the effect that the linear approximation has on classification results. We present an example using data from a study in 161 pregnant women in Santiago, Chile, where the main interest is to predict normal versus abnormal pregnancy outcomes. [source]


    Bayesian Robust Inference for Differential Gene Expression in Microarrays with Multiple Samples

    BIOMETRICS, Issue 1 2006
    Raphael Gottardo
    Summary We consider the problem of identifying differentially expressed genes under different conditions using gene expression microarrays. Because of the many steps involved in the experimental process, from hybridization to image analysis, cDNA microarray data often contain outliers. For example, an outlying data value could occur because of scratches or dust on the surface, imperfections in the glass, or imperfections in the array production. We develop a robust Bayesian hierarchical model for testing for differential expression. Errors are modeled explicitly using a t-distribution, which accounts for outliers. The model includes an exchangeable prior for the variances, which allows different variances for the genes but still shrinks extreme empirical variances. Our model can be used for testing for differentially expressed genes among multiple samples, and it can distinguish between the different possible patterns of differential expression when there are three or more samples. Parameter estimation is carried out using a novel version of Markov chain Monte Carlo that is appropriate when the model puts mass on subspaces of the full parameter space. The method is illustrated using two publicly available gene expression data sets. We compare our method to six other baseline and commonly used techniques, namely the t-test, the Bonferroni-adjusted t-test, significance analysis of microarrays (SAM), Efron's empirical Bayes, and EBarrays in both its lognormal-normal and gamma-gamma forms. In an experiment with HIV data, our method performed better than these alternatives, on the basis of between-replicate agreement and disagreement. [source]


    Practical identifiability of biokinetic parameters of a model describing two-step nitrification in biofilms

    BIOTECHNOLOGY & BIOENGINEERING, Issue 3 2008
    D. Brockmann
    Abstract Parameter estimation and model calibration are key problems in the application of biofilm models in engineering practice, where a large number of model parameters usually need to be determined based on experimental data with only limited information content. In this article, identifiability of biokinetic parameters of a biofilm model describing two-step nitrification was evaluated based solely on bulk phase measurements of ammonium, nitrite, and nitrate. In addition to evaluating the impact of experimental conditions and available measurements, the influence of mass transport limitation within the biofilm and of the initial parameter values on identifiability of the biokinetic parameters was evaluated. Selection of parameters for identifiability analysis was based on global mean sensitivities, while parameter identifiability was analyzed using local sensitivity functions. At most, four of the six most sensitive biokinetic parameters were identifiable from results of batch experiments at bulk phase dissolved oxygen concentrations of 0.8 or 5 mg O2/L. High linear dependences between the parameters of the subsets resulted in reduced identifiability. Mass transport limitation within the biofilm did not influence the number of identifiable parameters but, in fact, decreased collinearity between parameters, especially for parameters that are otherwise correlated (e.g., µAOB and , or µNOB and ). The choice of the initial parameter values had a significant impact on the identifiability of two parameter subsets, both including the parameters µAOB and . Parameter subsets that did not include the subsets µAOB and or µNOB and were clearly identifiable independently of the choice of the initial parameter values. Biotechnol. Bioeng. 2008;101: 497–514. © 2008 Wiley Periodicals, Inc. [source]
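
    Collinearity between parameter subsets of this kind is often quantified with a collinearity index computed from a column-normalized sensitivity matrix (following Brun et al., 2001; taking this as the measure is an assumption, since the abstract does not name it). The sketch below (Python, hypothetical sensitivities) flags a nearly dependent parameter pair.

```python
# Minimal sketch: collinearity index gamma_K = 1 / sqrt(min eigenvalue
# of S_K' S_K), with S_K the column-normalized sensitivity submatrix
# for parameter subset K; large gamma_K means poor identifiability.
import numpy as np
from itertools import combinations

def collinearity_index(S, subset):
    Sk = S[:, subset]
    Sk = Sk / np.linalg.norm(Sk, axis=0)          # normalize columns
    return 1.0 / np.sqrt(np.linalg.eigvalsh(Sk.T @ Sk).min())

# Hypothetical sensitivity matrix: rows = outputs over time, columns =
# parameters; column 2 is nearly a multiple of column 0 (correlated pair).
rng = np.random.default_rng(9)
S = rng.standard_normal((50, 3))
S[:, 2] = S[:, 0] + 0.05 * rng.standard_normal(50)

for pair in combinations(range(3), 2):
    print(pair, collinearity_index(S, list(pair)))
# the (0, 2) pair shows a large index, i.e. reduced identifiability
```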


    Parameters estimation of the d.c. electrothermal model of the bipolar transistor

    INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 2 2002
    Janusz Zar
    Abstract This paper concerns the problem of estimating the parameter values of the d.c. electrothermal model of the BJT formulated for circuit analysis with SPICE. In this case, the PARTS software available in SPICE cannot be used. In the paper, a new estimation algorithm for the d.c. electrothermal model of the BJT is proposed. The form of the electrothermal BJT model is also presented. The estimation algorithm, implemented in a computer-controlled measurement set-up, allows one to derive the values of the parameters automatically from measurements of the selected isothermal characteristics and the subsequent calculations. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Performance-driven muscle-based facial animation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2001
    Byoungwon Choe
    Abstract We describe a system to synthesize facial expressions by editing captured performances. For this purpose, we use the actuation of expression muscles to control facial expressions. We note that there have been numerous algorithms already developed for editing gross body motion. While the joint angle has direct effect on the configuration of the gross body, the muscle actuation has to go through a complicated mechanism to produce facial expressions. Therefore, we devote a significant part of this paper to establishing the relationship between muscle actuation and facial surface deformation. We model the skin surface using the finite element method to simulate the deformation caused by expression muscles. Then, we implement the inverse relationship, muscle actuation parameter estimation, to find the muscle actuation values from the trajectories of the markers on the performer's face. Once the forward and inverse relationships are established, retargeting or editing a performance becomes an easy job. We apply the original performance data to different facial models with equivalent muscle structures, to produce similar expressions. We also produce novel expressions by deforming the original data curves of muscle actuation to satisfy the key-frame constraints imposed by animators. Copyright © 2001 John Wiley & Sons, Ltd. [source]
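
    Under a linearizing assumption (made here purely for illustration; the paper's muscle-to-surface mapping is a nonlinear finite-element simulation), the inverse problem of estimating actuations from marker trajectories reduces to a least-squares solve, as the sketch below (Python) shows.

```python
# Minimal sketch: if marker displacements d respond approximately
# linearly to muscle actuations a near a working point (d ~ B a),
# the actuation estimate is a least-squares solve. B is hypothetical.
import numpy as np

rng = np.random.default_rng(11)
n_markers, n_muscles = 30, 8
B = rng.standard_normal((3 * n_markers, n_muscles))   # hypothetical basis

a_true = rng.random(n_muscles)                        # unknown actuations
d = B @ a_true + rng.normal(0, 0.01, 3 * n_markers)   # noisy marker data

a_hat, *_ = np.linalg.lstsq(B, d, rcond=None)
print(np.abs(a_hat - a_true).max())                   # small recovery error
```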


    Testing Conditional Asset Pricing Models Using a Markov Chain Monte Carlo Approach

    EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2008
    Manuel Ammann
    Abstract We use Markov Chain Monte Carlo (MCMC) methods for the parameter estimation and the testing of conditional asset pricing models. In contrast to traditional approaches, it is truly conditional because the assumption that time variation in betas is driven by a set of conditioning variables is not necessary. Moreover, the approach has exact finite sample properties and accounts for errors-in-variables. Using S&P 500 panel data, we analyse the empirical performance of the CAPM and the Fama and French (1993) three-factor model. We find that time variation of betas in the CAPM and the time variation of the coefficients for the size factor (SMB) and the distress factor (HML) in the three-factor model improve the empirical performance. Therefore, our findings are consistent with time variation of firm-specific exposure to market risk, systematic credit risk and systematic size effects. However, a Bayesian model comparison trading off goodness of fit and model complexity indicates that the conditional CAPM performs best, followed by the conditional three-factor model, the unconditional CAPM, and the unconditional three-factor model. (JEL G12)
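
    Time-varying betas, the feature the MCMC approach estimates without conditioning variables, can be visualized with a much cruder tool. The sketch below (Python) simulates a random-walk beta and tracks it with rolling OLS, a simple non-Bayesian stand-in for the paper's estimation machinery.

```python
# Minimal sketch: simulate CAPM-style returns with a random-walk beta
# and track it with a rolling OLS regression (illustrative only).
import numpy as np

rng = np.random.default_rng(12)
T, window = 1000, 60
r_m = rng.normal(0.0005, 0.01, T)                  # market excess returns
beta = 1.0 + np.cumsum(rng.normal(0, 0.005, T))    # random-walk beta
r = beta * r_m + rng.normal(0, 0.01, T)            # asset excess returns

beta_hat = np.full(T, np.nan)
for t in range(window, T):
    x, y = r_m[t - window:t], r[t - window:t]
    beta_hat[t] = np.polyfit(x, y, 1)[0]           # rolling OLS slope

print(np.nanmean(np.abs(beta_hat[window:] - beta[window:])))
```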