Information Matrix (information + matrix)

Kinds of Information Matrix

  • Fisher information matrix


  • Selected Abstracts


    System identification applied to long-span cable-supported bridges using seismic records

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 3 2008
    Dionysius M. Siringoringo
    Abstract This paper presents the application of system identification (SI) to long-span cable-supported bridges using seismic records. The SI method is based on the System Realization using Information Matrix (SRIM) that utilizes correlations between base motions and bridge accelerations to identify coefficient matrices of a state-space model. Numerical simulations using a benchmark cable-stayed bridge demonstrate the advantages of this method in dealing with multiple-input multiple-output (MIMO) data from relatively short seismic records. Important issues related to the effects of sensor arrangement, measurement noise, input inclusion, and the types of input with respect to identification results are also investigated. The method is applied to identify modal parameters of the Yokohama Bay Bridge, Rainbow Bridge, and Tsurumi Fairway Bridge using the records from the 2004 Chuetsu-Niigata earthquake. Comparisons of modal parameters with the results of ambient vibration tests, forced vibration tests, and analytical models are presented, together with discussions regarding the effects of earthquake excitation amplitude on global and local structural modes. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    On the Evaluation of the Information Matrix for Multiplicative Seasonal Time-Series Models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2006
    E. J. Godolphin
    Abstract. This paper gives a procedure for evaluating the Fisher information matrix for a general multiplicative seasonal autoregressive moving average time-series model. The method is based on the well-known integral specification of Whittle [Ark. Mat. Fys. Astr. (1953) vol. 2, pp. 423–434] and leads to a system of linear equations, which is independent of the seasonal period and has a closed solution. It is shown to be much simpler, in general, than the method of Klein and Mélard [Journal of Time Series Analysis (1990) vol. 11, pp. 231–237], which depends on the seasonal period. It is also shown that the nonseasonal method of McLeod [Biometrika (1984) vol. 71, pp. 207–211] has the same basic features as that of Klein and Mélard. Explicit solutions are obtained for the simpler nonseasonal and seasonal models in common use, a feature which has not been attempted with the Klein–Mélard or the McLeod approaches. Several illustrations of these results are discussed in detail. [source]
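
    For reference, the Whittle-type integral that this evaluation starts from can be stated as follows. It is the standard per-observation Fisher information of a stationary Gaussian time series expressed through its spectral density f(ω; θ); the paper's contribution, the reduction to a seasonal-period-independent linear system with a closed solution, is not reproduced here.

```latex
% Whittle (1953): per-observation Fisher information of a stationary
% Gaussian time series with spectral density f(\omega;\theta).
I_{jk}(\theta) \;=\; \frac{1}{4\pi}\int_{-\pi}^{\pi}
  \frac{\partial \log f(\omega;\theta)}{\partial \theta_j}\,
  \frac{\partial \log f(\omega;\theta)}{\partial \theta_k}\,
  \mathrm{d}\omega .
```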


    Fisher Information Matrix of the Dirichlet-multinomial Distribution

    BIOMETRICAL JOURNAL, Issue 2 2005
    Sudhir R. Paul
    Abstract In this paper we derive explicit expressions for the elements of the exact Fisher information matrix of the Dirichlet-multinomial distribution. We show that the exact calculation is based on the beta-binomial probability function rather than that of the Dirichlet-multinomial, which makes the exact calculation quite easy. The exact results are expected to be useful for calculating standard errors of the maximum likelihood estimates of the beta-binomial parameters and of the Dirichlet-multinomial parameters for data that arise in practice in toxicology and other similar fields. Standard errors of the maximum likelihood estimates of the beta-binomial parameters and of the Dirichlet-multinomial parameters, based on the exact Fisher information matrix and on the asymptotic Fisher information matrix derived from the Dirichlet distribution, are obtained for a set of data from Haseman and Soares (1976), a dataset from Mosimann (1962) and a more recent dataset from Chen, Kodell, Howe and Gaylor (1991). There is a substantial difference between the standard errors of the estimates based on the exact Fisher information matrix and those based on the asymptotic Fisher information matrix. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
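
    As a minimal numerical companion to this idea, the sketch below approximates the Fisher information of a single beta-binomial observation as the expected outer product of the score, with the score obtained by finite differences of the log probability function. It assumes SciPy's betabinom distribution and does not reproduce the paper's exact closed-form expressions.

```python
import numpy as np
from scipy.stats import betabinom

def betabinom_fisher_info(n, a, b, eps=1e-5):
    """Approximate the 2x2 Fisher information for the beta-binomial
    parameters (a, b) as E[score score^T], with the score obtained by
    central finite differences of the log-pmf over the full support."""
    theta = np.array([a, b], dtype=float)
    ks = np.arange(n + 1)
    probs = betabinom.pmf(ks, n, a, b)

    def logpmf(t):
        return betabinom.logpmf(ks, n, t[0], t[1])

    # numerical score for every support point k
    scores = np.empty((n + 1, 2))
    for j in range(2):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        scores[:, j] = (logpmf(tp) - logpmf(tm)) / (2 * eps)

    # expectation of the outer product of the score under the model
    outer = scores[:, :, None] * scores[:, None, :]
    return (probs[:, None, None] * outer).sum(axis=0)

# example: one litter of size 10 with a = 2, b = 5
print(betabinom_fisher_info(10, 2.0, 5.0))
```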


    Estimating the effect of treatment in a proportional hazards model in the presence of non-compliance and contamination

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2007
    Jack Cuzick
    Summary. Methods for adjusting for non-compliance and contamination, which respect the randomization, are extended from binary outcomes to time-to-event analyses by using a proportional hazards model. A simple non-iterative method is developed when there are no covariates, which is a generalization of the Mantel–Haenszel estimator. More generally, a 'partial likelihood' is developed which accommodates covariates under the assumption that they are independent of compliance. A key feature is that the proportion of contaminators and non-compliers in the risk set is updated at each failure time. When covariates are not independent of compliance, a full likelihood is developed and explored, but this leads to a complex estimator. Estimating equations and information matrices are derived for these estimators and they are evaluated by simulation studies. [source]
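
    For orientation, a standard Cox partial log-likelihood, the building block that the paper extends, is sketched below. The paper's specific adjustment, re-weighting the risk-set composition for estimated contaminators and non-compliers at each failure time, is deliberately not implemented; the data and coefficients are illustrative.

```python
import numpy as np

def cox_partial_loglik(beta, times, events, X):
    """Standard Cox partial log-likelihood (Breslow handling of ties).
    The paper's estimator additionally updates the composition of the risk
    set at each failure time; that adjustment is NOT implemented here."""
    beta = np.atleast_1d(beta)
    eta = X @ beta
    order = np.argsort(times)
    times, events, eta = times[order], events[order], eta[order]
    loglik = 0.0
    for i in range(len(times)):
        if events[i] == 1:
            at_risk = times >= times[i]          # risk set at this failure time
            loglik += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return loglik

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
t = rng.exponential(scale=np.exp(-X @ np.array([0.5, -0.3])))
d = (rng.uniform(size=50) < 0.8).astype(int)     # ~80% observed events
print(cox_partial_loglik(np.array([0.5, -0.3]), t, d, X))
```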


    Design of experiments with unknown parameters in variance

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2002
    Valerii V. Fedorov
    Abstract Model fitting when the variance function depends on unknown parameters is a common problem in many areas of research. Iterated estimators which are asymptotically equivalent to maximum likelihood estimators are proposed and their convergence is discussed. From a computational point of view, these estimators are very close to the iteratively reweighted least-squares methods. The additive structure of the corresponding information matrices allows us to apply convex design theory, which leads to optimal design algorithms. We conclude with examples that illustrate how to bridge our general results with specific applied needs. In particular, a model with experimental costs is introduced and is studied within the normalized design paradigm. Copyright © 2002 John Wiley & Sons, Ltd. [source]
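
    The computational connection to iteratively reweighted least squares can be illustrated with a generic sketch: a linear mean model with a log-linear variance function, alternating a weighted least-squares update for the mean parameters with a crude working regression for the variance parameters. The model form, variable names and update rule are illustrative assumptions, not the authors' algorithm or their design results.

```python
import numpy as np

def iterated_wls(X, Z, y, n_iter=20):
    """Generic iterated estimator for y = X beta + e, Var(e_i) = exp(Z_i gamma):
    alternate weighted least squares for beta with a working log-variance
    regression for gamma.  A sketch of an IRLS-type scheme only."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS start
    gamma = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        w = np.exp(-Z @ gamma)                        # weights 1 / sigma_i^2
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)    # weighted LS update
        r2 = (y - X @ beta) ** 2
        # crude variance-model update: regress log squared residuals on Z
        # (ignores the E[log chi^2_1] offset; fine for a sketch)
        gamma = np.linalg.lstsq(Z, np.log(r2 + 1e-12), rcond=None)[0]
    return beta, gamma

# toy usage
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.uniform(0, 1, 200)])
Z = X.copy()
sigma = np.exp(0.5 * (Z @ np.array([-1.0, 1.5])))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=sigma)
print(iterated_wls(X, Z, y))
```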


    Skew-symmetric distributions generated by the distribution function of the normal distribution

    ENVIRONMETRICS, Issue 4 2007
    Héctor W. Gómez
    Abstract In this paper we study a general family of skew-symmetric distributions which are generated by the cumulative distribution function of the normal distribution. For some distributions, moments are computed, which allows the asymmetry and kurtosis coefficients to be obtained. It is shown that the range of the asymmetry and kurtosis parameters is wider than for the family of models introduced by Nadarajah and Kotz (2003). For the skew-t-normal model, we discuss approaches for obtaining maximum likelihood estimators and derive the Fisher information matrix, discussing some of its properties and special cases. We report results of an application to a real data set related to nickel concentration in soil samples. Copyright © 2006 John Wiley & Sons, Ltd. [source]
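
    A minimal sketch of the simplest member of this family, the skew-normal density 2·φ(x)·Φ(λx), is given below together with a crude maximum likelihood fit of the shape parameter. Location and scale are held fixed, and the skew-t-normal extension studied in the paper is not implemented.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def skew_normal_pdf(x, lam):
    """Density of the basic skew-symmetric family generated by the normal
    cdf: f(x; lambda) = 2 * phi(x) * Phi(lambda * x)."""
    return 2.0 * norm.pdf(x) * norm.cdf(lam * x)

def fit_lambda(x):
    """ML estimate of the shape parameter lambda of the standardized model
    above (location and scale assumed known here)."""
    nll = lambda lam: -np.sum(np.log(skew_normal_pdf(x, lam)))
    return minimize_scalar(nll, bounds=(-20, 20), method="bounded").x

# toy usage: sample via the stochastic representation delta*|Z0| + sqrt(1-delta^2)*Z1
rng = np.random.default_rng(2)
lam_true = 3.0
z0, z1 = rng.normal(size=1000), rng.normal(size=1000)
delta = lam_true / np.sqrt(1 + lam_true**2)
x = delta * np.abs(z0) + np.sqrt(1 - delta**2) * z1   # ~ skew-normal(lambda)
print(fit_lambda(x))
```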


    Quantum measurement and information

    FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 2-3 2003
    Z. Hradil
    The operationally defined invariant information introduced by Brukner and Zeilinger is related to the problem of estimation of quantum states. It quantifies how much the estimated states differ on average from the true states in the sense of the Hilbert-Schmidt norm. This information evaluates the quality of the measurement and of the data treatment adopted. Its ultimate limitation is given by the trace of the inverse of the Fisher information matrix. [source]
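
    The generic statistical inequality behind the last sentence is the multiparameter Cramér–Rao bound in its trace form (for unbiased estimators); the precise quantum-state version derived in the paper is not restated here.

```latex
% Trace form of the Cramér–Rao bound: the total mean-squared error of an
% unbiased estimator is limited by the trace of the inverse Fisher matrix F.
\mathbb{E}\,\bigl\|\hat{\theta}-\theta\bigr\|^{2}
  \;=\; \operatorname{tr}\operatorname{Cov}(\hat{\theta})
  \;\ge\; \operatorname{tr}\,F(\theta)^{-1}.
```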


    Exploratory second-order analyses for components and factors

    JAPANESE PSYCHOLOGICAL RESEARCH, Issue 1 2002
    Haruhiko Ogasawara
    Abstract: Exploratory methods using second-order components and second-order common factors were proposed. The second-order components were obtained from the resolution of the correlation matrix of obliquely rotated first-order principal components. The standard errors of the estimates of the second-order component loadings were derived from an augmented information matrix with restrictions for the loadings and associated parameters. The second-order factor analysis proposed was similar to the classical method in that the factor correlations among the first-order factors were further resolved by the exploratory method of factor analysis. However, in this paper the second-order factor loadings were estimated by generalized least squares using the asymptotic variance-covariance matrix for the first-order factor correlations. The asymptotic standard errors for the estimates of the second-order factor loadings were also derived. A numerical example was presented with simulated results. [source]


    Maximum likelihood estimation of higher-order integer-valued autoregressive processes

    JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2008
    Ruijun Bu
    Abstract. In this article, we extend the earlier work of Freeland and McCabe [Journal of Time Series Analysis (2004) Vol. 25, pp. 701–722] and develop a general framework for maximum likelihood (ML) analysis of higher-order integer-valued autoregressive processes. Our exposition includes the case where the innovation sequence has a Poisson distribution and the thinning is binomial. A recursive representation of the transition probability of the model is proposed. Based on this transition probability, we derive expressions for the score function and the Fisher information matrix, which form the basis for ML estimation and inference. Similar to the results in Freeland and McCabe (2004), we show that the score function and the Fisher information matrix can be neatly represented as conditional expectations. Using the INAR(2) specification with binomial thinning and Poisson innovations, we examine both the asymptotic efficiency and finite sample properties of the ML estimator in relation to the widely used conditional least squares (CLS) and Yule–Walker (YW) estimators. We conclude that, if the Poisson assumption can be justified, there are substantial gains to be had from using ML, especially when the thinning parameters are large. [source]
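
    The first-order building block of such likelihoods is easy to write down; the sketch below gives the INAR(1) transition probability (binomial thinning plus Poisson innovations) and the corresponding conditional log-likelihood. The paper works with the general higher-order recursion (e.g. INAR(2)), which is not reproduced here.

```python
import numpy as np
from scipy.stats import binom, poisson

def inar1_transition(k, j, alpha, lam):
    """P(X_t = k | X_{t-1} = j) for an INAR(1) model with binomial thinning
    (parameter alpha) and Poisson(lam) innovations: convolve the thinned
    survivors with the innovation count."""
    m = np.arange(0, min(j, k) + 1)
    return np.sum(binom.pmf(m, j, alpha) * poisson.pmf(k - m, lam))

def inar1_loglik(x, alpha, lam):
    """Conditional log-likelihood of an observed count series x under INAR(1)."""
    return sum(np.log(inar1_transition(x[t], x[t - 1], alpha, lam))
               for t in range(1, len(x)))

# toy usage: simulate a series and evaluate the log-likelihood at the truth
rng = np.random.default_rng(3)
alpha, lam, n = 0.4, 2.0, 300
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))                # stationary mean start
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
print(inar1_loglik(x, alpha, lam))
```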


    Real-time adaptive sequential design for optimal acquisition of arterial spin labeling MRI data

    MAGNETIC RESONANCE IN MEDICINE, Issue 1 2010
    Jingyi Xie
    Abstract An optimal sampling schedule strategy based on the Fisher information matrix and the D-optimality criterion has previously been proposed as a formal framework for optimizing inversion time scheduling for multi-inversion-time arterial spin labeling experiments. The optimal sampling schedule approach has the primary advantage of improving parameter estimation precision, but it requires a priori estimates of plausible parameter distributions, which may not be available in all situations. An adaptive sequential design approach addresses this issue by incorporating the optimal sampling schedule strategy into an adaptive process that iteratively updates the parameter estimates and adjusts the optimal sampling schedule accordingly as data are acquired. In this study, the adaptive sequential design method was experimentally implemented with a real-time feedback scheme on a clinical MRI scanner and was tested in six normal volunteers. Adapted schedules were found to accommodate the intrinsically prolonged arterial transit times in the occipital lobe of the brain. Simulation of applying the adaptive sequential design approach to subjects with pathologically reduced perfusion was also implemented. Simulation results show that the adaptive sequential design approach is capable of incorporating pathologic parameter information into an optimal arterial spin labeling scheduling design within a clinically useful experimental time. Magn Reson Med, 2010. © 2010 Wiley-Liss, Inc. [source]
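
    The D-optimal scheduling idea can be sketched generically: given a parametric signal model and a current parameter estimate, pick the sampling times that maximize the log-determinant of the Fisher information (here J'J, for i.i.d. Gaussian noise). The two-parameter model below is a stand-in, not the ASL kinetic model used in the paper, and the greedy search is only one simple way to optimize the schedule.

```python
import numpy as np

def model(t, theta):
    """Stand-in two-parameter signal model (amplitude, time constant);
    NOT the ASL kinetic model used in the paper."""
    A, tau = theta
    return A * (1 - np.exp(-t / tau))

def jacobian(t, theta, eps=1e-6):
    """Numerical Jacobian of the model w.r.t. theta at the sampling times t."""
    J = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[j] += eps
        tm[j] -= eps
        J[:, j] = (model(t, tp) - model(t, tm)) / (2 * eps)
    return J

def greedy_d_optimal(candidates, theta, k):
    """Greedily pick k sampling times maximizing log det(J^T J), i.e. the
    D-criterion for i.i.d. Gaussian noise; a tiny ridge keeps the early,
    rank-deficient steps well defined."""
    chosen = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for c in candidates:
            if c in chosen:
                continue
            J = jacobian(np.array(chosen + [c]), theta)
            _, logdet = np.linalg.slogdet(J.T @ J + 1e-10 * np.eye(len(theta)))
            if logdet > best_val:
                best, best_val = c, logdet
        chosen.append(best)
    return sorted(chosen)

# toy usage: choose 4 inversion-time-like samples from a candidate grid
print(greedy_d_optimal(list(np.linspace(0.1, 3.0, 30)), (1.0, 0.8), 4))
```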


    A Pareto model for classical systems

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 1 2008
    Saralees Nadarajah
    Abstract A new Pareto distribution is introduced for pooling knowledge about classical systems. It takes the form of the product of two Pareto probability density functions (pdfs). Various structural properties of this distribution are derived, including its cumulative distribution function (cdf), moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates and the Fisher information matrix. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Optimal designs for parameter estimation of the Ornstein,Uhlenbeck process

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 5 2009
    Maroussa Zagoraiou
    Abstract This paper deals with optimal designs for Gaussian random fields with constant trend and exponential correlation structure, widely known as the Ornstein–Uhlenbeck process. Assuming the maximum likelihood approach, we study the optimal design problem for the estimation of the trend µ and of the correlation parameter, using a criterion based on the Fisher information matrix. For the problem of trend estimation, we give a new proof of the optimality of the equispaced design for any sample size (see Statist. Probab. Lett. 2008; 78:1388–1396). We also show that for the estimation of the correlation parameter, an optimal design does not exist. Furthermore, we show that the optimal strategy for the trend conflicts with the one for the correlation parameter, since the equispaced design is the worst solution for estimating the correlation. Hence, when the inferential purpose concerns both of the unknown parameters we propose the geometric progression design, namely a flexible class of procedures that allows the experimenter to choose a suitable compromise regarding the precision of estimation of the two unknown parameters while guaranteeing, at the same time, high efficiency for both. Copyright © 2008 John Wiley & Sons, Ltd. [source]
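
    The trend part of this comparison is easy to reproduce numerically: for a known exponential correlation, the Fisher information for the constant trend is the quadratic form 1'Σ⁻¹1, so different designs on a fixed interval can be ranked directly. The correlation value and progression ratio below are arbitrary illustrative choices, and the paper's full criterion (which also covers the correlation parameter) is not implemented.

```python
import numpy as np

def trend_information(t, rho):
    """Fisher information for the constant trend mu of a unit-variance
    Ornstein-Uhlenbeck process observed at times t, with correlation
    Corr(t_i, t_j) = rho**|t_i - t_j| (0 < rho < 1): I_mu = 1' Sigma^{-1} 1."""
    t = np.asarray(t, float)
    Sigma = rho ** np.abs(t[:, None] - t[None, :])
    ones = np.ones(len(t))
    return ones @ np.linalg.solve(Sigma, ones)

# compare an equispaced design with a geometric-progression design on [0, 1]
n, rho = 6, 0.5
equi = np.linspace(0.0, 1.0, n)
r = 0.6                                    # hypothetical progression ratio
geo = np.cumsum([0.0] + [r ** i for i in range(n - 1)])
geo = geo / geo[-1]                        # rescale the design to [0, 1]
print(trend_information(equi, rho), trend_information(geo, rho))
```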


    Issues in the optimal design of computer simulation experiments

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2009
    Werner Müller
    Abstract Output from computer simulation experiments is often approximated as realizations of correlated random fields. Consequently, the corresponding optimal design questions must cope with the existence and detection of an error correlation structure, issues largely unaccounted for by traditional optimal design theory. Unfortunately, many of the nice features of well-established design techniques, such as additivity of the information matrix, convexity of design criteria, etc., do not carry over to the setting of interest. This may lead to unexpected, counterintuitive, even paradoxical effects in the design as well as the analysis stage of computer simulation experiments. In this paper we intend to give an overview and some simple but illuminating examples of this behaviour. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    A model-based approach to quality control of paper production

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2004
    Patrick E. Brown
    Abstract This paper uses estimated model parameters as inputs into multivariate quality control charts. The thickness of paper leaving a paper mill is measured at a high sampling rate, and these data are grouped into successive data segments. A stochastic model for paper is fitted to each data segment, leading to parameter estimates and information-based standard errors for these estimates. The estimated model parameters vary by more than can be explained by the information-based standard errors, suggesting that the 'true' underlying parameters are not constant over time. A model is formulated for the true parameters in which the information matrix dictates the distribution for the observed parameters given the true parameters. Copyright © 2004 John Wiley & Sons, Ltd. [source]
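
    A generic sketch of charting estimated parameters in this spirit is given below: each segment's estimate is compared with the overall centre via a Hotelling-type T² statistic whose covariance adds the information-based (within-segment) uncertainty to a between-segment variance for the drifting 'true' parameters. The hierarchical structure, names and numbers are illustrative assumptions, not the authors' model.

```python
import numpy as np

def t2_chart(theta_hat, info_matrices, tau2):
    """Hotelling-type T^2 statistics for a sequence of estimated parameter
    vectors.  Each segment's covariance is taken as the inverse information
    matrix (within-segment uncertainty) plus a diagonal between-segment
    variance tau2 for the drifting 'true' parameters."""
    theta_hat = np.asarray(theta_hat, float)
    center = theta_hat.mean(axis=0)
    stats = []
    for th, info in zip(theta_hat, info_matrices):
        cov = np.linalg.inv(info) + np.diag(tau2)
        d = th - center
        stats.append(d @ np.linalg.solve(cov, d))
    return np.array(stats)

# toy usage: 20 segments, 2 parameters
rng = np.random.default_rng(4)
theta = rng.normal([1.0, 0.2], [0.05, 0.02], size=(20, 2))
infos = [np.diag([400.0, 2500.0]) for _ in range(20)]   # info-based SEs 0.05, 0.02
print(t2_chart(theta, infos, tau2=np.array([0.0025, 0.0004])))
```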


    LEVERAGE ADJUSTMENTS FOR DISPERSION MODELLING IN GENERALIZED NONLINEAR MODELS

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2009
    Gordon K. Smyth
    Summary For normal linear models, it is generally accepted that residual maximum likelihood estimation is appropriate when covariance components require estimation. This paper considers generalized linear models in which both the mean and the dispersion are allowed to depend on unknown parameters and on covariates. For these models there is no closed form equivalent to residual maximum likelihood except in very special cases. Using a modified profile likelihood for the dispersion parameters, an adjusted score vector and adjusted information matrix are found under an asymptotic development that holds as the leverages in the mean model become small. Subsequently, the expectation of the fitted deviances is obtained directly to show that the adjusted score vector is unbiased at least to O(1/n). Exact results are obtained in the single-sample case. The results reduce to residual maximum likelihood estimation in the normal linear case. [source]


    MAXIMUM LIKELIHOOD ESTIMATION FOR A POISSON RATE PARAMETER WITH MISCLASSIFIED COUNTS

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2005
    James D. Stamey
    Summary This paper proposes a Poisson-based model that uses both error-free data and error-prone data subject to misclassification in the form of false-negative and false-positive counts. It derives maximum likelihood estimators (MLEs) for the Poisson rate parameter and the two misclassification parameters: the false-negative parameter and the false-positive parameter. It also derives expressions for the information matrix and the asymptotic variances of the MLE for the rate parameter, the MLE for the false-positive parameter, and the MLE for the false-negative parameter. Using these expressions the paper analyses the value of the fallible data. It studies characteristics of the new double-sampling rate estimator via a simulation experiment and applies the new MLEs and confidence intervals to a real dataset. [source]


    Score Tests for Exploring Complex Models: Application to HIV Dynamics Models

    BIOMETRICAL JOURNAL, Issue 1 2010
    Julia Drylewicz
    Abstract In biostatistics, more and more complex models are being developed. This is particularly the case in systems biology. Fitting complex models can be very time-consuming, since many models often have to be explored. Among the possibilities are the introduction of explanatory variables and the determination of random effects. Score tests are attractive here because they require fitting only the model under the null hypothesis. The particularity of this use of the score test is that the null hypothesis is not itself very simple; typically, some random effects may be present under the null hypothesis. Moreover, the information matrix cannot be computed, but only an approximation based on the score. This article examines this situation with the specific example of HIV dynamics models. We examine score test statistics for testing the effect of explanatory variables and the variance of a random effect in this complex situation. We study the type I error rates and statistical power of these score test statistics, and we apply the score test approach to a real data set of HIV-infected patients. [source]
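
    A generic version of such a test, with the information approximated from the score contributions themselves rather than computed exactly, can be sketched as follows. The per-subject contributions would come from the fitted null model; here they are simulated, and nothing specific to the HIV dynamics models of the paper is implemented.

```python
import numpy as np
from scipy.stats import chi2

def score_test(score_contributions):
    """Generic score test: U = sum of per-subject score contributions
    evaluated at the null fit; the information is approximated by the
    outer product of the contributions (a score-based approximation),
    giving T = U' I^{-1} U, referred to a chi^2(df) distribution."""
    S = np.asarray(score_contributions, float)   # shape (n_subjects, n_params)
    U = S.sum(axis=0)
    I = S.T @ S                                  # outer-product information
    T = U @ np.linalg.solve(I, U)
    df = S.shape[1]
    return T, chi2.sf(T, df)

# toy usage: contributions centred near zero, so the test should not reject
rng = np.random.default_rng(5)
print(score_test(rng.normal(scale=0.3, size=(60, 2))))
```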


    Standard Errors for EM Estimates in Generalized Linear Models with Random Effects

    BIOMETRICS, Issue 3 2000
    Herwig Friedl
    Summary. A procedure is derived for computing standard errors of EM estimates in generalized linear models with random effects. Quadrature formulas are used to approximate the integrals in the EM algorithm, where two different approaches are pursued, i.e., Gauss-Hermite quadrature in the case of Gaussian random effects and nonparametric maximum likelihood estimation for an unspecified random effect distribution. An approximation of the expected Fisher information matrix is derived from an expansion of the EM estimating equations. This allows for inferential arguments based on EM estimates, as demonstrated by an example and simulations. [source]
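
    The two computational ingredients mentioned, Gauss-Hermite quadrature over a Gaussian random effect and standard errors from an (approximate) information matrix, can be sketched for a toy random-intercept logistic model. This is a plain marginal-likelihood version with a numerically differentiated observed information, using hypothetical parameter names; it is not the paper's EM-expansion estimator or its nonparametric variant.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_loglik(params, y, groups, n_nodes=20):
    """Log-likelihood of a random-intercept logistic model
    logit P(y_ij = 1 | b_i) = beta0 + b_i,  b_i ~ N(0, sigma^2),
    with the integral over b_i approximated by Gauss-Hermite quadrature."""
    beta0, log_sigma = params
    sigma = np.exp(log_sigma)
    nodes, weights = hermgauss(n_nodes)          # nodes/weights for weight exp(-x^2)
    b = np.sqrt(2.0) * sigma * nodes             # change of variable to N(0, sigma^2)
    w = weights / np.sqrt(np.pi)
    p = 1.0 / (1.0 + np.exp(-(beta0 + b)))       # success probability at each node
    ll = 0.0
    for g in np.unique(groups):
        yg = y[groups == g]
        lik_nodes = np.prod(np.where(yg[:, None] == 1, p, 1 - p), axis=0)
        ll += np.log(np.sum(w * lik_nodes))
    return ll

def numerical_se(params, y, groups, eps=1e-3):
    """Standard errors from the numerically differentiated observed
    information (negative Hessian of the marginal log-likelihood)."""
    k = len(params)
    H = np.empty((k, k))
    f = lambda p: marginal_loglik(p, y, groups)
    for i in range(k):
        for j in range(k):
            pp = np.array(params, float)
            pp[i] += eps; pp[j] += eps; fpp = f(pp)
            pp[j] -= 2 * eps; fpm = f(pp)
            pp[i] -= 2 * eps; fmm = f(pp)
            pp[j] += 2 * eps; fmp = f(pp)
            H[i, j] = (fpp - fpm - fmp + fmm) / (4 * eps**2)
    return np.sqrt(np.diag(np.linalg.inv(-H)))

# toy usage: 30 groups of 8 observations
rng = np.random.default_rng(6)
groups = np.repeat(np.arange(30), 8)
b = rng.normal(0.0, 1.0, 30)[groups]
y = (rng.uniform(size=240) < 1.0 / (1.0 + np.exp(-(0.5 + b)))).astype(int)
theta = np.array([0.5, 0.0])                     # (beta0, log sigma) at the truth
print(marginal_loglik(theta, y, groups), numerical_se(theta, y, groups))
```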