Estimation Problem

Kinds of Estimation Problem

  • parameter estimation problem


Selected Abstracts


    Bit-length expansion for digital images

    ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 10 2009
    Tomoaki Kimura
    Abstract The bit length of a digital image is determined by the analog-to-digital (AD) converter. Bit-length expansion is needed for high-quality displays such as liquid crystal displays (LCDs) and plasma displays, and also for low bit-length images, which are degraded by pseudo-contours. In this paper, we propose a bit-length expansion method from M levels to N levels, where N = 3(M − 1) + 1; that is, two quantization levels are introduced between each pair of existing levels. The bit-length expansion method reduces to the problem of estimating an error image between the M-level image and the N-level image. The error image is a pattern image that takes only three values. We present an estimation method for this three-valued pattern image and show, through several application results, that the proposed method can remove the pseudo-contours of digital images. © 2009 Wiley Periodicals, Inc. Electron Comm Jpn, 92(10): 32–40, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10105 [source]
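
    A quick numeric sketch of the level mapping described above (the mapping only, not the authors' error-image estimation): going from M levels to N = 3(M − 1) + 1 levels places each original level three index positions apart, leaving two new quantization levels between neighbouring originals. All names and values below are illustrative assumptions.

```python
import numpy as np

# Illustrative only: expand an M-level image to N = 3*(M - 1) + 1 levels.
# Each original level k maps to level 3*k, so two unused quantization levels
# appear between every pair of neighbouring original levels.
M = 8
N = 3 * (M - 1) + 1                      # 22 levels

rng = np.random.default_rng(0)
img_m = rng.integers(0, M, size=(4, 4))  # toy M-level image
img_n = 3 * img_m                        # naive expansion; no new detail yet

# The paper estimates an error image taking three values (e.g. -1, 0, +1 --
# an assumption here) that refines img_n between the original levels; this
# sketch uses a trivial all-zero error image as a placeholder.
error = np.zeros_like(img_n)
img_expanded = img_n + error
print(img_m)
print(img_expanded)
```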


    Mahalanobis distance-based traffic matrix estimation

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2010
    Dingde Jiang
    This letter studies the large-scale IP traffic matrix (TM) estimation problem and proposes a novel method called Mahalanobis distance-based regressive inference (MDRI). By using the Mahalanobis distance as the optimality metric, we mitigate the highly ill-posed nature of this problem. We formulate TM estimation as an optimization process, and by optimizing the regularized equation for this problem, the TM estimate can be obtained accurately. Testing results are shown to be promising. Copyright © 2010 John Wiley & Sons, Ltd. [source]
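
    As a rough sketch of what a Mahalanobis-weighted, regularised TM estimate can look like (not the authors' exact MDRI algorithm), one can minimise a Mahalanobis data-misfit term plus a quadratic regulariser; the routing matrix, noise covariance, prior, and dimensions below are invented stand-ins.

```python
import numpy as np

# Sketch: link loads y = A x + noise, with a routing matrix A that has far
# fewer rows than columns (the ill-posed part), noise covariance Sigma, and a
# crude prior TM x_prior. Minimise
#   (y - A x)' Sigma^{-1} (y - A x) + lam * ||x - x_prior||^2
rng = np.random.default_rng(1)
n_links, n_flows = 6, 12
A = rng.integers(0, 2, size=(n_links, n_flows)).astype(float)
x_true = rng.gamma(shape=2.0, scale=10.0, size=n_flows)
Sigma = np.diag(rng.uniform(0.5, 2.0, size=n_links))
y = A @ x_true + rng.multivariate_normal(np.zeros(n_links), Sigma)

x_prior = np.full(n_flows, x_true.mean())   # stand-in prior (e.g. gravity model)
lam = 0.1                                   # regularisation weight

Si = np.linalg.inv(Sigma)
x_hat = np.linalg.solve(A.T @ Si @ A + lam * np.eye(n_flows),
                        A.T @ Si @ y + lam * x_prior)
print(np.round(x_hat, 1))
```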


    Sequential Monte Carlo methods for multi-aircraft trajectory prediction in air traffic management

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 10 2010
    I. Lymperopoulos
    Abstract Accurate prediction of aircraft trajectories is an important part of decision support and automated tools in air traffic management. We demonstrate that by combining information from multiple aircraft at different locations and time instants, one can provide improved trajectory prediction (TP) accuracy. To perform multi-aircraft TP, we have at our disposal abundant data. We show how this multi-aircraft sensor fusion problem can be formulated as a high-dimensional state estimation problem. The high dimensionality of the problem and nonlinearities in aircraft dynamics and control prohibit the use of common filtering methods. We demonstrate the inefficiency of several sequential Monte Carlo algorithms on feasibility studies involving multiple aircraft. We then develop a novel particle filtering algorithm to exploit the structure of the problem and solve it in realistic scale situations. In all studies we assume that aircraft fly level (possibly at different altitudes) with known, constant, aircraft-dependent airspeeds and estimate the wind forecast errors based only on ground radar measurements. Current work concentrates on extending the algorithms to non-level flights, the joint estimation of wind forecast errors and the airspeed and mass of the different aircraft and the simultaneous fusion of airborne and ground radar measurements. Copyright © 2010 John Wiley & Sons, Ltd. [source]
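
    For readers unfamiliar with the sequential Monte Carlo machinery the abstract refers to, a generic bootstrap (SIR) particle filter on a toy scalar model is sketched below; the paper's multi-aircraft filter exploits problem structure well beyond this, and every number here is an arbitrary assumption.

```python
import numpy as np

# Generic bootstrap (SIR) particle filter on a toy scalar state-space model.
rng = np.random.default_rng(2)
T, n_particles = 50, 500
a, q, r = 0.95, 0.5, 1.0                 # assumed dynamics and noise levels

# Simulate a toy trajectory and noisy observations.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + q * rng.standard_normal()
y = x_true + r * rng.standard_normal(T)

particles = rng.standard_normal(n_particles)
estimates = np.zeros(T)
for t in range(T):
    # Propagate particles through the (assumed) dynamics.
    particles = a * particles + q * rng.standard_normal(n_particles)
    # Weight by the measurement likelihood and normalise.
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # Multinomial resampling to fight weight degeneracy.
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]

print(np.round(estimates[-5:], 2))
```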


    Simultaneous input and parameter estimation with input observers and set-membership parameter bounding: theory and an automotive application

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 5 2006
    I. Kolmanovsky
    Abstract The paper addresses an on-line, simultaneous input and parameter estimation problem for a first-order system affected by measurement noise. This problem is motivated by practical applications in the area of engine control. Our approach combines an input observer for the unknown input with a set-membership algorithm to estimate the parameter. The set-membership algorithm takes advantage of a priori available information such as (i) known bounds on the unknown input, measurement noise and time rate of change of the unknown input; (ii) the form of the input observer in which the unknown parameter affects only the observer output; and (iii) the input observer error bounds for the case when the parameter is known exactly. The asymptotic properties of the algorithm as the observer gain increases are delineated. It is shown that for accurate estimation the unknown input needs to approach the known bounds a sufficient number of times (these time instants need not be known). Powertrain control applications are discussed and a simulation example based on application to engine control is reported. A generalization of the basic ideas to higher order systems is also elaborated. Copyright © 2006 John Wiley & Sons, Ltd. [source]
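
    A stripped-down illustration of the set-membership idea (interval bounds on a parameter that shrink as data arrive), not the paper's combined input-observer scheme; the static model and the disturbance bound below are assumptions made purely for the sketch.

```python
import numpy as np

# Sketch: for a static relation y_k = theta * u_k + d_k with a known bound
# |d_k| <= d_bar, each measurement confines theta to an interval, and the
# running intersection of those intervals is the set-membership estimate.
rng = np.random.default_rng(9)
theta_true, d_bar = 0.7, 0.2
lo, hi = -np.inf, np.inf
for k in range(200):
    u = rng.uniform(0.5, 2.0)                  # persistently exciting, positive input
    y = theta_true * u + rng.uniform(-d_bar, d_bar)
    lo = max(lo, (y - d_bar) / u)              # u > 0 keeps the bound ordering
    hi = min(hi, (y + d_bar) / u)
print(round(lo, 3), round(hi, 3))              # shrinking interval containing theta_true
```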


    A practical approach for estimating illumination distribution from shadows using a single image

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2005
    Taeone Kim
    Abstract This article presents a practical method that estimates illumination distribution from shadows using only a single image. The shadows are assumed to be cast on a textured, Lambertian surface by an object of known shape. Previous methods for illumination estimation from shadows usually require that the reflectance of the surface on which shadows are cast be constant or uniform, or need an additional image to cancel out the effect of the textured surface's varying albedo on illumination estimation. Our method, by contrast, deals with an estimation problem for which surface albedo information is not available; in this case, the estimation problem is underdetermined. We show that the combination of regularization by correlation and some user-specified information can be a practical means of solving the underdetermined problem. In addition, as an optimization tool for solving the problem, we develop a constrained Non-Negative Quadratic Programming (NNQP) technique into which not only regularization but also multiple linear constraints induced by the user-specified information are easily incorporated. We test and validate our method on both synthetic and real images and present some experimental results. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 143–154, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20047 [source]
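
    A minimal sketch of regularised non-negative estimation in the spirit of the NNQP formulation above: non-negative coefficients, an underdetermined system, and a Tikhonov regulariser, solved here with SciPy's NNLS on a stacked system. The dimensions, matrices, and the simple identity regulariser are assumptions, and the user-specified linear constraints from the paper are omitted.

```python
import numpy as np
from scipy.optimize import nnls

# min ||A x - b||^2 + lam * ||L x||^2  subject to  x >= 0,
# solved by stacking the regulariser under A and calling NNLS.
rng = np.random.default_rng(3)
n_obs, n_lights = 20, 40              # fewer observations than unknowns
A = rng.random((n_obs, n_lights))
x_true = np.maximum(rng.standard_normal(n_lights), 0.0)
b = A @ x_true + 0.01 * rng.standard_normal(n_obs)

lam = 0.05
L = np.eye(n_lights)                  # plain Tikhonov regulariser (assumption)

A_aug = np.vstack([A, np.sqrt(lam) * L])
b_aug = np.concatenate([b, np.zeros(n_lights)])
x_hat, residual = nnls(A_aug, b_aug)
print(residual, x_hat.max())
```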


    The pure parsimony haplotyping problem: overview and computational advances

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 5 2009
    Daniele Catanzaro
    Abstract Haplotype estimation from aligned single-nucleotide polymorphism fragments has attracted increasing attention in recent years due to its importance in the analysis of fine-scale genetic data. Its application fields range from mapping complex disease genes to inferring population histories, as well as drug design, functional genomics, and pharmacogenetics. The literature proposes a number of estimation criteria for selecting a set of haplotypes among possible alternatives. Usually, such criteria can be expressed as objective functions, and the sets of haplotypes that optimize them are referred to as optimal. One of the most important estimation criteria is pure parsimony, which states that the optimal set of haplotypes for a given set of genotypes is the one with minimal cardinality. Finding the minimal number of haplotypes necessary to explain a given set of genotypes involves solving an optimization problem, called the pure parsimony haplotyping (PPH) estimation problem, which is notoriously NP-hard. This article provides an overview of PPH and discusses the different solution approaches that occur in the literature. [source]


    Constrained process monitoring: Moving-horizon approach

    AICHE JOURNAL, Issue 1 2002
    Christopher V. Rao
    Moving-horizon estimation (MHE) is an optimization-based strategy for process monitoring and state estimation. One may view MHE as an extension of Kalman filtering to constrained and nonlinear processes; MHE therefore subsumes both Kalman and extended Kalman filtering. In addition, MHE allows one to include constraints in the estimation problem. One can significantly improve the quality of state estimates for certain problems by incorporating prior knowledge in the form of inequality constraints. Inequality constraints provide a flexible tool for complementing process knowledge. One may also use inequality constraints as a strategy for model simplification. The ability to include constraints and nonlinear dynamics is what distinguishes MHE from other estimation strategies. Both the practical and theoretical issues related to MHE are discussed. Using a series of example monitoring problems, the practical advantages of MHE are illustrated by demonstrating how the addition of constraints can improve and simplify the process monitoring problem. [source]
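
    A toy moving-horizon estimator for a scalar linear system, just to show the structure (optimise over a sliding window of states, with an inequality constraint a Kalman filter cannot enforce). The dynamics, noise levels, horizon length, and the crude arrival cost are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.optimize import minimize

# Toy system: x_{t+1} = a*x_t + process noise, y_t = x_t + measurement noise,
# with the physical constraint x_t >= 0 enforced in the estimator.
rng = np.random.default_rng(4)
a, q, r, N = 0.9, 0.1, 0.5, 10
T = 60

x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = max(a * x_true[t - 1] + q * rng.standard_normal(), 0.0)
y = x_true + r * rng.standard_normal(T)

def mhe_window(y_win, x_prior):
    # Decision variables: the N states in the current window.
    def cost(x):
        proc = np.sum((x[1:] - a * x[:-1]) ** 2) / q ** 2   # process-noise penalty
        meas = np.sum((y_win - x) ** 2) / r ** 2            # measurement misfit
        arrival = (x[0] - x_prior) ** 2 / q ** 2            # crude arrival cost
        return proc + meas + arrival
    res = minimize(cost, np.maximum(y_win, 0.0),
                   bounds=[(0.0, None)] * len(y_win))       # x_t >= 0 constraint
    return res.x

x_hat = np.zeros(T)
for t in range(N, T):
    window = mhe_window(y[t - N:t], x_hat[t - N])
    x_hat[t] = window[-1]
print(np.round(x_hat[-5:], 2))
```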


    Design for model parameter uncertainty using nonlinear confidence regions

    AICHE JOURNAL, Issue 8 2001
    William C. Rooney
    An accurate method is presented that accounts for uncertain model parameters in nonlinear process optimization problems. The model is represented by algebraic equations. Uncertain parameters are often discretized into a finite number of values that are then used in multiperiod optimization problems. These discrete values usually range between some lower and upper bound that can be derived from individual confidence intervals. Frequently, more than one uncertain parameter is estimated at a time from a single set of experiments, so using simple lower and upper bounds to describe these parameters may not be accurate, since it assumes the parameters are uncorrelated. In 1999, Rooney and Biegler showed the importance of including parameter correlation in design problems by using elliptical joint confidence regions to describe the correlation among the uncertain model parameters. In chemical engineering systems, however, the parameter estimation problem is often highly nonlinear, and the elliptical confidence regions derived from these problems may not be accurate enough to capture the actual model parameter uncertainty. In this work, the description of model parameter uncertainty is improved by using confidence regions derived from the likelihood ratio test, which capture the nonlinearities in the parameter estimation problem efficiently and accurately. Several solved examples show the importance of accurately capturing the actual model parameter uncertainty at the design stage. [source]


    A clustering approach to identify the time of a step change in Shewhart control charts

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2008
    Mehdi Ghazanfari
    Abstract Control charts are the most popular statistical process control tools used to monitor process changes. When a control chart indicates an out-of-control signal, it means that the process has changed. However, control chart signals do not indicate the real time of process changes, which is essential for identifying and removing assignable causes and ultimately improving the process. Identifying the real time of the change is known as the change-point estimation problem. Most of the traditional methods of estimating the process change point are developed based on the assumption that the process follows a normal distribution with known parameters, which is seldom true. In this paper, we propose clustering techniques to estimate Shewhart control chart change points. The proposed approach does not depend on the true values of the parameters, or even on the distribution of the process variables; accordingly, it is applicable to both Phase I and Phase II of normal and non-normal processes. At the end, we discuss the performance of the proposed method in comparison with traditional procedures through extensive simulation studies. Copyright © 2008 John Wiley & Sons, Ltd. [source]
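
    A crude change-point illustration, not the authors' clustering algorithm: scan every split of the monitored observations into a "before" and an "after" segment and keep the split that minimises the within-segment variance, i.e. a two-cluster partition that respects time order. The simulated step change below is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
tau_true = 70
data = np.concatenate([rng.normal(0.0, 1.0, tau_true),
                       rng.normal(1.5, 1.0, 30)])   # step change after sample 70

def estimate_change_point(x):
    # Choose the split minimising the total within-segment sum of squares.
    best_tau, best_cost = None, np.inf
    for tau in range(1, len(x)):
        left, right = x[:tau], x[tau:]
        cost = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

print(estimate_change_point(data))   # close to 70 in this toy example
```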


    Parameter Estimation in the Error-in-Variables Models Using the Gibbs Sampler

    THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 1 2006
    Jessada J. Jitjareonchai
    Abstract Least squares and maximum likelihood techniques have long been used in parameter estimation problems. However, those techniques provide only point estimates with unknown or approximate uncertainty information. Bayesian inference coupled with the Gibbs Sampler is an approach to parameter estimation that exploits modern computing technology, and the estimation results are complete with exact uncertainty information. The Error-in-Variables model (EVM) approach is investigated in this study; in it, both dependent and independent variables contain measurement errors, and the true values and uncertainties of all measurements are estimated. This EVM set-up leads to unusually large dimensionality in the estimation problem, which makes parameter estimation very difficult with classical techniques. In this paper, an innovative way of performing parameter estimation is introduced to chemical engineers. The paper shows that the method is simple and efficient, and that complete and accurate uncertainty information about parameter estimates is readily available. Two real-world EVM examples are demonstrated: a large-scale linear model and an epidemiological model. The former is simple enough for most readers to understand the new concepts without difficulty. The latter has very interesting features in that a Poisson distribution is assumed, and a parameter with known distribution is retained while other unknown parameters are estimated. The Gibbs Sampler results are compared with those of least squares. [source]
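
    A bare-bones Gibbs sampler for an ordinary linear model (not the error-in-variables set-up, which would also sample the true values of the error-corrupted regressors), only to show the "draw each unknown from its full conditional" loop that the abstract relies on; the priors, data, and chain length are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)   # simulated data, true beta = (2.0, 0.5)
X = np.column_stack([np.ones(n), x])

beta = np.zeros(2)
sigma2 = 1.0
samples = []
XtX_inv = np.linalg.inv(X.T @ X)
for it in range(3000):
    # 1) beta | sigma2, data ~ Normal(OLS estimate, sigma2 * (X'X)^-1), flat prior on beta.
    beta_hat = XtX_inv @ X.T @ y
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # 2) sigma2 | beta, data ~ scaled inverse-chi-square (Jeffreys prior on sigma2).
    resid = y - X @ beta
    sigma2 = np.sum(resid ** 2) / rng.chisquare(n)
    samples.append(np.concatenate([beta, [sigma2]]))

samples = np.array(samples[500:])          # drop burn-in
print(samples.mean(axis=0))                # posterior means; spread gives uncertainty
```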


    State estimation for time-delay systems with probabilistic sensor gain reductions

    ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 6 2008
    Xiao He
    Abstract This paper addresses a new state estimation problem for a class of time-delay systems with probabilistic sensor gain faults. The sensor gain reductions are described by a stochastic variable that is uniformly distributed over a known interval, a natural reflection of the probabilistic performance deterioration of sensors when gain-reduction faults occur. Attention is focused on the design of a state estimator such that, for all possible sensor faults and all external disturbances, the filtering error dynamics are asymptotically mean-square stable and fulfil a prescribed disturbance attenuation level. The existence of the desired filters is proved to depend on the feasibility of a certain linear matrix inequality (LMI), and a numerical example is given to illustrate the effectiveness of the proposed design approach. Copyright © 2008 Curtin University of Technology and John Wiley & Sons, Ltd. [source]


    Soil model parameter estimation with ensemble data assimilation

    ATMOSPHERIC SCIENCE LETTERS, Issue 2 2009
    Biljana Orescanin
    Abstract A parameter estimation problem in the context of ensemble data assimilation is addressed. In an example using a one-point soil temperature model, the parameters corresponding to the emissivity and to the effective depth between the surface and the lowest atmospheric model level are estimated together with the initial conditions for temperature. The nonlinear synthetic observations representing various fluxes are assimilated using the Maximum Likelihood Ensemble Filter (MLEF). The results indicate a benefit of simultaneous assimilation of initial conditions and parameters, and the estimated uncertainties are in general agreement with the actual uncertainties. Copyright © 2009 Royal Meteorological Society [source]
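
    A toy "augmented state" ensemble sketch of joint state/parameter assimilation: a one-point temperature relaxation model with the unknown rate parameter appended to the state and updated by a stochastic ensemble Kalman step. The model, noise levels, and the plain EnKF update are assumptions for illustration; the paper itself uses the Maximum Likelihood Ensemble Filter (MLEF).

```python
import numpy as np

# Toy model: T_{k+1} = T_k + dt * alpha * (T_air - T_k); alpha is unknown.
rng = np.random.default_rng(10)
dt, T_air, alpha_true = 1.0, 15.0, 0.08
n_ens, n_steps, obs_std = 50, 40, 0.3

def step(T, alpha):
    return T + dt * alpha * (T_air - T)

# Truth run and synthetic observations of T.
T_true = 5.0
obs = []
for _ in range(n_steps):
    T_true = step(T_true, alpha_true)
    obs.append(T_true + obs_std * rng.standard_normal())

# Ensemble of augmented states [T, alpha].
ens = np.column_stack([rng.normal(5.0, 1.0, n_ens),
                       rng.uniform(0.02, 0.2, n_ens)])
for y in obs:
    # Forecast step (the parameter is treated as persistent).
    ens[:, 0] = step(ens[:, 0], ens[:, 1])
    # Stochastic EnKF update using the sampled covariance between [T, alpha] and T.
    y_pert = y + obs_std * rng.standard_normal(n_ens)
    cov = np.cov(ens.T)                       # 2x2 augmented-state covariance
    gain = cov[:, 0] / (cov[0, 0] + obs_std ** 2)
    ens += np.outer(y_pert - ens[:, 0], gain)

print(ens.mean(axis=0))                       # [temperature estimate, alpha estimate]
```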


    A NOTE ON SAMPLING DESIGNS FOR RANDOM PROCESSES WITH NO QUADRATIC MEAN DERIVATIVE

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2006
    Bhramar Mukherjee
    Summary Several authors have previously discussed the problem of obtaining asymptotically optimal design sequences for estimating the path of a stochastic process using intricate analytical techniques. In this note, an alternative treatment is provided for obtaining asymptotically optimal sampling designs for estimating the path of a second order stochastic process with known covariance function. A simple estimator is proposed which is asymptotically equivalent to the full-fledged best linear unbiased estimator and the entire asymptotics are carried out through studying this estimator. The current approach lends an intuitive statistical perspective to the entire estimation problem. [source]


    Double-Observer Line Transect Methods: Levels of Independence

    BIOMETRICS, Issue 1 2010
    Stephen T. Buckland
    Summary Double-observer line transect methods are becoming increasingly widespread, especially for the estimation of marine mammal abundance from aerial and shipboard surveys when detection of animals on the line is uncertain. The resulting data supplement conventional distance sampling data with two-sample mark–recapture data. Like conventional mark–recapture data, these have inherent problems for estimating abundance in the presence of heterogeneity. Unlike conventional mark–recapture methods, line transect methods use knowledge of the distribution of a covariate that affects detection probability (namely, distance from the transect line) in inference. This knowledge can be used to diagnose unmodeled heterogeneity in the mark–recapture component of the data. By modeling the covariance in detection probabilities with distance, we show how the estimation problem can be formulated in terms of different levels of independence. At one extreme, full independence is assumed, as in the Petersen estimator (which does not use distance data); at the other extreme, independence only occurs in the limit as detection probability tends to one. Between the two extremes, there is a range of models, including those currently in common use, which have intermediate levels of independence. We show how this framework can be used to provide more reliable analysis of double-observer line transect data. We test the methods by simulation, and by analysis of a dataset for which true abundance is known. We illustrate the approach through analysis of minke whale sightings data from the North Sea and adjacent waters. [source]
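
    For concreteness, the "full independence" end of the spectrum is the Petersen (Lincoln) estimator applied to double-observer counts, sketched below with made-up numbers; the paper's point is that intermediate levels of independence, informed by the distance data, are usually more defensible.

```python
# Petersen (Lincoln) estimator under full independence; the counts are invented.
n1 = 120        # detections by observer 1
n2 = 110        # detections by observer 2
m = 80          # duplicates detected by both observers

N_hat = n1 * n2 / m     # abundance estimate in the surveyed strip
p1_hat = m / n2         # estimated detection probability of observer 1
p2_hat = m / n1         # estimated detection probability of observer 2
print(round(N_hat), round(p1_hat, 2), round(p2_hat, 2))
```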


    Use of VFSA for resolution, sensitivity and uncertainty analysis in 1D DC resistivity and IP inversion

    GEOPHYSICAL PROSPECTING, Issue 5 2003
    Bimalendu B. Bhattacharya
    ABSTRACT We present results from the resolution and sensitivity analysis of 1D DC resistivity and IP sounding data using a non-linear inversion. The inversion scheme uses a theoretically correct Metropolis–Gibbs sampling technique and an approximate method using numerous models sampled by a global optimization algorithm called very fast simulated annealing (VFSA). VFSA has recently been found to be computationally efficient in several geophysical parameter estimation problems. Unlike conventional simulated annealing (SA), in VFSA the perturbations are generated from the model parameters according to a Cauchy-like distribution whose shape changes with each iteration. This results in an algorithm that converges much faster than standard SA. In the course of finding the optimal solution, VFSA samples several models from the search space. All these models can be used to obtain estimates of uncertainty in the derived solution. This method makes no assumptions about the shape of the a posteriori probability density function in the model space. Here, we carry out a VFSA-based sensitivity analysis with several synthetic and field sounding data sets for resistivity and IP. The resolution capability of the VFSA algorithm as seen from the sensitivity analysis is satisfactory. The interpretation of VES and IP sounding data by VFSA, incorporating resolution, sensitivity and uncertainty of layer parameters, would generally be more useful than conventional best-fit techniques. [source]
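
    A compact VFSA sketch on a toy two-parameter misfit is given below. The temperature-dependent, Cauchy-like perturbation follows the standard VFSA update, while the misfit function, parameter bounds, and cooling constants are arbitrary choices for illustration; a real run would evaluate the residuals of a resistivity/IP forward model instead.

```python
import numpy as np

rng = np.random.default_rng(7)

def misfit(m):
    # Toy objective with a minimum at (1.0, 2.0); stands in for data residuals.
    return (m[0] - 1.0) ** 2 + 10.0 * (m[1] - 2.0) ** 2

lower = np.array([-5.0, -5.0])
upper = np.array([5.0, 5.0])
m = rng.uniform(lower, upper)
best_m, best_E = m.copy(), misfit(m)
T0, decay = 1.0, 0.5

for k in range(1, 2001):
    T = T0 * np.exp(-decay * k ** (1.0 / len(m)))   # VFSA cooling schedule
    # Cauchy-like perturbation whose width shrinks with temperature.
    u = rng.random(len(m))
    step = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2.0 * u - 1.0) - 1.0)
    m_new = np.clip(m + step * (upper - lower), lower, upper)
    dE = misfit(m_new) - misfit(m)
    if dE < 0 or rng.random() < np.exp(-dE / T):    # Metropolis acceptance
        m = m_new
        if misfit(m) < best_E:
            best_m, best_E = m.copy(), misfit(m)

print(np.round(best_m, 3), round(best_E, 5))
```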


    Examination for adjoint boundary conditions in initial water elevation estimation problems

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2010
    T. Kurahashi
    Abstract I present here a method of generating a distribution of initial water elevation by employing the adjoint equation and finite element methods. A shallow-water equation is employed to simulate flow behavior. The adjoint equation method is utilized to obtain a distribution of initial water elevation for the observed water elevation. The finite element method, using the stabilized bubble function element, is used for spatial discretization, and the Crank–Nicolson method is used for temporal discretization. In addition to a method for optimally assimilating water elevation, a method is presented for determining adjoint boundary conditions. An examination using observation data that include noise is also carried out. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Directional leakage and parameter drift

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2006
    M. Hovd
    Abstract A new method for eliminating parameter drift in parameter estimation problems is proposed. Existing methods for eliminating parameter drift either work on a limited time horizon, restrict the parameter estimates to a range that has to be determined a priori, or introduce bias in the parameter estimates that degrades steady-state performance. The idea of the new method is to apply leakage only in the directions in parameter space in which the exciting signal is not informative. This avoids the problem of parameter bias associated with conventional leakage. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Guaranteed H∞ robustness bounds for Wiener filtering and prediction

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 1 2002
    P. Bolzern
    Abstract The paper deals with special classes of H∞ estimation problems, where the signal to be estimated coincides with the uncorrupted measured output. Explicit bounds on the difference between nominal and actual H∞ performance are obtained by means of elementary algebraic manipulations. These bounds are new in continuous-time filtering and discrete-time one-step-ahead prediction. As for discrete-time filtering, the paper provides new proofs that are alternative to existing derivations based on the Krein spaces formalism. In particular, some remarkable H∞ robustness properties of Kalman filters and predictors are highlighted. The usefulness of these results for improving the estimator design under a mixed H2/H∞ viewpoint is also discussed. The dualization of the analysis allows one to evaluate guaranteed H∞ robustness bounds for state-feedback regulators of systems affected by actuator disturbances. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Image signal-to-noise ratio estimation using Shape-Preserving Piecewise Cubic Hermite Autoregressive Moving Average model

    MICROSCOPY RESEARCH AND TECHNIQUE, Issue 10 2008
    K.S. Sim
    Abstract We propose to cascade the Shape-Preserving Piecewise Cubic Hermite model with the Autoregressive Moving Average (ARMA) interpolator; we call this technique the Shape-Preserving Piecewise Cubic Hermite Autoregressive Moving Average (SP2CHARMA) model. In a few test cases involving different images, this model is found to deliver an optimum solution for signal-to-noise ratio (SNR) estimation problems under different noise environments. The performance of the proposed estimator is compared with two existing methods: the autoregressive-based and autoregressive moving average estimators. Being more robust to noise, the SP2CHARMA estimator is significantly more efficient than the two existing methods. Microsc. Res. Tech., 2008. © 2008 Wiley-Liss, Inc. [source]


    Statistical basis for positive identification in forensic anthropology

    AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 1 2006
    Dawnie Wolfe Steadman
    Abstract Forensic scientists are often expected to present the likelihood of DNA identifications in US courts based on comparative population data, yet forensic anthropologists tend not to quantify the strength of an osteological identification. Because forensic anthropologists are trained first and foremost as physical anthropologists, they emphasize estimation problems at the expense of evidentiary problems, but this approach must be reexamined. In this paper, the statistical bases for presenting osteological and dental evidence are outlined, using a forensic case as a motivating example. A brief overview of Bayesian statistics is provided, and methods to calculate likelihood ratios for five aspects of the biological profile are demonstrated. This paper emphasizes the definition of appropriate reference samples and of the "population at large," and points out the conceptual differences between them. Several databases are introduced for both reference information and to characterize the "population at large," and new data are compiled to calculate the frequency of specific characters, such as age or fractures, within the "population at large." Despite small individual likelihood ratios for age, sex, and stature in the case example, the power of this approach is that, assuming each likelihood ratio is independent, the product rule can be applied. In this particular example, it is over three million times more likely to obtain the observed osteological and dental data if the identification is correct than if the identification is incorrect. This likelihood ratio is a convincing statistic that can support the forensic anthropologist's opinion on personal identity in court. Am J Phys Anthropol, 2006. © 2006 Wiley-Liss, Inc. [source]
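
    The product rule mentioned above is simple to apply once each likelihood ratio is in hand; the numbers below are invented for illustration and are not the case values from the article.

```python
# Combine independent likelihood ratios by multiplication (product rule).
# All values are hypothetical stand-ins for per-trait likelihood ratios.
lr_age, lr_sex, lr_stature, lr_dental, lr_fracture = 4.2, 1.9, 3.1, 250.0, 180.0
lr_combined = lr_age * lr_sex * lr_stature * lr_dental * lr_fracture
print(f"combined likelihood ratio ~ {lr_combined:,.0f}")
```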


    When Should Epidemiologic Regressions Use Random Coefficients?

    BIOMETRICS, Issue 3 2000
    Sander Greenland
    Summary. Regression models with random coefficients arise naturally in both frequentist and Bayesian approaches to estimation problems. They are becoming widely available in standard computer packages under the headings of generalized linear mixed models, hierarchical models, and multilevel models. I here argue that such models offer a more scientifically defensible framework for epidemiologic analysis than the fixed-effects models now prevalent in epidemiology. The argument invokes an antiparsimony principle attributed to L. J. Savage, which is that models should be rich enough to reflect the complexity of the relations under study. It also invokes the countervailing principle that you cannot estimate anything if you try to estimate everything (often used to justify parsimony). Regression with random coefficients offers a rational compromise between these principles as well as an alternative to analyses based on standard variable-selection algorithms and their attendant distortion of uncertainty assessments. These points are illustrated with an analysis of data on diet, nutrition, and breast cancer. [source]
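
    A small random-coefficient (mixed-model) sketch of the partial pooling the article advocates: each group gets its own intercept and slope drawn around population-level values, and a mixed model recovers both levels. The data are simulated, not the diet and breast-cancer data analysed in the paper, and statsmodels' MixedLM is used as a convenient stand-in for whatever software one prefers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate grouped data with group-specific (random) intercepts and slopes.
rng = np.random.default_rng(8)
rows = []
for g in range(12):
    b0 = 1.0 + rng.normal(0, 0.5)          # group-specific intercept
    b1 = 0.3 + rng.normal(0, 0.1)          # group-specific slope
    x = rng.uniform(0, 10, 25)
    y = b0 + b1 * x + rng.normal(0, 1.0, 25)
    rows.append(pd.DataFrame({"g": g, "x": x, "y": y}))
data = pd.concat(rows, ignore_index=True)

# Random-coefficient model: fixed effects for intercept and slope, plus
# group-level random deviations for both (re_formula="~x").
model = smf.mixedlm("y ~ x", data, groups=data["g"], re_formula="~x")
fit = model.fit()
print(fit.summary())
```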