Proposed Estimators (proposed + estimator)
Selected Abstracts

Estimating lifetime or episode-of-illness costs under censoring
HEALTH ECONOMICS, Issue 9 2010, Anirban Basu

Abstract: Many analyses of healthcare costs involve use of data with varying periods of observation and right censoring of cases before death or at the end of the episode of illness. The prominence of observations with no expenditure for some short periods of observation and the extreme skewness typical of these data raise concerns about the robustness of estimators based on inverse probability weighting (IPW) with the survival from censoring probabilities. These estimators also cannot distinguish between the effects of covariates on survival and intensity of utilization, which jointly determine costs. In this paper, we propose a new estimator that extends the class of two-part models to deal with random right censoring and with continuous death and censoring times. Our model also addresses issues about the time to death in these analyses and separates the survival effects from the intensity effects. Using simulations, we compare our proposed estimator to the inverse probability estimator, which shows bias when censoring is large and covariates affect survival. We find our estimator to be unbiased and also more efficient for these designs. We apply our method and compare it with the IPW method using data from the Medicare–SEER files on prostate cancer. Copyright © 2010 John Wiley & Sons, Ltd. [source]

Approximate channel identification via , -signed correlation
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 4 2002, J. Balakrishnan

Abstract: A method of approximate channel identification is proposed that is based on a simplification of the correlation estimator. Despite the numerical simplification (no multiplications or additions are required, only comparisons and an accumulator), the performance of the proposed estimator is not significantly worse than that of the standard correlation estimator.
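The comparison-and-accumulate idea behind such simplified correlators can be sketched as follows. This is a generic sign-based correlator illustrating the flavor of the estimator, not the paper's exact algorithm; the function name and the BPSK test signal are illustrative assumptions.

```python
import numpy as np

def sign_correlate(x, y, taps):
    """Crude channel estimate: for each lag k, accumulate y[n] with a
    sign flip decided by comparing x[n-k] to zero. Apart from the final
    normalization, this needs only comparisons and an accumulator."""
    n = len(y)
    h = np.zeros(taps)
    for k in range(taps):
        acc = 0.0
        for i in range(k, n):
            acc += y[i] if x[i - k] >= 0 else -y[i]
        h[k] = acc / (n - k)
    return h
```

For a ±1 (BPSK) input, the sign of x equals x itself, so the sketch reduces to the standard correlation estimator and recovers the channel taps up to estimation noise.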
A free (user-selectable) parameter moves 'smoothly' from a situation with small sum-squared channel estimation error but hard-to-identify channel peaks, to one with a larger sum-squared estimation error but easy-to-identify channel peaks. The proposed estimator is shown to be biased and its behaviour is analysed in a number of situations. Applications of the proposed estimator to sparsity detection, symbol timing recovery and the initialization of blind equalizers are suggested. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Sliding-window neural state estimation in a power plant heater line
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 8 2001, A. Alessandri

Abstract: The state estimation problem for a section of a real power plant is addressed by means of a recently proposed sliding-window neural state estimator. The complexity and the nonlinearity of the considered application prevent us from successfully using standard techniques such as Kalman filtering. The statistics of the distribution of the initial state and of the noises are assumed to be unknown, and the estimator is designed by minimizing a given generalized least-squares cost function. The following approximations are enforced: (i) the state estimator is a finite-memory one; (ii) the estimation functions are given fixed structures in which a certain number of parameters have to be optimized (multilayer feedforward neural networks are chosen from among various possible nonlinear approximators); (iii) the algorithms for optimizing the parameters (i.e., the network weights) rely on a stochastic approximation. Extensive simulation results on a complex model of a part of a real power plant are reported to compare the behaviour of the proposed estimator with the extended Kalman filter. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Minimum α-divergence estimation for ARCH models
JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2006, S.
Ajay Chandra

Abstract: This paper considers minimum α-divergence estimation for a class of ARCH(p) models. For these models with unknown volatility parameters, the exact form of the innovation density is supposed to be unknown in detail but is thought to be close to members of some parametric family. To approximate such a density, we first construct an estimator for the unknown volatility parameters using the conditional least squares estimator given by Tjøstheim [Stochastic Processes and their Applications (1986) Vol. 21, pp. 251–273]. Then, a nonparametric kernel density estimator is constructed for the innovation density based on the estimated residuals. Using techniques of minimum Hellinger distance estimation for stochastic models and the residual empirical process from an ARCH(p) model given by Beran [Annals of Statistics (1977) Vol. 5, pp. 445–463] and Lee and Taniguchi [Statistica Sinica (2005) Vol. 15, pp. 215–234], respectively, it is shown that the proposed estimator is consistent and asymptotically normal. Moreover, a robustness measure for the score of the estimator is introduced. The asymptotic efficiency and robustness of the estimator are illustrated by simulations. The proposed estimator is also applied to daily stock returns of Dell Corporation. [source]

Image signal-to-noise ratio estimation using the Shape-Preserving Piecewise Cubic Hermite Autoregressive Moving Average model
MICROSCOPY RESEARCH AND TECHNIQUE, Issue 10 2008, K.S. Sim

Abstract: We propose to cascade the Shape-Preserving Piecewise Cubic Hermite model with the Autoregressive Moving Average (ARMA) interpolator; we call this technique the Shape-Preserving Piecewise Cubic Hermite Autoregressive Moving Average (SP2CHARMA) model. In a few test cases involving different images, this model is found to deliver an optimum solution for signal-to-noise ratio (SNR) estimation problems under different noise environments.
The performance of the proposed estimator is compared with two existing methods: the autoregressive-based and autoregressive moving average estimators. Being more robust to noise, the SP2CHARMA estimator is significantly more efficient than the two existing methods. Microsc. Res. Tech., 2008. © 2008 Wiley-Liss, Inc. [source]

Equilibrated error estimators for discontinuous Galerkin methods
NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 5 2008, Sarah Cochez-Dhondt

Abstract: We consider some diffusion problems in domains of ℝ^d, d = 2 or 3, approximated by a discontinuous Galerkin method with polynomials of any degree. We propose a new a posteriori error estimator based on H(div)-conforming elements. It is shown that this estimator gives rise to an upper bound where the constant is one up to higher-order terms. The lower bound is also established, with a constant depending on the aspect ratio of the mesh; the dependence with respect to the coefficients is also traced. The reliability and efficiency of the proposed estimator are confirmed by some numerical tests. © 2007 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2008 [source]

Identifying the time of polynomial drift in the mean of autocorrelated processes
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2010, Marcus B. Perry

Abstract: Control charts are used to detect changes in a process. Once a change is detected, knowledge of the change point would simplify the search for and identification of the special cause. Consequently, having an estimate of the process change point following a control chart signal would be useful to process engineers. This paper addresses change point estimation for covariance-stationary autocorrelated processes where the mean drifts deterministically with time. For example, the mean of a chemical process might drift linearly over time as a result of a constant pressure leak.
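As a rough illustration of this setting (not the authors' MLE, which also accounts for the ARMA autocorrelation structure), a least-squares change point estimate for an independent-noise process whose mean starts drifting linearly can be sketched as:

```python
import numpy as np

def drift_changepoint(x):
    """For each candidate change point tau, fit a constant mean to
    x[:tau] and a linear drift to x[tau:], and pick the tau minimizing
    the total sum of squared errors (the Gaussian MLE under independent
    noise with a common variance)."""
    n = len(x)
    best_tau, best_sse = None, np.inf
    for tau in range(2, n - 2):
        pre = x[:tau]
        sse_pre = np.sum((pre - pre.mean()) ** 2)
        t = np.arange(n - tau, dtype=float)
        post = x[tau:]
        A = np.vstack([np.ones_like(t), t]).T       # intercept + slope
        coef, *_ = np.linalg.lstsq(A, post, rcond=None)
        sse_post = np.sum((post - A @ coef) ** 2)
        if sse_pre + sse_post < best_sse:
            best_sse, best_tau = sse_pre + sse_post, tau
    return best_tau
```

With a clear linear drift and modest noise, the minimizer lands at or very near the true change point.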
The goal of this paper is to derive and evaluate a maximum likelihood estimator (MLE) for the time of polynomial drift in the mean of autocorrelated processes. It is assumed that the behavior of the process mean over time is adequately modeled by a kth-order polynomial trend model. Further, it is assumed that the autocorrelation structure is adequately modeled by the general (stationary and invertible) mixed autoregressive-moving-average model. The estimator is intended to be applied to data obtained following a genuine control chart signal in efforts to help pinpoint the root cause of process change. Application of the estimator is demonstrated using a simulated data set. The performance of the estimator is evaluated through Monte Carlo simulation studies for the k=1 case and across several processes yielding various levels of positive autocorrelation. Results suggest that the proposed estimator provides process engineers with an accurate and useful estimate of the last sample obtained from the unchanged process. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Selection Bias and Continuous-Time Duration Models: Consequences and a Proposed Solution
AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 1 2006, Frederick J. Boehmke

Abstract: This article analyzes the consequences of nonrandom sample selection for continuous-time duration analyses and develops a new estimator to correct for it when necessary. We conduct a series of Monte Carlo analyses that estimate common duration models as well as our proposed duration model with selection. These simulations show that ignoring sample selection issues can lead to biased parameter estimates, including the appearance of (nonexistent) duration dependence. In addition, our proposed estimator is found to be superior in root mean-square error terms when nontrivial amounts of selection are present.
Finally, we provide an empirical application of our method by studying whether self-selectivity is a problem for studies of leaders' survival during and following militarized conflicts. [source]

Specification and estimation of social interaction models with network structures
THE ECONOMETRICS JOURNAL, Issue 2 2010, Lung-fei Lee

Summary: This paper considers the specification and estimation of social interaction models with network structures and the presence of endogenous, contextual and correlated effects. With macro group settings, group-specific fixed effects are also incorporated in the model. The network structure provides information on the identification of the various interaction effects. We propose a quasi-maximum likelihood approach for the estimation of the model. We derive the asymptotic distribution of the proposed estimator, and provide Monte Carlo evidence on its small sample performance. [source]

A wavelet solution to the spurious regression of fractionally differenced processes
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2003, Yanqin Fan

Abstract: In this paper we propose to overcome the problem of spurious regression between fractionally differenced processes by applying the discrete wavelet transform (DWT) to both processes and then estimating the regression in the wavelet domain. The DWT is known to approximately decorrelate heavily autocorrelated processes and, unlike applying a first-difference filter, involves a recursive two-step filtering and downsampling procedure. We prove the asymptotic normality of the proposed estimator and demonstrate via simulation its efficacy in finite samples. Copyright © 2003 John Wiley & Sons, Ltd. [source]

KERNEL DENSITY ESTIMATION WITH MISSING DATA AND AUXILIARY VARIABLES
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2009, Suzanne R. Dubnicka

Summary: In most parametric statistical analyses, knowledge of the distribution of the response variable, or of the errors, is important.
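The decorrelate-then-regress idea in the Fan wavelet abstract above can be sketched with a single level of the Haar DWT — a deliberately minimal stand-in for the full recursive transform; the function names are illustrative, not from the paper:

```python
import numpy as np

def haar_details(x):
    """One level of the Haar DWT: detail coefficients
    (x[2i] - x[2i+1]) / sqrt(2), dropping any trailing odd sample."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2
    return (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)

def wavelet_domain_slope(y, x):
    """Regress the detail coefficients of y on those of x (no intercept),
    mimicking estimation of the regression in the wavelet domain."""
    dx, dy = haar_details(x), haar_details(y)
    return float(dx @ dy / (dx @ dx))
```

Because the Haar details of y = 3x are exactly three times those of x, the wavelet-domain slope recovers the true coefficient in the noiseless case.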
As this distribution is not typically known with certainty, one might initially construct a histogram or estimate the density of the variable of interest to gain insight regarding the distribution and its characteristics. However, when the response variable is incomplete, a histogram will only provide a representation of the distribution of the observed data. In the AIDS Clinical Trial Study protocol 175, interest lies in the difference in CD4 counts from baseline to final follow-up, but CD4 counts collected at final follow-up were incomplete. A method is therefore proposed for estimating the density of an incomplete response variable when auxiliary data are available. The proposed estimator is based on the Horvitz–Thompson estimator, and the propensity scores are estimated nonparametrically. Simulation studies indicate that the proposed estimator performs well. [source]

Cox Regression in Nested Case–Control Studies with Auxiliary Covariates
BIOMETRICS, Issue 2 2010, Mengling Liu

Summary: The nested case–control (NCC) design is a popular sampling method in large epidemiological studies for its cost effectiveness in investigating the temporal relationship of diseases with environmental exposures or biological precursors. Thomas' maximum partial likelihood estimator is commonly used to estimate the regression parameters in Cox's model for NCC data. In this article, we consider a situation in which failure/censoring information and some crude covariates are available for the entire cohort in addition to the NCC data, and we propose an improved estimator that is asymptotically more efficient than Thomas' estimator. We adopt a projection approach that, heretofore, has only been employed in situations of random validation sampling, and show that it can be well adapted to NCC designs, where the sampling scheme is a dynamic process and is not independent for controls.
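The Horvitz–Thompson-weighted kernel density idea in the Dubnicka abstract above can be sketched as follows, with a Gaussian kernel and weights equal to inverse observation probabilities; all names are illustrative, and in practice the probabilities would themselves be estimated from auxiliary data:

```python
import numpy as np

def ipw_kde(y_obs, obs_prob, grid, h):
    """Kernel density estimate for an incompletely observed response:
    each observed value is weighted by 1/P(observed), Horvitz-Thompson
    style, so mass lost to missingness is redistributed correctly."""
    w = 1.0 / np.asarray(obs_prob, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to 1
    u = (grid[:, None] - np.asarray(y_obs)[None, :]) / h
    kern = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return (kern * w[None, :]).sum(axis=1) / h
```

With a single fully observed point and bandwidth 1, the sketch reproduces a standard normal density centered at that point.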
Under certain conditions, consistency and asymptotic normality of the proposed estimator are established, and a consistent variance estimator is also developed. Furthermore, a simplified approximate estimator is proposed when the disease is rare. Extensive simulations are conducted to evaluate the finite sample performance of our proposed estimators and to compare their efficiency with Thomas' estimator and other competing estimators. Moreover, sensitivity analyses are conducted to demonstrate the behavior of the proposed estimator when model assumptions are violated, and we find that the biases are reasonably small in realistic situations. We further demonstrate the proposed method with data from studies on Wilms' tumor. [source]

Design and Inference for Cancer Biomarker Study with an Outcome and Auxiliary-Dependent Subsampling
BIOMETRICS, Issue 2 2010, Xiaofei Wang

Summary: In cancer research, it is important to evaluate the performance of a biomarker (e.g., molecular, genetic, or imaging) that correlates with patients' prognosis or predicts patients' response to treatment in a large prospective study. Due to overall budget constraints and the high cost associated with bioassays, investigators often have to select a subset of all registered patients for biomarker assessment. To detect a potentially moderate association between the biomarker and the outcome, investigators need to decide how to select a subset of a fixed size such that study efficiency is enhanced. We show that, instead of drawing a simple random sample from the study cohort, greater efficiency can be achieved by allowing the selection probability to depend on the outcome and an auxiliary variable; we refer to such a sampling scheme as outcome- and auxiliary-dependent subsampling (OADS).
This article is motivated by the need to analyze data from a lung cancer biomarker study that adopts the OADS design to assess epidermal growth factor receptor (EGFR) mutations as a predictive biomarker for whether a subject responds to a greater extent to EGFR inhibitor drugs. We propose an estimated maximum-likelihood method that accommodates the OADS design and utilizes all observed information, especially that contained in the likelihood score of EGFR mutations (an auxiliary variable of EGFR mutations) that is available for all patients. We derive the asymptotic properties of the proposed estimator and evaluate its finite sample properties via simulation. We illustrate the proposed method with a data example. [source]

Exploiting Gene-Environment Independence for Analysis of Case–Control Studies: An Empirical Bayes-Type Shrinkage Estimator to Trade Off between Bias and Efficiency
BIOMETRICS, Issue 3 2008, Bhramar Mukherjee

Summary: Standard prospective logistic regression analysis of case–control data often leads to very imprecise estimates of gene-environment interactions due to small numbers of cases or controls in cells of crossing genotype and exposure. In contrast, under the assumption of gene-environment independence, modern "retrospective" methods, including the "case-only" approach, can estimate the interaction parameters much more precisely, but they can be seriously biased when the underlying assumption of gene-environment independence is violated. In this article, we propose a novel empirical Bayes-type shrinkage estimator to analyze case–control data that can relax the gene-environment independence assumption in a data-adaptive fashion. In the special case, involving a binary gene and a binary exposure, the method leads to an estimator of the interaction log odds ratio parameter in a simple closed form that corresponds to a weighted average of the standard case-only and case–control estimators.
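The weighted-average structure can be illustrated schematically. The weight below is a generic data-adaptive choice chosen for illustration, not the exact closed form derived in the article: it leans on the efficient case-only estimate when the two estimates agree and falls back to the robust case–control estimate when they disagree.

```python
def eb_shrinkage(beta_caseonly, beta_casecontrol, var_casecontrol):
    """Schematic empirical-Bayes-type combination of the case-only and
    case-control interaction estimates. The weight shrinks toward the
    case-only estimate when the squared disagreement is small relative
    to the case-control sampling variance (illustrative weight only)."""
    d2 = (beta_casecontrol - beta_caseonly) ** 2
    w = var_casecontrol / (var_casecontrol + d2)
    return w * beta_caseonly + (1.0 - w) * beta_casecontrol
```

When the two estimates coincide, the combination returns that common value; when they disagree strongly, it stays close to the case–control estimate.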
We also describe a general approach for deriving the new shrinkage estimator and its variance within the retrospective maximum-likelihood framework developed by Chatterjee and Carroll (2005, Biometrika 92, 399–418). Both simulated and real data examples suggest that the proposed estimator strikes a balance between bias and efficiency depending on the true nature of the gene-environment association and the sample size for a given study. [source]

Catch Estimation in the Presence of Declining Catch Rate Due to Gear Saturation
BIOMETRICS, Issue 1 2001, Philip C. Dauk

Summary: One strategy for estimating total catch is to employ two separate surveys that independently estimate total fishing effort and catch rate, with the estimator for total catch formed by their product. Survey designs for estimating catch rate often involve interviewing the fishermen during their fishing episodes. Such roving designs result in incomplete episode data and characteristically have employed a model in which the catch rate is assumed to be constant over time. This article extends the problem to that of estimating total catch in the presence of a declining catch rate due, e.g., to gear saturation. Using a gill net fishery as an example, a mean-of-ratios type of estimator for the catch rate, together with its variance estimator, is developed. Their performance is examined using simulations, with special attention given to effects of restrictions on the roving survey window. Finally, data from a Fraser River gill net fishery are used to illustrate the use of the proposed estimator and to compare results with those from an estimator based on a constant catch rate. [source]

Panel Data Discrete Choice Models with Lagged Dependent Variables
ECONOMETRICA, Issue 4 2000, Bo E.
Honoré

In this paper, we consider identification and estimation in panel data discrete choice models when the explanatory variable set includes strictly exogenous variables, lags of the endogenous dependent variable, as well as unobservable individual-specific effects. For the binary logit model with the dependent variable lagged only once, Chamberlain (1993) gave conditions under which the model is not identified. We present a stronger set of conditions under which the parameters of the model are identified. The identification result suggests estimators of the model, and we show that these are consistent and asymptotically normal, although their rate of convergence is slower than the inverse of the square root of the sample size. We also consider identification in the semiparametric case where the logit assumption is relaxed. We propose an estimator in the spirit of the conditional maximum score estimator (Manski (1987)) and we show that it is consistent. In addition, we discuss an extension of the identification result to multinomial discrete choice models, and to the case where the dependent variable is lagged twice. Finally, we present some Monte Carlo evidence on the small sample performance of the proposed estimators for the binary response model. [source]

Ratio estimators in adaptive cluster sampling
ENVIRONMETRICS, Issue 6 2007, Arthur L. Dryver

Abstract: In most surveys, data are collected on many items rather than just the one variable of primary interest. Making the most use of the information collected is an issue of both practical and theoretical interest. Ratio estimates of the population mean or total are often more efficient. Unfortunately, ratio estimation is straightforward with simple random sampling, but this is often not the case when more complicated sampling designs are used, such as adaptive cluster sampling. A serious concern with ratio estimates introduced with many complicated designs is the lack of independence, a necessary assumption.
In this article, we propose two new ratio estimators under adaptive cluster sampling, one of which is unbiased for adaptive cluster sampling designs. The efficiencies of the new estimators relative to existing unbiased estimators, which do not utilize the auxiliary information, for adaptive cluster sampling and to the conventional ratio estimator under simple random sampling without replacement are compared in this article. Related results show that the proposed estimators can be considered a robust alternative to the conventional ratio estimator, especially when the correlation between the variable of interest and the auxiliary variable is not high enough for the conventional ratio estimator to have satisfactory performance. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Estimation of population size for additive–multiplicative models based on continuous-time recapture experiments
ENVIRONMETRICS, Issue 8 2002, Yan Wang

Abstract: An additive–multiplicative model and a Horvitz–Thompson-type estimator are proposed to estimate the unknown population size in a continuous-time recapture experiment. The proposed inference about the model parameters of the capture intensity is similar to that of Lin and Ying (1995). However, the population size in a recapture experiment is not known and a modification is needed. Simulation results are given to assess the properties of the proposed estimators and the associated inference procedures. A set of recapture data for deer mice is presented to demonstrate the performance of the estimating procedure. Copyright © 2002 John Wiley & Sons, Ltd. [source]

The estimation of sibling genetic risk parameters revisited
GENETIC EPIDEMIOLOGY, Issue 4 2004, Guohua Zou

Abstract: This report points out that some sibling genetic risk parameters can be regarded as the ratios of the characteristic values in the ascertainment subpopulation. Based on this observation, we reconsider Olson and Cordell's ([2000] Genet. Epidemiol.
18:217–235) and Cordell and Olson's ([2000] Genet. Epidemiol. 18:307–321) estimators, and re-derive these estimators. Furthermore, we provide closed-form variance estimators. Simulation results suggest that our proposed estimators perform very well, and that single ascertainment may be better than complete ascertainment for estimating these genetic parameters. © 2004 Wiley-Liss, Inc. [source]

PAIRWISE DIFFERENCE ESTIMATION WITH NONPARAMETRIC CONTROL VARIABLES
INTERNATIONAL ECONOMIC REVIEW, Issue 4 2007, Andres Aradillas-Lopez

This article extends the pairwise difference estimators for various semilinear limited dependent variable models proposed by Honoré and Powell (Identification and Inference in Econometric Models: Essays in Honor of Thomas Rothenberg. Cambridge: Cambridge University Press, 2005) to permit the regressor appearing in the nonparametric component to itself depend upon a conditional expectation that is nonparametrically estimated. This permits the estimation approach to be applied to nonlinear models with sample selectivity and/or endogeneity, in which a "control variable" for selectivity or endogeneity is nonparametrically estimated. We develop the relevant asymptotic theory for the proposed estimators and illustrate the theory by deriving the asymptotic distribution of the estimator for the partially linear logit model. [source]

Assessing accuracy of a continuous screening test in the presence of verification bias
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2005, Todd A. Alonzo

Summary: In studies to assess the accuracy of a screening test, definitive disease assessment is often too invasive or expensive to be ascertained on all study subjects. Although it may be more ethical or cost effective to ascertain the true disease status at a higher rate in study subjects for whom the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design.
This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and the area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias correction estimators proposed are applied to data from a study of screening tests for neonatal hearing loss. [source]

Assessment of a capability index sensitive to skewness
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 4 2001, P. C. Nahar

Abstract: For many quality characteristics, such as circularity, cylindricity, straightness and flatness, positive skewness in the inspection data is the norm and, in fact, is desirable. Summarizing process performance using such data in conjunction with capability indices has recently received a considerable amount of attention, with new indices being proposed and compared for usefulness and accuracy. This paper is intended to contribute to this growing discussion, and to add a unique focus. In particular, this investigation concentrates on one form of a neoclassical index, the Cs index, originally proposed to be sensitive to skewness and to decrease in value as the skewness increases in the underlying distribution of the data. In other words, 'skewness is badness'. Looking at this index from an altered perspective, the possibility is considered that this index could serve a useful purpose in summarizing process performance for such non-normal processes by merely changing its interpretation or slightly changing its form.
Hence, actual data from circularity measurements are used to identify a relevant group of distributions, and the accuracy of Cs, along with that of its modified version, is then investigated for this group of distributions. In particular, this investigation includes several Rayleigh and gamma distributions for various sample sizes and reports on the bias of the proposed estimators. These findings indicate that such a modified index has some useful attributes in reflecting process performance, with respect to the percentage of non-conformance and accuracy for relatively large samples. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Estimation methods for time-dependent AUC models with survival data
THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2010, Hung Hung

Abstract: The performance of clinical tests for disease screening is often evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). Recent developments have extended the traditional setting to the AUC with binary time-varying failure status. Without considering covariates, our first theme is to propose a simple and easily computed nonparametric estimator for the time-dependent AUC. Moreover, we use generalized linear models with time-varying coefficients to characterize the time-dependent AUC as a function of covariate values. Corresponding estimation procedures are proposed to estimate the parameter functions of interest. The derived limiting Gaussian processes and the estimated asymptotic variances enable us to construct approximate confidence regions for the AUCs. The finite sample properties of our proposed estimators and inference procedures are examined through extensive simulations. An analysis of the AIDS Clinical Trials Group (ACTG) 175 data is further presented to show the applicability of the proposed methods.
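For reference, the static empirical AUC that such time-dependent estimators generalize is the Mann–Whitney statistic: the fraction of (diseased, healthy) pairs that the marker ranks concordantly, counting ties as one half. A minimal sketch:

```python
import numpy as np

def empirical_auc(pos, neg):
    """Mann-Whitney form of the empirical AUC: compare every diseased
    (pos) marker value against every healthy (neg) one, scoring 1 for a
    concordant pair and 1/2 for a tie."""
    pos = np.asarray(pos, dtype=float)
    neg = np.asarray(neg, dtype=float)
    diff = pos[:, None] - neg[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())
```

With pos = [2, 3] and neg = [1, 2], three of the four pairs are concordant and one is tied, giving an AUC of 0.875.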
The Canadian Journal of Statistics 38: 8–26; 2010 © 2009 Statistical Society of Canada [source]

A unified approach to estimation of nonlinear mixed effects and Berkson measurement error models
THE CANADIAN JOURNAL OF STATISTICS, Issue 2 2007, Liqun Wang

Abstract: Mixed effects models and Berkson measurement error models are widely used. They share features which the author uses to develop a unified estimation framework. He deals with models in which the random effects (or measurement errors) have a general parametric distribution, whereas the random regression coefficients (or unobserved predictor variables) and error terms have nonparametric distributions.
He proposes a second-order least squares estimator and a simulation-based estimator based on the first two moments of the conditional response variable given the observed covariates. He shows that both estimators are consistent and asymptotically normally distributed under fairly general conditions. The author also reports Monte Carlo simulation studies showing that the proposed estimators perform satisfactorily for relatively small sample sizes. Compared to the likelihood approach, the proposed methods are computationally feasible and do not rely on the normality assumption for random effects or other variables in the model.
[source] Robust modelling of DTARCH models THE ECONOMETRICS JOURNAL, Issue 2 2005 Yer Van Hui Summary Autoregressive conditional heteroscedastic (ARCH) models and their extensions are widely used for modelling volatility in financial time series. One of the variants, the double-threshold autoregressive conditional heteroscedastic (DTARCH) model, has been proposed to model a conditional mean and a conditional variance that are piecewise linear. The DTARCH model is also useful for modelling conditional heteroscedasticity with nonlinear structures such as asymmetric cycles, jump resonance and amplitude-frequency dependence. Since asset returns often display heavy tails and outliers, it is worth studying robust DTARCH modelling without specific distributional assumptions. This paper studies DTARCH structures for the conditional scale instead of the conditional variance. We examine L1-estimation of the DTARCH model and derive limiting distributions for the proposed estimators. A robust portmanteau statistic based on the L1-norm fit is constructed to test model adequacy. This approach captures various nonlinear phenomena and stylized facts with desirable robustness. Simulations show that the L1-estimators are robust against innovation distributions and accurate for moderate sample sizes, and that the proposed test is not only robust against innovation distributions but also powerful in discriminating between delay parameters and ARCH models. It is noted that the quasi-likelihood modelling approach used for ARCH models is inappropriate for DTARCH models in the presence of outliers and heavy-tailed innovations. [source] COVARIATE-ADJUSTED REGRESSION FOR LONGITUDINAL DATA INCORPORATING CORRELATION BETWEEN REPEATED MEASUREMENTS AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2009 Danh V.
Nguyen Summary We propose an estimation method that incorporates the correlation/covariance structure between repeated measurements in covariate-adjusted regression models for distorted longitudinal data. In this distorted-data setting, neither the longitudinal response nor the (possibly time-varying) predictors are directly observable. The unobserved response and predictors are assumed to be distorted/contaminated by unknown functions of a common observable confounder. The proposed estimation methodology adjusts for the distortion effects both in estimating the covariance structure and in estimating the regression parameters by generalized least squares. The finite-sample performance of the proposed estimators is studied numerically by means of simulations, and their consistency and convergence rates are also established. The proposed method is illustrated with an application to data from a longitudinal study of cognitive and social development in children. [source] LOWER BOUNDS TO THE POPULATION SIZE WHEN CAPTURE PROBABILITIES VARY OVER INDIVIDUALS AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2008 Chang Xuan Mao Summary The problem of estimating population sizes has a wide range of applications. Although the size is non-identifiable when a population is heterogeneous, it is often useful to estimate lower bounds and to construct lower confidence limits. A sequence of lower bounds, including the well-known Chao lower bound, is proposed. The bounds have closed-form expressions and are estimated by the method of moments or by maximum likelihood. Real examples from epidemiology, wildlife management and ecology are investigated. Simulation studies are used to assess the proposed estimators.
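The best-known member of such a sequence is the classical Chao lower bound, which uses only the number of individuals captured exactly once (f1) and exactly twice (f2). A minimal sketch of that single bound (the function name and input format are our own, not from the paper, which proposes a whole sequence of refinements):

```python
from collections import Counter

def chao_lower_bound(capture_counts):
    """Chao lower bound on population size.

    capture_counts: one entry per *observed* individual, giving how many
    times that individual was captured (each entry >= 1).
    """
    freq = Counter(capture_counts)
    s_obs = len(capture_counts)      # distinct individuals ever captured
    f1 = freq.get(1, 0)              # captured exactly once
    f2 = freq.get(2, 0)              # captured exactly twice
    if f2 == 0:
        # bias-corrected variant avoids division by zero
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# Six observed individuals: three singletons, two doubletons, one triple
print(chao_lower_bound([1, 1, 1, 2, 2, 3]))  # 6 + 3*3/(2*2) = 8.25
```

Intuitively, many singletons relative to doubletons signal many individuals never captured at all, which pushes the bound above the observed count.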
[source] MOMENT ESTIMATION IN THE CLASS OF BISEXUAL BRANCHING PROCESSES WITH POPULATION-SIZE DEPENDENT MATING AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2007 Miguel González Summary This paper concerns the estimation of the offspring mean vector, the covariance matrix and the growth rate in the class of bisexual branching processes with population-size dependent mating. For the proposed estimators, some unconditional moments and some moments conditional on non-extinction are determined, and asymptotic properties are established. Confidence intervals are obtained and, as an illustration, a simulation example is given. [source] Cox Regression in Nested Case-Control Studies with Auxiliary Covariates BIOMETRICS, Issue 2 2010 Mengling Liu Summary The nested case-control (NCC) design is a popular sampling method in large epidemiological studies, prized for its cost-effectiveness in investigating the temporal relationship of diseases with environmental exposures or biological precursors. Thomas' maximum partial likelihood estimator is commonly used to estimate the regression parameters in Cox's model for NCC data. In this article, we consider a situation in which failure/censoring information and some crude covariates are available for the entire cohort in addition to the NCC data, and we propose an improved estimator that is asymptotically more efficient than Thomas' estimator. We adopt a projection approach that has heretofore only been employed in situations of random validation sampling, and we show that it can be well adapted to NCC designs, where the sampling scheme is a dynamic process and is not independent across controls. Under certain conditions, consistency and asymptotic normality of the proposed estimator are established, and a consistent variance estimator is also developed. Furthermore, a simplified approximate estimator is proposed when the disease is rare.
Extensive simulations are conducted to evaluate the finite-sample performance of our proposed estimators and to compare their efficiency with that of Thomas' estimator and other competing estimators. Moreover, sensitivity analyses are conducted to demonstrate the behavior of the proposed estimator when model assumptions are violated, and we find that the biases are reasonably small in realistic situations. We further demonstrate the proposed method with data from studies on Wilms' tumor. [source] Marginal Hazards Regression for Retrospective Studies within Cohort with Possibly Correlated Failure Time Data BIOMETRICS, Issue 2 2009 Sangwook Kang Summary A retrospective dental study was conducted to evaluate the degree to which pulpal involvement affects tooth survival. Because of the clustering of teeth, the survival times within each subject may be correlated, so the conventional method for case-control studies cannot be applied directly. In this article, we propose a marginal model approach for this type of correlated case-control-within-cohort data. Weighted estimating equations are proposed for estimating the regression parameters, and different types of weights are considered for improving efficiency. Asymptotic properties of the proposed estimators are investigated and their finite-sample properties are assessed via simulation studies. The proposed method is applied to the aforementioned dental study. [source]
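Thomas' estimator, the baseline both abstracts above compare against, maximizes a conditional-logistic-style partial likelihood in which each case is compared only with its sampled controls. A toy one-covariate sketch; the data values are invented for illustration, and a real NCC analysis would sample controls from the risk set at each observed failure time:

```python
import numpy as np
from scipy.optimize import minimize

# Each row: covariate of the case first, then its sampled controls
# (hypothetical values; one matched set per observed failure time).
risk_sets = [
    np.array([1.2, 0.3, -0.5]),
    np.array([0.8, 1.0, 0.4]),
    np.array([1.5, 0.2, 0.9]),
]

def neg_log_partial_lik(beta):
    """Thomas' partial likelihood: case term minus log-sum over its set."""
    ll = 0.0
    for x in risk_sets:
        eta = beta * x                   # linear predictor for the set
        ll += eta[0] - np.log(np.sum(np.exp(eta)))
    return -ll

res = minimize(neg_log_partial_lik, x0=np.array([0.0]), method="BFGS")
beta_hat = res.x[0]                      # log hazard ratio estimate
```

Since the case carries the largest covariate in two of the three sets, the fitted log hazard ratio comes out positive; the improved estimators in the abstracts gain efficiency by additionally exploiting full-cohort failure/censoring information or weighting, which this sketch does not attempt.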