Likelihood Ratio Statistic
Selected Abstracts

Testing Parameters in GMM Without Assuming that They Are Identified
ECONOMETRICA, Issue 4 2005. Frank Kleibergen.
We propose a generalized method of moments (GMM) Lagrange multiplier statistic, i.e., the K statistic, that uses a Jacobian estimator based on the continuous updating estimator that is asymptotically uncorrelated with the sample average of the moments. Its asymptotic χ2 distribution therefore holds under a wider set of circumstances, like weak instruments, than the standard full rank case for the expected Jacobian under which the asymptotic χ2 distributions of the traditional statistics are valid. The behavior of the K statistic can be spurious around inflection points and maxima of the objective function. This inadequacy is overcome by combining the K statistic with a statistic that tests the validity of the moment equations and by an extension of Moreira's (2003) conditional likelihood ratio statistic toward GMM. We conduct a power comparison to test for the risk aversion parameter in a stochastic discount factor model and construct its confidence set for observed consumption growth and asset return series. [source]

A Conditional Likelihood Ratio Test for Structural Models
ECONOMETRICA, Issue 4 2003. Marcelo J. Moreira.
This paper develops a general method for constructing exactly similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. These tests are shown to be similar under weak-instrument asymptotics when the reduced-form covariance matrix is estimated and the errors are non-normal. The conditional test based on the likelihood ratio statistic is particularly simple and has good power properties. Like the score test, it is optimal under the usual local-to-null asymptotics, but it has better power when identification is weak. [source]
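The abstracts above build on the classical likelihood ratio machinery: twice the log-likelihood gap between an unrestricted and a restricted fit, referred to an asymptotic χ2 distribution. A minimal sketch of that baseline computation (not the K or conditional LR statistics themselves, which require the GMM and simultaneous-equations setups described above):

```python
# Minimal likelihood ratio test sketch: H0: mu = 0 in an i.i.d. normal sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=0.3, scale=1.0, size=200)  # simulated data

# Unrestricted MLE: mean and variance both free.
mu_hat, var_hat = y.mean(), y.var()
ll_full = stats.norm.logpdf(y, mu_hat, np.sqrt(var_hat)).sum()

# Restricted MLE under H0: mu = 0, variance free.
var0 = np.mean(y ** 2)
ll_null = stats.norm.logpdf(y, 0.0, np.sqrt(var0)).sum()

lr = 2.0 * (ll_full - ll_null)        # likelihood ratio statistic
p_value = stats.chi2.sf(lr, df=1)     # asymptotic chi-squared(1) reference
print(f"LR = {lr:.3f}, p = {p_value:.4f}")
```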
Sample Splitting and Threshold Estimation
ECONOMETRICA, Issue 3 2000. Bruce E. Hansen.
Threshold models have a wide variety of applications in economics. Direct applications include models of separating and multiple equilibria. Other applications include empirical sample splitting when the sample split is based on a continuously distributed variable such as firm size. In addition, threshold models may be used as a parsimonious strategy for nonparametric function estimation. For example, the threshold autoregressive model (TAR) is popular in the nonlinear time series literature. Threshold models also emerge as special cases of more complex statistical frameworks, such as mixture models, switching models, Markov switching models, and smooth transition threshold models. It may be important to understand the statistical properties of threshold models as a preliminary step in the development of statistical tools to handle these more complicated structures. Despite the large number of potential applications, the statistical theory of threshold estimation is undeveloped. It is known that threshold estimates are super-consistent, but a distribution theory useful for testing and inference has yet to be provided. This paper develops a statistical theory for threshold estimation in the regression context. We allow for either cross-section or time series observations. Least squares estimation of the regression parameters is considered. An asymptotic distribution theory for the regression estimates (the threshold and the regression slopes) is developed. It is found that the distribution of the threshold estimate is nonstandard. A method to construct asymptotic confidence intervals is developed by inverting the likelihood ratio statistic. It is shown that this yields asymptotically conservative confidence regions. Monte Carlo simulations are presented to assess the accuracy of the asymptotic approximations. The empirical relevance of the theory is illustrated through an application to the multiple equilibria growth model of Durlauf and Johnson (1995). [source]

A statistical method for scanning the genome for regions with rare disease alleles
GENETIC EPIDEMIOLOGY, Issue 5 2010. Chad Garner.
Studying the role of rare alleles in common disease has been prevented by the impractical task of determining the DNA sequence of large numbers of individuals. Next-generation DNA sequencing technologies are being developed that will make it possible for genetic studies of common disease to study the full frequency spectrum of genetic variation, including rare alleles. This report describes a method for scanning the genome for disease susceptibility regions that show an increased number of rare alleles among a sample of disease cases versus an ethnically matched sample of controls. The method was based on a hidden Markov model, and the statistical support for a disease susceptibility region characterized by rare alleles was measured by a likelihood ratio statistic. Due to the lack of empirical data, the method was evaluated through simulation. The performance of the method was tested under the null and alternative hypotheses under a range of sequence-generating and hidden Markov model parameters. The results showed that the statistical method performs well at identifying true disease susceptibility regions and that performance was primarily affected by the amount of variation in the neutral sequence and the number of rare disease alleles found in the disease susceptibility region. Genet. Epidemiol. 34:386–395, 2010. © 2010 Wiley-Liss, Inc. [source]

Restricted parameter space models for testing gene-gene interaction
GENETIC EPIDEMIOLOGY, Issue 5 2009. Minsun Song.
There is a growing recognition that interactions (gene-gene and gene-environment) can play an important role in common disease etiology. The development of cost-effective genotyping technologies has made genome-wide association studies the preferred tool for searching for loci affecting disease risk. These studies are characterized by a large number of investigated SNPs, and efficient statistical methods are even more important than in classical association studies that are done with a small number of markers. In this article we propose a novel gene-gene interaction test that is more powerful than classical methods. The increase in power is due to the fact that the proposed method incorporates reasonable constraints in the parameter space. The test for both association and interaction is based on a likelihood ratio statistic that has a χ2 distribution asymptotically. We also discuss the definitions used for "no interaction" and argue that tests for pure interaction are useful in genome-wide studies, especially when using two-stage strategies where the analyses in the second stage are done on pairs of loci for which at least one is associated with the trait. Genet. Epidemiol. 33:386–393, 2009. © 2008 Wiley-Liss, Inc. [source]
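Hansen's confidence interval construction inverts the likelihood ratio statistic over candidate thresholds. A rough sketch under simplifying assumptions (a single slope, a simulated design, and the asymptotic distribution P(LR <= x) = (1 - exp(-x/2))^2 reported in the paper, whose 95% critical value is about 7.35):

```python
# Threshold estimation by grid search, with an LR-inversion confidence interval.
import numpy as np

rng = np.random.default_rng(1)
n = 400
q = rng.uniform(0, 1, n)                       # threshold variable
x = rng.normal(size=n)
y = np.where(q <= 0.5, 1.0 * x, 2.5 * x) + rng.normal(size=n)

def sse(gamma):
    """Residual sum of squares from separate OLS fits in each regime."""
    total = 0.0
    for mask in (q <= gamma, q > gamma):
        b = np.linalg.lstsq(x[mask, None], y[mask], rcond=None)[0]
        resid = y[mask] - x[mask] * b[0]
        total += resid @ resid
    return total

grid = np.quantile(q, np.linspace(0.1, 0.9, 81))   # trimmed candidate thresholds
s = np.array([sse(g) for g in grid])
gamma_hat = grid[s.argmin()]

# LR_n(gamma) = n * (S_n(gamma) - S_n(gamma_hat)) / S_n(gamma_hat), and the
# asymptotic 95% critical value c = -2 * log(1 - sqrt(0.95)), about 7.35.
lr = n * (s - s.min()) / s.min()
c95 = -2.0 * np.log(1.0 - np.sqrt(0.95))
ci = grid[lr <= c95]                                # convex hull of the region
print(f"threshold {gamma_hat:.3f}, 95% CI [{ci.min():.3f}, {ci.max():.3f}]")
```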
Genetic association tests in the presence of epistasis or gene-environment interaction
GENETIC EPIDEMIOLOGY, Issue 7 2008. Kai Wang.
A genetic variant is very likely to manifest its effect on disease through its main effect as well as through its interaction with other genetic variants or environmental factors. Power to detect genetic variants can be greatly improved by modeling their main effects and their interaction effects through a common set of parameters or "generalized association parameters" (Chatterjee et al. [2006] Am. J. Hum. Genet. 79:1002–1016) because of the reduced number of degrees of freedom. Following this idea, I propose two models that extend the work by Chatterjee and colleagues. Particularly, I consider not only the case of relatively weak interaction effect compared to the main effect but also the case of relatively weak main effect. This latter case is perhaps more relevant to genetic association studies. The proposed methods are invariant to the choice of the allele for scoring genotypes or the choice of the reference genotype score. For each model, the asymptotic distribution of the likelihood ratio statistic is derived. Simulation studies suggest that the proposed methods are more powerful than existing ones under certain circumstances. Genet. Epidemiol. 2008. © 2008 Wiley-Liss, Inc. [source]

Forecasting with leading indicators revisited
JOURNAL OF FORECASTING, Issue 8 2003. Ruey S. Tsay.
Transfer function or distributed lag models are commonly used in forecasting. The stability of a constant-coefficient transfer function model, however, may become an issue for many economic variables, due in part to the recent advance in technology and improvement in efficiency in data collection and processing. In this paper, we propose a simple functional-coefficient transfer function model that can accommodate the changing environment. A likelihood ratio statistic is used to test the stability of a traditional transfer function model. We investigate the performance of the test statistic in the finite sample case via simulation. Using some well-known examples, we demonstrate clearly that the proposed functional-coefficient model can substantially improve the accuracy of out-of-sample forecasts. In particular, our simple modification results in a 25% reduction in the mean squared errors of out-of-sample one-step-ahead forecasts for the gas-furnace data of Box and Jenkins. Copyright © 2003 John Wiley & Sons, Ltd. [source]
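The out-of-sample evaluation the forecasting abstract reports (rolling one-step-ahead forecasts scored by mean squared error) can be sketched generically. This skeleton re-estimates a plain AR(1) each period and scores it against a random-walk benchmark; it is not Tsay's functional-coefficient transfer function model, only the comparison protocol:

```python
# Rolling one-step-ahead forecast comparison by out-of-sample MSE.
import numpy as np

rng = np.random.default_rng(2)
T = 300
y = np.zeros(T)
for t in range(1, T):                       # simulated AR(1) series
    y[t] = 0.7 * y[t - 1] + rng.normal()

split = 200
err_model, err_naive = [], []
for t in range(split, T):
    past, lag = y[1:t], y[:t - 1]
    phi = (lag @ past) / (lag @ lag)        # OLS AR(1) slope, re-estimated
    err_model.append(y[t] - phi * y[t - 1])  # model forecast error
    err_naive.append(y[t] - y[t - 1])        # random-walk benchmark error

mse_model = np.mean(np.square(err_model))
mse_naive = np.mean(np.square(err_naive))
print(f"MSE ratio (model / benchmark): {mse_model / mse_naive:.2f}")
```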
Likelihood inference for a class of latent Markov models under linear hypotheses on the transition probabilities
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2006. Francesco Bartolucci.
For a class of latent Markov models for discrete variables having a longitudinal structure, we introduce an approach for formulating and testing linear hypotheses on the transition probabilities of the latent process. For the maximum likelihood estimation of a latent Markov model under hypotheses of this type, we outline an EM algorithm that is based on well-known recursions in the hidden Markov literature. We also show that, under certain assumptions, the asymptotic null distribution of the likelihood ratio statistic for testing a linear hypothesis on the transition probabilities of a latent Markov model, against a less stringent linear hypothesis on the transition probabilities of the same model, is of chi-bar-squared type. As a particular case, we derive the asymptotic distribution of the likelihood ratio statistic between a latent class model and its latent Markov version, which may be used to test the hypothesis of absence of transition between latent states. The approach is illustrated through a series of simulations and two applications, the first of which is based on educational testing data that have been collected within the National Assessment of Educational Progress 1996, and the second on data, concerning the use of marijuana, which have been collected within the National Youth Survey 1976–1980. [source]

Process monitoring for correlated gamma-distributed data using generalized-linear-model-based control charts
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6 2003. Duangporn Jearkpaporn.
A model-based scheme is proposed for monitoring multiple gamma-distributed variables. The procedure is based on the deviance residual, which is a likelihood ratio statistic for detecting a mean shift when the shape parameter is assumed to be unchanged and the input and output variables are related in a certain manner. We discuss the distribution of this statistic and the proposed monitoring scheme. An example involving the advance rate of a drill is used to illustrate the implementation of the deviance residual monitoring scheme. Finally, a simulation study is performed to compare the average run length (ARL) performance of the proposed method to the standard Shewhart control chart for individuals. Copyright © 2003 John Wiley & Sons, Ltd. [source]

The likelihood ratio test for homogeneity in finite mixture models
THE CANADIAN JOURNAL OF STATISTICS, Issue 2 2001. Hanfeng Chen.
The authors study the asymptotic behaviour of the likelihood ratio statistic for testing homogeneity in the finite mixture models of a general parametric distribution family. They prove that the limiting distribution of this statistic is the squared supremum of a truncated standard Gaussian process. The autocorrelation function of the Gaussian process is explicitly presented. A re-sampling procedure is recommended to obtain the asymptotic p-value. Three kernel functions, normal, binomial and Poisson, are used in a simulation study which illustrates the procedure. [source]

A Non-Linear Analysis of Excess Foreign Exchange Returns
THE MANCHESTER SCHOOL, Issue 6 2001. Jerry Coakley.
In this paper we explore the dynamics of US dollar excess foreign exchange returns for the G10 currencies and the Swiss franc, 1976–97. The non-linear framework adopted is justified by the results of linearity tests and a parametric bootstrap likelihood ratio statistic which indicate threshold effects, or differential adjustment to small and large excess returns. Impulse response analysis suggests that the effect of small shocks to excess returns inside the no-arbitrage band exhibits most persistence. Large shocks outside the band decay most rapidly and also exhibit overshooting. These phenomena are explained in terms of noise trading strategies and transaction costs. [source]
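Several of the abstracts above (Chen's homogeneity test, Coakley's threshold analysis) rely on resampling because the likelihood ratio statistic has a nonstandard null distribution. A generic parametric-bootstrap skeleton, where `fit_null`, `fit_alt`, and `simulate_null` are hypothetical hooks to be supplied by the model at hand:

```python
# Parametric bootstrap p-value for a likelihood ratio statistic whose null
# distribution is nonstandard: simulate under the fitted null, recompute the
# statistic, and take the empirical tail probability.
import numpy as np

def bootstrap_lr_pvalue(y, fit_null, fit_alt, simulate_null, n_boot=499, seed=0):
    """fit_null/fit_alt return (parameter estimate, maximized log-likelihood);
    simulate_null(theta0, n, rng) draws a sample of size n under H0."""
    rng = np.random.default_rng(seed)
    theta0, ll0 = fit_null(y)
    _, ll1 = fit_alt(y)
    lr_obs = 2.0 * (ll1 - ll0)

    exceed = 0
    for _ in range(n_boot):
        yb = simulate_null(theta0, len(y), rng)   # data generated under H0
        _, ll0b = fit_null(yb)
        _, ll1b = fit_alt(yb)
        if 2.0 * (ll1b - ll0b) >= lr_obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)            # add-one bootstrap p-value
```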
Testing for Genetic Association With Constrained Models Using Triads
ANNALS OF HUMAN GENETICS, Issue 2 2009. J. F. Troendle.
It has been shown that it is preferable to use a robust model that incorporates constraints on the genotype relative risk rather than rely on a model that assumes the disease operates in a recessive or dominant fashion. Previous methods are applicable to case-control studies, but not to family-based studies of case children along with their parents (triads). We show here how to implement analogous constraints while analyzing triad data. The likelihood, conditional on the parents' genotypes, is maximized over the appropriately constrained parameter space. The asymptotic distribution for the maximized likelihood ratio statistic is found and used to estimate the null distribution of the test statistics. The properties of several methods of testing for association are compared by simulation. The constrained method provides higher power across a wide range of genetic models, with little cost when compared to methods that restrict to a dominant, recessive, or multiplicative model, or make no modeling restriction. The methods are applied to two SNPs on the methylenetetrahydrofolate reductase (MTHFR) gene with neural tube defect (NTD) triads. [source]

Global Tests for Linkage
BIOMETRICAL JOURNAL, Issue 1 2009. Rachid el Galta.
To test for global linkage along a genome or in a chromosomal region, the maximum over the marker locations of the mean number of alleles shared identical by descent (IBD) by affected relative pairs, Zmax, can be used. Feingold et al. (1993) derived a Gaussian approximation to the distribution of Zmax. As an alternative, we propose to sum over the observed marker locations along the chromosomal region of interest. Two test statistics can be derived: (1) the likelihood ratio statistic (LR) and (2) the corresponding score statistic. The score statistic appears to be the average mean IBD over all available marker locations. The null distributions of the LR and score tests are asymptotically a 50:50 mixture of chi-square distributions with zero and one degree of freedom and a normal distribution, respectively. We compared empirically the type I error and power of these two new test statistics and Zmax along a chromosome and in a candidate region. Two models were considered, namely (1) one disease locus and (2) two disease loci. The new test statistics appeared to have reasonable type I error. Along the chromosome, for both models we concluded that for very small effect sizes, the score test has slightly more power than the other test statistics. For large effect sizes, the likelihood ratio statistic was comparable to, and sometimes performed better than, Zmax, and both test statistics performed much better than the score test. For candidate regions of about 30 cM, all test statistics were comparable when only one disease locus existed, and the score and likelihood ratio statistics had somewhat better power than Zmax when two disease loci existed. © 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim [source]
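The 50:50 mixture null distribution in the linkage abstract arises whenever a single parameter is tested on the boundary of its space: the LR statistic is asymptotically a half-half mix of a point mass at zero and a chi-squared(1) variable. The corresponding p-value calculation is simple enough to state exactly (an illustrative helper, not the authors' code):

```python
# P-value under the 0.5*chi2(0) + 0.5*chi2(1) boundary null distribution.
from scipy.stats import chi2

def mixture_chibar_pvalue(lr):
    """P(LR >= lr) under the 50:50 mixture; the chi2(0) half is a point mass at 0."""
    if lr <= 0.0:
        return 1.0
    return 0.5 * chi2.sf(lr, df=1)

print(mixture_chibar_pvalue(3.84))  # about 0.025, vs 0.05 under a plain chi2(1)
```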
Adjusted Exponentially Tilted Likelihood with Applications to Brain Morphology
BIOMETRICS, Issue 3 2009. Hongtu Zhu.
In this article, we develop a nonparametric method, called adjusted exponentially tilted (ET) likelihood, and apply it to the analysis of morphometric measures. The adjusted exponential tilting estimator is shown to have the same first-order asymptotic properties as those of the original ET likelihood. The adjusted ET likelihood ratio statistic is applied to test linear hypotheses on unknown parameters, such as the associations of brain measures (e.g., cortical and subcortical surfaces) with covariates of interest, such as age, gender, and gene. Simulation studies show that the adjusted exponentially tilted likelihood ratio statistic performs as well as the t-test when the imaging data are symmetrically distributed, while it is superior when the imaging data have a skewed distribution. We demonstrate the application of our new statistical methods to the detection of statistically significant differences in the morphology of the hippocampus between two schizophrenia groups and healthy subjects. [source]

Risk Assessment for Quantitative Responses Using a Mixture Model
BIOMETRICS, Issue 2 2000. Mehdi Razzaghi.
A problem that frequently occurs in biological experiments with laboratory animals is that some subjects are less susceptible to the treatment than others. A mixture model has traditionally been proposed to describe the distribution of responses in treatment groups for such experiments. Using a mixture dose-response model, we derive an upper confidence limit on additional risk, defined as the excess risk over the background risk due to an added dose. Our focus will be on experiments with continuous responses for which risk is the probability of an adverse effect defined as an event that is extremely rare in controls. The asymptotic distribution of the likelihood ratio statistic is used to obtain the upper confidence limit on additional risk. The method can also be used to derive a benchmark dose corresponding to a specified level of increased risk. The EM algorithm is utilized to find the maximum likelihood estimates of model parameters, and an extension of the algorithm is proposed to derive the estimates when the model is subject to a specified level of added risk. An example is used to demonstrate the results, and it is shown that by using the mixture model a more accurate measure of added risk is obtained. [source]
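The EM iteration for the kind of two-component mixture underlying the risk-assessment abstract (susceptible versus less susceptible subjects) can be sketched as follows. The simulated data, starting values, and common-variance assumption are illustrative; the paper's constrained estimation under a specified added risk is not reproduced here.

```python
# EM algorithm for a two-component normal mixture with a common variance.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 100)])

pi, mu1, mu2, sd = 0.5, y.min(), y.max(), y.std()   # crude starting values
for _ in range(200):
    # E-step: posterior probability that each observation is from component 2.
    d1 = (1 - pi) * norm.pdf(y, mu1, sd)
    d2 = pi * norm.pdf(y, mu2, sd)
    w = d2 / (d1 + d2)
    # M-step: update the mixing weight, component means, and common sd.
    pi = w.mean()
    mu1 = ((1 - w) @ y) / (1 - w).sum()
    mu2 = (w @ y) / w.sum()
    sd = np.sqrt(((1 - w) * (y - mu1) ** 2 + w * (y - mu2) ** 2).sum() / len(y))

print(f"pi = {pi:.2f}, mu1 = {mu1:.2f}, mu2 = {mu2:.2f}, sd = {sd:.2f}")
```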