Likelihood Estimation (likelihood + estimation)

Kinds of Likelihood Estimation

  • maximum likelihood estimation

  • Terms modified by Likelihood Estimation

  • likelihood estimation method
  • likelihood estimation procedure

  • Selected Abstracts


    Chunrong Ai
    This article studies estimation of a conditional moment restriction model with the seminonparametric maximum likelihood approach proposed by Gallant and Nychka (Econometrica 55 (March 1987), 363–390). Under some sufficient conditions, we show that the estimator of the finite-dimensional parameter is asymptotically normally distributed and attains the semiparametric efficiency bound, and that the estimator of the density function is consistent under the L2 norm. Some results on the convergence rate of the estimated density function are derived. An easy-to-compute expression for the asymptotic covariance matrix of the parameter estimator is presented. [source]


    Empirical likelihood is appropriate to estimate moment condition models when a random sample from the target population is available. However, many economic surveys are subject to some form of stratification, in which case direct application of empirical likelihood will produce inconsistent estimators. In this paper we propose a two-step empirical likelihood estimator to deal with stratified samples in models defined by unconditional moment restrictions in the presence of some aggregate information such as the mean and the variance of the variable of interest. A Monte Carlo simulation study reveals promising results for many versions of the two-step empirical likelihood estimator. [source]


    The nature of the time series properties of real exchange rates remains a contentious issue, primarily because of the implications for purchasing power parity. In particular, are real exchange rates best characterized as stationary and non-persistent; nonstationary but non-persistent; or nonstationary and persistent? Most assessments of this issue use the I(0)/I(1) paradigm, which only allows the first and last of these options. In contrast, in the I(d) paradigm, with d fractional, all three are possible, with the crucial parameter d determining the long-run properties of the process. This study includes estimation of d by three semi-parametric methods in the frequency domain, using both local and global (Fourier) frequency estimation, and by maximum likelihood estimation of ARFIMA models in the time domain. We give a transparent assessment of the key selection parameters in each method, particularly estimation of the truncation parameters for the semi-parametric methods. Two other important developments are also included. We implement Tanaka's locally best invariant parametric tests based on maximum likelihood estimation of the long-memory parameter and include a recent extension of the Dickey–Fuller approach, referred to as fractional Dickey–Fuller (FD-F), to fractionally integrated series, which allows a much wider range of generating processes under the alternative hypothesis. With this more general approach, we find very little evidence of stationarity for 10 real exchange rates for developed countries and some very limited evidence of nonstationarity but non-persistence, and none of the FD-F tests leads to rejection of the null of a unit root. [source]
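
The semi-parametric frequency-domain estimation of d mentioned above can be illustrated with the classic log-periodogram (GPH) regression: regress the log periodogram at the lowest Fourier frequencies on log(4 sin²(λ/2)) and read off d from the slope. This is a generic textbook sketch, not the authors' implementation; the truncation m = √n is just one common default.

```python
import cmath
import math
import random

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the fractional parameter d."""
    n = len(x)
    m = m or int(n ** 0.5)              # truncation: number of low frequencies used
    mean = sum(x) / n
    xc = [v - mean for v in x]
    log_periodogram, regressor = [], []
    for j in range(1, m + 1):
        lam = 2.0 * math.pi * j / n     # j-th Fourier frequency
        dft = sum(xc[t] * cmath.exp(-1j * lam * t) for t in range(n))
        periodogram = abs(dft) ** 2 / (2.0 * math.pi * n)
        log_periodogram.append(math.log(periodogram))
        regressor.append(math.log(4.0 * math.sin(lam / 2.0) ** 2))
    rb = sum(regressor) / m
    yb = sum(log_periodogram) / m
    slope = (sum((r - rb) * (y - yb) for r, y in zip(regressor, log_periodogram))
             / sum((r - rb) ** 2 for r in regressor))
    return -slope                       # d is minus the OLS slope

random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(512)]
d_hat = gph_estimate(noise)             # white noise is short memory, so d near 0
```

For genuinely long-memory data (0 < d < 0.5) the same regression would return a positive slope-based estimate; here the white-noise input just checks that the estimator centers near zero.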


    James D. Stamey
    Summary This paper proposes a Poisson-based model that uses both error-free data and error-prone data subject to misclassification in the form of false-negative and false-positive counts. It derives maximum likelihood estimators (MLEs) for the Poisson rate parameter and the two misclassification parameters: the false-negative parameter and the false-positive parameter. It also derives expressions for the information matrix and the asymptotic variances of the MLE for the rate parameter, the MLE for the false-positive parameter, and the MLE for the false-negative parameter. Using these expressions, the paper analyses the value of the fallible data. It studies characteristics of the new double-sampling rate estimator via a simulation experiment and applies the new MLE estimators and confidence intervals to a real dataset. [source]

    Maximum Likelihood Estimation of VARMA Models Using a State-Space EM Algorithm

    Konstantinos Metaxoglou
    Abstract. We introduce a state-space representation for vector autoregressive moving-average models that enables maximum likelihood estimation using the EM algorithm. We obtain closed-form expressions for both the E- and M-steps; the former requires the Kalman filter and a fixed-interval smoother, and the latter requires least squares-type regression. We show via simulations that our algorithm converges reliably to the maximum, whereas gradient-based methods often fail because of the highly nonlinear nature of the likelihood function. Moreover, our algorithm converges in a smaller number of function evaluations than commonly used direct-search routines. Overall, our approach achieves its largest performance gains when applied to models of high dimension. We illustrate our technique by estimating a high-dimensional vector moving-average model for an efficiency test of California's wholesale electricity market. [source]
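
The paper's EM algorithm handles full VARMA systems; as a much simpler illustration of the prediction-error decomposition that underlies Kalman-filter likelihood evaluation, here is a sketch for a univariate AR(1) with known innovation variance. The model, sample size, and grid search are illustrative only, not the authors' method.

```python
import math
import random

def ar1_loglik(y, phi, sigma2=1.0):
    """Exact Gaussian log-likelihood of a stationary AR(1) via
    prediction-error decomposition (a degenerate Kalman filter:
    the state is observed without noise)."""
    var0 = sigma2 / (1.0 - phi ** 2)    # stationary variance of y[0]
    ll = -0.5 * (math.log(2 * math.pi * var0) + y[0] ** 2 / var0)
    for t in range(1, len(y)):
        v = y[t] - phi * y[t - 1]       # one-step prediction error
        ll += -0.5 * (math.log(2 * math.pi * sigma2) + v ** 2 / sigma2)
    return ll

# simulate an AR(1) with phi = 0.7
random.seed(1)
y, prev = [], 0.0
for _ in range(500):
    prev = 0.7 * prev + random.gauss(0.0, 1.0)
    y.append(prev)

# crude grid-search MLE for phi over the stationary region
grid = [i / 100.0 for i in range(-95, 96)]
phi_hat = max(grid, key=lambda p: ar1_loglik(y, p))
```

A real implementation would replace the grid search with EM or a quasi-Newton step, and the scalar recursion with matrix Kalman updates; the likelihood being maximized is the same object.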

    Maximum Likelihood Estimation for a First-Order Bifurcating Autoregressive Process with Exponential Errors

    J. Zhou
    Abstract. Exact and asymptotic distributions of the maximum likelihood estimator of the autoregressive parameter in a first-order bifurcating autoregressive process with exponential innovations are derived. The limit distributions for the stationary, critical and explosive cases are unified via a single pivot using a random normalization. The pivot is shown to be asymptotically exponential for all values of the autoregressive parameter. [source]

    Exact Maximum Likelihood Estimation of an ARMA(1, 1) Model with Incomplete Data

    For a first-order autoregressive and first-order moving average model with nonconsecutively observed or missing data, the closed form of the exact likelihood function is obtained, and the exact maximum likelihood estimation of parameters is derived in the stationary case. [source]

    Parameter and state estimation in nonlinear stochastic continuous-time dynamic models with unknown disturbance intensity

    M. S. Varziri
    Abstract Approximate Maximum Likelihood Estimation (AMLE) is an algorithm for estimating the states and parameters of models described by stochastic differential equations (SDEs). In previous work (Varziri et al., Ind. Eng. Chem. Res., 47(2), 380-393, (2008); Varziri et al., Comp. Chem. Eng., in press), AMLE was developed for SDE systems in which process-disturbance intensities and measurement-noise variances were assumed to be known. In the current article, a new formulation of the AMLE objective function is proposed for the case in which measurement-noise variance is available but the process-disturbance intensity is not known a priori. The revised formulation provides estimates of the model parameters and disturbance intensities, as demonstrated using a nonlinear CSTR simulation study. Parameter confidence intervals are computed using theoretical linearization-based expressions. The proposed method compares favourably with a Kalman-filter-based maximum likelihood method. The resulting parameter estimates and information about model mismatch will be useful to chemical engineers who use fundamental models for process monitoring and control. [source]

    Maximum Penalized Likelihood Estimation: Volume II: Regression by EGGERMONT, P. P. and LARICCIA, V. N.

    BIOMETRICS, Issue 2 2010
    Hao Zhang
    No abstract is available for this article. [source]

    Robustified Maximum Likelihood Estimation in Generalized Partial Linear Mixed Model for Longitudinal Data

    BIOMETRICS, Issue 1 2009
    Guo You Qin
    Summary In this article, we study the robust estimation of both mean and variance components in generalized partial linear mixed models based on the construction of a robustified likelihood function. Under some regularity conditions, the asymptotic properties of the proposed robust estimators are shown. Some simulations are carried out to investigate the performance of the proposed robust estimators. Just as expected, the proposed robust estimators perform better than those resulting from robust estimating equations involving conditional expectation, like Sinha (2004, Journal of the American Statistical Association 99, 451–460) and Qin and Zhu (2007, Journal of Multivariate Analysis 98, 1658–1683). In the end, the proposed robust method is illustrated by the analysis of a real data set. [source]

    Maximum Likelihood Estimation in Dynamical Models of HIV

    BIOMETRICS, Issue 4 2007
    J. Guedj
    Summary The study of dynamical models of HIV infection, based on a system of nonlinear ordinary differential equations (ODE), has considerably improved the knowledge of its pathogenesis. While the first models used simplified ODE systems and analyzed each patient separately, recent works have dealt with inference in non-simplified models, borrowing strength from the whole sample. The complexity of these models leads to great difficulties for inference, and until now only the Bayesian approach has been attempted. We propose a full likelihood inference, adapting a Newton-like algorithm for these particular models. We consider a relatively complex ODE model for HIV infection and a model for the observations including the issue of detection limits. We apply this approach to the analysis of a clinical trial of antiretroviral therapy (ALBI ANRS 070) and we show that the whole algorithm works well in a simulation study. [source]

    Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental

    M.H. Lidauer
    Summary A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region × year × month × parity effect and a random herd × test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and reduced total computing time significantly. Maximum likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on the loss in degrees of freedom due to estimation of location parameters. This improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but a moderate effect on bull ranking. [source]

    Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods

    ECOLOGY LETTERS, Issue 7 2007
    Subhash R. Lele
    Abstract We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise. [source]
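
The data-cloning idea can be seen in closed form in a conjugate toy model, where no MCMC is needed: replicate the data K times, and as K grows the posterior mean converges to the MLE (regardless of the prior) while K times the posterior variance approaches the inverse Fisher information. This Beta-Binomial sketch is illustrative only; the ecological models in the paper require actual MCMC.

```python
# data: s successes in n Bernoulli trials; deliberately informative Beta prior
s, n = 3, 10
a, b = 5.0, 5.0

def cloned_posterior(K):
    """Posterior after replicating the data K times (conjugacy gives
    the 'MCMC' target in closed form for this toy model)."""
    alpha, beta = a + K * s, b + K * (n - s)
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1.0))
    return mean, var

mean1, _ = cloned_posterior(1)       # ordinary posterior: pulled toward the prior
K = 200
meanK, varK = cloned_posterior(K)    # mean -> MLE s/n = 0.3
se_hat = (K * varK) ** 0.5           # K * var -> inverse Fisher information
```

With K = 1 the prior drags the mean to 0.4; with K = 200 the cloned posterior mean is within half a percent of the MLE 0.3, and se_hat recovers the frequentist standard error sqrt(p(1-p)/n). The prior's influence has been "cloned away", which is exactly the invariance the abstract describes.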

    Feature extraction by autoregressive spectral analysis using maximum likelihood estimation: internal carotid arterial Doppler signals

    EXPERT SYSTEMS, Issue 4 2008
    Elif Derya Übeyli
    Abstract: In this study, Doppler signals recorded from the internal carotid artery (ICA) of 97 subjects were processed by personal computer using classical and model-based methods. Fast Fourier transform (classical method) and autoregressive (model-based method) methods were selected for processing the ICA Doppler signals. The parameters in the autoregressive method were found by using maximum likelihood estimation. The Doppler power spectra of the ICA Doppler signals were obtained by using these spectral analysis techniques. The variations in the shape of the Doppler spectra as a function of time were presented in the form of sonograms in order to obtain medical information. These Doppler spectra and sonograms were then used to compare the applied methods in terms of their frequency resolution and the effects in determination of stenosis and occlusion in the ICA. Reliable information on haemodynamic alterations in the ICA can be obtained by evaluation of these sonograms. [source]

    Alternative event study methodology for detecting dividend signals in the context of joint dividend and earnings announcements

    ACCOUNTING & FINANCE, Issue 2 2009
    Warwick Anderson
    JEL classification: C51; D46; G14; N27. Abstract Friction models are used to examine the market reaction to the simultaneous disclosure of earnings and dividends in a thin-trading environment. Friction modelling, a procedure using maximum likelihood estimation, can be used to replace both the market model and restricted least-squares regression in event studies where there are two quantifiable variables and a number of possible interaction effects associated with the news that constitutes the study's event. The results indicate that the dividend signal can be separated from the earnings signal. [source]

    Bayesian estimation of financial models

    ACCOUNTING & FINANCE, Issue 2 2002
    Philip Gray
    This paper outlines a general methodology for estimating the parameters of financial models commonly employed in the literature. A numerical Bayesian technique is utilised to obtain the posterior density of model parameters and functions thereof. Unlike maximum likelihood estimation, where inference is only justified in large samples, the Bayesian densities are exact for any sample size. A series of simulation studies are conducted to compare the properties of point estimates, the distribution of option and bond prices, and the power of specification tests under maximum likelihood and Bayesian methods. Results suggest that maximum-likelihood-based asymptotic distributions have poor finite-sample properties. [source]

    A comparison of label-based review and ALE meta-analysis in the Stroop task

    HUMAN BRAIN MAPPING, Issue 1 2005
    Angela R. Laird
    Abstract Meta-analysis is an important tool for interpreting results of functional neuroimaging studies and is highly influential in predicting and testing new outcomes. Although traditional label-based review can be used to search for agreement across multiple studies, a new function-location meta-analysis technique called activation likelihood estimation (ALE) offers great improvements over conventional methods. In ALE, reported foci are modeled as Gaussian functions and pooled to create a statistical whole-brain image. ALE meta-analysis and the label-based review were used to investigate the Stroop task in normal subjects, a paradigm known for its effect of producing conflict and response inhibition due to subjects' tendency to perform word reading as opposed to color naming. Both methods yielded similar activation patterns that were dominated by response in the anterior cingulate and the inferior frontal gyrus. ALE showed greater involvement of the anterior cingulate as compared to that in the label-based technique; however, this was likely due to the increased spatial level of distinction allowed with the ALE method. With ALE, further analysis of the anterior cingulate revealed evidence for somatotopic mapping within the rostral and caudal cingulate zones, an issue that has been the source of some conflict in previous reviews of the anterior cingulate cortex. Hum Brain Mapp 25:6–21, 2005. © 2005 Wiley-Liss, Inc. [source]
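
The pooling step described above (foci modeled as Gaussians, maps combined as a probabilistic union) can be sketched in one dimension. This is a caricature, not the authors' 3-D implementation; the grid, focus coordinates, and kernel width are invented for illustration.

```python
import math

def ale_map(foci, grid, sigma=2.0):
    """1-D analogue of activation likelihood estimation: each focus
    contributes a Gaussian 'modeled activation' probability, and the
    per-point ALE value is the probability that at least one focus
    is active there (a probabilistic union, not a sum)."""
    values = []
    for x in grid:
        prod = 1.0
        for f in foci:
            p = math.exp(-((x - f) ** 2) / (2.0 * sigma ** 2))
            prod *= (1.0 - p)
        values.append(1.0 - prod)
    return values

grid = list(range(0, 60))
foci = [10.3, 11.1, 12.2, 40.5]       # three clustered foci plus one isolated focus
ale = ale_map(foci, grid)
peak = grid[ale.index(max(ale))]      # should land inside the cluster, not at 40
```

The union rule is what makes ALE a likelihood-style statistic: three nearby foci reinforce each other multiplicatively, so the clustered region outscores the single isolated focus even though each individual Gaussian has the same height.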

    ALE meta-analysis: Controlling the false discovery rate and performing statistical contrasts

    HUMAN BRAIN MAPPING, Issue 1 2005
    Angela R. Laird
    Abstract Activation likelihood estimation (ALE) has greatly advanced voxel-based meta-analysis research in the field of functional neuroimaging. We present two improvements to the ALE method. First, we evaluate the feasibility of two techniques for correcting for multiple comparisons: the single threshold test and a procedure that controls the false discovery rate (FDR). To test these techniques, foci from four different topics within the literature were analyzed: overt speech in stuttering subjects, the color-word Stroop task, picture-naming tasks, and painful stimulation. In addition, the performance of each thresholding method was tested on randomly generated foci. We found that the FDR method more effectively controls the rate of false positives in meta-analyses of small or large numbers of foci. Second, we propose a technique for making statistical comparisons of ALE meta-analyses and investigate its efficacy on different groups of foci divided by task or response type and random groups of similarly obtained foci. We then give an example of how comparisons of this sort may lead to advanced designs in future meta-analytic research. Hum Brain Mapp 25:155–164, 2005. © 2005 Wiley-Liss, Inc. [source]

    Penalized Regression with Ordinal Predictors

    Jan Gertheiss
    Summary Ordered categorical predictors are a common case in regression modelling. In contrast to the case of ordinal response variables, ordinal predictors have been largely neglected in the literature. In this paper, existing methods are reviewed and the use of penalized regression techniques is proposed. Based on dummy coding, two types of penalization are explicitly developed: the first imposes a difference penalty, the second is a ridge-type refitting procedure. A Bayesian motivation is also provided. The concept is generalized to the case of non-normal outcomes within the framework of generalized linear models by applying penalized likelihood estimation. Simulation studies and real-world data serve for illustration and to compare the approaches to methods often seen in practice, namely simple linear regression on the group labels and pure dummy coding. Especially the proposed difference penalty turns out to be highly competitive. [source]

    Marginal maximum likelihood estimation of item response theory (IRT) equating coefficients for the common-examinee design

    Haruhiko Ogasawara
    A method of estimating item response theory (IRT) equating coefficients by the common-examinee design with the assumption of the two-parameter logistic model is provided. The method uses the marginal maximum likelihood estimation, in which individual ability parameters in a common-examinee group are numerically integrated out. The abilities of the common examinees are assumed to follow a normal distribution but with an unknown mean and standard deviation on one of the two tests to be equated. The distribution parameters are jointly estimated with the equating coefficients. Further, the asymptotic standard errors of the estimates of the equating coefficients and the parameters for the ability distribution are given. Numerical examples are provided to show the accuracy of the method. [source]

    Normal mixture GARCH(1,1): applications to exchange rate modelling

    Carol Alexander
    Some recent specifications for GARCH error processes explicitly assume a conditional variance that is generated by a mixture of normal components, albeit with some parameter restrictions. This paper analyses the general normal mixture GARCH(1,1) model which can capture time variation in both conditional skewness and kurtosis. A main focus of the paper is to provide evidence that, for modelling exchange rates, generalized two-component normal mixture GARCH(1,1) models perform better than those with three or more components, and better than symmetric and skewed Student's t-GARCH models. In addition to the extensive empirical results based on simulation and on historical data on three US dollar foreign exchange rates (British pound, euro and Japanese yen), we derive: expressions for the conditional and unconditional moments of all models; parameter conditions to ensure that the second and fourth conditional and unconditional moments are positive and finite; and analytic derivatives for the maximum likelihood estimation of the model parameters and standard errors of the estimates. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    The Impact of Omitted Responses on the Accuracy of Ability Estimation in Item Response Theory

    R. J. De Ayala
    Practitioners typically face situations in which examinees have not responded to all test items. This study investigated the effect on an examinee's ability estimate when an examinee is presented an item, has ample time to answer, but decides not to respond to the item. Three approaches to ability estimation (biweight estimation, expected a posteriori, and maximum likelihood estimation) were examined. A Monte Carlo study was performed and the effect of different levels of omissions on the simulee's ability estimates was determined. Results showed that the worst estimation occurred when omits were treated as incorrect. In contrast, substitution of 0.5 for omitted responses resulted in ability estimates that were almost as accurate as those using complete data. Implications for practitioners are discussed. [source]
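
The contrast between scoring omits as incorrect and substituting 0.5 can be sketched under the two-parameter logistic (2PL) model. The item parameters, response pattern, and grid search below are invented for illustration; operational IRT software uses Newton-type optimization rather than a grid.

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def theta_mle(responses, items, omit_value):
    """Grid-search ability estimate under the 2PL model.  Omitted
    responses (None) are scored as omit_value: 0 treats them as
    incorrect, 0.5 applies the fractional substitution."""
    def loglik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            if r is None:
                r = omit_value
            p = p2pl(theta, a, b)
            ll += r * math.log(p) + (1.0 - r) * math.log(1.0 - p)
        return ll
    grid = [i / 100.0 for i in range(-400, 401)]
    return max(grid, key=loglik)

# five items of increasing difficulty; the examinee answers the three
# easiest correctly and skips the two hardest
items = [(1.0, -1.0), (1.0, 0.0), (1.0, 0.5), (1.0, 1.0), (1.0, 1.5)]
resp = [1, 1, 1, None, None]
th_wrong = theta_mle(resp, items, omit_value=0)    # omits scored incorrect
th_half = theta_mle(resp, items, omit_value=0.5)   # 0.5 substitution
```

Scoring the skipped items as wrong pulls the ability estimate down sharply; the 0.5 substitution contributes a term that is maximized where P(theta) = 0.5, so it shrinks the estimate toward the item difficulties far more gently, consistent with the study's finding.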

    Assessing the forecasting accuracy of alternative nominal exchange rate models: the case of long memory

    David Karemera
    Abstract This paper presents an autoregressive fractionally integrated moving-average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long-memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long-memory model is more efficient than the random walk model in steps-ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi-step-ahead forecasts. This new finding strongly suggests that the long-memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Natural resource-collection work and children's schooling in Malawi

    Flora J. Nankhuni
    Abstract This paper presents results of research that investigates whether long hours of work spent by children in fuel wood and water-collection activities, i.e., natural resource-collection work, influence the likelihood that a child aged 6–14 attends school. Potential endogeneity of resource-collection work hours is corrected for, using two-stage conditional maximum likelihood estimation. Data from the 1997–1998 Malawi Integrated Household Survey (IHS) conducted by the Malawi National Statistics Office (NSO) in conjunction with the International Food Policy Research Institute (IFPRI) are used. The study finds that Malawian children are significantly involved in resource-collection work and their likelihood of attending school decreases with increases in hours allocated to this work. The study further shows that girls spend more hours on resource-collection work and are more likely to be attending school while burdened by this work. Consequently, girls may find it difficult to progress well in school. However, girls are not necessarily less likely to be attending school. Results further show that the presence of more women in a household is associated with a lower burden of resource-collection work on children and a higher probability of children's school attendance. Finally, the research shows that children from the most environmentally degraded districts of central and southern Malawi are less likely to attend school and relatively fewer of them have progressed to secondary school compared to those from districts in the north. [source]

    Wavelet-based adaptive robust M-estimator for nonlinear system identification

    AICHE JOURNAL, Issue 8 2000
    D. Wang
    A wavelet-based robust M-estimation method for the identification of nonlinear systems is proposed. Because it does not assume a particular class of error distributions, it takes a flexible, nonparametric approach and has the advantage of directly estimating the error distribution from the data. This M-estimator is optimal over any error distribution in the sense of maximum likelihood estimation. A Monte-Carlo study on a nonlinear chemical engineering example was used to compare the results with various previously utilized methods. [source]
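
The general M-estimation theme can be illustrated with a standard Huber location estimator fitted by iteratively reweighted means. This generic sketch (scale fixed at 1, tuning constant c = 1.345) is not the wavelet-based estimator of the paper; a practical version would also estimate scale robustly, e.g. from the median absolute deviation.

```python
def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means.
    Observations within c of the current estimate get full weight;
    more distant ones are downweighted proportionally."""
    m = sorted(x)[len(x) // 2]                 # start from the median
    for _ in range(max_iter):
        w = [1.0 if abs(v - m) <= c else c / abs(v - m) for v in x]
        m_new = sum(wi * vi for wi, vi in zip(w, x)) / sum(w)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

data = [0.1, -0.2, 0.05, 0.15, -0.1, 50.0]     # one gross outlier
m_robust = huber_location(data)                # stays near the bulk of the data
m_mean = sum(data) / len(data)                 # dragged toward the outlier
```

The ordinary mean here is above 8 because of the single outlier at 50, while the Huber estimate stays within a fraction of a unit of the clean observations; this bounded-influence behaviour is what "robustified" likelihood methods buy in the identification setting above.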

    Exploring social mobility with latent trajectory groups

    Patrick Sturgis
    Summary. We present a new methodological approach to the study of social mobility. We use a latent class growth analysis framework to identify five qualitatively distinct social class trajectory groups between 1980 and 2000 for male respondents to the 1970 British Cohort Study. We model the antecedents of trajectory group membership via multinomial logistic regression. Non-response, which is a considerable problem in long-term panels and cohort studies, is handled via direct maximum likelihood estimation, which is consistent and efficient when data are missing at random. Our results suggest a combination of meritocratic and ascriptive influences on the probability of membership in the different trajectory groups. [source]

    A latent Markov model for detecting patterns of criminal activity

    Francesco Bartolucci
    Summary. The paper investigates the problem of determining patterns of criminal behaviour from official criminal histories, concentrating on the variety and type of offending convictions. The analysis is carried out on the basis of a multivariate latent Markov model which allows for discrete covariates affecting the initial and the transition probabilities of the latent process. We also show some simplifications which reduce the number of parameters substantially; we include a Rasch-like parameterization of the conditional distribution of the response variables given the latent process and a constraint of partial homogeneity of the latent Markov chain. For the maximum likelihood estimation of the model we outline an EM algorithm based on recursions known in the hidden Markov literature, which make the estimation feasible also when the number of time occasions is large. Through this model, we analyse the conviction histories of a cohort of offenders who were born in England and Wales in 1953. The final model identifies five latent classes and specifies common transition probabilities for males and females between 5-year age periods, but with different initial probabilities. [source]

    Binary models for marginal independence

    Mathias Drton
    Summary. Log-linear models are a classical tool for the analysis of contingency tables. In particular, the subclass of graphical log-linear models provides a general framework for modelling conditional independences. However, with the exception of special structures, marginal independence hypotheses cannot be accommodated by these traditional models. Focusing on binary variables, we present a model class that provides a framework for modelling marginal independences in contingency tables. The approach that is taken is graphical and draws on analogies with multivariate Gaussian models for marginal independence. For the graphical model representation we use bidirected graphs, which are in the tradition of path diagrams. We show how the models can be parameterized in a simple fashion, and how maximum likelihood estimation can be performed by using a version of the iterated conditional fitting algorithm. Finally we consider combining these models with symmetry restrictions. [source]

    Generalized linear models incorporating population level information: an empirical-likelihood-based approach

    Sanjay Chaudhuri
    Summary. In many situations information from a sample of individuals can be supplemented by population level information on the relationship between a dependent variable and explanatory variables. Inclusion of the population level information can reduce bias and increase the efficiency of the parameter estimates. Population level information can be incorporated via constraints on functions of the model parameters. In general the constraints are non-linear, making the task of maximum likelihood estimation more difficult. We develop an alternative approach exploiting the notion of an empirical likelihood. It is shown that, within the framework of generalized linear models, the population level information corresponds to linear constraints, which are comparatively easy to handle. We provide a two-step algorithm that produces parameter estimates by using only unconstrained estimation. We also provide computable expressions for the standard errors. We give an application to demographic hazard modelling by combining panel survey data with birth registration data to estimate annual birth probabilities by parity. [source]

    Maximum likelihood estimation in semiparametric regression models with censored data

    D. Zeng
    Summary. Semiparametric regression models play a central role in formulating the effects of covariates on potentially censored failure times and in the joint modelling of incomplete repeated measures and failure times in longitudinal studies. The presence of infinite dimensional parameters poses considerable theoretical and computational challenges in the statistical analysis of such models. We present several classes of semiparametric regression models, which extend the existing models in important directions. We construct appropriate likelihood functions involving both finite dimensional and infinite dimensional parameters. The maximum likelihood estimators are consistent and asymptotically normal with efficient variances. We develop simple and stable numerical techniques to implement the corresponding inference procedures. Extensive simulation experiments demonstrate that the inferential and computational methods proposed perform well in practical settings. Applications to three medical studies yield important new insights. We conclude that there is no reason, theoretical or numerical, not to use maximum likelihood estimation for semiparametric regression models. We discuss several areas that need further research. [source]