Likelihood Estimator (likelihood + estimator)


Kinds of Likelihood Estimator

  • maximum likelihood estimator
  • quasi-maximum likelihood estimator


  • Selected Abstracts


    Calibrating a molecular clock from phylogeographic data: moments and likelihood estimators

    EVOLUTION, Issue 10 2003
    Michael J. Hickerson
    Abstract We present moments and likelihood methods that estimate a DNA substitution rate from a group of closely related sister species pairs separated at an assumed time, and we test these methods with simulations. The methods also estimate ancestral population size and can test whether there is a significant difference among the ancestral population sizes of the sister species pairs. Estimates presented in the literature often ignore the ancestral coalescent prior to speciation and therefore should be biased upward. The simulations show that both methods yield accurate estimates given sample sizes of five or more species pairs and that better likelihood estimates are obtained if there is no significant difference among ancestral population sizes. The model presented here indicates that the larger than expected variation found in multitaxa datasets can be explained by variation in the ancestral coalescence and the Poisson mutation process. In this context, observed variation can often be accounted for by variation in ancestral population sizes rather than invoking variation in other parameters, such as divergence time or mutation rate. The methods are applied to data from two groups of species pairs (sea urchins and Alpheus snapping shrimp) that are thought to have separated by the rise of Panama three million years ago. [source]


    Heterogeneity in dynamic discrete choice models

    THE ECONOMETRICS JOURNAL, Issue 1 2010
    Martin Browning
    Summary. We consider dynamic discrete choice models with heterogeneity in both the levels parameter and the state dependence parameter. We first present an empirical analysis that motivates the theoretical analysis which follows. The theoretical analysis considers a simple two-state, first-order Markov chain model without covariates in which both transition probabilities are heterogeneous. Using such a model we are able to derive exact small sample results for bias and mean squared error (MSE). We discuss the maximum likelihood approach and derive two novel estimators. The first is a bias-corrected version of the Maximum Likelihood Estimator (MLE), while the second, which we term MIMSE, minimizes the integrated mean square error. The MIMSE estimator is always well defined, has a closed-form expression and inherits the desirable large sample properties of the MLE. Our main finding is that in almost all short panel contexts the MIMSE significantly outperforms the other two estimators in terms of MSE. A final section extends the MIMSE estimator to allow for exogenous covariates. [source]
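
    The abstract does not reproduce the closed form of the MIMSE estimator. As a point of reference, the sketch below (Python; invented parameter values and variable names) computes the per-unit MLE of the heterogeneous transition probabilities in a two-state first-order Markov panel; its breakdown in short panels, where the MLE can be undefined or badly biased, is what the bias-corrected MLE and MIMSE address.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_panel(n_units=500, T=8):
    """Two-state Markov chains with unit-specific transition probabilities."""
    # Heterogeneous P(1 -> 1) and P(0 -> 1), drawn from Beta distributions (assumption).
    p_stay = rng.beta(4, 2, n_units)   # state-dependence parameter
    p_enter = rng.beta(2, 4, n_units)  # levels parameter
    y = np.zeros((n_units, T), dtype=int)
    y[:, 0] = rng.random(n_units) < 0.5
    for t in range(1, T):
        p = np.where(y[:, t - 1] == 1, p_stay, p_enter)
        y[:, t] = rng.random(n_units) < p
    return y, p_stay, p_enter

def mle_transitions(y):
    """Per-unit MLE: observed transition frequencies.
    Undefined (0/0) when a unit never visits a state -- the short-T
    problem that bias correction or MIMSE-type estimators address."""
    prev, curr = y[:, :-1], y[:, 1:]
    n1 = (prev == 1).sum(axis=1)                    # periods spent in state 1
    stay = ((prev == 1) & (curr == 1)).sum(axis=1)  # 1 -> 1 transitions
    n0 = (prev == 0).sum(axis=1)
    enter = ((prev == 0) & (curr == 1)).sum(axis=1)
    with np.errstate(invalid="ignore"):
        return stay / n1, enter / n0

y, p_stay, p_enter = simulate_panel()
mle_stay, mle_enter = mle_transitions(y)
ok = ~np.isnan(mle_stay)
print("share of units with a defined MLE for P(1->1):", ok.mean())
print("MSE of the MLE for P(1->1):", np.mean((mle_stay - p_stay)[ok] ** 2))
```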


    Bayesian semiparametric estimation of discrete duration models: an application of the Dirichlet process prior

    JOURNAL OF APPLIED ECONOMETRICS, Issue 1 2001
    Michele Campolieti
    This paper proposes a Bayesian estimator for a discrete time duration model which incorporates a non-parametric specification of the unobserved heterogeneity distribution, through the use of a Dirichlet process prior. This estimator offers distinct advantages over the Nonparametric Maximum Likelihood estimator of this model. First, it allows for exact finite sample inference. Second, it is easily estimated and mixed with flexible specifications of the baseline hazard. An application of the model to employment duration data from the Canadian province of New Brunswick is provided. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Empirical Likelihood-Based Inference in Conditional Moment Restriction Models

    ECONOMETRICA, Issue 6 2004
    Yuichi Kitamura
    This paper proposes an asymptotically efficient method for estimating models with conditional moment restrictions. Our estimator generalizes the maximum empirical likelihood estimator (MELE) of Qin and Lawless (1994). Using a kernel smoothing method, we efficiently incorporate the information implied by the conditional moment restrictions into our empirical likelihood-based procedure. This yields a one-step estimator which avoids estimating optimal instruments. Our likelihood ratio-type statistic for parametric restrictions does not require the estimation of variance, and achieves asymptotic pivotalness implicitly. The estimation and testing procedures we propose are normalization invariant. Simulation results suggest that our new estimator works remarkably well in finite samples. [source]
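
    The conditional, kernel-smoothed construction is involved; the sketch below (Python; invented data) illustrates only the unconditional building block it generalizes, the Qin–Lawless empirical likelihood for the moment restriction E[x − θ] = 0, profiling out the observation weights through the usual Lagrange-multiplier tilt.

```python
import numpy as np
from scipy.optimize import brentq

def el_logratio(x, theta):
    """Empirical log-likelihood ratio for the restriction E[x - theta] = 0."""
    g = x - theta
    if g.min() >= 0 or g.max() <= 0:      # theta outside the convex hull
        return -np.inf
    # Valid multipliers keep 1 + lam * g_i > 0 for every observation.
    lo = (-1 + 1e-10) / g.max()
    hi = (-1 + 1e-10) / g.min()
    score = lambda lam: np.sum(g / (1 + lam * g))
    lam = brentq(score, lo, hi)           # score is decreasing; unique root
    return -np.sum(np.log1p(lam * g))     # sum of log(n * p_i)

rng = np.random.default_rng(1)
x = rng.exponential(2.0, size=200)

# MELE of the mean: the value maximizing the EL ratio (here, the sample mean).
grid = np.linspace(x.mean() - 0.5, x.mean() + 0.5, 201)
llr = np.array([el_logratio(x, t) for t in grid])
print("MELE:", grid[llr.argmax()], " sample mean:", x.mean())
# Wilks-type calibration: -2 * llr ~ chi2(1) gives an EL confidence interval.
```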


    Parametric estimation for the location parameter for symmetric distributions using moving extremes ranked set sampling with application to trees data

    ENVIRONMETRICS, Issue 7 2003
    Mohammad Fraiwan Al-Saleh
    Abstract A modification of ranked set sampling (RSS) called moving extremes ranked set sampling (MERSS) is considered parametrically, for the location parameter of symmetric distributions. A maximum likelihood estimator (MLE) and a modified MLE are considered and their properties are studied. Their efficiencies with respect to the corresponding estimators based on simple random sampling (SRS) are compared for the case of the normal distribution. The method is studied under both perfect and imperfect ranking (with error in ranking). It appears that these estimators can be real competitors to the MLE using SRS. The procedure is illustrated using tree data. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Estimation of rate ratio and relative difference in matched-pairs under inverse sampling

    ENVIRONMETRICS, Issue 6 2001
    Kung-Jong Lui
    Abstract To increase the efficiency of a study and to eliminate the effects of some nuisance confounders, we may consider employing a matched-pair design. Under the commonly assumed quadrinomial sampling, in which the total number of matched pairs is fixed, we note that the maximum likelihood estimator (MLE) of the rate ratio (RR) has an infinitely large bias and no finite variance, and so does the MLE of the relative difference (RD). To avoid this theoretical concern, this paper suggests the use of inverse sampling and notes that the MLEs of these parameters, which are actually of the same forms as those under quadrinomial sampling, are also the uniformly minimum variance unbiased estimators (UMVUEs) under the proposed sampling. This paper further derives the exact variances of these MLEs and the corresponding UMVUEs of these variances. Finally, this paper includes a discussion on interval estimation of the RR and RD using these results as well. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    A Structural Equation Approach to Models with Spatial Dependence

    GEOGRAPHICAL ANALYSIS, Issue 2 2008
    Johan H. L. Oud
    We introduce the class of structural equation models (SEMs) and corresponding estimation procedures into a spatial dependence framework. SEM allows both latent and observed variables within one and the same (causal) model. Compared with models with observed variables only, this feature makes it possible to obtain a closer correspondence between theory and empirics, to explicitly account for measurement errors, and to reduce multicollinearity. We extend the standard SEM maximum likelihood estimator to allow for spatial dependence and propose using easily accessible SEM software such as LISREL 8 and Mx. We present an illustration based on Anselin's Columbus, OH, crime data set. Furthermore, we combine the spatial lag model with the latent multiple-indicators multiple-causes (MIMIC) model and discuss estimation of this latent spatial lag model. We present an illustration based on the Anselin crime data set again. [source]


    Power of Tests for a Dichotomous Independent Variable Measured with Error

    HEALTH SERVICES RESEARCH, Issue 3 2008
    Daniel F. McCaffrey
    Objective. To examine the implications for statistical power of using predicted probabilities for a dichotomous independent variable, rather than the actual variable. Data Sources/Study Setting. An application uses 271,479 observations from the 2000 to 2002 CAHPS Medicare Fee-for-Service surveys. Study Design and Data. A methodological study with simulation results and a substantive application to previously collected data. Principal Findings. Researchers often must employ key dichotomous predictors that are unobserved but for which predictions exist. We consider three approaches to such data: the classification estimator (1); the direct substitution estimator (2); the partial information maximum likelihood estimator (3, PIMLE). The efficiency of (1) (its power relative to testing with the true variable) roughly scales with the square of one minus the classification error. The efficiency of (2) roughly scales with the R2 for predicting the unobserved dichotomous variable, and is usually more powerful than (1). Approach (3) is most powerful, but for testing differences in means of 0.2–0.5 standard deviations, (2) is typically more than 95 percent as efficient as (3). Conclusions. The information loss from not observing actual values of dichotomous predictors can be quite large. Direct substitution is easy to implement and interpret and nearly as efficient as the PIMLE. [source]
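
    A small simulation in this spirit (all parameter values and the construction of the predicted probability are illustrative, not the paper's design) compares the classification estimator, which regresses on the dichotomized prediction, with the direct substitution estimator, which regresses on the predicted probability itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def one_rep(n=2000, effect=0.3, r2=0.5):
    """y = effect * d + noise, with d unobserved; we only see a noisy score."""
    d = rng.random(n) < 0.5
    # A predictor of d whose strength is governed by r2 (illustrative construction).
    score = d + rng.normal(0, np.sqrt((1 - r2) / r2 * 0.25), n)
    p = 1 / (1 + np.exp(-4 * (score - 0.5)))        # predicted probability of d = 1
    y = effect * d + rng.normal(0, 1, n)

    def slope(x):                                    # OLS slope of y on x
        x = x - x.mean()
        return x @ (y - y.mean()) / (x @ x)

    return slope(p >= 0.5), slope(p)                 # classification vs substitution

reps = np.array([one_rep() for _ in range(500)])
for name, est in zip(["classification", "direct substitution"], reps.T):
    print(f"{name:>20}: mean {est.mean():.3f}, sd {est.std():.3f}")
```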


    On Estimation in M/G/c/c Queues

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 6 2001
    Mei Ling Huang
    We derive the minimum variance unbiased estimator (MVUE) and the maximum likelihood estimator (MLE) of the stationary probability function (pf) of the number of customers in a collection of independent M/G/c/c subsystems. It is assumed that the offered load and number of servers in each subsystem are unknown. We assume that observations of the total number of customers in the system are utilized because it may be impractical or impossible to observe individual server occupancies. Both estimators depend on the R distribution (the distribution of the sum of independent right truncated Poisson random variables) and R numbers. [source]
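
    For a single M/G/c/c subsystem the stationary pf is the right-truncated Poisson (Erlang loss) distribution, and an MLE of an unknown offered load can be computed numerically. The sketch below (Python; invented load and sample size) shows this single-subsystem building block rather than the multi-subsystem R-distribution machinery of the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def log_pf(k, load, c):
    """log P(N = k) for M/G/c/c: Poisson(load) right-truncated to {0, ..., c}."""
    j = np.arange(c + 1)
    log_terms = j * np.log(load) - gammaln(j + 1)
    log_norm = np.logaddexp.reduce(log_terms)        # log of the truncation constant
    return k * np.log(load) - gammaln(k + 1) - log_norm

rng = np.random.default_rng(3)
c, true_load = 10, 6.0

# Simulate stationary occupancy observations from the truncated Poisson pf.
support = np.arange(c + 1)
pf = np.exp(log_pf(support, true_load, c))
obs = rng.choice(support, size=400, p=pf / pf.sum())

# MLE of the offered load: maximize the truncated-Poisson log-likelihood.
neg_ll = lambda a: -np.sum(log_pf(obs, a, c))
res = minimize_scalar(neg_ll, bounds=(1e-6, 50.0), method="bounded")
print("MLE of offered load:", res.x)  # close to 6.0 for this sample size
```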


    Fixed or random contemporary groups in genetic evaluation for litter size in pigs using a single trait repeatability animal model

    JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 1 2003
    D. Babot
    Summary The importance of using fixed or random contemporary groups in the genetic evaluation of litter size in pigs was analysed by using farm and simulated data. Farm data were from four Spanish pig breeding populations, two Landrace (13 084 and 13 619 records) and two Large White (2762 and 8455 records). A simulated population (200 sows and 10 boars) selected for litter size, in which litter size was simulated using a repeatability animal model with random herd-year-season (HYS) effects, was used to obtain simulated data. With farm data, the goodness-of-fit and the predictive ability of a repeatability animal model were calculated under several definitions of the HYS effect. A residual maximum likelihood estimate of the HYS variance in each population was obtained as well. HYS was considered as either fixed or random, with different numbers of individuals per level. Results from farm data showed that the HYS variance was small in relation to the total variance (ranging from 0.01 to 0.04). Treating the HYS effect as fixed reduced the residual variance, but the size of the HYS levels does not by itself explain the goodness-of-fit of the model. The results obtained by simulation showed that the predictive ability of the model is better for random than for fixed HYS models. However, the improvement in predictive ability does not lead to a significant increase in genetic response. Finally, results showed that random HYS models biased the estimates of genetic response when there is an environmental trend. [source]


    Asymmetric power distribution: Theory and applications to risk measurement

    JOURNAL OF APPLIED ECONOMETRICS, Issue 5 2007
    Ivana Komunjer
    Theoretical literature in finance has shown that the risk of financial time series can be well quantified by their expected shortfall, also known as the tail value-at-risk. In this paper, I construct a parametric estimator for the expected shortfall based on a flexible family of densities, called the asymmetric power distribution (APD). The APD family extends the generalized power distribution to cases where the data exhibits asymmetry. The first contribution of the paper is to provide a detailed description of the properties of an APD random variable, such as its quantiles and expected shortfall. The second contribution of the paper is to derive the asymptotic distribution of the APD maximum likelihood estimator (MLE) and construct a consistent estimator for its asymptotic covariance matrix. The latter is based on the APD score whose analytic expression is also provided. A small Monte Carlo experiment examines the small sample properties of the MLE and the empirical coverage of its confidence intervals. An empirical application to four daily financial market series reveals that returns tend to be asymmetric, with innovations which cannot be modeled by either Laplace (double-exponential) or Gaussian distribution, even if we allow the latter to be asymmetric. In an out-of-sample exercise, I compare the performances of the expected shortfall forecasts based on the APD-GARCH, Skew-t-GARCH and GPD-EGARCH models. While the GPD-EGARCH 1% expected shortfall forecasts seem to outperform the competitors, all three models perform equally well at forecasting the 5% and 10% expected shortfall. Copyright © 2007 John Wiley & Sons, Ltd. [source]
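
    The APD density and its shortfall formulas are not reproduced in the abstract. As a generic illustration of parametric versus empirical expected-shortfall estimation, the sketch below uses a Gaussian stand-in with its closed-form ES (an assumption for illustration, not the paper's APD expressions):

```python
import numpy as np
from scipy.stats import norm

def es_empirical(returns, alpha):
    """Empirical expected shortfall: mean return below the alpha-quantile."""
    q = np.quantile(returns, alpha)
    return returns[returns <= q].mean()

def es_gaussian(mu, sigma, alpha):
    """Closed-form ES of N(mu, sigma^2): mu - sigma * phi(z_alpha) / alpha."""
    z = norm.ppf(alpha)
    return mu - sigma * norm.pdf(z) / alpha

rng = np.random.default_rng(4)
r = rng.standard_t(df=5, size=2500) * 0.01           # heavy-tailed fake "returns"
mu_hat, sigma_hat = r.mean(), r.std(ddof=1)          # Gaussian fit

for alpha in (0.01, 0.05, 0.10):
    print(f"alpha={alpha:4}: empirical ES {es_empirical(r, alpha):+.4f}, "
          f"Gaussian ES {es_gaussian(mu_hat, sigma_hat, alpha):+.4f}")
# The gap at alpha = 0.01 illustrates why a flexible family such as the APD helps.
```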


    Estimation of multivariate models for time series of possibly different lengths

    JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2006
    Andrew J. Patton
    We consider the problem of estimating parametric multivariate density models when unequal amounts of data are available on each variable. We focus in particular on the case that the unknown parameter vector may be partitioned into elements relating only to a marginal distribution and elements relating to the copula. In such a case we propose using a multi-stage maximum likelihood estimator (MSMLE) based on all available data rather than the usual one-stage maximum likelihood estimator (1SMLE) based only on the overlapping data. We provide conditions under which the MSMLE is not less asymptotically efficient than the 1SMLE, and we examine the small sample efficiency of the estimators via simulations. The analysis in this paper is motivated by a model of the joint distribution of daily Japanese yen–US dollar and euro–US dollar exchange rates. We find significant evidence of time variation in the conditional copula of these exchange rates, and evidence of greater dependence during extreme events than under the normal distribution. Copyright © 2006 John Wiley & Sons, Ltd. [source]
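
    A minimal sketch of the multi-stage idea under simplifying assumptions (Gaussian margins, a Gaussian copula, invented series lengths; the dependence parameter is estimated by a normal-scores moment step rather than a full second-stage MLE): fit each margin on all of its own data, then estimate the copula parameter from the overlapping observations transformed by the fitted margins.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Unequal samples: x has 1500 observations, y only 600, with 600 overlapping days.
n_x, n_y = 1500, 600
rho = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_y)
x_extra = rng.normal(1.0, 2.0, n_x - n_y)            # x-only history, same margin
x = np.concatenate([x_extra, 1.0 + 2.0 * z[:, 0]])   # margin N(1, 2^2)
y = 0.5 + 0.8 * z[:, 1]                              # margin N(0.5, 0.8^2)

# Stage 1: marginal Gaussian MLEs, each using ALL available data for that series.
mu_x, sd_x = x.mean(), x.std()
mu_y, sd_y = y.mean(), y.std()

# Stage 2: Gaussian-copula dependence from the overlap, via normal scores.
u = norm.ppf(norm.cdf(x[-n_y:], mu_x, sd_x))
v = norm.ppf(norm.cdf(y, mu_y, sd_y))
rho_hat = np.mean(u * v) / np.sqrt(np.mean(u**2) * np.mean(v**2))
print("copula correlation estimate:", rho_hat, "(true 0.6)")
```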


    The Heckman Correction for Sample Selection and Its Critique

    JOURNAL OF ECONOMIC SURVEYS, Issue 1 2000
    Patrick Puhani
    This paper gives a short overview of Monte Carlo studies on the usefulness of Heckman's (1976, 1979) two-step estimator for estimating selection models. Such models occur frequently in empirical work, especially in microeconometrics when estimating wage equations or consumer expenditures. It is shown that exploratory work to check for collinearity problems is strongly recommended before deciding on which estimator to apply. In the absence of collinearity problems, the full-information maximum likelihood estimator is preferable to the limited-information two-step method of Heckman, although the latter also gives reasonable results. If, however, collinearity problems prevail, subsample OLS (or the Two-Part Model) is the most robust amongst the simple-to-calculate estimators. [source]
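
    For reference, a compact sketch of the two-step estimator itself on simulated data (made-up coefficients and an exclusion restriction): a first-stage probit for selection, then OLS on the selected sample augmented with the inverse Mills ratio.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)                  # outcome-equation regressor
z = rng.normal(size=n)                  # exclusion restriction (selection only)
u, e = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], n).T

select = (0.5 + 1.0 * z + u) > 0        # selection equation
y_star = 1.0 + 2.0 * x + e              # latent outcome (e.g. wage) equation
y = np.where(select, y_star, np.nan)    # observed only if selected

# Step 1: probit of selection on (1, z); inverse Mills ratio at the linear index.
probit = sm.Probit(select.astype(float), sm.add_constant(z)).fit(disp=0)
idx = probit.fittedvalues               # linear predictor of the probit
mills = norm.pdf(idx) / norm.cdf(idx)

# Step 2: OLS of y on (1, x, mills) over the selected subsample.
X2 = sm.add_constant(np.column_stack([x[select], mills[select]]))
ols = sm.OLS(y[select], X2).fit()
print(ols.params)   # approx [1.0, 2.0, rho * sigma_e = 0.7]
```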


    Bankruptcy Prediction: Evidence from Korean Listed Companies during the IMF Crisis

    JOURNAL OF INTERNATIONAL FINANCIAL MANAGEMENT & ACCOUNTING, Issue 3 2000
    Joo-Ha Nam
    This paper empirically studies the predictive model of business failure using the sample of listed companies that went bankrupt during the period from 1997 to 1998, when the deep recession driven by the IMF crisis started in Korea. A logit maximum likelihood estimator is employed as the statistical technique. The model demonstrated decent prediction accuracy and robustness. The Type I accuracy is 80.4 per cent and the Type II accuracy is 73.9 per cent. The accuracy remains almost at the same level when the model is applied to an independent holdout sample. In addition to building a bankruptcy prediction model, this paper finds that most of the firms that went bankrupt during the Korean economic crisis from 1997 to 1998 had shown signs of financial distress long before the crisis. Bankruptcy probabilities of the sample are consistently high during the period from 1991 to 1996. The evidence of this paper can be seen as complementary to the perspective that traces the Asian economic crisis to the vulnerabilities of corporate governance in Asian countries. [source]


    On-line expectation–maximization algorithm for latent data models

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2009
    Olivier Cappé
    Summary. We propose a generic on-line (also sometimes called adaptive or recursive) version of the expectation–maximization (EM) algorithm applicable to latent variable models of independent observations. Compared with the algorithm of Titterington, this approach is more directly connected to the usual EM algorithm and does not rely on integration with respect to the complete-data distribution. The resulting algorithm is usually simpler and is shown to achieve convergence to the stationary points of the Kullback–Leibler divergence between the marginal distribution of the observation and the model distribution at the optimal rate, i.e. that of the maximum likelihood estimator. In addition, the approach proposed is also suitable for conditional (or regression) models, as illustrated in the case of the mixture of linear regressions model. [source]
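
    A minimal sketch of the idea for a two-component Gaussian mixture with known unit variances (a special case, with made-up step sizes and starting values): run a stochastic approximation on the expected sufficient statistics, re-maximizing after each observation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Data stream: mixture 0.3 * N(-2, 1) + 0.7 * N(2, 1).
def stream(n):
    comp = rng.random(n) < 0.7
    return np.where(comp, rng.normal(2, 1, n), rng.normal(-2, 1, n))

w, mu = np.array([0.5, 0.5]), np.array([-1.0, 1.0])   # initial guesses
s0, s1 = w.copy(), w * mu                             # running sufficient statistics

for n, x in enumerate(stream(20000), start=1):
    gamma = n ** -0.6                                  # step-size exponent in (1/2, 1]
    # E-step for the single new point: posterior responsibilities.
    dens = w * np.exp(-0.5 * (x - mu) ** 2)
    r = dens / dens.sum()
    # Stochastic-approximation update of the expected sufficient statistics.
    s0 += gamma * (r - s0)
    s1 += gamma * (r * x - s1)
    # M-step: parameters as a function of the running statistics.
    w, mu = s0 / s0.sum(), s1 / s0

print("weights:", w, " means:", mu)   # approx [0.3, 0.7] and [-2, 2]
```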


    Semiparametric estimation by model selection for locally stationary processes

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2006
    Sébastien Van Bellegem
    Summary. Over recent decades increasing attention has been paid to the problem of how to fit a parametric model of time series with time-varying parameters. A typical example is given by autoregressive models with time-varying parameters. We propose a procedure to fit such time-varying models to general non-stationary processes. The estimator is a maximum Whittle likelihood estimator on sieves. The results do not assume that the observed process belongs to a specific class of time-varying parametric models. We discuss in more detail the fitting of time-varying AR(p) processes, for which we treat the problem of the selection of the order p, and we propose an iterative algorithm for the computation of the estimator. A comparison with model selection by Akaike's information criterion is provided through simulations. [source]


    A theory of statistical models for Monte Carlo integration

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2003
    A. Kong
    Summary. The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of the information for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite dimensional object. Nonetheless, from simulated data the baseline measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. By contrast with Geyer's retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator in the submodel may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers. [source]
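
    The "simple formula" has an iterative fixed-point form when several samplers are combined. Under assumptions (two samplers, invented unnormalized densities), the sketch below iterates it to recover the ratio of normalizing constants, i.e. the integrals in question:

```python
import numpy as np

rng = np.random.default_rng(8)

# Two unnormalized densities on the real line (invented example):
#   q1(x) = exp(-x^2 / 2)          -> z1 = sqrt(2*pi)
#   q2(x) = exp(-(x - 1)^2 / 8)    -> z2 = sqrt(8*pi), so z2/z1 = 2
q1 = lambda x: np.exp(-0.5 * x**2)
q2 = lambda x: np.exp(-((x - 1.0) ** 2) / 8.0)

n1, n2 = 4000, 4000
x = np.concatenate([rng.normal(0, 1, n1), rng.normal(1, 2, n2)])  # pooled draws
Q = np.vstack([q1(x), q2(x)])                                     # 2 x N matrix

z = np.array([1.0, 1.0])
for _ in range(200):
    # Fixed point: z_j = sum_i q_j(x_i) / sum_k n_k q_k(x_i) / z_k
    denom = (np.array([n1, n2])[:, None] * Q / z[:, None]).sum(axis=0)
    z = (Q / denom).sum(axis=1)
    z = z / z[0]      # only ratios of normalizing constants are identified
print("estimated z2/z1:", z[1], "(true 2.0)")
```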


    Correlating two continuous variables subject to detection limits in the context of mixture distributions

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 5 2005
    Haitao Chu
    Summary. In individuals who are infected with human immunodeficiency virus (HIV), distributions of quantitative HIV ribonucleic acid measurements may be highly left censored with an extra spike below the limit of detection LD of the assay. A two-component mixture model with the lower component entirely supported on [0, LD] is recommended to better model the extra spike in univariate analysis. Let LD1 and LD2 be the limits of detection for the two HIV viral load measurements. When estimating the correlation coefficient between two different measures of viral load obtained from each of a sample of patients, a bivariate Gaussian mixture model is recommended to better model the extra spike on [0, LD1] and [0, LD2] when the proportion below LD is incompatible with the left-hand tail of a bivariate Gaussian distribution. When the proportion of both variables falling below LD is very large, the parameters of the lower component may not be estimable since almost all observations from the lower component fall below LD. A partial solution is to assume that the lower component's entire support is on [0, LD1]×[0, LD2]. Maximum likelihood is used to estimate the parameters of the lower and higher components. To evaluate whether there is a lower component, we apply a Monte Carlo approach to assess the p-value of the likelihood ratio test and two information criteria: a bootstrap-based information criterion and a cross-validation-based information criterion. We provide simulation results to evaluate the performance and compare it with two ad hoc estimators and a single-component bivariate Gaussian likelihood estimator. These methods are applied to the data from a cohort study of HIV-infected men in Rio de Janeiro, Brazil, and the data from the Women's Interagency HIV oral study. These results emphasize the need for caution when estimating correlation coefficients from data with a large proportion of non-detectable values when the proportion below LD is incompatible with the left-hand tail of a bivariate Gaussian distribution. [source]
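
    As a univariate illustration of the censoring mechanics only (not the paper's two-component bivariate mixture), the sketch below maximizes the likelihood for Gaussian data left-censored at a detection limit LD, with invented values, using the Φ(LD) mass for the non-detects:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(9)
LD = 1.0
full = rng.normal(1.5, 1.0, 300)         # latent measurements
detected = full[full >= LD]              # observed values
n_cens = (full < LD).sum()               # only the count of non-detects is known

def neg_loglik(params):
    """Left-censored Gaussian: density for detects, Phi(LD) mass for non-detects."""
    mu, log_sd = params
    sd = np.exp(log_sd)                  # keep sd positive via log-parametrization
    ll = norm.logpdf(detected, mu, sd).sum()
    ll += n_cens * norm.logcdf(LD, mu, sd)
    return -ll

res = minimize(neg_loglik, x0=[detected.mean(), 0.0], method="Nelder-Mead")
mu_hat, sd_hat = res.x[0], np.exp(res.x[1])
print(f"mu_hat={mu_hat:.3f}, sd_hat={sd_hat:.3f} (true 1.5, 1.0)")
# A naive mean over detects only would be biased upward; the censored
# likelihood uses the non-detect count to correct for the missing left tail.
```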


    Estimation in nonstationary random coefficient autoregressive models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2009
    István Berkes
    Primary 62F05; secondary 62M10. Abstract. We investigate the estimation of parameters in the random coefficient autoregressive (RCA) model X_k = (β + b_k)X_{k-1} + e_k, where (β, ω², σ²) is the parameter of the process, with ω² = var(b_0) and σ² = var(e_0). We consider a nonstationary RCA process satisfying E log |β + b_0| ≥ 0 and show that σ² cannot be estimated by the quasi-maximum likelihood method. The asymptotic normality of the quasi-maximum likelihood estimator for (β, ω²) is proven, so that the unit root problem does not exist in the RCA model. [source]


    Bootstrapping a weighted linear estimator of the ARCH parameters

    JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2009
    Arup Bose
    Abstract. A standard assumption while deriving the asymptotic distribution of the quasi-maximum likelihood estimator in ARCH models is that all ARCH parameters must be strictly positive. This assumption is also crucial in deriving the limit distribution of appropriate linear estimators (LE). We propose a weighted linear estimator (WLE) of the ARCH parameters in the classical ARCH model and show that its limit distribution is multivariate normal even when some of the ARCH coefficients are zero. The asymptotic dispersion matrix involves unknown quantities. We consider an appropriate bootstrapped version of this WLE and prove that it is asymptotically valid in the sense that the bootstrapped distribution (given the data) is a consistent estimate (in probability) of the distribution of the WLE. Although we do not show theoretically that the bootstrap outperforms the normal approximation, our simulations demonstrate that it yields better approximations than the limiting normal. [source]


    Quasi-maximum likelihood estimation of periodic GARCH and periodic ARMA-GARCH processes

    JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2009
    Abdelhakim Aknouche
    Primary: 62F12; secondary: 62M10, 91B84. Abstract. This article establishes the strong consistency and asymptotic normality (CAN) of the quasi-maximum likelihood estimator (QMLE) for generalized autoregressive conditionally heteroscedastic (GARCH) and autoregressive moving-average (ARMA)-GARCH processes with periodically time-varying parameters. We first give a necessary and sufficient condition for the existence of a strictly periodically stationary solution of the periodic GARCH (PGARCH) equation. As a result, it is shown that the moment of some positive order of the PGARCH solution is finite, under which we prove the strong consistency and asymptotic normality of the QMLE for a PGARCH process without any condition on its moments and for a periodic ARMA-GARCH (PARMA-PGARCH) under mild conditions. [source]


    A light-tailed conditionally heteroscedastic model with applications to river flows

    JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2008
    Péter Elek
    Abstract. A conditionally heteroscedastic model, different from the more commonly used autoregressive moving average–generalized autoregressive conditionally heteroscedastic (ARMA-GARCH) processes, is established and analysed here. The time-dependent variance of innovations passing through an ARMA filter is conditioned on the lagged values of the generated process, rather than on the lagged innovations, and is defined to be asymptotically proportional to those past values. Designed this way, the model incorporates certain feedback from the modelled process, the innovation is no longer of GARCH type, and all moments of the modelled process are finite provided the same is true for the generating noise. The article gives the condition of stationarity, and proves consistency and asymptotic normality of the Gaussian quasi-maximum likelihood estimator of the variance parameters, even though the estimated parameters of the linear filter contain an error. An analysis of six diurnal water discharge series observed along Rivers Danube and Tisza in Hungary demonstrates the usefulness of such a model. The effect of lagged river discharge turns out to be highly significant on the variance of innovations, and nonparametric estimation confirms its approximate linearity. Simulations from the new model preserve well the probability distribution, the high quantiles, the tail behaviour and the high-level clustering of the original series, further justifying model choice. [source]


    Impact of the Sampling Rate on the Estimation of the Parameters of Fractional Brownian Motion

    JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2006
    Zhengyuan Zhu
    Primary 60G18; secondary 62D05, 62F12. Abstract. Fractional Brownian motion is a mean-zero self-similar Gaussian process with stationary increments. Its covariance depends on two parameters, the self-similarity parameter H and the variance C. Suppose that one wants to estimate these parameters optimally by using n equally spaced observations. How should these observations be distributed? We show that the spacing of the observations does not affect the estimation of H (this is due to the self-similarity of the process), but the spacing does affect the estimation of the variance C. For example, if the observations are equally spaced on [0, n] (unit spacing), the rate of convergence of the maximum likelihood estimator (MLE) of the variance C is n^(1/2). However, if the observations are equally spaced on [0, 1] (1/n-spacing), or on [0, n^2] (n-spacing), the rate is slower, n^(1/2)/log n. We also determine the optimal choice of the spacing δ when it is constant, independent of the sample size n. While the rate of convergence of the MLE of C is n^(1/2) in this case, irrespective of the value of δ, the value of the optimal spacing depends on H. It is 1 (unit spacing) if H = 1/2 but is very large if H is close to 1. [source]


    Maximum Likelihood Estimation for a First-Order Bifurcating Autoregressive Process with Exponential Errors

    JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2005
    J. Zhou
    Abstract. Exact and asymptotic distributions of the maximum likelihood estimator of the autoregressive parameter in a first-order bifurcating autoregressive process with exponential innovations are derived. The limit distributions for the stationary, critical and explosive cases are unified via a single pivot using a random normalization. The pivot is shown to be asymptotically exponential for all values of the autoregressive parameter. [source]


    Efficient use of higher-lag autocorrelations for estimating autoregressive processes

    JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2002
    Laurence Broze
    The Yule–Walker estimator is commonly used in time-series analysis, as a simple way to estimate the coefficients of an autoregressive process. Under strong assumptions on the noise process, this estimator possesses the same asymptotic properties as the Gaussian maximum likelihood estimator. However, when the noise is a weak one, other estimators based on higher-order empirical autocorrelations can provide substantial efficiency gains. This is illustrated by means of a first-order autoregressive process with a Markov-switching white noise. We show how to optimally choose a linear combination of a set of estimators based on empirical autocorrelations. The asymptotic variance of the optimal estimator is derived. Empirical experiments based on simulations show that the new estimator performs well on the illustrative model. [source]
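
    For an AR(1) process, ρ(k) = φ^k, so every autocorrelation lag yields an estimator of the coefficient. The simulation below (a simple two-estimator comparison with invented parameters, not the paper's optimal linear combination) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(10)
phi = 0.6

def ar1(n=400):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def acf(x, k):
    x = x - x.mean()
    return (x[:-k] @ x[k:]) / (x @ x)

est_yw, est_lag2 = [], []
for _ in range(1000):
    x = ar1()
    r1, r2 = acf(x, 1), acf(x, 2)
    est_yw.append(r1)            # Yule-Walker: phi_hat = rho_hat(1)
    est_lag2.append(r2 / r1)     # higher-lag estimator: rho(2)/rho(1) = phi

for name, e in [("rho(1)", est_yw), ("rho(2)/rho(1)", est_lag2)]:
    e = np.array(e)
    print(f"{name:>14}: mean {e.mean():.3f}, sd {e.std():.3f}")
# Under i.i.d. (strong) noise rho(1) does best; with weak noise, combinations
# of higher-lag estimators can dominate, which is the point of the paper.
```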


    Optimum step-stress accelerated life test plans for log-location-scale distributions

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 6 2008
    Haiming Ma
    Abstract This article presents new tools and methods for finding optimum step-stress accelerated life test plans. First, we present an approach to calculate the large-sample approximate variance of the maximum likelihood estimator of a quantile of the failure time distribution at use conditions from a step-stress accelerated life test. The approach allows for multistep stress changes and censoring for general log-location-scale distributions based on a cumulative exposure model. As an application of this approach, the optimum variance is studied as a function of shape parameter for both Weibull and lognormal distributions. Graphical comparisons among test plans using step-up, step-down, and constant-stress patterns are also presented. The results show that depending on the values of the model parameters and quantile of interest, each of the three test plans can be preferable in terms of optimum variance. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]


    Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 7 2004
    B. Chandrasekar
    Abstract Chen and Bhattacharyya [Exact confidence bounds for an exponential parameter under hybrid censoring, Commun Statist Theory Methods 17 (1988), 1857–1870] considered a hybrid censoring scheme and obtained the exact distribution of the maximum likelihood estimator of the mean of an exponential distribution along with an exact lower confidence bound. Childs et al. [Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution, Ann Inst Statist Math 55 (2003), 319–330] recently derived an alternative simpler expression for the distribution of the MLE. These authors also proposed a new hybrid censoring scheme and derived similar results for the exponential model. In this paper, we propose two generalized hybrid censoring schemes which have some advantages over the hybrid censoring schemes already discussed in the literature. We then derive the exact distribution of the maximum likelihood estimator as well as exact confidence intervals for the mean of the exponential distribution under these generalized hybrid censoring schemes. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004 [source]
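
    Under Type-I hybrid censoring the experiment stops at min(X(r), T), and when at least one failure is observed the MLE of the exponential mean is the total time on test divided by the number of failures. A simulation sketch with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(11)

def hybrid_type1_mle(theta=10.0, n=20, r=15, T=25.0):
    """One Type-I hybrid censored experiment; returns the MLE or None."""
    x = np.sort(rng.exponential(theta, n))
    t_stop = min(x[r - 1], T)              # stop at r-th failure or time T
    d = int((x <= t_stop).sum())           # observed failures
    if d == 0:
        return None                        # MLE undefined with no failures
    ttt = x[:d].sum() + (n - d) * t_stop   # total time on test
    return ttt / d

estimates = [hybrid_type1_mle() for _ in range(5000)]
estimates = np.array([e for e in estimates if e is not None])
print("mean of MLE:", estimates.mean(), "(true mean 10.0)")
# The exact distribution of this MLE, not just its mean, is what
# Chen-Bhattacharyya and Childs et al. derive for confidence bounds.
```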


    Estimating the Change Point of a Poisson Rate Parameter with a Linear Trend Disturbance

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 4 2006
    Marcus B. Perry
    Abstract Knowing when a process changed would simplify the search and identification of the special cause. In this paper, we compare the maximum likelihood estimator (MLE) of the process change point designed for linear trends to the MLE of the process change point designed for step changes when a linear trend disturbance is present. We conclude that the MLE of the process change point designed for linear trends outperforms the MLE designed for step changes when a linear trend disturbance is present. We also present an approach based on the likelihood function for estimating a confidence set for the process change point. We study the performance of this estimator when it is used with a cumulative sum (CUSUM) control chart and make direct performance comparisons with the estimated confidence sets obtained from the MLE for step changes. The results show that better confidence can be obtained using the MLE for linear trends when a linear trend disturbance is present. Copyright © 2005 John Wiley & Sons, Ltd. [source]
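
    For the simpler step-change benchmark in the paper's comparison, the MLE of the change point maximizes the profile Poisson likelihood over candidate split points. A sketch with invented rates (the linear-trend version would replace the constant post-change rate with a fitted trend):

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(12)

# Poisson counts: in-control rate 5 up to tau = 60, step change to 8 afterwards.
tau_true, n = 60, 100
y = np.concatenate([rng.poisson(5, tau_true), rng.poisson(8, n - tau_true)])

def profile_loglik(tau):
    """Poisson log-likelihood with separate MLE rates before/after tau."""
    seg = [(y[:tau], y[:tau].mean()), (y[tau:], y[tau:].mean())]
    return sum((s * np.log(lam) - lam - gammaln(s + 1)).sum()
               for s, lam in seg)

cands = range(5, n - 5)                     # keep both segments non-trivial
tau_hat = max(cands, key=profile_loglik)
print("estimated change point:", tau_hat, "(true 60)")
# Perry's point: if the disturbance is a linear trend rather than a step,
# replacing the post-change constant rate with a fitted trend improves tau_hat.
```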


    On the incidence,prevalence relation and length-biased sampling

    THE CANADIAN JOURNAL OF STATISTICS, Issue 2 2009
    Vittorio Addona
    MSC 2000: Primary 62N99; secondary 62G99. Abstract For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), i.e. P/(1 − P) = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators of P and µ into P/(1 − P) = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada [source]
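
    The substitution estimator is simple arithmetic. A toy worked example (made-up numbers, not the CSHA estimates) using P/(1 − P) = λ × µ:

```python
# Toy numbers, not the CSHA estimates: prevalence 7.5%, mean duration 5 years.
p_hat = 0.075      # estimated prevalence P
mu_hat = 5.0       # estimated mean disease duration in years

# Invert the prevalence-odds relation: lambda = P / ((1 - P) * mu).
lam_hat = p_hat / ((1 - p_hat) * mu_hat)
print(f"incidence rate: {lam_hat:.4f} per person-year "
      f"({1000 * lam_hat:.1f} per 1000 person-years)")
```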


    The Tobit model with a non-zero threshold

    THE ECONOMETRICS JOURNAL, Issue 3 2007
    Richard T. Carson
    Summary. The standard Tobit maximum likelihood estimator under a zero censoring threshold produces inconsistent parameter estimates when the constant censoring threshold γ is non-zero and unknown. Unfortunately, the recording of a zero rather than the actual censoring threshold value is typical of economic data. Non-trivial minimum purchase prices for most goods, fixed costs for doing business or trading, social customs such as those involving charitable donations, and informal administrative recording practices represent common examples of a non-zero constant censoring threshold where the constant threshold is not readily available to the econometrician. Monte Carlo results show that this bias can be extremely large in practice. A new estimator is proposed to estimate the unknown censoring threshold. It is shown that the estimator is superconsistent and follows an exponential distribution in large samples. Due to the superconsistency, the asymptotic distribution of the maximum likelihood estimator of the other parameters is not affected by the estimation uncertainty of the censoring threshold. [source]
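
    The abstract does not spell out the form of the proposed threshold estimator. A natural candidate with superconsistent, exponential-type behaviour is the minimum of the uncensored responses; the sketch below adopts that as an assumption on simulated data and compares a Tobit fit at the estimated threshold with a misspecified fit at zero:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(13)
n, beta, gamma = 2000, 1.5, 2.0               # gamma: unknown censoring threshold
x = rng.normal(1, 1, n)
y_star = beta * x + rng.normal(0, 1, n)
y = np.where(y_star > gamma, y_star, 0.0)     # zeros recorded below threshold

# Hypothetical threshold estimator: minimum uncensored response (assumption;
# such a minimum converges at rate n, faster than the root-n slope estimates).
gamma_hat = y[y > 0].min()

def tobit_negll(params, thresh):
    """Tobit log-likelihood with censoring at a known threshold 'thresh'."""
    b, log_s = params
    s = np.exp(log_s)
    cens = y == 0
    ll = norm.logpdf(y[~cens], b * x[~cens], s).sum()
    ll += norm.logcdf(thresh, b * x[cens], s).sum()
    return -ll

for thresh, label in [(0.0, "zero threshold (misspecified)"),
                      (gamma_hat, "estimated threshold")]:
    res = minimize(tobit_negll, x0=[1.0, 0.0], args=(thresh,), method="Nelder-Mead")
    print(f"{label:>30}: beta_hat = {res.x[0]:.3f} (true 1.5)")
```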