Maximum Likelihood Estimator


Selected Abstracts


Heterogeneity in dynamic discrete choice models

THE ECONOMETRICS JOURNAL, Issue 1 2010
Martin Browning
Summary: We consider dynamic discrete choice models with heterogeneity in both the levels parameter and the state dependence parameter. We first present an empirical analysis that motivates the theoretical analysis which follows. The theoretical analysis considers a simple two-state, first-order Markov chain model without covariates in which both transition probabilities are heterogeneous. Using such a model we are able to derive exact small-sample results for bias and mean squared error (MSE). We discuss the maximum likelihood approach and derive two novel estimators. The first is a bias-corrected version of the maximum likelihood estimator (MLE), while the second, which we term MIMSE, minimizes the integrated mean squared error. The MIMSE estimator is always well defined, has a closed-form expression and inherits the desirable large-sample properties of the MLE. Our main finding is that in almost all short-panel contexts the MIMSE significantly outperforms the other two estimators in terms of MSE. A final section extends the MIMSE estimator to allow for exogenous covariates. [source]
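
The MIMSE estimator itself is not reproduced here, but the small-sample problem it addresses is easy to see in a toy version of the two-state Markov chain setting: with short panels, the per-unit MLE of the transition probabilities frequently lands on the boundary or is undefined. The sketch below uses made-up transition probabilities and panel length.

```python
# Sketch: per-unit MLE of two-state Markov transition probabilities from a short
# panel, illustrating the boundary/undefined-estimate problem that motivates the
# bias-corrected MLE and MIMSE.  Illustrative only; not the paper's estimators.
import numpy as np

rng = np.random.default_rng(0)

def mle_transitions(path):
    """MLE of P(0->1) and P(1->0) from one observed 0/1 sequence."""
    n01 = n00 = n10 = n11 = 0
    for a, b in zip(path[:-1], path[1:]):
        if a == 0:
            n01 += b == 1
            n00 += b == 0
        else:
            n10 += b == 0
            n11 += b == 1
    p01 = n01 / (n00 + n01) if (n00 + n01) else np.nan  # undefined if state 0 never visited
    p10 = n10 / (n10 + n11) if (n10 + n11) else np.nan
    return p01, p10

def simulate(p01, p10, T):
    x = [int(rng.integers(0, 2))]
    for _ in range(T - 1):
        u = rng.random()
        if x[-1] == 0:
            x.append(1 if u < p01 else 0)
        else:
            x.append(0 if u < p10 else 1)
    return x

# Short panel (T = 8): many units hit the 0/1 boundary or are undefined.
est = np.array([mle_transitions(simulate(0.3, 0.4, 8)) for _ in range(2000)])
print("mean MLE of (p01, p10):", np.nanmean(est, axis=0), "true: (0.3, 0.4)")
print("share of boundary/undefined p01 estimates:",
      np.mean(np.isnan(est[:, 0]) | (est[:, 0] == 0) | (est[:, 0] == 1)))
```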


Bayesian semiparametric estimation of discrete duration models: an application of the Dirichlet process prior

JOURNAL OF APPLIED ECONOMETRICS, Issue 1 2001
Michele Campolieti
This paper proposes a Bayesian estimator for a discrete time duration model which incorporates a non-parametric specification of the unobserved heterogeneity distribution, through the use of a Dirichlet process prior. This estimator offers distinct advantages over the Nonparametric Maximum Likelihood estimator of this model. First, it allows for exact finite sample inference. Second, it is easily estimated and mixed with flexible specifications of the baseline hazard. An application of the model to employment duration data from the Canadian province of New Brunswick is provided. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Maximum likelihood estimators of population parameters from doubly left-censored samples

ENVIRONMETRICS, Issue 8 2006
Abou El-Makarim A. Aboueissa
Abstract: Left-censored data often arise in environmental contexts with one or more detection limits (DLs). Estimators of the population parameters are derived for left-censored data having two detection limits, DL1 and DL2, assuming an underlying normal distribution. Two different approaches for calculating the maximum likelihood estimates (MLEs) are given and examined. These methods also apply to lognormally distributed environmental data with two distinct detection limits. The performance of the new estimators is compared utilizing many simulated data sets. Examples are given illustrating the use of these methods utilizing a computer program given in the Appendix. Copyright © 2006 John Wiley & Sons, Ltd. [source]
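
As a rough illustration of one route to such estimates (direct numerical maximization of the censored-normal log-likelihood with two detection limits), here is a minimal sketch; the detection limits and parameter values are made up, and the paper's two specific computational approaches and the lognormal case (fit on the log scale) are not reproduced.

```python
# Sketch: maximum likelihood for N(mu, sigma^2) data left-censored at two detection
# limits DL1 and DL2 (values below a sample's DL are reported only as "< DL").
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
mu_true, sd_true, dl1, dl2 = 5.0, 2.0, 3.0, 4.5      # hypothetical values
x = rng.normal(mu_true, sd_true, size=200)
dl = np.where(rng.random(200) < 0.5, dl1, dl2)        # each sample has one of two DLs
observed = x >= dl                                    # detected values
y = np.where(observed, x, np.nan)                     # censored values unrecorded

def neg_loglik(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)
    ll_obs = norm.logpdf(y[observed], mu, sd).sum()
    ll_cens = norm.logcdf(dl[~observed], mu, sd).sum()    # P(X < DL) for censored samples
    return -(ll_obs + ll_cens)

res = minimize(neg_loglik, x0=[np.nanmean(y), np.log(np.nanstd(y))], method="Nelder-Mead")
print("MLE of (mu, sigma):", res.x[0], np.exp(res.x[1]))
```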


Maximum likelihood estimators of clock offset and skew under exponential delays

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2009
Jun Li
Abstract: Accurate clock synchronization is essential for many data network applications. Various algorithms for synchronizing clocks rely on estimators of the offset and skew parameters that describe the relation between times measured by two different clocks. Maximum likelihood estimation (MLE) of these parameters has previously been considered under the assumption of exponentially distributed network delays with known means. We derive the MLEs under the more common case of exponentially distributed network delays with unknown means and compare their mean-squared error properties to a recently proposed alternative estimator. We investigate the robustness of the derived MLE to the assumption of non-exponential network delays, and demonstrate the effectiveness of a bootstrap bias-correction technique. Copyright © 2009 John Wiley & Sons, Ltd. [source]
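
The sketch below is only a toy version of the setting: two-way timing exchanges with a fixed clock offset and exponential network delays, and the familiar minimum-delay (order-statistic) offset estimator. It is not the MLE of offset and skew with unknown delay means derived in the paper; all parameter values are hypothetical.

```python
# Sketch: two-way timing exchanges with clock offset theta and exponentially
# distributed network delays; offset recovered from minimum measured delays.
import numpy as np

rng = np.random.default_rng(2)
theta, d, lam_fwd, lam_rev, n = 0.8, 2.0, 1.0, 1.5, 500   # hypothetical values

# Measured one-way delays: forward picks up +theta, reverse picks up -theta.
u = d + theta + rng.exponential(1 / lam_fwd, n)   # client -> server
v = d - theta + rng.exponential(1 / lam_rev, n)   # server -> client

theta_hat = (u.min() - v.min()) / 2               # offset estimate
d_hat = (u.min() + v.min()) / 2                   # fixed (propagation) delay estimate
print("offset estimate:", theta_hat, "true:", theta)
print("fixed-delay estimate:", d_hat, "true:", d)
```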


Parametric estimation for the location parameter for symmetric distributions using moving extremes ranked set sampling with application to trees data

ENVIRONMETRICS, Issue 7 2003
Mohammad Fraiwan Al-Saleh
Abstract: A modification of ranked set sampling (RSS) called moving extremes ranked set sampling (MERSS) is considered parametrically, for the location parameter of symmetric distributions. A maximum likelihood estimator (MLE) and a modified MLE are considered and their properties are studied. Their efficiency with respect to the corresponding estimators based on simple random sampling (SRS) is compared for the case of the normal distribution. The method is studied under both perfect and imperfect ranking (with error in ranking). It appears that these estimators can be real competitors to the MLE based on SRS. The procedure is illustrated using tree data. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Estimation of rate ratio and relative difference in matched-pairs under inverse sampling

ENVIRONMETRICS, Issue 6 2001
Kung-Jong Lui
Abstract: To increase the efficiency of a study and to eliminate the effects of some nuisance confounders, we may consider employing a matched-pair design. Under the commonly assumed quadrinomial sampling, in which the total number of matched pairs is fixed, we note that the maximum likelihood estimator (MLE) of the rate ratio (RR) has an infinitely large bias and no finite variance, and so does the MLE of the relative difference (RD). To avoid this theoretical concern, this paper suggests the use of inverse sampling and notes that the MLEs of these parameters, which are actually of the same forms as those under quadrinomial sampling, are also the uniformly minimum variance unbiased estimators (UMVUEs) under the proposed sampling. This paper further derives the exact variances of these MLEs and the corresponding UMVUEs of these variances. Finally, this paper includes a discussion of interval estimation of the RR and RD using these results as well. Copyright © 2001 John Wiley & Sons, Ltd. [source]


A Structural Equation Approach to Models with Spatial Dependence

GEOGRAPHICAL ANALYSIS, Issue 2 2008
Johan H. L. Oud
We introduce the class of structural equation models (SEMs) and corresponding estimation procedures into a spatial dependence framework. SEM allows both latent and observed variables within one and the same (causal) model. Compared with models containing observed variables only, this feature makes it possible to obtain a closer correspondence between theory and empirics, to explicitly account for measurement errors, and to reduce multicollinearity. We extend the standard SEM maximum likelihood estimator to allow for spatial dependence and propose estimating the model with easily accessible SEM software such as LISREL 8 and Mx. We present an illustration based on Anselin's Columbus, OH, crime data set. Furthermore, we combine the spatial lag model with the latent multiple-indicators, multiple-causes model and discuss estimation of this latent spatial lag model. We present a second illustration based on the same Anselin crime data set. [source]


Power of Tests for a Dichotomous Independent Variable Measured with Error

HEALTH SERVICES RESEARCH, Issue 3 2008
Daniel F. McCaffrey
Objective. To examine the implications for statistical power of using predicted probabilities for a dichotomous independent variable, rather than the actual variable. Data Sources/Study Setting. An application uses 271,479 observations from the 2000 to 2002 CAHPS Medicare Fee-for-Service surveys. Study Design and Data. A methodological study with simulation results and a substantive application to previously collected data. Principal Findings. Researchers often must employ key dichotomous predictors that are unobserved but for which predictions exist. We consider three approaches to such data: (1) the classification estimator; (2) the direct substitution estimator; (3) the partial information maximum likelihood estimator (PIMLE). The efficiency of (1) (its power relative to testing with the true variable) roughly scales with the square of one minus the classification error. The efficiency of (2) roughly scales with the R2 for predicting the unobserved dichotomous variable, and (2) is usually more powerful than (1). Approach (3) is the most powerful, but for testing differences in means of 0.2–0.5 standard deviations, (2) is typically more than 95 percent as efficient as (3). Conclusions. The information loss from not observing actual values of dichotomous predictors can be quite large. Direct substitution is easy to implement and interpret and nearly as efficient as the PIMLE. [source]
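
A small simulation of approaches (1) and (2) conveys the flavor of the comparison: threshold the predicted probability versus plug the probability in directly as the regressor. All parameter values are hypothetical, and the PIMLE (approach 3) is not implemented here.

```python
# Sketch: power for the effect of an unobserved dichotomous predictor when only a
# predicted probability is available.  Compares (1) classification (threshold the
# probability) with (2) direct substitution (use the probability as the regressor).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def one_test(n=400, effect=0.3, noise=0.35):
    x = rng.integers(0, 2, n)                       # true dichotomous predictor
    p = np.clip(x * 0.8 + (1 - x) * 0.2 +           # imperfect predicted probability
                rng.normal(0, noise, n), 0.01, 0.99)
    y = effect * x + rng.normal(0, 1, n)            # outcome
    def tstat(reg):
        reg = np.column_stack([np.ones(n), reg])
        beta, *_ = np.linalg.lstsq(reg, y, rcond=None)
        resid = y - reg @ beta
        se = np.sqrt(resid.var(ddof=2) * np.linalg.inv(reg.T @ reg)[1, 1])
        return beta[1] / se
    crit = stats.norm.ppf(0.975)
    return abs(tstat((p > 0.5).astype(float))) > crit, abs(tstat(p)) > crit

rej = np.mean([one_test() for _ in range(1000)], axis=0)
print("power  classification: %.2f   direct substitution: %.2f" % tuple(rej))
```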


On Estimation in M/G/c/c Queues

INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 6 2001
Mei Ling Huang
We derive the minimum variance unbiased estimator (MVUE) and the maximum likelihood estimator (MLE) of the stationary probability function (pf) of the number of customers in a collection of independent M/G/c/c subsystems. It is assumed that the offered load and number of servers in each subsystem are unknown. We assume that observations of the total number of customers in the system are utilized because it may be impractical or impossible to observe individual server occupancies. Both estimators depend on the R distribution (the distribution of the sum of independent right truncated Poisson random variables) and R numbers. [source]


Fixed or random contemporary groups in genetic evaluation for litter size in pigs using a single trait repeatability animal model

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 1 2003
D. Babot
Summary: The importance of using fixed or random contemporary groups in the genetic evaluation of litter size in pigs was analysed using farm and simulated data. Farm data were from four Spanish pig breeding populations, two Landrace (13,084 and 13,619 records) and two Large White (2,762 and 8,455 records). A simulated population (200 sows and 10 boars) selected for litter size, in which litter size was simulated using a repeatability animal model with random herd-year-season (HYS) effects, was used to obtain simulated data. With the farm data, the goodness-of-fit and the predictive ability of a repeatability animal model were calculated under several definitions of the HYS effect. A residual maximum likelihood estimate of the HYS variance in each population was obtained as well. HYS was considered as either fixed or random, with different numbers of individuals per level. Results from the farm data showed that the HYS variance was small in relation to the total variance (ranging from 0.01 to 0.04). Treating the HYS effect as fixed reduced the residual variance, but the size of the HYS levels does not by itself explain the goodness-of-fit of the model. The results obtained by simulation showed that the predictive ability of the model is better for random than for fixed HYS models. However, the improvement in predictive ability does not lead to a significant increase in genetic response. Finally, the results showed that random HYS models biased the estimates of genetic response when an environmental trend was present. [source]


Asymmetric power distribution: Theory and applications to risk measurement

JOURNAL OF APPLIED ECONOMETRICS, Issue 5 2007
Ivana Komunjer
Theoretical literature in finance has shown that the risk of financial time series can be well quantified by their expected shortfall, also known as the tail value-at-risk. In this paper, I construct a parametric estimator for the expected shortfall based on a flexible family of densities, called the asymmetric power distribution (APD). The APD family extends the generalized power distribution to cases where the data exhibit asymmetry. The first contribution of the paper is to provide a detailed description of the properties of an APD random variable, such as its quantiles and expected shortfall. The second contribution of the paper is to derive the asymptotic distribution of the APD maximum likelihood estimator (MLE) and construct a consistent estimator for its asymptotic covariance matrix. The latter is based on the APD score, whose analytic expression is also provided. A small Monte Carlo experiment examines the small-sample properties of the MLE and the empirical coverage of its confidence intervals. An empirical application to four daily financial market series reveals that returns tend to be asymmetric, with innovations which cannot be modeled by either a Laplace (double-exponential) or a Gaussian distribution, even if we allow the latter to be asymmetric. In an out-of-sample exercise, I compare the performance of expected shortfall forecasts based on the APD-GARCH, Skew-t-GARCH and GPD-EGARCH models. While the GPD-EGARCH 1% expected shortfall forecasts seem to outperform the competitors, all three models perform equally well at forecasting the 5% and 10% expected shortfall. Copyright © 2007 John Wiley & Sons, Ltd. [source]
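
The APD density and the APD-GARCH forecasts are not reproduced here, but the quantity being targeted, the expected shortfall, is simple to illustrate: the average loss beyond the value-at-risk quantile. The sketch below computes it empirically and under a (mis-specified) Gaussian fit on simulated left-skewed returns; the return distribution is a made-up stand-in.

```python
# Sketch: expected shortfall (tail value-at-risk) at level alpha, computed
# empirically and under a fitted Gaussian, for simulated left-skewed returns.
import numpy as np
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(4)
r = skewnorm.rvs(a=-4, scale=0.01, size=5000, random_state=rng)  # stand-in for returns

alpha = 0.05
var_emp = np.quantile(r, alpha)                 # 5% value-at-risk (a quantile)
es_emp = r[r <= var_emp].mean()                 # empirical expected shortfall

mu, sd = r.mean(), r.std(ddof=1)                # Gaussian fit
es_gauss = mu - sd * norm.pdf(norm.ppf(alpha)) / alpha   # closed form for the normal

print("empirical ES: %.4f   Gaussian ES: %.4f" % (es_emp, es_gauss))
```

The gap between the two numbers is the kind of tail mis-specification that motivates a flexible asymmetric family such as the APD.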


Estimation of multivariate models for time series of possibly different lengths

JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2006
Andrew J. Patton
We consider the problem of estimating parametric multivariate density models when unequal amounts of data are available on each variable. We focus in particular on the case that the unknown parameter vector may be partitioned into elements relating only to a marginal distribution and elements relating to the copula. In such a case we propose using a multi-stage maximum likelihood estimator (MSMLE) based on all available data rather than the usual one-stage maximum likelihood estimator (1SMLE) based only on the overlapping data. We provide conditions under which the MSMLE is not less asymptotically efficient than the 1SMLE, and we examine the small sample efficiency of the estimators via simulations. The analysis in this paper is motivated by a model of the joint distribution of daily Japanese yen–US dollar and euro–US dollar exchange rates. We find significant evidence of time variation in the conditional copula of these exchange rates, and evidence of greater dependence during extreme events than under the normal distribution. Copyright © 2006 John Wiley & Sons, Ltd. [source]
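
A toy version of the multi-stage idea, using normal margins and a Gaussian copula as stand-ins for the paper's specifications: margins are fitted on each full series, and only the copula parameter is fitted on the overlapping observations. The series lengths and parameter values are hypothetical.

```python
# Sketch: multi-stage maximum likelihood with unequal sample lengths.  Margins use
# all available data; the copula parameter uses only the overlapping pairs.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Unequal-length series: x has 1500 observations, only the last 600 overlap with y.
rho_true = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=1500)
x = 0.5 + 1.2 * z[:, 0]
y = -0.3 + 0.8 * z[-600:, 1]

# Stage 1: marginal (normal) MLEs, each on all available data.
mx, sx = x.mean(), x.std()
my, sy = y.mean(), y.std()

# Stage 2: Gaussian-copula correlation from the overlapping pairs, via normal scores.
u = norm.cdf((x[-600:] - mx) / sx)
v = norm.cdf((y - my) / sy)
zu, zv = norm.ppf(u), norm.ppf(v)
rho_hat = np.mean(zu * zv) / np.sqrt(np.mean(zu**2) * np.mean(zv**2))
print("copula correlation estimate:", rho_hat, "true:", rho_true)
```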


The Heckman Correction for Sample Selection and Its Critique

JOURNAL OF ECONOMIC SURVEYS, Issue 1 2000
Patrick Puhani
This paper gives a short overview of Monte Carlo studies on the usefulness of Heckman's (1976, 1979) two-step estimator for estimating selection models. Such models occur frequently in empirical work, especially in microeconometrics when estimating wage equations or consumer expenditures. It is shown that exploratory work to check for collinearity problems is strongly recommended before deciding on which estimator to apply. In the absence of collinearity problems, the full-information maximum likelihood estimator is preferable to the limited-information two-step method of Heckman, although the latter also gives reasonable results. If, however, collinearity problems prevail, subsample OLS (or the Two-Part Model) is the most robust amongst the simple-to-calculate estimators. [source]
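
For readers unfamiliar with the two-step method under discussion, here is a minimal sketch of Heckman's limited-information estimator on simulated data: a probit for the selection equation, then OLS of the outcome on the regressors plus the inverse Mills ratio from step one. The variable names and parameter values are hypothetical.

```python
# Sketch: Heckman's two-step estimator.  Step 1: probit for selection.
# Step 2: OLS on the selected subsample, augmented with the inverse Mills ratio.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)                       # outcome-equation regressor
z = rng.normal(size=n)                       # exclusion restriction (selection only)
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T

select = (0.5 + 1.0 * z + u > 0)             # selection equation
y_star = 1.0 + 2.0 * x + e                   # latent outcome
y = np.where(select, y_star, np.nan)         # outcome observed only if selected

# Step 1: probit of selection on (x, z); inverse Mills ratio for each observation.
W = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(select.astype(float), W).fit(disp=0)
xb = W @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected subsample, augmented with the IMR.
X2 = sm.add_constant(np.column_stack([x[select], imr[select]]))
ols = sm.OLS(y[select], X2).fit()
print(ols.params)   # const, x-coefficient (near 2), IMR coefficient (near rho*sigma)
```

The exclusion restriction z (a variable in the selection equation only) is what keeps the second-step regressors from being nearly collinear, which is precisely the situation the survey warns about.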


Bankruptcy Prediction: Evidence from Korean Listed Companies during the IMF Crisis

JOURNAL OF INTERNATIONAL FINANCIAL MANAGEMENT & ACCOUNTING, Issue 3 2000
Joo-Ha Nam
This paper empirically studies the predictive model of business failure using a sample of listed companies that went bankrupt during the period from 1997 to 1998, when the deep recession driven by the IMF crisis started in Korea. A logit maximum likelihood estimator is employed as the statistical technique. The model demonstrates decent prediction accuracy and robustness. The Type I accuracy is 80.4 per cent and the Type II accuracy is 73.9 per cent. The accuracy remains almost at the same level when the model is applied to an independent holdout sample. In addition to building a bankruptcy prediction model, this paper finds that most of the firms that went bankrupt during the Korean economic crisis from 1997 to 1998 had shown signs of financial distress long before the crisis. Bankruptcy probabilities of the sample are consistently high during the period from 1991 to 1996. The evidence of this paper can be seen as complementary to the perspective that traces the Asian economic crisis to the vulnerabilities of corporate governance in Asian countries. [source]
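
A minimal sketch of the modelling step, a logit fit by maximum likelihood with Type I / Type II accuracy evaluated on a holdout sample, is shown below. The financial ratios and coefficients are hypothetical stand-ins, not the variables used in the paper.

```python
# Sketch: logit maximum likelihood for failure prediction plus holdout accuracy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
leverage = rng.normal(0.5, 0.2, n)
profitability = rng.normal(0.05, 0.1, n)
p = 1 / (1 + np.exp(-(-3 + 4 * leverage - 8 * profitability)))
bankrupt = rng.random(n) < p

X = sm.add_constant(np.column_stack([leverage, profitability]))
train, hold = slice(0, 1500), slice(1500, None)
logit = sm.Logit(bankrupt[train].astype(float), X[train]).fit(disp=0)

pred = logit.predict(X[hold]) > 0.5
actual = bankrupt[hold]
type1_acc = (pred & actual).sum() / max(actual.sum(), 1)          # bankrupt firms caught
type2_acc = (~pred & ~actual).sum() / max((~actual).sum(), 1)     # healthy firms cleared
print("Type I accuracy: %.3f   Type II accuracy: %.3f" % (type1_acc, type2_acc))
```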


On-line expectation-maximization algorithm for latent data models

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2009
Olivier Cappé
Summary: We propose a generic on-line (also sometimes called adaptive or recursive) version of the expectation-maximization (EM) algorithm applicable to latent variable models of independent observations. Compared with the algorithm of Titterington, this approach is more directly connected to the usual EM algorithm and does not rely on integration with respect to the complete-data distribution. The resulting algorithm is usually simpler and is shown to achieve convergence to the stationary points of the Kullback–Leibler divergence between the marginal distribution of the observation and the model distribution at the optimal rate, i.e. that of the maximum likelihood estimator. In addition, the approach proposed is also suitable for conditional (or regression) models, as illustrated in the case of the mixture of linear regressions model. [source]
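
The following is a toy illustration of the general idea, running averages of expected sufficient statistics updated with a decreasing step size for each new observation, applied to a two-component Gaussian mixture. It is a sketch in the spirit of the approach, not the paper's general algorithm or its mixture-of-regressions example; all numerical values are made up.

```python
# Sketch: on-line EM recursion for a two-component Gaussian mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)

def stream(n):   # stream of observations from a 2-component mixture (hypothetical)
    comp = rng.random(n) < 0.4
    return np.where(comp, rng.normal(-2, 1, n), rng.normal(3, 1.5, n))

pi, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
s = None                                            # running expected sufficient statistics
for n, y in enumerate(stream(20000), start=1):
    # E-step for the single new observation: responsibilities under current parameters.
    w = pi * norm.pdf(y, mu, sd)
    w /= w.sum()
    s_new = np.stack([w, w * y, w * y * y])         # per-component (1, y, y^2) statistics
    gamma = n ** -0.6                               # decreasing step size
    s = s_new if s is None else s + gamma * (s_new - s)
    if n > 50:                                      # M-step: parameters from running stats
        pi = s[0] / s[0].sum()
        mu = s[1] / s[0]
        sd = np.sqrt(np.maximum(s[2] / s[0] - mu**2, 1e-6))

print("weights:", pi, "means:", mu, "sds:", sd)
```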


A theory of statistical models for Monte Carlo integration

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2003
A. Kong
Summary: The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of the information for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite-dimensional object. Nonetheless, from simulated data the baseline measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. By contrast with Geyer's retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator in the submodel may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers. [source]


Bootstrapping a weighted linear estimator of the ARCH parameters

JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2009
Arup Bose
Abstract: A standard assumption while deriving the asymptotic distribution of the quasi-maximum likelihood estimator in ARCH models is that all ARCH parameters must be strictly positive. This assumption is also crucial in deriving the limit distribution of appropriate linear estimators (LEs). We propose a weighted linear estimator (WLE) of the ARCH parameters in the classical ARCH model and show that its limit distribution is multivariate normal even when some of the ARCH coefficients are zero. The asymptotic dispersion matrix involves unknown quantities. We consider an appropriate bootstrapped version of this WLE and prove that it is asymptotically valid in the sense that the bootstrapped distribution (given the data) is a consistent estimate (in probability) of the distribution of the WLE. Although we do not show theoretically that the bootstrap outperforms the normal approximation, our simulations demonstrate that it yields better approximations than the limiting normal. [source]
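
To fix ideas, the sketch below shows the plain linear-estimator idea for an ARCH(1) model, a regression of squared returns on lagged squared returns, together with a simple pairs bootstrap of that regression. The specific weights defining the paper's WLE are not reproduced, and the parameter values are hypothetical.

```python
# Sketch: unweighted linear estimator for ARCH(1) plus a pairs bootstrap.
import numpy as np

rng = np.random.default_rng(9)

# Simulate ARCH(1): sigma_t^2 = omega + alpha * y_{t-1}^2.
omega, alpha, T = 0.5, 0.3, 2000
y = np.zeros(T)
for t in range(1, T):
    y[t] = rng.normal(0, np.sqrt(omega + alpha * y[t - 1] ** 2))

y2, y2lag = y[1:] ** 2, y[:-1] ** 2
X = np.column_stack([np.ones(T - 1), y2lag])
theta_hat = np.linalg.lstsq(X, y2, rcond=None)[0]       # (omega_hat, alpha_hat)

# Pairs bootstrap of the linear estimator.
boot = np.empty((500, 2))
for b in range(500):
    idx = rng.integers(0, T - 1, T - 1)
    boot[b] = np.linalg.lstsq(X[idx], y2[idx], rcond=None)[0]

print("estimate:", theta_hat)
print("bootstrap std. errors:", boot.std(axis=0))
```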


Impact of the Sampling Rate on the Estimation of the Parameters of Fractional Brownian Motion

JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2006
Zhengyuan Zhu
MSC 2000: Primary 60G18; secondary 62D05, 62F12. Abstract: Fractional Brownian motion is a mean-zero self-similar Gaussian process with stationary increments. Its covariance depends on two parameters, the self-similarity parameter H and the variance C. Suppose that one wants to estimate these parameters optimally by using n equally spaced observations. How should these observations be distributed? We show that the spacing of the observations does not affect the estimation of H (this is due to the self-similarity of the process), but the spacing does affect the estimation of the variance C. For example, the maximum likelihood estimator (MLE) of the variance C converges faster when the observations are equally spaced on [0, n] (unit spacing) than when they are equally spaced on [0, 1] (1/n-spacing) or on [0, n²] (n-spacing). We also determine the optimal choice of the spacing when it is held constant, independent of the sample size n. While the rate of convergence of the MLE of C in this case does not depend on the value of the spacing, the value of the optimal spacing does depend on H: it is 1 (unit spacing) if H = 1/2 but is very large if H is close to 1. [source]
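
A minimal sketch of the exact Gaussian likelihood involved: the increments of fractional Brownian motion (fractional Gaussian noise) observed with spacing delta have a covariance in which delta enters only through the scale factor C·delta^(2H), which is consistent with the abstract's point that the spacing matters for C but not for H. The sample size, spacing and true parameter values below are made up, and the paper's optimality analysis is not reproduced.

```python
# Sketch: exact MLE of (H, C) from fractional Gaussian noise with spacing delta.
import numpy as np
from scipy.optimize import minimize

def fgn_cov(n, H, C, delta):
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    g = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    return C * delta ** (2 * H) * g          # spacing enters only through the scale

def neg_loglik(theta, x, delta):
    H, logC = theta
    if not 0.01 < H < 0.99:
        return np.inf
    S = fgn_cov(len(x), H, np.exp(logC), delta)
    sign, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + x @ np.linalg.solve(S, x))

rng = np.random.default_rng(10)
H_true, C_true, delta, n = 0.7, 2.0, 1.0, 400
x = rng.multivariate_normal(np.zeros(n), fgn_cov(n, H_true, C_true, delta))

res = minimize(neg_loglik, x0=[0.5, 0.0], args=(x, delta), method="Nelder-Mead")
print("MLE of (H, C):", res.x[0], np.exp(res.x[1]))
```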


Maximum Likelihood Estimation for a First-Order Bifurcating Autoregressive Process with Exponential Errors

JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2005
J. Zhou
Abstract: Exact and asymptotic distributions of the maximum likelihood estimator of the autoregressive parameter in a first-order bifurcating autoregressive process with exponential innovations are derived. The limit distributions for the stationary, critical and explosive cases are unified via a single pivot using a random normalization. The pivot is shown to be asymptotically exponential for all values of the autoregressive parameter. [source]


Efficient use of higher-lag autocorrelations for estimating autoregressive processes

JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2002
LAURENCE BROZE
The Yule–Walker estimator is commonly used in time-series analysis as a simple way to estimate the coefficients of an autoregressive process. Under strong assumptions on the noise process, this estimator possesses the same asymptotic properties as the Gaussian maximum likelihood estimator. However, when the noise is only a weak white noise, other estimators based on higher-order empirical autocorrelations can provide substantial efficiency gains. This is illustrated by means of a first-order autoregressive process with a Markov-switching white noise. We show how to optimally choose a linear combination of a set of estimators based on empirical autocorrelations. The asymptotic variance of the optimal estimator is derived. Empirical experiments based on simulations show that the new estimator performs well on the illustrative model. [source]
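
For an AR(1) process the idea is easy to see: the Yule–Walker estimator uses the lag-1 autocorrelation, higher lags give alternative estimators such as r2/r1, and the two can be linearly combined. The sketch below uses equal weights purely for illustration; the optimal combination weights derived in the paper are not reproduced, and the simulation uses an ordinary Gaussian noise rather than a Markov-switching one.

```python
# Sketch: Yule-Walker versus a higher-lag estimator of an AR(1) coefficient,
# plus a naive equal-weight combination.
import numpy as np

rng = np.random.default_rng(11)

def acf(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

phi, T = 0.6, 3000
eps = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + eps[t]

r1, r2 = acf(x, 1), acf(x, 2)
print("Yule-Walker (r1):      ", r1)
print("higher-lag  (r2 / r1): ", r2 / r1)
print("equal-weight combo:    ", 0.5 * r1 + 0.5 * r2 / r1)
```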


Optimum step-stress accelerated life test plans for log-location-scale distributions

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 6 2008
Haiming Ma
Abstract: This article presents new tools and methods for finding optimum step-stress accelerated life test plans. First, we present an approach to calculate the large-sample approximate variance of the maximum likelihood estimator of a quantile of the failure time distribution at use conditions from a step-stress accelerated life test. The approach allows for multistep stress changes and censoring for general log-location-scale distributions based on a cumulative exposure model. As an application of this approach, the optimum variance is studied as a function of the shape parameter for both Weibull and lognormal distributions. Graphical comparisons among test plans using step-up, step-down, and constant-stress patterns are also presented. The results show that depending on the values of the model parameters and quantile of interest, each of the three test plans can be preferable in terms of optimum variance. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]


Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 7 2004
B. Chandrasekar
Abstract: Chen and Bhattacharyya [Exact confidence bounds for an exponential parameter under hybrid censoring, Commun Statist Theory Methods 17 (1988), 1857–1870] considered a hybrid censoring scheme and obtained the exact distribution of the maximum likelihood estimator of the mean of an exponential distribution along with an exact lower confidence bound. Childs et al. [Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution, Ann Inst Statist Math 55 (2003), 319–330] recently derived an alternative simpler expression for the distribution of the MLE. These authors also proposed a new hybrid censoring scheme and derived similar results for the exponential model. In this paper, we propose two generalized hybrid censoring schemes which have some advantages over the hybrid censoring schemes already discussed in the literature. We then derive the exact distribution of the maximum likelihood estimator as well as exact confidence intervals for the mean of the exponential distribution under these generalized hybrid censoring schemes. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004 [source]
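
For orientation, the estimator whose distribution is studied has the familiar "total time on test divided by number of failures" form. The sketch below computes it under a basic hybrid censoring scheme (stop at the earlier of the r-th failure and a fixed time T); the generalized schemes and the exact distribution theory of the paper are not reproduced, and the design values are hypothetical.

```python
# Sketch: MLE of the exponential mean under hybrid censoring,
# where the test stops at T* = min(time of the r-th failure, fixed time T).
import numpy as np

rng = np.random.default_rng(12)
theta_true, n, r, T = 10.0, 30, 20, 15.0        # hypothetical design values
x = np.sort(rng.exponential(theta_true, n))     # ordered lifetimes

t_stop = min(x[r - 1], T)                       # stopping time of the experiment
failures = x[x <= t_stop]
d = len(failures)                               # number of observed failures
ttt = failures.sum() + (n - d) * t_stop         # total time on test
theta_hat = ttt / d if d > 0 else np.nan        # MLE of the mean (undefined if d = 0)
print("observed failures:", d, "  MLE of mean:", theta_hat)
```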


Estimating the Change Point of a Poisson Rate Parameter with a Linear Trend Disturbance

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 4 2006
Marcus B. Perry
Abstract: Knowing when a process changed would simplify the search for and identification of the special cause. In this paper, we compare the maximum likelihood estimator (MLE) of the process change point designed for linear trends to the MLE of the process change point designed for step changes when a linear trend disturbance is present. We conclude that the MLE of the process change point designed for linear trends outperforms the MLE designed for step changes when a linear trend disturbance is present. We also present an approach based on the likelihood function for estimating a confidence set for the process change point. We study the performance of this estimator when it is used with a cumulative sum (CUSUM) control chart and make direct performance comparisons with the estimated confidence sets obtained from the MLE for step changes. The results show that better confidence can be obtained using the MLE for linear trends when a linear trend disturbance is present. Copyright © 2005 John Wiley & Sons, Ltd. [source]
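
The comparison benchmark, the step-change MLE of a Poisson change point, amounts to profiling the likelihood over candidate change points with a constant rate on each side; a minimal sketch follows. The linear-trend version studied in the paper modifies the post-change segment and is not reproduced; the rates and change point below are made up.

```python
# Sketch: maximum likelihood change-point estimation for a step change in a Poisson rate.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(13)
T, tau_true = 100, 60
counts = np.concatenate([rng.poisson(4.0, tau_true),         # in-control rate
                         rng.poisson(7.0, T - tau_true)])     # rate after the change

def profile_loglik(tau):
    lam0 = counts[:tau].mean()
    lam1 = counts[tau:].mean()
    return poisson.logpmf(counts[:tau], lam0).sum() + poisson.logpmf(counts[tau:], lam1).sum()

cands = np.arange(5, T - 5)                      # keep a few observations on each side
tau_hat = cands[np.argmax([profile_loglik(t) for t in cands])]
print("estimated change point:", tau_hat, "true:", tau_true)
```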


On the incidence–prevalence relation and length-biased sampling

THE CANADIAN JOURNAL OF STATISTICS, Issue 2 2009
Vittorio Addona
MSC 2000: Primary 62N99; secondary 62G99. Abstract: For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), that is, P/(1 - P) = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators of P and µ into P/(1 - P) = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada [source]
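
A tiny worked example of the plug-in estimator implied by the relation P/(1 - P) = λ × µ; the prevalence and mean-duration figures are made up for illustration and are not the Canadian Study of Health and Ageing estimates.

```python
# Sketch: plug-in incidence rate from the prevalence-incidence relation.
P = 0.08          # observed prevalence (hypothetical)
mu = 5.0          # estimated mean disease duration, in years (hypothetical)
lam = P / ((1 - P) * mu)
print("incidence rate estimate: %.4f cases per person-year" % lam)   # about 0.0174
```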


The Tobit model with a non-zero threshold

THE ECONOMETRICS JOURNAL, Issue 3 2007
Richard T. Carson
Summary: The standard Tobit maximum likelihood estimator under a zero censoring threshold produces inconsistent parameter estimates when the constant censoring threshold is non-zero and unknown. Unfortunately, the recording of a zero rather than the actual censoring threshold value is typical of economic data. Non-trivial minimum purchase prices for most goods, fixed costs for doing business or trading, social customs such as those involving charitable donations, and informal administrative recording practices represent common examples of a non-zero constant censoring threshold where the constant threshold is not readily available to the econometrician. Monte Carlo results show that this bias can be extremely large in practice. A new estimator is proposed to estimate the unknown censoring threshold. It is shown that the estimator is superconsistent and follows an exponential distribution in large samples. Due to the superconsistency, the asymptotic distribution of the maximum likelihood estimator of the other parameters is not affected by the estimation uncertainty of the censoring threshold. [source]
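
The inconsistency is easy to reproduce in a small simulation: censor at a positive threshold, record the censored values as zeros, and fit the Tobit likelihood assuming censoring at zero versus at the true threshold. The parameter values are hypothetical, and the paper's superconsistent threshold estimator is not reproduced.

```python
# Sketch: Tobit MLE when below-threshold observations are recorded as zeros,
# fitted (i) assuming censoring at 0 and (ii) at the true threshold.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(14)
n, beta0, beta1, sigma, gamma = 3000, 1.0, 2.0, 1.5, 2.5
x = rng.normal(size=n)
y_star = beta0 + beta1 * x + rng.normal(0, sigma, n)
y = np.where(y_star > gamma, y_star, 0.0)          # below-threshold values recorded as 0
cens = y == 0.0

def neg_loglik(theta, threshold):
    b0, b1, log_s = theta
    s = np.exp(log_s)
    m = b0 + b1 * x
    ll = norm.logpdf(y[~cens], m[~cens], s).sum()          # uncensored contributions
    ll += norm.logcdf((threshold - m[cens]) / s).sum()     # censored contributions
    return -ll

start = [0.0, 1.0, 0.0]
for thr in (0.0, gamma):
    res = minimize(neg_loglik, start, args=(thr,), method="Nelder-Mead")
    print("assumed threshold %.1f -> beta estimates:" % thr, res.x[:2])
print("true betas:", (beta0, beta1))
```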


Robust Estimation and Outlier Detection for Overdispersed Multinomial Models of Count Data

AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 2 2004
Walter R. Mebane Jr.
We develop a robust estimator, the hyperbolic tangent (tanh) estimator, for overdispersed multinomial regression models of count data. The tanh estimator provides accurate estimates and reliable inferences even when the specified model is not good for as much as half of the data. Seriously ill-fitted counts, outliers, are identified as part of the estimation. A Monte Carlo sampling experiment shows that the tanh estimator produces good results at practical sample sizes even when ten percent of the data are generated by a significantly different process. The experiment shows that, with contaminated data, estimation fails using four other estimators: the nonrobust maximum likelihood estimator, the additive logistic model and two SUR models. Using the tanh estimator to analyze data from Florida for the 2000 presidential election matches well-known features of the election that the other four estimators fail to capture. In an analysis of data from the 1993 Polish parliamentary election, the tanh estimator gives sharper inferences than does a previously proposed heteroskedastic SUR model. [source]


Long-term effective population size of three endangered Colorado River fishes

ANIMAL CONSERVATION, Issue 2 2002
Daniel Garrigan
The extant genetic variation of a population is the legacy of both long-term and recent population dynamics. Most practical methods for estimating effective population size are only able to detect recent effects on genetic variation and do not account for long-term fluctuations in species abundance. The utility of a maximum likelihood estimator of long-term effective population size based upon the coalescent theory of gene genealogies is examined for three endangered Colorado River fishes: humpback chub (Gila cypha), bonytail chub (Gila elegans) and razorback sucker (Xyrauchen texanus). Extant mitochondrial DNA (mtDNA) variation in humpback chub suggests this species has retained its historical equilibrium genetic variation despite recent declines in abundance. The mtDNA variation in razorback suckers indicates the population was quite large and expanding prior to recent declines and that rare alleles still survive in the remnant populations. The remaining mtDNA variation in bonytail chub indicates that dramatic, recent declines may have already obliterated a substantial portion of any historical variation. The results from long-term effective population size analyses are consistent with known natural history and illustrate the utility of the analysis for endangered species management. [source]


Shrinkage drift parameter estimation for multi-factor Ornstein–Uhlenbeck processes

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2010
Sévérien Nkurunziza
Abstract: We consider some inference problems concerning the drift parameters of the multi-factor Vasicek model (or multivariate Ornstein–Uhlenbeck process). For example, in modeling interest rates, the Vasicek model asserts that the term structure of interest rates is not just a single process, but rather a superposition of several analogous processes. This motivates us to develop an improved estimation theory for the drift parameters when homogeneity of several parameters may hold. However, the information regarding the equality of these parameters may be imprecise. In this context, we consider Stein-rule (or shrinkage) estimators that allow us to improve on the performance of the classical maximum likelihood estimator (MLE). Under an asymptotic distributional quadratic risk criterion, their relative dominance is explored and assessed. We illustrate the suggested methods by analyzing interbank interest rates of three European countries. Further, a simulation study illustrates the behavior of the suggested method for observation periods of small and moderate lengths of time. Our analytical and simulation results demonstrate that shrinkage estimators (SEs) provide excellent estimation accuracy and outperform the MLE uniformly. An overriding theme of this paper is that the SEs provide powerful extensions of their classical counterparts. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Modelling the process of incoming problem reports on released software products

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2004
Geurt Jongbloed
Abstract: For large software-developing companies, it is important to know, on a weekly basis, the number of problems with a new software product that can be expected to be reported in a given period after the date of release. For each of a number of past releases, weekly data are available on the number of such reports. Based on the type of data that is available, we construct a stochastic model for the weekly number of problems to be reported. The (non-parametric) maximum likelihood estimator for the crucial model parameter, the intensity of an inhomogeneous Poisson process, is defined. Moreover, the expectation-maximization algorithm is described, which can be used to compute this estimate. The method is illustrated using simulated data. Copyright © 2004 John Wiley & Sons, Ltd. [source]