Maximum Likelihood Estimation


Terms modified by Maximum Likelihood Estimation

  • maximum likelihood estimation method
  • maximum likelihood estimation procedure

  Selected Abstracts


    SEMINONPARAMETRIC MAXIMUM LIKELIHOOD ESTIMATION OF CONDITIONAL MOMENT RESTRICTION MODELS

    INTERNATIONAL ECONOMIC REVIEW, Issue 4 2007
    Chunrong Ai
    This article studies estimation of a conditional moment restriction model with the seminonparametric maximum likelihood approach proposed by Gallant and Nychka (Econometrica 55 (March 1987), 363–90). Under some sufficient conditions, we show that the estimator of the finite dimensional parameter is asymptotically normally distributed and attains the semiparametric efficiency bound, and that the estimator of the density function is consistent under the L2 norm. Some results on the convergence rate of the estimated density function are derived. An easy-to-compute covariance matrix for the asymptotic covariance of the parameter estimator is presented. [source]


    OPTIMAL AND ADAPTIVE SEMI-PARAMETRIC NARROWBAND AND BROADBAND AND MAXIMUM LIKELIHOOD ESTIMATION OF THE LONG-MEMORY PARAMETER FOR REAL EXCHANGE RATES

    THE MANCHESTER SCHOOL, Issue 2 2005
    SAEED HERAVI
    The nature of the time series properties of real exchange rates remains a contentious issue, primarily because of the implications for purchasing power parity. In particular, are real exchange rates best characterized as stationary and non-persistent; nonstationary but non-persistent; or nonstationary and persistent? Most assessments of this issue use the I(0)/I(1) paradigm, which only allows the first and last of these options. In contrast, in the I(d) paradigm, d fractional, all three are possible, with the crucial parameter d determining the long-run properties of the process. This study includes estimation of d by three methods of semi-parametric estimation in the frequency domain, using both local and global (Fourier) frequency estimation, and maximum likelihood estimation of ARFIMA models in the time domain. We give a transparent assessment of the key selection parameters in each method, particularly estimation of the truncation parameters for the semi-parametric methods. Two other important developments are also included. We implement Tanaka's locally best invariant parametric tests based on maximum likelihood estimation of the long-memory parameter and include a recent extension of the Dickey–Fuller approach, referred to as fractional Dickey–Fuller (FD-F), to fractionally integrated series, which allows a much wider range of generating processes under the alternative hypothesis. With this more general approach, we find very little evidence of stationarity for 10 real exchange rates for developed countries and some very limited evidence of nonstationarity but non-persistence, and none of the FD-F tests leads to rejection of the null of a unit root. [source]
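
    A concrete instance of the semi-parametric frequency-domain approach is the log-periodogram (GPH) regression of Geweke and Porter-Hudak, in which the slope of log I(λ_j) on −2 log(2 sin(λ_j/2)) over the first m Fourier frequencies estimates d. A minimal sketch; the paper's exact estimators and truncation rules differ, and the √n truncation below is just a common default:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the long-memory parameter d.

    Regresses log I(lambda_j) on -2*log(2*sin(lambda_j/2)) over the
    first m Fourier frequencies; the OLS slope estimates d.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))          # a common, ad hoc truncation choice
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = (np.abs(dft) ** 2) / (2.0 * np.pi * n)
    regressor = -2.0 * np.log(2.0 * np.sin(freqs / 2.0))
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return beta[1]                   # estimated d

# White noise has d = 0, so the estimate should be near zero
rng = np.random.default_rng(0)
print(gph_estimate(rng.standard_normal(1024)))
```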


    MAXIMUM LIKELIHOOD ESTIMATION FOR A POISSON RATE PARAMETER WITH MISCLASSIFIED COUNTS

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2005
    James D. Stamey
    Summary. This paper proposes a Poisson-based model that uses both error-free data and error-prone data subject to misclassification in the form of false-negative and false-positive counts. It derives maximum likelihood estimators (MLEs) for the Poisson rate parameter and the two misclassification parameters: the false-negative parameter and the false-positive parameter. It also derives expressions for the information matrix and the asymptotic variances of the MLE for the rate parameter, the MLE for the false-positive parameter, and the MLE for the false-negative parameter. Using these expressions, the paper analyses the value of the fallible data. It studies characteristics of the new double-sampling rate estimator via a simulation experiment and applies the new MLEs and confidence intervals to a real dataset. [source]
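
    The paper's exact likelihood is not reproduced here, but the flavour of double-sampling MLE can be sketched under simplifying assumptions: on validation units the true count, the detected subset of true events, and the spurious count are all observed; on fallible-only units only the contaminated count z ~ Poisson(λ(1−θ) + φ) is seen. All parameter names and the data-generating setup below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

rng = np.random.default_rng(1)
lam_true, theta_true, phi_true = 5.0, 0.2, 0.5

# Validation (error-free) sample: true counts, detected counts, false positives
y = rng.poisson(lam_true, size=50)
d = rng.binomial(y, 1.0 - theta_true)   # each true event detected w.p. 1-theta
f = rng.poisson(phi_true, size=50)      # spurious (false-positive) counts
# Fallible-only sample: thinned true events plus spurious events
z = rng.poisson(lam_true * (1.0 - theta_true) + phi_true, size=200)

def negloglik(params):
    lam, theta, phi = params
    if lam <= 0 or not (0 < theta < 1) or phi <= 0:
        return np.inf                   # keep the search in the valid region
    ll = (poisson.logpmf(y, lam).sum()
          + binom.logpmf(d, y, 1.0 - theta).sum()
          + poisson.logpmf(f, phi).sum()
          + poisson.logpmf(z, lam * (1.0 - theta) + phi).sum())
    return -ll

fit = minimize(negloglik, x0=[1.0, 0.5, 1.0], method="Nelder-Mead")
print(fit.x)   # MLEs of (lambda, theta, phi)
```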


    Maximum Likelihood Estimation of VARMA Models Using a State-Space EM Algorithm

    JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2007
    Konstantinos Metaxoglou
    Abstract. We introduce a state-space representation for vector autoregressive moving-average models that enables maximum likelihood estimation using the EM algorithm. We obtain closed-form expressions for both the E- and M-steps; the former requires the Kalman filter and a fixed-interval smoother, and the latter requires least-squares-type regression. We show via simulations that our algorithm converges reliably to the maximum, whereas gradient-based methods often fail because of the highly nonlinear nature of the likelihood function. Moreover, our algorithm converges in a smaller number of function evaluations than commonly used direct-search routines. Overall, our approach achieves its largest performance gains when applied to models of high dimension. We illustrate our technique by estimating a high-dimensional vector moving-average model for an efficiency test of California's wholesale electricity market. [source]
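
    The authors' EM recursions are not implemented in common software, but the same state-space Gaussian likelihood for a VARMA model can be maximized with statsmodels' VARMAX, which combines the Kalman filter with quasi-Newton (not EM) optimization. A sketch on simulated bivariate data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.varmax import VARMAX

# Simulate a bivariate VAR(1) as stand-in data
rng = np.random.default_rng(2)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)

# Gaussian ML via the Kalman-filter likelihood (quasi-Newton, not EM)
model = VARMAX(pd.DataFrame(y, columns=["y1", "y2"]), order=(1, 1))
result = model.fit(disp=False)
print(result.llf)             # maximized log-likelihood
print(result.params.head())   # estimated VARMA(1,1) coefficients
```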


    Maximum Likelihood Estimation for a First-Order Bifurcating Autoregressive Process with Exponential Errors

    JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2005
    J. Zhou
    Abstract. Exact and asymptotic distributions of the maximum likelihood estimator of the autoregressive parameter in a first-order bifurcating autoregressive process with exponential innovations are derived. The limit distributions for the stationary, critical and explosive cases are unified via a single pivot using a random normalization. The pivot is shown to be asymptotically exponential for all values of the autoregressive parameter. [source]


    Exact Maximum Likelihood Estimation of an ARMA(1, 1) Model with Incomplete Data

    JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2002
    CHUNSHENG MA
    For a first-order autoregressive and first-order moving average model with nonconsecutively observed or missing data, the closed form of the exact likelihood function is obtained, and the exact maximum likelihood estimation of parameters is derived in the stationary case. [source]


    Parameter and state estimation in nonlinear stochastic continuous-time dynamic models with unknown disturbance intensity

    THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 5 2008
    M. S. Varziri
    Abstract. Approximate Maximum Likelihood Estimation (AMLE) is an algorithm for estimating the states and parameters of models described by stochastic differential equations (SDEs). In previous work (Varziri et al., Ind. Eng. Chem. Res., 47(2), 380–393 (2008); Varziri et al., Comp. Chem. Eng., in press), AMLE was developed for SDE systems in which process-disturbance intensities and measurement-noise variances were assumed to be known. In the current article, a new formulation of the AMLE objective function is proposed for the case in which the measurement-noise variance is available but the process-disturbance intensity is not known a priori. The revised formulation provides estimates of the model parameters and disturbance intensities, as demonstrated using a nonlinear CSTR simulation study. Parameter confidence intervals are computed using theoretical linearization-based expressions. The proposed method compares favourably with a Kalman-filter-based maximum likelihood method. The resulting parameter estimates and information about model mismatch will be useful to chemical engineers who use fundamental models for process monitoring and control. [source]


    Robustified Maximum Likelihood Estimation in Generalized Partial Linear Mixed Model for Longitudinal Data

    BIOMETRICS, Issue 1 2009
    Guo You Qin
    Summary. In this article, we study the robust estimation of both mean and variance components in generalized partial linear mixed models based on the construction of a robustified likelihood function. Under some regularity conditions, the asymptotic properties of the proposed robust estimators are shown. Some simulations are carried out to investigate the performance of the proposed robust estimators. Just as expected, the proposed robust estimators perform better than those resulting from robust estimating equations involving conditional expectation, such as Sinha (2004, Journal of the American Statistical Association, 99, 451–460) and Qin and Zhu (2007, Journal of Multivariate Analysis, 98, 1658–1683). Finally, the proposed robust method is illustrated by the analysis of a real data set. [source]


    Maximum Likelihood Estimation in Dynamical Models of HIV

    BIOMETRICS, Issue 4 2007
    J. Guedj
    Summary. The study of dynamical models of HIV infection, based on a system of nonlinear ordinary differential equations (ODE), has considerably improved the knowledge of its pathogenesis. While the first models used simplified ODE systems and analyzed each patient separately, recent works dealt with inference in non-simplified models borrowing strength from the whole sample. The complexity of these models leads to great difficulties for inference, and so far only the Bayesian approach has been attempted. We propose a full likelihood inference, adapting a Newton-like algorithm for these particular models. We consider a relatively complex ODE model for HIV infection and a model for the observations including the issue of detection limits. We apply this approach to the analysis of a clinical trial of antiretroviral therapy (ALBI ANRS 070) and we show that the whole algorithm works well in a simulation study. [source]


    Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental

    JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 3 2008
    M.H. Lidauer
    Summary. A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region × year × month × parity effect and a random herd × test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and reduced total computing time significantly. Maximum likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on the loss in degrees of freedom due to estimation of location parameters. This improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but a moderate effect on bull ranking. [source]


    Maximum likelihood estimation in semiparametric regression models with censored data

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2007
    D. Zeng
    Summary. Semiparametric regression models play a central role in formulating the effects of covariates on potentially censored failure times and in the joint modelling of incomplete repeated measures and failure times in longitudinal studies. The presence of infinite dimensional parameters poses considerable theoretical and computational challenges in the statistical analysis of such models. We present several classes of semiparametric regression models, which extend the existing models in important directions. We construct appropriate likelihood functions involving both finite dimensional and infinite dimensional parameters. The maximum likelihood estimators are consistent and asymptotically normal with efficient variances. We develop simple and stable numerical techniques to implement the corresponding inference procedures. Extensive simulation experiments demonstrate that the inferential and computational methods proposed perform well in practical settings. Applications to three medical studies yield important new insights. We conclude that there is no reason, theoretical or numerical, not to use maximum likelihood estimation for semiparametric regression models. We discuss several areas that need further research. [source]


    Maximum likelihood estimation of bivariate logistic models for incomplete responses with indicators of ignorable and non-ignorable missingness

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2002
    Nicholas J. Horton
    Summary. Missing observations are a common problem that complicates the analysis of clustered data. In the Connecticut child surveys of childhood psychopathology, it was possible to identify reasons why outcomes were not observed. Of note, some of these causes of missingness may be assumed to be ignorable, whereas others may be non-ignorable. We consider logistic regression models for incomplete bivariate binary outcomes and propose mixture models that permit estimation assuming that there are two distinct types of missingness mechanisms: one that is ignorable; the other non-ignorable. A feature of the mixture modelling approach is that additional analyses to assess the sensitivity to assumptions about the missingness are relatively straightforward to incorporate. The methods were developed for analysing data from the Connecticut child surveys, where there are missing informant reports of child psychopathology and different reasons for missingness can be distinguished. [source]


    Maximum likelihood estimation of higher-order integer-valued autoregressive processes

    JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2008
    Ruijun Bu
    Abstract. In this article, we extend the earlier work of Freeland and McCabe [Journal of Time Series Analysis (2004) Vol. 25, pp. 701–722] and develop a general framework for maximum likelihood (ML) analysis of higher-order integer-valued autoregressive processes. Our exposition includes the case where the innovation sequence has a Poisson distribution and the thinning is binomial. A recursive representation of the transition probability of the model is proposed. Based on this transition probability, we derive expressions for the score function and the Fisher information matrix, which form the basis for ML estimation and inference. Similar to the results in Freeland and McCabe (2004), we show that the score function and the Fisher information matrix can be neatly represented as conditional expectations. Using the INAR(2) specification with binomial thinning and Poisson innovations, we examine both the asymptotic efficiency and finite sample properties of the ML estimator in relation to the widely used conditional least squares (CLS) and Yule–Walker (YW) estimators. We conclude that, if the Poisson assumption can be justified, there are substantial gains to be had from using ML, especially when the thinning parameters are large. [source]
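
    For the first-order special case, the transition probability is a binomial–Poisson convolution, P(x_t | x_{t−1}) = Σ_k C(x_{t−1}, k) α^k (1−α)^{x_{t−1}−k} e^{−λ} λ^{x_t−k}/(x_t−k)!, and the conditional likelihood can be maximized directly. A sketch for INAR(1); the article's higher-order recursions and score expressions are more involved:

```python
import numpy as np
from scipy.stats import binom, poisson
from scipy.optimize import minimize

def inar1_negloglik(params, x):
    """Conditional negative log-likelihood of an INAR(1) model with
    binomial thinning (alpha) and Poisson innovations (lam)."""
    alpha, lam = params
    if not (0 < alpha < 1) or lam <= 0:
        return np.inf
    ll = 0.0
    for prev, curr in zip(x[:-1], x[1:]):
        k = np.arange(0, min(prev, curr) + 1)
        # Binomial survivors of prev, convolved with Poisson arrivals
        probs = binom.pmf(k, prev, alpha) * poisson.pmf(curr - k, lam)
        ll += np.log(probs.sum())
    return -ll

# Simulate an INAR(1) path and fit it
rng = np.random.default_rng(3)
alpha_true, lam_true = 0.6, 2.0
x = [5]
for _ in range(500):
    x.append(rng.binomial(x[-1], alpha_true) + rng.poisson(lam_true))
x = np.array(x)

fit = minimize(inar1_negloglik, x0=[0.5, 1.0], args=(x,), method="Nelder-Mead")
print(fit.x)   # approximately (0.6, 2.0)
```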


    Maximum likelihood estimation in space time bilinear models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2003
    YUQING DAI
    The space time bilinear (STBL) model is a special form of a multiple bilinear time series that can be used to model time series which exhibit bilinear behaviour on a spatial neighbourhood structure. The STBL model and its identification have been proposed and discussed by Dai and Billard (1998). The present work considers the problem of parameter estimation for the STBL model. A conditional maximum likelihood estimation procedure is provided through the use of a Newton–Raphson numerical optimization algorithm. The gradient vector and Hessian matrix are derived together with recursive equations for computation implementation. The methodology is illustrated with two simulated data sets and one real-life data set. [source]


    Maximum likelihood estimators of clock offset and skew under exponential delays

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2009
    Jun Li
    Abstract Accurate clock synchronization is essential for many data network applications. Various algorithms for synchronizing clocks rely on estimators of the offset and skew parameters that describe the relation between times measured by two different clocks. Maximum likelihood estimation (MLE) of these parameters has previously been considered under the assumption of exponentially distributed network delays with known means. We derive the MLEs under the more common case of exponentially distributed network delays with unknown means and compare their mean-squared error properties to a recently proposed alternative estimator. We investigate the robustness of the derived MLE to the assumption of non-exponential network delays, and demonstrate the effectiveness of a bootstrap bias-correction technique. Copyright © 2009 John Wiley & Sons, Ltd. [source]
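
    Under a standard two-way exchange model, u_i = δ + θ + X_i (forward) and v_i = δ − θ + Y_i (backward) with exponential X_i, Y_i of unknown means, the sample minima are sufficient and the offset MLE takes a simple order-statistic form. A sketch consistent with the setting the abstract describes; the model details and symbols below are assumptions:

```python
import numpy as np

def clock_offset_mle(u, v):
    """MLE of clock offset theta from two-way timing data, assuming
    u_i = delta + theta + X_i and v_i = delta - theta + Y_i with
    exponential X_i, Y_i of unknown means: theta_hat = (min u - min v) / 2."""
    return (np.min(u) - np.min(v)) / 2.0

# Simulated two-way exchange
rng = np.random.default_rng(4)
delta, theta = 10.0, 3.0                             # fixed delay, true offset
u = delta + theta + rng.exponential(2.0, size=100)   # forward measurements
v = delta - theta + rng.exponential(1.5, size=100)   # backward measurements
print(clock_offset_mle(u, v))                        # close to 3.0
```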


    Maximum likelihood estimation of a latent variable time-series model

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2001
    Francesco Bartolucci
    Abstract. Recently, Fridman and Harris proposed a method which allows one to approximate the likelihood of the basic stochastic volatility model. They also proposed estimating the parameters of such a model by maximising the approximate likelihood with an algorithm which makes use of numerical derivatives. In this paper we propose an extension of their method which enables the computation of the first and second analytical derivatives of the approximate likelihood. As will be shown, these derivatives may be used to maximise the approximate likelihood through the Newton–Raphson algorithm, with a saving in computational time. Moreover, these derivatives approximate the corresponding derivatives of the exact likelihood. In particular, the second derivative may be used to compute the standard error of the estimator and confidence intervals for the parameters. The paper also presents the results of a simulation study which allows one to compare our approach with other existing approaches. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Efficiency of Functional Regression Estimators for Combining Multiple Laser Scans of cDNA Microarrays

    BIOMETRICAL JOURNAL, Issue 1 2009
    C. A. Glasbey
    Abstract. The first stage in the analysis of cDNA microarray data is estimation of the level of expression of each gene, from laser scans of hybridised microarrays. Typically, data are used from a single scan, although, if multiple scans are available, there is the opportunity to reduce sampling error by using all of them. Combining multiple laser scans can be formulated as multivariate functional regression through the origin. Maximum likelihood estimation fails, but many alternative estimators exist, one of which is to maximise the likelihood of a Gaussian structural regression model. We show by simulation that, surprisingly, this estimator is efficient for our problem, even though the distribution of gene expression values is far from Gaussian. Further, it performs well if errors have a heavier-tailed distribution or the model includes intercept terms, but not necessarily in other regions of parameter space. Finally, we show that by combining multiple laser scans we increase the power to detect differential expression of genes. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    OUTLYING OBSERVATIONS AND MISSING VALUES: HOW SHOULD THEY BE HANDLED?

    CLINICAL AND EXPERIMENTAL PHARMACOLOGY AND PHYSIOLOGY, Issue 5-6 2008
    John Ludbrook
    SUMMARY 1. The problems of, and best solutions for, outlying observations and missing values are very dependent on the sizes of the experimental groups. For original articles published in Clinical and Experimental Pharmacology and Physiology during 2006–2007, group sizes ranged from three to 44 ('small groups'). In surveys, epidemiological studies and clinical trials, group sizes range from 100s to 1000s ('large groups'). 2. How can one detect outlying (extreme) observations? The best methods are graphical, for instance: (i) a scatterplot, often with mean ± 2 s; and (ii) a box-and-whisker plot. Even with these, it is a matter of judgement whether observations are truly outlying. 3. It is permissible to delete or replace outlying observations if an independent explanation for them can be found. This may be, for instance, failure of a piece of measuring equipment or human error in operating it. If the observation is deleted, it can then be treated as a missing value. Rarely, the appropriate portion of the study can be repeated. 4. It is decidedly not permissible to delete unexplained extreme values. Some of the acceptable strategies for handling them are: (i) transform the data and proceed with conventional statistical analyses; (ii) use the mean for location, but use permutation (randomization) tests for comparing means; and (iii) use robust methods for describing location (e.g. median, geometric mean, trimmed mean), for indicating dispersion (range, percentiles), for comparing locations and for regression analysis. 5. What can be done about missing values? Some strategies are: (i) ignore them; (ii) replace them by hand if the data set is small; and (iii) use computerized imputation techniques to replace them if the data set is large (e.g. regression or EM (conditional Expectation, Maximum likelihood estimation) methods). 6. If the missing values are ignored, or even if they are replaced, it is essential to test whether the individuals with missing values are otherwise indistinguishable from the remainder of the group. If the missing values have not occurred at random, but are associated with some property of the individuals being studied, the subsequent analysis may be biased. [source]
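
    A minimal illustration of the graphical screening in point 2, flagging values outside mean ± 2 s and drawing a box-and-whisker plot. The flag is a prompt for judgement, not an automatic deletion rule, and the data below are simulated:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
x = np.append(rng.normal(50.0, 5.0, size=20), [78.0])  # one suspect value

mean, s = x.mean(), x.std(ddof=1)
flagged = x[np.abs(x - mean) > 2 * s]
print("outside mean +/- 2 s:", flagged)

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].scatter(range(len(x)), x)                # scatterplot with 2 s bands
axes[0].axhline(mean + 2 * s, linestyle="--")
axes[0].axhline(mean - 2 * s, linestyle="--")
axes[1].boxplot(x)                               # whiskers follow Tukey's rule
plt.show()
```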


    Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods

    ECOLOGY LETTERS, Issue 7 2007
    Subhash R. Lele
    Abstract We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise. [source]
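
    The mechanics of data cloning are easiest to see in a conjugate toy model where the posterior is available in closed form: with the data cloned K times, the posterior mean approaches the MLE and K times the posterior variance approaches the MLE's asymptotic variance. A sketch with Poisson data and an arbitrary Gamma prior; in real applications the posterior would come from standard MCMC software:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.poisson(4.0, size=30)
n, s = len(x), x.sum()
a, b = 2.0, 1.0                         # arbitrary Gamma(shape, rate) prior

for K in [1, 10, 100, 1000]:
    # Posterior after cloning the data K times (conjugate Gamma update)
    shape, rate = a + K * s, b + K * n
    post_mean = shape / rate            # -> MLE as K grows
    post_var = shape / rate**2          # K * post_var -> var of MLE
    print(K, post_mean, K * post_var)

print("MLE:", s / n, "asymptotic var of MLE:", s / n**2)
```

    The prior's influence is divided by K, which is why the inferences become invariant to its choice as K grows.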


    Feature extraction by autoregressive spectral analysis using maximum likelihood estimation: internal carotid arterial Doppler signals

    EXPERT SYSTEMS, Issue 4 2008
    Elif Derya Übeyli
    Abstract: In this study, Doppler signals recorded from the internal carotid artery (ICA) of 97 subjects were processed by personal computer using classical and model-based methods. Fast Fourier transform (classical method) and autoregressive (model-based method) methods were selected for processing the ICA Doppler signals. The parameters in the autoregressive method were found by using maximum likelihood estimation. The Doppler power spectra of the ICA Doppler signals were obtained by using these spectral analysis techniques. The variations in the shape of the Doppler spectra as a function of time were presented in the form of sonograms in order to obtain medical information. These Doppler spectra and sonograms were then used to compare the applied methods in terms of their frequency resolution and the effects in determination of stenosis and occlusion in the ICA. Reliable information on haemodynamic alterations in the ICA can be obtained by evaluation of these sonograms. [source]


    Alternative event study methodology for detecting dividend signals in the context of joint dividend and earnings announcements

    ACCOUNTING & FINANCE, Issue 2 2009
    Warwick Anderson
    JEL classification: C51; D46; G14; N27. Abstract: Friction models are used to examine the market reaction to the simultaneous disclosure of earnings and dividends in a thin-trading environment. Friction modelling, a procedure using maximum likelihood estimation, can be used to replace both the market model and restricted least-squares regression in event studies where there are two quantifiable variables and a number of possible interaction effects associated with the news that constitutes the study's event. The results indicate that the dividend signal can be separated from the earnings signal. [source]


    Bayesian estimation of financial models

    ACCOUNTING & FINANCE, Issue 2 2002
    Philip Gray
    This paper outlines a general methodology for estimating the parameters of financial models commonly employed in the literature. A numerical Bayesian technique is utilised to obtain the posterior density of model parameters and functions thereof. Unlike maximum likelihood estimation, where inference is only justified in large samples, the Bayesian densities are exact for any sample size. A series of simulation studies are conducted to compare the properties of point estimates, the distribution of option and bond prices, and the power of specification tests under maximum likelihood and Bayesian methods. Results suggest that maximum-likelihood-based asymptotic distributions have poor finite-sample properties. [source]


    Marginal maximum likelihood estimation of item response theory (IRT) equating coefficients for the common-examinee design

    JAPANESE PSYCHOLOGICAL RESEARCH, Issue 2 2001
    Haruhiko Ogasawara
    A method of estimating item response theory (IRT) equating coefficients by the common-examinee design with the assumption of the two-parameter logistic model is provided. The method uses the marginal maximum likelihood estimation, in which individual ability parameters in a common-examinee group are numerically integrated out. The abilities of the common examinees are assumed to follow a normal distribution but with an unknown mean and standard deviation on one of the two tests to be equated. The distribution parameters are jointly estimated with the equating coefficients. Further, the asymptotic standard errors of the estimates of the equating coefficients and the parameters for the ability distribution are given. Numerical examples are provided to show the accuracy of the method. [source]


    Normal mixture GARCH(1,1): applications to exchange rate modelling

    JOURNAL OF APPLIED ECONOMETRICS, Issue 3 2006
    Carol Alexander
    Some recent specifications for GARCH error processes explicitly assume a conditional variance that is generated by a mixture of normal components, albeit with some parameter restrictions. This paper analyses the general normal mixture GARCH(1,1) model which can capture time variation in both conditional skewness and kurtosis. A main focus of the paper is to provide evidence that, for modelling exchange rates, generalized two-component normal mixture GARCH(1,1) models perform better than those with three or more components, and better than symmetric and skewed Student's t-GARCH models. In addition to the extensive empirical results based on simulation and on historical data on three US dollar foreign exchange rates (British pound, euro and Japanese yen), we derive: expressions for the conditional and unconditional moments of all models; parameter conditions to ensure that the second and fourth conditional and unconditional moments are positive and finite; and analytic derivatives for the maximum likelihood estimation of the model parameters and standard errors of the estimates. Copyright © 2006 John Wiley & Sons, Ltd. [source]
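
    A sketch of the log-likelihood for a two-component version with zero component means; the paper's general model also allows non-zero component means under a zero-mean constraint, and uses analytic derivatives rather than the derivative-free search below:

```python
import numpy as np
from scipy.optimize import minimize

def nm_garch_negloglik(params, eps):
    """Negative log-likelihood of a two-component normal mixture GARCH(1,1)
    with zero component means: eps_t ~ p*N(0,h1_t) + (1-p)*N(0,h2_t), where
    h_{i,t} = w_i + a_i*eps_{t-1}^2 + b_i*h_{i,t-1}."""
    p, w1, a1, b1, w2, a2, b2 = params
    if not (0 < p < 1) or min(w1, w2) <= 0 or min(a1, b1, a2, b2) < 0:
        return np.inf
    h1 = h2 = np.var(eps)               # initialize at the sample variance
    ll = 0.0
    for t in range(1, len(eps)):
        h1 = w1 + a1 * eps[t - 1] ** 2 + b1 * h1
        h2 = w2 + a2 * eps[t - 1] ** 2 + b2 * h2
        dens = (p * np.exp(-eps[t] ** 2 / (2 * h1)) / np.sqrt(2 * np.pi * h1)
                + (1 - p) * np.exp(-eps[t] ** 2 / (2 * h2)) / np.sqrt(2 * np.pi * h2))
        ll += np.log(dens)
    return -ll

rng = np.random.default_rng(7)
eps = rng.standard_normal(500) * 0.01          # stand-in return series
x0 = [0.5, 1e-5, 0.05, 0.9, 1e-5, 0.10, 0.8]   # illustrative starting values
fit = minimize(nm_garch_negloglik, x0, args=(eps,), method="Nelder-Mead")
print(fit.fun)                                 # minimized negative log-likelihood
```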


    The Impact of Omitted Responses on the Accuracy of Ability Estimation in Item Response Theory

    JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 3 2001
    R. J. De Ayala
    Practitioners typically face situations in which examinees have not responded to all test items. This study investigated the effect on an examinee's ability estimate when an examinee is presented an item, has ample time to answer, but decides not to respond to the item. Three approaches to ability estimation (biweight estimation, expected a posteriori, and maximum likelihood estimation) were examined. A Monte Carlo study was performed and the effect of different levels of omissions on the simulee's ability estimates was determined. Results showed that the worst estimation occurred when omits were treated as incorrect. In contrast, substitution of 0.5 for omitted responses resulted in ability estimates that were almost as accurate as those using complete data. Implications for practitioners are discussed. [source]
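
    A sketch of 2PL ability estimation by maximum likelihood, contrasting "omit as incorrect" with the 0.5 substitution the study found nearly as accurate as complete data. The item parameters and the response pattern are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (illustrative)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties (illustrative)

def ability_mle(u):
    """ML ability estimate under the 2PL; u may contain 0.5 for omits."""
    def negloglik(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return -np.sum(u * np.log(p) + (1.0 - u) * np.log(1.0 - p))
    return minimize_scalar(negloglik, bounds=(-4, 4), method="bounded").x

responses = np.array([1.0, 1.0, 1.0, np.nan, np.nan])  # last two items omitted
as_wrong = np.nan_to_num(responses, nan=0.0)           # omit treated as incorrect
as_half = np.nan_to_num(responses, nan=0.5)            # 0.5 substitution
print(ability_mle(as_wrong), ability_mle(as_half))     # 0.5 rule is less severe
```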


    Assessing the forecasting accuracy of alternative nominal exchange rate models: the case of long memory

    JOURNAL OF FORECASTING, Issue 5 2006
    David Karemera
    Abstract. This paper presents an autoregressive fractionally integrated moving-average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long-memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long-memory model is more efficient than the random walk model in steps-ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi-step-ahead forecasts. This new finding strongly suggests that the long-memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd. [source]
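
    The defining ingredient of the ARFIMA model is the fractional difference (1 − L)^d, whose binomial-expansion weights obey the recursion π_0 = 1, π_k = π_{k−1}(k − 1 − d)/k. A sketch of applying the filter for a given d; Sowell's exact ML, used in the paper, jointly estimates d with the ARMA parameters and is considerably more involved:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference (1 - L)^d via its binomial weights:
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k, truncated at the sample start."""
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    # y_t = sum_{k=0}^{t} pi_k * x_{t-k}
    return np.array([pi[: t + 1] @ x[t::-1] for t in range(n)])

# Sanity check: d = 1 reproduces ordinary first differencing
x = np.cumsum(np.ones(6))      # 1, 2, 3, ..., 6
print(frac_diff(x, 1.0))       # [1, 1, 1, 1, 1, 1]
```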


    Natural resource-collection work and children's schooling in Malawi

    AGRICULTURAL ECONOMICS, Issue 2-3 2004
    Flora J. Nankhuni
    Abstract. This paper presents results of research that investigates whether long hours of work spent by children in fuel wood and water-collection activities, i.e., natural resource-collection work, influence the likelihood that a child aged 6–14 attends school. Potential endogeneity of resource-collection work hours is corrected for using two-stage conditional maximum likelihood estimation. Data from the 1997–1998 Malawi Integrated Household Survey (IHS) conducted by the Malawi National Statistics Office (NSO) in conjunction with the International Food Policy Research Institute (IFPRI) are used. The study finds that Malawian children are significantly involved in resource-collection work and that their likelihood of attending school decreases with increases in hours allocated to this work. The study further shows that girls spend more hours on resource-collection work and are more likely to be attending school while burdened by this work. Consequently, girls may find it difficult to progress well in school. However, girls are not necessarily less likely to be attending school. Results further show that the presence of more women in a household is associated with a lower burden of resource-collection work on children and a higher probability of children's school attendance. Finally, the research shows that children from the most environmentally degraded districts of central and southern Malawi are less likely to attend school and relatively fewer of them have progressed to secondary school compared to those from districts in the north. [source]


    Wavelet-based adaptive robust M-estimator for nonlinear system identification

    AICHE JOURNAL, Issue 8 2000
    D. Wang
    A wavelet-based robust M-estimation method for the identification of nonlinear systems is proposed. Because it is not based on an assumed class of error distributions, it takes a flexible, nonparametric approach and has the advantage of directly estimating the error distribution from the data. This M-estimator is optimal over any error distribution in the sense of maximum likelihood estimation. A Monte Carlo study on a nonlinear chemical engineering example was used to compare the results with various previously utilized methods. [source]
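
    The wavelet-based estimation of the error density is beyond a short sketch; as a generic point of contrast, here is a fixed Huber M-estimator for a nonlinear model via SciPy's robust least squares, a standard technique swapped in for illustration, not the authors' adaptive method:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
t = np.linspace(0, 5, 60)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)
y[::15] += 1.0                      # inject a few gross outliers

def residuals(theta):
    # Nonlinear model: y = theta0 * exp(-theta1 * t)
    return theta[0] * np.exp(-theta[1] * t) - y

# Huber loss down-weights the outliers relative to plain least squares
robust = least_squares(residuals, x0=[1.0, 1.0], loss="huber", f_scale=0.1)
plain = least_squares(residuals, x0=[1.0, 1.0])
print(robust.x, plain.x)            # robust fit stays near (2.0, 0.7)
```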


    Exploring social mobility with latent trajectory groups

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2008
    Patrick Sturgis
    Summary. We present a new methodological approach to the study of social mobility. We use a latent class growth analysis framework to identify five qualitatively distinct social class trajectory groups between 1980 and 2000 for male respondents to the 1970 British Cohort Study. We model the antecedents of trajectory group membership via multinomial logistic regression. Non-response, which is a considerable problem in long-term panels and cohort studies, is handled via direct maximum likelihood estimation, which is consistent and efficient when data are missing at random. Our results suggest a combination of meritocratic and ascriptive influences on the probability of membership in the different trajectory groups. [source]


    A latent Markov model for detecting patterns of criminal activity

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2007
    Francesco Bartolucci
    Summary. The paper investigates the problem of determining patterns of criminal behaviour from official criminal histories, concentrating on the variety and type of offending convictions. The analysis is carried out on the basis of a multivariate latent Markov model which allows for discrete covariates affecting the initial and the transition probabilities of the latent process. We also show some simplifications which reduce the number of parameters substantially; we include a Rasch-like parameterization of the conditional distribution of the response variables given the latent process and a constraint of partial homogeneity of the latent Markov chain. For the maximum likelihood estimation of the model we outline an EM algorithm based on recursions known in the hidden Markov literature, which make the estimation feasible even when the number of time occasions is large. Through this model, we analyse the conviction histories of a cohort of offenders who were born in England and Wales in 1953. The final model identifies five latent classes and specifies common transition probabilities for males and females between 5-year age periods, but with different initial probabilities. [source]