Asymptotic Normality


Selected Abstracts


Jeremy Penzer
Summary ARCH/GARCH representations of financial series usually attempt to model the serial correlation structure of squared returns. Although it is undoubtedly true that squared returns are correlated, there is increasing empirical evidence of stronger correlation in the absolute returns than in the squared returns. Rather than assuming an explicit form for volatility, we adopt an approximation approach; we approximate the δth power of volatility by an asymmetric GARCH function, with the power index δ chosen so that the approximation is optimal. Asymptotic normality is established for both the quasi-maximum likelihood estimator (qMLE) and the least absolute deviations estimator (LADE) in our approximation setting. A consequence of our approach is a relaxation of the usual stationarity condition for GARCH models. In an application to real financial datasets, the estimated values for δ are found to be close to one, consistent with the stylized fact that the strongest autocorrelation is found in the absolute returns. A simulation study illustrates that the qMLE is inefficient for models with heavy-tailed errors, whereas the LADE is more robust. [source]
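The stylized fact the paper appeals to, that |r|^δ is most strongly autocorrelated for δ near one, can be checked directly on any return series. Below is a minimal sketch (not the paper's estimator; all parameter values are illustrative) that simulates a GARCH(1,1) series and compares the lag-1 autocorrelation of |r|^δ across powers δ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a GARCH(1,1)-type return series (illustrative parameters).
n, omega, alpha, beta = 20_000, 0.05, 0.10, 0.85
z = rng.standard_normal(n)
r = np.empty(n)
sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(sigma2) * z[t]
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Compare the serial correlation of |r|**delta across powers delta.
for delta in (0.5, 1.0, 1.5, 2.0):
    print(delta, round(lag1_autocorr(np.abs(r) ** delta), 3))
```

On heavy-tailed real returns the correlations typically peak near δ = 1, matching the empirical finding quoted in the abstract.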

Estimating Population Size for a Continuous Time Frailty Model with Covariates in a Capture–Recapture Study

BIOMETRICS, Issue 3 2007
Ying Xu
Summary A continuous time frailty capture–recapture model is proposed for estimating the size of a closed population, with observed covariates used to explain individuals' heterogeneity in the presence of a random effect. A conditional likelihood approach is used to derive estimates of the parameters, and the Horvitz–Thompson estimator is adopted to estimate the unknown population size. Asymptotic normality of the estimates is obtained. Simulation results and a real example show that the proposed method works satisfactorily. [source]
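The Horvitz–Thompson step of such an approach is easy to illustrate: sum the inverse capture probabilities over the observed individuals. The sketch below simulates a closed population with covariate-dependent capture probabilities; all names and parameter values are hypothetical, and the true probabilities stand in for the ones a fitted conditional-likelihood model would supply:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical closed population of N individuals; individual i is observed
# with probability p_i driven by a covariate x_i (logistic link).
N = 5_000
x = rng.normal(size=N)
p = 1.0 / (1.0 + np.exp(-(0.2 + 0.8 * x)))   # capture probabilities

captured = rng.random(N) < p

# Horvitz-Thompson estimator of the population size: each observed
# individual represents 1/p_i individuals in the population.
N_hat = np.sum(1.0 / p[captured])
print(round(N_hat))
```

In the actual method the p_i would themselves be estimated, and the asymptotic normality result accounts for that extra estimation error.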

Estimation of Nonlinear Models with Measurement Error

ECONOMETRICA, Issue 1 2004
Susanne M. Schennach
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the "true" value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved "true" variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the "true," unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach. [source]
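The simplest instance of this identification idea involves only second moments: with two repeated observations x1 = x* + e1 and x2 = x* + e2 and independent measurement errors, cov(x1, x2) already recovers var(x*). The sketch below (hypothetical numbers; the paper's Fourier-transform machinery generalizes this to arbitrary moments) illustrates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

x_true = rng.normal(2.0, 1.5, n)        # unobserved "true" regressor
x1 = x_true + rng.normal(0.0, 1.0, n)   # two repeated noisy observations
x2 = x_true + rng.normal(0.0, 1.0, n)

# With independent errors, the cross-covariance of the repeated
# observations identifies the variance of the latent regressor:
var_hat = np.cov(x1, x2)[0, 1]
print(round(var_hat, 2))   # close to 1.5**2 = 2.25
```

Higher moments of x* require the deconvolution argument via the Fourier transform described in the abstract; this second-moment case is only the entry point.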

Models for the estimation of a 'no effect concentration'

Ana M. Pires
Abstract The use of a no effect concentration (NEC), instead of the commonly used no observed effect concentration (NOEC), has been advocated recently. In this article, models and methods for the estimation of an NEC are proposed, and it is shown that the NEC overcomes many of the objections to the NOEC. The NEC is included as a threshold parameter in a non-linear model. Numerical methods are then used for point estimation, and several techniques are proposed for interval estimation (based on the bootstrap, profile likelihood and asymptotic normality). The adequacy of these methods is empirically confirmed by the results of a simulation study, with the profile-likelihood-based interval emerging as the best method. Finally, the methodology is illustrated with data obtained from a 21-day Daphnia magna reproduction test with a reference substance, 3,4-dichloroaniline (3,4-DCA), and with a real effluent. Copyright © 2002 John Wiley & Sons, Ltd. [source]
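A minimal sketch of fitting an NEC as a threshold parameter in a non-linear model, assuming a flat-then-exponential-decline dose-response (a hypothetical functional form, not necessarily the one used in the article), with a Wald interval obtained from asymptotic normality:

```python
import numpy as np
from scipy.optimize import curve_fit

def nec_model(c, a, b, nec):
    """Hypothetical threshold model: flat response up to the NEC,
    exponential decline beyond it."""
    return a * np.exp(-b * np.clip(c - nec, 0.0, None))

rng = np.random.default_rng(3)
conc = np.repeat([0.0, 0.5, 1.0, 2.0, 4.0, 8.0], 10)   # test concentrations
y = nec_model(conc, 20.0, 0.3, 1.0) + rng.normal(0.0, 0.5, conc.size)

popt, pcov = curve_fit(nec_model, conc, y, p0=[15.0, 0.2, 0.5])
a_hat, b_hat, nec_hat = popt
se = np.sqrt(np.diag(pcov))        # Wald (asymptotic-normality) SEs
print(nec_hat, nec_hat - 1.96 * se[2], nec_hat + 1.96 * se[2])
```

The article finds profile-likelihood intervals preferable to this Wald construction; the sketch shows only the simplest of the three interval methods mentioned.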

The rate of learning-by-doing: estimates from a search-matching model

Julien Prat
We construct and estimate by maximum likelihood a job search model where wages are set by Nash bargaining and idiosyncratic productivity follows a geometric Brownian motion. The proposed framework enables us to endogenize job destruction and to estimate the rate of learning-by-doing. Although the range of the observations is not independent of the parameters, we establish that the estimators satisfy asymptotic normality. The structural model is estimated using Current Population Survey data on accepted wages and employment durations. We show that it accurately captures the joint distribution of wages and job spells. We find that the rate of learning-by-doing has an important positive effect on aggregate output and a small impact on employment. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Pseudomartingale estimating equations for modulated renewal process models

Fengchang Lin
Summary. We adapt martingale estimating equations based on gap time information to a general intensity model for a single realization of a modulated renewal process. The consistency and asymptotic normality of the estimators are proved under ergodicity conditions. Previous work has considered either parametric likelihood analysis or semiparametric multiplicative models using partial likelihood. The framework is generally applicable to semiparametric and parametric models, including additive and multiplicative specifications, and periodic models. It facilitates a semiparametric extension of a popular parametric earthquake model. Simulations and empirical analyses of Taiwanese earthquake sequences illustrate the methodology's practical utility. [source]

Assessing interaction effects in linear measurement error models

Li-Shan Huang
Summary. In a linear model, the effect of a continuous explanatory variable may vary across groups defined by a categorical variable, and the variable itself may be subject to measurement error. This suggests a linear measurement error model with slope-by-factor interactions. The variables that are defined by such interactions are neither continuous nor discrete, and hence it is not immediately clear how to fit linear measurement error models when interactions are present. This paper gives a corollary of a theorem of Fuller for the situation of correcting measurement errors in a linear model with slope-by-factor interactions. In particular, the error-corrected estimate of the coefficients and its asymptotic variance matrix are given in a more easily accessible form. Simulation results confirm the asymptotic normality of the coefficients in finite samples. We apply the results to data from the Seychelles Child Development Study at age 66 months, assessing the effects of exposure to mercury through fish consumption on child development in females and males, for both prenatal and post-natal exposure. [source]

Local Linear M-estimation in non-parametric spatial regression

Zhengyan Lin
Primary 62G07; secondary 60F05. Abstract. A robust version of local linear regression smoothers augmented with variable bandwidths is investigated for dependent spatial processes. The (uniform) weak consistency as well as asymptotic normality of the local linear M-estimator (LLME) of the spatial regression function g(x) are established under some mild conditions. Furthermore, an additive model is considered to avoid the curse of dimensionality for spatial processes, and an estimation procedure combining the marginal integration technique with the LLME is applied. A simulation study illustrates the proposed estimation method and shows that it works well numerically. [source]

Quasi-maximum likelihood estimation of periodic GARCH and periodic ARMA-GARCH processes

Abdelhakim Aknouche
Primary: 62F12; Secondary: 62M10, 91B84. Abstract. This article establishes the strong consistency and asymptotic normality (CAN) of the quasi-maximum likelihood estimator (QMLE) for generalized autoregressive conditionally heteroscedastic (GARCH) and autoregressive moving-average (ARMA)-GARCH processes with periodically time-varying parameters. We first give a necessary and sufficient condition for the existence of a strictly periodically stationary solution of the periodic GARCH (PGARCH) equation. As a consequence, some positive-order moment of the PGARCH solution is shown to be finite, under which we prove the strong consistency and asymptotic normality of the QMLE for a PGARCH process without any moment condition, and for a periodic ARMA-GARCH (PARMA-PGARCH) model under mild conditions. [source]

Local asymptotic normality and efficient estimation for INAR(p) models

Feike C. Drost
Abstract. Integer-valued autoregressive (INAR) processes have been introduced to model non-negative integer-valued phenomena that evolve in time. The distribution of an INAR(p) process is determined by two parameters: a vector of survival probabilities and a probability distribution on the non-negative integers, called an immigration distribution. This paper provides an efficient estimator of the parameters and, in particular, shows that the INAR(p) model has the local asymptotic normality property. [source]
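An INAR(1) sketch makes the two parameter components concrete: a survival probability acting through binomial thinning, and an immigration distribution on the non-negative integers. The example below uses Poisson immigration purely for illustration; the model itself allows an arbitrary immigration distribution:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_inar1(n, survival_p, immigration_mean, x0=0):
    """INAR(1): X_t = Binomial(X_{t-1}, survival_p) + immigration,
    with Poisson immigration as an illustrative choice."""
    x = np.empty(n, dtype=int)
    prev = x0
    for t in range(n):
        survivors = rng.binomial(prev, survival_p)   # binomial thinning
        prev = survivors + rng.poisson(immigration_mean)
        x[t] = prev
    return x

x = simulate_inar1(20_000, 0.6, 2.0)
# Stationary mean is immigration_mean / (1 - survival_p) = 2 / 0.4 = 5.
print(round(x.mean(), 2))
```

Each count at time t is the thinned survivors of the previous count plus new arrivals, which is exactly the integer-valued analogue of an AR(1) recursion.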

A light-tailed conditionally heteroscedastic model with applications to river flows

Péter Elek
Abstract. A conditionally heteroscedastic model, different from the more commonly used autoregressive moving average–generalized autoregressive conditionally heteroscedastic (ARMA–GARCH) processes, is established and analysed here. The time-dependent variance of innovations passing through an ARMA filter is conditioned on the lagged values of the generated process, rather than on the lagged innovations, and is defined to be asymptotically proportional to those past values. Designed this way, the model incorporates certain feedback from the modelled process, the innovation is no longer of GARCH type, and all moments of the modelled process are finite provided the same is true for the generating noise. The article gives the condition for stationarity, and proves consistency and asymptotic normality of the Gaussian quasi-maximum likelihood estimator of the variance parameters, even though the estimated parameters of the linear filter contain an error. An analysis of six diurnal water discharge series observed along the Rivers Danube and Tisza in Hungary demonstrates the usefulness of such a model. The effect of lagged river discharge on the variance of innovations turns out to be highly significant, and nonparametric estimation confirms its approximate linearity. Simulations from the new model preserve well the probability distribution, the high quantiles, the tail behaviour and the high-level clustering of the original series, further justifying the model choice. [source]

Asymptotic self-similarity and wavelet estimation for long-range dependent fractional autoregressive integrated moving average time series with stable innovations

Stilian Stoev
Primary 60G18; 60E07; secondary 62M10; 63G20. Abstract. Methods for parameter estimation in the presence of long-range dependence and heavy tails are scarce. Fractional autoregressive integrated moving average (FARIMA) time series for positive values of the fractional differencing exponent d can be used to model long-range dependence in the case of heavy-tailed distributions. In this paper, we focus on the estimation of the Hurst parameter H = d + 1/α for long-range dependent FARIMA time series with symmetric α-stable (1 < α < 2) innovations. We establish the consistency and the asymptotic normality of two types of wavelet estimators of the parameter H. We do so by exploiting the fact that the integrated series is asymptotically self-similar with parameter H. When the parameter α is known, we also obtain consistent and asymptotically normal estimators for the fractional differencing exponent d = H − 1/α. Our results hold for a larger class of causal linear processes with symmetric stable innovations. As the wavelet-based estimation method used here is semi-parametric, it allows for a more robust treatment of long-range dependent data than parametric methods. [source]

A Time-Domain Semi-parametric Estimate for Strongly Dependent Continuous-Time Stationary Processes

Takeshi Kato
Abstract. A covariance-based estimator of the memory parameter of strongly dependent continuous-time stationary processes is proposed. The consistency and asymptotic normality of the estimator are established. All assumptions, the form of the estimator, and the proofs are given entirely in the time domain. [source]

Nonparametric covariate adjustment for receiver operating characteristic curves

Fang Yao
Abstract The accuracy of a diagnostic test is typically characterized using the receiver operating characteristic (ROC) curve. Summarizing indexes such as the area under the ROC curve (AUC) are used to compare different tests as well as to measure the difference between two populations. Often additional information is available on some of the covariates which are known to influence the accuracy of such measures. The authors propose nonparametric methods for covariate adjustment of the AUC. Models with normal errors and possibly non-normal errors are discussed and analyzed separately. Nonparametric regression is used for estimating mean and variance functions in both scenarios. In the model that relaxes the assumption of normality, the authors propose a covariate-adjusted Mann–Whitney estimator for AUC estimation which effectively uses available data to construct working samples at any covariate value of interest and is computationally efficient to implement. This provides a generalization of the Mann–Whitney approach for comparing two populations by taking covariate effects into account. The authors derive asymptotic properties for the AUC estimators in both settings, including asymptotic normality, optimal strong uniform convergence rates and mean squared error (MSE) consistency. The MSE of the AUC estimators was also assessed in smaller samples by simulation. Data from an agricultural study were used to illustrate the methods of analysis. The Canadian Journal of Statistics 38: 27–46; 2010 © 2009 Statistical Society of Canada [source]
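The unadjusted Mann–Whitney estimator that the covariate-adjusted version generalizes is a two-line computation: the proportion of concordant (diseased, healthy) score pairs, with ties counted one-half. A sketch with hypothetical normal scores:

```python
import numpy as np

rng = np.random.default_rng(5)

# AUC equals P(diseased score > healthy score); the Mann-Whitney statistic
# estimates it by the proportion of concordant pairs.
healthy = rng.normal(0.0, 1.0, 400)
diseased = rng.normal(1.2, 1.0, 300)

def auc_mann_whitney(pos, neg):
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

print(round(auc_mann_whitney(diseased, healthy), 3))
```

The paper's estimator applies the same pairwise comparison to working samples constructed at each covariate value, rather than to the pooled samples shown here.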

Non-parametric regression with a latent time series

Oliver Linton
Summary. In this paper we investigate a class of semi-parametric models for panel data sets where the cross-section and time dimensions are large. Our model contains a latent time series that is to be estimated, and perhaps forecast, along with a non-parametric covariate effect. It is motivated by the need to be flexible with regard to the functional form of covariate effects, but also by the need to be practical with regard to forecasting of time series effects. We propose estimation procedures based on local linear kernel smoothing; our estimators are all explicitly given. We establish the pointwise consistency and asymptotic normality of our estimators. We also show that the effects of estimating the latent time series can be ignored in certain cases. [source]

Prediction-based estimating functions

Michael Sørensen
A generalization of martingale estimating functions is presented which is useful when there are no natural or easily calculated martingales that can be used to construct a class of martingale estimating functions. An estimating function of the new type, which is based on linear predictors, is called a prediction-based estimating function. Special attention is given to classes of prediction-based estimating functions given by a finite-dimensional space of predictors. It is demonstrated that such a class of estimating functions has most of the attractive properties of martingale estimating functions. In particular, a simple expression is found for the optimal estimating function. Prediction-based estimating functions of this type involve only unconditional moments, in contrast to martingale estimating functions, where conditional moments are required. Thus, for applications to discretely observed continuous time models, a considerably smaller amount of simulation is, in general, needed for these than for martingale estimating functions. This is also true of the optimal prediction-based estimating functions. Conditions are given that ensure the existence, consistency and asymptotic normality of the corresponding estimators. The new method is applied to inference for sums of Ornstein–Uhlenbeck-type processes and stochastic volatility models. Stochastic volatility models are studied in considerable detail. It is demonstrated that for inference about the models of Hull and White and of Chesney and Scott, an explicit optimal prediction-based estimating function can be found, so that no simulations are needed. [source]

On the estimation of the heavy-tail exponent in time series using the max-spectrum

Stilian A. Stoev
Abstract This paper addresses the problem of estimating the tail index α of distributions with heavy, Pareto-type tails for dependent data, which is of interest in the areas of finance, insurance, environmental monitoring and teletraffic analysis. A novel approach based on the max self-similarity scaling behavior of block maxima is introduced. The method exploits the increasing lack of dependence of maxima over blocks of large size, which proves useful for time series data. We establish the consistency and asymptotic normality of the proposed max-spectrum estimator for a large class of m-dependent time series, in the regime of intermediate block maxima. In the regime of large block maxima, we demonstrate the distributional consistency of the estimator for a broad range of time series models including linear processes. The max-spectrum estimator is a robust and computationally efficient tool, which provides a novel time-scale perspective on the estimation of tail exponents. Its performance is illustrated over synthetic and real data sets. Copyright © 2009 John Wiley & Sons, Ltd. [source]
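A toy version of the max self-similarity idea, for i.i.d. Pareto-type data: block maxima over dyadic block sizes 2^j grow, on a log2 scale, roughly linearly in j with slope 1/α, so regressing mean log2 block maxima on the scale j estimates the tail index. This sketch uses hypothetical parameters; the actual max-spectrum estimator handles dependent data and chooses the scale range with care:

```python
import numpy as np

rng = np.random.default_rng(6)

# Pareto-tailed sample with tail index alpha = 1.5 (illustrative).
alpha = 1.5
x = rng.pareto(alpha, 2 ** 17) + 1.0   # standard Pareto with minimum 1

def max_spectrum_slope(x, j_min=6, j_max=12):
    """Mean log2 block maxima Y_j over dyadic block sizes 2**j; for
    Pareto-type tails Y_j is roughly linear in j with slope 1/alpha."""
    js, ys = [], []
    for j in range(j_min, j_max + 1):
        m = 2 ** j
        blocks = x[: (len(x) // m) * m].reshape(-1, m)
        ys.append(np.log2(blocks.max(axis=1)).mean())
        js.append(j)
    slope = np.polyfit(js, ys, 1)[0]
    return 1.0 / slope                 # estimate of the tail index

print(round(max_spectrum_slope(x), 2))
```

The "time-scale perspective" of the abstract is visible here: each scale j contributes one point to the regression, much like a wavelet spectrum.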

Kolmogorov–Smirnov-type testing for the partial homogeneity of Markov processes, with application to credit risk

Rafael Weißbach
Abstract In banking, the default behaviour of the counterpart is of interest not only for the pricing of transactions under credit risk but also for the assessment of portfolio credit risk. We develop a test against the hypothesis that default intensities are chronologically constant within a group of similar counterparts, e.g. a rating class. The Kolmogorov–Smirnov-type test builds on the asymptotic normality of counting processes in event history analysis. The right censoring accommodates Markov processes with more than one non-absorbing state. A simulation study and two examples of rating systems demonstrate that partial homogeneity can generally be assumed; occasionally, however, certain migrations must be modelled and estimated inhomogeneously. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A wavelet solution to the spurious regression of fractionally differenced processes

Yanqin Fan
Abstract In this paper we propose to overcome the problem of spurious regression between fractionally differenced processes by applying the discrete wavelet transform (DWT) to both processes and then estimating the regression in the wavelet domain. The DWT is known to approximately decorrelate heavily autocorrelated processes and, unlike applying a first difference filter, involves a recursive two-step filtering and downsampling procedure. We prove the asymptotic normality of the proposed estimator and demonstrate via simulation its efficacy in finite samples. Copyright © 2003 John Wiley & Sons, Ltd. [source]
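The decorrelation effect is easy to see at the first level of a Haar DWT: the detail coefficients of a strongly autocorrelated fractionally differenced series are nearly white. A self-contained sketch (truncated MA(∞) simulation of a FARIMA(0, d, 0) series; parameter values are illustrative, and the paper's procedure applies a full recursive DWT to both series rather than this single level):

```python
import numpy as np

rng = np.random.default_rng(7)

def farima0d0(n, d, trunc=2000):
    """Simulate FARIMA(0, d, 0) via its truncated MA(inf) representation,
    psi_k = psi_{k-1} * (k - 1 + d) / k."""
    psi = np.empty(trunc)
    psi[0] = 1.0
    for k in range(1, trunc):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(n + trunc)
    return np.convolve(eps, psi, mode="valid")[:n]

def haar_details(x):
    """Level-1 Haar DWT detail coefficients."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def lag1(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

x = farima0d0(2 ** 15, d=0.4)
# Long memory in the raw series, near-zero lag-1 correlation in the details.
print(round(lag1(x), 2), round(lag1(haar_details(x)), 2))
```

The Haar filter has one vanishing moment, which is what kills the low-frequency (long-memory) component; higher-order wavelets decorrelate even more aggressively.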

Interpreting Statistical Evidence with Empirical Likelihood Functions

Zhiwei Zhang
Abstract There has been growing interest in the likelihood paradigm of statistics, where statistical evidence is represented by the likelihood function and its strength is measured by likelihood ratios. The available literature in this area has so far focused on parametric likelihood functions, though in some cases a parametric likelihood can be robustified. This focused discussion on parametric models, while insightful and productive, may have left the impression that the likelihood paradigm is best suited to parametric situations. This article discusses the use of empirical likelihood functions, a well-developed methodology in the frequentist paradigm, to interpret statistical evidence in nonparametric and semiparametric situations. A comparative review of the literature shows that, while an empirical likelihood is not a true probability density, it has the essential properties, namely consistency and local asymptotic normality, that unify and justify the various parametric likelihood methods for evidential analysis. Real examples are presented to illustrate and compare the empirical likelihood method and the parametric likelihood methods. These methods are also compared in terms of asymptotic efficiency by combining relevant results from different areas. It is seen that a parametric likelihood based on a correctly specified model is generally more efficient than an empirical likelihood for the same parameter. However, when the working model fails, a parametric likelihood either breaks down or, if a robust version exists, becomes less efficient than the corresponding empirical likelihood. [source]
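For a single mean parameter, the empirical likelihood ratio has a closed form up to a one-dimensional Lagrange multiplier, which the sketch below finds by bisection; -2 log R(μ) is asymptotically χ²(1) at the true mean. This is the generic Owen-style construction, not a procedure specific to this article:

```python
import numpy as np

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean: maximize sum(log w_i)
    subject to sum(w_i) = 1 and sum(w_i * (x_i - mu)) = 0, by solving the
    Lagrange-multiplier equation with bisection."""
    z = x - mu
    if z.max() <= 0 or z.min() >= 0:
        return float("inf")             # mu outside the convex hull of the data
    lo = (-1.0 + 1e-10) / z.max()       # keep 1 + lam * z_i > 0 for all i
    hi = (-1.0 + 1e-10) / z.min()
    for _ in range(200):                # score is decreasing in lam
        lam = 0.5 * (lo + hi)
        if np.sum(z / (1.0 + lam * z)) > 0:
            lo = lam
        else:
            hi = lam
    return float(2.0 * np.sum(np.log1p(lam * z)))

rng = np.random.default_rng(8)
x = rng.exponential(1.0, 200)
print(round(el_log_ratio(x, x.mean()), 4),   # ~0 at the sample mean
      round(el_log_ratio(x, 1.0), 3))        # compare with chi2(1) quantiles
```

No density model for x is specified anywhere, which is exactly the nonparametric flavor of evidence the article argues the likelihood paradigm can accommodate.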

Cox Regression in Nested Case–Control Studies with Auxiliary Covariates

BIOMETRICS, Issue 2 2010
Mengling Liu
Summary The nested case–control (NCC) design is a popular sampling method in large epidemiological studies for its cost-effectiveness in investigating the temporal relationship of diseases with environmental exposures or biological precursors. Thomas' maximum partial likelihood estimator is commonly used to estimate the regression parameters in Cox's model for NCC data. In this article, we consider a situation in which failure/censoring information and some crude covariates are available for the entire cohort in addition to the NCC data, and propose an improved estimator that is asymptotically more efficient than Thomas' estimator. We adopt a projection approach that, heretofore, has only been employed in situations of random validation sampling, and show that it can be well adapted to NCC designs, where the sampling scheme is a dynamic process and is not independent for controls. Under certain conditions, consistency and asymptotic normality of the proposed estimator are established, and a consistent variance estimator is also developed. Furthermore, a simplified approximate estimator is proposed when the disease is rare. Extensive simulations are conducted to evaluate the finite sample performance of our proposed estimators and to compare their efficiency with Thomas' estimator and other competing estimators. Moreover, sensitivity analyses are conducted to demonstrate the behavior of the proposed estimator when model assumptions are violated, and we find that the biases are reasonably small in realistic situations. We further demonstrate the proposed method with data from studies on Wilms' tumor. [source]

Incorporating Correlation for Multivariate Failure Time Data When Cluster Size Is Large

BIOMETRICS, Issue 2 2010
L. Xue
Summary We propose a new estimation method for multivariate failure time data using the quadratic inference function (QIF) approach. The proposed method efficiently incorporates within-cluster correlations and is therefore more efficient than methods that ignore them. Furthermore, the proposed method is easy to implement. Unlike the weighted estimating equations in Cai and Prentice (1995, Biometrika 82, 151–164), it is not necessary to explicitly estimate the correlation parameters. This simplification is particularly useful in analyzing data with large cluster sizes, where it is difficult to estimate the intracluster correlation. Under certain regularity conditions, we show the consistency and asymptotic normality of the proposed QIF estimators. A chi-squared test is also developed for hypothesis testing. We conduct extensive Monte Carlo simulation studies to assess the finite sample performance of the proposed methods. We also illustrate the proposed methods by analyzing primary biliary cirrhosis (PBC) data. [source]

A Class of Multiplicity Adjusted Tests for Spatial Clustering Based on Case–Control Point Data

BIOMETRICS, Issue 1 2007
Toshiro Tango
Summary A class of tests with quadratic forms for detecting spatial clustering of health events based on case,control point data is proposed. It includes Cuzick and Edwards's test statistic (1990, Journal of theRoyal Statistical Society, Series B52, 73,104). Although they used the property of asymptotic normality of the test statistic, we show that such an approximation is generally poor for moderately large sample sizes. Instead, we suggest a central chi-square distribution as a better approximation to the asymptotic distribution of the test statistic. Furthermore, not only to estimate the optimal value of the unknown parameter on the scale of cluster but also to adjust for multiple testing due to repeating the procedure by changing the parameter value, we propose the minimum of the profile p-value of the test statistic for the parameter as an integrated test statistic. We also provide a statistic to estimate the areas or cases which make large contributions to significant clustering. The proposed methods are illustrated with a data set concerning the locations of cases of childhood leukemia and lymphoma and another on early medieval grave site locations consisting of affected and nonaffected grave sites. [source]