Finite Sample Properties

Selected Abstracts


Testing Conditional Asset Pricing Models Using a Markov Chain Monte Carlo Approach

EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2008
Manuel Ammann
JEL classification: G12. Abstract We use Markov Chain Monte Carlo (MCMC) methods for the parameter estimation and the testing of conditional asset pricing models. In contrast to traditional approaches, this method is truly conditional because it does not require the assumption that time variation in betas is driven by a set of conditioning variables. Moreover, the approach has exact finite sample properties and accounts for errors-in-variables. Using S&P 500 panel data, we analyse the empirical performance of the CAPM and the Fama and French (1993) three-factor model. We find that time variation of betas in the CAPM and time variation of the coefficients for the size factor (SMB) and the distress factor (HML) in the three-factor model improve the empirical performance. Therefore, our findings are consistent with time variation of firm-specific exposure to market risk, systematic credit risk and systematic size effects. However, a Bayesian model comparison trading off goodness of fit and model complexity indicates that the conditional CAPM performs best, followed by the conditional three-factor model, the unconditional CAPM, and the unconditional three-factor model. [source]
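
The paper's MCMC machinery (latent time-varying betas, errors-in-variables) is well beyond a short sketch, but the core estimation idea can be illustrated with a minimal random-walk Metropolis sampler for a constant-beta CAPM regression on simulated data. Everything here (data, flat prior, proposal scale) is hypothetical, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: excess stock returns driven by excess market returns.
T = 500
beta_true, sigma = 1.2, 0.05
r_mkt = rng.normal(0.005, 0.04, T)
r_stock = beta_true * r_mkt + rng.normal(0.0, sigma, T)

def log_post(beta):
    # Gaussian likelihood with known sigma and a flat prior on beta.
    resid = r_stock - beta * r_mkt
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis: accept a proposal with the usual log-ratio rule.
draws, beta, lp = [], 1.0, log_post(1.0)
for _ in range(20000):
    prop = beta + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    draws.append(beta)

post = np.array(draws[5000:])          # discard burn-in
print(f"posterior mean {post.mean():.3f}, 95% interval "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```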


Genetic association tests with age at onset

GENETIC EPIDEMIOLOGY, Issue 2 2003
L. Hsu
Abstract Many diseases or traits exhibit a varying age at onset. Recent data examples of prostate cancer and childhood diabetes show that, compared to simply treating the disease outcome as affected vs. unaffected, incorporation of age-at-onset information into the transmission/disequilibrium type of test (TDT) does not appear to change the results much. In this paper, we evaluate the power of the TDT as a function of age at onset, and show that age-at-onset information is most useful when the disease is common, or when the relative risk associated with the high-risk genotype varies with age. Moreover, an extremely old unaffected subject can contribute substantially to the power of the TDT, sometimes as much as an old-onset subject. We propose a modified test statistic for testing no association between the marker at the candidate locus and age at onset. A simulation study was conducted to evaluate the finite sample properties of the proposed and the TDT test statistics under various sampling schemes for trios of parents and offspring, as well as for sibling clusters where unaffected siblings were used as controls. Genet Epidemiol 24:118–127, 2003. © 2003 Wiley-Liss, Inc. [source]
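
For readers unfamiliar with the TDT itself, a minimal sketch of the classical (age-free) statistic follows; the transmission counts are hypothetical, and the paper's age-at-onset modification is not implemented here.

```python
from scipy.stats import chi2

def tdt(b, c):
    """Classical TDT: b = transmissions of the candidate allele from
    heterozygous parents to affected offspring, c = non-transmissions.
    Under no linkage/association, (b-c)^2/(b+c) is asymptotically chi2(1)."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt(b=60, c=40)   # hypothetical transmission counts
print(f"TDT = {stat:.2f}, p = {p:.4f}")
```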


NONPARAMETRIC BOOTSTRAP PROCEDURES FOR PREDICTIVE INFERENCE BASED ON RECURSIVE ESTIMATION SCHEMES

INTERNATIONAL ECONOMIC REVIEW, Issue 1 2007
Valentina Corradi
We introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting among multiple alternative forecasting models, all of which are possibly misspecified. In a Monte Carlo investigation, we compare the finite sample properties of our block bootstrap procedures with the parametric bootstrap due to Kilian (Journal of Applied Econometrics 14 (1999), 491–510), within the context of encompassing and predictive accuracy tests. In the empirical illustration, it is found that unemployment has nonlinear marginal predictive content for inflation. [source]
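
As a hedged illustration of the basic building block, here is a minimal moving-block bootstrap for a dependent series; the recursive-estimation refinements that make the procedure first-order valid for predictive tests are not reproduced.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """Resample a series by concatenating randomly chosen overlapping
    blocks, preserving the dependence structure within each block."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    pieces = [x[s:s + block_len] for s in starts]
    return np.concatenate(pieces)[:n]

rng = np.random.default_rng(1)
# AR(1) series: dependent data where i.i.d. resampling would be invalid.
e = rng.normal(size=600)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = 0.6 * x[t - 1] + e[t]

boot_means = [moving_block_bootstrap(x, 25, rng).mean() for _ in range(2000)]
print(f"block-bootstrap SE of the sample mean: {np.std(boot_means):.3f}")
```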


On variable bandwidth selection in local polynomial regression

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2000
Kjell Doksum
The performances of data-driven bandwidth selection procedures in local polynomial regression are investigated by using asymptotic methods and simulation. The bandwidth selection procedures considered are based on minimizing 'prelimit' approximations to the (conditional) mean-squared error (MSE) when the MSE is considered as a function of the bandwidth h. We first consider approximations to the MSE that are based on Taylor expansions around h=0 of the bias part of the MSE. These approximations lead to estimators of the MSE that are accurate only for small bandwidths h. We also consider a bias estimator which instead of using small h approximations to bias naïvely estimates bias as the difference of two local polynomial estimators of different order, and we show that this estimator performs well only for moderate to large h. We next define a hybrid bias estimator which equals the Taylor-expansion-based estimator for small h and the difference estimator for moderate to large h. We find that the MSE estimator based on this hybrid bias estimator leads to a bandwidth selection procedure with good asymptotic and, for our Monte Carlo examples, finite sample properties. [source]
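
A minimal sketch of the difference-of-orders bias estimator discussed above: the local linear fit minus a local quadratic fit at the same point estimates the local linear bias. The data, kernel, and bandwidths are illustrative choices, not the authors'.

```python
import numpy as np

def local_poly_fit(x, y, x0, h, degree):
    """Weighted LS fit of a degree-p polynomial centred at x0 with a
    Gaussian kernel of bandwidth h; returns the fitted value at x0."""
    u = x - x0
    w = np.exp(-0.5 * (u / h) ** 2)
    X = np.vander(u, degree + 1, increasing=True)   # columns 1, u, u^2, ...
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef[0]                                  # intercept = fit at x0

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 400))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 400)

x0 = 0.5
for h in (0.05, 0.1, 0.2):
    m1 = local_poly_fit(x, y, x0, h, degree=1)      # local linear
    m2 = local_poly_fit(x, y, x0, h, degree=2)      # local quadratic
    # Difference estimator of the local-linear bias at x0.
    print(f"h={h}: fit={m1:.3f}, estimated bias={m1 - m2:.3f}")
```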


Maximum likelihood estimation of higher-order integer-valued autoregressive processes

JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2008
Ruijun Bu
Abstract In this article, we extend the earlier work of Freeland and McCabe [Journal of Time Series Analysis (2004) Vol. 25, pp. 701–722] and develop a general framework for maximum likelihood (ML) analysis of higher-order integer-valued autoregressive processes. Our exposition includes the case where the innovation sequence has a Poisson distribution and the thinning is binomial. A recursive representation of the transition probability of the model is proposed. Based on this transition probability, we derive expressions for the score function and the Fisher information matrix, which form the basis for ML estimation and inference. Similar to the results in Freeland and McCabe (2004), we show that the score function and the Fisher information matrix can be neatly represented as conditional expectations. Using the INAR(2) specification with binomial thinning and Poisson innovations, we examine both the asymptotic efficiency and finite sample properties of the ML estimator in relation to the widely used conditional least squares (CLS) and Yule–Walker (YW) estimators. We conclude that, if the Poisson assumption can be justified, there are substantial gains to be had from using ML, especially when the thinning parameters are large. [source]
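
The paper treats higher-order INAR processes; as a simplified first-order sketch, the INAR(1) transition probability is a binomial-Poisson convolution, and the exact likelihood can be maximized numerically. All settings below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def inar1_negloglik(params, x):
    """Exact negative log-likelihood of a Poisson INAR(1) with binomial
    thinning: X_t = alpha o X_{t-1} + eps_t, eps_t ~ Poisson(lam). The
    transition probability is a binomial-Poisson convolution."""
    alpha, lam = params
    ll = 0.0
    for prev, cur in zip(x[:-1], x[1:]):
        m = np.arange(0, min(prev, cur) + 1)        # survivors of thinning
        p = np.sum(binom.pmf(m, prev, alpha) * poisson.pmf(cur - m, lam))
        ll += np.log(max(p, 1e-300))
    return -ll

rng = np.random.default_rng(3)
alpha, lam, n = 0.5, 2.0, 500
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))               # stationary start
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

res = minimize(inar1_negloglik, x0=[0.3, 1.0], args=(x,),
               bounds=[(0.01, 0.99), (0.01, 10.0)])
print(f"ML estimates: alpha={res.x[0]:.3f}, lambda={res.x[1]:.3f}")
```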


Seasonal Unit Root Tests Under Structural Breaks

JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2004
Uwe Hassler
JEL classification: C12; C22. Abstract In this paper, several seasonal unit root tests are analysed in the context of structural breaks at a known time, and a new break-corrected test is suggested. We show that the widely used HEGY test, as well as an LM variant thereof, are asymptotically robust to seasonal mean shifts of finite magnitude. In finite samples, however, experiments reveal that such tests suffer from severe size distortions and power reductions when breaks are present. Hence, a new break-corrected LM test is proposed to overcome this problem. Importantly, the correction for seasonal mean shifts bears no consequence on the limiting distributions, thereby maintaining the legitimacy of canonical critical values. Moreover, although this test assumes a breakpoint a priori, it is robust to misspecification of the time of the break. This asymptotic property is well reproduced in finite samples. Based on a Monte Carlo study, our new test is compared with other procedures suggested in the literature and shown to have superior finite sample properties. [source]
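
For concreteness, a bare-bones version of the (uncorrected) HEGY test regression for quarterly data is sketched below, without deterministic terms, augmentation lags, or the paper's break correction; the statistics have nonstandard null distributions, so tabulated critical values would be needed.

```python
import numpy as np

def hegy_quarterly(y):
    """Bare-bones HEGY regression for quarterly data (no deterministics,
    no augmentation): regress D4 y_t on z1_{t-1}, z2_{t-1}, z3_{t-2},
    z3_{t-1}. Returns t-statistics for pi1..pi4."""
    z1 = y[3:] + y[2:-1] + y[1:-2] + y[:-3]         # unit root at freq 0
    z2 = -(y[3:] - y[2:-1] + y[1:-2] - y[:-3])      # unit root at freq pi
    z3 = -(y[2:] - y[:-2])                          # roots at +/- pi/2
    d4 = y[4:] - y[:-4]

    n = len(d4)
    X = np.column_stack([z1[:n], z2[:n], z3[:n], z3[1:n + 1]])
    coef, *_ = np.linalg.lstsq(X, d4, rcond=None)
    resid = d4 - X @ coef
    s2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return coef / se

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=200))   # random walk: unit root at freq 0 only
print("t-stats (pi1, pi2, pi3, pi4):", np.round(hegy_quarterly(y), 2))
```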


Estimation of the Dominating Frequency for Stationary and Nonstationary Fractional Autoregressive Models

JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2000
Jan Beran
This paper was motivated by the investigation of certain physiological series for premature infants. The question was whether the series exhibit periodic fluctuations with a certain dominating period. The observed series are nonstationary and/or have long-range dependence. The assumed model is a Gaussian process X_t whose mth difference Y_t = (1 − B)^m X_t is stationary with a spectral density f that may have a pole (or a zero) at the origin. The problem addressed in this paper is the estimation of the frequency λmax where f achieves the largest local maximum in the open interval (0, π). The process X_t is assumed to belong to a class of parametric models, characterized by a parameter vector θ, defined in Beran (1995). An estimator of λmax is proposed and its asymptotic distribution is derived, with θ being estimated by maximum likelihood. In particular, m and a fractional differencing parameter that models long memory are estimated from the data. Model choice is also incorporated. Thus, within the proposed framework, a data-driven procedure is obtained that can be applied in situations where the primary interest is in estimating a dominating frequency. A simulation study illustrates the finite sample properties of the method. In particular, for short series, estimation of λmax is difficult if the local maximum occurs close to the origin. The results are illustrated by two of the data examples that motivated this research. [source]
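
A crude, assumption-laden sketch of the underlying idea: estimate the dominating frequency as the location of the largest periodogram peak in (0, π), skipping the lowest Fourier frequencies where a long-memory pole may sit. The paper's parametric ML procedure is considerably more refined.

```python
import numpy as np

def dominating_frequency(x, trim=5):
    """Locate the periodogram peak in (0, pi), skipping the first `trim`
    Fourier frequencies to avoid a possible pole at the origin."""
    n = len(x)
    per = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n
    freqs = 2 * np.pi * np.arange(len(per)) / n
    k = trim + np.argmax(per[trim:len(per) - 1])    # exclude origin, Nyquist
    return freqs[k]

rng = np.random.default_rng(5)
t = np.arange(2000)
x = np.cos(0.8 * t) + rng.normal(0, 1, 2000)        # dominating freq = 0.8
print(f"estimated dominating frequency: {dominating_frequency(x):.3f}")
```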


IS THERE A UNIT ROOT IN NITROGEN OXIDES EMISSIONS? A MONTE CARLO INVESTIGATION

NATURAL RESOURCE MODELING, Issue 1 2010
NINA S. JONES
Abstract Use of time-series econometric techniques to investigate issues about environmental regulation requires knowing whether air pollution emissions are trend stationary or difference stationary. It has been shown that results regarding trend stationarity of the pollution data are sensitive to the methods used. I conduct a Monte Carlo experiment to study the size and power of two unit root tests that allow for a structural change in the trend at a known time, using a data-generating process calibrated to the actual pollution series. I find that the finite sample properties of the Perron test are better than those of the Park and Sung Phillips–Perron (PP) type test. Severe size distortions in the Park and Sung PP type test can explain the rejection of a unit root in air pollution emissions reported in some environmental regulation analyses. [source]
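
A minimal template for this kind of size experiment, using a plain Dickey-Fuller test without the structural-break terms studied in the paper; the critical value is an approximate textbook figure, not exact.

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic from the regression
    dy_t = c + rho * y_{t-1} + e_t (no augmentation lags)."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se

rng = np.random.default_rng(6)
n_rep, n_obs, crit = 2000, 200, -2.88    # approx. 5% DF critical value
rejections = 0
for _ in range(n_rep):
    y = np.cumsum(rng.normal(size=n_obs))           # random walk: null true
    rejections += df_tstat(y) < crit
print(f"empirical size: {rejections / n_rep:.3f} (nominal 0.05)")
```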


Blockwise generalized empirical likelihood inference for non-linear dynamic moment conditions models

THE ECONOMETRICS JOURNAL, Issue 2 2009
Francesco Bravo
Summary This paper shows how the blockwise generalized empirical likelihood method can be used to obtain valid asymptotic inference in non-linear dynamic moment conditions models for possibly non-stationary weakly dependent stochastic processes. The results of this paper can be used to construct test statistics for overidentifying moment restrictions, for additional moments, and for parametric restrictions expressed in mixed implicit and constraint form. Monte Carlo simulations seem to suggest that some of the proposed test statistics have competitive finite sample properties. [source]
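
As a toy instance of the blockwise idea, the sketch below computes a blockwise empirical likelihood ratio for a mean restriction on weakly dependent data; the paper's generalized EL framework for non-linear moment conditions is far more general, and chi-square calibration here presumes the blocks capture the dependence.

```python
import numpy as np
from scipy.optimize import brentq

def blockwise_elr(x, mu0, block_len):
    """Blockwise empirical likelihood ratio for H0: E[x_t] = mu0.
    Moment functions are averaged within non-overlapping blocks; the
    Lagrange multiplier solves the standard EL first-order condition."""
    n_blocks = len(x) // block_len
    g = x[:n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1) - mu0
    if g.min() >= 0 or g.max() <= 0:
        return np.inf                                # mu0 outside convex hull
    score = lambda lam: np.mean(g / (1 + lam * g))
    eps = 1e-8
    lam = brentq(score, -1 / g.max() + eps, -1 / g.min() - eps)
    return 2 * np.sum(np.log(1 + lam * g))

rng = np.random.default_rng(7)
e = rng.normal(size=1000)
x = np.zeros(1000)
for t in range(1, 1000):                             # dependent AR(1) data
    x[t] = 0.5 * x[t - 1] + e[t]

print(f"ELR at the true mean 0: {blockwise_elr(x, 0.0, block_len=20):.2f}")
```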


Least squares estimation and tests of breaks in mean and variance under misspecification

THE ECONOMETRICS JOURNAL, Issue 1 2004
Jean-Yves Pitarakis
Summary In this paper we investigate the consequences of misspecification on the large sample properties of change-point estimators and the validity of tests of the null hypothesis of linearity versus the alternative of a structural break. Specifically this paper concentrates on the interaction of structural breaks in the mean and variance of a time series when either of the two is omitted from the estimation and inference procedures. Our analysis considers the case of a break in mean under omitted-regime-dependent heteroscedasticity and that of a break in variance under an omitted mean shift. The large and finite sample properties of the resulting least-squares-based estimators are investigated, and the impact of the two types of misspecification on inferences about the presence or absence of a structural break is subsequently analysed. [source]
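
The least-squares change-point estimator at the heart of this analysis is easy to sketch for a break in mean under correct specification and homoscedastic errors, both assumptions the paper deliberately relaxes:

```python
import numpy as np

def estimate_break_in_mean(y, trim=0.1):
    """Least-squares break-date estimator: choose the split minimizing the
    total SSR of separate means before and after the candidate break."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    best_k, best_ssr = None, np.inf
    for k in range(lo, hi):
        ssr = np.sum((y[:k] - y[:k].mean()) ** 2) \
            + np.sum((y[k:] - y[k:].mean()) ** 2)
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

rng = np.random.default_rng(8)
y = np.concatenate([rng.normal(0, 1, 150), rng.normal(1.0, 1, 150)])
print(f"estimated break date: {estimate_break_in_mean(y)} (true: 150)")
```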


Alternative tilts for nonparametric option pricing

THE JOURNAL OF FUTURES MARKETS, Issue 10 2010
M. Ryan Haley
This study generalizes the nonparametric approach to option pricing of Stutzer, M. (1996) by demonstrating that the canonical valuation methodology introduced therein is one member of the Cressie–Read family of divergence measures. Although the limiting distribution of the alternative measures is identical to the canonical measure, the finite sample properties are quite different. We assess the ability of the alternative divergence measures to price European call options by approximating the risk-neutral, equivalent martingale measure from an empirical distribution of the underlying asset. A simulation study of the finite sample properties of the alternative measure changes reveals that the optimal divergence measure depends upon how accurately the empirical distribution of the underlying asset is estimated. In a simple Black–Scholes model, the optimal measure change is contingent upon the number of outliers observed, whereas the optimal measure change is a function of time to expiration in the stochastic volatility model of Heston, S. L. (1993). Our extension of Stutzer's technique preserves the clean analytic structure of imposing moment restrictions to price options, yet demonstrates that the nonparametric approach is even more general in pricing options than originally believed. © 2009 Wiley Periodicals, Inc. Jrl Fut Mark 30:983–1006, 2010 [source]
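
A hedged sketch of canonical valuation, the Kullback-Leibler member of the Cressie–Read family: exponentially tilt simulated terminal prices until the discounted underlying is a martingale, then price the call under the tilted weights. All parameters are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def canonical_call_price(s_T, s0, r, T, strike):
    """Canonical valuation: choose exponential-tilt weights on simulated
    terminal prices so that E*[S_T] = s0 * exp(r*T), then discount the
    expected call payoff under the tilted (risk-neutral) weights."""
    R = s_T / s0                                    # gross returns

    def tilt(gamma):
        a = gamma * R
        w = np.exp(a - a.max())                     # stabilized exponentials
        return w / w.sum()

    gamma = brentq(lambda g: tilt(g) @ R - np.exp(r * T), -200.0, 200.0)
    w = tilt(gamma)
    return np.exp(-r * T) * (w @ np.maximum(s_T - strike, 0.0))

rng = np.random.default_rng(9)
s0, r, T, mu, sigma = 100.0, 0.03, 0.5, 0.08, 0.2
# Terminal prices simulated under the physical (not risk-neutral) measure.
s_T = s0 * np.exp((mu - 0.5 * sigma**2) * T
                  + sigma * np.sqrt(T) * rng.normal(size=20000))
print(f"canonical call price: {canonical_call_price(s_T, s0, r, T, 100.0):.3f}")
```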


The finite sample properties of the GARCH option pricing model

THE JOURNAL OF FUTURES MARKETS, Issue 6 2007
George Dotsis
The authors explore the finite sample properties of the generalized autoregressive conditional heteroscedasticity (GARCH) option pricing model proposed by S. L. Heston and S. Nandi (2000). Simulation results show that the maximum likelihood estimators of the GARCH process may contain substantial estimation biases, even when samples as large as 3,000 observations are used. However, these biases were found to cause significant mispricings only for short-term, out-of-the-money options. It is shown that, given an adequate estimation sample, this bias can be reduced considerably by employing the jackknife resampling method. © 2007 Wiley Periodicals, Inc. Jrl Fut Mark 27:599–615, 2007 [source]
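
The delete-one jackknife bias correction the authors employ is generic; the sketch below demonstrates the mechanism on the (deliberately biased) ML variance estimator for i.i.d. data rather than on GARCH estimates.

```python
import numpy as np

def jackknife_bias_corrected(x, estimator):
    """Delete-one jackknife: theta_jack = n*theta_hat - (n-1)*mean(theta_-i),
    which removes the O(1/n) term of the estimator's bias."""
    n = len(x)
    theta_hat = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return n * theta_hat - (n - 1) * loo.mean()

rng = np.random.default_rng(10)
mle_var = lambda x: np.mean((x - x.mean()) ** 2)   # biased by factor (n-1)/n
x = rng.normal(0, 2, 30)                           # true variance = 4
print(f"MLE: {mle_var(x):.3f}, "
      f"jackknifed: {jackknife_bias_corrected(x, mle_var):.3f}")
```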


Estimation of the ROC Curve under Verification Bias

BIOMETRICAL JOURNAL, Issue 3 2009
Ronen Fluss
Abstract The ROC (receiver operating characteristic) curve is the most commonly used statistical tool for describing the discriminatory accuracy of a diagnostic test. Classical estimation of the ROC curve relies on data from a simple random sample from the target population. In practice, estimation is often complicated due to not all subjects undergoing a definitive assessment of disease status (verification). Estimation of the ROC curve based on data only from subjects with verified disease status may be badly biased. In this work we investigate the properties of the doubly robust (DR) method for estimating the ROC curve under verification bias originally developed by Rotnitzky, Faraggi and Schisterman (2006) for estimating the area under the ROC curve. The DR method can be applied for continuous-scale tests and allows for a non-ignorable process of selection to verification. We develop the estimator's asymptotic distribution and examine its finite sample properties via a simulation study. We exemplify the DR procedure for estimation of ROC curves with data collected on patients undergoing electron beam computed tomography, a diagnostic test for calcification of the arteries. [source]
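
The doubly robust estimator combines a verification model with a disease model; as a simpler, hedged sketch of the correction, here is an inverse-probability-weighted ROC estimate assuming the verification probabilities are known and verification is ignorable given the test:

```python
import numpy as np

def ipw_roc(test, disease, verified, p_verify, grid):
    """IPW ROC under ignorable verification: each verified subject is
    weighted by 1/P(verified | test); unverified subjects get weight zero,
    so their (unknown) disease statuses are never used."""
    w = verified / p_verify
    tpr = np.array([np.sum(w * disease * (test > c)) / np.sum(w * disease)
                    for c in grid])
    fpr = np.array([np.sum(w * (1 - disease) * (test > c))
                    / np.sum(w * (1 - disease)) for c in grid])
    return fpr, tpr

rng = np.random.default_rng(11)
n = 5000
disease = rng.binomial(1, 0.3, n)
test = rng.normal(disease.astype(float), 1.0)       # diseased score higher
p_verify = 1 / (1 + np.exp(-(test - 0.5)))          # verification depends on test
verified = rng.binomial(1, p_verify)

fpr, tpr = ipw_roc(test, disease, verified, p_verify, np.linspace(-3, 4, 60))
f, t = fpr[::-1], tpr[::-1]                         # sort by ascending FPR
print(f"IPW AUC: {np.sum(np.diff(f) * (t[:-1] + t[1:]) / 2):.3f}")
```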


Design and Inference for Cancer Biomarker Study with an Outcome and Auxiliary-Dependent Subsampling

BIOMETRICS, Issue 2 2010
Xiaofei Wang
Summary In cancer research, it is important to evaluate the performance of a biomarker (e.g., molecular, genetic, or imaging) that correlates with patients' prognosis or predicts patients' response to treatment in a large prospective study. Due to overall budget constraints and the high cost associated with bioassays, investigators often have to select a subset from all registered patients for biomarker assessment. To detect a potentially moderate association between the biomarker and the outcome, investigators need to decide how to select the subset of a fixed size such that the study efficiency can be enhanced. We show that, instead of drawing a simple random sample from the study cohort, greater efficiency can be achieved by allowing the selection probability to depend on the outcome and an auxiliary variable; we refer to such a sampling scheme as outcome- and auxiliary-dependent subsampling (OADS). This article is motivated by the need to analyze data from a lung cancer biomarker study that adopts the OADS design to assess epidermal growth factor receptor (EGFR) mutations as a predictive biomarker for the extent to which a subject responds to EGFR inhibitor drugs. We propose an estimated maximum-likelihood method that accommodates the OADS design and utilizes all observed information, especially that contained in the likelihood score of EGFR mutations (an auxiliary variable of EGFR mutations), which is available for all patients. We derive the asymptotic properties of the proposed estimator and evaluate its finite sample properties via simulation. We illustrate the proposed method with a data example. [source]
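
The paper's estimated-ML approach is more efficient than simple weighting, but a short inverse-probability-weighted logistic regression conveys why outcome-dependent selection still yields consistent estimates. The design probabilities and data below are hypothetical.

```python
import numpy as np

def weighted_logistic(X, y, w, n_iter=25):
    """Newton-Raphson for logistic regression with inverse-probability-of-
    selection (Horvitz-Thompson) weights, consistent when the
    outcome-dependent sampling probabilities are known."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(X @ beta)))
        grad = X.T @ (w * (y - p))
        hess = (X * (w * p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(12)
n = 20000
marker = rng.normal(size=n)                         # hypothetical biomarker
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * marker))))

# Outcome-dependent subsampling: oversample responders for the costly assay.
p_sel = np.where(y == 1, 0.9, 0.1)
sel = rng.binomial(1, p_sel).astype(bool)

X = np.column_stack([np.ones(sel.sum()), marker[sel]])
b = weighted_logistic(X, y[sel], 1.0 / p_sel[sel])
print(f"IPW estimates: intercept {b[0]:.2f}, slope {b[1]:.2f} (true -1.0, 0.8)")
```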


Marginal Hazards Regression for Retrospective Studies within Cohort with Possibly Correlated Failure Time Data

BIOMETRICS, Issue 2 2009
Sangwook Kang
Summary A retrospective dental study was conducted to evaluate the degree to which pulpal involvement affects tooth survival. Due to the clustering of teeth, the survival times within each subject could be correlated and thus the conventional method for case–control studies cannot be directly applied. In this article, we propose a marginal model approach for this type of correlated case–control within cohort data. Weighted estimating equations are proposed for the estimation of the regression parameters. Different types of weights are also considered for improving the efficiency. Asymptotic properties of the proposed estimators are investigated and their finite sample properties are assessed via simulation studies. The proposed method is applied to the aforementioned dental study. [source]


Multiple Imputation Methods for Treatment Noncompliance and Nonresponse in Randomized Clinical Trials

BIOMETRICS, Issue 1 2009
L. Taylor
Summary Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are oftentimes problems of noncompliance which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE) which estimates the average causal treatment effect among a subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems, but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine. [source]
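
Before turning to multiple imputation, a useful baseline is the standard moment-based (Wald/IV) CACE estimator, valid under randomization, exclusion, and monotonicity; the sketch assumes full outcome response and a homogeneous treatment effect.

```python
import numpy as np

def cace_wald(z, d, y):
    """Wald/IV estimator of the complier average causal effect: the ITT
    effect on the outcome divided by the ITT effect on treatment receipt."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d

rng = np.random.default_rng(13)
n = 10000
z = rng.binomial(1, 0.5, n)                  # randomized assignment
complier = rng.binomial(1, 0.6, n)           # 60% would comply either way
d = np.where(complier == 1, z,
             rng.binomial(1, 0.3, n))        # crossover noncompliance
y = 2.0 * d + rng.normal(size=n)             # homogeneous effect of 2.0
print(f"CACE estimate: {cace_wald(z, d, y):.2f} (true 2.0)")
```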