Monte Carlo Experiments



Selected Abstracts


Consistent Tests for Stochastic Dominance

ECONOMETRICA, Issue 1 2003
Garry F. Barrett
Methods are proposed for testing stochastic dominance of any pre-specified order, with primary interest in the distributions of income. We consider consistent tests, similar to Kolmogorov–Smirnov tests, of the complete set of restrictions that relate to the various forms of stochastic dominance. For tests of stochastic dominance beyond first order, we propose and justify a variety of approaches to inference based on simulation and the bootstrap. We compare these approaches to one another, and to alternative approaches based on multiple comparisons, in the context of a Monte Carlo experiment and an empirical example. [source]
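
The bootstrap inference described here lends itself to a compact illustration. Below is a minimal numpy sketch of a one-sided Kolmogorov–Smirnov-type test of first-order dominance that resamples from the pooled sample to impose the least-favourable null; the function names and the data-generating process are hypothetical, and this is an illustration in the spirit of the approach, not the authors' exact procedure.

```python
import numpy as np

def ks_fsd_stat(x, y):
    """One-sided KS statistic for H0: x first-order dominates y,
    i.e. F_x(z) <= F_y(z) for all z."""
    grid = np.sort(np.concatenate([x, y]))
    fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    n, m = len(x), len(y)
    return np.sqrt(n * m / (n + m)) * np.max(fx - fy)

def bootstrap_fsd_test(x, y, n_boot=999, seed=0):
    """P-value by resampling from the pooled sample, which imposes the
    least-favourable case F_x = F_y on the null."""
    rng = np.random.default_rng(seed)
    stat = ks_fsd_stat(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        count += ks_fsd_stat(xb, yb) >= stat
    return stat, (1 + count) / (1 + n_boot)

# Example: incomes from two hypothetical years; small p-values reject dominance.
rng = np.random.default_rng(1)
x = rng.lognormal(0.1, 1.0, 500)
y = rng.lognormal(0.0, 1.0, 500)
print(bootstrap_fsd_test(x, y))
```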


Asymmetric power distribution: Theory and applications to risk measurement

JOURNAL OF APPLIED ECONOMETRICS, Issue 5 2007
Ivana Komunjer
Theoretical literature in finance has shown that the risk of financial time series can be well quantified by their expected shortfall, also known as the tail value-at-risk. In this paper, I construct a parametric estimator for the expected shortfall based on a flexible family of densities, called the asymmetric power distribution (APD). The APD family extends the generalized power distribution to cases where the data exhibits asymmetry. The first contribution of the paper is to provide a detailed description of the properties of an APD random variable, such as its quantiles and expected shortfall. The second contribution of the paper is to derive the asymptotic distribution of the APD maximum likelihood estimator (MLE) and construct a consistent estimator for its asymptotic covariance matrix. The latter is based on the APD score whose analytic expression is also provided. A small Monte Carlo experiment examines the small sample properties of the MLE and the empirical coverage of its confidence intervals. An empirical application to four daily financial market series reveals that returns tend to be asymmetric, with innovations which cannot be modeled by either Laplace (double-exponential) or Gaussian distribution, even if we allow the latter to be asymmetric. In an out-of-sample exercise, I compare the performances of the expected shortfall forecasts based on the APD-GARCH, Skew-t-GARCH and GPD-EGARCH models. While the GPD-EGARCH 1% expected shortfall forecasts seem to outperform the competitors, all three models perform equally well at forecasting the 5% and 10% expected shortfall. Copyright © 2007 John Wiley & Sons, Ltd. [source]
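
scipy has no APD family, so the sketch below shows only the generic parametric expected-shortfall recipe the abstract describes (fit a flexible density by MLE, then average the lower-tail quantiles), with a skew-normal standing in for the APD; all names, parameters and data are illustrative.

```python
import numpy as np
from scipy import stats

def parametric_es(returns, alpha=0.05, dist=stats.skewnorm):
    """Fit a parametric density by MLE and compute the expected shortfall
    ES_alpha = (1/alpha) * integral_0^alpha q(u) du, i.e. the average of
    the worst alpha-quantiles.  Skew-normal is a stand-in for the APD."""
    params = dist.fit(returns)
    u = np.linspace(1e-6, alpha, 2000)   # uniform grid of tail probabilities
    q = dist.ppf(u, *params)             # parametric quantile function
    return q.mean()                      # grid average approximates the integral

rng = np.random.default_rng(0)
rets = stats.skewnorm.rvs(a=-3, scale=0.01, size=2000, random_state=rng)
print("5% expected shortfall:", parametric_es(rets, 0.05))
```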


On the properties of the periodogram of a stationary long-memory process over different epochs with applications

JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2010
Valdério A. Reisen
MSC classification: Primary 60G10, 60K35; Secondary 60G18.
This article studies the asymptotic properties of the discrete Fourier transform (DFT) and the periodogram of a stationary long-memory time series over different epochs. The main theoretical result is a novel bound for the covariance of the DFT ordinates evaluated on two distinct epochs, which depends explicitly on the Fourier frequencies and the gap between the epochs. This result is then applied to obtain the limiting distribution of some nonlinear functions of the periodogram over different epochs, under the additional assumption of Gaussianity. We then apply this result to construct an estimator of the memory parameter based on regressing, in a neighbourhood of the zero frequency, the logarithm of the averaged periodogram, obtained by computing the empirical mean of the periodogram over adjacent epochs. It is shown that replacing the periodogram by its average has an effect similar to frequency-domain pooling in reducing the variance of the estimate. We also propose a simple procedure for testing the stationarity of the memory coefficient. A limited Monte Carlo experiment is presented to support our findings. [source]
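
The log-regression on an averaged periodogram can be sketched in a few lines. The following is an illustrative GPH-type version, averaging raw periodograms over adjacent epochs, under an assumed ARFIMA(0,d,0) data-generating process; it is not the article's exact estimator, and the bandwidth choice is arbitrary.

```python
import numpy as np

def periodogram(x):
    n = len(x)
    fft = np.fft.rfft(x - x.mean())
    I = (np.abs(fft) ** 2) / (2 * np.pi * n)
    freqs = 2 * np.pi * np.arange(len(I)) / n
    return freqs[1:], I[1:]                  # drop the zero frequency

def memory_from_epochs(x, n_epochs=8, bandwidth=0.5):
    """Log-periodogram regression on the periodogram averaged over adjacent
    epochs: near the origin log I(freq) ~ const - 2d log(freq), so d is
    minus half the slope.  Illustrative, not the paper's exact estimator."""
    epochs = np.array_split(np.asarray(x), n_epochs)
    n = min(len(e) for e in epochs)
    I_bar = np.mean([periodogram(e[:n])[1] for e in epochs], axis=0)
    freqs = periodogram(epochs[0][:n])[0]
    m = int(n ** bandwidth)                  # number of low frequencies used
    X = np.column_stack([np.ones(m), np.log(freqs[:m])])
    beta, *_ = np.linalg.lstsq(X, np.log(I_bar[:m]), rcond=None)
    return -beta[1] / 2

# Fractional noise via a truncated ARFIMA(0,d,0) MA(inf), d = 0.3 (hypothetical)
rng = np.random.default_rng(0)
d = 0.3
k = np.arange(1, 2000)
psi = np.concatenate([[1.0], np.cumprod((k - 1 + d) / k)])
x = np.convolve(rng.standard_normal(6000), psi)[:6000]
print("estimated d:", memory_from_epochs(x))
```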


Testing Stochastic Cycles in Macroeconomic Time Series

JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2001
L. A. Gil-Alana
A particular version of the tests of Robinson (1994) for testing stochastic cycles in macroeconomic time series is proposed in this article. The tests have a standard limit distribution and are easy to implement in raw time series. A Monte Carlo experiment is conducted, studying the size and the power of the tests against different alternatives, and the results are compared with those based on other tests. An empirical application using historical US annual data is also carried out at the end of the article. [source]


IS THERE A UNIT ROOT IN NITROGEN OXIDES EMISSIONS? A MONTE CARLO INVESTIGATION

NATURAL RESOURCE MODELING, Issue 1 2010
NINA S. JONES
Use of time-series econometric techniques to investigate issues of environmental regulation requires knowing whether air pollution emissions are trend stationary or difference stationary. It has been shown that results regarding trend stationarity of pollution data are sensitive to the methods used. I conduct a Monte Carlo experiment to study the size and power of two unit root tests that allow for a structural change in the trend at a known time, using a data-generating process calibrated to the actual pollution series. I find that the finite-sample properties of the Perron test are better than those of the Park and Sung Phillips-Perron (PP) type test. Severe size distortions in the Park and Sung PP-type test can explain the rejection of a unit root in air pollution emissions reported in some environmental regulation analyses. [source]
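
A generic skeleton of such a size/power exercise is sketched below. Since the Perron and Park-Sung tests are not available in common Python libraries, the standard ADF test from statsmodels stands in; the sample size, break date and break magnitude are all hypothetical.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def mc_size_power(n=120, n_rep=500, break_at=60, seed=0):
    """Skeleton of a unit-root size/power study: simulate (i) a series that
    is stationary around a trend with a slope break and (ii) a drifting
    random walk, then record rejection rates of the ADF test with trend."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    slope_shift = np.where(t >= break_at, 0.05 * (t - break_at), 0.0)
    rej = {"trend-stationary with break (power)": 0, "unit root (size)": 0}
    for _ in range(n_rep):
        e = rng.standard_normal(n)
        y_ts = 0.02 * t + slope_shift + e          # stationary around broken trend
        y_ur = np.cumsum(0.02 + e)                 # difference-stationary DGP
        rej["trend-stationary with break (power)"] += \
            adfuller(y_ts, regression="ct")[1] < 0.05
        rej["unit root (size)"] += adfuller(y_ur, regression="ct")[1] < 0.05
    return {k: v / n_rep for k, v in rej.items()}

print(mc_size_power())
```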


Regression Models with Variables of Different Frequencies: The Case of a Fixed Frequency Ratio,

OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 5 2010
Virmantas Kvedaras
The increasing variety of data frequencies available in economics, finance and related fields raises the question of how to build and estimate a regression model with variables observed at different frequencies. In a unifying framework of (m,d)-aggregation we consider various approaches, discussing their potential and limitations. A Monte Carlo experiment and an empirical example illustrate that the traditional fixed-aggregation approach, widely used in applied economics, can be inconsistent with the data and highly inferior in terms of model precision. [source]
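
A toy version of the comparison can be sketched as follows: quarterly y depends on monthly x through unequal within-quarter weights, so a fixed equal-weight aggregate is misspecified while an unrestricted per-month regression is not. The DGP and weights are hypothetical, and this is far simpler than the paper's (m,d)-aggregation framework.

```python
import numpy as np

def compare_aggregation(n_q=200, m=3, seed=0):
    """Compare OLS on the traditional fixed (equal-weight) aggregate with
    OLS that estimates one coefficient per intra-period observation."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_q, m))            # m monthly obs per quarter
    true_w = np.array([0.6, 0.3, 0.1])           # decaying within-quarter weights
    y = x @ true_w + 0.5 * rng.standard_normal(n_q)

    x_fixed = x.mean(axis=1, keepdims=True)      # flat aggregation
    b_fixed, *_ = np.linalg.lstsq(x_fixed, y, rcond=None)
    b_flex, *_ = np.linalg.lstsq(x, y, rcond=None)

    rss_fixed = np.sum((y - x_fixed @ b_fixed) ** 2)
    rss_flex = np.sum((y - x @ b_flex) ** 2)
    return rss_fixed, rss_flex, b_flex

rss_f, rss_u, w_hat = compare_aggregation()
print(f"RSS fixed = {rss_f:.1f}, RSS flexible = {rss_u:.1f}")
print("estimated weights:", w_hat.round(2))
```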


Improved log-linear model estimators of abundance in capture-recapture experiments

THE CANADIAN JOURNAL OF STATISTICS, Issue 4 2001
Louis-Paul Rivest
The authors review log-linear models for estimating the size of a closed population and propose a new log-linear estimator for experiments featuring between-animal heterogeneity and a behavioral response. They give a general formula for evaluating the asymptotic biases of estimators of abundance derived from log-linear models. They propose simple frequency modifications for reducing these asymptotic biases and investigate the modifications in a Monte Carlo experiment, which reveals that they reduce both the bias and the mean squared error of abundance estimators. [source]


Semiparametric competing risks analysis

THE ECONOMETRICS JOURNAL, Issue 2 2007
José Canals-Cerdá
In this paper we analyse a semi-parametric estimation technique for competing risks models based on a series expansion of the joint density of the unobserved heterogeneity components. This technique allows for unrestricted correlation among the risks. The finite-sample behaviour of the estimation technique is analysed in a Monte Carlo experiment using an empirically relevant data-generating process. The estimator performs well when compared with the Heckman–Singer estimator. [source]


Counts with an endogenous binary regressor: A series expansion approach

THE ECONOMETRICS JOURNAL, Issue 1 2005
Andrés Romeu
We propose an estimator for count data regression models where a binary regressor is endogenously determined. This estimator departs from previous approaches by using a flexible form for the conditional probability function of the counts. Using a Monte Carlo experiment we show that our estimator improves the fit and provides a more reliable estimate of the impact of regressors on the count when compared to alternatives that restrict the mean to be linear-exponential. In an application to the number of trips by households in the United States, we find that the estimate of the treatment effect is considerably different from the one obtained under a linear-exponential mean specification. [source]


Testing for stationarity in heterogeneous panel data

THE ECONOMETRICS JOURNAL, Issue 2 2000
Kaddour Hadri
This paper proposes a residual-based Lagrange multiplier (LM) test for the null that the individual observed series are stationary around a deterministic level, or around a deterministic trend, against the alternative of a unit root in panel data. The tests, which are asymptotically similar under the null, belong to the locally best invariant (LBI) class of test statistics. The asymptotic distributions of the statistics are derived under the null and shown to be normal. Finite-sample sizes and powers are considered in a Monte Carlo experiment. The empirical sizes of the tests are close to the true size even in small samples. The testing procedure is easy to apply, including to panel data models with fixed effects, individual deterministic trends and heterogeneous errors across cross-sections. It is also shown how to apply the tests to the more general case of serially correlated disturbance terms. [source]
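
For the level-stationarity case with i.i.d. errors the statistic is simple enough to sketch. A minimal version, assuming the standard moments 1/6 and 1/45 for the demeaned case, is below; the trend case and serially correlated errors require different moments and a long-run variance estimator, which this sketch omits.

```python
import numpy as np
from scipy import stats

def hadri_test(y):
    """Hadri-type LM statistic for the null of level stationarity in a panel
    y of shape (N, T), assuming i.i.d. errors.  Under the null,
    sqrt(N)*(LM_bar - 1/6)/sqrt(1/45) -> N(0,1); large values reject."""
    y = np.asarray(y, dtype=float)
    N, T = y.shape
    e = y - y.mean(axis=1, keepdims=True)    # residuals from level regression
    S = np.cumsum(e, axis=1)                 # partial-sum processes
    sigma2 = np.mean(e ** 2, axis=1)         # per-unit error variance
    lm = np.mean(np.sum(S ** 2, axis=1) / (T ** 2 * sigma2))
    z = np.sqrt(N) * (lm - 1 / 6) / np.sqrt(1 / 45)
    return z, stats.norm.sf(z)               # one-sided p-value

rng = np.random.default_rng(0)
stationary_panel = rng.standard_normal((20, 100))
unit_root_panel = np.cumsum(rng.standard_normal((20, 100)), axis=1)
print("stationary panel:", hadri_test(stationary_panel))
print("unit-root panel:", hadri_test(unit_root_panel))
```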


Applications and Extensions of Chao's Moment Estimator for the Size of a Closed Population

BIOMETRICS, Issue 4 2007
Louis-Paul Rivest
This article revisits Chao's (1989, Biometrics 45, 427–438) lower bound estimator for the size of a closed population in a mark–recapture experiment where the capture probabilities vary between animals (model Mh). First, an extension of the lower bound to models featuring a time effect and heterogeneity in capture probabilities (Mth) is proposed. The biases of these lower bounds are shown to be a function of the heterogeneity parameter for several loglinear models for Mth. Small-sample bias reduction techniques for Chao's lower bound estimator are also derived. The application of the loglinear model underlying Chao's estimator when heterogeneity has been detected in the primary periods of a robust design is then investigated. A test of the null hypothesis that Chao's loglinear model provides unbiased abundance estimators is provided. The strategy of systematically using Chao's loglinear model in the primary periods of a robust design where heterogeneity has been detected is investigated in a Monte Carlo experiment, which evaluates its impact on the estimation of population sizes and survival rates. [source]
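
Chao's lower bound itself is a one-line computation from the capture-frequency counts. The sketch below uses the common bias-corrected form and a small Monte Carlo with beta-distributed capture probabilities (a hypothetical Mh design) to show the lower-bound behaviour under heterogeneity.

```python
import numpy as np

def chao_lower_bound(capture_histories):
    """Chao's moment lower bound for the size of a closed population under
    model Mh: N_hat = S + f1*(f1 - 1) / (2*(f2 + 1)), where f_k is the
    number of animals captured exactly k times (bias-corrected form)."""
    counts = capture_histories.sum(axis=1)   # captures per observed animal
    S = np.sum(counts > 0)
    f1 = np.sum(counts == 1)
    f2 = np.sum(counts == 2)
    return S + f1 * (f1 - 1) / (2 * (f2 + 1))

# Small Monte Carlo with heterogeneous capture probabilities (hypothetical Mh)
rng = np.random.default_rng(0)
N, t, reps = 400, 6, 200
estimates = []
for _ in range(reps):
    p = rng.beta(1, 4, size=N)                   # animal-specific capture prob.
    hist = rng.random((N, t)) < p[:, None]       # t capture occasions
    estimates.append(chao_lower_bound(hist[hist.any(axis=1)]))
print(f"true N = {N}, mean Chao estimate = {np.mean(estimates):.1f}")
```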


Choosing the Number of Instruments

ECONOMETRICA, Issue 5 2001
Stephen G. Donald
Properties of instrumental variable estimators are sensitive to the choice of valid instruments, even in large cross-section applications. In this paper we address this problem by deriving simple mean-square error criteria that can be minimized to choose the instrument set. We develop these criteria for two-stage least squares (2SLS), limited information maximum likelihood (LIML), and a bias adjusted version of 2SLS (B2SLS). We give a theoretical derivation of the mean-square error and show optimality. In Monte Carlo experiments we find that the instrument choice generally yields an improvement in performance. Also, in the Angrist and Krueger (1991) returns to education application, when the instrument set is chosen in the way we consider, it turns out that both 2SLS and LIML give similar (large) returns to education. [source]
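
The basic trade-off, in which adding instruments reduces variance but raises bias under endogeneity, can be illustrated with a small Monte Carlo. The DGP below is hypothetical, and the exercise simply tracks empirical MSE over instrument counts rather than the paper's analytical criteria.

```python
import numpy as np

def tsls(y, x, Z):
    """2SLS with a scalar endogenous regressor: beta = (xhat'y)/(xhat'x)."""
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage fitted values
    return (xhat @ y) / (xhat @ x)

def mc_instrument_choice(n=200, reps=500, max_k=20, beta=1.0, seed=0):
    """Empirical MSE of 2SLS as the number of instruments grows, with
    first-stage strength spread thinly over many instruments."""
    rng = np.random.default_rng(seed)
    mse = np.zeros(max_k)
    for _ in range(reps):
        Z = rng.standard_normal((n, max_k))
        pi = 0.15 * np.ones(max_k) / np.sqrt(max_k)   # diffuse first stage
        u = rng.standard_normal(n)
        v = 0.8 * u + 0.6 * rng.standard_normal(n)    # endogeneity in x
        x = Z @ pi + v
        y = beta * x + u
        for k in range(1, max_k + 1):
            mse[k - 1] += (tsls(y, x, Z[:, :k]) - beta) ** 2
    return mse / reps

mse = mc_instrument_choice()
print("instrument count minimizing empirical MSE:", np.argmin(mse) + 1)
```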


An Adaptive, Rate-Optimal Test of a Parametric Mean-Regression Model Against a Nonparametric Alternative

ECONOMETRICA, Issue 3 2001
Joel L. Horowitz
We develop a new test of a parametric model of a conditional mean function against a nonparametric alternative. The test adapts to the unknown smoothness of the alternative model and is uniformly consistent against alternatives whose distance from the parametric model converges to zero at the fastest possible rate. This rate is slower than n^(-1/2). Some existing tests have nontrivial power against restricted classes of alternatives whose distance from the parametric model decreases at the rate n^(-1/2). There are, however, sequences of alternatives against which these tests are inconsistent and ours is consistent. As a consequence, there are alternative models for which the finite-sample power of our test greatly exceeds that of existing tests. This conclusion is illustrated by the results of some Monte Carlo experiments. [source]


IMPROVING FORECAST ACCURACY BY COMBINING RECURSIVE AND ROLLING FORECASTS,

INTERNATIONAL ECONOMIC REVIEW, Issue 2 2009
Todd E. Clark
This article presents analytical, Monte Carlo, and empirical evidence on combining recursive and rolling forecasts when linear predictive models are subject to structural change. Using a characterization of the bias–variance trade-off faced when choosing between the recursive and rolling schemes, or a scalar convex combination of the two, we derive optimal observation windows and combining weights designed to minimize mean square forecast error. Monte Carlo experiments and several empirical examples indicate that combination can often provide improvements in forecast accuracy relative to forecasts made using the recursive scheme or the rolling scheme with a fixed window width. [source]
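
A toy comparison of the two schemes and a fixed convex combination is sketched below for an AR(1) whose intercept shifts mid-sample; the combination weight is held at 0.5 rather than derived from the paper's optimal formula, and the DGP is hypothetical.

```python
import numpy as np

def combined_forecasts(y, window=40, alpha=0.5):
    """One-step-ahead AR(1) forecasts from a recursive (expanding) scheme,
    a rolling scheme, and the combination alpha*recursive + (1-alpha)*rolling."""
    f_rec, f_rol, actual = [], [], []
    for t in range(window, len(y) - 1):
        rec, rol = y[: t + 1], y[t - window + 1 : t + 1]
        b_rec = np.polyfit(rec[:-1], rec[1:], 1)   # AR(1) by OLS, full sample
        b_rol = np.polyfit(rol[:-1], rol[1:], 1)   # AR(1) by OLS, rolling window
        f_rec.append(np.polyval(b_rec, y[t]))
        f_rol.append(np.polyval(b_rol, y[t]))
        actual.append(y[t + 1])
    f_rec, f_rol, actual = map(np.array, (f_rec, f_rol, actual))
    f_comb = alpha * f_rec + (1 - alpha) * f_rol
    return {name: np.mean((f - actual) ** 2)
            for name, f in [("recursive", f_rec), ("rolling", f_rol),
                            ("combined", f_comb)]}

# AR(1) whose intercept shifts mid-sample (structural change)
rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):
    mu = 0.0 if t < 150 else 1.0
    y[t] = mu + 0.5 * y[t - 1] + rng.standard_normal()
print(combined_forecasts(y))
```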


The estimation of utility-consistent labor supply models by means of simulated scores

JOURNAL OF APPLIED ECONOMETRICS, Issue 4 2008
Hans G. Bloemen
We consider a utility-consistent static labor supply model with flexible preferences and a nonlinear and possibly non-convex budget set. Stochastic error terms are introduced to represent optimization and reporting errors, stochastic preferences, and heterogeneity in wages. Coherency conditions on the parameters and the support of error distributions are imposed for all observations. The complexity of the model makes it impossible to write down the probability of participation. Hence we use simulation techniques in the estimation. We compare our approach with various simpler alternatives proposed in the literature. Both in Monte Carlo experiments and for real data the various estimation methods yield very different results. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Timing structural change: a conditional probabilistic approach

JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2006
David N. DeJong
We propose a strategy for assessing structural stability in time-series frameworks when potential change dates are unknown. Existing stability tests are effective in detecting structural change, but procedures for identifying timing are imprecise, especially in assessing the stability of variance parameters. We present a likelihood-based procedure for assigning conditional probabilities to the occurrence of structural breaks at alternative dates. The procedure is effective in improving the precision with which inferences regarding timing can be made. We illustrate parametric and non-parametric implementations of the procedure through Monte Carlo experiments, and an assessment of the volatility reduction in the growth rate of US GDP. Copyright © 2006 John Wiley & Sons, Ltd. [source]
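
The likelihood-based idea can be sketched for a break in variance: compute the Gaussian likelihood of the series under each candidate break date, then normalize across dates to obtain conditional probabilities. A minimal sketch under a hypothetical DGP and trimming choice:

```python
import numpy as np
from scipy import stats

def break_date_probabilities(y, trim=10):
    """For each candidate date tau, compute the Gaussian log likelihood with
    separate variances before and after tau, then normalize the likelihoods
    so they sum to one across candidate dates."""
    n = len(y)
    dates = np.arange(trim, n - trim)
    loglik = []
    for tau in dates:
        s1, s2 = np.std(y[:tau]), np.std(y[tau:])
        loglik.append(stats.norm.logpdf(y[:tau], 0, s1).sum()
                      + stats.norm.logpdf(y[tau:], 0, s2).sum())
    loglik = np.array(loglik)
    probs = np.exp(loglik - loglik.max())    # stabilize before normalizing
    return dates, probs / probs.sum()

# Series whose volatility halves at t = 120 (hypothetical)
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 2.0, 120), rng.normal(0, 1.0, 120)])
dates, probs = break_date_probabilities(y)
print("most probable break date:", dates[np.argmax(probs)])
```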


An outlier robust GARCH model and forecasting volatility of exchange rate returns

JOURNAL OF FORECASTING, Issue 5 2002
Beum-Jo Park
Since volatility is perceived as an explicit measure of risk, financial economists have long been concerned with accurate measures and forecasts of future volatility and, undoubtedly, the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model has been widely used for this purpose. It appears, however, from some empirical studies that the GARCH model tends to provide poor volatility forecasts in the presence of additive outliers. To overcome this limitation, this paper proposes a robust GARCH model (RGARCH) using least absolute deviation estimation and introduces an estimation method that is valuable from a practical point of view. Extensive Monte Carlo experiments substantiate our conjectures. As the magnitude of the outliers increases, the one-step-ahead forecasting performance of the RGARCH model shows a more significant improvement, on two forecast evaluation criteria, over both the standard GARCH and random walk models. Strong evidence in favour of the RGARCH model over other competitive models comes from an empirical application: using a sample of two daily exchange rate series, we find that the out-of-sample volatility forecasts of the RGARCH model are clearly superior to those of the competing models. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Choosing among competing econometric forecasts: Regression-based forecast combination using model selection

JOURNAL OF FORECASTING, Issue 6 2001
Norman R. Swanson
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach we examine is based on the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using possibly misspecified models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE-dominate SIC combination forecasts less than 25% of the time in most cases, while other 'standard' combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real-time forecasts of the variables, and it is shown via a series of experiments that SIC, t-statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE-dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd. [source]
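
One simple way to turn the SIC into combination weights is exponential weighting of the criterion differences, akin to BIC model averaging; this is a hedged sketch of that idea, not the paper's exact scheme, and the candidate models below are hypothetical.

```python
import numpy as np

def sic_weights(y, X_list):
    """Fit each candidate model by OLS, compute SIC = n*log(RSS/n) + k*log(n),
    and weight each model's fit by exp(-(SIC - SIC_min)/2), normalized."""
    n = len(y)
    sic, fits = [], []
    for X in X_list:
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ b) ** 2)
        sic.append(n * np.log(rss / n) + X.shape[1] * np.log(n))
        fits.append(X @ b)
    sic = np.array(sic)
    w = np.exp(-(sic - sic.min()) / 2)
    return w / w.sum(), fits

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.standard_normal((2, n))
y = 1.0 + 0.8 * x1 + rng.standard_normal(n)
X_list = [np.column_stack([np.ones(n), x1]),
          np.column_stack([np.ones(n), x1, x2])]   # second model overfits
w, fits = sic_weights(y, X_list)
print("SIC weights:", w.round(3))
```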


ADL tests for threshold cointegration

JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2010
Jing Li
JEL classification: C12; C15; C32.
In this article, we propose new tests for threshold cointegration using an autoregressive distributed lag (ADL) model. The indicator in the threshold model can be based on either a nonstationary or a stationary threshold variable. The cointegrating vector is not prespecified. We adopt a supremum Wald type test to account for the so-called Davies (1987, Biometrika 74, 33) problem. The asymptotic null distributions of the proposed tests are free of nuisance parameters; as such, a bootstrap procedure is not required and the critical values of the proposed tests are tabulated. Monte Carlo experiments show good finite-sample performance. [source]


Evaluating Specification Tests for Markov-Switching Time-Series Models

JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2008
Daniel R. Smith
JEL classification: C12; C15; C22.
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data. [source]


Bootstrap predictive inference for ARIMA processes

JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2004
Lorenzo Pascual
In this study, we propose a new bootstrap strategy to obtain prediction intervals for autoregressive integrated moving-average processes. Its main advantage over other bootstrap methods previously proposed for autoregressive integrated processes is that variability due to parameter estimation can be incorporated into prediction intervals without requiring the backward representation of the process. Consequently, the procedure is very flexible and can be extended to processes whose backward representation is not available; furthermore, its implementation is very simple. The asymptotic properties of the bootstrap prediction densities are obtained. Extensive finite-sample Monte Carlo experiments are carried out to compare the performance of the proposed strategy with alternative procedures. Our proposal equals or outperforms the alternatives in most cases, and the bootstrap strategy is also applied, for the first time, to obtain the prediction density of processes with moving-average components. [source]
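
For an AR(1), the forward-only bootstrap is easy to sketch: each replicate rebuilds a series by resampling residuals, re-estimates the parameters (capturing estimation uncertainty), then simulates future paths from the observed last value. This is a simplified sketch in the spirit of the procedure described, not the authors' exact algorithm.

```python
import numpy as np

def ar1_fit(y):
    b1, b0 = np.polyfit(y[:-1], y[1:], 1)     # y_t = b0 + b1*y_{t-1} + e_t
    resid = y[1:] - (b0 + b1 * y[:-1])
    return b0, b1, resid - resid.mean()

def bootstrap_pred_interval(y, h=5, n_boot=1000, level=0.95, seed=0):
    """Bootstrap prediction intervals for an AR(1), built forward only:
    no backward representation is needed."""
    rng = np.random.default_rng(seed)
    b0, b1, resid = ar1_fit(y)
    paths = np.empty((n_boot, h))
    for b in range(n_boot):
        e_hist = rng.choice(resid, size=len(y), replace=True)
        yb = np.empty(len(y))                 # rebuilt bootstrap history
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = b0 + b1 * yb[t - 1] + e_hist[t]
        b0b, b1b, _ = ar1_fit(yb)             # re-estimated parameters
        e_fut = rng.choice(resid, size=h, replace=True)
        last = y[-1]                          # condition on the observed data
        for j in range(h):
            last = b0b + b1b * last + e_fut[j]
            paths[b, j] = last
    lo, hi = np.percentile(paths, [(1 - level) / 2 * 100,
                                   (1 + level) / 2 * 100], axis=0)
    return lo, hi

rng = np.random.default_rng(1)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.2 + 0.7 * y[t - 1] + rng.standard_normal()
lo, hi = bootstrap_pred_interval(y)
print("95% intervals:", list(zip(lo.round(2), hi.round(2))))
```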


Evaluating, Comparing and Combining Density Forecasts Using the KLIC with an Application to the Bank of England and NIESR 'Fan' Charts of Inflation

OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 2005
James Mitchell
This paper proposes and analyses the Kullback–Leibler information criterion (KLIC) as a unified statistical tool to evaluate, compare and combine density forecasts. Use of the KLIC is particularly attractive, as well as operationally convenient, given its equivalence with the widely used Berkowitz likelihood ratio test for the evaluation of individual density forecasts, which exploits the probability integral transforms. Parallels with the comparison and combination of point forecasts are made. This and related Monte Carlo experiments help draw out properties of combined density forecasts. We illustrate the uses of the KLIC in an application to two widely used published density forecasts for UK inflation, namely the Bank of England and NIESR 'fan' charts. [source]
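
In sample, comparison by KLIC reduces to comparing average log scores of the competing density forecasts. A minimal sketch with hypothetical Gaussian density forecasts:

```python
import numpy as np
from scipy import stats

def klic_comparison(y, dens_a, dens_b):
    """Average log-score difference, a sample analogue of the KLIC
    difference: positive values favour forecast A.  dens_a and dens_b map an
    outcome to its forecast density."""
    d = np.log(dens_a(y)) - np.log(dens_b(y))
    t = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)   # naive test of equal KLIC
    return d.mean(), t

# Outcomes actually N(0,1); forecast A is correct, B is too diffuse.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
mean_diff, t_stat = klic_comparison(
    y,
    dens_a=lambda v: stats.norm.pdf(v, 0, 1),
    dens_b=lambda v: stats.norm.pdf(v, 0, 2),
)
print(f"avg log-score gap = {mean_diff:.3f}, t = {t_stat:.2f}")
```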


Semiparametric estimation of single-index hazard functions without proportional hazards

THE ECONOMETRICS JOURNAL, Issue 1 2006
Tue Gørgens
This research develops a semiparametric kernel-based estimator of hazard functions which does not assume proportional hazards. The maintained assumption is that the hazard functions depend on regressors only through a linear index. The estimator permits both discrete and continuous regressors, both discrete and continuous failure times, and can be applied to right-censored data and to multiple-risks data, in which case the hazard functions are risk-specific. The estimator is root-n consistent and asymptotically normally distributed, and it performs well in Monte Carlo experiments. [source]


Estimation with weak instruments: Accuracy of higher-order bias and MSE approximations

THE ECONOMETRICS JOURNAL, Issue 1 2004
Jinyong Hahn
In this paper, we consider parameter estimation in a linear simultaneous equations model. It is well known that two-stage least squares (2SLS) estimators may perform poorly when the instruments are weak; in this case 2SLS tends to suffer from substantial small-sample biases. It is also known that LIML and Nagar-type estimators are less biased than 2SLS but suffer from large small-sample variability. We construct a bias-corrected version of 2SLS based on the jackknife principle. Using higher-order expansions we show that the MSE of our jackknife 2SLS estimator is approximately the same as the MSE of the Nagar-type estimator. We also compare the jackknife 2SLS with an estimator suggested by Fuller (Econometrica 45, 933–54) that significantly decreases the small-sample variability of LIML. Monte Carlo simulations show that even in relatively large samples the MSE of LIML and Nagar can be substantially larger than that of jackknife 2SLS. The jackknife 2SLS estimator and Fuller's estimator give the best overall performance. Based on our Monte Carlo experiments we conduct informal statistical tests of the accuracy of approximate bias and MSE formulas. We find that higher-order expansions traditionally used to rank LIML, 2SLS and other IV estimators are unreliable when identification of the model is weak. Overall, our results show that only estimators with well-defined finite-sample moments should be used when identification of the model is weak. [source]
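
The jackknife idea in the first stage is easy to state: compute each observation's instrument projection with that observation left out, breaking the own-observation correlation that biases 2SLS. A scalar-regressor sketch with a hypothetical weak-instrument DGP:

```python
import numpy as np

def jive_2sls(y, x, Z):
    """Jackknife-type 2SLS: leave-one-out first-stage fitted values via the
    hat matrix identity xhat_{-i} = ((Hx)_i - h_i x_i) / (1 - h_i)."""
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    h = np.diag(H)
    xhat_loo = (H @ x - h * x) / (1 - h)
    return (xhat_loo @ y) / (xhat_loo @ x)

def tsls(y, x, Z):
    """Ordinary 2SLS for comparison."""
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return (xhat @ y) / (xhat @ x)

# Many instruments, small first stage, endogenous x (hypothetical DGP)
rng = np.random.default_rng(0)
n, k, beta = 200, 15, 1.0
Z = rng.standard_normal((n, k))
u = rng.standard_normal(n)
x = Z @ (0.1 * np.ones(k)) + 0.8 * u + 0.6 * rng.standard_normal(n)
y = beta * x + u
print(f"2SLS = {tsls(y, x, Z):.3f}, jackknife 2SLS = {jive_2sls(y, x, Z):.3f}")
```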


Empirical properties of duality theory

AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 1 2002
Jayson L. Lusk
This research examines selected empirical properties of duality relationships. Monte Carlo experiments indicate that Hessian matrices estimated from the normalised unrestricted profit, restricted profit and production functions yield conflicting results in the presence of measurement error and low relative price variability. In particular, small amounts of measurement error in quantity variables can translate into large errors in uncompensated estimates calculated via restricted and unrestricted profit and production functions. These results emphasise the need for high-quality data when estimating empirical models, in order to accurately determine the dual relationships implied by economic theory. [source]