Competing Models
Selected Abstracts

Two Competing Models of How People Learn in Games
ECONOMETRICA, Issue 6 2002
Ed Hopkins
Reinforcement learning and stochastic fictitious play are apparent rivals as models of human learning. They embody quite different assumptions about the processing of information and optimization. This paper compares their properties and finds that they are far more similar than previously thought. In particular, the expected motion of stochastic fictitious play and reinforcement learning with experimentation can both be written as a perturbed form of the evolutionary replicator dynamics. Therefore they will in many cases have the same asymptotic behavior. In particular, local stability of mixed equilibria under stochastic fictitious play implies local stability under perturbed reinforcement learning. The main identifiable difference between the two models is speed: stochastic fictitious play gives rise to faster learning. [source]
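A minimal numerical sketch of the perturbed replicator dynamics referred to in the Hopkins abstract above may help — here with an entropy perturbation of the kind associated with logit choice in stochastic fictitious play. The game, perturbation strength, and step size are illustrative assumptions, not Hopkins's specification:

```python
import numpy as np

# Entropy-perturbed replicator dynamics (illustrative, one population):
#   dx_i/dt = x_i * [ (Ax)_i - x'Ax + eps * ( -ln x_i + sum_j x_j ln x_j ) ]
# The entropy term is the perturbation associated with logit choice in
# stochastic fictitious play; the game A, eps and dt are assumptions.
A = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])      # rock-paper-scissors payoffs
eps, dt = 0.1, 0.01
x = np.array([0.6, 0.3, 0.1])           # initial mixed strategy

for _ in range(20_000):
    f = A @ x                           # payoff to each pure strategy
    avg = x @ f                         # average population payoff
    pert = -np.log(x) + x @ np.log(x)   # entropy perturbation
    x = x + dt * x * (f - avg + eps * pert)
    x = np.clip(x, 1e-12, None)
    x = x / x.sum()                     # keep x on the simplex

print(x.round(3))                       # near (1/3, 1/3, 1/3), the mixed equilibrium
```

With eps > 0 the cycling of the unperturbed rock-paper-scissors dynamics is damped and the trajectory settles near the mixed equilibrium — the kind of local stability of mixed equilibria the abstract describes.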
Human evolution at the Matuyama-Brunhes boundary
EVOLUTIONARY ANTHROPOLOGY, Issue 1 2004
Giorgio Manzi
Abstract The cranial morphology of fossil hominids between the end of the Early Pleistocene and the beginning of the Middle Pleistocene provides crucial evidence to understand the distribution in time and space of the genus Homo. This evidence is critical for evaluating the competing models regarding diversity within our genus. The debate focuses on two alternative hypotheses, one basically anagenetic and the other cladogenetic. The first suggests that morphological change is so diffused, slow, and steady that it is meaningless to apply species names to segments of a single lineage. The second is that the morphological variation observed in the fossil record can best be described as a number of distinct species that are not connected in a linear ancestor-descendant sequence. Today much more fossil evidence is available than in the past to test these alternative hypotheses, as well as intermediate variants. Special attention must be paid to Africa, because this is the most probable continental homeland both for the origin of the genus Homo (around 2.5–2 Ma)1 and, two million or so years later, for the emergence of the species H. sapiens.2 However, the African fossil record is very poorly represented between 1 Ma and 600 ka. Europe furnishes recent discoveries in this time range around the Matuyama-Brunhes chron boundary (780,000 years ago), a period for which, at present, we have no noteworthy fossil evidence in Africa or the Levant. Two penecontemporaneous sources of European fossil evidence, the Ceprano calvaria (Italy)3 and the TD6 fossil assemblage of Atapuerca (Spain)4 are thus of great interest for testing hypotheses about human evolution in the fundamental time span bracketed between the late Early and the Middle Pleistocene. This paper is based on a phenetic approach to cranial variation aimed at reviewing the Early-to-Middle Pleistocene trajectories of human evolution. The focus of the paper is on neither the origin nor the end of the story of the genus Homo, but rather its chronological and phylogenetic core. Elucidation of the evolutionary events that happened around 780 ka during the transition from the Early to Middle Pleistocene is one of the new frontiers for human paleontology, and is critical for understanding the processes that ultimately led to the origin of H. sapiens. [source]

Bridging domain methods for coupled atomistic–continuum models with L2 or H1 couplings
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2009
P.-A. Guidault
Abstract A bridging domain method for coupled atomistic–continuum models is proposed that enables comparison of various coupling terms. The approach does not require the finite element mesh to match the lattice spacing of the atomic model. It is based on an overlapping domain decomposition method that makes use of Lagrange multipliers and weight functions in the coupling zone in order to distribute the energy between the two competing models. Two couplings are investigated. The L2 coupling enforces the continuity of displacements between the two models directly. The H1 coupling involves the definition of a strain measure; for this purpose, a moving least-squares interpolant of the atomic displacement is defined. The choice of the weight functions is studied. Patch tests and a graphene sheet with a crack are studied. It is shown that both continuous and discontinuous weight functions can be used with the H1 coupling, whereas the L2 coupling requires continuous weight functions. For the examples developed herein, the L2 coupling produces less error in the zone of interest. The flexibility of the H1 coupling with a constant weight function may be beneficial, but the results may be affected by the topology of the bridging zone. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Forecasting financial volatility of the Athens stock exchange daily returns: an application of the asymmetric normal mixture GARCH model
INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 4 2010
Anastassios A. Drakos
Abstract In this paper we model the return volatility of stocks traded in the Athens Stock Exchange using alternative GARCH models. We employ daily data for the period January 1998 to November 2008, allowing us to capture possible positive and negative effects that may be due to either contagion or idiosyncratic sources. The econometric analysis is based on the estimation of a class of five GARCH models under alternative assumptions with respect to the error distribution. The main findings of our analysis are: first, based on a battery of diagnostic tests, it is shown that the normal mixture asymmetric GARCH (NM-AGARCH) models perform better in modeling the volatility of stock returns. Second, using Kupiec's tests for in-sample and out-of-sample forecasting performance, the evidence is mixed, as the choice of the appropriate volatility model depends on the trading position under consideration. Third, at the 99% confidence level the NM-AGARCH model with skewed Student's t distribution outperforms all other competing models for both in-sample and out-of-sample forecasting performance. This increase in predictive performance at higher confidence levels makes the NM-AGARCH model with skewed Student's t distribution consistent with the requirements of the Basel II agreement. Copyright © 2010 John Wiley & Sons, Ltd. [source]
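The Kupiec tests cited in the Drakos abstract above are likelihood-ratio tests of unconditional coverage for value-at-risk forecasts. A short sketch of the standard proportion-of-failures version (function and variable names are mine; it assumes at least one and fewer than T exceedances):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, T, p):
    """Kupiec proportion-of-failures LR test of unconditional coverage.

    violations: observed VaR exceedances (assumed 0 < violations < T),
    T: number of forecasts, p: nominal tail probability (0.01 for 99% VaR).
    Returns the LR statistic and its chi-square(1) p-value.
    """
    x, phat = violations, violations / T
    ll_null = (T - x) * np.log(1 - p) + x * np.log(p)        # coverage = p
    ll_alt = (T - x) * np.log(1 - phat) + x * np.log(phat)   # coverage = x/T
    lr = -2.0 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)

# e.g. 16 exceedances of a 99% VaR over 1000 trading days:
print(kupiec_pof(16, 1000, 0.01))   # LR around 3.1, not rejected at 5%
```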
A score test for non-nested hypotheses with applications to discrete data models
JOURNAL OF APPLIED ECONOMETRICS, Issue 5 2001
J. M. C. Santos Silva
In this paper it is shown that a convenient score test against non-nested alternatives can be constructed from the linear combination of the likelihood functions of the competing models. This is essentially a test for the correct specification of the conditional distribution of the variable of interest. Given its characteristics, the proposed test is particularly attractive for checking the distributional assumptions in models for discrete data. The usefulness of the test is illustrated with an application to models for recreational boating trips. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Putting density dependence in perspective: nest density, nesting phenology, and biome all matter to survival of simulated mallard Anas platyrhynchos nests
JOURNAL OF AVIAN BIOLOGY, Issue 3 2009
Johan Elmberg
Breeding success in ground-nesting birds is primarily determined by nest survival, which may be density-dependent, but the generality of this pattern remains untested. In a replicated crossover experiment conducted on 30 wetlands, survival of simulated mallard nests was related to "biome" (n = 14 mediterranean and 16 boreal wetlands), breeding "phenology" (early vs late nests), and "density" (2 vs 8 nests per 225 m of shoreline). Local abundances of "waterfowl", "other waterbirds", and "avian predators" were used as covariates. We used an information-theoretic approach and Program MARK to select among competing models. Nest survival was lower in late nests than in early ones, and it was lower in the mediterranean than in the boreal study region. High-density treatment nests suffered higher depredation rates than low-density nests during days 1–4 of each experimental period. Nest survival was negatively associated with local abundance of "waterfowl" in the boreal but not in the mediterranean biome. Effect estimates from the highest-ranked model showed that nest "density" (days 1–4) had the strongest impact on model fit, i.e. three times that of "biome" and 1.5 times that of "phenology". The latter's effect, in turn, was twice that of "biome". We argue that our study supports the idea that density-dependent nest predation may be temporally and spatially widespread in waterfowl. We also see an urgent need for research on how waterfowl nesting phenology is matched to that of prey and vegetation. [source]

Development and testing of the Velicer Attitudes Toward Violence Scale: evidence for a four-factor model
AGGRESSIVE BEHAVIOR, Issue 2 2006
Craig A. Anderson
Abstract The factor structure of the Velicer Attitudes Toward Violence Scale [VATVS; Velicer, Huckel and Hansen, 1989] was examined in three studies. Study 1 (n = 460 undergraduates) found a poor fit for a hierarchical five-factor model earlier advanced by Velicer et al. [1989], but a good fit for an oblique four-factor model. In Study 2, this alternative model was cross-validated in a confirmatory factor analysis of an additional 195 undergraduate students. In Study 3, the competing models were compared in terms of ability to predict self-reported aggression, with 823 undergraduate students. The new four-factor model proved superior. Other findings included evidence of factorial invariance on the VATVS, and more favorable attitudes toward violence among men than women. The VATVS appears to measure the same four attitudinal constructs for men and women: violence in war, penal code violence, corporal punishment of children, and intimate violence. Aggr. Behav. 32:122–136, 2006. © 2006 Wiley-Liss, Inc. [source]
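Comparisons of competing factor structures such as those in the VATVS studies above are normally run as confirmatory models in SEM software; as a rough illustration of the underlying idea — penalized-likelihood comparison of a four- versus five-factor solution — here is a sketch using scikit-learn's FactorAnalysis on synthetic data. The data and the approximate free-parameter count are assumptions; this is not the paper's CFA:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, p, k_true = 500, 12, 4
L = rng.normal(size=(p, k_true))                   # true loadings (synthetic)
X = rng.normal(size=(n, k_true)) @ L.T + rng.normal(size=(n, p))

def fa_bic(X, k):
    """BIC for a k-factor model; the free-parameter count p*k + p - k(k-1)/2
    (loadings + uniquenesses, minus rotational indeterminacy) is approximate."""
    fa = FactorAnalysis(n_components=k).fit(X)
    loglik = fa.score(X) * len(X)                  # total log-likelihood
    n_par = X.shape[1] * k + X.shape[1] - k * (k - 1) // 2
    return -2.0 * loglik + n_par * np.log(len(X))

for k in (4, 5):
    print(k, round(fa_bic(X, k), 1))               # smaller BIC preferred
```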
Forecasting volatility with support vector machine-based GARCH model
JOURNAL OF FORECASTING, Issue 4 2010
Shiyi Chen
Abstract Recently, the support vector machine (SVM), a novel artificial neural network (ANN), has been successfully used for financial forecasting. This paper deals with the application of SVM to volatility forecasting under the GARCH framework, the performance of which is compared with simple moving average, standard GARCH, nonlinear EGARCH and traditional ANN-GARCH models using two evaluation measures and robust Diebold–Mariano tests. The real data used in this study are daily GBP exchange rates and the NYSE composite index. Empirical results from both simulation and real data reveal that, under a recursive forecasting scheme, SVM-GARCH models significantly outperform the competing models in most situations of one-period-ahead volatility forecasting, which confirms the theoretical advantage of SVM. The standard GARCH model also performs well in the case of normality and large sample size, while the EGARCH model is good at forecasting volatility under highly skewed distributions. The sensitivity analysis for choosing SVM parameters and the cross-validation used to determine the stopping point of the recurrent SVM procedure are also examined in this study. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Volatility forecasting with double Markov switching GARCH models
JOURNAL OF FORECASTING, Issue 8 2009
Cathy W. S. Chen
Abstract This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat-tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first-order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value-at-risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd. [source]
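A toy simulation may clarify the "double switching" idea in the Chen abstract above: a first-order Markov chain moves both the conditional mean and the GARCH variance parameters between two regimes, with fat-tailed innovations. All parameter values are invented, and the paper's Bayesian MCMC estimation is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two regimes, each with its own mean and GARCH(1,1) parameters (invented):
mu    = np.array([ 0.05, -0.10])
omega = np.array([ 0.01,  0.05])
alpha = np.array([ 0.05,  0.15])
beta  = np.array([ 0.90,  0.80])
P = np.array([[0.98, 0.02],                 # first-order Markov transitions
              [0.05, 0.95]])

T, s = 2000, 0
h = omega[s] / (1 - alpha[s] - beta[s])     # start at regime-0 stationary variance
eps_prev, r = 0.0, np.empty(T)
for t in range(T):
    s = rng.choice(2, p=P[s])               # unobserved regime switches
    h = omega[s] + alpha[s] * eps_prev**2 + beta[s] * h
    z = rng.standard_t(df=8) * np.sqrt(6.0 / 8.0)   # unit-variance fat tails
    eps_prev = np.sqrt(h) * z
    r[t] = mu[s] + eps_prev                 # switching mean and variance

print(round(r.std(), 3), round(np.abs(r).max(), 3))
```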
Comparing density forecast models
JOURNAL OF FORECASTING, Issue 3 2007
Yong Bao
Abstract In this paper we discuss how to compare various (possibly misspecified) density forecast models using the Kullback–Leibler information criterion (KLIC) of a candidate density forecast model with respect to the true density. The KLIC differential between a pair of competing models is the (predictive) log-likelihood ratio (LR) between the two models. Even though the true density is unknown, using the LR statistic amounts to comparing models with the KLIC as a loss function, and thus enables us to assess which density forecast model approximates the true density more closely. We also discuss how this KLIC is related to the KLIC based on the probability integral transform (PIT) in the framework of Diebold et al. (1998). While they are asymptotically equivalent, the PIT-based KLIC is best suited for evaluating the adequacy of each density forecast model, and the original KLIC is best suited for comparing competing models. In an empirical study with the S&P500 and NASDAQ daily return series, we find strong evidence for rejecting the normal-GARCH benchmark model in favor of models that can capture skewness in the conditional distribution and asymmetry and long memory in the conditional variance. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Predicting LDC debt rescheduling: performance evaluation of OLS, logit, and neural network models
JOURNAL OF FORECASTING, Issue 8 2001
Douglas K. Barney
Abstract Empirical studies in the area of sovereign debt have used statistical models singularly to predict the probability of debt rescheduling. Unfortunately, researchers have made few efforts to test the reliability of these model predictions or to identify a superior prediction model among competing models. This paper tested the predictive abilities of neural network, OLS, and logit models regarding debt rescheduling of less developed countries (LDCs). All models predicted well out-of-sample. The results demonstrated a consistent performance of all models, indicating that researchers and practitioners can rely on neural networks or on the traditional statistical models to give useful predictions. Copyright © 2001 John Wiley & Sons, Ltd. [source]
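A bare-bones version of the Barney horse race above — OLS as a linear probability model, logit, and a small neural network, scored out of sample on a binary rescheduling indicator — could look like the following sketch (synthetic predictors; the paper's actual data and specifications differ):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1500
X = rng.normal(size=(n, 5))                       # stand-ins for debt indicators
true_logit = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.6]) - 0.4
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "OLS (linear probability)": LinearRegression().fit(X_tr, y_tr),
    "logit": LogisticRegression().fit(X_tr, y_tr),
    "neural net": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                random_state=0).fit(X_tr, y_tr),
}
for name, m in models.items():
    prob = m.predict(X_te) if name.startswith("OLS") else m.predict_proba(X_te)[:, 1]
    print(name, round(roc_auc_score(y_te, prob), 3))   # out-of-sample ranking
```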
Measurement and correlation of microstructures: the case of foliation intersection axes
JOURNAL OF METAMORPHIC GEOLOGY, Issue 3 2003
A. R. Stallard
Abstract Recent studies have used the relative rotation axis of sigmoidal and spiral-shaped inclusion trails, known as the Foliation Inflexion/Intersection Axis (FIA), to investigate geological processes such as fold mechanisms and porphyroblast growth. The geological usefulness of this method depends upon the accurate measurement of FIA orientations and correct correlation of temporally related FIAs. This paper uses new data from the Canton Schist to assess the variation in FIA orientations within and between samples, and evaluates criteria for correlating FIAs. For the first time, an entire data set of FIA measurements is published, and data are presented in a way that reflects the variation in FIA orientations within individual samples and provides an indication of the reliability of the data. Analysis of 61 FIA trends determined from the Canton Schist indicates a minimum intrasample range in FIA orientations of 30°. Three competing models are presented for correlation of these FIAs, and each model employs different correlation criteria. Correlation of FIAs in Model 1 is based on relative timing and textural criteria, while Model 2 uses relative timing, orientation, and patterns of changing FIA orientations, and Model 3 uses relative timing and FIA orientation as correlation criteria. Importantly, the three models differ in the spread of FIA orientations within individual sets and in the number of sets distinguished in the data. Relative timing is the most reliable criterion for correlation, followed by textural criteria and patterns of changing FIA orientations from core to rim of porphyroblasts. It is proposed that within a set of temporally related FIAs, the typical spread of orientations involves clustering of data in a 60° range, but outliers occur at other orientations, including near-normal to the peak distribution. Consequently, in populations of FIA data that contain a wide range of orientations, correlation on the basis of orientation is unreliable in the absence of additional criteria. The results of this study suggest that FIAs are best used as semiquantitative indicators of bulk trends rather than as exact measurements for the purpose of quantitative analyses. [source]
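FIA trends are axial data (a trend of 10° denotes the same axis as 190°), so intrasample spreads like the 30° range reported above are naturally summarized with circular statistics via the standard angle-doubling device. A small sketch of that textbook method (not code from the paper; the sample trends are invented):

```python
import numpy as np

def axial_mean(trends_deg):
    """Mean direction and resultant length for axial data (0-180 deg).

    Standard approach: double the angles to map axes onto the full circle,
    take the circular mean, then halve the result back to axial form.
    """
    a = np.deg2rad(2.0 * np.asarray(trends_deg, dtype=float))
    C, S = np.cos(a).mean(), np.sin(a).mean()
    mean = np.rad2deg(np.arctan2(S, C)) / 2.0 % 180.0
    R = np.hypot(C, S)        # concentration: 1 = tight cluster, 0 = diffuse
    return mean, R

print(axial_mean([95, 100, 115, 120, 130]))   # a clustered set, ~35 deg range
```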
Jailed resources: conservation of resources theory as applied to burnout among prison guards
JOURNAL OF ORGANIZATIONAL BEHAVIOR, Issue 1 2007
Jean-Pierre Neveu
This study evaluates a salutogenic perspective on the burnout process. Building upon Hobfoll's (1989) Conservation of Resources theory, it proposes a simultaneous test of three hypothesized resource-based models. These competing models test the structure of burnout in relation to depleted resources (e.g., lack of skill utilization, of participation, of co-worker support, and of professional worth) and negative correlates (e.g., absenteeism and depression). SEM results provide equally good support for two resource-based models, although the two proceed from different approaches (Leiter vs. Golembiewski). Of all burnout components, personal accomplishment is found to be least related to resource depletion, while emotional exhaustion is the most related to absenteeism and depression. Results are analyzed in light of the existing literature and of the specific nature of the sample, a large population of French correctional officers (n = 707). Implications for burnout theory and human resource management are discussed. Copyright © 2006 John Wiley & Sons, Ltd. [source]

The chronology of abrupt climate change and Late Upper Palaeolithic human adaptation in Europe
JOURNAL OF QUATERNARY SCIENCE, Issue 5 2006
S. P. E. Blockley
Abstract This paper addresses the possible connections between the onset of human expansion in Europe following the Last Glacial Maximum and the timing of abrupt climate warming at the onset of the Lateglacial (Bölling/Allerød) Interstadial. There are opposing views as to whether or not human populations and activities were directly 'forced' by climate change, based on different comparisons between archaeological and environmental data. We review the geochronological assumptions and approaches on which data comparisons have been attempted in the past, and argue that the uncertainties presently associated with age models based on calibrated radiocarbon dates preclude robust testing of the competing models, particularly when comparing the data to non-radiocarbon-based timescales such as the Greenland ice core records. The paper concludes with some suggestions as to the steps that will be necessary if more robust tests of the models are to be developed in the future. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Structural and parameter uncertainty in Bayesian cost-effectiveness models
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2010
Christopher H. Jackson
Summary. Health economic decision models are subject to various forms of uncertainty, including uncertainty about the parameters of the model and about the model structure. These uncertainties can be handled within a Bayesian framework, which also allows evidence from previous studies to be combined with the data. As an example, we consider a Markov model for assessing the cost-effectiveness of implantable cardioverter defibrillators. Using Markov chain Monte Carlo posterior simulation, uncertainty about the parameters of the model is formally incorporated in the estimates of expected cost and effectiveness. We extend these methods to include uncertainty about the choice between plausible model structures. This is accounted for by averaging the posterior distributions from the competing models using weights derived from the pseudo-marginal likelihood and the deviance information criterion (DIC), which are measures of expected predictive utility. We also show how these cost-effectiveness calculations can be performed efficiently in the widely used software WinBUGS. [source]
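The structural averaging described in the Jackson abstract above weights each candidate model by a measure of expected predictive utility; a toy sketch of DIC-based weights, using the common exp(-ΔDIC/2) convention (the DIC values and cost draws below are invented), is:

```python
import numpy as np

def dic_weights(dic):
    """Akaike-style weights from DIC values: w_m proportional to exp(-dDIC_m / 2)."""
    dic = np.asarray(dic, dtype=float)
    w = np.exp(-0.5 * (dic - dic.min()))
    return w / w.sum()

dic = [2410.3, 2408.9, 2415.7]          # invented DICs for three structures
w = dic_weights(dic)
print(w.round(3))

# Model-averaged expected cost: combine each model's posterior draws with weight w_m.
rng = np.random.default_rng(6)
cost_draws = [rng.normal(20000 + 500 * m, 300, 5000) for m in range(3)]
print(round(sum(wm * d.mean() for wm, d in zip(w, cost_draws)), 1))
```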
Sensitivity analysis for incomplete contingency tables: the Slovenian plebiscite case
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2001
Geert Molenberghs
Classical inferential procedures induce conclusions from a set of data to a population of interest, accounting for the imprecision resulting from the stochastic component of the model. Less attention is devoted to the uncertainty arising from (unplanned) incompleteness in the data. Through the choice of an identifiable model for non-ignorable non-response, one narrows the possible data-generating mechanisms to the point where inference only suffers from imprecision. Some proposals have been made for assessing the sensitivity to these modelling assumptions; many are based on fitting several plausible but competing models. For example, we could assume that the missing data are missing at random in one model, and then fit an additional model where non-random missingness is assumed. On the basis of data from a Slovenian plebiscite, conducted in 1991 to prepare for independence, it is shown that such an ad hoc procedure may be misleading. We propose an approach which identifies and incorporates both sources of uncertainty in inference: imprecision due to finite sampling and ignorance due to incompleteness. A simple sensitivity analysis considers a finite set of plausible models. We take this idea one step further by considering more degrees of freedom than the data support. This produces sets of estimates (regions of ignorance) and sets of confidence regions (combined into regions of uncertainty). [source]

Bayesian optimal reconstruction of the primordial power spectrum
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2009
M. Bridges
ABSTRACT The form of the primordial power spectrum has the potential to differentiate strongly between competing models of perturbation generation in the early universe and so is of considerable importance. The recent release of five years of Wilkinson Microwave Anisotropy Probe observations has confirmed the general picture of the primordial power spectrum as deviating slightly from scale invariance, with a spectral tilt parameter of n_s ≈ 0.96. None the less, many attempts have been made to isolate further features such as breaks and cut-offs using a variety of methods, some employing more than ~10 varying parameters. In this paper, we apply the robust technique of Bayesian model selection to reconstruct the optimal degree of structure in the spectrum. We model the spectrum simply and generically as piecewise linear in ln k between 'nodes' in k-space whose amplitudes are allowed to vary. The number of nodes and their k-space positions are chosen by the Bayesian evidence, so that we can identify both the complexity and location of any detected features. Our optimal reconstruction contains perhaps surprisingly few features, the data preferring just three nodes. This reconstruction allows for a degree of scale dependence of the tilt, with the 'turn-over' scale occurring around k ≈ 0.016 Mpc⁻¹. More structure is penalized by the evidence as overfitting the data, so there is currently little point in attempting reconstructions that are more complex. [source]

Measuring Mental Health Following Traumatization: Psychometric Analysis of the Kuwait Raha Scale Using a Random National Household Data Set
AMERICAN JOURNAL OF ORTHOPSYCHIATRY, Issue 2 2009
Paula Chapman, PhD
The authors report on the psychometric properties of the Kuwait Raha Scale (KRS), a measure developed to assess well-being among Kuwaitis. Specific aims of the study were to (a) evaluate competing models of the latent structure of the KRS using exploratory factor analysis and identify the best model, (b) compare the model developed from a nationally representative sample with the initial model reported with Kuwaiti undergraduate students, and (c) assess the discriminant validity of the KRS with the General Health Questionnaire (GHQ). Factor analysis suggested that a 5-factor model best suited the data, whereas the development of the KRS had indicated a 4-factor model. Differences in the latent structure found between the current study and the original examination of the KRS factor structure may be attributed to the demographics of the samples used in the 2 studies. Whereas the earlier study used a sample of undergraduate college students, the current study acquired a nationally representative sample of the Kuwaiti population. Discriminant validity of the KRS with the GHQ indicated that the KRS and the GHQ measure different dimensions of health. Implications for theory and research are discussed, with particular attention to overcoming the challenges confronting the meaning and measurement of well-being in developing countries and stimulating interdisciplinary research. [source]

Improving robust model selection tests for dynamic models
THE ECONOMETRICS JOURNAL, Issue 2 2010
Hwan-sik Choi
Summary. We propose an improved model selection test for dynamic models using a new asymptotic approximation to the sampling distribution of a new test statistic. The model selection test is applicable to dynamic models with very general selection criteria and estimation methods. Since our test statistic does not assume the exact form of a true model, the test is essentially non-parametric once competing models are estimated. To handle the unknown serial correlation in the data, we use a Heteroscedasticity/Autocorrelation-Consistent (HAC) variance estimator, and the sampling distribution of the test statistic is approximated by the fixed-b asymptotic approximation. The asymptotic approximation depends on the kernel functions and bandwidth parameters used in HAC estimators. We compare the finite-sample performance of the new test with bootstrap methods as well as with standard normal approximations, and show that the fixed-b asymptotics and the bootstrap methods are markedly superior to the standard normal approximation for moderate sample sizes in time series data. An empirical application to foreign exchange rate forecasting models is presented, and the result shows that the normal approximation to the distribution of the test statistic appears to overstate the data's ability to distinguish between two competing models. [source]
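The HAC ingredient of the Choi test above can be sketched compactly: a Bartlett-kernel (Newey–West) long-run variance for the series of criterion differentials, used to studentize their mean. This is the plain estimator with a rule-of-thumb bandwidth, not the paper's fixed-b implementation (which keeps the same statistic but uses nonstandard critical values):

```python
import numpy as np

def nw_lrv(d, bandwidth):
    """Bartlett-kernel (Newey-West) long-run variance of a series d."""
    d = np.asarray(d, dtype=float) - np.mean(d)
    T = d.size
    lrv = d @ d / T                              # lag-0 autocovariance
    for j in range(1, bandwidth + 1):
        gamma = d[j:] @ d[:-j] / T               # lag-j autocovariance
        lrv += 2.0 * (1.0 - j / (bandwidth + 1)) * gamma
    return lrv

rng = np.random.default_rng(3)
d = rng.normal(0.02, 1.0, 400)                   # invented criterion differentials
b = int(4 * (len(d) / 100) ** (2 / 9))           # rule-of-thumb bandwidth
t_stat = np.sqrt(len(d)) * d.mean() / np.sqrt(nw_lrv(d, b))
print(b, round(t_stat, 2))                       # compare with N(0,1) or fixed-b
```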
Model selection tests for nonlinear dynamic models
THE ECONOMETRICS JOURNAL, Issue 1 2002
Douglas Rivers
This paper generalizes Vuong's (1989) asymptotically normal tests for model selection in several important directions. First, it allows for incompletely parametrized models such as econometric models defined by moment conditions. Second, it allows for a broad class of estimation methods that includes most estimators currently used in practice. Third, it considers model selection criteria other than the models' likelihoods, such as the mean squared errors of prediction. Fourth, the proposed tests are applicable to possibly misspecified nonlinear dynamic models with weakly dependent heterogeneous data. Cases where the estimation methods optimize the model selection criteria are distinguished from cases where they do not. We also consider the estimation of the asymptotic variance of the difference between the competing models' selection criteria, which is necessary for our tests. Finally, we discuss conditions under which our tests are valid. It is seen that the competing models must be essentially nonnested. [source]
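For the classical likelihood setting that the Rivers abstract above generalizes, Vuong's (1989) statistic is simply the standardized mean of the pointwise log-likelihood ratios between the two candidates. A minimal sketch (toy data and models; parameters fixed rather than estimated, for brevity):

```python
import numpy as np
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(4)
y = rng.standard_t(df=5, size=800)               # data with fat tails

# Pointwise log-likelihoods under two non-nested candidates (parameters are
# fixed here for brevity; in practice they would be estimated first):
ll_f = norm.logpdf(y, loc=y.mean(), scale=y.std(ddof=1))   # model f: normal
ll_g = student_t.logpdf(y, df=5)                           # model g: Student t

d = ll_f - ll_g                                  # per-observation LR
z = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)   # Vuong statistic, approx N(0,1)
print(round(z, 2))   # strongly negative z favors g; small |z| means indistinguishable
```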
An examination of the complementary volume–volatility information theories
THE JOURNAL OF FUTURES MARKETS, Issue 10 2008
Zhiyao Chen
The volume–volatility relationship during the dissemination stages of information flow is examined by analyzing various theories relating volume and volatility as complementary rather than competing models. The mixture of distributions hypothesis, the sequential arrival of information hypothesis, the dispersion of beliefs hypothesis, and the noise trader hypothesis all add to the understanding of how volume and volatility interact for different types of futures traders. An integrated picture of the volume–volatility relationship is provided by investigating the dynamic linear and nonlinear associations between volatility and the volume of informed (institutional) and uninformed (general public) traders. In particular, the trading-behavior explanation for the persistence of futures volatility, the effect of the timing of private information arrival, and the response of institutional traders to excess noise-trading risk are examined. © 2008 Wiley Periodicals, Inc. Jrl Fut Mark 28:963–992, 2008 [source]

Falling and explosive, dormant, and rising markets via multiple-regime financial time series models
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2010
Cathy W. S. Chen
Abstract A multiple-regime threshold nonlinear financial time series model, with a fat-tailed error distribution, is discussed, and Bayesian estimation and inference are considered. Furthermore, approximate Bayesian posterior model comparison among competing models with different numbers of regimes is considered, which is effectively a test for the number of required regimes. An adaptive Markov chain Monte Carlo (MCMC) sampling scheme is designed, while importance sampling is employed to estimate Bayesian residuals for model diagnostic testing. Our modeling framework provides a parsimonious representation of well-known stylized features of financial time series and facilitates statistical inference in the presence of high or explosive persistence and dynamic conditional volatility. We focus on the three-regime case, where the main feature of the model is the capturing of mean and volatility asymmetries in financial markets, while allowing an explosive volatility regime. A simulation study highlights the properties of our MCMC estimators and the accuracy and favourable performance of the posterior model probability approximation method as a model selection tool, compared with a deviance criterion. An empirical study of eight international oil and gas markets provides strong support for the three-regime model over its competitors in most markets, in terms of model posterior probability, and shows three distinct regime behaviours: falling/explosive, dormant, and rising markets. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Information Uncertainty Risk and Seasonality in International Stock Markets
ASIA-PACIFIC JOURNAL OF FINANCIAL STUDIES, Issue 2 2010
Dongcheol Kim
G14; G12
Abstract A parsimonious two-factor model containing the market risk factor and a risk factor related to earnings information uncertainty has been developed to explain the seasonal regularity of January in international stock markets. This two-factor model shows apparently stronger power in explaining the time-series behavior of stock returns and the cross-section of average stock returns in all major developed countries than do the competing models. Furthermore, the arbitrage residual return in January, which is the difference in average residual returns between the smallest and largest size portfolios, is statistically insignificant in all the countries. These results indicate that the risk factor related to earnings information uncertainty plays a special role in explaining the seasonal pattern of stock returns in January, and that January might be a month that tends to differentially reward stocks with uncertain earnings information. It could be argued, therefore, that large returns in January might be a risk premium for bearing information uncertainty risk concerning earnings and unexpected earnings surprises faced at the earnings announcement, and that the previously reported strong January seasonality in stock returns might result from the use of misspecified models in adjusting for risk. [source]
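In estimation terms, the two-factor specification in the Kim abstract above is a time-series regression of returns on the market factor and an information-uncertainty factor. A generic OLS sketch with simulated data (the factor construction and loadings below are invented, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 360                                          # monthly observations
mkt = rng.normal(0.005, 0.04, T)                 # market excess return factor
iu = rng.normal(0.002, 0.03, T)                  # information-uncertainty factor
r = 0.001 + 1.1 * mkt + 0.4 * iu + rng.normal(0.0, 0.02, T)   # one test asset

X = np.column_stack([np.ones(T), mkt, iu])
beta, *_ = np.linalg.lstsq(X, r, rcond=None)     # time-series OLS
resid = r - X @ beta
sigma2 = resid @ resid / (T - 3)
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
print(dict(zip(["alpha", "b_mkt", "b_iu"], beta.round(4))))
print("t-stats:", (beta / se).round(2))          # a significant b_iu is the point
```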