Factor Models
Kinds of Factor Models: Selected Abstracts

Arbitrage and the Evaluation of Linear Factor Models in UK Stock Returns
FINANCIAL REVIEW, Issue 2 2010. Jonathan Fletcher. JEL: G12
Abstract: I examine the impact of the no-arbitrage restriction on the estimation and evaluation of linear factor models in UK stock returns. The no-arbitrage restriction reduces volatility and eliminates most of the negative values of the fitted stochastic discount factor models. All of the factor models are rejected, and there are significant differences in pricing performance between models under the no-arbitrage restriction. The no-arbitrage restriction can have a significant impact on both the parameter estimates and the pricing errors for some models.

Multivariate Markov Switching Common Factor Models for the UK
BULLETIN OF ECONOMIC RESEARCH, Issue 2 2003. Terence C. Mills
Abstract: We estimate a model that incorporates two key features of business cycles, comovement among economic variables and switching between regimes of boom and slump, using quarterly UK data for the last four decades. A common factor, interpreted as a composite indicator of coincident variables, and estimates of turning points from one regime to the other are extracted from the data using the Kalman filter and maximum likelihood estimation. Both comovement and regime switching are found to be important features of the UK business cycle. The composite indicator produces a sensible representation of the cycle, and the estimated turning points agree fairly well with independently determined chronologies. These estimates are sharper than those produced by a univariate Markov switching model of GDP alone. A fairly typical stylized fact of business cycles is confirmed by this model: recessions are steeper and shorter than recoveries.

How successful are dynamic factor models at forecasting output and inflation? A meta-analytic approach
JOURNAL OF FORECASTING, Issue 3 2008
Abstract: This paper uses a meta-analysis to survey existing factor forecast applications for output and inflation and assesses what causes large factor models to perform better or worse at forecasting than other models. Our results suggest that factor models tend to outperform small models, whereas factor forecasts are slightly worse than pooled forecasts. Factor models deliver better predictions for US variables than for UK variables, for US output than for euro-area output, and for euro-area inflation than for US inflation. The size of the dataset from which factors are extracted positively affects relative factor forecast performance, whereas pre-selecting the variables included in the dataset did not improve factor forecasts in the past. Finally, the factor estimation technique may matter as well. Copyright © 2008 John Wiley & Sons, Ltd.
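To make the object of these forecast comparisons concrete, the sketch below extracts principal-component factors from a simulated standardized panel and plugs them into an h-step-ahead "diffusion index" regression, the basic recipe behind most of the factor forecasts surveyed above. It is a minimal illustration on simulated data; all variable names and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated balanced panel: T observations on N standardized indicators
# driven by r common factors plus idiosyncratic noise (illustrative only).
T, N, r, h = 200, 80, 3, 4
factors_true = rng.standard_normal((T, r))
loadings = rng.standard_normal((N, r))
panel = factors_true @ loadings.T + rng.standard_normal((T, N))

# Target series to forecast h steps ahead (here driven by the first factor).
y = factors_true[:, 0] + 0.5 * rng.standard_normal(T)

# 1. Standardize the panel and extract factors by principal components (SVD).
X = (panel - panel.mean(0)) / panel.std(0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
F_hat = U[:, :r] * S[:r]            # estimated factors, T x r

# 2. Diffusion-index regression: y_{t+h} on current factors and y_t.
Z = np.column_stack([np.ones(T - h), F_hat[:-h], y[:-h]])
beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)

# 3. Forecast of y at T + h from the last available observation.
z_T = np.concatenate([[1.0], F_hat[-1], [y[-1]]])
print("h-step-ahead factor forecast:", z_T @ beta)
```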
Estimating the Technology of Cognitive and Noncognitive Skill Formation
ECONOMETRICA, Issue 3 2010. Flavio Cunha
Abstract: This paper formulates and estimates multistage production functions for children's cognitive and noncognitive skills. Skills are determined by parental environments and investments at different stages of childhood. We estimate the elasticity of substitution between investments in one period and stocks of skills in that period to assess the benefits of early investment in children compared to later remediation. We establish nonparametric identification of a general class of production technologies based on nonlinear factor models with endogenous inputs. A by-product of our approach is a framework for evaluating childhood and schooling interventions that does not rely on arbitrarily scaled test scores as outputs and recognizes the differential effects of the same bundle of skills in different tasks. Using the estimated technology, we determine optimal targeting of interventions to children with different parental and personal birth endowments. Substitutability decreases in later stages of the life cycle in the production of cognitive skills; it is roughly constant across stages of the life cycle in the production of noncognitive skills. This finding has important implications for the design of policies that target the disadvantaged: for most configurations of disadvantage it is optimal to invest relatively more in the early stages of childhood than in later stages.

Bayesian analysis of dynamic factor models: an application to air pollution and mortality in São Paulo, Brazil
ENVIRONMETRICS, Issue 6 2008. T. Sáfadi
Abstract: The Bayesian estimation of a dynamic factor model in which the factors follow a multivariate autoregressive process is presented. We derive the posterior distributions for the parameters and the factors and use Monte Carlo methods to compute them. The model is applied to study the association between air pollution and mortality in the city of São Paulo, Brazil. Statistical analysis was performed through a Bayesian analysis of a dynamic factor model. The series considered were minimum temperature, relative humidity, the air pollutants PM10 and CO, mortality from circulatory disease and mortality from respiratory disease. We found a strong association between the air pollutant PM10, humidity and mortality from respiratory disease for the city of São Paulo. Copyright © 2007 John Wiley & Sons, Ltd.

Can Asset Pricing Models Price Idiosyncratic Risk in U.K. Stock Returns?
FINANCIAL REVIEW, Issue 4 2007. Jonathan Fletcher. JEL: G12
Abstract: I examine how well different linear factor models and consumption-based asset pricing models price idiosyncratic risk in U.K. stock returns. Correctly pricing idiosyncratic risk is a significant challenge for many of the models I consider. For some consumption-based models, there is a clear tradeoff in performance between correctly pricing systematic risk and idiosyncratic risk. Linear factor models do a better job in most cases of pricing systematic risk than consumption-based models, but the reverse is true for idiosyncratic risk.
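The two Fletcher abstracts evaluate linear factor models through the fitted stochastic discount factor and its pricing errors. The sketch below shows one simple version of that idea: a linear SDF estimated by first-stage GMM with identity weighting from the moment conditions E[m_t R^e_t] = 0, applied to simulated excess returns. It does not impose the no-arbitrage (positivity) restriction discussed above, and the function and data are illustrative rather than a reconstruction of the papers' procedure.

```python
import numpy as np

def linear_sdf_gmm(excess_returns, factors):
    """First-stage GMM for a linear SDF m_t = 1 - (f_t - mean(f))' b,
    using the moment conditions E[m_t * R^e_t] = 0 with identity weighting."""
    T, _ = excess_returns.shape
    f = factors - factors.mean(axis=0)
    mean_re = excess_returns.mean(axis=0)        # sample E[R^e], length N
    D = excess_returns.T @ f / T                 # sample E[R^e (f - mean)'], N x K
    b = np.linalg.solve(D.T @ D, D.T @ mean_re)  # minimizes the moment norm
    m = 1.0 - f @ b                              # fitted SDF series
    pricing_errors = mean_re - D @ b             # sample E[m_t R^e_t]
    return b, m, pricing_errors

# Illustrative use with simulated factors and excess returns.
rng = np.random.default_rng(1)
T, N, K = 600, 25, 3
factors = rng.standard_normal((T, K)) * 0.05
betas = rng.standard_normal((N, K))
excess_returns = factors @ betas.T + 0.1 * rng.standard_normal((T, N))

b, m, alphas = linear_sdf_gmm(excess_returns, factors)
print("SDF coefficients:", b.round(3))
print("share of negative fitted SDF values:", np.mean(m < 0))
```

The share of negative fitted SDF values printed at the end is the quantity that the no-arbitrage restriction in the first abstract is designed to eliminate.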
Correlation optimized warping and dynamic time warping as preprocessing methods for chromatographic data
JOURNAL OF CHEMOMETRICS, Issue 5 2004. Giorgio Tomasi
Abstract: Two different algorithms for time alignment as a preprocessing step in linear factor models are studied. Correlation optimized warping and dynamic time warping are both presented in the literature as methods that can eliminate shift-related artifacts from measurements by correcting a sample vector towards a reference. In this study both the theoretical properties and the practical implications of using signal warping as preprocessing for chromatographic data are investigated. The connection between the two algorithms is also discussed. The findings are illustrated by means of a case study: principal component analysis of a real data set, containing manifest retention-time artifacts, of extracts from coffee samples stored under different packaging conditions for varying storage times. We conclude that, for the data presented here, dynamic time warping with rigid slope constraints and correlation optimized warping are superior to unconstrained dynamic time warping; both considerably simplify interpretation of the factor model results. Unconstrained dynamic time warping was found to be too flexible for this chromatographic data set, resulting in overcompensation of the observed shifts and suggesting the unsuitability of this preprocessing method for this type of signal. Copyright © 2004 John Wiley & Sons, Ltd.
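As an illustration of the warping step described above, the following is a minimal unconstrained dynamic time warping routine that aligns a shifted peak to a reference by dynamic programming. Correlation optimized warping and the slope constraints that the study found preferable are not implemented here; the arrays and names are illustrative.

```python
import numpy as np

def dtw_path(sample, reference):
    """Unconstrained dynamic time warping: cumulative-cost matrix plus
    backtracking, returning aligned index pairs (sample_i, reference_j)."""
    n, m = len(sample), len(reference)
    cost = np.abs(sample[:, None] - reference[None, :])      # local distances
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j],      # insertion
                                               D[i, j - 1],      # deletion
                                               D[i - 1, j - 1])  # match
    # Backtrack from (n, m) to (1, 1) along the cheapest predecessors.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Illustrative use: a Gaussian peak with a retention-time shift aligned to its reference.
t = np.linspace(0, 10, 200)
reference = np.exp(-(t - 5.0) ** 2)
sample = np.exp(-(t - 5.7) ** 2)
path = dtw_path(sample, reference)
print("first and last aligned index pairs:", path[0], path[-1])
```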
Short-term forecasting of GDP using large datasets: a pseudo real-time forecast evaluation exercise
JOURNAL OF FORECASTING, Issue 7 2009. G. Rünstler
Abstract: This paper performs a large-scale forecast evaluation exercise to assess the performance of different models for the short-term forecasting of GDP, resorting to large datasets from ten European countries. Several versions of factor models are considered and cross-country evidence is provided. The forecasting exercise is performed in a simulated real-time context, which takes account of publication lags in the individual series. In general, we find that factor models perform best and that models exploiting monthly information outperform models that use purely quarterly data. However, the improvement over the simpler, quarterly models remains contained. Copyright © 2009 John Wiley & Sons, Ltd.

Forecasting interest rate swap spreads using domestic and international risk factors: evidence from linear and non-linear models
JOURNAL OF FORECASTING, Issue 8 2007. Ilias Lekkos
Abstract: This paper explores the ability of factor models to predict the dynamics of US and UK interest rate swap spreads within a linear and a non-linear framework. We reject linearity for the US and UK swap spreads in favour of a regime-switching smooth transition vector autoregressive (STVAR) model, where the switching between regimes is controlled by the slope of the US term structure of interest rates. We compare the ability of the STVAR model to predict swap spreads with that of a non-linear nearest-neighbours model as well as with linear AR and VAR models. We find some evidence that the non-linear models predict better than the linear ones. At short horizons, the nearest-neighbours (NN) model predicts US swap spreads better than the STVAR model in periods of increasing risk, and UK swap spreads in periods of decreasing risk. At long horizons, the STVAR model increases its forecasting ability over the linear models, whereas the NN model does not outperform the rest of the models. Copyright © 2007 John Wiley & Sons, Ltd.
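In smooth transition models of the kind used above, the weight placed on each regime is usually a logistic function of the transition variable, here the slope of the US term structure. A minimal sketch of that mechanism (single-equation blending of two regime forecasts rather than the full STVAR system, with made-up parameter values):

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic transition function G(s; gamma, c) in (0, 1):
    gamma sets how abrupt the regime switch is, c is the threshold."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

# Illustrative regime blending: a smooth-transition forecast is a weighted
# average of two regime-specific predictions, with the weight driven here
# by the slope of the term structure (the transition variable above).
forecast_regime_low = 0.10    # e.g. prediction when the yield curve is flat or inverted
forecast_regime_high = 0.35   # e.g. prediction when the yield curve is steep
for slope in (-1.0, 0.0, 1.0):
    G = logistic_transition(slope, gamma=3.0, c=0.0)
    blended = (1.0 - G) * forecast_regime_low + G * forecast_regime_high
    print(f"slope={slope:+.1f}  weight on steep-slope regime={G:.2f}  forecast={blended:.3f}")
```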
Forecasting German GDP using alternative factor models based on large datasets
JOURNAL OF FORECASTING, Issue 4 2007. Christian Schumacher
Abstract: This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency-domain methods; the third model is based on subspace algorithms for state-space models. Out-of-sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean-squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd.

Forecasting euro area inflation using dynamic factor measures of underlying inflation
JOURNAL OF FORECASTING, Issue 7 2005. Gonzalo Camba-Mendez
Abstract: Standard measures of prices are often contaminated by transitory shocks. This has prompted economists to suggest the use of measures of underlying inflation to formulate monetary policy and assist in forecasting observed inflation. Recent work has concentrated on modelling large data sets using factor models. In this paper we estimate factors from data sets of disaggregated price indices for European countries. We then assess the forecasting ability of these factor estimates against other measures of underlying inflation built from more traditional methods. The power to forecast headline inflation over horizons of 12 to 18 months is adopted as the criterion to assess forecasting. Empirical results for the five largest euro area countries, as well as for the euro area itself, are presented. Copyright © 2005 John Wiley & Sons, Ltd.

Factor forecasts for the UK
JOURNAL OF FORECASTING, Issue 4 2005. Michael J. Artis
Abstract: Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time-series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements upon standard time-series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.

A narrative review of the Beck Depression Inventory (BDI) and implications for its use in an alcohol-dependent population
JOURNAL OF PSYCHIATRIC & MENTAL HEALTH NURSING, Issue 1 2010. A. McPherson, RN, BA(Hons), BSc
Accessible summary:
- The findings from the present study reveal that the Beck Depression Inventory (BDI) is a reliable and valid instrument for measuring depression in a variety of populations. This realization should enable nurses and other health professionals to utilize the tool with added confidence and assurance.
- The main finding was that the BDI would probably be a reliable and valid screening tool in an alcohol-dependent population. This conclusion appears to echo the relationship that alcohol consumption generally has with depression. This finding is important to those practitioners using the BDI in this population in that it provides further evidence to enhance their practical experience.
Abstract: A psychometric evaluation of the Beck Depression Inventory (BDI) was carried out on contemporary studies to ascertain its suitability for use in an alcohol-dependent population. Three criteria were used for this: factor analysis, test-retest reliability and internal consistency reliability. Factor analysis revealed that its structure is consistent with either two- or three-factor models, depending on the population. Test-retest results concluded that the correlation coefficient remained above the recommended threshold, and internal consistency reliability highlighted alpha coefficients consistently above suggested scores, leading to the conclusion that the BDI is probably an effective screening tool in an alcohol-dependent population.

A parametric estimation method for dynamic factor models of large dimensions
JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2009. George Kapetanios. JEL: C32; C51; E52
Abstract: The estimation of dynamic factor models for large sets of variables has attracted considerable attention recently, because of the increased availability of large data sets. In this article we propose a new parametric methodology for estimating factors from large data sets based on state-space models and discuss its theoretical properties. In particular, we show that it is possible to estimate the factor space consistently. We also conduct a set of simulation experiments that show that our approach compares well with existing alternatives.
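Several of the abstracts above (the Markov switching common factor model, the coincident index work below, and this parametric state-space approach) treat factor extraction as a filtering problem. The sketch below runs a Kalman filter for a single AR(1) common factor observed through N noisy indicators, with the system matrices taken as known; in practice they would be estimated, for example by maximum likelihood or the EM algorithm. All names and parameter values are illustrative.

```python
import numpy as np

def kalman_filter_factor(panel, loadings, phi, sigma_eta, sigma_eps):
    """Filtered estimate of a single AR(1) common factor f_t:
        f_t = phi * f_{t-1} + eta_t,   eta_t ~ N(0, sigma_eta^2)
        x_t = loadings * f_t + eps_t,  eps_t ~ N(0, sigma_eps^2 * I)
    The system matrices are treated as known for the purpose of the sketch."""
    T, N = panel.shape
    R = sigma_eps ** 2 * np.eye(N)
    f_filt = np.zeros(T)
    f_pred, P_pred = 0.0, sigma_eta ** 2 / (1.0 - phi ** 2)   # stationary start
    for t in range(T):
        # Update step: combine the prediction with the cross-section x_t.
        S = P_pred * np.outer(loadings, loadings) + R
        K = P_pred * np.linalg.solve(S, loadings)             # Kalman gain
        innovation = panel[t] - loadings * f_pred
        f_filt[t] = f_pred + K @ innovation
        P_filt = P_pred * (1.0 - K @ loadings)
        # Prediction step for t + 1.
        f_pred, P_pred = phi * f_filt[t], phi ** 2 * P_filt + sigma_eta ** 2
    return f_filt

# Illustrative use with a simulated one-factor panel.
rng = np.random.default_rng(2)
T, N, phi = 300, 20, 0.8
f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.standard_normal()
loadings = rng.uniform(0.5, 1.5, N)
panel = np.outer(f, loadings) + rng.standard_normal((T, N))

f_hat = kalman_filter_factor(panel, loadings, phi, sigma_eta=1.0, sigma_eps=1.0)
print("correlation of filtered factor with the true factor:",
      np.corrcoef(f_hat, f)[0, 1].round(3))
```

The smoothed (rather than filtered) estimate of the state, obtained by an additional backward pass, is what the coincident-index abstract below uses as its monthly GDP estimate.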
A quantitative genetic analysis of intermediate asthma phenotypes
ALLERGY, Issue 3 2009. S. F. Thomsen
Aim: To study the relative contribution of genetic and environmental factors to the correlation between exhaled nitric oxide (FeNO), airway responsiveness, airway obstruction, and serum total immunoglobulin E (IgE).
Methods: Within a sampling frame of 21,162 twin subjects, 20-49 years of age, from the Danish Twin Registry, a total of 575 subjects (256 intact pairs and 63 single twins) who either themselves and/or their co-twins reported a history of asthma in a nationwide questionnaire survey were clinically examined. Traits were measured using standard techniques. Latent factor models were fitted to the observed data using maximum likelihood methods.
Results: Additive genetic factors explained 67% of the variation in FeNO, 43% in airway responsiveness, 22% in airway obstruction, and 81% in serum total IgE. In general, traits had genetically and environmentally distinct variance structures. The most substantial genetic similarity was observed between FeNO and serum total IgE (genetic correlation rA = 0.37), whereas the strongest environmental resemblance was observed between airway responsiveness and airway obstruction (specific environmental correlation rE = -0.46) and between FeNO and airway responsiveness (rE = 0.34).
Conclusions: Asthma is a complex disease characterized by a set of etiologically heterogeneous biomarkers, which likely constitute diverse targets of intervention.

A Coincident Index, Common Factors, and Monthly Real GDP
OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 1 2010. Roberto S. Mariano
Abstract: The Stock-Watson coincident index and its subsequent extensions assume a static linear one-factor model for the component indicators. This restrictive assumption is unnecessary if one defines a coincident index as an estimate of monthly real gross domestic product (GDP). This paper estimates Gaussian vector autoregression (VAR) and factor models for latent monthly real GDP and other coincident indicators using the observable mixed-frequency series. For maximum likelihood estimation of a VAR model, the expectation-maximization (EM) algorithm helps in finding a good starting value for a quasi-Newton method. The smoothed estimate of latent monthly real GDP is a natural extension of the Stock-Watson coincident index.

Multifactor implied volatility functions for HJM models
THE JOURNAL OF FUTURES MARKETS, Issue 8 2006. I-Doun Kuo
Abstract: This study evaluates two one-factor, two two-factor, and two three-factor implied volatility functions in the HJM class, using eurodollar futures options across both strike prices and maturities. The primary contributions of this article are (a) to propose and test three implied volatility multifactor functions not considered by K. I. Amin and A. J. Morton (1994), (b) to evaluate models using the AIC criterion as well as other standard criteria neglected by S. Y. M. Zeto (2002), and (c) to find that multifactor models incorporating exponentially decaying implied volatility functions generally outperform other models in fitting and prediction, in sharp contrast to K. I. Amin and A. J. Morton, who find the constant-volatility model superior. Correctly specified and calibrated simple constant and square-root factor models may be superior to inappropriate multifactor models in option trading and hedging strategies. © 2006 Wiley Periodicals, Inc. Jrl Fut Mark 26:809-833, 2006
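As a rough illustration of the exponentially decaying volatility functions referred to above, the sketch below fits sigma(tau) = sigma0 * exp(-lambda * tau) to a made-up set of implied volatilities across option maturities by least squares in logs. This is only one of the functional forms such studies compare, and the numbers are invented for the example.

```python
import numpy as np

# Illustrative implied volatilities (annualized) across option maturities (years).
maturities = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0])
implied_vols = np.array([0.22, 0.20, 0.17, 0.14, 0.12, 0.09])

# One-factor exponentially decaying specification: sigma(tau) = sigma0 * exp(-lam * tau).
# Taking logs makes it linear: log sigma = log sigma0 - lam * tau.
X = np.column_stack([np.ones_like(maturities), -maturities])
coef, *_ = np.linalg.lstsq(X, np.log(implied_vols), rcond=None)
sigma0, lam = np.exp(coef[0]), coef[1]

fitted = sigma0 * np.exp(-lam * maturities)
print(f"sigma0 = {sigma0:.3f}, decay rate lambda = {lam:.3f}")
print("fitted vols:", fitted.round(3))
```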
Time varying and dynamic models for default risk in consumer loans
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 2 2010. Jonathan Crook
Summary: We review the incorporation of time-varying variables into models of the risk of consumer default. Lenders typically have data which are of a panel format. This allows the inclusion of time-varying covariates in models of account-level default by including them in survival models, panel models or 'correction factor' models. The choice depends on the aim of the model and the assumptions that can plausibly be made. At the level of the portfolio, Merton-type models have incorporated macroeconomic and latent variables in mixed (factor) models and Kalman filter models, whereas reduced-form approaches include Markov chains and stochastic intensity models. The latter models have mainly been applied to corporate defaults, and considerable scope remains for application to consumer loans.
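One standard way to bring time-varying covariates into account-level default models, as the survey above describes, is a discrete-time hazard: stack one row per account per period until default or censoring, and regress the default indicator on account age and current macroeconomic conditions. The sketch below does this with a small numpy-only logistic regression on simulated data; the covariates, parameter values and helper names are illustrative.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Logistic regression by Newton-Raphson (a discrete-time hazard model)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        W = p * (1.0 - p)
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulated account-period panel: one row per open account per quarter, with a
# default indicator and a time-varying macro covariate (an unemployment rate).
rng = np.random.default_rng(3)
n_accounts, n_periods = 2000, 12
unemployment = 5.0 + np.cumsum(rng.normal(0, 0.2, n_periods))  # common macro path

rows, outcomes = [], []
for _ in range(n_accounts):
    for t in range(n_periods):
        # True hazard rises with account age and with unemployment (illustrative).
        logit = -5.0 + 0.10 * t + 0.30 * (unemployment[t] - 5.0)
        default = rng.random() < 1.0 / (1.0 + np.exp(-logit))
        rows.append([1.0, t, unemployment[t]])
        outcomes.append(1.0 if default else 0.0)
        if default:
            break  # the account leaves the risk set after default

X, y = np.array(rows), np.array(outcomes)
beta = fit_logit(X, y)
print("intercept, age effect, unemployment effect:", beta.round(3))
```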