Financial Time Series (financial + time_series)

Selected Abstracts


HEAVY-TAILED-DISTRIBUTED THRESHOLD STOCHASTIC VOLATILITY MODELS IN FINANCIAL TIME SERIES

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2008
Cathy W. S. Chen
Summary To capture mean and variance asymmetries and time-varying volatility in financial time series, we generalize the threshold stochastic volatility (THSV) model and incorporate a heavy-tailed error distribution. Unlike existing stochastic volatility models, this model simultaneously accounts for uncertainty in the unobserved threshold value and in the time-delay parameter. Self-exciting and exogenous threshold variables are considered to investigate the impact of a number of market news variables on volatility changes. Adopting a Bayesian approach, we use Markov chain Monte Carlo methods to estimate all unknown parameters and latent variables. A simulation experiment demonstrates good estimation performance for reasonable sample sizes. In a study of two international financial market indices, we consider two variants of the generalized THSV model, with US market news as the threshold variable. Finally, we compare models using Bayesian forecasting in a value-at-risk (VaR) study. The results show that our proposed model can generate more accurate VaR forecasts than can standard models. [source]
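
To make the setup concrete, here is a minimal numpy sketch that simulates a two-regime, self-exciting threshold stochastic volatility process with standardized Student-t errors. All parameter names and values are illustrative assumptions; the sketch covers simulation only, not the Bayesian MCMC estimation the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_thsv(n=1000, r=0.0, d=1, nu=5,
                  mu=(0.0, 0.0), alpha=(-0.10, 0.05),
                  phi=(0.95, 0.90), sigma_eta=0.15):
    """Simulate a two-regime threshold SV model, self-exciting with
    delay d: regime j = 1 if y[t-d] > r, else 0, and
        h[t] = alpha[j] + phi[j] * h[t-1] + sigma_eta * eta[t]
        y[t] = mu[j] + exp(h[t] / 2) * eps[t],
    with eta ~ N(0, 1) and eps a unit-variance Student-t(nu)."""
    y, h = np.zeros(n), np.zeros(n)
    for t in range(d, n):
        j = int(y[t - d] > r)                          # regime indicator
        h[t] = alpha[j] + phi[j] * h[t - 1] + sigma_eta * rng.normal()
        eps = rng.standard_t(nu) * np.sqrt((nu - 2) / nu)
        y[t] = mu[j] + np.exp(h[t] / 2) * eps
    return y, h

y, h = simulate_thsv()
print(y[:5], h[:5])
```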


Bootstrapping Financial Time Series

JOURNAL OF ECONOMIC SURVEYS, Issue 3 2002
Esther Ruiz
It is well known that time series of returns are characterized by volatility clustering and excess kurtosis. Therefore, when modelling the dynamic behavior of returns, inference and prediction methods based on independence and/or Gaussianity assumptions may be inadequate. As bootstrap methods are not, in general, based on any particular assumption about the distribution of the data, they are well suited to the analysis of returns. This paper reviews the application of bootstrap procedures for inference and prediction with financial time series. For inference, bootstrap techniques have been applied to obtain the sampling distribution of statistics for testing, for example, autoregressive dynamics in the conditional mean and variance, unit roots in the mean, fractional integration in volatility, and the predictive ability of technical trading rules. Bootstrap procedures have also been used to estimate the distribution of returns, which is of interest, for example, for Value at Risk (VaR) models or for prediction purposes. Although the application of bootstrap techniques to the empirical analysis of financial time series is very broad, there are few analytical results on the statistical properties of these techniques when applied to heteroscedastic time series. Furthermore, there are quite a few papers in which the bootstrap procedures used are not adequate. [source]
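
As an illustration of the kind of resampling the survey reviews, the sketch below applies a moving-block bootstrap, which preserves short-range dependence such as volatility clustering inside each block, to the sampling distribution of a 1% VaR quantile. The block length and the toy GARCH-style return series are assumptions, not recommendations from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def block_bootstrap_var(returns, block_len=20, n_boot=2000, alpha=0.01):
    """Moving-block bootstrap of the alpha-quantile (VaR) of returns:
    resampling whole blocks keeps short-range dependence intact."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    n_blocks = -(-n // block_len)                      # ceiling division
    out = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        resample = np.concatenate([r[s:s + block_len] for s in starts])[:n]
        out[b] = np.quantile(resample, alpha)
    return out

# toy returns with GARCH(1,1)-style volatility clustering (assumed values)
n = 1500
e, h = np.zeros(n), np.full(n, 0.5)
for t in range(1, n):
    h[t] = 0.05 + 0.10 * e[t - 1] ** 2 + 0.85 * h[t - 1]
    e[t] = np.sqrt(h[t]) * rng.standard_normal()
boot = block_bootstrap_var(e)
print(boot.mean(), boot.std())
```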


Testing Models of Low-Frequency Variability

ECONOMETRICA, Issue 5 2008
Ulrich K. Müller
We develop a framework to assess how successfully standard time series models explain low-frequency variability of a data series. The low-frequency information is extracted by computing a finite number of weighted averages of the original data, where the weights are low-frequency trigonometric series. The properties of these weighted averages are then compared to the asymptotic implications of a number of common time series models. We apply the framework to twenty U.S. macroeconomic and financial time series using frequencies lower than the business cycle. [source]
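
A sketch of the extraction step, under the common assumption in this literature that the low-frequency weights are cosine functions; the number of weighted averages q and the simulated series are illustrative.

```python
import numpy as np

def low_frequency_averages(x, q=12):
    """Project a series onto the first q cosine weights: the weighted
    averages X_j = mean over t of sqrt(2)*cos(j*pi*(t - 1/2)/n) * x[t]
    capture variation at periods longer than roughly 2n/q observations."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = (np.arange(1, n + 1) - 0.5) / n
    return np.array([np.sqrt(2) * np.mean(np.cos(j * np.pi * s) * x)
                     for j in range(1, q + 1)])

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(480))   # e.g. 40 years of monthly data
print(low_frequency_averages(x, q=12))
```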


Modelling & Controlling Monetary and Economic Identities with Constrained State Space Models

INTERNATIONAL STATISTICAL REVIEW, Issue 2 2007
Gurupdesh S. Pandher
Summary The paper presents a method for modelling and controlling time series with identity structures. The approach is presented in the context of monetary targeting, where the monetary identity (e.g. reserve money equals net foreign assets plus domestic credit) is modelled using a constrained state space model and next-period changes in domestic credit (the policy variable) are estimated to reach the target level of reserve money. The constrained modelling ensures that aggregation and identity relations among items are dynamically satisfied during estimation, leading to more accurate forecasting and targeting. Applications to Germany, the UK and the USA show that the constrained state space model provides significant improvements in targeting and forecasting performance over the AR(1) benchmark and the unconstrained model. The reduction in the mean square error of targeting over AR(1) is in the range of 76-95% for the three countries, while the gain in targeting efficiency over unconstrained modelling is between 21% and 55%. Beyond monetary targeting, the method has wide application to the dynamic modelling and control of economic and financial time series with identity and aggregation constraints (e.g. balance of payments, national income, purchasing power parity, company balance sheets). [source]
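
The modelling itself runs through a constrained state space model and the Kalman filter; the sketch below shows only the accounting step implied by the monetary identity, inverting reserve money = net foreign assets + domestic credit to back out the policy move. The numbers are invented for illustration.

```python
def required_credit_change(rm_target, nfa_forecast, dc_current):
    """Invert the monetary identity RM = NFA + DC for the policy move:
    the next-period change in domestic credit needed to hit the
    reserve-money target, given a forecast of net foreign assets."""
    dc_needed = rm_target - nfa_forecast
    return dc_needed - dc_current

# toy numbers (illustrative only)
print(required_credit_change(rm_target=520.0, nfa_forecast=180.0,
                             dc_current=330.0))
# -> +10.0: domestic credit must rise by 10 to reach the target
```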


Asymmetric power distribution: Theory and applications to risk measurement

JOURNAL OF APPLIED ECONOMETRICS, Issue 5 2007
Ivana Komunjer
Theoretical literature in finance has shown that the risk of financial time series can be well quantified by their expected shortfall, also known as the tail value-at-risk. In this paper, I construct a parametric estimator for the expected shortfall based on a flexible family of densities, called the asymmetric power distribution (APD). The APD family extends the generalized power distribution to cases where the data exhibits asymmetry. The first contribution of the paper is to provide a detailed description of the properties of an APD random variable, such as its quantiles and expected shortfall. The second contribution of the paper is to derive the asymptotic distribution of the APD maximum likelihood estimator (MLE) and construct a consistent estimator for its asymptotic covariance matrix. The latter is based on the APD score whose analytic expression is also provided. A small Monte Carlo experiment examines the small sample properties of the MLE and the empirical coverage of its confidence intervals. An empirical application to four daily financial market series reveals that returns tend to be asymmetric, with innovations which cannot be modeled by either Laplace (double-exponential) or Gaussian distribution, even if we allow the latter to be asymmetric. In an out-of-sample exercise, I compare the performances of the expected shortfall forecasts based on the APD-GARCH, Skew-t-GARCH and GPD-EGARCH models. While the GPD-EGARCH 1% expected shortfall forecasts seem to outperform the competitors, all three models perform equally well at forecasting the 5% and 10% expected shortfall. Copyright © 2007 John Wiley & Sons, Ltd. [source]
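
For contrast with the paper's parametric APD-based estimator, here is a minimal nonparametric sketch of the expected shortfall, the average return in the worst alpha tail; the asymmetric heavy-tailed toy returns are an assumption.

```python
import numpy as np

def expected_shortfall(returns, alpha=0.01):
    """Empirical alpha-level expected shortfall: the average return in
    the worst alpha tail (a nonparametric analogue of a parametric
    tail value-at-risk estimator)."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.floor(alpha * len(r))))
    return r[:k].mean()

rng = np.random.default_rng(3)
# asymmetric heavy-tailed toy returns: left tail fatter than right
r = np.where(rng.random(10000) < 0.5,
             -np.abs(rng.standard_t(4, 10000)) * 1.2,
              np.abs(rng.standard_t(6, 10000)))
print(expected_shortfall(r, alpha=0.01))
```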


Some Recent Developments in Futures Hedging

JOURNAL OF ECONOMIC SURVEYS, Issue 3 2002
Donald Lien
The use of futures contracts as a hedging instrument has been the focus of much research. At the theoretical level, an optimal hedge strategy is traditionally based on the expected-utility maximization paradigm. A simplification of this paradigm leads to the minimum-variance criterion. Although this paradigm is quite well accepted, alternative approaches have been sought. At the empirical level, research on futures hedging has benefited from the recent developments in the econometrics literature. Much research has been done on improving the estimation of the optimal hedge ratio. As more is known about the statistical properties of financial time series, more sophisticated estimation methods are proposed. In this survey we review some recent developments in futures hedging. We delineate the theoretical underpinning of various methods and discuss the econometric implementation of the methods. [source]
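
The minimum-variance criterion mentioned above leads to the classic hedge ratio h* = Cov(dS, dF) / Var(dF), i.e. the OLS slope of spot price changes on futures price changes. A short sketch on simulated data with a true ratio of 0.8:

```python
import numpy as np

def min_variance_hedge_ratio(spot_changes, futures_changes):
    """Minimum-variance hedge ratio h* = Cov(dS, dF) / Var(dF):
    the number of futures contracts per unit of spot exposure that
    minimizes the variance of the hedged position."""
    ds = np.asarray(spot_changes, dtype=float)
    df = np.asarray(futures_changes, dtype=float)
    return np.cov(ds, df, ddof=1)[0, 1] / np.var(df, ddof=1)

rng = np.random.default_rng(4)
df = rng.standard_normal(500)
ds = 0.8 * df + 0.3 * rng.standard_normal(500)   # true ratio 0.8
print(min_variance_hedge_ratio(ds, df))          # approximately 0.8
```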


Volatility forecasting with double Markov switching GARCH models

JOURNAL OF FORECASTING, Issue 8 2009
Cathy W. S. Chen
Abstract This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat-tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first-order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value-at-risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd. [source]
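
A minimal simulation sketch of a two-state Markov switching GARCH(1,1) with standardized Student-t errors, in which both the mean and the volatility recursion switch with the latent first-order Markov state; parameter values are assumptions, and the paper's Bayesian MCMC estimation is not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_ms_garch(n=1500, p_stay=(0.98, 0.95), nu=6,
                      mu=(0.05, -0.10), omega=(0.02, 0.20),
                      a=(0.05, 0.10), b=(0.90, 0.80)):
    """Two-state Markov switching GARCH(1,1) with Student-t errors:
    the mean mu[s], and the volatility parameters omega, a, b, all
    switch with the latent state s[t]."""
    s = 0
    h = omega[0] / (1 - a[0] - b[0])   # start at regime-0 long-run variance
    y = np.empty(n)
    e_prev = 0.0
    for t in range(n):
        if rng.random() > p_stay[s]:   # leave the current state
            s = 1 - s
        h = omega[s] + a[s] * e_prev ** 2 + b[s] * h
        eps = rng.standard_t(nu) * np.sqrt((nu - 2) / nu)
        e_prev = np.sqrt(h) * eps
        y[t] = mu[s] + e_prev
    return y

print(simulate_ms_garch()[:5])
```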


Long-memory forecasting of US monetary indices

JOURNAL OF FORECASTING, Issue 4 2006
John Barkoulas
Abstract Several studies have tested for long-range dependence in macroeconomic and financial time series, but very few have assessed the usefulness of long-memory models as forecast-generating mechanisms. This study tests for fractional differencing in US monetary indices (simple sum and Divisia) and compares out-of-sample fractional forecasts to benchmark forecasts. The long-memory parameter is estimated using Robinson's Gaussian semi-parametric and multivariate log-periodogram methods. The evidence amply suggests that the monetary series possess a fractional order between one and two. Fractional out-of-sample forecasts are consistently more accurate (with the exception of the M3 series) than benchmark autoregressive forecasts, but the forecasting gains are not generally statistically significant. In terms of forecast encompassing, the fractional model encompasses the autoregressive model for the Divisia series, but neither model encompasses the other for the simple sum series. Copyright © 2006 John Wiley & Sons, Ltd. [source]
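
A sketch of the univariate Geweke and Porter-Hudak (GPH) log-periodogram regression, a close relative of the estimators used in the paper rather than the multivariate version. Since the series are found to have a fractional order between one and two, the sketch estimates d on first differences and adds one; the bandwidth choice is an assumption.

```python
import numpy as np

def gph_estimate(x, bandwidth_power=0.5):
    """Log-periodogram regression estimate of the fractional order d:
    regress log I(w_j) on -2*log(2*sin(w_j/2)) over the first
    m = n**bandwidth_power Fourier frequencies; the slope is d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** bandwidth_power)
    w = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - x.mean())
    I = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)  # periodogram
    X = -2 * np.log(2 * np.sin(w / 2))
    return np.polyfit(X, np.log(I), 1)[0]

rng = np.random.default_rng(6)
x = np.cumsum(rng.standard_normal(2000))  # random walk: total order near 1
dx = np.diff(x)                           # differences: d near 0
print(1 + gph_estimate(dx))               # so 1 + d-hat is near 1
```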


A Bayesian threshold nonlinearity test for financial time series

JOURNAL OF FORECASTING, Issue 1 2005
Mike K. P. So
Abstract We propose in this paper a threshold nonlinearity test for financial time series. Our approach adopts reversible-jump Markov chain Monte Carlo methods to calculate the posterior probabilities of two competing models, namely the GARCH and threshold GARCH models. Posterior evidence favouring the threshold GARCH model indicates threshold nonlinearity or volatility asymmetry. Simulation experiments demonstrate that our method works very well in distinguishing GARCH and threshold GARCH models. Sensitivity analysis shows that our method is robust to misspecification in the error distribution. In the application to 10 market indexes, clear evidence of threshold nonlinearity is discovered, thus supporting volatility asymmetry. Copyright © 2005 John Wiley & Sons, Ltd. [source]


A fractal forecasting model for financial time series

JOURNAL OF FORECASTING, Issue 8 2004
Gordon R. Richards
Abstract Financial market time series exhibit high degrees of non-linear variability, and frequently have fractal properties. When the fractal dimension of a time series is non-integer, this is associated with two features: (1) inhomogeneity, that is, extreme fluctuations at irregular intervals, and (2) scaling symmetries, that is, proportionality relationships between fluctuations over different separation distances. In multivariate systems such as financial markets, fractality is stochastic rather than deterministic, and generally originates as a result of multiplicative interactions. Volatility diffusion models with multiple stochastic factors can generate fractal structures. In some cases, such as exchange rates, the underlying structural equation also gives rise to fractality. Fractal principles can be used to develop forecasting algorithms. The forecasting method that yields the best results here is the state transition-fitted residual scale ratio (ST-FRSR) model. A state transition model is used to predict the conditional probability of extreme events. Ratios of rates of change at proximate separation distances are used to parameterize the scaling symmetries. Forecasting experiments are run using intraday exchange rate futures contracts measured at 15-minute intervals. The overall forecast error is reduced on average by up to 7% and in one instance by nearly a quarter. However, the forecast error during the outlying events is reduced by 39% to 57%. The ST-FRSR reduces the predictive error primarily by capturing extreme fluctuations more accurately. Copyright © 2004 John Wiley & Sons, Ltd. [source]
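
One way to quantify the scaling symmetries described above is the slope of a log-log fit of Var(x[t+lag] - x[t]) against the lag; for the graph of a series the fractal dimension relates to the resulting exponent H as D = 2 - H. A sketch, with the lag grid an assumption:

```python
import numpy as np

def hurst_from_scaling(x, lags=(1, 2, 4, 8, 16, 32)):
    """Estimate a scaling (Hurst) exponent H from the proportionality
    Var(x[t+lag] - x[t]) ~ lag**(2H): half the slope of a log-log fit
    of increment variances against separation distance."""
    x = np.asarray(x, dtype=float)
    v = [np.var(x[lag:] - x[:-lag]) for lag in lags]
    return np.polyfit(np.log(lags), np.log(v), 1)[0] / 2

rng = np.random.default_rng(7)
bm = np.cumsum(rng.standard_normal(10000))
print(hurst_from_scaling(bm))   # near 0.5 for ordinary Brownian motion
```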


Structural learning with time-varying components: tracking the cross-section of financial time series

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2005
Makram Talih
Summary When modelling multivariate financial data, the problem of structural learning is compounded by the fact that the covariance structure changes with time. Previous work has focused on modelling those changes by using multivariate stochastic volatility models. We present an alternative to these models that focuses instead on the latent graphical structure that is related to the precision matrix. We develop a graphical model for sequences of Gaussian random vectors when changes in the underlying graph occur at random times, and a new block of data is created with the addition or deletion of an edge. We show how a Bayesian hierarchical model incorporates both the uncertainty about that graph and the time variation thereof. [source]
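
In a Gaussian graphical model, the graph is read off the precision matrix K: variables i and j share an edge exactly when the partial correlation -K[i,j]/sqrt(K[i,i]*K[j,j]) is non-zero. A static numpy sketch of that mapping (the time-varying Bayesian machinery of the paper is not shown, and the data are simulated):

```python
import numpy as np

def partial_correlations(precision):
    """Convert a precision (inverse covariance) matrix K into partial
    correlations; in a Gaussian graphical model a zero partial
    correlation means no edge between the two variables."""
    d = np.sqrt(np.diag(precision))
    pc = -precision / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

rng = np.random.default_rng(8)
X = rng.standard_normal((500, 4))
X[:, 1] += 0.8 * X[:, 0]                 # induce one edge: 0 -- 1
K = np.linalg.inv(np.cov(X, rowvar=False))
print(np.round(partial_correlations(K), 2))
```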


Return Dynamics when Persistence is Unobservable

MATHEMATICAL FINANCE, Issue 4 2001
Timothy C. Johnson
This paper proposes a new theory of the sources of time-varying second (and higher) moments in financial time series. The key idea is that fully rational agents must infer the stochastic degree of persistence of fundamental shocks. Endogenous changes in their uncertainty determine the evolution of conditional moments of returns. The model accounts for the principal observed features of volatility dynamics and implies some new ones. Most strikingly, it implies a relationship between ex post trends, or momentum, and changes in volatility. [source]


Robust modelling of DTARCH models

THE ECONOMETRICS JOURNAL, Issue 2 2005
Yer Van Hui
Summary Autoregressive conditional heteroscedastic (ARCH) models and their extensions are widely used in modelling volatility in financial time series. One of the variants, the double-threshold autoregressive conditional heteroscedastic (DTARCH) model, has been proposed to model conditional means and conditional variances that are piecewise linear. The DTARCH model is also useful for modelling conditional heteroscedasticity with nonlinear structures such as asymmetric cycles, jump resonance and amplitude-frequency dependence. Since asset returns often display heavy tails and outliers, it is worth studying robust DTARCH modelling without a specific distributional assumption. This paper studies DTARCH structures for the conditional scale instead of the conditional variance. We examine L1-estimation of the DTARCH model and derive limiting distributions for the proposed estimators. A robust portmanteau statistic based on the L1-norm fit is constructed to test model adequacy. This approach captures various nonlinear phenomena and stylized facts with desirable robustness. Simulations show that the L1-estimators are robust against innovation distributions and accurate for moderate sample sizes, and that the proposed test is not only robust against innovation distributions but also powerful in discriminating between delay parameters and ARCH models. It is noted that the quasi-likelihood modelling approach used in ARCH models is inappropriate for DTARCH models in the presence of outliers and heavy-tailed innovations. [source]
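
A crude grid-search sketch of L1 (least absolute deviations) estimation for a two-regime threshold AR(1) with delay 1. It covers only the conditional mean, whereas the DTARCH model also puts a threshold structure on the conditional scale; the grids and the simulated heavy-tailed data are assumptions.

```python
import numpy as np

def lad_tar1(y, r_grid, phi_grid=np.linspace(-0.95, 0.95, 39)):
    """Grid-search L1 fit of a two-regime TAR(1) with delay 1:
    y[t] = phi1*y[t-1] if y[t-1] <= r, else phi2*y[t-1], choosing
    (r, phi1, phi2) to minimize the sum of absolute residuals,
    which is robust to heavy-tailed innovations and outliers."""
    y = np.asarray(y, dtype=float)
    yt, yl = y[1:], y[:-1]
    best_loss, best_fit = np.inf, None
    for r in r_grid:
        lo, hi = yl <= r, yl > r
        # LAD slope in each regime by a coarse grid (a sketch, not an optimizer)
        c1 = min(phi_grid, key=lambda p: np.abs(yt[lo] - p * yl[lo]).sum())
        c2 = min(phi_grid, key=lambda p: np.abs(yt[hi] - p * yl[hi]).sum())
        loss = (np.abs(yt[lo] - c1 * yl[lo]).sum()
                + np.abs(yt[hi] - c2 * yl[hi]).sum())
        if loss < best_loss:
            best_loss, best_fit = loss, (r, c1, c2)
    return best_fit

rng = np.random.default_rng(9)
n, y = 1200, np.zeros(1200)
for t in range(1, n):
    phi = 0.6 if y[t - 1] <= 0 else -0.4
    y[t] = phi * y[t - 1] + rng.standard_t(3)   # heavy-tailed innovations
print(lad_tar1(y, r_grid=np.linspace(-1, 1, 21)))
```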


Trend estimation of financial time series

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2010
Víctor M. Guerrero
Abstract We propose to decompose a financial time series into trend plus noise by means of the exponential smoothing filter. This filter produces statistically efficient estimates of the trend that can be calculated by a straightforward application of the Kalman filter. It can also be interpreted in the context of penalized least squares, where a criterion governed by a smoothing constant is minimized by trading off fidelity against smoothness of the trend. The smoothing constant determines the degree of smoothness, and the problem is how to choose it objectively. We suggest a procedure that allows the user to decide at the outset the desired percentage of smoothness and to derive from it the corresponding value of the constant. A definition of smoothness is first proposed, as well as an index of the relative precision attributable to the smoothing element of the time series. The procedure is extended to series with different frequencies of observation, so that comparable trends can be obtained for, say, daily, weekly or intraday observations of the same variable. The theoretical results are derived from an integrated moving average model of order (1, 1) underlying the statistical interpretation of the filter. Expressions for equivalent smoothing constants are derived for series generated by temporal aggregation or systematic sampling of another series. Hence, comparable trend estimates can be obtained for the same time series with different lengths, for different time series of the same length and for series with different frequencies of observation of the same variable. Copyright © 2009 John Wiley & Sons, Ltd. [source]
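
A sketch of the penalized least squares view: minimize the sum of squared deviations from the trend plus lam times the sum of squared trend differences; the first-difference penalty corresponds to the exponential smoothing filter. Here lam is set by hand rather than derived from a pre-specified percentage of smoothness as the paper proposes, and the price series is simulated.

```python
import numpy as np

def smooth_trend(y, lam):
    """Penalized least squares trend: minimize
        sum (y[t] - tau[t])**2 + lam * sum (tau[t] - tau[t-1])**2.
    Larger lam trades fidelity for smoothness. Solved in closed form:
    (I + lam * D'D) tau = y, with D the first-difference matrix."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), axis=0)       # (n-1) x n first-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(10)
price = np.cumsum(0.05 + rng.standard_normal(300))
trend = smooth_trend(price, lam=100.0)
print(trend[:5])
```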


Falling and explosive, dormant, and rising markets via multiple-regime financial time series models

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2010
Cathy W. S. Chen
Abstract A multiple-regime threshold nonlinear financial time series model, with a fat-tailed error distribution, is discussed, and Bayesian estimation and inference are considered. Furthermore, approximate Bayesian posterior model comparison among competing models with different numbers of regimes is considered, which is effectively a test for the number of required regimes. An adaptive Markov chain Monte Carlo (MCMC) sampling scheme is designed, while importance sampling is employed to estimate Bayesian residuals for model diagnostic testing. Our modeling framework provides a parsimonious representation of well-known stylized features of financial time series and facilitates statistical inference in the presence of high or explosive persistence and dynamic conditional volatility. We focus on the three-regime case, where the main feature of the model is the capture of mean and volatility asymmetries in financial markets, while allowing an explosive volatility regime. A simulation study highlights the properties of our MCMC estimators, as well as the accuracy and favourable performance of the posterior model probability approximation method as a model selection tool, compared with a deviance criterion. An empirical study of eight international oil and gas markets provides strong support for the three-regime model over its competitors in most markets, in terms of model posterior probability, and reveals three distinct regime behaviours: falling/explosive, dormant and rising markets. Copyright © 2009 John Wiley & Sons, Ltd. [source]


One-way analysis of variance with long memory errors and its application to stock return data

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 6 2007
Jaechoul Lee
Abstract Recent empirical results indicate that many financial time series, including stock volatilities, often have long-range dependencies. Comparing volatilities in stock returns is a crucial part of the risk management of stock investing. This paper proposes two test statistics for testing the equality of mean volatilities of stock returns using the analysis of variance (ANOVA) model with long memory errors. They are modified versions of the ordinary F statistic used in ANOVA models with independently and identically distributed errors. One has the form of the ordinary F statistic multiplied by a correction factor, which reflects slowly decaying autocorrelations, that is, long-range dependence. The other is a test statistic in which the denominator degrees of freedom of the ordinary F test statistic are calibrated by the so-called effective sample size. Empirical sizes and powers of the proposed test statistics are examined via Monte Carlo simulation. An application to German stock returns is presented. Copyright © 2007 John Wiley & Sons, Ltd. [source]
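
A generic sketch of the effective-sample-size idea behind the second statistic: divide n by one plus twice the sum of the autocorrelations, so strong persistence shrinks the denominator degrees of freedom. The paper's calibration is derived for long-memory errors specifically; the toy series here is a persistent AR(1) stand-in, and the truncation lag is an assumption.

```python
import numpy as np

def effective_sample_size(x, max_lag=50):
    """Effective sample size under positive autocorrelation:
    n_eff = n / (1 + 2 * sum of the first max_lag autocorrelations).
    Long-range dependence makes the sum large and n_eff much
    smaller than n, deflating naive degrees of freedom."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acf = np.array([np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc)
                    for k in range(1, max_lag + 1)])
    return n / (1 + 2 * acf.sum())

rng = np.random.default_rng(11)
x = np.zeros(3000)                    # persistent AR(1) stand-in
for t in range(1, 3000):
    x[t] = 0.95 * x[t - 1] + rng.standard_normal()
print(len(x), round(effective_sample_size(x)))
```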


Modelling financial time series with threshold nonlinearity in returns and trading volume

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2007
Mike K. P. So
Abstract This paper investigates the effect of past returns and trading volumes on the temporal behaviour of international market returns. We propose a class of nonlinear threshold time-series models with generalized autoregressive conditional heteroscedastic disturbances. Using a Bayesian approach, a Markov chain Monte Carlo procedure is implemented to obtain estimates of the unknown parameters. The proposed family of models incorporates changes in the log of volumes, in the sense of regime changes, and asymmetric effects on the volatility functions. The results show that when differences of log volumes are included in the system of log return and volatility models, an optimal selection can be achieved. In all five markets considered, both the mean and the variance equations involve volumes in the best models selected. Our best models produce higher posterior-odds ratios than those of the models in Gerlach et al. (Phys. A Statist. Mech. Appl. 2006; 360:422-444), indicating that our return-volume partition of regimes offers extra gain in explaining the return-volatility term structure. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Generalized dynamic linear models for financial time series

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2001
Patrizia Campagnoli
Abstract In this paper we consider a class of conditionally Gaussian state-space models and discuss how they can provide a flexible and fairly simple tool for modelling financial time series, even in the presence of different components in the series or of stochastic volatility. Estimates can be computed by recursive equations, which provide the optimal solution under rather mild assumptions. In more general models, the filter equations can still provide approximate solutions. We also discuss how some models traditionally employed for analysing financial time series can be regarded in the state-space framework. Finally, we illustrate the models with two applications to real data sets. Copyright © 2001 John Wiley & Sons, Ltd. [source]
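
Since the models are conditionally Gaussian, the recursive equations are, conditionally on the non-Gaussian components, the standard Kalman filter. A self-contained sketch on a local level (trend plus noise) model, with all numbers illustrative:

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, m0, P0):
    """Kalman filter for x[t] = F x[t-1] + w, y[t] = H x[t] + v,
    with w ~ N(0, Q) and v ~ N(0, R): returns filtered state means."""
    m, P = np.asarray(m0, dtype=float), np.asarray(P0, dtype=float)
    out = []
    for obs in np.atleast_1d(y):
        m, P = F @ m, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R                           # innovation variance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        m = m + K @ (np.atleast_1d(obs) - H @ m)      # update mean
        P = (np.eye(len(m)) - K @ H) @ P              # update covariance
        out.append(m.copy())
    return np.array(out)

rng = np.random.default_rng(12)
x = np.cumsum(0.1 * rng.standard_normal(200))         # latent trend
y = x + 0.5 * rng.standard_normal(200)                # noisy observations
f = kalman_filter(y, F=np.eye(1), H=np.eye(1),
                  Q=0.01 * np.eye(1), R=0.25 * np.eye(1),
                  m0=np.zeros(1), P0=np.eye(1))
print(f[:5].ravel())
```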

