High-frequency Data


Selected Abstracts


Consistent High-precision Volatility from High-frequency Data

ECONOMIC NOTES, Issue 2 2001
Fulvio Corsi
Estimates of daily volatility are investigated. Realized volatility can be computed from returns observed over time intervals of different sizes. For simple statistical reasons, volatility estimators based on high-frequency returns have been proposed, but such estimators are found to be strongly biased as compared to volatilities of daily returns. This bias originates from microstructure effects in the price formation. For foreign exchange, the relevant microstructure effect is the incoherent price formation, which leads to a strong negative first-order autocorrelation, ρ(1) ≈ −40 per cent, for tick-by-tick returns and to the volatility bias. On the basis of a simple theoretical model for foreign exchange data, the incoherent term can be filtered away from the tick-by-tick price series. With filtered prices, the daily volatility can be estimated using the information contained in high-frequency data, providing a high-precision measure of volatility at any time interval. (J.E.L.: C13, C22, C81).
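
The bias and the autocorrelation signature described above are easy to reproduce. A minimal Python sketch on simulated data (the per-tick volatility sigma_tick and noise level noise_std are illustrative assumptions, not Corsi's calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an "efficient" log-price as a random walk, then add i.i.d.
# microstructure noise (the "incoherent" term in the abstract's model).
n_ticks = 100_000
sigma_tick = 0.0001          # per-tick volatility of the efficient price
noise_std = 0.0002           # std of the incoherent (noise) component
efficient = np.cumsum(rng.normal(0.0, sigma_tick, n_ticks))
observed = efficient + rng.normal(0.0, noise_std, n_ticks)

def realized_variance(log_price, step):
    """Sum of squared returns sampled every `step` ticks."""
    r = np.diff(log_price[::step])
    return np.sum(r ** 2)

# Tick-by-tick RV is inflated by twice the noise variance per return,
# while coarse sampling is imprecise: the classic bias/variance trade-off.
for step in (1, 5, 20, 100):
    print(step, realized_variance(observed, step))
print("true integrated variance:", n_ticks * sigma_tick ** 2)

# First-order autocorrelation of tick returns: strongly negative,
# around -40 per cent with these parameter choices.
r = np.diff(observed)
print("rho(1) =", np.corrcoef(r[:-1], r[1:])[0, 1])
```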


The effect of a transaction tax on exchange rate volatility

INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 2 2010
Markku Lanne
Abstract We argue that a transaction tax is likely to amplify, not dampen, volatility in foreign exchange markets. Our argument stems from the decentralized trading practice and the presumable discrepancy between 'informed' and 'uninformed' traders' valuations. Given that the informed valuations are likely to be less dispersed, a transaction tax penalizes informed trades disproportionately, leading to increased volatility. Empirical support for this prediction is found by investigating the effect of transaction costs on the volatility of DEM/USD and JPY/USD returns. High-frequency data are used and an increase in transaction costs is found to have a significant positive effect on volatility. Copyright © 2009 John Wiley & Sons, Ltd.
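
In its simplest form, the empirical exercise amounts to regressing daily realized volatility on a transaction-cost proxy such as the relative bid-ask spread. A hedged sketch on synthetic data (the paper's actual specification is richer than this):

```python
import numpy as np

def cost_effect_on_volatility(rv, cost):
    """OLS slope of daily realized volatility on a transaction-cost proxy
    (e.g. the average relative bid-ask spread). Illustrative only: the
    paper's specification is richer."""
    X = np.column_stack([np.ones_like(cost), cost])
    beta, *_ = np.linalg.lstsq(X, rv, rcond=None)
    return beta[1]  # positive => higher costs go with higher volatility

# Toy check on simulated data in which costs do raise volatility.
rng = np.random.default_rng(1)
cost = rng.uniform(0.0001, 0.001, 500)
rv = 0.01 + 5.0 * cost + rng.normal(0.0, 0.001, 500)
print(cost_effect_on_volatility(rv, cost))
```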


Realising the future: forecasting with high-frequency-based volatility (HEAVY) models

JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2010
Neil Shephard
This paper studies in some detail a class of high-frequency-based volatility (HEAVY) models. These models are direct models of daily asset return volatility based on realised measures constructed from high-frequency data. Our analysis identifies that the models have momentum and mean reversion effects, and that they adjust quickly to structural breaks in the level of the volatility process. We study how to estimate the models and how they perform through the credit crunch, comparing their fit to more traditional GARCH models. We analyse a model-based bootstrap which allows us to estimate the entire predictive distribution of returns. We also provide an analysis of missing data in the context of these models. Copyright © 2010 John Wiley & Sons, Ltd.
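
The HEAVY-r equation models the conditional variance of daily returns as h_t = ω + α·RM_{t−1} + β·h_{t−1}, driven by a realized measure RM. A simplified sketch of filtering and Gaussian quasi-maximum-likelihood estimation (the initialisation, the toy data, and the omission of the companion equation for the realized measure are assumptions of this illustration):

```python
import numpy as np
from scipy.optimize import minimize

def heavy_filter(params, returns, rm):
    """HEAVY-r recursion: h_t = omega + alpha * RM_{t-1} + beta * h_{t-1},
    where rm holds the daily realized measures."""
    omega, alpha, beta = params
    h = np.empty_like(returns)
    h[0] = rm.mean()                      # crude initialisation (assumption)
    for t in range(1, len(returns)):
        h[t] = omega + alpha * rm[t - 1] + beta * h[t - 1]
    return h

def neg_gaussian_qml(params, returns, rm):
    """Negative Gaussian quasi-log-likelihood of the return equation."""
    h = heavy_filter(params, returns, rm)
    if np.any(h <= 0):
        return np.inf
    return 0.5 * np.sum(np.log(h) + returns ** 2 / h)

# Toy data; in practice rm is built from intraday returns.
rng = np.random.default_rng(2)
T = 1000
rm = 0.0001 * (1.0 + rng.gamma(2.0, 0.5, T))
returns = rng.normal(0.0, np.sqrt(rm))

res = minimize(neg_gaussian_qml, x0=[1e-5, 0.3, 0.6],
               args=(returns, rm), method="Nelder-Mead")
print("omega, alpha, beta =", res.x)
```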


On portfolio optimization: How and when do we benefit from high-frequency data?

JOURNAL OF APPLIED ECONOMETRICS, Issue 4 2009
Qianqiu Liu
We examine how the use of high-frequency data impacts the portfolio optimization decision. Prior research has documented that an estimate of realized volatility is more precise when based upon intraday returns rather than daily returns. Using the framework of a professional investment manager who wishes to track the S&P 500 with the 30 Dow Jones Industrial Average stocks, we find that the benefits of using high-frequency data depend upon the rebalancing frequency and estimation horizon. If the portfolio is rebalanced monthly and the manager has access to at least the previous 12 months of data, daily data have the potential to perform as well as high-frequency data. However, substantial improvements in the portfolio optimization decision from high-frequency data are realized if the manager rebalances daily or has less than a 6-month estimation window. These findings are robust to transaction costs. Copyright © 2009 John Wiley & Sons, Ltd.
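
As a rough sketch of the set-up, the code below estimates a realized covariance matrix from intraday returns and solves for minimum tracking-error weights against an index. The dimensions and toy returns are illustrative assumptions, and the paper's optimization exercise is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(3)
days, intervals, n_assets = 250, 78, 30   # e.g. 5-minute bars on 30 stocks

# Toy intraday returns for the stocks and the index they should track.
stock_r = rng.normal(0.0, 0.001, (days, intervals, n_assets))
index_r = stock_r.mean(axis=2) + rng.normal(0.0, 0.0002, (days, intervals))

# Realized (co)variances from intraday data: sums of cross-products.
cov_stocks = np.einsum("dti,dtj->ij", stock_r, stock_r) / days
cov_index = np.einsum("dti,dt->i", stock_r, index_r) / days

# Minimum tracking-error weights, normalised to sum to one.
w = np.linalg.solve(cov_stocks, cov_index)
w /= w.sum()
print(w.round(3))
```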


Optimal sampling frequency for volatility forecast models for the Indian stock markets

JOURNAL OF FORECASTING, Issue 1 2009
Malay Bhattacharyya
Abstract This paper evaluates the performance of conditional variance models using high-frequency data of the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as the benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH(1, 1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast for daily volatility. The forecasting ability of these models deteriorates, however, because of the non-normality of mean-adjusted returns, whose normality is an assumption in conditional variance models. Nevertheless, the optimal frequency remained the same even for different models (EGARCH and PARCH) and a different error distribution (generalized error distribution, GED), where the error is reduced to a certain extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations, or EGARCH and PARCH models, would give better estimates of volatility with lower forecast error estimates. Copyright © 2008 John Wiley & Sons, Ltd.
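
A sketch of the benchmark construction: daily realized volatility computed at two sampling frequencies and combined linearly, against which a forecast's mean squared error can be measured. The equal combination weight and the toy data are illustrative assumptions:

```python
import numpy as np

def realized_vol(intraday_r, step):
    """Daily realized volatility from intraday returns (days x intervals),
    first aggregating returns over `step` consecutive intervals."""
    d, m = intraday_r.shape
    r = intraday_r[:, : m - m % step].reshape(d, -1, step).sum(axis=2)
    return np.sqrt((r ** 2).sum(axis=1))

def benchmark(intraday_r, step_a, step_b, w=0.5):
    """Linear combination of realized volatilities at two frequencies,
    as in the paper; the weight w=0.5 is an illustrative assumption."""
    return (w * realized_vol(intraday_r, step_a)
            + (1.0 - w) * realized_vol(intraday_r, step_b))

def forecast_mse(forecast, intraday_r, step_a=1, step_b=6):
    """Mean squared error of a daily volatility forecast vs the benchmark."""
    return np.mean((forecast - benchmark(intraday_r, step_a, step_b)) ** 2)

# Toy usage with simulated 5-minute returns (75 intervals per day).
rng = np.random.default_rng(4)
intraday = rng.normal(0.0, 0.001, (250, 75))
naive = np.full(250, realized_vol(intraday, 1).mean())
print(forecast_mse(naive, intraday))
```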


Forecasting volatility with noisy jumps: an application to the Dow Jones Industrial Average stocks

JOURNAL OF FORECASTING, Issue 3 2008
Basel M. A. Awartani
Abstract Empirical high-frequency data can be used to separate the continuous and the jump components of realized volatility. This may improve the accuracy of out-of-sample realized volatility forecasts. A further improvement may be realized by disentangling the two components using a sampling frequency at which the market microstructure effect is negligible, and this is the objective of the paper. In particular, a significant improvement in the accuracy of volatility forecasts is obtained by deriving the jump information from time intervals at which the noise effect is weak. Copyright © 2008 John Wiley & Sons, Ltd.
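
The standard tool for this decomposition is Barndorff-Nielsen and Shephard's bipower variation, which is robust to jumps, so max(RV − BV, 0) estimates the jump contribution. A sketch in the spirit of the paper, deriving the jump part from returns aggregated to a coarser step where noise is weaker (the step of 6 intervals is an illustrative choice, not the paper's):

```python
import numpy as np

def bipower_variation(r):
    """Barndorff-Nielsen/Shephard bipower variation: robust to jumps."""
    return (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

def jump_component(intraday_r, step=6):
    """Split one day's realized variance into jump and continuous parts,
    using returns aggregated over `step` intervals so noise is weak.
    The choice step=6 is illustrative."""
    r = intraday_r[: len(intraday_r) - len(intraday_r) % step]
    r = r.reshape(-1, step).sum(axis=1)
    rv = np.sum(r ** 2)
    bv = bipower_variation(r)
    return max(rv - bv, 0.0), bv          # (jump part, continuous part)

# Toy usage: one day of 1-minute returns with an injected jump.
rng = np.random.default_rng(5)
day = rng.normal(0.0, 0.001, 390)
day[200] += 0.01
print(jump_component(day))
```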


Realized kernels in practice: trades and quotes

THE ECONOMETRICS JOURNAL, Issue 3 2009
O. E. Barndorff-Nielsen
Summary: Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data that are challenging for realized kernels: local trends in the data over periods of around 10 minutes, during which prices and quotes are driven up or down. These can be associated with high volumes. One explanation is that they are due to non-trivial liquidity effects.
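
A minimal sketch of a Parzen-weighted realized kernel of the non-flat-top form, K(X) = γ_0 + Σ_{h≥1} 2·k(h/(H+1))·γ_h, where γ_h is the h-th autocovariance of the high-frequency returns. The paper's bandwidth selection and end-point ("jittering") refinements are omitted, and H is left as an input:

```python
import numpy as np

def parzen(x):
    """Parzen kernel weight function."""
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x ** 2 + 6.0 * x ** 3
    if x <= 1.0:
        return 2.0 * (1.0 - x) ** 3
    return 0.0

def realized_kernel(returns, H):
    """K(X) = gamma_0 + 2 * sum_{h=1..H} parzen(h / (H + 1)) * gamma_h,
    where gamma_h is the h-th autocovariance of high-frequency returns.
    Bandwidth choice and end-point treatment from the paper are omitted."""
    r = np.asarray(returns, dtype=float)
    acc = np.sum(r * r)                   # gamma_0
    for h in range(1, H + 1):
        acc += 2.0 * parzen(h / (H + 1)) * np.sum(r[h:] * r[:-h])
    return acc

# Toy usage on simulated tick returns; H is chosen arbitrarily here.
rng = np.random.default_rng(6)
r = rng.normal(0.0, 0.001, 2000)
print(realized_kernel(r, H=20))
```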


Data assimilation of high-density observations. I: Impact on initial conditions for the MAP/SOP IOP2b

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 605 2005
Abstract An attempt is made to evaluate the impact of the data assimilation of high-frequency data on the initial conditions. The data assimilation of all the data available on the Mesoscale Alpine Program archive for a test case is performed using the objective analysis and the Variational Data Assimilation (Var) techniques. The objective analysis is performed using two different schemes: Cressman and multiquadric; 3D-Var is used for the variational analysis. The European Centre for Medium-Range Weather Forecasts analyses are used as first guess, and they are blended together with the observations to generate an improved set of mesoscale initial and boundary conditions for the Intensive Observing Period 2b (17–21 September 1999). A few experiments are performed using the initialization procedure of MM5, the mesoscale model from Penn State University/National Center for Atmospheric Research. The comparison between improved initial conditions and observations shows: (i) the assimilation of the surface and upper-air data has a large positive impact on the initial conditions depending on the technique used for the objective analysis; (ii) a large decrease of the error for the meridional component of the wind V at the initial time is found if assimilation of three-hourly data is performed by objective analysis; (iii) a comparable improvement of the initial conditions with respect to the objective analysis is found if 3D-Var is used, but a large error is obtained for the V component of the wind. Copyright © 2005 Royal Meteorological Society
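
Of the two objective-analysis schemes, Cressman's is the simpler to sketch: each grid point's first guess is corrected by distance-weighted observation increments within an influence radius R, with weights w = (R² − d²)/(R² + d²). A single-pass sketch in Python (operational schemes iterate with a decreasing radius; the toy grid and values are assumptions):

```python
import numpy as np

def cressman_analysis(grid_xy, background, obs_xy, obs_val, obs_bg, R):
    """One Cressman pass: correct the first guess toward the observations
    with weights w = (R^2 - d^2) / (R^2 + d^2) inside radius R.
    Operational schemes iterate this with a decreasing radius."""
    analysis = background.copy()
    innov = obs_val - obs_bg              # observation minus first guess
    for k, (x, y) in enumerate(grid_xy):
        d2 = (obs_xy[:, 0] - x) ** 2 + (obs_xy[:, 1] - y) ** 2
        w = np.where(d2 < R ** 2, (R ** 2 - d2) / (R ** 2 + d2), 0.0)
        if w.sum() > 0.0:
            analysis[k] += np.sum(w * innov) / w.sum()
    return analysis

# Toy usage: a flat first guess corrected by two observations.
grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
bg = np.zeros(len(grid))
obs_xy = np.array([[2.0, 2.0], [7.0, 5.0]])
obs_val = np.array([1.0, -0.5])
print(cressman_analysis(grid, bg, obs_xy, obs_val, np.zeros(2), R=3.0).max())
```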