Series Models (series + models)


Kinds of Series Models

  • time series models


  • Selected Abstracts


    ROBUST ESTIMATION IN PARAMETRIC TIME SERIES MODELS UNDER LONG- AND SHORT-RANGE-DEPENDENT STRUCTURES

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2009
    Jiti Gao
    Summary This paper studies the asymptotic behaviour of an M-estimator of regression parameters in the linear model when the design variables are either stationary short-range dependent (SRD) and mixing, or long-range dependent (LRD), and the errors are LRD. The weak consistency and the asymptotic distributions of the M-estimator are established. We present some simulated examples to illustrate the efficiency of the proposed M-estimation method. [source]
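    A minimal sketch of M-estimation for regression parameters, using a Huber loss via statsmodels; this is generic robust regression on a toy data set, not the LRD-aware estimator whose asymptotics the paper studies.

    ```python
    # Generic robust M-estimation (Huber loss) for linear regression parameters.
    # Illustrative only: the data below are iid, not the SRD/LRD designs of the paper.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    x = rng.standard_normal(n)
    e = rng.standard_t(df=3, size=n)      # heavy-tailed errors as a stand-in
    y = 1.0 + 2.0 * x + e

    X = sm.add_constant(x)
    m_est = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
    ols = sm.OLS(y, X).fit()
    print("M-estimates: ", m_est.params)  # less sensitive to outlying errors
    print("OLS estimates:", ols.params)
    ```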


    The Effect of the Estimation on Goodness-of-Fit Tests in Time Series Models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2005
    Yue Fang
    Abstract. We analyze, by simulation, the finite-sample properties of goodness-of-fit tests based on residual autocorrelation coefficients (simple and partial) obtained using different estimators frequently used in the analysis of autoregressive moving-average time-series models. The estimators considered are unconditional least squares, maximum likelihood and conditional least squares. The results suggest that although the tests based on these estimators are asymptotically equivalent for particular models and parameter values, their sampling properties for samples of the size commonly found in economic applications can differ substantially, because of differences in both finite-sample estimation efficiencies and residual regeneration methods. [source]
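    As a concrete illustration of the kind of diagnostic being compared, the sketch below fits an ARMA(1,1) by maximum likelihood and applies a Ljung-Box portmanteau test to the residual autocorrelations; the paper's comparison with unconditional and conditional least squares estimators is not reproduced here.

    ```python
    # Residual-autocorrelation goodness-of-fit check after an ML fit of ARMA(1,1).
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.diagnostic import acorr_ljungbox

    rng = np.random.default_rng(1)
    n = 300
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):                 # simulate x_t = 0.6 x_{t-1} + e_t + 0.3 e_{t-1}
        x[t] = 0.6 * x[t - 1] + e[t] + 0.3 * e[t - 1]

    res = ARIMA(x, order=(1, 0, 1)).fit() # maximum likelihood estimation
    # Ljung-Box Q on residuals; model_df accounts for the two estimated ARMA terms.
    print(acorr_ljungbox(res.resid, lags=[10], model_df=2))
    ```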


    Stochastic models for simulation of strong ground motion in Iceland

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 9 2001
    Símon Ólafsson
    Abstract Two types of modelling approaches for simulating ground motion in Iceland are studied and compared. The first type, discrete-time series (ARMA) models, is based solely on acceleration measured during earthquakes in Iceland. The second type is based on a theoretical seismic source model called the extended Brune model. The parameters of the extended Brune models have been estimated from acceleration measured in Iceland during the period 1986–1996. The seismic source models are presented here as ARMA models, which simplifies the simulation process. A single-layer soil amplification model is used in conjunction with the extended Brune model to estimate local site amplification. Emphasis is put on the ground motion models representing the variability in the measured earthquakes with respect to energy, duration and frequency content. The use of these models is demonstrated by constructing linear and non-linear probabilistic response spectra, using a discretised version of the Bouc–Wen model for the hysteresis of the second-order system. Copyright © 2001 John Wiley & Sons, Ltd. [source]
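    The ARMA route to simulation can be sketched in a few lines: filter white noise through an ARMA recursion and shape it with an amplitude envelope. All coefficients and the envelope below are illustrative assumptions, not the fitted Icelandic models.

    ```python
    # Toy ARMA(2,1) ground-motion simulation with a build-up/decay envelope.
    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(2)
    n = 2000
    ar = [1.0, -1.6, 0.7]                  # AR polynomial (stable)
    ma = [1.0, 0.4]                        # MA polynomial
    stationary = lfilter(ma, ar, rng.standard_normal(n))
    t = np.arange(n) / 100.0               # assume 100 Hz sampling
    envelope = (t / 3.0) * np.exp(1 - t / 3.0)   # rises, peaks near t = 3 s, decays
    accel = envelope * stationary          # non-stationary simulated accelerogram
    ```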


    Testing Models of Low-Frequency Variability

    ECONOMETRICA, Issue 5 2008
    Ulrich K. Müller
    We develop a framework to assess how successfully standard time series models explain low-frequency variability of a data series. The low-frequency information is extracted by computing a finite number of weighted averages of the original data, where the weights are low-frequency trigonometric series. The properties of these weighted averages are then compared to the asymptotic implications of a number of common time series models. We apply the framework to twenty U.S. macroeconomic and financial time series using frequencies lower than the business cycle. [source]
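    The extraction step lends itself to a short sketch: project the series onto a handful of low-frequency cosine weight functions. The cosine form and the number of averages q follow common practice for this kind of transform and should be read as assumptions, not the paper's exact construction.

    ```python
    # Low-frequency weighted averages of a series using cosine weights.
    import numpy as np

    def low_freq_averages(x, q=12):
        """Return q weighted averages of x; weights are low-frequency cosines."""
        T = len(x)
        s = (np.arange(1, T + 1) - 0.5) / T
        # psi_j(s) = sqrt(2) * cos(j * pi * s), j = 1..q
        psi = np.sqrt(2) * np.cos(np.pi * np.outer(np.arange(1, q + 1), s))
        return psi @ x / T

    rng = np.random.default_rng(3)
    x = np.cumsum(rng.standard_normal(400))    # a random-walk-like series
    print(low_freq_averages(x))                # low-frequency summary of x
    ```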


    Contending with space–time interaction in the spatial prediction of pollution: Vancouver's hourly ambient PM10 field

    ENVIRONMETRICS, Issue 5-6 2002
    Jim Zidek
    Abstract In this article we describe an approach for predicting average hourly concentrations of ambient PM10 in Vancouver. We know our solution also applies to hourly ozone fields and believe it may be quite generally applicable. We use a hierarchical Bayesian approach. At the primary level we model the logarithmic field as a trend model plus a Gaussian stochastic residual. The trend model depends on hourly meteorological predictors and is common to all sites. The stochastic component consists of a 24-hour vector response that we model as a multivariate AR(3) temporal process with common spatial parameters. Removing the trend and AR structure leaves a 'whitened' vector time series. With this approach (as opposed to using 24 separate univariate time series models), there is little loss of spatial correlation in these residuals compared with that in just the detrended residuals (prior to removing the AR component). Moreover, our multivariate approach enables predictions for any given hour to 'borrow strength' through its correlation with adjoining hours. On this basis we develop a spatial predictive distribution for these residuals at unmonitored sites. By transforming the predicted residuals back to the original data scales we can impute Vancouver's hourly PM10 field. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Alcohol and mortality: methodological and analytical issues in aggregate analyses

    ADDICTION, Issue 1s1 2001
    Thor Norström
    This supplement comprises a collection of papers that estimate the relationship between per capita alcohol consumption and various forms of mortality, including mortality from liver cirrhosis, accidents, suicide, homicide, ischaemic heart disease, and total mortality. The papers apply a uniform methodological protocol, and they are all based on time series data covering the post-war period in the present EU countries and Norway. In this paper we discuss various methodological and analytical issues that are common to these papers. We argue that analysis of time series data is the most feasible approach for assessing the aggregate health consequences of changes in population drinking. We further discuss how aggregate data may also be useful for judging the plausibility of individual-level relationships, particularly those prone to be confounded by selection effects. The aggregation of linear and curvilinear risk curves is treated, as well as various methods for dealing with the time-lag problem. With regard to estimation techniques, we find country-specific analyses preferable to pooled cross-sectional/time series models, since the latter incorporate the dubious element of geographical co-variation and conceal potentially interesting variations in alcohol effects. The approach taken in the papers at hand is instead to pool the country-specific results into three groups of countries that represent different drinking cultures: the traditional wine countries of southern Europe, the beer countries of central Europe and the British Isles, and the spirits countries of northern Europe. The findings of the papers reinforce the central tenet of the public health perspective that overall consumption is an important determinant of alcohol-related harm rates. However, there is variation across country groups in alcohol effects, particularly those on violent deaths, which indicates the potential importance of drinking patterns. There is no support for the notion that increases in per capita consumption have any cardioprotective effect at the population level. [source]


    OPTIMAL FORECAST COMBINATION UNDER REGIME SWITCHING*

    INTERNATIONAL ECONOMIC REVIEW, Issue 4 2005
    Graham Elliott
    This article proposes a new forecast combination method that lets the combination weights be driven by regime switching in a latent state variable. An empirical application that combines forecasts from survey data and time series models finds that the proposed regime switching combination scheme performs well for a variety of macroeconomic variables. Monte Carlo simulations shed light on the type of data-generating processes for which the proposed combination method can be expected to perform better than a range of alternative combination schemes. Finally, we show how time variations in the combination weights arise when the target variable and the predictors share a common factor structure driven by a hidden Markov process. [source]


    Pattern hunting in climate: a new method for finding trends in gridded climate data

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 1 2007
    A. Hannachi
    Abstract Trends are very important in climate research and are ubiquitous in the climate system. Trends are usually estimated using simple linear regression. Given the complexity of the system, trends are expected to have various features, such as global and local characters. It is therefore important to develop methods that permit a systematic decomposition of climate data into different trend patterns and remaining no-trend patterns. Empirical orthogonal functions and closely related methods, widely used in atmospheric science, are in general unable to capture trends because they are not devised for that purpose. The present paper presents a novel method capable of systematically capturing trend patterns from gridded data. The method is based on an eigenanalysis of the covariance/correlation matrix obtained using correlations between the time positions of the sorted data, and trends are associated with the leading nondegenerate eigenvalues. Applications to simple low-dimensional time series models and to reanalysis data are presented and discussed. Copyright © 2006 Royal Meteorological Society. [source]
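    A schematic reading of the sorting step is sketched below: record, for each grid point, the rank order of the observations in time, eigen-analyse the correlation matrix of these position sequences, and inspect the leading eigenvalues. This is a rough illustration under stated assumptions, not a faithful reimplementation of the method.

    ```python
    # Eigenanalysis of correlations between time positions of sorted gridded data.
    import numpy as np

    rng = np.random.default_rng(4)
    T, p = 200, 50
    data = rng.standard_normal((T, p))
    data[:, :10] += np.linspace(0, 2, T)[:, None]   # inject a trend at 10 grid points

    # Double argsort yields, for each grid point, the rank of each time step;
    # trending series have ranks strongly aligned with time itself.
    positions = np.argsort(data, axis=0).argsort(axis=0).astype(float)

    C = np.corrcoef(positions, rowvar=False)        # p x p correlation matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    print("leading eigenvalues:", eigvals[-3:][::-1])  # trend patterns load here
    ```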


    Time Series Based Errors and Empirical Errors in Fertility Forecasts in the Nordic Countries

    INTERNATIONAL STATISTICAL REVIEW, Issue 1 2004
    Nico Keilman
    Summary We use ARCH time series models to derive model-based prediction intervals for the Total Fertility Rate (TFR) in Norway, Sweden, Finland and Denmark up to 2050. For the short term (5–10 years), expected TFR errors are compared with empirical forecast errors observed in historical population forecasts prepared by the statistical agencies in these countries since 1969. Medium-term and long-term (up to 50 years) errors are compared with error patterns based on so-called naïve forecasts, i.e. forecasts that assume that recently observed TFR levels also apply in the future. In the short term, the prediction intervals computed from the time series model and those derived from historical errors are of the same order of magnitude, although this finding should be treated with caution because the historical database is limited. Naïve errors provide useful information for both the short and the long term: prediction intervals based on naïve errors 50 years ahead compare very well with the time-series-based intervals, except for Denmark, where the data do not permit computing naïve intervals for forecast horizons beyond 20 years. In general, neither the historical nor the naïve errors suggest that prediction intervals based on ARCH time series models are excessively wide. We find that 67 per cent intervals for the TFR have a width of about 0.5 children per woman at a 10-year horizon, and approximately 0.85 children per woman at 50 years. [source]
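    How simulation turns an ARCH fit into prediction intervals can be sketched compactly. The AR(1)-ARCH(1) parameter values below are illustrative assumptions, not the paper's estimates for any country.

    ```python
    # Simulated prediction intervals for a TFR-like series from an AR(1)-ARCH(1).
    import numpy as np

    rng = np.random.default_rng(5)
    phi, omega, alpha = 0.98, 0.001, 0.3   # assumed AR and ARCH parameters
    mean_tfr, tfr0 = 1.8, 1.8
    horizon, n_paths = 50, 10_000

    paths = np.empty((n_paths, horizon))
    for i in range(n_paths):
        x, eps_prev = tfr0, 0.0
        for h in range(horizon):
            sigma2 = omega + alpha * eps_prev ** 2          # ARCH(1) variance
            eps_prev = np.sqrt(sigma2) * rng.standard_normal()
            x = mean_tfr + phi * (x - mean_tfr) + eps_prev  # mean-reverting AR(1)
            paths[i, h] = x

    lo, hi = np.percentile(paths[:, -1], [16.5, 83.5])   # 67% interval at 50 years
    print(f"67% interval at horizon {horizon}: [{lo:.2f}, {hi:.2f}]")
    ```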


    Small Area Estimation – New Developments and Directions

    INTERNATIONAL STATISTICAL REVIEW, Issue 1 2002
    Danny Pfeffermann
    Summary The purpose of this paper is to provide a critical review of the main advances in small area estimation (SAE) methods in recent years. We also discuss some of the earlier developments, which serve as a necessary background for the new studies. The review focuses on model dependent methods with special emphasis on point prediction of the target area quantities, and mean square error assessments. The new models considered are models used for discrete measurements, time series models and models that arise under informative sampling. The possible gains from modeling the correlations among small area random effects used to represent the unexplained variation of the small area target quantities are examined. For review and appraisal of the earlier methods used for SAE, see Ghosh & Rao (1994). [source]


    A linear benchmark for forecasting GDP growth and inflation?

    JOURNAL OF FORECASTING, Issue 4 2008
    Massimiliano Marcellino
    Abstract Predicting the future evolution of GDP growth and inflation is a central concern in economics. Forecasts are typically produced either from economic theory-based models or from simple linear time series models. While a time series model can provide a reasonable benchmark to evaluate the value added of economic theory relative to the pure explanatory power of the past behavior of the variable, recent developments in time series analysis suggest that more sophisticated time series models could provide more serious benchmarks for economic models. In this paper we evaluate whether these complicated time series models can outperform standard linear models for forecasting GDP growth and inflation. We consider a large variety of models and evaluation criteria, using a bootstrap algorithm to evaluate the statistical significance of our results. Our main conclusion is that in general linear time series models can hardly be beaten if they are carefully specified. However, we also identify some important cases where the adoption of a more complicated benchmark can alter the conclusions of economic analyses about the driving forces of GDP growth and inflation. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Forecasting domestic liquidity during a crisis: what works best?

    JOURNAL OF FORECASTING, Issue 6 2007
    Winston R. Moore
    Abstract The 1990s were a turbulent time for Latin American and Caribbean countries. During this period, the region suffered no fewer than 16 banking crises. One of the most important determinants of the severity of a banking crisis is commercial bank liquidity. Banking systems that are relatively liquid are better able to deal with the large deposit withdrawals that tend to accompany bank runs. This study assesses whether behavioural models, linear time series models or nonlinear time series models are better able to account for liquidity dynamics during a crisis. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    A forecasting procedure for nonlinear autoregressive time series models

    JOURNAL OF FORECASTING, Issue 5 2005
    Yuzhi Cai
    Abstract Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi-step-ahead forecasting ignore accuracy checking, while alternative Monte Carlo methods are computationally very demanding and their accuracy is difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m-step-ahead predictive probability density functions, predictive distribution functions, predictive means and variances, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well, both in terms of the accuracy of the results and in the ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd. [source]
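    The recursive idea behind such a procedure can be illustrated with a nonlinear AR(1): propagate the one-step predictive density through the Chapman-Kolmogorov equation on a grid. The map g and the noise scale below are assumptions for illustration; the paper's procedure covers a broader model class.

    ```python
    # Numerical m-step-ahead predictive density for a nonlinear AR(1) on a grid.
    import numpy as np
    from scipy.stats import norm

    g = lambda x: 0.8 * x * np.exp(-0.1 * x ** 2)   # an exponential-AR-type map
    sigma = 1.0
    grid = np.linspace(-6, 6, 601)
    dx = grid[1] - grid[0]

    # One-step transition density K[i, j] = p(x_t = grid[i] | x_{t-1} = grid[j]).
    K = norm.pdf(grid[:, None], loc=g(grid[None, :]), scale=sigma)

    density = norm.pdf(grid, loc=g(2.0), scale=sigma)   # 1-step density given x_0 = 2
    for _ in range(4):                                  # propagate to 5 steps ahead
        density = K @ density * dx                      # Chapman-Kolmogorov step

    print("5-step predictive mean:", np.sum(grid * density * dx))
    ```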


    Unemployment variation over the business cycles: a comparison of forecasting models

    JOURNAL OF FORECASTING, Issue 7 2004
    Saeed Moshiri
    Abstract Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non-linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non-linearity in the unemployment series. Only recently have there been some developments in applying non-linear models to estimate and forecast unemployment rates. A major concern of non-linear modelling is the model specification problem; it is very hard to test all possible non-linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non-linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back-propagation model and a generalized regression neural network model, to estimate and forecast post-war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out-of-sample forecast results obtained by the ANN models with those obtained by several linear and non-linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd. [source]
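    A stripped-down neural-network autoregression conveys the approach: regress the series on its own lags with a small multilayer perceptron. The architecture, lag order and toy data are assumptions; the paper applies back-propagation and generalized regression networks to actual unemployment rates.

    ```python
    # MLP autoregression on lagged values of an asymmetric toy series.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(6)
    n, p = 400, 4                        # series length, number of lags
    u = np.zeros(n)
    for t in range(1, n):                # asymmetric (regime-dependent) dynamics
        u[t] = (0.9 if u[t - 1] > 0 else 0.4) * u[t - 1] + rng.standard_normal()

    X = np.column_stack([u[p - k - 1:n - k - 1] for k in range(p)])  # lag matrix
    y = u[p:]
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X[:-50], y[:-50])          # hold out the last 50 observations
    rmse = np.sqrt(np.mean((model.predict(X[-50:]) - y[-50:]) ** 2))
    print("out-of-sample RMSE:", rmse)
    ```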


    Modeling software evolution defects: a time series approach

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 1 2009
    Uzma Raja
    Abstract The prediction of software defects and defect patterns is, and will continue to be, a critically important software evolution research topic. This study presents a time series analysis of multi-organizational, multi-project defects reported during ongoing software evolution efforts. Using data from monthly defect reports for eight open source software projects over five years, this study builds and tests time series models for each sampled project. The resulting model accounts for the ripple effects of defect detection and correction by modeling the autocorrelation of code defect data. An autoregressive integrated moving average, ARIMA(0,1,1), model was found to hold for all sampled projects and thus provides a basis for both descriptive and predictive software defect analysis that is computationally efficient, comprehensible, and easy to apply. The model may be used to evaluate and compare the reliability of candidate software solutions, and to facilitate planning for software evolution budget and time allocation. Copyright © 2008 John Wiley & Sons, Ltd. [source]
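    The ARIMA(0,1,1) form reported in the abstract is simple to apply; the sketch below fits it to synthetic monthly counts and forecasts ahead. The data are invented for illustration, not the sampled open source projects.

    ```python
    # Fit ARIMA(0,1,1) to monthly defect counts and forecast six months ahead.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(7)
    # Synthetic monthly defect counts over five years (60 months).
    defects = np.maximum(0, 30 + np.cumsum(rng.normal(0, 3, 60))).round()

    fit = ARIMA(defects, order=(0, 1, 1)).fit()
    print(fit.summary().tables[1])       # the single MA(1) coefficient
    print(fit.forecast(steps=6))         # expected defects for the next 6 months
    ```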


    Model choice in time series studies of air pollution and mortality

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 2 2006
    Roger D. Peng
    Summary. Multicity time series studies of particulate matter and mortality and morbidity have provided evidence that daily variation in air pollution levels is associated with daily variation in mortality counts. These findings served as key epidemiological evidence for the recent review of the US national ambient air quality standards for particulate matter. As a result, methodological issues concerning time series analysis of the relationship between air pollution and health have attracted the attention of the scientific community and critics have raised concerns about the adequacy of current model formulations. Time series data on pollution and mortality are generally analysed by using log-linear, Poisson regression models for overdispersed counts with the daily number of deaths as outcome, the (possibly lagged) daily level of pollution as a linear predictor and smooth functions of weather variables and calendar time used to adjust for time-varying confounders. Investigators around the world have used different approaches to adjust for confounding, making it difficult to compare results across studies. To date, the statistical properties of these different approaches have not been comprehensively compared. To address these issues, we quantify and characterize model uncertainty and model choice in adjusting for seasonal and long-term trends in time series models of air pollution and mortality. First, we conduct a simulation study to compare and describe the properties of statistical methods that are commonly used for confounding adjustment. We generate data under several confounding scenarios and systematically compare the performance of the various methods with respect to the mean-squared error of the estimated air pollution coefficient. We find that the bias in the estimates generally decreases with more aggressive smoothing and that model selection methods which optimize prediction may not be suitable for obtaining an estimate with small bias. Second, we apply and compare the modelling approaches with the National Morbidity, Mortality, and Air Pollution Study database which comprises daily time series of several pollutants, weather variables and mortality counts covering the period 1987–2000 for the largest 100 cities in the USA. When applying these approaches to adjusting for seasonal and long-term trends we find that the Study's estimates for the national average effect of PM10 at lag 1 on mortality vary over approximately a twofold range, with 95% posterior intervals always excluding zero risk. [source]
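    The core regression has a compact schematic form: an overdispersed Poisson log-linear model for daily deaths with a linear pollution term and a smooth function of calendar time as the confounding adjustment. The synthetic data and the spline degrees of freedom below are assumptions for illustration.

    ```python
    # Quasi-Poisson log-linear model: deaths ~ PM10 + smooth(time).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    n = 1000
    time = np.arange(n)
    season = 2.0 + 0.3 * np.sin(2 * np.pi * time / 365)   # seasonal confounder
    pm10 = rng.gamma(4, 7, n)
    deaths = rng.poisson(np.exp(season + 0.0004 * pm10))
    df = pd.DataFrame({"deaths": deaths, "pm10": pm10, "time": time})

    model = smf.glm("deaths ~ pm10 + bs(time, df=7)",     # spline adjusts for trends
                    data=df, family=sm.families.Poisson()).fit(scale="X2")
    print(model.params["pm10"])          # log-relative-rate per unit PM10
    ```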


    Seasonality with trend and cycle interactions in unobserved components models

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 4 2009
    Siem Jan Koopman
    Summary. Unobserved components time series models decompose a time series into a trend, a season, a cycle, an irregular disturbance and possibly other components. These models have been successfully applied to many economic time series. The standard assumption of a linear model, which is often appropriate after a logarithmic transformation of the data, facilitates estimation, testing, forecasting and interpretation. However, in some settings the linear, additive framework may be too restrictive. We formulate a non-linear unobserved components time series model which allows interactions between the trend–cycle component and the seasonal component. The resulting model is cast into a non-linear state space form and estimated by the extended Kalman filter, adapted for models with diffuse initial conditions. We apply our model to UK travel data and US unemployment and production series, and show that it can capture increasing seasonal variation and cycle-dependent seasonal fluctuations. [source]


    Subsampling the mean of heavy-tailed dependent observations

    JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2004
    Piotr Kokoszka
    Abstract. We establish the validity of subsampling confidence intervals for the mean of a dependent series with heavy-tailed marginal distributions. Using point process theory, we focus on GARCH-like time series models. We propose a data-dependent method for the optimal block size selection and investigate its performance by means of a simulation study. [source]
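    A bare-bones version of the interval construction is sketched below, with the block size fixed by hand, whereas the paper proposes a data-dependent choice of block size.

    ```python
    # Subsampling confidence interval for the mean of a dependent series.
    import numpy as np

    rng = np.random.default_rng(9)
    n, b = 2000, 100                      # sample size, block size (assumed)
    x = np.zeros(n)
    for t in range(1, n):                 # AR(1) with heavy-tailed t(2.5) innovations
        x[t] = 0.5 * x[t - 1] + rng.standard_t(2.5)

    xbar = x.mean()
    # Means over all overlapping blocks of length b.
    block_means = np.array([x[i:i + b].mean() for i in range(n - b + 1)])
    root = np.sqrt(b) * (block_means - xbar)      # subsampling distribution
    q_lo, q_hi = np.percentile(root, [2.5, 97.5])
    ci = (xbar - q_hi / np.sqrt(n), xbar - q_lo / np.sqrt(n))
    print("95% subsampling CI for the mean:", ci)
    ```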


    Temporal aggregation and spurious instantaneous causality in multiple time series models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2002
    JÖRG BREITUNG
    Large aggregation interval asymptotics are used to investigate the relation between Granger causality in disaggregated vector autoregressions (VARs) and associated contemporaneous correlation among innovations of the aggregated system. One of our main contributions is that we outline various conditions under which the informational content of error covariance matrices yields insight into the causal structure of the VAR. Monte Carlo results suggest that our asymptotic findings are applicable even when the aggregation interval is small, as long as the time series are not characterized by high levels of persistence. [source]


    Large Sample Properties of Parameter Estimates for Periodic ARMA Models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2001
    I. V. Basawa
    This paper studies the asymptotic properties of parameter estimates for causal and invertible periodic autoregressive moving-average (PARMA) time series models. A general limit result for PARMA parameter estimates with a moving-average component is derived. The paper presents examples that explicitly identify the limiting covariance matrix for parameter estimates from a general periodic autoregression (PAR), a first-order periodic moving average (PMA(1)), and the mixed PARMA(1,1) model. Some comparisons and contrasts to univariate and vector autoregressive moving-average sequences are made. [source]


    Functional Coefficient Autoregressive Models: Estimation and Tests of Hypotheses

    JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2001
    Rong Chen
    In this paper, we study nonparametric estimation and hypothesis testing procedures for the functional coefficient AR (FAR) models of the form X_t = f_1(X_{t-d})X_{t-1} + ... + f_p(X_{t-d})X_{t-p} + ε_t, first proposed by Chen and Tsay (1993). As a direct generalization of the linear AR model, the FAR model is a rich class of models that includes many useful parametric nonlinear time series models, such as the threshold AR models of Tong (1983) and the exponential AR models of Haggan and Ozaki (1981). We propose a local linear estimation procedure for estimating the coefficient functions and study its asymptotic properties. In addition, we propose two testing procedures. The first tests whether all the coefficient functions are constant, i.e. whether the process is linear. The second tests whether all the coefficient functions are continuous, i.e. whether any threshold type of nonlinearity is present in the process. The results of some simulation studies as well as a real example are presented. [source]
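    The estimation idea can be sketched with kernel-weighted least squares: at each value u of the threshold variable, regress X_t on its lags with weights centred at u. For brevity the sketch uses a local-constant fit on a toy process; the paper develops a local linear estimator with full asymptotic theory.

    ```python
    # Kernel-weighted least squares for a FAR(2) model with d = 1.
    import numpy as np

    rng = np.random.default_rng(10)
    n, h = 800, 0.4                            # series length, bandwidth
    x = np.zeros(n)
    for t in range(2, n):                      # smooth-transition toy process
        f1 = 0.5 + 0.3 * np.tanh(x[t - 1])
        x[t] = f1 * x[t - 1] - 0.2 * x[t - 2] + rng.standard_normal()

    def coeffs_at(u):
        """Estimate (f1(u), f2(u)) by Gaussian-kernel-weighted least squares."""
        w = np.exp(-0.5 * ((x[1:n - 1] - u) / h) ** 2)   # weights on X_{t-d}
        Z = np.column_stack([x[1:n - 1], x[0:n - 2]])    # [X_{t-1}, X_{t-2}]
        y = x[2:]
        WZ = Z * w[:, None]
        return np.linalg.solve(Z.T @ WZ, WZ.T @ y)

    for u in (-1.0, 0.0, 1.0):                 # true f1(u) = 0.5 + 0.3 tanh(u)
        print(u, coeffs_at(u))
    ```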


    On Business Cycle Asymmetries in G7 Countries

    OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 3 2004
    Khurshid M. Kiani
    Abstract We investigate whether business cycle dynamics in seven industrialized countries (the G7) are characterized by asymmetries in conditional mean. We provide evidence on this issue using a variety of time series models. Our approach is fully parametric. Our testing strategy is robust to any conditional heteroskedasticity, outliers, and/or long memory that may be present. Our results indicate fairly strong evidence of nonlinearities in the conditional mean dynamics of the GDP growth rates for Canada, Germany, Italy, Japan, and the US. For France and the UK, the conditional mean dynamics appear to be largely linear. Our study shows that while the existence of conditional heteroskedasticity and long memory does not have much effect on testing for linearity in the conditional mean, accounting for outliers does reduce the evidence against linearity. [source]


    Assessing the Forecasting Performance of Regime-Switching, ARIMA and GARCH Models of House Prices

    REAL ESTATE ECONOMICS, Issue 2 2003
    Gordon W. Crawford
    While price changes on any particular home are difficult to predict, aggregate home price changes are forecastable. In this context, this paper compares the forecasting performance of three types of univariate time series models: ARIMA, GARCH and regime-switching. The underlying intuition behind regime-switching models is that the series of interest behaves differently depending on the realization of an unobservable regime variable. Regime-switching models are a compelling choice for real estate markets that have historically displayed boom and bust cycles. However, we find that, while regime-switching models can perform better in-sample, simple ARIMA models generally perform better in out-of-sample forecasting. [source]
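    The comparison can be sketched on synthetic "boom and bust" data: fit an AR benchmark and a two-regime Markov-switching autoregression and compare fit criteria. The simulated series and parameters are assumptions; the paper evaluates out-of-sample forecasts of actual home price changes and also includes GARCH models.

    ```python
    # AR(1) benchmark versus a two-regime Markov-switching AR(1).
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.regime_switching.markov_autoregression import MarkovAutoregression

    rng = np.random.default_rng(11)
    n = 300
    regime = np.zeros(n, dtype=int)
    for t in range(1, n):                      # sticky two-state Markov chain
        stay = 0.95 if regime[t - 1] == 0 else 0.90
        regime[t] = regime[t - 1] if rng.random() < stay else 1 - regime[t - 1]
    mu = np.where(regime == 0, 0.5, -1.0)      # boom mean vs bust mean
    sd = np.where(regime == 0, 0.5, 1.5)
    y = mu + sd * rng.standard_normal(n)

    ar1 = ARIMA(y, order=(1, 0, 0)).fit()
    msar = MarkovAutoregression(y, k_regimes=2, order=1,
                                switching_variance=True).fit()
    print("AR(1) AIC:", ar1.aic, " MS-AR AIC:", msar.aic)
    ```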


    On testing for multivariate ARCH effects in vector time series models

    THE CANADIAN JOURNAL OF STATISTICS, Issue 3 2003
    Pierre Duchesne
    Abstract Using a spectral approach, the authors propose tests to detect multivariate ARCH effects in the residuals from a multivariate regression model. The tests are based on a comparison, via a quadratic norm, between the uniform density and a kernel-based spectral density estimator of the squared residuals and cross-products of residuals. The proposed tests are consistent under an arbitrary fixed alternative. The authors present a new application of the test due to Hosking (1980), which is seen to be a special case of their approach involving the truncated uniform kernel. However, they typically obtain more powerful procedures when using a different weighting. The authors consider especially the procedure of Robinson (1991) for choosing the smoothing parameter of the spectral density estimator. They also introduce a generalized version of the test for ARCH effects due to Ling & Li (1997). They investigate the finite-sample performance of their tests and compare them to existing tests, including those of Ling & Li (1997) and the residual-based diagnostics of Tse (2002). Finally, they present a financial application. [source]


    On the estimation of the heavy-tail exponent in time series using the max-spectrum

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2010
    Stilian A. Stoev
    Abstract This paper addresses the problem of estimating the tail index α of distributions with heavy, Pareto-type tails for dependent data, which is of interest in the areas of finance, insurance, environmental monitoring and teletraffic analysis. A novel approach based on the max self-similarity scaling behavior of block maxima is introduced. The method exploits the increasing lack of dependence of maxima over large-size blocks, which proves useful for time series data. We establish the consistency and asymptotic normality of the proposed max-spectrum estimator for a large class of m-dependent time series, in the regime of intermediate block maxima. In the regime of large block maxima, we demonstrate the distributional consistency of the estimator for a broad range of time series models including linear processes. The max-spectrum estimator is a robust and computationally efficient tool, which provides a novel time-scale perspective on the estimation of tail exponents. Its performance is illustrated over synthetic and real data sets. Copyright © 2009 John Wiley & Sons, Ltd. [source]
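    The max-spectrum construction reduces to a short recipe: take block maxima over dyadic block sizes 2^j, average their log2 values, and regress on j; the slope estimates 1/α. The scale range and plain least-squares regression below simplify the paper's weighted procedure, and the data are iid rather than dependent.

    ```python
    # Max-spectrum estimate of the tail index alpha from dyadic block maxima.
    import numpy as np

    rng = np.random.default_rng(12)
    alpha_true = 1.5
    x = rng.pareto(alpha_true, 2 ** 16) + 1.0        # iid Pareto(alpha) sample

    js = np.arange(6, 13)                            # block sizes 2^6 .. 2^12
    y = []
    for j in js:
        m = 2 ** j
        blocks = x[: (len(x) // m) * m].reshape(-1, m)
        y.append(np.mean(np.log2(blocks.max(axis=1))))  # mean log2 block maxima

    slope = np.polyfit(js, y, 1)[0]                  # E[Y_j] ~ j / alpha + const
    print("estimated alpha:", 1.0 / slope)
    ```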


    Falling and explosive, dormant, and rising markets via multiple-regime financial time series models

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2010
    Cathy W. S. Chen
    Abstract A multiple-regime threshold nonlinear financial time series model, with a fat-tailed error distribution, is discussed and Bayesian estimation and inference are considered. Furthermore, approximate Bayesian posterior model comparison among competing models with different numbers of regimes is considered, which is effectively a test for the number of required regimes. An adaptive Markov chain Monte Carlo (MCMC) sampling scheme is designed, while importance sampling is employed to estimate Bayesian residuals for model diagnostic testing. Our modeling framework provides a parsimonious representation of well-known stylized features of financial time series and facilitates statistical inference in the presence of high or explosive persistence and dynamic conditional volatility. We focus on the three-regime case, where the main feature of the model is the capturing of mean and volatility asymmetries in financial markets, while allowing an explosive volatility regime. A simulation study highlights the properties of our MCMC estimators and the accuracy and favourable performance, as a model selection tool compared with a deviance criterion, of the posterior model probability approximation method. An empirical study of eight international oil and gas markets provides strong support for the three-regime model over its competitors in most markets, in terms of model posterior probability, and reveals three distinct regime behaviours: falling/explosive, dormant and rising markets. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Semi-Empirical Equations for the Residence Time Distributions in Disperse Systems – Part 1: Continuous Phase

    CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 11 2004
    J.-H. Ham
    Abstract Residence time distributions (RTD) are often described on the basis of the dispersion or the tanks in series models, whereby the fitting is not always good. In addition, the underlying ideas of these models only roughly characterize the real existing processes. Two semi-empirical equations are presented based on characteristic parameters (mean, minimum, maximum residence time) and on an empirical exponent to permit better fitting. The determination of the parameters and their influence on the RTD are discussed. The usefulness of the models is shown in this first part for single-phase systems and for the continuous phase of multiphase systems using data from literature for laminar and turbulent flows in different apparatuses. A comparison with the results of other models is also done. [source]