Moving Average
Selected Abstracts

All-cause mortality and fatal alcohol poisoning in Belarus, 1970–2005
DRUG AND ALCOHOL REVIEW, Issue 5 2008
YURY E. RAZVODOVSKY
Abstract Introduction and Aims. Although alcohol appears to be an important contributor to the burden of disease in the countries of eastern Europe, little systematic research has been undertaken on its impact on mortality in the former Soviet republic of Belarus. There may be a number of factors underlying the particularly negative effect of alcohol on mortality in Belarus, including the pattern of drinking and use of surrogates. A solid body of research and empirical evidence suggests that hazardous patterns of alcohol consumption (binge drinking) lead to quicker and deeper intoxication, increasing the propensity for alcohol-related mortality. Design and Method. To estimate the aggregate-level effect of binge drinking on the all-cause mortality rate, trends in the all-cause mortality and fatal alcohol poisoning rates (as a proxy for binge drinking) in Belarus from 1970 to 2005 were analysed employing AutoRegressive Integrated Moving Average (ARIMA) time-series analysis in order to assess a bivariate relationship between the two time series. Results. The results of time-series analysis suggest a close relationship between all-cause mortality and fatal alcohol poisoning rates at the population level. Conclusions. This study supports the hypothesis that alcohol and all-cause mortality are connected closely in countries where the drinking culture is characterised by heavy drinking episodes, and adds to the growing body of evidence that a substantial proportion of total mortality in Belarus is due to acute effects of binge drinking. [source]
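A minimal sketch of the kind of bivariate ARIMA analysis the Belarus study above describes, assuming two annual series are available as pandas objects; the (1,1,1) order and the synthetic data are illustrative assumptions, not values from the paper.

```python
# Fit an ARIMA model to one annual series with the other as a regressor,
# approximating a bivariate ARIMA time-series analysis. Data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
years = pd.period_range("1970", "2005", freq="Y")
poisoning = pd.Series(10 + np.cumsum(rng.normal(0, 1, len(years))), index=years)
mortality = pd.Series(1.5 * poisoning.values + rng.normal(0, 1, len(years)),
                      index=years)

model = ARIMA(mortality, exog=poisoning, order=(1, 1, 1))  # hypothetical order
result = model.fit()
# The coefficient on the poisoning series estimates the aggregate-level
# association between the two series after the ARIMA filtering.
print(result.summary())
```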
Image signal-to-noise ratio estimation using Shape-Preserving Piecewise Cubic Hermite Autoregressive Moving Average model
MICROSCOPY RESEARCH AND TECHNIQUE, Issue 10 2008
K.S. Sim
Abstract We propose to cascade the Shape-Preserving Piecewise Cubic Hermite model with the Autoregressive Moving Average (ARMA) interpolator; we call this technique the Shape-Preserving Piecewise Cubic Hermite Autoregressive Moving Average (SP2CHARMA) model. In a few test cases involving different images, this model is found to deliver an optimum solution for signal-to-noise ratio (SNR) estimation problems under different noise environments. The performance of the proposed estimator is compared with two existing methods: the autoregressive-based and autoregressive moving average estimators. Being more robust to noise, the SP2CHARMA estimator has an efficiency that is significantly greater than those of the two methods. Microsc. Res. Tech., 2008. © 2008 Wiley-Liss, Inc. [source]

Effect of Noise on T-Wave Alternans Measurement in Ambulatory ECGs Using Modified Moving Average versus Spectral Method
PACING AND CLINICAL ELECTROPHYSIOLOGY, Issue 5 2009
RAJA J. SELVARAJ M.D.
Background: The modified moving average (MMA) and spectral method (SM) are commonly used to measure T-wave alternans (TWA), but their accuracy has not been compared in ambulatory electrocardiograms (ECGs), where the TWA signal-to-noise ratio is low. Our objective was to compare the effect of noise and signal nonstationarity on the accuracy of TWA measurement using MMA versus SM when applied to synthetic and ambulatory ECGs. Methods: Periodic and nonperiodic noise were added to noiseless synthetic ECGs. Simulated TWA (0–20 μV) was added to synthetic ECGs and ambulatory ECG recordings. TWA was measured using SM and MMA, and the measurement error relative to added TWA was compared. An MMA ratio was used to discriminate TWA signal from noise. Signal nonstationarity was simulated by changing heart rate, TWA magnitude, and TWA phase. Results: With no added TWA, MMA falsely measured TWA in synthetic and ambulatory ECGs, while false measurement was not seen with SM. An MMA ratio > 1.2 eliminated false TWA detection. In the presence of low TWA magnitude (<10 μV), TWA was overestimated by MMA and underestimated by SM in proportion to the noise level. In synthetic ECGs with periodic noise and 10-μV added TWA, MMA was less accurate than SM. The effects of simulated signal nonstationarity on the TWA magnitude measured with MMA versus SM were similar using a 64-beat analysis window. Conclusions: In the presence of noise, MMA falsely detects or overestimates simulated TWA in ambulatory ECG recordings. In this setting, the proposed MMA ratio improves the specificity of MMA. [source]

Consistency of dynamic site response at Port Island
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 6 2001
Laurie G. Baise
Abstract System identification (SI) methods are used to determine empirical Green's functions (EGF) for soil intervals at the Port Island site in Kobe, Japan, and in shake table model tests performed by the Port and Harbor Research Institute (PHRI) to emulate the site during the 17 January 1995 Hyogo-ken Nanbu earthquake. The model form for the EGFs is a parametric auto-regressive moving average (ARMA) model mapping the ground motions recorded at the base of a soil interval to the top of that interval, hence capturing the effect of the soil on the through-passing wave. The consistency of site response at Port Island before, during, and after the mainshock is examined by application of small-motion foreshock EGFs to incoming ground motions over these time intervals. The prediction errors (or misfits) for the foreshocks, the mainshock, and the aftershocks are assessed to determine the extent of altered soil response as a result of liquefaction of the ground during the mainshock. In addition, the consistency of soil response between field and model test is verified by application of EGFs calculated from the shake table test to the 17 January input data. The prediction error is then used to assess the consistency of behaviour between the two cases. By using EGFs developed for small-amplitude foreshock ground motions, ground motions were predicted with small error for all intervals of the vertical array except those that liquefied. Analysis of the post-liquefied ground conditions implies that the site response gradually returns to a pre-earthquake state. Site behaviour is found to be consistent between foreshocks and the mainshock for the native ground (below 16 m in the field), with a normalized mean square error (NMSE) of 0.080 and a peak ground acceleration (PGA) of 0.5g. When the soil actually liquefies (change of state), recursive models are needed to track the variable soil behaviour for the remainder of the shaking. The recursive models are shown to demonstrate consistency between the shake table tests and the field, with an NMSE of 0.102 for the 16 m-to-surface interval that liquefied. The aftershock ground response was not modelled well by the foreshock EGF immediately after the mainshock (NMSE ranging from 0.37 to 0.92). One month after the mainshock, the prediction error from the foreshock model was back to the foreshock error level. Copyright © 2001 John Wiley & Sons, Ltd. [source]
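Outside the abstract itself: the ARMA empirical-Green's-function idea above can be sketched by fitting an ARMA model with the base-of-interval motion as an exogenous input, then scoring predictions with the NMSE the authors report. The model orders, the synthetic records, and the use of a single contemporaneous input term are all illustrative assumptions, not the paper's specification.

```python
# Sketch: an ARMA-with-input 'empirical Green's function' mapping base motion
# to surface motion, scored by normalized mean square error (NMSE).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
base = rng.normal(size=2000)                          # input: base accelerogram
surface = np.convolve(base, [0.6, 0.3, 0.1])[:2000] \
          + 0.05 * rng.normal(size=2000)              # synthetic surface record

egf = ARIMA(surface, exog=base, order=(2, 0, 1)).fit()  # hypothetical orders

pred = egf.predict()
nmse = np.mean((surface - pred) ** 2) / np.var(surface)
print(f"NMSE = {nmse:.3f}")   # small NMSE suggests consistent site response
```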
On the Use of the Moving Average Trading Rule to Test for Weak Form Efficiency in Capital Markets
ECONOMIC NOTES, Issue 2 2008
Alexandros E. Milionis
Testing for predictive power in the moving average trading rule has been used extensively to test the hypothesis of weak-form market efficiency in capital markets. This work focuses mainly on the study of the variation of the moving average (MA) trading rule performance as a function of the length of the longer MA. Empirical analysis of daily data from the NYSE and the Athens Stock Exchange reveals high variability of the performance of the MA trading rule as a function of the MA length, and on some occasions the series of successive trading rule total returns is non-stationary. These findings have direct implications for weak-form market efficiency testing. Indeed, given this high variability of the performance of the MA trading rule, merely finding that trading rules with some specific combinations of MA lengths can or cannot beat the market, as is the case in most of the published work thus far, is not enough evidence for or against the existence of weak-form market efficiency. Results also show that on average in about three out of four cases trading rule signals are false, a fact that leaves a lot of space for improved trading rule performance if trading rule signals are combined with other information (e.g. filters, or volume of trade). Finally, some evidence of enhanced trading rule performance for the shorter MA lengths was found. This enhanced performance is partly attributed to the higher probability that a trading rule signal is not a whipsaw, as well as to the larger number of days out of the market associated with shorter MA lengths. [source]
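A minimal sketch of the long/short moving-average crossover rule studied above: hold the asset when the short MA is above the long MA, stay out otherwise. The window lengths and the random-walk price series are assumptions for illustration; as the abstract stresses, performance varies strongly with the longer MA length.

```python
# Moving-average crossover trading rule on a synthetic price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000))))

short_ma = price.rolling(5).mean()
long_ma = price.rolling(50).mean()    # rule performance varies with this length

signal = (short_ma > long_ma).astype(int).shift(1)  # trade on yesterday's signal
returns = price.pct_change()
rule_returns = (signal * returns).dropna()

print("buy-and-hold mean:", returns.mean(), " MA-rule mean:", rule_returns.mean())
```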
The two pillars of the European Central Bank
ECONOMIC POLICY, Issue 40 2004
Stefan Gerlach
SUMMARY I interpret the European Central Bank's two-pillar strategy by proposing an empirical model for inflation that distinguishes between the short- and long-run components of inflation. The latter component depends on an exponentially weighted moving average of past monetary growth, and the former on the output gap. Estimates for the 1971–2003 period suggest that money can be combined with other indicators to form the 'broadly based assessment of the outlook for future price developments' that constitutes the ECB's second pillar. However, the analysis does not suggest that money should be treated differently from other indicators. While money is a useful policy indicator, all relevant indicators should be assessed in an integrated manner, and a separate pillar focused on monetary aggregates does not appear necessary. [source]

Signal denoising and baseline correction by discrete wavelet transform for microchip capillary electrophoresis
ELECTROPHORESIS, Issue 18 2003
Bi-Feng Liu
Abstract Signal denoising and baseline correction using the discrete wavelet transform (DWT) are described for microchip capillary electrophoresis (MCE). DWT was performed on an electropherogram describing a separation of nine tetramethylrhodamine-5-isothiocyanate labeled amino acids, following MCE with laser-induced fluorescence detection, using a Daubechies 5 wavelet at a decomposition level of 6. The denoising efficiency was compared with, and proved to be superior to, other commonly used denoising techniques such as the Fourier transform, Savitzky-Golay smoothing and the moving average, in terms of noise removal and peak preservation on direct visual inspection. Novel strategies for baseline correction were proposed, with a special interest in the baseline drift that frequently occurs in chromatographic and electrophoretic separations. [source]
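A sketch of DWT denoising with a Daubechies 5 wavelet at decomposition level 6, as in the electrophoresis abstract above, compared against a plain moving average. The soft universal threshold, the window width and the synthetic two-peak signal are assumptions, not details from the paper.

```python
# Wavelet denoising (db5, level 6) versus moving-average smoothing.
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 4096)
clean = np.exp(-((t - 0.3) / 0.01) ** 2) + 0.7 * np.exp(-((t - 0.6) / 0.008) ** 2)
noisy = clean + 0.05 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db5", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level estimate
thresh = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                        for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db5")

moving_avg = np.convolve(noisy, np.ones(15) / 15, mode="same")
print("DWT MSE:", np.mean((denoised - clean) ** 2),
      " MA MSE:", np.mean((moving_avg - clean) ** 2))
```

The moving average trades peak height for smoothness, which is why the abstract reports better peak preservation with the DWT.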
Homicide in Chicago from 1890 to 1930: prohibition and its impact on alcohol- and non-alcohol-related homicides
ADDICTION, Issue 3 2009
Mark Asbridge
ABSTRACT Aim The aim of the current paper is to examine the impact of the enactment of constitutional prohibition in the United States in 1920 on total homicides, alcohol-related homicides and non-alcohol-related homicides in Chicago. Design Data are drawn from the Chicago Historical Homicide Project, a data set chronicling 11 018 homicides in Chicago between 1870 and 1930. Interrupted time-series and autoregressive integrated moving average (ARIMA) models are employed to examine the impact of prohibition on three separate population-adjusted homicide series. All models control for potential confounding from World War I demobilization and from trend data drawn from Wesley Skogan's Time-Series Data from Chicago. Findings Total and non-alcohol-related homicide rates increased during prohibition by 21% and 11%, respectively, while alcohol-related homicides remained unchanged. For other covariates, alcohol-related homicides were related negatively to the size of the Chicago police force and positively to police expenditures and to the proportion of the Chicago population aged 21 years and younger. Non-alcohol-related homicides were related positively to police expenditures and negatively to the size of the Chicago police force. Conclusions While total and non-alcohol-related homicides in the United States continued to rise during prohibition, a finding consistent with other studies, the rate of alcohol-related homicides remained unchanged. The divergent impact of prohibition on alcohol- and non-alcohol-related homicides is discussed in relation to previous studies of homicide in this era. [source]

Impact of US and Canadian precursor regulation on methamphetamine purity in the United States
ADDICTION, Issue 3 2009
James K. Cunningham
ABSTRACT Aims Reducing drug purity is a major, but largely unstudied, goal of drug suppression. This study examines whether US methamphetamine purity was impacted by the suppression policy of US and Canadian precursor chemical regulation. Design Autoregressive integrated moving average (ARIMA)-intervention time-series analysis. Setting Continental United States and Hawaii (1985–May 2005). Interventions US federal regulations targeting the precursors ephedrine and pseudoephedrine, in forms used by large-scale producers, were implemented in November 1989, August 1995 and October 1997. US regulations targeting precursors in forms used by small-scale producers (e.g. over-the-counter medications) were implemented in October 1996 and October 2001. Canada implemented federal precursor regulations in January 2003 and July 2003 and an essential chemical (e.g. acetone) regulation in January 2004. Measurements Monthly median methamphetamine purity series. Findings US regulations targeting large-scale producers were associated with purity declines of 16–67 points; those targeting small-scale producers had little or no impact. Canada's precursor regulations were associated with purity increases of 13–15 points, while its essential chemical regulation was associated with a 13-point decrease. Hawaii's purity was consistently high and appeared to vary little with the 1990s/2000s regulations. Conclusions US precursor regulations targeting large-scale producers were associated with substantial decreases in continental US methamphetamine purity, while regulations targeting over-the-counter medications had little or no impact. Canada's essential chemical regulation was also associated with a decrease in continental US purity. However, Canada's precursor regulations were associated with purity increases: these regulations may have impacted primarily producers of lower-quality methamphetamine, leaving higher-purity methamphetamine on the market by default. Hawaii's well-known preference for 'ice' (high-purity methamphetamine) may have helped to constrain purity there to a high, attenuated range, possibly limiting its sensitivity to precursor regulation. [source]
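A rough sketch of the ARIMA intervention analysis used in the two Addiction studies above: a step dummy marks the intervention date and enters the model as a regressor. The dates, the (1,1,1) order and the synthetic purity series are illustrative, not values from either paper.

```python
# ARIMA intervention analysis with a step dummy on a synthetic monthly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
idx = pd.period_range("1985-01", "2005-05", freq="M")
step = (idx >= pd.Period("1995-08", freq="M")).astype(float)  # intervention
purity = pd.Series(50 - 20 * step + np.cumsum(rng.normal(0, 0.5, len(idx))),
                   index=idx)

fit = ARIMA(purity, exog=step, order=(1, 1, 1)).fit()
print(fit.params["x1"])  # estimated level shift associated with the step
```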
Combining wavelet-based feature extractions with relevance vector machines for stock index forecasting
EXPERT SYSTEMS, Issue 2 2008
Shian-Chang Huang
Abstract: The relevance vector machine (RVM) is a Bayesian version of the support vector machine which, with its sparse model representation, has proved to be a powerful tool for time-series forecasting. The RVM has demonstrated better performance than other methods such as neural networks or autoregressive integrated moving average based models. This study proposes a hybrid model that combines wavelet-based feature extraction with RVM models to forecast stock indices. The time series of explanatory variables are decomposed using some wavelet bases, and the extracted time-scale features serve as inputs of an RVM to perform the non-parametric regression and forecasting. Compared with traditional forecasting models, our proposed method performs best. The root-mean-squared forecasting errors are significantly reduced. [source]

Time series forecasting by combining the radial basis function network and the self-organizing map
HYDROLOGICAL PROCESSES, Issue 10 2005
Gwo-Fong Lin
Abstract Based on a combination of a radial basis function network (RBFN) and a self-organizing map (SOM), a time-series forecasting model is proposed. Traditionally, the positioning of the radial basis centres is a crucial problem for the RBFN. In the proposed model, an SOM is used to construct the two-dimensional feature map from which the number of clusters (i.e. the number of hidden units in the RBFN) can be figured out directly by eye, and then the radial basis centres can be determined easily. The proposed model is examined using simulated time series data. The results demonstrate that the proposed RBFN is more competent in modelling and forecasting time series than an autoregressive integrated moving average (ARIMA) model. Finally, the proposed model is applied to actual groundwater head data. It is found that the proposed model can forecast more precisely than the ARIMA model. For time series forecasting, the proposed model is recommended as an alternative to the existing method, because it has a simple structure and can produce reasonable forecasts. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Testicular cancer: Marked birth cohort effects on incidence and a decline in mortality in southern Netherlands since 1970
INTERNATIONAL JOURNAL OF CANCER, Issue 3 2008
Rob Verhoeven
Abstract The aim of our study was to interpret the changing incidence, and to describe the mortality, of patients with testicular cancer in the south of the Netherlands between 1970 and 2004. On the basis of data from the Eindhoven Cancer Registry and Statistics Netherlands, 5-year moving average standardised incidence and mortality rates were calculated. An age-period-cohort (APC) Poisson regression analysis was performed to disentangle time and birth cohort effects on incidence. The incidence rate remained stable for all ages at about 3 per 100,000 person-years until 1989 but increased annually thereafter by 4%, to 6 per 100,000 in 2004. This increase can almost completely be attributed to an increase in localised tumours. The largest increase was found for seminoma testicular cancer (TC) patients aged 35–39 and non-seminoma TC patients aged 20–24 years. Relatively more localised tumours and tumours with lymph node metastases were detected in the later periods. APC analysis showed the best fit with an age-cohort model. An increase in incidence of TC was found for birth cohorts since 1950. The mortality rate dropped from 1.0 per 100,000 person-years in 1970 to 0.3 in 2005, with a steep annual decline of 12% in the period 1979–1986. In conclusion, the increase in incidence of TC was strongly correlated with birth cohorts since 1945. The increase in incidence is possibly caused by in utero or early-life exposure to a yet unknown risk factor. There was a steep decline in mortality in the period 1979–1986. © 2007 Wiley-Liss, Inc. [source]
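For readers unfamiliar with the smoothing step above: a 5-year moving average of annual rates is a centred rolling mean. A minimal sketch with invented rates (the registry's actual figures are not reproduced here):

```python
# Centred 5-year moving average of annual incidence rates (invented data).
import pandas as pd

rates = pd.Series(
    [3.0, 3.1, 2.9, 3.2, 3.4, 3.3, 3.6, 3.8, 4.1, 4.0],
    index=range(1990, 2000), name="incidence_per_100k",
)
smoothed = rates.rolling(window=5, center=True).mean()
print(smoothed.dropna())
```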
An efficient approach for computing non-Gaussian ARMA model coefficients using Pisarenko's method
INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 3 2005
Adnan Al-Smadi
Abstract This paper addresses the problem of estimating the coefficients of a general autoregressive moving average (ARMA) model from only third-order cumulants (TOCs) of the noisy observations of the system output. The observed signal may be corrupted by additive coloured Gaussian noise. The system is driven by a zero-mean independent and identically distributed (i.i.d.) non-Gaussian sequence. The input is not observed. The unknown model coefficients are obtained using eigenvalue–eigenvector decomposition. The derivation of this procedure is an extension of the Pisarenko harmonic autocorrelation-based (PHA) method to third-order statistics. It will be shown that the desired ARMA coefficient vector corresponds to the eigenvector associated with the minimum eigenvalue of a data covariance matrix of TOCs. The proposed method is also compared with well-known algorithms as well as with the PHA method. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Multidecadal climate variability of global lands and oceans
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 7 2006
Gregory J. McCabe
Abstract Principal components analysis (PCA) and singular value decomposition (SVD) are used to identify the primary modes of decadal and multidecadal variability in annual global Palmer Drought Severity Index (PDSI) values and sea-surface temperatures (SSTs). The PDSI and SST data for 1925–2003 were detrended and smoothed (with a 10-year moving average) to isolate the decadal and multidecadal variability. The first two principal components (PCs) of the PDSI PCA explained almost 38% of the decadal and multidecadal variance in the detrended and smoothed global annual PDSI data. The first two PCs of detrended and smoothed global annual SSTs explained nearly 56% of the decadal variability in global SSTs. The PDSI PCs and the SST PCs are directly correlated in a pairwise fashion. The first PDSI and SST PCs reflect variability of the detrended and smoothed annual Pacific Decadal Oscillation (PDO), as well as detrended and smoothed annual Indian Ocean SSTs. The second set of PCs is strongly associated with the Atlantic Multidecadal Oscillation (AMO). The SVD analysis of the cross-covariance of the PDSI and SST data confirmed the close link between the PDSI and SST modes of decadal and multidecadal variation and provided a verification of the PCA results. These findings indicate that the major modes of multidecadal variations in SSTs and land-surface climate conditions are highly interrelated through a small number of spatially complex but slowly varying teleconnections. Therefore, these relations may be adaptable to providing improved baseline conditions for seasonal climate forecasting. Copyright © 2006 John Wiley & Sons, Ltd. [source]

PAQM: an adaptive and proactive queue management for end-to-end TCP congestion control
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2004
Seungwan Ryu
Abstract Two functions, the congestion indicator (i.e. how to detect congestion) and the congestion control function (i.e. how to avoid and control congestion), are used at a router to support end-to-end congestion control in the Internet. Random early detection (RED) (IEEE/ACM Trans. Networking 1993; 1(4):397–413) enhanced the two functions by introducing queue-length averaging and probabilistic early packet dropping. In particular, RED uses an exponentially weighted moving average (EWMA) queue length not only to detect incipient congestion but also to smooth the bursty incoming traffic and its resulting transient congestion. Following RED, many active queue management (AQM) extensions have been proposed. However, many AQM proposals have shown severe problems in detecting and controlling incipient congestion adaptively under dynamically changing network situations. In this paper, we introduce and analyse a feedback control model of TCP/AQM dynamics. Then, we propose the Pro-active Queue Management (PAQM) mechanism, which is able to provide proactive congestion avoidance and control using an adaptive congestion indicator and a control function under a wide range of traffic environments. PAQM stabilizes the queue length around the desired level while giving smooth and low packet loss rates and high network resource utilization. Copyright © 2004 John Wiley & Sons, Ltd. [source]
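The RED-style EWMA queue-length indicator that PAQM builds on is a one-line update. A minimal sketch; the weight w and the sample queue trace are illustrative (RED's original paper suggests small weights such as 0.002):

```python
# EWMA of the instantaneous queue length, the RED congestion indicator.
def ewma_queue(samples, w=0.002):
    avg = 0.0
    for q in samples:
        avg = (1 - w) * avg + w * q   # q_avg <- (1 - w) * q_avg + w * q
        yield avg

trace = [0, 5, 12, 30, 28, 40, 35, 10, 2]
print(list(ewma_queue(trace, w=0.1)))  # larger w for a visible effect here
```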
Near-optimum short-term fade prediction on satellite links at Ka and V-bands
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 1 2008
Andrew P. Chambers
Abstract Several short-term predictors of rain attenuation are implemented and tested using data recorded from a satellite link in Southern England, and a comparison is made in terms of the root-mean-square error and the cumulative distribution of under-predictions. A hybrid of an autoregressive moving average and adaptive linear element predictor is created that makes use of Gauss–Newton and gradient-direction coefficient updates, and it exhibits the best prediction error performance of all prediction methods in the majority of cases. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Developing a measure of patient access to primary care: the access response index (AROS)
JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 1 2003
Glyn Elwyn BA MSc PhD FRCGP
Abstract Access to appointments in primary care is not routinely measured, and there is no one standardized method for doing so. Any measurement tool has to take account of the dynamic status of appointment availability and the definitional problems of appointment types. The aim of this study was to develop and trial a method for measuring access that is valid, reliable, quick and provides a daily longitudinal record of access on an organizational basis (not for individual clinicians). Using the results of a literature review, and following discussions with clinicians and managers, a tool was designed to agreed specifications. After initial adjustments of the tool, a feasibility study tested the acceptability of a data collection exercise in 11 practices of varying types over a 4- to 8-week period. The development phase led to the design of a tool named the access response index (AROS). The method was well received in the practices, with a low incidence of missed days and only one practice failing to return data. The index measures the number of days' wait to the next available appointment with any general practitioner. The inclusion of urgent appointments in the score was abandoned owing to definitional problems. A 5-day moving average was chosen to represent the data in graph form to demonstrate overall trends. AROS is a useful tool usable in any practice, and our feasibility study points to it being widely acceptable in the field. Data are represented in a clear graphical daily format, either for one practice alone or as an anonymous composite graph with other practices in the locality. [source]

The impact of multiple volatilities on import demand for U.S. commodities: the case of soybeans
AGRIBUSINESS: AN INTERNATIONAL JOURNAL, Issue 2 2010
Qiang Zhang
The focus of this study is the effects of exchange rate, commodity price, and ocean freight cost risks on import demand with forward-futures markets. The case of U.S. and Brazilian soybeans is analyzed empirically using monthly data. A two-way error component two-stage least squares procedure for panel data is used for the analysis. Risk for each of these three factors is measured by the moving average of the standard deviation. Major soybean importers are sensitive to exchange rate risk. Importing countries in general are not sensitive to soybean price and ocean shipping cost risks for Brazilian or U.S. soybeans. © 2010 Wiley Periodicals, Inc. [source]
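The risk measure named above, a moving average of the standard deviation, can be sketched in a few lines. The window lengths and the synthetic monthly exchange-rate series are assumptions; the abstract does not state the windows used.

```python
# Moving average of a rolling standard deviation as a volatility risk proxy.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
fx = pd.Series(np.cumsum(rng.normal(0, 0.01, 120)))  # monthly log exchange rate

rolling_sd = fx.diff().rolling(12).std()   # 12-month rolling volatility
risk = rolling_sd.rolling(6).mean()        # moving average of the std dev
print(risk.dropna().tail())
```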
Forecasting volatility with support vector machine-based GARCH model
JOURNAL OF FORECASTING, Issue 4 2010
Shiyi Chen
Abstract Recently, the support vector machine (SVM), a novel artificial neural network (ANN), has been successfully used for financial forecasting. This paper deals with the application of SVM to volatility forecasting under the GARCH framework, the performance of which is compared with the simple moving average, standard GARCH, nonlinear EGARCH and traditional ANN-GARCH models using two evaluation measures and robust Diebold–Mariano tests. The real data used in this study are daily GBP exchange rates and the NYSE composite index. Empirical results from both simulation and real data reveal that, under a recursive forecasting scheme, SVM-GARCH models significantly outperform the competing models in most situations of one-period-ahead volatility forecasting, which confirms the theoretical advantage of SVM. The standard GARCH model also performs well in the case of normality and large sample size, while the EGARCH model is good at forecasting volatility under highly skewed distributions. The sensitivity analysis for choosing SVM parameters and the cross-validation used to determine the stopping point of the recurrent SVM procedure are also examined in this study. Copyright © 2009 John Wiley & Sons, Ltd. [source]

A non-Gaussian generalization of the Airline model for robust seasonal adjustment
JOURNAL OF FORECASTING, Issue 5 2006
JOHN A. D. ASTON
Abstract In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log-likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy-tailed densities such as the t or the mixture of normals. For this class of non-Gaussian models, Monte Carlo simulation techniques will be used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Why Has U.S. Inflation Become Harder to Forecast?
JOURNAL OF MONEY, CREDIT AND BANKING, Issue 2007
JAMES H. STOCK
Keywords: Phillips curve; trend-cycle model; moving average; great moderation
We examine whether the U.S. rate of price inflation has become harder to forecast and, to the extent that it has, what changes in the inflation process have made it so. The main finding is that the univariate inflation process is well described by an unobserved component trend-cycle model with stochastic volatility or, equivalently, an integrated moving average process with time-varying parameters. This model explains a variety of recent univariate inflation forecasting puzzles and begins to explain some multivariate inflation forecasting puzzles as well. [source]
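The trend-cycle/IMA equivalence invoked above can be written out in two lines. This is the standard constant-variance local-level reduction; the paper's version lets the variances drift over time, which yields the time-varying-parameter IMA described in the abstract.

```latex
% Local-level (trend-cycle) form of inflation and its reduced ARIMA form:
% the first difference of pi_t is an MA(1), i.e. inflation is IMA(1,1).
\begin{align*}
  \pi_t &= \tau_t + \varepsilon_t, \qquad \tau_t = \tau_{t-1} + \eta_t \\
  \Delta \pi_t &= \eta_t + \varepsilon_t - \varepsilon_{t-1} \;\sim\; \mathrm{MA}(1)
\end{align*}
```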
Local and marginal control charts applied to methicillin resistant Staphylococcus aureus bacteraemia reports in UK acute National Health Service trusts
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2009
O. A. Grigg
Summary. We consider the general problem of simultaneously monitoring multiple series of counts, applied in this case to methicillin resistant Staphylococcus aureus (MRSA) reports in 173 UK National Health Service acute trusts. Both within-trust changes from baseline ('local monitors') and overall divergence from the bulk of trusts ('relative monitors') are considered. After standardizing for type of trust and overall trend, a transformation to approximate normality is adopted, and empirical Bayes shrinkage methods are used for estimating an appropriate baseline for each trust. Shewhart, exponentially weighted moving average and cumulative sum charts are then set up for both local and relative monitors: the current state of each is summarized by a p-value, which is processed by a signalling procedure that controls the false discovery rate. The performance of these methods is illustrated by using 4.5 years of MRSA data, and the appropriate use of such methods in practice is discussed. [source]
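A minimal sketch of the EWMA control chart named above, applied to one monitored series. The smoothing constant, target, and control-limit width are textbook illustrative choices, not values from the paper, and the counts are invented.

```python
# EWMA control chart with the standard time-varying control limits.
import numpy as np

def ewma_chart(x, target, sigma, lam=0.2, L=3.0):
    """Yield (ewma, lower, upper) for each observation in x."""
    z = target
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        yield z, target - half, target + half

counts = [2, 1, 3, 2, 2, 4, 6, 7, 9, 8]   # e.g. transformed quarterly reports
for z, lo, hi in ewma_chart(counts, target=2.5, sigma=1.0):
    flag = "signal" if not (lo <= z <= hi) else ""
    print(f"{z:5.2f}  [{lo:5.2f}, {hi:5.2f}] {flag}")
```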
A Bayesian nonlinearity test for threshold moving average models
JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2010
Qiang Xia
We propose a Bayesian test for nonlinearity of threshold moving average (TMA) models. First, we obtain the marginal posterior densities of all parameters, including the threshold and delay, of the TMA model using a Gibbs sampler with the Metropolis–Hastings algorithm. Then, we adopt reversible-jump Markov chain Monte Carlo methods to calculate the posterior probabilities for MA and TMA models. Posterior evidence in favour of the TMA model indicates threshold nonlinearity. Simulation experiments and a real example show that our method works very well in distinguishing MA and TMA models. [source]

A light-tailed conditionally heteroscedastic model with applications to river flows
JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2008
Péter Elek
Abstract. A conditionally heteroscedastic model, different from the more commonly used autoregressive moving average–generalized autoregressive conditionally heteroscedastic (ARMA-GARCH) processes, is established and analysed here. The time-dependent variance of innovations passing through an ARMA filter is conditioned on the lagged values of the generated process, rather than on the lagged innovations, and is defined to be asymptotically proportional to those past values. Designed this way, the model incorporates certain feedback from the modelled process, the innovation is no longer of GARCH type, and all moments of the modelled process are finite provided the same is true for the generating noise. The article gives the condition of stationarity, and proves consistency and asymptotic normality of the Gaussian quasi-maximum likelihood estimator of the variance parameters, even though the estimated parameters of the linear filter contain an error. An analysis of six diurnal water discharge series observed along the Danube and Tisza rivers in Hungary demonstrates the usefulness of such a model. The effect of lagged river discharge turns out to be highly significant on the variance of innovations, and nonparametric estimation supports its approximate linearity. Simulations from the new model preserve well the probability distribution, the high quantiles, the tail behaviour and the high-level clustering of the original series, further justifying the model choice. [source]

Embedding a Gaussian discrete-time autoregressive moving average process in a Gaussian continuous-time autoregressive moving average process
JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2007
Mituaki Huzii
Abstract. Embedding a discrete-time autoregressive moving average (DARMA) process in a continuous-time ARMA (CARMA) process has been discussed by many authors. These authors have considered the relationship between the autocovariance structures of continuous-time and related discrete-time processes. In this article, we treat the problem from a slightly different point of view. We define embedding in a more rigid way by taking account of the probability structure. We consider Gaussian processes. First, we summarize the necessary and sufficient condition for a DARMA process to be embeddable in a CARMA process. Second, we show a concrete condition such that a DARMA process can be embeddable in a CARMA process. This condition is new and general. Third, we show some special cases, including new examples, and we show how embeddability can be examined in these special cases. [source]

A Note on Non-Negative ARMA Processes
JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2007
Henghsiu Tsai
Abstract. Recently, there has been much research on developing models suitable for analysing the volatility of a discrete-time process. Since the volatility process, like many others, is necessarily non-negative, there is a need to construct models for stationary processes which are non-negative with probability one. Such models can be obtained by driving autoregressive moving average (ARMA) processes that have a non-negative kernel with non-negative white noise. This raises the problem of finding simple conditions under which an ARMA process with given coefficients has a non-negative kernel. In this article, we derive a necessary and sufficient condition. This condition is in terms of the generating function of the ARMA kernel, which has a simple form. Moreover, we derive some readily verifiable necessary and sufficient conditions for some ARMA processes to be non-negative almost surely. [source]
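The "kernel" in the abstract above is the MA(infinity) coefficient sequence of the ARMA process. A numerical screen for non-negativity can be sketched with the usual psi-weight recursion; note that a finite truncation can only screen, not prove, non-negativity, which is what makes the paper's exact condition useful. The example coefficients are arbitrary.

```python
# Compute ARMA(p,q) psi-weights and inspect their signs over a truncation.
import numpy as np

def arma_kernel(phi, theta, n=200):
    """psi_0..psi_{n-1}, with psi_j = theta_j + sum_i phi_i * psi_{j-i}."""
    psi = np.zeros(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = theta[j - 1] if j <= len(theta) else 0.0
        for i, p in enumerate(phi, start=1):
            if j - i >= 0:
                psi[j] += p * psi[j - i]
    return psi

psi = arma_kernel(phi=[0.5], theta=[0.3])   # ARMA(1,1) example
print("all non-negative over truncation:", bool((psi >= 0).all()))
```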
High Moment Partial Sum Processes of Residuals in ARMA Models and their Applications
JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2007
Hao Yu
Abstract. In this article, we study high moment partial sum processes based on residuals of a stationary autoregressive moving average (ARMA) model with known or unknown mean parameter. We show that they can be approximated in probability by the analogous processes obtained from the i.i.d. errors of the ARMA model. However, if an unknown mean parameter is used, there will be an additional term that depends on the model parameters and a mean estimator. When properly normalized, this additional term vanishes. Thus the processes converge weakly to the same Gaussian processes as if the residuals were i.i.d. Applications to change-point problems and goodness-of-fit are considered, in particular cumulative sum statistics for testing ARMA model structure changes and the Jarque–Bera omnibus statistic for testing normality of the unobservable error distribution of an ARMA model. [source]

Moving Average Representations for Multivariate Stationary Processes
JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2006
A. R. Soltani
Abstract. Backward and forward moving average (MA) representations are established for multivariate stationary processes. It is observed that, in the multivariate case, in contrast to the univariate case, the backward and forward MA coefficients are in general different. A method is presented to adapt the known techniques for deriving the backward MA to obtain the forward ones. [source]

Asymptotic self-similarity and wavelet estimation for long-range dependent fractional autoregressive integrated moving average time series with stable innovations
JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2005
Stilian Stoev
Abstract. Methods for parameter estimation in the presence of long-range dependence and heavy tails are scarce. Fractional autoregressive integrated moving average (FARIMA) time series with positive values of the fractional differencing exponent d can be used to model long-range dependence in the case of heavy-tailed distributions. In this paper, we focus on the estimation of the Hurst parameter H = d + 1/α for long-range dependent FARIMA time series with symmetric α-stable (1 < α < 2) innovations. We establish the consistency and the asymptotic normality of two types of wavelet estimators of the parameter H. We do so by exploiting the fact that the integrated series is asymptotically self-similar with parameter H. When the parameter α is known, we also obtain consistent and asymptotically normal estimators for the fractional differencing exponent d = H − 1/α. Our results hold for a larger class of causal linear processes with stable symmetric innovations. As the wavelet-based estimation method used here is semi-parametric, it allows for a more robust treatment of long-range dependent data than parametric methods. [source]

First-Order Autoregressive Processes with Heterogeneous Persistence
JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2003
JOANN JASIAK
Abstract. We propose a semi-nonparametric method of identification and estimation for Gaussian autoregressive processes with stochastic autoregressive coefficients. The autoregressive coefficient is considered as a latent process with either a moving average or regime-switching representation. We develop a consistent estimator of the distribution of the autoregressive coefficient based on a nonlinear canonical decomposition of the observed process. The approach is illustrated by simulations. [source]
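A rough sketch of the wavelet-based estimation of a self-similarity parameter in the spirit of the Stoev abstract above: regress the log2 of the average squared wavelet coefficient on scale. Gaussian data are used here for simplicity (so alpha = 2 and H = d + 1/2); the wavelet, the number of levels, and the slope relation slope = 2H + 1 for self-similar processes with stationary increments are standard but are assumptions relative to the paper's exact estimators.

```python
# Log-scale (wavelet variance) regression for the self-similarity parameter H.
import numpy as np
import pywt

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(size=2 ** 14))      # random walk, H close to 0.5

coeffs = pywt.wavedec(x, "db2", level=8)
details = coeffs[1:][::-1]                    # reorder: j = 1 (finest) upward
j = np.arange(1, len(details) + 1)
log_energy = np.log2([np.mean(d ** 2) for d in details])

slope = np.polyfit(j, log_energy, 1)[0]       # approx. 2H + 1
H = (slope - 1) / 2
print(f"estimated H = {H:.2f}")
```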