Time Series (time + series)
Selected Abstracts

CHAOTIC FORECASTING OF DISCHARGE TIME SERIES: A CASE STUDY
JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 2 2001
Francesco Lisi

ABSTRACT: This paper considers the problem of forecasting the discharge time series of a river by means of a chaotic approach. To this aim, we first check for evidence of chaotic behavior in the dynamics by considering a set of different procedures, namely, the phase portrait of the attractor, the correlation dimension, and the largest Lyapunov exponent. Their joint application seems to confirm the presence of nonlinear deterministic dynamics of chaotic type. Second, we consider the so-called nearest neighbors predictor and compare it with a classical linear model. The comparison suggests that nonlinear river flow modeling, and in particular chaotic modeling, is an effective way to improve predictions. [source]

HEAVY-TAILED-DISTRIBUTED THRESHOLD STOCHASTIC VOLATILITY MODELS IN FINANCIAL TIME SERIES
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2008
Cathy W. S. Chen

Summary: To capture mean and variance asymmetries and time-varying volatility in financial time series, we generalize the threshold stochastic volatility (THSV) model and incorporate a heavy-tailed error distribution. Unlike existing stochastic volatility models, this model simultaneously accounts for uncertainty in the unobserved threshold value and in the time-delay parameter. Self-exciting and exogenous threshold variables are considered to investigate the impact of a number of market news variables on volatility changes. Adopting a Bayesian approach, we use Markov chain Monte Carlo methods to estimate all unknown parameters and latent variables. A simulation experiment demonstrates good estimation performance for reasonable sample sizes.
In a study of two international financial market indices, we consider two variants of the generalized THSV model, with US market news as the threshold variable. Finally, we compare models using Bayesian forecasting in a value-at-risk (VaR) study. The results show that our proposed model can generate more accurate VaR forecasts than can standard models. [source]

ELICITING A DIRECTED ACYCLIC GRAPH FOR A MULTIVARIATE TIME SERIES OF VEHICLE COUNTS IN A TRAFFIC NETWORK
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2007
Catriona M. Queen

Summary: The problem of modelling multivariate time series of vehicle counts in traffic networks is considered. It is proposed to use a model called the linear multiregression dynamic model (LMDM). The LMDM is a multivariate Bayesian dynamic model which uses any conditional independence and causal structure across the time series to break down the complex multivariate model into simpler univariate dynamic linear models. The conditional independence and causal structure in the time series can be represented by a directed acyclic graph (DAG). The DAG not only gives a useful pictorial representation of the multivariate structure, but it is also used to build the LMDM. Therefore, eliciting a DAG which gives a realistic representation of the series is a crucial part of the modelling process. A DAG is elicited for the multivariate time series of hourly vehicle counts at the junction of three major roads in the UK. A flow diagram is introduced to give a pictorial representation of the possible vehicle routes through the network. It is shown how this flow diagram, together with a map of the network, can suggest a DAG for the time series suitable for use with an LMDM. [source]

Analyzing Bank Filtration by Deconvoluting Time Series of Electric Conductivity
GROUND WATER, Issue 3 2007
Olaf A. Cirpka

Knowing the travel-time distributions from infiltrating rivers to pumping wells is important in the management of alluvial aquifers.
Commonly, travel-time distributions are determined by releasing a tracer pulse into the river and measuring the breakthrough curve in the wells. As an alternative, one may measure signals of a time-varying natural tracer in the river and in adjacent wells and infer the travel-time distributions by deconvolution. Traditionally, this is done by fitting a parametric function, such as the solution of the one-dimensional advection-dispersion equation, to the data. With such a parameterization, however, it is impossible to recover features of the travel-time distribution that do not follow the general shape of the parametric form, such as multiple peaks. We present a method to determine travel-time distributions by nonparametric deconvolution of electric-conductivity time series. Smoothness of the inferred transfer function is achieved by a geostatistical approach, in which the transfer function is treated as a second-order intrinsic random time variable. Nonnegativity is enforced by the method of Lagrange multipliers. We present an approach to directly compute the best nonnegative estimate and to generate sets of plausible solutions. We show how the smoothness of the transfer function can be estimated from the data. The approach is applied to electric-conductivity measurements taken at River Thur, Switzerland, and five wells in the adjacent aquifer, but the method can also be applied to other time-varying natural tracers such as temperature. At our field site, electric-conductivity fluctuations appear to be an excellent natural tracer. [source]

Time Series Based Errors and Empirical Errors in Fertility Forecasts in the Nordic Countries
INTERNATIONAL STATISTICAL REVIEW, Issue 1 2004
Nico Keilman

Summary: We use ARCH time series models to derive model based prediction intervals for the Total Fertility Rate (TFR) in Norway, Sweden, Finland, and Denmark up to 2050.
For the short term (5–10 years), expected TFR errors are compared with empirical forecast errors observed in historical population forecasts prepared by the statistical agencies in these countries since 1969. Medium-term and long-term (up to 50 years) errors are compared with error patterns based on so-called naïve forecasts, i.e. forecasts that assume that recently observed TFR levels also apply for the future.

Résumé (translated from the French): We constructed ARCH-type time series models to compute prediction intervals for the Total Fertility Rate (TFR) for Norway, Sweden, Finland, and Denmark up to the year 2050. For the short term (5–10 years ahead), the expected TFR errors are compared with the errors computed from historical population forecasts prepared by the statistical agencies of these countries since 1969. Medium-term and long-term errors (up to 50 years ahead) are compared with error patterns based on so-called "naïve" forecasts, i.e. forecasts assuming that the TFR level observed in a recent period also holds for the future. For the short term, we find that the prediction intervals computed from the time series model and those derived from historical errors are of the same order of magnitude. Caution is warranted, however, because the historical data base is limited. The naïve errors provide useful information for both the short and the long term: prediction intervals based on naïve errors 50 years ahead compare very well with the intervals based on the time series model, except for Denmark, where the available data do not allow naïve intervals to be computed for forecast horizons beyond 20 years. In general, neither the historical nor the naïve errors indicate that prediction intervals based on ARCH-type time series models are excessively wide. We find that the 67 per cent intervals for the TFR have a width of about 0.5 children per woman at a 10-year horizon, and approximately 0.85 children per woman at 50 years. [source]

Bootstrapping Financial Time Series
JOURNAL OF ECONOMIC SURVEYS, Issue 3 2002
Esther Ruiz

It is well known that time series of returns are characterized by volatility clustering and excess kurtosis. Therefore, when modelling the dynamic behavior of returns, inference and prediction methods based on independent and/or Gaussian observations may be inadequate. As bootstrap methods are not, in general, based on any particular assumption about the distribution of the data, they are well suited for the analysis of returns. This paper reviews the application of bootstrap procedures for inference and prediction in financial time series. In relation to inference, bootstrap techniques have been applied to obtain the sample distribution of statistics for testing, for example, autoregressive dynamics in the conditional mean and variance, unit roots in the mean, fractional integration in volatility and the predictive ability of technical trading rules. On the other hand, bootstrap procedures have been used to estimate the distribution of returns, which is of interest, for example, for Value at Risk (VaR) models or for prediction purposes. Although the application of bootstrap techniques to the empirical analysis of financial time series is very broad, there are few analytical results on the statistical properties of these techniques when applied to heteroscedastic time series. Furthermore, there are quite a few papers where the bootstrap procedures used are not adequate.
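To make the kind of resampling reviewed above concrete, here is a minimal sketch of a moving-block bootstrap, one standard scheme for dependent data such as returns. This is a generic illustration, not code from the paper; the function and parameter names are our own.

```python
import numpy as np

def moving_block_bootstrap(x, block_len=20, rng=None):
    """Resample a series by concatenating randomly chosen contiguous blocks,
    preserving short-range dependence within each block."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Draw enough random block starting points to cover the series length
    starts = rng.integers(0, n - block_len + 1, size=int(np.ceil(n / block_len)))
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]
```

A bootstrap distribution of a statistic (say, the mean return) would then be built by applying the statistic to many such resampled series.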
[source]

Diagnostic Checks in Time Series
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2005
Mohamed Afzal Norat

No abstract is available for this article. [source]

Identification of Persistent Cycles in Non-Gaussian Long-Memory Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2008
Mohamed Boutahar

Abstract: Asymptotic distribution is derived for the least squares estimates (LSE) in the unstable AR(p) process driven by a non-Gaussian long-memory disturbance. The characteristic polynomial of the autoregressive process is assumed to have pairs of complex roots on the unit circle. In order to describe the limiting distribution of the LSE, two limit theorems involving long-memory processes are established in this article. The first theorem gives the limiting distribution of the weighted sum ∑_{k=1}^{n} c_{n,k} X_k, where (X_k) is a non-Gaussian long-memory moving-average process and (c_{n,k}, 1 ≤ k ≤ n) is a given sequence of weights; the second theorem is a functional central limit theorem for the sine and cosine Fourier transforms. [source]

Robust Estimation For Periodic Autoregressive Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2008
Q. Shao

Abstract: A robust estimation procedure for periodic autoregressive (PAR) time series is introduced. The asymptotic properties and the asymptotic relative efficiency are discussed by the estimating equation approach. The performance of the robust estimators for PAR time-series models of order one is illustrated by a simulation study. The technique is applied to a real data analysis. [source]

Order Patterns in Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2007
Christoph Bandt

We determine probabilities of order patterns in Gaussian and autoregressive moving-average (ARMA) processes. Two order functions are introduced which characterize a time series in a way similar to autocorrelation.
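The order-pattern idea described above can be illustrated in a few lines: slide a window of length m across the series and record the relative frequency of each rank pattern. This is a hypothetical sketch with our own names, not the authors' implementation.

```python
import numpy as np
from collections import Counter

def order_pattern_frequencies(x, m=3):
    """Relative frequencies of length-m order patterns (rank tuples via argsort)."""
    x = np.asarray(x, dtype=float)
    # Each window of m consecutive values is mapped to its ordering pattern
    patterns = [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
    counts = Counter(patterns)
    total = len(patterns)
    return {p: c / total for p, c in counts.items()}
```

For a strictly increasing series, every window yields the single pattern (0, 1, 2); departures from such degenerate distributions carry the dependence information the abstract alludes to.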
For stationary ergodic processes, all finite-dimensional distributions are obtained from the one-dimensional distribution plus the order structure of a typical time series. [source]

Influence of Missing Values on the Prediction of a Stationary Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2005
Pascal Bondon

Primary 62M10; secondary 60G25

Abstract: The influence of missing observations on the linear prediction of a stationary time series is investigated. Simple bounds for the prediction error variance and asymptotic behaviours for short and long-memory processes respectively are presented. [source]

Nonparametric Estimation and Testing in Panels of Intercorrelated Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2004
Vidar Hjellvik

Abstract: We consider nonparametric estimation and testing of linearity in a panel of intercorrelated time series. We place the emphasis on the situation where there are many time series in the panel but few observations for each of the series. The intercorrelation is described by a latent process, and a conditioning argument involving this process plays an important role in deriving the asymptotic theory. To be accurate, the asymptotic distribution of the test functional of linearity requires a very large number of observations, and bootstrapping gives much better finite sample results. A number of simulation experiments and an illustration on a real data set are included. [source]

Bayesian Subset Model Selection for Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2004
N. K. Unnikrishnan

Abstract: This paper considers the problem of subset model selection for time series. In general, a few lags, not necessarily consecutive, explain the lag structure of a time-series model. Using the reversible jump Markov chain technique, the paper develops a fully Bayesian solution for the problem. The method is illustrated using the self-exciting threshold autoregressive (SETAR), bilinear and AR models.
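As a rough companion to the subset-selection idea above: for small lag sets, the reversible-jump MCMC search can be replaced by exhaustive enumeration scored by BIC. The sketch below does exactly that; it is an illustrative stand-in, not the paper's method, and all names are our own.

```python
import numpy as np
from itertools import combinations

def best_subset_ar(x, max_lag=5, max_size=2):
    """Exhaustive subset-AR selection by BIC (a stand-in for the RJMCMC search)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - max_lag
    y = x[max_lag:]
    # lags[k][i] equals y[i] lagged by k periods
    lags = {k: x[max_lag - k:len(x) - k] for k in range(1, max_lag + 1)}
    best = (np.inf, ())
    for size in range(1, max_size + 1):
        for subset in combinations(range(1, max_lag + 1), size):
            X = np.column_stack([np.ones(n)] + [lags[k] for k in subset])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            sigma2 = (resid ** 2).mean()
            bic = n * np.log(sigma2) + (size + 1) * np.log(n)
            best = min(best, (bic, subset))
    return best[1]
```

On a series generated with dependence only at lag 2, the selected subset should contain lag 2 even though lag 1 is also offered.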
The Canadian lynx data, the Wolf sunspot numbers and Series A of Box and Jenkins (1976) are analysed in detail. [source]

A Direct Test for Cointegration Between a Pair of Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2002
STEPHEN J. LEYBOURNE

In this paper we introduce a new test of the null hypothesis of no cointegration between a pair of time series. For a very simple generating model, our test compares favourably with the Engle–Granger/Dickey–Fuller test and the Johansen trace test. Indeed, shortcomings of the former motivated the development of our test. The applicability of our test is extended to series generated by low-order vector autoregressions. Again, we find evidence that this general version of our test is more powerful than the Johansen test. The paper concludes with an empirical example in which the new test finds strong evidence of cointegration, but the Johansen test does not. [source]

Model Selection for Broadband Semiparametric Estimation of Long Memory in Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2001
Clifford M. Hurvich

We study the properties of Mallows' C_L criterion for selecting a fractional exponential (FEXP) model for a Gaussian long-memory time series. The aim is to minimize the mean squared error of a corresponding regression estimator d_FEXP of the memory parameter, d. Under conditions which do not require that the data were actually generated by a FEXP model, it is known that the mean squared error MSE = E[(d_FEXP − d)²] can converge to zero as fast as (log n)/n, where n is the sample size, assuming that the number of parameters grows slowly with n in a deterministic fashion. Here, we suppose that the number of parameters in the FEXP model is chosen so as to minimize a local version of C_L, restricted to frequencies in a neighborhood of zero. We show that, under appropriate conditions, the expected value of the local C_L is asymptotically equivalent to MSE.
A combination of theoretical and simulation results gives guidance as to the choice of the degree of locality in C_L. [source]

Recursive Relations for Multistep Prediction of a Stationary Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2001
Pascal Bondon

Recursive relations are established between the coefficients of the finite past multistep linear predictors of a stationary time series. These relations generalize known results when the prediction is based on the infinite past and permit simplification of the numerical calculation of the finite past predictors. [source]

Testing Stochastic Cycles in Macroeconomic Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2001
L. A. Gil-Alana

A particular version of the tests of Robinson (1994) for testing stochastic cycles in macroeconomic time series is proposed in this article. The tests have a standard limit distribution and are easy to implement on raw time series. A Monte Carlo experiment is conducted, studying the size and the power of the tests against different alternatives, and the results are compared with those based on other tests. An empirical application using historical US annual data is also carried out at the end of the article. [source]

Testing for the Presence of Self-Similarity of Gaussian Time Series Having Stationary Increments
JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2000
Jean-Marc Bardet

A method for testing for the presence of self-similarity of a Gaussian time series with stationary increments is presented. The test is based on estimation of the distance between the time series and a set of time series containing all the fractional Brownian motions. This distance is constructed from two estimates of the expectations of multiscale generalized quadratic variations. The second one requires regression estimates of the self-similarity index H. Two estimators of H are then introduced.
They present good robustness and computing-time properties compared with the Whittle approach, with a nearly similar convergence rate. The test is applied to simulated and real data. The self-similarity assumption is notably accepted for the famous Nile River data. [source]

Prediction Variance and Information Worth of Observations in Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2000
Mohsen Pourahmadi

The problem of developing measures of worth of observations in time series has not received much attention in the literature. Any meaningful measure of worth should naturally depend on the position of the observation as well as the objectives of the analysis, namely parameter estimation or prediction of future values. We introduce a measure that quantifies the worth of a set of observations for the purpose of prediction of outcomes of stationary processes. The worth is measured as the change in the information content of the entire past due to exclusion or inclusion of a set of observations. The information content is quantified by the mutual information, which is the information theoretic measure of dependency. For Gaussian processes, the measure of worth turns out to be the relative change in the prediction error variance due to exclusion or inclusion of a set of observations. We provide formulae for computing the predictive worth of a set of observations for Gaussian autoregressive moving-average processes. For non-Gaussian processes, however, a simple function of the entropy provides a lower bound for the variance of the prediction error, in the same manner that Fisher information provides a lower bound for the variance of an unbiased estimator via the Cramér–Rao inequality. Statistical estimation of this lower bound requires estimation of the entropy of a stationary time series. [source]

An Efficient Taper for Potentially Overdifferenced Long-memory Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2000
Clifford M. Hurvich

We propose a new complex-valued taper and derive the properties of a tapered Gaussian semiparametric estimator of the long-memory parameter d ∈ (−0.5, 1.5). The estimator and its accompanying theory can be applied to generalized unit root testing. In the proposed method, the data are differenced once before the taper is applied. This guarantees that the tapered estimator is invariant with respect to deterministic linear trends in the original series. Any detrimental leakage effects due to the potential noninvertibility of the differenced series are strongly mitigated by the taper. The proposed estimator is shown to be more efficient than existing invariant tapered estimators. Invariance to kth order polynomial trends can be attained by differencing the data k times and then applying a stronger taper, which is given by the kth power of the proposed taper. We show that this new family of tapers enjoys strong efficiency gains over comparable existing tapers. Analysis of both simulated and actual data highlights potential advantages of the tapered estimator of d compared with the nontapered estimator. [source]

Least-squares Estimation of an Unknown Number of Shifts in a Time Series
JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2000
Marc Lavielle

In this contribution, general results on the off-line least-squares estimate of changes in the mean of a random process are presented. First, a generalisation of the Hájek–Rényi inequality, dealing with the fluctuations of the normalized partial sums, is given. This preliminary result is then used to derive the consistency and the rate of convergence of the change-point estimates in the situation where the number of changes is known. Strong consistency is obtained under some mixing conditions. The limiting distribution is also computed under an invariance principle. The case where the number of changes is unknown is then addressed.
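The known-number-of-changes case described above has a simple one-shift special case: the least-squares change-point estimate picks the split that minimizes the total within-segment sum of squares. A minimal sketch under our own naming, not Lavielle's general procedure:

```python
import numpy as np

def single_changepoint(x):
    """Least-squares location of one shift in the mean: minimize within-segment SSE."""
    x = np.asarray(x, dtype=float)
    best_tau, best_sse = None, np.inf
    for tau in range(1, len(x)):            # split into x[:tau] and x[tau:]
        left, right = x[:tau], x[tau:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau
```

Estimating an unknown number of shifts additionally requires a penalty on the number of segments, which is where the paper's asymptotic results come in.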
All these results apply to a large class of dependent processes, including strongly mixing and also long-range dependent processes. [source]

Modelling Long-memory Time Series with Finite or Infinite Variance: a General Approach
JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2000
Remigijus Leipus

We present a class of generalized fractional filters which is stable with respect to series and parallel connection. This class extends the so-called fractional ARUMA and fractional ARMA filters previously introduced by e.g. Goncalves (1987) and Robinson (1994) and recently studied by Giraitis and Leipus (1995) and Viano et al. (1995). Conditions for the existence of the induced stationary SαS and L² processes are given. We describe the asymptotic dependence structure of these processes via the codifference and the covariance sequences respectively. In the L² case, we prove the weak convergence of the normalized partial sums. [source]

Measuring Conditional Persistence in Nonlinear Time Series
OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 3 2007
George Kapetanios

Abstract: The persistence properties of economic time series have been a primary object of investigation in a variety of guises since the early days of econometrics. Recently, work on nonlinear modelling for time series has introduced the idea that the persistence of a shock at a point in time may vary depending on the state of the process at that point in time. This article suggests investigating the persistence of processes conditioning on their history as a tool that may aid parametric nonlinear modelling. In particular, we suggest that examining the nonparametrically estimated derivatives of the conditional expectation of a variable with respect to its lag(s) may be a useful indicator of the variation in persistence with respect to its past history. We discuss in detail the implementation of the measure and present a Monte Carlo investigation. We further apply the persistence analysis to real exchange rates.
[source]

Detection of Structural Change in the Long-run Persistence in a Univariate Time Series
OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 2 2005
Eiji Kurozumi

Abstract: In this paper, we investigate a test for structural change in the long-run persistence of a univariate time series. Our model has a unit root with no structural change under the null hypothesis, while under the alternative it changes from a unit-root process to a stationary one or vice versa. We propose a Lagrange multiplier-type test, a test based on the quasi-differencing method, and 'demeaned versions' of these tests. We find that the demeaned versions of these tests have better finite-sample properties, although they are not necessarily superior in asymptotics to the other tests. [source]

Trends in NE Atlantic landings (southern Portugal): identifying the relative importance of fisheries and environmental variables
FISHERIES OCEANOGRAPHY, Issue 3 2005
KARIM ERZINI

Abstract: Time series of commercial landings from the Algarve (southern Portugal) from 1982 to 1999 were analyzed using min/max autocorrelation factor analysis (MAFA) and dynamic factor analysis (DFA). These techniques were used to identify trends and explore the relationships between the response variables (annual landings of 12 species) and explanatory variables [sea surface temperature, rainfall, an upwelling index, Guadiana river (south-east Portugal) flow, the North Atlantic oscillation, the number of licensed fishing vessels and the number of commercial fishermen]. Landings were more highly correlated with non-lagged environmental variables, and in particular with Guadiana river flow. Both techniques gave coherent results, with the most important trend being a steady decline over time. A DFA model with two explanatory variables (Guadiana river flow and number of fishermen) and three common trends (smoothing functions over time) gave good fits to 10 of the 12 species.
Results of other models indicated that river flow is the more important explanatory variable in this model. Changes in the mean flow and discharge regime of the Guadiana river resulting from the construction of the Alqueva dam, completed in 2002, are therefore likely to have a significant and deleterious impact on Algarve fisheries landings. [source]

Impact of freshwater input and wind on landings of anchovy (Engraulis encrasicolus) and sardine (Sardina pilchardus) in shelf waters surrounding the Ebre (Ebro) River delta (north-western Mediterranean)
FISHERIES OCEANOGRAPHY, Issue 2 2004
J. Lloret

Abstract: Time series analyses (Box–Jenkins models) were used to study the influence of river runoff and a wind mixing index on the productivity of the two most abundant species of small pelagic fish exploited in waters surrounding the Ebre (Ebro) River continental shelf (north-western Mediterranean): anchovy (Engraulis encrasicolus) and sardine (Sardina pilchardus). River flow and wind were selected because they are known to enhance fertilization and local planktonic production, and are thus crucial for the survival of fish larvae. Time series of the two environmental variables and landings of the two species were analysed to extract the trend and seasonality. All series displayed important seasonal and interannual fluctuations. In the long term, landings of anchovy declined while those of sardine increased. At the seasonal scale, landings of anchovy peaked during spring/summer while those of sardine peaked during spring and autumn. Seasonality in landings of anchovy was stronger than in sardine. Concerning the environmental series, monthly average Ebre runoff showed a progressive decline from 1960 until the late 1980s, and the wind mixing index was highest during 1994–96. Within the annual cycle, the minimum river flow occurs from July to October and the wind mixing peaks in winter (December–April, excluding January).
The results of the analyses showed a significant correlation between monthly landings of anchovy and freshwater input of the Ebre River during the spawning season of this species (April–August), with a time lag of 12 months. In contrast, monthly landings of sardine were significantly positively correlated with the wind mixing index during the spawning season of this species (November–March), with a lag of 18 months. The results provide evidence of the influence of riverine inputs and wind mixing on the productivity of small pelagic fish in the north-western Mediterranean. The time lags obtained in the relationships stress the importance of river runoff and wind mixing for the early stages of anchovy and sardine, respectively, and their impact on recruitment. [source]

New Evidence for the Role of the North Sea–Caspian Pattern on the Temperature and Precipitation Regimes in Continental Central Turkey
GEOGRAFISKA ANNALER SERIES A: PHYSICAL GEOGRAPHY, Issue 4 2005
H. Kutiel

Abstract: Monthly mean temperatures and monthly precipitation totals at six stations from the Cappadocian sub-region in the continental Central Anatolia region of Turkey were analysed in order to detect the response of the variability in the Cappadocian climate to the variability of the North Sea–Caspian Pattern Index (NCPI). Most of this region is classified as semi-arid according to various climate classifications. Time series of the NCPI for the period 1958–1998 enabled each month from October to April to be classified as belonging to the negative phase NCP(−), the positive phase NCP(+) or neutral conditions. Monthly temperature and precipitation series for each station were analysed separately for both phases. Temperatures during NCP(−) were found to be considerably higher than during NCP(+). These results confirm previous results regarding the role of the NCP in controlling the temperature regime in that region.
No significant differences were found in precipitation totals between the two phases, but major differences were identified in their spatial structure. [source]

The Euro and International Capital Markets
INTERNATIONAL FINANCE, Issue 1 2000
Carsten Detken

Long before the introduction of the euro, there was an active debate among researchers, policy-makers and financial market participants over how the new European money would change the relative roles of currencies in the international monetary and financial system. A widely held view was that the euro's use in international capital markets would be the key element. Therefore, this paper provides a broad empirical examination of the major currencies' roles in international capital markets, with a special emphasis on the first year of the euro. A contribution is made as to how to measure these roles, both from the viewpoint of international financing and from that of international investment activities. Time series of these new measures are presented, including euro aggregates calculated up to six years back in time. The data allow for the identification of changes in the role of the euro during 1999 compared with the aggregate of euro predecessor currencies, net of intra-euro area assets/liabilities, since the start of stage 2 of EMU in 1994. A number of key factors determining the currency distribution of international portfolio investments, such as relative market liquidity and relative risk characteristics of assets, are also examined empirically. It turns out that for almost all important market segments for which data are available, the euro immediately became the second most widely used currency for international financing and investment. For the flow of international bond and note issuance, it even slightly overtook the US dollar in the second half of 1999.
The data also suggest that most of this early supply of euro bonds by non-euro area residents, clearly exceeding the euro-predecessor currency aggregate, has so far been absorbed by euro area residents rather than by outside investors. [source]

Climate of the seasonal cycle in the North Pacific and the North Atlantic oceans
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 4 2001
Igor M. Yashayaev

Abstract: Time series of monthly sea-surface temperature (SST), air temperature (AT) and sea level pressure (SLP) were constructed from merged releases of the Comprehensive Ocean-Atmosphere Data Set (COADS). The time series were decomposed into seasonal and non-seasonal (short and long-term) components. The contribution of the seasonal cycle to the total variance of SST and AT exceeds 80% in the mid and in some high latitude locations and reaches its peak (>95%) in the centres of the subtropical gyres. In most cases, a combination of annual and semiannual harmonics accounts for more than 95% of the seasonal variability. Amplitudes of the SST and AT annual cycles are highest near the western boundaries of the oceans; annual phases of SST and AT increase toward the eastern tropical oceans, revealing a southeastward propagation of the annual cycle over the Northern Hemisphere oceans. The annual cycle of AT leads that of SST by 1–3 weeks. The largest phase differences are observed in the regions of the western boundary currents in the North Pacific and the North Atlantic oceans. This is consistent with spatial patterns of integral air–sea heat fluxes. Annual phases of SST increase along the Gulf Stream and the Kuroshio Current. This points to the importance of signal transport by the major ocean currents. The lowest annual amplitudes of SLP are observed along the equator (0°–10°N) in both oceans. There are three distinct areas of high annual amplitudes of SLP in the North Pacific Ocean: Asian, Aleutian and Californian.
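The annual-plus-semiannual harmonic decomposition used in the climatology above can be sketched by ordinary least squares on sine/cosine regressors. This is a generic illustration with our own function names, not the authors' code.

```python
import numpy as np

def fit_harmonics(y):
    """Fit mean + annual + semiannual harmonics to a monthly series by OLS;
    return fitted values and the two harmonic amplitudes."""
    t = np.arange(len(y))
    w = 2 * np.pi * t / 12.0                          # annual angular frequency
    X = np.column_stack([np.ones_like(t, dtype=float),
                         np.cos(w), np.sin(w),        # annual harmonic
                         np.cos(2 * w), np.sin(2 * w)])  # semiannual harmonic
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    fitted = X @ coef
    amp_annual = np.hypot(coef[1], coef[2])
    amp_semiannual = np.hypot(coef[3], coef[4])
    return fitted, amp_annual, amp_semiannual
```

The phase of each harmonic, which the abstract uses to trace the propagation of the seasonal cycle, follows from arctan2 of the same coefficient pairs.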
Unlike the North Pacific, only one such area exists in the North Atlantic, centred to the west of Iceland. A remarkable feature in the climate of the North Pacific is a maximum of semiannual SLP amplitudes centred near 40°N and 170°W. It is also an absolute maximum in the entire Northern Hemisphere. Analysis of the phases of the harmonics of the SLP seasonal cycle has revealed the trajectories of propagation of the annual and semiannual cycles. Analysis of the semiannual-to-annual amplitude ratio has revealed the regions of semiannual cycle dominance. Copyright © 2001 Royal Meteorological Society [source]

Prescription Duration After Drug Copay Changes in Older People: Methodological Aspects
JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 3 2002
Sebastian Schneeweiss MD

OBJECTIVES: Impact assessment of drug benefits policies is a growing field of research that is increasingly relevant to healthcare planning for older people. Some cost-containment policies are thought to increase noncompliance. This paper examines mechanisms that can produce spurious reductions in drug utilization measures after drug policy changes when relying on pharmacy dispensing data. Reference pricing, a copayment for expensive medications above a fixed limit, applied to angiotensin-converting enzyme (ACE) inhibitors in older British Columbia residents, is used as a case example.

DESIGN: Time series of 36 months of individual claims data. Longitudinal data analysis, adjusting for autoregressive data.

SETTING: Pharmacare, the drug benefits program covering all patients aged 65 and older in the province of British Columbia, Canada.

PARTICIPANTS: All noninstitutionalized Pharmacare beneficiaries aged 65 and older who used ACE inhibitors between 1995 and 1997 (N = 119,074).

INTERVENTION: The introduction of reference drug pricing for ACE inhibitors for patients aged 65 and older.

MEASUREMENTS: Timing and quantity of drug use from a claims database.
RESULTS: We observed a transitional sharp decline of 11% (standard error 3%; P = .02) in the overall utilization rate of all ACE inhibitors after the policy implementation; five months later, utilization rates had increased but remained below the predicted prepolicy trend. Coinciding with the sharp decrease, we observed a reduction in prescription duration of 31% in patients switching to no-cost drugs. This reduction may be attributed to increased monitoring for intolerance or treatment failure in switchers, which in turn led to a spurious reduction in total drug utilization. We ruled out the extension of medication use beyond the prescribed duration through reduced daily doses (prescription stretching) by a quantity-adjusted analysis of prescription duration.

CONCLUSION: The analysis of prescription duration after drug policy interventions may provide alternative explanations for apparent short-term reductions in drug utilization and adds important insights to time trend analyses of drug utilization data in the evaluation of drug benefit policy changes. J Am Geriatr Soc 50:521–525, 2002. [source]
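The time-trend analysis described in the abstract above is commonly implemented as an interrupted time-series (segmented) regression: a pre-policy trend plus level and slope changes at the intervention month. A minimal OLS sketch with our own names, not the study's actual model (which also adjusted for autoregressive errors):

```python
import numpy as np

def policy_impact(y, t0):
    """OLS segmented regression: baseline trend plus level change and slope
    change at intervention time t0."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t),   # baseline level
                         t,                 # baseline trend
                         post,              # level change at t0
                         post * (t - t0)])  # slope change after t0
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return {"level": coef[0], "trend": coef[1],
            "level_change": coef[2], "trend_change": coef[3]}
```

With monthly utilization rates and the policy month as t0, the `level_change` coefficient plays the role of the transitional decline reported above.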