Time Series Data


Selected Abstracts

Adaptive Fourier Series and the Analysis of Periodicities in Time Series Data

Robert V. Foutz
A Fourier series decomposes a function x(t) into a sum of periodic components that have sinusoidal shapes. This paper describes an adaptive Fourier series where the periodic components of x(t) may have a variety of differing shapes. The periodic shapes are adaptive since they depend on the function x(t) and the period. The results, which extend both Fourier analysis and Walsh–Fourier analysis, are applied to investigate the shapes of periodic components in time series data sets. [source]
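The classical decomposition that the adaptive series above generalizes can be sketched in a few lines of NumPy: a signal built from two sinusoids is decomposed via the FFT, recovering the frequencies and amplitudes of its periodic components. This is illustrative only; the adaptive, non-sinusoidal shapes described in the abstract are not implemented here.

```python
# Sketch: classical Fourier decomposition of a periodic signal via the FFT.
# The adaptive shapes of the paper are NOT implemented; this shows the
# sinusoidal baseline they extend. All data are synthetic.
import numpy as np

n = 1024
t = np.arange(n)
# Signal: two sinusoids with known integer frequencies and amplitudes.
x = 3.0 * np.sin(2 * np.pi * 8 * t / n) + 1.5 * np.sin(2 * np.pi * 21 * t / n)

coeffs = np.fft.rfft(x)
amps = 2 * np.abs(coeffs) / n                    # amplitude spectrum
peaks = sorted(int(i) for i in np.argsort(amps)[-2:])  # two strongest components

print(peaks)   # -> [8, 21]
```

Because the test frequencies complete an integer number of cycles over the window, the recovered amplitudes are exact; for real data, leakage and windowing would have to be considered.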

Amygdala–prefrontal dissociation of subliminal and supraliminal fear

Leanne M. Williams
Abstract Facial expressions of fear are universally recognized signals of potential threat. Humans may have evolved specialized neural systems for responding to fear in the absence of conscious stimulus detection. We used functional neuroimaging to establish whether the amygdala and the medial prefrontal regions to which it projects are engaged by subliminal fearful faces and whether responses to subliminal fear are distinguished from those to supraliminal fear. We also examined the time course of amygdala–medial prefrontal responses to supraliminal and subliminal fear. Stimuli were fearful and neutral baseline faces, presented under subliminal (16.7 ms and masked) or supraliminal (500 ms) conditions. Skin conductance responses (SCRs) were recorded simultaneously as an objective index of fear perception. SPM2 was used to undertake search region-of-interest (ROI) analyses for the amygdala and medial prefrontal (including anterior cingulate) cortex, and complementary whole-brain analyses. Time series data were extracted from ROIs to examine activity across early versus late phases of the experiment. SCRs and amygdala activity were enhanced in response to both subliminal and supraliminal fear perception. Time series analysis showed a trend toward greater right amygdala responses to subliminal fear, but left-sided responses to supraliminal fear. Cortically, subliminal fear was distinguished by right ventral anterior cingulate activity and supraliminal fear by dorsal anterior cingulate and medial prefrontal activity. Although subcortical amygdala activity was relatively persistent for subliminal fear, supraliminal fear showed more sustained cortical activity. The findings suggest that preverbal processing of fear may occur via a direct rostral–ventral amygdala pathway without the need for conscious surveillance, whereas elaboration of consciously attended signals of fear may rely on higher-order processing within a dorsal cortico–amygdala pathway. Hum Brain Mapp, 2005. 
© 2005 Wiley-Liss, Inc. [source]

Model choice in time series studies of air pollution and mortality

Roger D. Peng
Summary. Multicity time series studies of particulate matter and mortality and morbidity have provided evidence that daily variation in air pollution levels is associated with daily variation in mortality counts. These findings served as key epidemiological evidence for the recent review of the US national ambient air quality standards for particulate matter. As a result, methodological issues concerning time series analysis of the relationship between air pollution and health have attracted the attention of the scientific community and critics have raised concerns about the adequacy of current model formulations. Time series data on pollution and mortality are generally analysed by using log-linear, Poisson regression models for overdispersed counts with the daily number of deaths as outcome, the (possibly lagged) daily level of pollution as a linear predictor and smooth functions of weather variables and calendar time used to adjust for time-varying confounders. Investigators around the world have used different approaches to adjust for confounding, making it difficult to compare results across studies. To date, the statistical properties of these different approaches have not been comprehensively compared. To address these issues, we quantify and characterize model uncertainty and model choice in adjusting for seasonal and long-term trends in time series models of air pollution and mortality. First, we conduct a simulation study to compare and describe the properties of statistical methods that are commonly used for confounding adjustment. We generate data under several confounding scenarios and systematically compare the performance of the various methods with respect to the mean-squared error of the estimated air pollution coefficient. We find that the bias in the estimates generally decreases with more aggressive smoothing and that model selection methods which optimize prediction may not be suitable for obtaining an estimate with small bias. 
Second, we apply and compare the modelling approaches with the National Morbidity, Mortality, and Air Pollution Study database, which comprises daily time series of several pollutants, weather variables and mortality counts covering the period 1987–2000 for the largest 100 cities in the USA. When applying these approaches to adjust for seasonal and long-term trends we find that the Study's estimates for the national average effect of PM10 at lag 1 on mortality vary over approximately a twofold range, with 95% posterior intervals always excluding zero risk. [source]
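The log-linear Poisson regression described above can be sketched directly in NumPy: simulated daily death counts depend on a pollution series and a seasonal confounder, and the pollution coefficient is recovered by iteratively reweighted least squares with a harmonic pair standing in for the smooth functions of calendar time. All variable names, the harmonic adjustment, and the simulated data are illustrative, not the paper's models or the NMMAPS data.

```python
# Minimal sketch of a log-linear Poisson regression for daily mortality:
# deaths ~ pollution + seasonal harmonics, fitted by plain IRLS in NumPy.
# Data and the single-harmonic seasonal adjustment are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 1460                                     # four years of daily data
day = np.arange(n)
season = 0.3 * np.sin(2 * np.pi * day / 365.25)
pm = 20 + 10 * rng.random(n) + 5 * np.sin(2 * np.pi * day / 365.25)
true_beta = 0.005                            # log relative risk per unit PM
deaths = rng.poisson(np.exp(2.0 + season + true_beta * pm))

# Design: intercept, PM, one annual harmonic pair to adjust for season.
X = np.column_stack([
    np.ones(n), pm,
    np.sin(2 * np.pi * day / 365.25),
    np.cos(2 * np.pi * day / 365.25),
])

beta = np.zeros(X.shape[1])
for _ in range(25):                          # IRLS iterations for the Poisson GLM
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (deaths - mu) / mu             # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

print(beta[1])   # should land close to true_beta = 0.005
```

Note that PM is deliberately confounded with season here; the harmonic terms absorb the shared cycle, so the pollution coefficient is identified from the non-seasonal variation, which is the essence of the adjustment problem the paper studies.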

Sampling designs of insect time series data: are they all irregularly spaced?

OIKOS, Issue 1 2009
D. V. Beresford
Time series data are commonly obtained by trapping over a standardized period of time, for example daily or weekly. In this paper we present evidence that such sampling designs are inherently irregularly spaced due to the varying developmental rates and population parameters caused by changing temperatures during a sampling season. We modeled an exponentially growing population based on stable fly population growth rates, and then compared different sampling regimes to determine which produces the best estimate of population growth rate. These results were then compared to field data based on weekly sampling at three dairy farms in Ontario over two summers. Transforming catch numbers (N) to ln(N)/(number of degree days within the sampling period) corrects for the irregularly spaced sampling in these data. These results support measuring population parameters, such as population growth rate, in terms of degree days. [source]
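The degree-day correction above can be demonstrated with a toy simulation: a population grows exponentially per accumulated degree-day, so weekly growth estimates vary with the week's warmth, but dividing each log-increment by the degree-days in that interval recovers a constant rate. The numbers are illustrative, not the stable-fly data of the paper.

```python
# Sketch of the degree-day correction: weekly catches of an exponentially
# growing population look irregular in calendar time but regular in
# physiological (degree-day) time. All values are illustrative.
import numpy as np

r = 0.05                                     # growth rate per degree-day
degree_days = np.array([60.0, 95.0, 40.0, 80.0, 120.0])  # weekly warmth varies
N0 = 50.0
cum_dd = np.cumsum(degree_days)
N = N0 * np.exp(r * cum_dd)                  # population at each weekly census

# Naive per-week growth estimates vary with temperature...
weekly_rates = np.diff(np.log(np.concatenate([[N0], N])))
# ...but per-degree-day rates are constant, recovering r exactly:
dd_rates = weekly_rates / degree_days
print(dd_rates)   # each entry ~= 0.05
```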

Performance of nonlinear smoothers in signal recovery

W. J. Conradie
Abstract Time series data can be decomposed as signal plus noise. A good smoother should be able to recover a smooth signal reasonably well from time series data. The performance of two classes of nonlinear smoothers in signal recovery is discussed in this paper. The first class is the well-known class of median smoothers. The other one is a relatively new class of smoothers based on extreme-order statistics, called lower-upper-lower-upper smoothers. Sinusoidal signals of different frequencies with contaminated normal noise and impulsive noise added were simulated. Members of the two classes of nonlinear smoothers were applied to remove the 'non-Gaussian' and impulsive noise. To this output linear smoothing was applied to remove the remaining Gaussian noise. By means of a simulation study, the success of the two classes of smoothers was investigated using as measures of success the least-squares regression of the smoothed sequence on the signal and the integrated mean square error. Copyright © 2009 John Wiley & Sons, Ltd. [source]
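A running median of the kind discussed in the first class above is easy to sketch: applied to a sinusoid corrupted by large impulsive spikes, it removes the impulses almost completely while barely distorting the smooth signal. The window width and data are illustrative; the lower-upper (LULU) smoothers of the paper are not shown.

```python
# Minimal running-median smoother applied to a sinusoid with impulsive noise.
# Window width and the simulated data are illustrative choices, not the
# paper's simulation design; LULU smoothers are not implemented here.
import numpy as np

rng = np.random.default_rng(1)
n = 500
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 100)
x = signal.copy()
spikes = rng.choice(n, size=25, replace=False)
x[spikes] += rng.choice([-5.0, 5.0], size=25)   # impulsive noise

def median_smooth(y, k=5):
    """Running median with window 2k+1; edges handled by clipping the window."""
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - k), min(len(y), i + k + 1)
        out[i] = np.median(y[lo:hi])
    return out

smoothed = median_smooth(x)
print(np.max(np.abs(smoothed - signal)))   # spikes largely removed
```

A linear smoother (e.g. a moving average) would smear each spike across its window, which is exactly why the paper applies the nonlinear smoother first and linear smoothing only afterwards.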

Implicit Surface Modelling with a Globally Regularised Basis of Compact Support

C. Walder
We consider the problem of constructing a globally smooth analytic function that represents a surface implicitly by way of its zero set, given sample points with surface normal vectors. The contributions of the paper include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable interpolation properties previously only associated with fully supported bases. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem lying at the core of kernel-based machine learning methods. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data and four-dimensional interpolation between three-dimensional shapes. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Curve, surface, solid, and object representations [source]


We used the trucking industry's response to the U.S. Environmental Protection Agency's acceleration of the 2004 diesel emissions standards as a case study to examine the importance of accounting for regulatees' strategic behavior in the drafting of environmental regulations. Our analysis of time series data on aggregate U.S. and Canadian heavy-duty truck production from 1992 through 2003 found that heavy-duty truck production increased by 20%–23% in the 6 mo prior to the compliance date. The increases might be due to truck operators pre-buying trucks with less expensive but noncompliant engines and behaving strategically in anticipation of other uncertainties. (JEL L51, Q25) [source]

Measurement error and estimates of population extinction risk

John M. McNamara
Abstract It is common to estimate the extinction probability for a vulnerable population using methods that are based on the mean and variance of the long-term population growth rate. The numerical values of these two parameters are estimated from time series of population censuses. However, the proportion of a population that is registered at each census is typically not constant but will vary among years because of stochastic factors such as weather conditions at the time of sampling. Here, we analyse how such sampling errors influence estimates of extinction risk and find sampling errors to produce two opposite effects. Measurement errors lead to an exaggerated overall variance, but also introduce negative autocorrelations in the time series (which means that estimates of annual growth rates tend to alternate in size). If time series data are treated properly, these two effects exactly counterbalance. We advocate routinely incorporating a measure of among-year correlations in estimating population extinction risk. [source]

A Three-step Method for Choosing the Number of Bootstrap Repetitions

ECONOMETRICA, Issue 1 2000
Donald W. K. Andrews
This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well. [source]
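The accuracy measure above can be illustrated (this is not the paper's three-step method, just the quantity it controls): the bootstrap standard error based on B repetitions scatters around the large-B "ideal" value, and the percentage deviation shrinks as B grows. Sample, statistic, and the large-B stand-in for B = ∞ are all illustrative choices.

```python
# Illustration of the accuracy notion above: percentage deviation of a
# B-repetition bootstrap standard error from a large-B stand-in for the
# ideal (B = infinity) value. Not the paper's three-step method.
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(size=100)                # one observed sample

def boot_se(data, B, rng):
    """Bootstrap standard error of the sample mean from B resamples."""
    idx = rng.integers(0, len(data), size=(B, len(data)))
    return data[idx].mean(axis=1).std(ddof=1)

ideal = boot_se(x, 40_000, rng)              # stands in for B = infinity

def pct_dev(B, reps=100):
    """Mean percentage deviation of the B-repetition estimate from ideal."""
    devs = [abs(boot_se(x, B, rng) - ideal) / ideal for _ in range(reps)]
    return 100.0 * float(np.mean(devs))

small_B_dev, large_B_dev = pct_dev(100), pct_dev(5000)
print(small_B_dev, large_B_dev)   # deviation shrinks as B grows
```

The deviation falls roughly like 1/sqrt(B), which is why a principled rule for choosing B, rather than a conventional round number, is worth having.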

Market Shares, Financial Constraints and Pricing Behaviour in the Export Market

ECONOMICA, Issue 276 2002
Nils Gottfries
A structural dynamic model of price and quantity adjustment is estimated on time series data for exports and export prices. Two sources of dynamics are considered: customer markets and preset prices. As predicted by the customer market model, the market share adjusts slowly after a change in the relative price and financial conditions affect prices. Prices are found to be sticky in the sense that they do not reflect the most recent information about costs and exchange rates. A parsimoniously parameterized structural model explains about 90% of the variation in market share and the relative price. [source]

Nonparametric harmonic regression for estuarine water quality data

Melanie A. Autin
Abstract Periodicity is omnipresent in environmental time series data. For modeling estuarine water quality variables, harmonic regression analysis has long been the standard for dealing with periodicity. Generalized additive models (GAMs) allow more flexibility in the response function. They permit parametric, semiparametric, and nonparametric regression functions of the predictor variables. We compare harmonic regression, GAMs with cubic regression splines, and GAMs with cyclic regression splines in simulations and using water quality data collected from the National Estuarine Research Reserve System (NERRS). While the classical harmonic regression model works well for clean, near-sinusoidal data, the GAMs are competitive and are very promising for more complex data. The generalized additive models are also more adaptive and require less intervention. Copyright © 2009 John Wiley & Sons, Ltd. [source]
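The classical harmonic regression referred to above is ordinary least squares on sine/cosine regressors. A bare-bones sketch with one annual harmonic, on simulated rather than NERRS data:

```python
# Bare-bones harmonic regression: an annual cycle fitted with one
# sine/cosine pair by ordinary least squares. Simulated data, not NERRS.
import numpy as np

rng = np.random.default_rng(4)
n = 730                                      # two years of daily observations
day = np.arange(n)
omega = 2 * np.pi / 365.25
y = 10 + 4 * np.sin(omega * day + 0.8) + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), np.sin(omega * day), np.cos(omega * day)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# a*sin(wt) + b*cos(wt) = A*sin(wt + phi), with A = hypot(a, b), phi = atan2(b, a)
amplitude = np.hypot(coef[1], coef[2])
phase = np.arctan2(coef[2], coef[1])
print(amplitude, phase)   # close to 4 and 0.8
```

The GAM alternatives discussed in the abstract replace the fixed sinusoidal basis with penalized splines, which is what buys the extra flexibility for non-sinusoidal seasonal shapes.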

A missing values imputation method for time series data: an efficient method to investigate the health effects of sulphur dioxide levels

Swarna Weerasinghe
Abstract Environmental data often contain lengthy runs of sequentially missing values. A practical problem arose in the analysis of the adverse health effects of sulphur dioxide (SO2) levels on asthma hospital admissions for Sydney, Nova Scotia, Canada. Reliable missing-value imputation techniques are required to obtain valid estimates of associations with sparse health outcomes such as asthma hospital admissions. In this paper, a new method that incorporates prediction errors to impute missing values is described, using mean daily average sulphur dioxide levels that follow a stationary time series with a random error. Existing imputation methods fail to incorporate the prediction errors. An optimal method is developed by extending a between-forecast method to include prediction errors. Validity and efficacy are demonstrated by comparing performance against imputations that do not include prediction errors. The performance of the optimal method is demonstrated by the increased validity and accuracy of the β coefficient of the Poisson regression model for the association with asthma hospital admissions. Visual inspection of the imputed sulphur dioxide levels with prediction errors demonstrated that the variation is better captured. The method is computationally simple and can be incorporated into existing statistical software. Copyright © 2009 John Wiley & Sons, Ltd. [source]
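The core idea above can be sketched as follows: a deterministic fill (here plain linear interpolation, standing in for the between-forecast step, which is not reproduced) gives an unrealistically smooth gap, so noise scaled by an estimated prediction error is added back to preserve the series' variability. All quantities are simulated and illustrative.

```python
# Sketch: impute a long gap with a deterministic fill, then add noise scaled
# by an estimated prediction error so imputed values keep realistic
# variability. Linear interpolation stands in for the paper's between-forecast
# method; the data are simulated, not Sydney SO2 levels.
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = 10 + rng.normal(0, 2, n)                 # stationary series, sd = 2
x_missing = x.copy()
gap = slice(120, 150)
x_missing[gap] = np.nan

idx = np.arange(n)
obs = ~np.isnan(x_missing)
interp = np.interp(idx[gap], idx[obs], x_missing[obs])     # deterministic fill
# Rough prediction-error scale from first differences of the observed part:
resid_sd = np.std(np.diff(x_missing[obs])) / np.sqrt(2)
imputed = interp + rng.normal(0, resid_sd, interp.size)    # add prediction error

print(np.std(interp), np.std(imputed))   # imputation regains variability
```

In a downstream Poisson regression, the over-smooth fill would attenuate the exposure's variance and bias the coefficient, which is the failure mode the paper's method addresses.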

On case-crossover methods for environmental time series data

Heather J. Whitaker
Abstract Case-crossover methods are widely used for analysing data on the association between health events and environmental exposures. In recent years, several approaches to choosing referent periods have been suggested, with much discussion of two types of bias: bias due to temporal trends, and overlap bias. In the present paper, we revisit the case-crossover method, focusing on its origin in the case-control paradigm, in order to throw new light on these biases. We emphasise the distinction between methods based on case-control logic (such as the symmetric bi-directional (SBI) method), for which overlap bias is a consequence of non-exchangeability of the exposure series, and methods based on cohort logic (such as the time-stratified (TS) method), for which overlap bias does not arise. We show by example that the TS method may suffer severe bias from residual seasonality. This method can be extended to control for seasonality. However, time series regression is more flexible than case-crossover methods for the analysis of data on shared environmental exposures. We conclude that time series regression ought to be adopted as the method of choice in such applications. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Extreme value predictions based on nonstationary time series of wave data

Christos N. Stefanakos
Abstract A new method for calculating return periods of various level values from nonstationary time series data is presented. The key idea of the method is a new definition of the return period, based on the MEan Number of Upcrossings of the level x* (MENU method). In the present article, the case of Gaussian periodically correlated time series is studied in detail. The whole procedure is numerically implemented and applied to synthetic wave data in order to test the stability of the method. Results obtained by using several variants of traditional methods (Gumbel's approach and the POT method) are also presented for comparison purposes. The results of the MENU method showed an extraordinary stability, in contrast to the wide variability of the traditional methods. The predictions obtained by means of the MENU method are lower than the traditional predictions. This is in accordance with the results of other methods that also take into account the dependence structure of the examined time series. Copyright © 2005 John Wiley & Sons, Ltd. [source]
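The MENU idea above reduces to counting level upcrossings: the return period of a level x* is the reciprocal of the mean number of upcrossings of x* per unit time. A minimal sketch on synthetic iid Gaussian data (the periodically correlated structure studied in the paper is not modelled):

```python
# Minimal MENU-style estimate: return period of a level x* from the mean
# number of upcrossings per year. Synthetic iid Gaussian "daily" data stand
# in for wave records; no periodic correlation is modelled here.
import numpy as np

rng = np.random.default_rng(6)
years, per_year = 200, 365
x = rng.normal(0, 1, years * per_year)

def upcrossings(series, level):
    """Count transitions from below the level to at-or-above it."""
    below = series[:-1] < level
    above = series[1:] >= level
    return int(np.sum(below & above))

level = 2.5
mean_up_per_year = upcrossings(x, level) / years
return_period = 1.0 / mean_up_per_year       # years between exceedance events
print(return_period)
```

For iid N(0,1) data an upcrossing of 2.5 occurs with probability Φ(2.5)(1−Φ(2.5)) ≈ 0.006 per step, so roughly 2.3 upcrossings per 365-step year and a return period under half a year; the estimator's stability under serial dependence is the point of the paper.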

Alcohol and mortality: methodological and analytical issues in aggregate analyses

ADDICTION, Issue 1s1 2001
Thor Norström
This supplement includes a collection of papers that aim at estimating the relationship between per capita alcohol consumption and various forms of mortality, including mortality from liver cirrhosis, accidents, suicide, homicide, ischaemic heart disease, and total mortality. The papers apply a uniform methodological protocol, and they are all based on time series data covering the post-war period in the present EU countries and Norway. In this paper we discuss various methodological and analytical issues that are common to these papers. We argue that analysis of time series data is the most feasible approach for assessing the aggregate health consequences of changes in population drinking. We further discuss how aggregate data may also be useful for judging the plausibility of individual-level relationships, particularly those prone to be confounded by selection effects. The aggregation of linear and curvilinear risk curves is treated, as well as various methods for dealing with the time-lag problem. With regard to estimation techniques, we find country-specific analyses preferable to pooled cross-sectional/time series models, since the latter incorporate the dubious element of geographical co-variation and conceal potentially interesting variations in alcohol effects. The approach taken in the papers at hand is instead to pool the country-specific results into three groups of countries that represent different drinking cultures: the traditional wine countries of southern Europe, the beer countries of central Europe and the British Isles, and the spirits countries of northern Europe. The findings of the papers reinforce the central tenet of the public health perspective that overall consumption is an important determinant of alcohol-related harm rates. However, there is a variation across country groups in alcohol effects, particularly those on violent deaths, that indicates the potential importance of drinking patterns. 
There is no support for the notion that increases in per capita consumption have any cardioprotective effects at the population level. [source]

A semi-parametric gap-filling model for eddy covariance CO2 flux time series data

Abstract This paper introduces a method for modelling the deterministic component of eddy covariance CO2 flux time series in order to supplement missing data in these important data sets. The method is based on combining multidimensional semi-parametric spline interpolation with an assumed but unstated dependence of net CO2 flux on light, temperature and time. We test the model using a range of synthetic canopy data sets generated using several canopy simulation models realized for different micrometeorological and vegetation conditions. The method appears promising for filling large systematic gaps, provided the associated missing data do not over-erode critical information content in the conditioning data used for the model optimization. [source]

The effect of respiration variations on independent component analysis results of resting state functional connectivity

Rasmus M. Birn
Abstract The analysis of functional connectivity in fMRI can be severely affected by cardiac and respiratory fluctuations. While some of these artifactual signal changes can be reduced by physiological noise correction routines, signal fluctuations induced by slower breath-to-breath changes in the depth and rate of breathing are typically not removed. These slower respiration-induced signal changes occur at low frequencies and spatial locations similar to the fluctuations used to infer functional connectivity, and have been shown to significantly affect seed-ROI or seed-voxel based functional connectivity analysis, particularly in the default mode network. In this study, we investigate the effect of respiration variations on functional connectivity maps derived from independent component analysis (ICA) of resting-state data. Regions of the default mode network were identified by deactivations during a lexical decision task. Variations in respiration were measured independently and correlated with the MRI time series data. ICA appears to separate the default mode network and the respiration-related changes in most cases. In some cases, however, the component automatically identified as the default mode network was the same as the component identified as respiration-related. Furthermore, in most cases the time series associated with the default mode network component was still significantly correlated with changes in respiration volume per time, suggesting that current methods of ICA may not completely separate respiration from the default mode network. An independent measure of the respiration provides valuable information to help distinguish the default mode network from respiration-related signal changes, and to assess the degree of residual respiration related effects. Hum Brain Mapp 2008. © 2008 Wiley-Liss, Inc. [source]

Spectral decomposition of periodic ground water fluctuation in a coastal aquifer

David Ching-Fang Shih
Abstract This research uses descriptive statistics and spectral analysis of six kinds of time series data to give a complete assessment of the periodic fluctuation of significant constituents at the Huakang Shan earthquake monitoring site. Spectral analysis and bandpass filtering techniques are demonstrated to accurately analyse the significant components. Variation in relative ground water heads with a period of 12.6 h is found to be highly related to seawater level fluctuation. The time lag is estimated at about 3.78 h. Based on these phenomena, the coastal aquifer, formed in an unconsolidated formation, can be affected by the nearby seawater body at the semi-diurnal component. Fluctuation in piezometric heads is found to correspond at a rate of 1000 m h−1. Atmospheric pressure presents significant components at periods of 10.8 h and 7.2 h, in a quite different pattern compared with relative ground water head and seawater level. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Time series forecasting by combining the radial basis function network and the self-organizing map

Gwo-Fong Lin
Abstract Based on a combination of a radial basis function network (RBFN) and a self-organizing map (SOM), a time-series forecasting model is proposed. Traditionally, the positioning of the radial basis centres is a crucial problem for the RBFN. In the proposed model, an SOM is used to construct the two-dimensional feature map from which the number of clusters (i.e. the number of hidden units in the RBFN) can be figured out directly by eye, and then the radial basis centres can be determined easily. The proposed model is examined using simulated time series data. The results demonstrate that the proposed RBFN is more competent in modelling and forecasting time series than an autoregressive integrated moving average (ARIMA) model. Finally, the proposed model is applied to actual groundwater head data. It is found that the proposed model can forecast more precisely than the ARIMA model. For time series forecasting, the proposed model is recommended as an alternative to the existing method, because it has a simple structure and can produce reasonable forecasts. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Error analysis for the evaluation of model performance: rainfall,runoff event time series data

Edzer J. Pebesma
Abstract This paper provides a procedure for evaluating model performance where model predictions and observations are given as time series data. The procedure focuses on the analysis of error time series by graphing them, summarizing them, and predicting their variability through available information (recalibration). We analysed two rainfall–runoff events from the R-5 data set, and evaluated 12 distinct model simulation scenarios for these events, of which 10 were conducted with the quasi-physically-based rainfall–runoff model (QPBRRM) and two with the integrated hydrology model (InHM). The QPBRRM simulation scenarios differ in their representation of saturated hydraulic conductivity. Two InHM simulation scenarios differ with respect to the inclusion of the roads at R-5. The two models, QPBRRM and InHM, differ strongly in the complexity and number of processes included. For all model simulations we found that errors could be predicted fairly well to very well, based on model output, or based on smooth functions of lagged rainfall data. The errors remaining after recalibration are much more alike in terms of variability than those without recalibration. In this paper, recalibration is not meant to fix models, but merely as a diagnostic tool that exhibits the magnitude and direction of model errors and indicates whether these model errors are related to model inputs such as rainfall. Copyright © 2004 John Wiley & Sons, Ltd. [source]
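The recalibration diagnostic above amounts to regressing the error time series on available information and asking how much of its variance is predictable. A toy sketch with an error series that partly tracks lagged rainfall (simulated, not the R-5 events; the single lagged regressor stands in for the paper's smooth functions of lagged rainfall):

```python
# Sketch of the recalibration diagnostic: regress the model-error series on
# lagged rainfall and measure how much error variance is predictable.
# Simulated series; one lagged regressor stands in for smooth lag functions.
import numpy as np

rng = np.random.default_rng(9)
n = 400
rain = np.maximum(0, rng.normal(1, 1, n))    # non-negative rainfall series
# Hypothetical model error that partly tracks rainfall at lag 1 plus noise:
err = 0.8 * np.roll(rain, 1) + rng.normal(0, 0.5, n)
err[0] = 0.0

X = np.column_stack([np.ones(n - 1), rain[:-1]])   # rainfall lagged one step
y = err[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)   # a large share of the error is predictable from lagged rainfall
```

A high R² here is diagnostic, not a repair: as the abstract stresses, it indicates that the model errors are systematically related to an input such as rainfall.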

Remote Monitoring Integrated State Variables for AR Model Prediction of Daily Total Building Air-Conditioning Power Consumption

Chuzo Ninagawa Member
Abstract It is extremely difficult to predict the daily accumulated power consumption of an entire building's air-conditioning facilities because of the huge number of variables involved. We propose new integrated state variables, i.e. the daily operation amount and the daily operation-capacity-weighted average set temperature. Taking advantage of a remote monitoring technology, time series data of the integrated state variables were collected and an autoregressive (AR) model prediction for the daily total power consumption has been tried. © 2010 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]
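An AR prediction of the kind applied above can be sketched with conditional least squares: fit an AR(2) by regressing each value on its two predecessors, then form a one-step-ahead forecast. The data are simulated; the paper's integrated state variables are not modelled.

```python
# Toy AR(2) fit by conditional least squares, with a one-step-ahead forecast.
# Simulated series; the paper's integrated state variables are not modelled.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
phi = np.array([0.6, 0.3])                   # true AR coefficients (stationary)
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal()

# Regress x_t on (x_{t-1}, x_{t-2}):
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
phi_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

next_step = phi_hat[0] * x[-1] + phi_hat[1] * x[-2]   # one-step-ahead forecast
print(phi_hat)   # close to [0.6, 0.3]
```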

Assessing the long-run economic impact of labour law systems: a theoretical reappraisal and analysis of new time series data

Simon Deakin
ABSTRACT Standard economic theory sees labour law as an exogenous interference with market relations and predicts mostly negative impacts on employment and productivity. We argue for a more nuanced theoretical position: labour law is, at least in part, endogenous, with both the production and the application of labour law norms influenced by national and sectoral contexts, and by complementarities between the institutions of the labour market and those of corporate governance and financial markets. Legal origin may also operate as a force shaping the content of the law and its economic impact. Time-series analysis using a new data set on legal change from the 1970s to the mid-2000s shows evidence of positive correlations between regulation and growth in employment and productivity, at least for France and Germany. No relationship, either positive or negative, is found for the UK and, although the United States shows a weak negative relationship between regulation and employment growth, this is offset by productivity gains. [source]

An approach to the linguistic summarization of time series using a fuzzy quantifier driven aggregation

Janusz Kacprzyk
We extend our previous work on the linguistic summarization of time series data meant as the linguistic summarization of trends, i.e. consecutive parts of the time series, which may be viewed as exhibiting a uniform behavior under an assumed (degree of) granulation, and identified with straight line segments of a piecewise linear approximation of the time series. We characterize the trends by the dynamics of change, duration, and variability. A linguistic summary of a time series is then viewed to be related to a linguistic quantifier driven aggregation of trends. For this purpose we primarily employ Zadeh's classic calculus of linguistically quantified propositions, which is presumably the most straightforward and intuitively appealing, using the classic minimum operation and mentioning other t-norms. We also outline the use of the Sugeno and Choquet integrals proposed in our previous papers. We show an application to the absolute performance type analysis of time series data on daily quotations of an investment fund over an 8-year period, by presenting first an analysis of characteristic features of quotations, under various (degrees of) granulations assumed, and then by listing some more interesting and useful summaries obtained. We propose a convenient presentation of linguistic summaries focused on some characteristic feature exemplified by what happens "almost always," "very often," "quite often," "almost never," etc. All these analyses are meant to provide means to support a human user to make decisions. © 2010 Wiley Periodicals, Inc. [source]

Fuzzy information granules in time series data

Michael R. Berthold
Often, it is desirable to represent a set of time series through typical shapes in order to detect common patterns. The algorithm presented here compares pieces of different time series in order to find such similar shapes. The use of a fuzzy clustering technique based on fuzzy c-means allows us to detect shapes that belong to a certain group of typical shapes with a degree of membership. Modifications to the original algorithm also allow this matching to be invariant with respect to a scaling of the time series. The algorithm is demonstrated on a widely known set of data taken from the electrocardiogram (ECG) rhythm analysis experiments performed at the Massachusetts Institute of Technology (MIT) laboratories and on data from protein mass spectrography. © 2004 Wiley Periodicals, Inc. [source]
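A compact fuzzy c-means loop illustrates the soft clustering of short time-series shapes described above (the paper's scale-invariance modification and the ECG data are not reproduced; the "rising" and "falling" segments are synthetic):

```python
# Compact fuzzy c-means in NumPy, clustering short synthetic time-series
# "shapes" (rising vs falling segments). The paper's scale-invariance
# modification is not implemented; data and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(8)

def fuzzy_cmeans(X, c, m=2.0, iters=100):
    """Standard FCM: soft memberships U (rows sum to 1) and cluster centers."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)        # random initial memberships
    for _ in range(iters):
        W = U ** m                               # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                 # guard against zero distance
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True) # membership update
    return U, centers

# Two groups of short "shapes": rising and falling segments plus noise.
t = np.linspace(0, 1, 20)
rising = t[None, :] + 0.05 * rng.normal(size=(30, 20))
falling = (1 - t)[None, :] + 0.05 * rng.normal(size=(30, 20))
X = np.vstack([rising, falling])

U, centers = fuzzy_cmeans(X, c=2)
print(U.sum(axis=1))   # each segment's memberships sum to 1
```

Unlike hard k-means, each segment carries a degree of membership in every typical shape, which is exactly the representation the abstract argues for.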

Forecasting and Finite Sample Performance of Short Rate Models: International Evidence

ABSTRACT This paper evaluates the forecasting and finite sample performance of short-term interest rate models in a number of countries. Specifically, we run a series of in-sample and out-of-sample tests for both the conditional mean and volatility of one-factor short rate models, and compare the results to the random walk model. Overall, we find that the out-of-sample forecasting performance of one-factor short rate models is poor, stemming from the inability of the models to accommodate jumps and discontinuities in the time series data. In addition, we perform a series of Monte Carlo analyses similar to those of Chapman and Pearson to document the finite sample performance of the short rate models when the volatility elasticity parameter is not restricted to be equal to one. Our results indicate the potential dangers of over-parameterization and highlight the limitations of short-term interest rate models. [source]

A small monetary system for the euro area based on German data

Ralf Brüggemann
Previous euro area money demand studies have used aggregated national time series data from the countries participating in the European Monetary Union (EMU). However, aggregation may be problematic because macroeconomic convergence processes have taken place in the countries of interest. Therefore, in this study, quarterly German data until 1998 are combined with data from the euro area from 1999 until 2002 and these series are used for fitting a small vector error correction model for the monetary sector of the EMU. A stable long-run money demand relation is found for the full sample period. Moreover, impulse responses do not change much when the sample period is extended by the EMU period provided the break in the extended data series is captured by a simple dummy variable. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Real-time forecasting of photosmog episodes: the Naples case study

A. Riccio
Abstract In this paper we analysed the ozone time series data collected by the local monitoring network in the Naples urban area (southern Italy) during the spring/summer period of 1996. Our aim was to identify a reliable and effective model that could be used for the real-time forecasting of photosmog episodes. We studied the applicability of seasonal autoregressive integrated moving average models with exogenous variables (ARIMAX) to our case study. The choice of exogenous variables (temperature, [NO2]/[NO] ratio, and wind speed) was based on physical reasoning. The forecasting performance of all models was evaluated with data not used in model development, by means of an array of statistical indices: the comparison between observed and forecast means and standard deviations; the intercept and slope of a least squares regression of the forecast variable on the observed variable; mean absolute and root mean square errors; and 95% confidence limits of the forecast variable. The assessment of all models was also based on their tendency to forecast critical episodes. It was found that the model using information from the temperature data set to predict peak ozone levels gives satisfactory results, with about 70% of critical episodes correctly predicted by the 24-h-ahead forecast function. Copyright © 2001 John Wiley & Sons, Ltd. [source]
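Stripped of seasonality and differencing, the essence of regressing a pollutant on its own past plus an exogenous driver can be illustrated with an ARX(1) least-squares fit. This is a deliberate simplification of the seasonal ARIMAX models actually estimated in the paper, shown only to make the structure concrete.

```python
def fit_arx1(y, x):
    """Least-squares fit of y_t = a + b*y_{t-1} + c*x_t, a bare-bones
    stand-in for the ARIMAX structure described above."""
    rows = [[1.0, y[t - 1], x[t]] for t in range(1, len(y))]
    targets = y[1:]
    k = 3
    # normal equations (A^T A) beta = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    aty = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(ata[r][i]))
        ata[i], ata[p] = ata[p], ata[i]
        aty[i], aty[p] = aty[p], aty[i]
        for r in range(i + 1, k):
            f = ata[r][i] / ata[i][i]
            for c in range(i, k):
                ata[r][c] -= f * ata[i][c]
            aty[r] -= f * aty[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (aty[i] - sum(ata[i][j] * beta[j]
                                for j in range(i + 1, k))) / ata[i][i]
    return beta  # [intercept, AR(1) coefficient, exogenous coefficient]

def forecast_arx1(beta, y_last, x_next):
    """One-step-ahead forecast from the last observation and the next
    value of the exogenous variable."""
    a, b, c = beta
    return a + b * y_last + c * x_next
```

In practice one would reach for a statistics library with full seasonal ARIMAX support; the hand-rolled normal equations here are only to keep the sketch self-contained.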

Information theoretical measures to analyze trajectories in rational molecular design

K. Hamacher
Abstract We develop a new methodology to analyze molecular dynamics trajectories and other time series data from simulation runs. The methodology is based on an information measure of the difference between distributions of various quantities extracted from such simulations. The method is fast, as it only involves the numerical integration/summation of the distributions in one dimension, while avoiding sampling issues at the same time. It is most suitable for applications in which different scenarios are to be compared, e.g. to guide rational molecular design. We show the power of the proposed method in an application of rational drug design by reduced model computations on the BH3 motif in the apoptosis-inducing BCL2 protein family. © 2007 Wiley Periodicals, Inc. J Comput Chem, 2007 [source]
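One concrete example of an information measure between one-dimensional distributions is the Jensen-Shannon divergence (the paper's specific measure may differ); applied to histograms of a quantity collected under two simulation scenarios, it reduces to a single summation over bins, which is exactly the cheap one-dimensional operation the abstract alludes to.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (in bits) between two discrete 1-D
    distributions, e.g. normalized histograms of an observable from two
    simulation scenarios. Symmetric and bounded above by 1 bit."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence; zero-probability bins contribute 0
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```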

Testing for market power in the Australian grains and oilseeds industries

Christopher J. O'Donnell
We formally assess competitive buying and selling behavior in the Australian grains and oilseeds industries using a more realistic empirical model and a less aggregated data set than previously available. We specify a duality model of profit maximization that allows for imperfect competition in both input and output markets and for variable-proportions technologies. Aggregate input-output data are used to define the structure of the relevant industries, and time series data are then used to implement the model for 13 grains and oilseeds products handled by seven groups of agents. The model is estimated in a Bayesian econometrics framework. We find evidence of flour and cereal food product manufacturers exerting market power when purchasing wheat, barley, oats and triticale; of beer and malt manufacturers exerting market power when purchasing wheat and barley; and of other food product manufacturers exerting market power when purchasing wheat, barley, oats and triticale. [EconLit citations: C11, L66, Q11]. © 2007 Wiley Periodicals, Inc. Agribusiness 23: 349-376, 2007. [source]

Statistical simulation of flood variables: incorporating short-term sequencing

Y. Cai
Abstract The pluvial and fluvial flooding in the United Kingdom over the summer of 2007 arose as a result of anomalous climatic conditions that persisted for over a month. Gaining an understanding of the sequencing of storm events and representing their characteristics within flood risk analysis is therefore of importance. This paper provides a general method for simulating univariate time series data, with a given marginal extreme value distribution and required autocorrelation structure, together with a demonstration of the method with synthetic data. The method is then extended to the multivariate case, where cross-variable correlations are also represented. The multivariate method is shown to work well for a two-variable simulation of wave heights and sea surges at Lerwick. This work was prompted by an engineering need for long time series data for use in continuous simulation studies where gradual deterioration is a contributory factor to flood risk and potential structural failure. [source]
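One standard construction for a univariate series with a prescribed extreme-value marginal and autocorrelation, in the spirit of the method described, is to drive a latent Gaussian AR(1) process and map each value through the normal CDF and the inverse of the target CDF. The sketch below uses a Gumbel marginal as the extreme-value target; the paper's actual simulation scheme may differ in detail.

```python
import math, random

def simulate_gumbel_ar1(n, phi, mu=0.0, beta=1.0, seed=0):
    """Autocorrelated series with a Gumbel marginal: a latent Gaussian
    AR(1) with lag-1 coefficient phi is mapped through the standard
    normal CDF to a uniform, then through the inverse Gumbel CDF
        x = mu - beta * log(-log(u))."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - phi * phi)   # keeps the latent process N(0, 1)
    z, out = 0.0, []
    for _ in range(n):
        z = phi * z + rng.gauss(0.0, sd)
        u = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # Phi(z)
        u = min(max(u, 1e-12), 1.0 - 1e-12)              # guard the logs
        out.append(mu - beta * math.log(-math.log(u)))
    return out
```

Because the normal-to-Gumbel map is monotone, the ordering (and hence the short-term sequencing) of the latent process carries over to the simulated flood variable while the marginal distribution is exactly Gumbel.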