Autocorrelation Function
Selected Abstracts

Wavelet Packet-Autocorrelation Function Method for Traffic Flow Pattern Analysis
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2004, Xiaomo Jiang
A detailed understanding of the properties of traffic flow is essential for building a reliable forecasting model. The discrete wavelet packet transform (DWPT) provides more coefficients than the conventional discrete wavelet transform (DWT), representing additional subtle details of a signal. In wavelet multiresolution analysis, an important decision is the selection of the decomposition level. In this research, the statistical autocorrelation function (ACF) is proposed for selecting the decomposition level in wavelet multiresolution analysis of traffic flow time series. A hybrid wavelet packet-ACF method is proposed for analyzing traffic flow time series and determining their self-similar, singular, and fractal properties. A DWPT-based approach, combined with a wavelet-coefficient penalization scheme and soft thresholding, is presented for denoising the traffic flow. The proposed methodology provides a powerful tool for removing noise and identifying singularities in the traffic flow. The methods created in this research are of value in developing accurate traffic-forecasting models. [source]

Roughness Characterization through 3D Textured Image Analysis: Contribution to the Study of Road Wear Level
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2004, M. Khoudeir
The microtexture is defined as surface irregularities whose height ranges from 0.001 mm to 0.5 mm and whose width is less than 0.5 mm (Alvarez and Morel, 1994). Deterioration due to road traffic, especially the polishing effect, involves a change in the microtexture. We therefore suggest a method to characterize, through image analysis, the wear level or microroughness of road surfaces.
We then propose, on the one hand, a photometric model for the road surface and, on the other hand, a geometrical model for the road surface profile. These two models allow us to develop roughness criteria based on the statistical properties of: the distribution of the gray levels in the image, the distribution of the absolute value of its gradient, the form of its autocorrelation function, and the distribution of its curvature map. Experiments have been carried out with images of laboratory-made road specimens at different wear levels. The results obtained are similar to those given by a direct method using road profiles. [source]

Parallel bandwidth characteristics calculations for thin avalanche photodiodes on a SGI Origin 2000 supercomputer
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2004, Yi Pan
Abstract: An important factor for high-speed optical communication is the availability of ultrafast and low-noise photodetectors. Among the semiconductor photodetectors commonly used in today's long-haul and metro-area fiber-optic systems, avalanche photodiodes (APDs) are often preferred over p-i-n photodiodes due to their internal gain, which significantly improves receiver sensitivity and alleviates the need for optical pre-amplification. Unfortunately, the random nature of the very process of carrier impact ionization that generates the gain is inherently noisy and results in fluctuations not only in the gain but also in the time response. Recently, we developed a theory characterizing the autocorrelation function of APDs that incorporates the dead-space effect, an effect that is very significant in thin, high-performance APDs. This research extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time consuming.
In this research, we describe our experiences in parallelizing the code with MPI and OpenMP using CAPTools. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Suspended sediment load estimation and the problem of inadequate data sampling: a fractal view
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 4 2006, Bellie Sivakumar
Abstract: Suspended sediment load estimation at high resolutions is an extremely difficult task, because: (1) it depends on the availability of high-resolution water discharge and suspended sediment concentration measurements, which are often not available; (2) any errors in the measurements of these two components could significantly influence the accuracy of suspended sediment load estimation; and (3) direct measurements are very expensive. The purpose of this study is to approach this sampling problem from the new perspective of fractals (or scaling), which could provide important information on the transformation of suspended sediment load data from one scale to another. This is done by investigating the possible presence of fractal behaviour in the daily suspended sediment load data for the Mississippi River basin (at St. Louis, Missouri). The presence of fractal behaviour is investigated using five different methods, ranging from general to specific and from mono-fractal to multi-fractal: (1) the autocorrelation function; (2) the power spectrum; (3) the probability distribution function; (4) the box dimension; and (5) the statistical moment scaling function. The results indicate the presence of multi-fractal behaviour in the suspended sediment load data, suggesting the possibility of transforming data from one scale to another using a multi-dimensional model. Copyright © 2005 John Wiley & Sons, Ltd.
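As an illustration of the last of the five diagnostics listed in the abstract above, the statistical moment scaling function can be estimated by aggregating the series over increasing block sizes and fitting log-log slopes of the q-th absolute moments against scale; a slope that is linear in q indicates mono-fractal scaling, while curvature indicates multi-fractal behaviour. The sketch below is a minimal, self-contained version with illustrative parameters (not the authors' code); white noise is used because its scaling exponents are known in closed form:

```python
import numpy as np

def moment_scaling(x, qs=(1.0, 2.0, 3.0), scales=(1, 2, 4, 8, 16, 32)):
    """Estimate the moment scaling function: for each aggregation scale s,
    average the series over non-overlapping blocks of length s, compute the
    q-th absolute moment, and fit the log-log slope versus scale."""
    slopes = {}
    for q in qs:
        logm = []
        for s in scales:
            n = (len(x) // s) * s
            blocks = np.abs(x[:n].reshape(-1, s).mean(axis=1))
            logm.append(np.log(np.mean(blocks ** q)))
        # slope of log-moment against log-scale
        slopes[q] = np.polyfit(np.log(scales), logm, 1)[0]
    return slopes

rng = np.random.default_rng(0)
x = rng.normal(size=2 ** 14)   # white noise: block means scale as s**-0.5,
K = moment_scaling(x)          # so the slope for moment q is close to -q/2
```

For a mono-fractal series such as this one the slopes fall on a straight line in q; for a multi-fractal series like the sediment loads above, the line bends.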
[source]

Estimation of an optimal mixed-phase inverse filter
GEOPHYSICAL PROSPECTING, Issue 4 2000, Bjorn Ursin
Inverse filtering is applied to seismic data to remove the effect of the wavelet and to obtain an estimate of the reflectivity series. In many cases the wavelet is not known, and only an estimate of its autocorrelation function (ACF) can be computed. Solving the Yule-Walker equations gives the inverse filter that corresponds to a minimum-delay wavelet. When the wavelet is mixed delay, this inverse filter produces a poor result. By solving the extended Yule-Walker equations with the ACF of lag , on the main diagonal of the filter equations, it is possible to decompose the inverse filter into a finite-length filter convolved with an infinite-length filter. In a previous paper we proposed a mixed-delay inverse filter in which the finite-length filter is maximum delay and the infinite-length filter is minimum delay. Here, we refine this technique by analysing the roots of the Z-transform polynomial of the finite-length filter. By varying the number of roots placed inside the unit circle of the mixed-delay inverse filter, at most 2, different filters are obtained. Applying each filter to a small data set (say, a CMP gather), we choose the optimal filter to be the one whose output has the largest Lp-norm, with p = 5. This is done for increasing values of , to obtain a final optimal filter. From this optimal filter it is easy to construct the inverse wavelet, which may be used as an estimate of the seismic wavelet. The new procedure has been applied to a synthetic wavelet and to an airgun wavelet to test its performance, and to verify that the reconstructed wavelet is close to the original wavelet. The algorithm has also been applied to prestack marine seismic data, resulting in an improved stacked section compared with the one obtained using a minimum-delay filter.
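A minimal sketch of the baseline step described above -- solving the (unextended) Yule-Walker normal equations for the minimum-delay spiking inverse filter, given only the wavelet's ACF -- might look as follows. The toy wavelet, filter length, and prewhitening level are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_filter(acf, nf, prewhite=0.001):
    """Minimum-delay least-squares inverse filter of length nf from the
    wavelet autocorrelation: solve the Toeplitz normal equations R f = d
    with a zero-lag spike as desired output. A little prewhitening on the
    zero-lag term stabilizes the solve."""
    r = np.zeros(nf)
    k = min(nf, len(acf))
    r[:k] = acf[:k]
    r[0] *= 1.0 + prewhite
    rhs = np.zeros(nf)
    rhs[0] = 1.0
    return solve_toeplitz(r, rhs)

# toy minimum-delay wavelet; in practice only its ACF is estimated from data
w = np.array([1.0, -0.5])
acf = np.correlate(w, w, mode="full")[len(w) - 1:]   # lags 0, 1, ...
f = spiking_filter(acf, nf=20)
out = np.convolve(w, f)   # should approximate a spike at zero lag
```

For a mixed-delay wavelet this output degrades, which is exactly the failure mode the extended Yule-Walker construction in the abstract is designed to repair.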
[source]

Effect of spatial variability of cross-correlated soil properties on bearing capacity of strip footing
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 1 2010, Sung Eun Cho
Abstract: Geotechnical engineering problems are characterized by many sources of uncertainty. Some of these sources are connected to the uncertainties of the soil properties involved in the analysis. In this paper, a numerical procedure for a probabilistic analysis that considers the spatial variability of cross-correlated soil properties is presented and applied to study the bearing capacity of spatially random soil with different autocorrelation distances in the vertical and horizontal directions. The approach integrates a commercial finite difference method and random field theory into the framework of a probabilistic analysis. Two-dimensional cross-correlated non-Gaussian random fields are generated based on a Karhunen-Loève expansion in a manner consistent with a specified marginal distribution function, an autocorrelation function, and cross-correlation coefficients. A Monte Carlo simulation is then used to determine the statistical response based on the random fields. A series of analyses was performed to study the effects of uncertainty due to spatial heterogeneity on the bearing capacity of a rough strip footing. The simulations provide insight into the application of uncertainty treatment to geotechnical problems and show the importance of the spatial variability of soil properties to the outcome of a probabilistic assessment. Copyright © 2009 John Wiley & Sons, Ltd. [source]

On parameter estimation of a simple real-time flow aggregation model
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 7 2006, Huirong Fu
Abstract: There exists a clear need for a comprehensive framework for accurately analysing and realistically modelling the key traffic statistics that determine network performance.
Recently, a novel traffic model, sinusoid with uniform noise (SUN), has been proposed, which outperforms other models in that it simultaneously achieves tractability, parsimony, accuracy (in predicting network performance), and efficiency (in real-time capability). In this paper, we design, evaluate, and compare several estimation approaches for determining the key parameters in the SUN model: variance-based estimation (Var), minimum mean-square-error-based estimation (MMSE), MMSE with a variance constraint (Var+MMSE), MMSE of the autocorrelation function with a variance constraint (Var+AutoCor+MMSE), and variance of secondary demand-based estimation (Secondary Variance). Integrated with the SUN model, all the proposed methods are able to capture the basic behaviour of the aggregation reservation system and closely approximate the system performance. In addition, we find that: (1) Var is very simple to operate and provides both upper and lower performance bounds; it can be integrated into other methods to approximate the aggregation's performance very accurately and thus obtain an accurate solution; (2) Var+AutoCor+MMSE is superior to the other proposed methods in the accuracy with which it determines system performance; and (3) Var+MMSE and Var+AutoCor+MMSE differ from the other three methods in that both adopt an experimental analysis method, which helps to improve prediction accuracy while reducing computational complexity. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Linear discriminant analysis in network traffic modelling
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2006, Bing-Yi Zhang
Abstract: It is difficult to judge accurately whether a traffic model fits the actual traffic. The traditional approach is to compare the Hurst parameter, the data histogram, and the autocorrelation function. Comparing the Hurst parameter cannot give exact results and judgements.
Comparing the data histogram and the autocorrelation function gives only a qualitative judgement. We propose a novel algorithm based on linear discriminant analysis. Using this algorithm, we analysed several data sets with large and small differences, as well as data sets generated by a network simulator. The analysis results are accurate. Compared with the traditional methods, this algorithm is practical and can conveniently give an accurate judgement for complex network traffic traces. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Low-complexity unambiguous acquisition methods for BOC-modulated CDMA signals
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 6 2008, Elena Simona Lohan
Abstract: The new M-code signals of GPS and the signals proposed for the future Galileo systems are of split-spectrum type, where the pseudorandom (PRN) code is multiplied with rectangular sub-carriers in one or several stages. Sine and cosine binary-offset-carrier (BOC) modulations are examples of modulations that split the signal spectrum and create ambiguities in the envelope of the autocorrelation function (ACF) of the modulated signals. Thus, the acquisition of split-spectrum signals, based on the ambiguous ACF, poses some challenges, which might be overcome at the expense of higher complexity (e.g. by decreasing the step in searching the timing hypotheses). Recently, two techniques that deal with the ambiguities of the ACF have been proposed, referred to as 'sideband (SB) techniques' (by Betz, Fishman et al.) or 'BPSK-like' techniques (by Martin, Heiries et al.), since they use SB correlation channels and the obtained ACF looks similar to the ACF of a BPSK-modulated PRN code. These techniques allow the use of a higher search step compared with the ambiguous-ACF situation. However, both techniques use SB-selection filters and modified reference PRN codes at the receivers, which increases the implementation complexity.
Moreover, the 'BPSK-like' techniques have so far been studied only for even BOC-modulation orders (i.e. an integer ratio between the sub-carrier frequency and the chip rate) and fail to work for odd BOC-modulation orders (equivalently, for split-spectrum signals with significant zero-frequency content). We propose here three reduced-complexity methods that remove the ambiguities of the ACF of split-spectrum signals and work for both even and odd BOC-modulation orders. Two of the proposed methods are extensions of the previously mentioned techniques, and the third, introduced by the authors, is called the unsuppressed adjacent lobes (UAL) technique. We justify the choice of the parameters of the proposed methods via theoretical analysis and compare the alternative methods in terms of complexity and performance. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Consumer-resource interactions and cyclic population dynamics of Tanytarsus gracilentus (Diptera: Chironomidae)
JOURNAL OF ANIMAL ECOLOGY, Issue 5 2002, Árni Einarsson
Summary
1. Tanytarsus gracilentus population dynamics in Lake Myvatn show a tendency to cycle, with three oscillations occurring between 1977 and 1999 with periods of roughly 7 years. The population abundance fluctuated over four orders of magnitude.
2. A partial autocorrelation function (PACF) accounting for measurement error revealed a strong positive lag-1 autocorrelation and a moderate negative lag-2 partial autocorrelation. This suggests that the dynamics can be explained by a simple second-order autoregressive process.
3. We tested the alternative hypotheses that the cyclic dynamics of T. gracilentus were driven by consumer-resource interactions, in which T. gracilentus is the consumer, or by predator-prey interactions, in which T. gracilentus is the prey. We analysed autoregressive models including both consumer-resource interactions and predator-prey interactions.
4. Wing length of T. gracilentus was used as a surrogate for resource abundance and/or quality, because body size is known to fluctuate with resource abundance and quality in dipterans. Furthermore, the wing lengths of Micropsectra lindrothi, a species ecologically similar to T. gracilentus, fluctuated synchronously with T. gracilentus wing lengths, indicating that the shared resources of these two species were indeed cycling. Wing lengths of other chironomid species were not synchronized.
5. The predators of T. gracilentus included midges in the genera Procladius and Macropelopia, and the fish Gasterosteus aculeatus (three-spined stickleback).
6. The autoregressive models supported the hypothesis that T. gracilentus dynamics were driven by consumer-resource interactions, and rejected the hypothesis that the dynamics were driven by predator-prey interactions.
7. The models also revealed the consequences of consumer-resource interactions for the magnitude of fluctuations in T. gracilentus abundance. Consumer-resource interactions amplified the exogenous variability affecting T. gracilentus per capita population growth rates (e.g. temperature, rainfall, etc.), leading to variability in abundance more than two orders of magnitude greater than the exogenous variability. [source]

Forecasting high-frequency financial data with the ARFIMA-ARCH model
JOURNAL OF FORECASTING, Issue 7 2001, Michael A. Hauser
Abstract: Financial data series are often described as exhibiting two non-standard time series features. First, the variance often changes over time, with alternating phases of high and low volatility. Such behaviour is well captured by ARCH models. Second, long memory may cause a slower decay of the autocorrelation function than would be implied by ARMA models. Fractionally integrated models have been offered as explanations. Recently, the ARFIMA-ARCH model class has been suggested as a way of coping with both phenomena simultaneously.
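The PACF signature in point 2 of the Tanytarsus summary above (strong positive lag-1, moderate negative lag-2 partial autocorrelation) is exactly what a second-order autoregression with quasi-cyclic dynamics produces. A minimal sketch, with hypothetical coefficients chosen only to give a quasi-period near 7 time steps (not the paper's fitted values), might look like this:

```python
import numpy as np

def pacf(x, nlags):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    x = np.asarray(x) - np.mean(x)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / np.dot(x, x)
                  for k in range(nlags + 1)])
    phi = np.zeros((nlags + 1, nlags + 1))
    p = np.zeros(nlags + 1)
    phi[1, 1] = p[1] = r[1]
    v = 1.0 - r[1] ** 2
    for k in range(2, nlags + 1):
        phi[k, k] = (r[k] - np.dot(phi[k - 1, 1:k], r[1:k][::-1])) / v
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        v *= 1.0 - phi[k, k] ** 2
        p[k] = phi[k, k]
    return p[1:]

# AR(2) with b1 > 0 and b2 < 0: quasi-period 2*pi/arccos(b1/(2*sqrt(-b2))),
# about 6-7 steps for these (hypothetical) coefficients
rng = np.random.default_rng(1)
n, b1, b2 = 20000, 0.7, -0.4
x = np.zeros(n)
for t in range(2, n):
    x[t] = b1 * x[t - 1] + b2 * x[t - 2] + rng.normal()
p = pacf(x, 2)   # lag-1 PACF near b1/(1-b2) = 0.5, lag-2 PACF near b2 = -0.4
```

The estimated lag-2 partial autocorrelation recovers b2 directly, which is why the PACF cut-off identifies the autoregressive order.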
For estimation we implement the bias correction of Cox and Reid (1987). For daily data on the Swiss 1-month Euromarket interest rate during the period 1986-1989, the ARFIMA-ARCH (5,d,2/4) model with non-integer d is selected by AIC. Model-based out-of-sample forecasts for the mean are better than predictions based on conditionally homoscedastic white noise only for longer horizons (, > 40). Regarding volatility forecasts, however, the selected ARFIMA-ARCH models dominate. Copyright © 2001 John Wiley & Sons, Ltd. [source]

First-order rounded integer-valued autoregressive (RINAR(1)) process
JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2009, M. Kachour
Abstract: We introduce a new class of autoregressive models for integer-valued time series using the rounding operator. Compared with classical INAR models based on the thinning operator, the new models have several advantages: a simple innovation structure, autoregressive coefficients with arbitrary signs, possible negative values for the time series, and possible negative values for the autocorrelation function. Focusing on the first-order RINAR(1) model, we give conditions for its ergodicity and stationarity. For parameter estimation, a least squares estimator is introduced and we prove its consistency under a suitable identifiability condition. Simulation experiments, as well as analyses of real data sets, are carried out to attest to the model's performance. [source]

Understanding the halo-mass and galaxy-mass cross-correlation functions
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2008, Eric Hayashi
ABSTRACT: We use the Millennium Simulation (MS) to measure the cross-correlation between halo centres and mass (or, equivalently, the average density profiles of dark haloes) in a Lambda cold dark matter (ΛCDM) cosmology. We present results for radii in the range 10 h^-1 kpc < r < 30 h^-1 Mpc and for halo masses in the range 4 × 10^10 < M200 < 4 × 10^14 h^-1 M⊙.
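The rounding-operator construction in the RINAR(1) abstract above can be illustrated with a short simulation. The innovation distribution and the coefficient below are illustrative assumptions, not the paper's specification; the point is that a negative autoregressive coefficient, impossible under thinning-based INAR, yields negative series values and a negative autocorrelation function:

```python
import numpy as np

def simulate_rinar1(alpha, n, rng):
    """Sketch of a first-order rounded integer-valued autoregression:
    X_t = round(alpha * X_{t-1}) + eps_t with integer-valued innovations.
    The rounding operator keeps the state integer while allowing alpha
    (and hence the ACF) to be negative."""
    x = np.zeros(n, dtype=int)
    eps = rng.integers(-2, 3, size=n)   # integer innovations on {-2, ..., 2}
    for t in range(1, n):
        x[t] = int(np.round(alpha * x[t - 1])) + eps[t]
    return x

rng = np.random.default_rng(2)
x = simulate_rinar1(-0.6, 20000, rng)
x0 = x - x.mean()
rho1 = np.dot(x0[:-1], x0[1:]) / np.dot(x0, x0)   # lag-1 autocorrelation < 0
```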
Both at z = 0 and at z = 0.76, these cross-correlations are surprisingly well fitted if the inner region is approximated by a density profile of NFW or Einasto form, the outer region by a biased version of the linear mass autocorrelation function, and the maximum of the two is adopted where they are comparable. We use a simulation of galaxy formation within the MS to explore how these results are reflected in cross-correlations between galaxies and mass. These are directly observable through galaxy-galaxy lensing. Here also we find that simple models can represent the simulation results remarkably well, typically to ~10 per cent. Such models can be used to extend our results to other redshifts, to cosmologies with other parameters, and to other assumptions about how galaxies populate dark haloes. Our galaxy formation simulation already reproduces current galaxy-galaxy lensing data quite well. The characteristic features predicted in the galaxy-galaxy lensing signal should provide a strong test of the ΛCDM cosmology as well as a route to understanding how galaxies form within it. [source]

Scaling and correlation analysis of galactic images
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2001, P. Frick
Different scaling and autocorrelation characteristics and their application to astronomical images are discussed: the structure function, the autocorrelation function, Fourier spectra, and wavelet spectra. The choice of the mathematical tool is of great importance for the scaling analysis of images. The structure function, for example, cannot resolve scales close to the dominating large-scale structures, and can lead to the wrong interpretation that a continuous range of scales with a power law exists. The traditional Fourier technique, applied to real data, gives very spiky spectra, in which the separation of real maxima from high harmonics can be difficult.
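The two-regime fit described in the halo-mass abstract above (NFW inner profile, biased linear correlation term outside, with the maximum of the two adopted where they are comparable) can be sketched as follows. The power-law stand-in for the linear mass autocorrelation and every numerical value here are purely illustrative assumptions, not fitted parameters from the paper:

```python
import numpy as np

def nfw_density(r, rho_s, r_s):
    """NFW profile used for the inner region of the cross-correlation."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def halo_mass_xcorr(r, rho_s, r_s, bias, xi_lin, rho_mean):
    """Two-regime model: NFW inside, biased linear correlation outside,
    joined by taking the maximum where the two are comparable."""
    inner = nfw_density(r, rho_s, r_s)
    outer = bias ** 2 * rho_mean * xi_lin(r)
    return np.maximum(inner, outer)

# toy power-law stand-in for the linear mass autocorrelation (illustrative)
xi_lin = lambda r: (r / 5.0) ** -1.8
r = np.logspace(-2, 1.5, 200)   # radii, nominally in h^-1 Mpc
profile = halo_mass_xcorr(r, rho_s=1e5, r_s=0.2, bias=1.5,
                          xi_lin=xi_lin, rho_mean=1.0)
```

With these toy numbers the NFW term dominates at small radii and the biased linear term at large radii, reproducing the qualitative crossover the abstract describes.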
We recommend as the optimal tool the wavelet spectrum, with a suitable choice of the analysing wavelet. We introduce the wavelet cross-correlation function, which enables us to study the correlation between images as a function of scale. The cross-correlation coefficient strongly depends on the scale. The classical cross-correlation coefficient can be misleading if a bright, extended central region or an extended disc exists in the galactic images. An analysis of the scaling and cross-correlation characteristics of nine optical and radio maps of the nearby spiral galaxy NGC 6946 is presented. The wavelet spectra allow us to separate structures on different scales, such as spiral arms and diffuse extended emission. Only the images of thermal radio emission and Hα emission give indications of three-dimensional Kolmogorov-type turbulence on the smallest resolved scales. The cross-correlations between the images of NGC 6946 show strong similarities between the images of total radio emission, red light, and mid-infrared dust emission on all scales. The best correlation is found between total radio emission and dust emission. Thermal radio continuum and Hα emission are best correlated on a scale of about , the typical width of a spiral arm. On a similar scale, the images of polarized radio and Hα emission are anticorrelated, a fact that remains undetected with classical cross-correlation analysis. [source]

Semi-classical calculation of resonant states of a charged particle interacting with a metallic surface
PHYSICA STATUS SOLIDI (B) BASIC SOLID STATE PHYSICS, Issue 10 2005, John Jairo Zuluaga
Abstract: We assess the applicability of the semi-classical approach of Herman-Kluk with filter diagonalization to determine resonant states of either the electron-surface system or the ion-surface system. An effective potential model of the interaction of an electron with a ruthenium metallic surface is used.
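The scale-by-scale cross-correlation introduced in the galactic-images abstract above can be sketched in one dimension, with a crude difference-of-Gaussians band-pass standing in for proper wavelet coefficients (the construction and all parameters are illustrative assumptions, not the authors' method). Two signals sharing only large-scale structure then correlate strongly at large scales and weakly at small ones, which is precisely the information a single classical cross-correlation coefficient hides:

```python
import numpy as np

def gaussian_smooth(x, sigma):
    """Smooth with a truncated, normalized Gaussian kernel ('same' mode)."""
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    g = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    return np.convolve(x, g, mode="same")

def wavelet_band(x, sigma):
    """Band-pass coefficients at one scale as a difference of Gaussians."""
    return gaussian_smooth(x, sigma) - gaussian_smooth(x, 2 * sigma)

def scale_xcorr(x, y, sigma):
    """Scale-by-scale cross-correlation coefficient of two signals."""
    wx, wy = wavelet_band(x, sigma), wavelet_band(y, sigma)
    return np.sum(wx * wy) / np.sqrt(np.sum(wx ** 2) * np.sum(wy ** 2))

rng = np.random.default_rng(3)
n = 4096
trend = np.sin(2 * np.pi * np.arange(n) / 512.0)   # shared large-scale structure
x = trend + rng.normal(scale=0.5, size=n)          # independent small-scale noise
y = trend + rng.normal(scale=0.5, size=n)
r_small = scale_xcorr(x, y, sigma=2)    # noise-dominated band: near zero
r_large = scale_xcorr(x, y, sigma=64)   # trend-dominated band: near one
```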
The evolution of the wave-function and the resonant states of this system are calculated. Analogous results for the interaction of the system formed by the H− ion and the ruthenium surface are presented. For the calculation of the resonances, the semi-classical wave-function is found, and the autocorrelation function between the initial and final wave-functions is calculated, from which the positions and widths of the resonances are extracted using harmonic inversion by filter diagonalization. The results are compared with results available in the literature for similar models obtained by quantum calculations using the fast Fourier transform. The positions of the lower-lying resonances found with the semi-classical and quantum approaches match closely, while the values of the widths of the resonances show larger discrepancies. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Positron annihilation spectroscopic study of hydrothermal grown n-type zinc oxide single crystal
PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 10 2007, C. W. Hui
Abstract: Positron lifetime and coincidence Doppler broadening spectroscopic (CDBS) measurements were carried out to study the defects in two hydrothermally (HT) grown ZnO single-crystal samples (HT1 and HT2) obtained from two companies. A single-component model offered good fits to the room-temperature spectra of HT1 and HT2, with positron lifetimes of 199 ps and 181 ps, respectively. These two lifetime components were associated with saturated positron trapping into two VZn-related defects with different microstructures. The positron lifetime of HT1 was found to be temperature independent. For the HT2 sample, the positron lifetime remained unchanged for T > 200 K and decreased with decreasing temperature for T < 200 K.
This could be explained by the presence of an additional positron trap having an electronic environment similar to that of the delocalized state and competing with the 181 ps component in trapping positrons at low temperatures. The positron-electron autocorrelation function, which is the fingerprint of the annihilation site, was extracted from the CDBS spectrum. The autocorrelation functions obtained for HT1 and HT2 at room temperature, and for HT2 at 50 K, have features consistent with the above postulates: the 181 ps and 199 ps components have distinct microstructures, and the low-temperature positron trap exists in HT2. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Identification and fine tuning of closed-loop processes under discrete EWMA and PI adjustments
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6 2001, Rong Pan
Abstract: Conventional process identification techniques for an open-loop process use the cross-correlation function between historical values of the process input and the process output. If the process is operated under a linear feedback controller, however, the cross-correlation function carries no information about the process transfer function, because of the linear dependency of the process input on the output. In this paper, several circumstances in which a closed-loop system can be identified from the autocorrelation function of the output are discussed. It is assumed that a proportional-integral controller with known parameters is acting on the process while the output data are collected. The disturbance is assumed to be a member of a simple yet useful family of stochastic models capable of representing drift. It is shown that, with these general assumptions, it is possible to identify some dynamic process models commonly encountered in manufacturing. After identification, our approach suggests tuning the controller to a near-optimal setting according to a well-known performance criterion.
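The resonance-extraction step in the semi-classical surface-scattering abstract above rests on the fact that the Fourier transform of a decaying autocorrelation function C(t) = Σ_k |a_k|² exp(iE_k t − Γ_k t) is a sum of Lorentzians whose peak positions give the resonance energies and whose half-widths give the decay rates. The sketch below uses a plain FFT with the sign convention chosen so peaks land at +E_k (harmonic inversion by filter diagonalization, as in the abstract, resolves such peaks far more sharply); all numbers are illustrative:

```python
import numpy as np

# two artificial resonances: positions E, widths G, weights a2 (illustrative)
E = np.array([1.0, 2.5])
G = np.array([0.02, 0.05])
a2 = np.array([1.0, 0.6])

dt, n = 0.05, 2 ** 14
t = dt * np.arange(n)
# decaying autocorrelation function built from the resonance parameters
C = sum(w * np.exp(1j * e * t - g * t) for w, e, g in zip(a2, E, G))

spec = np.abs(np.fft.fft(C)) * dt              # Lorentzian peaks at E_k
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)    # angular-frequency axis

def peak_near(lo, hi):
    """Location of the spectral maximum inside an angular-frequency window."""
    m = (omega > lo) & (omega < hi)
    return omega[m][np.argmax(spec[m])]
```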
Copyright © 2001 John Wiley & Sons, Ltd. [source]

Car-Parrinello Molecular Dynamics Study of the Blue-Shifted F3CH···FCD3 System in Liquid N2
CHEMPHYSCHEM, Issue 6 2006, Pawel Rodziewicz
Abstract: Fluoroform, as confirmed by both experimental and theoretical studies, can participate in improper H-bond formation, which is characterized by a noticeable increase in the fundamental stretching frequency ν(CH) (the so-called blue frequency shift), an irregular change in its integral intensity, and a contraction of the CH bond. A Car-Parrinello molecular dynamics simulation was performed for a complex formed by fluoroform (F3CH) and deuterated methyl fluoride (FCD3) in liquid nitrogen. Vibrational analysis based on the Fourier transform of the dipole moment autocorrelation function reproduces the blue shift of the fundamental stretching frequency ν(CH) and the decrease in the integral intensity. The dynamic contraction of the CH bond is also predicted. The stoichiometry of the solvated, blue-shifted complexes and their residence times are examined. [source]

Photophysical Aspects of Single-Molecule Detection by Two-Photon Excitation with Consideration of Sequential Pulsed Illumination
CHEMPHYSCHEM, Issue 5 2004, R. Niesner
Abstract: An important goal in single-molecule fluorescence correlation spectroscopy is the theoretical simulation of the fluorescence signal stemming from individual molecules and of its autocorrelation function. The simulation approaches developed so far are based exclusively on continuous-wave (cw) illumination and consequently on cw excitation. However, this approximation is no longer valid in the case of two-photon excitation, for which pulsed illumination is usually employed.
We present a novel theoretical model for simulating the fluorescence signal of single molecules and its autocorrelation function that takes into account the time dependence of the excitation flux and thus all illumination-dependent photoprocesses: two-photon excitation, induced emission, and photobleaching. Further important characteristics of our approach are the consideration of the dependence of the photobleaching rate on illumination and of the low intersystem-crossing rates of the studied coumarins. Moreover, using our approach we can predict quantitatively the effect of the laser pulse width on the fluorescence signal of a molecule, that is, the contributions of the photobleaching and saturation effects, and thus calculate the optimal laser pulse width. The theoretical autocorrelation functions were fitted to the experimental data, and we found good agreement between the resulting and the expected parameters. The most important parameter is the photobleaching constant σ, the cross-section of the Sn←S1 transition, which characterizes the photostability of the molecules independent of the experimental conditions. Its value is 1.7×10^-23 cm^2 for coumarin 153 and 5×10^-23 cm^2 for coumarin 314. [source]

Patterns of spatial autocorrelation of assemblages of birds, floristics, physiognomy, and primary productivity in the central Great Basin, USA
DIVERSITY AND DISTRIBUTIONS, Issue 3 2006, Erica Fleishman
ABSTRACT: We fitted spatial autocorrelation functions to distance-based data for assemblages of birds and for three attributes of the birds' habitats at 140 locations, separated by up to 65 km, in the Great Basin (Nevada, USA). The three habitat characteristics were the taxonomic composition of the vegetation, the physical structure of the vegetation, and a measure of primary productivity, the normalized difference vegetation index, estimated from satellite imagery.
We found that a spherical model was the best fit to the data for avifaunal composition, vegetation composition, and primary productivity, but the distance at which the spatial correlation effectively reached zero differed substantially among data sets (c. 30 km for birds, 20 km for vegetation composition, and 60 km for primary productivity). A power-law function was the best fit to the data for vegetation structure, indicating that the structure of vegetation differed by similar amounts irrespective of the distance between locations (up to the maximum distance measured). Our results suggest that the spatial structure of bird assemblages is more similar to vegetation composition than to either vegetation structure or primary productivity, but is autocorrelated over larger distances. We believe that the greater mobility of birds compared with plants may be responsible for this difference. [source]

A spectral projection method for the analysis of autocorrelation functions and projection errors in discrete particle simulation
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 7 2008, André Kaufmann
Abstract: Discrete particle simulation is a well-established tool for simulating particles and droplets suspended in the turbulent flows of academic and industrial applications. The study of properties such as the preferential concentration of inertial particles in regions of high shear and low vorticity requires the computation of autocorrelation functions. This can be a tedious task, as the discrete point particles need to be projected in some manner to obtain continuous autocorrelation functions. Projection of particle properties onto a computational grid, for instance the grid of the carrier phase, is also an issue when quantities such as particle concentrations are to be computed or source terms are exchanged between the carrier phase and the particles. The errors committed by commonly used projection methods are often unknown and difficult to analyse.
Grid resolution and sample size limit the achievable precision per unit of computational cost. Here, we present a spectral projection method that is free of sampling issues and addresses all of the above problems. The technique is limited only by computational resources and is easy to parallelize. Its only visible drawback is that it is restricted to simple geometries and therefore to academic applications. The spectral projection method consists of a discrete Fourier transform of the particle locations. The Fourier-transformed particle number density and momentum fields can then be used to compute the autocorrelation functions and the continuous physical-space fields needed to evaluate the error of the projection methods. The number of Fourier components used to discretize the projector kernel can be chosen such that the corresponding characteristic length scale is as small as needed. This allows one to study the phenomena of particle motion, for example in a region of preferential concentration that may be smaller than the cell size of the carrier-phase grid. The precision of the spectral projection method therefore depends only on the number of Fourier modes considered. Copyright © 2008 John Wiley & Sons, Ltd. [source] Molecular dynamics of phase transitions in clusters of alkali halides. INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 2 2001. Pedro C. R. Rodrigues. Abstract Molecular dynamics simulations of unconstrained alkali halide clusters with 8, 64, 216, 512, 1000, 1728, 2744, 4096, 5832, and 8000 ions have been carried out using the Born–Mayer–Huggins potential. All the clusters exhibit first-order melting and freezing transitions. The melting temperature increases with the number of ions and approaches the melting temperature of the bulk. Clusters with fewer than approximately 1000 ions present hysteresis cycles and practically no phase coexistence.
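The core of the spectral projection method described above, a discrete Fourier transform of the particle locations from which density autocorrelation functions follow, can be sketched in one dimension. Everything below (function names, particle positions, domain length, mode count) is an illustrative toy, not the paper's implementation:

```python
import cmath

def density_modes(positions, box_length, kmax):
    """Fourier modes of the particle number density on a periodic 1-D
    domain [0, box_length): n_k = sum_j exp(-i k x_j), k = 2*pi*m/box_length."""
    modes = []
    for m in range(-kmax, kmax + 1):
        k = 2.0 * cmath.pi * m / box_length
        modes.append(sum(cmath.exp(-1j * k * x) for x in positions))
    return modes

def density_power_spectrum(modes):
    """|n_k|^2; by the Wiener-Khinchin theorem, its inverse transform is
    the spatial autocorrelation function of the number density."""
    return [abs(n) ** 2 for n in modes]

# Toy cloud of particles on [0, 1); kmax sets the smallest resolved scale.
particles = [0.10, 0.15, 0.20, 0.70, 0.75]
spectrum = density_power_spectrum(density_modes(particles, 1.0, 4))
print(spectrum[4])  # k = 0 mode: squared particle count, 25.0
```

Increasing `kmax` resolves ever-smaller length scales, which mirrors the paper's point that precision depends only on the number of Fourier modes kept.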
Clusters with more than 1000 ions present phase coexistence during a significant part of the transition region, and hysteresis is progressively eliminated as the cluster size increases. It is suggested that hysteresis is an intrinsic characteristic of small clusters. In the transition regions the calculations have been performed by fixing the total energy of the clusters. It is shown that this technique provides a better way of analysing the transition mechanism than the usual procedure of fixing the temperature by ad hoc rescaling of the velocities or by using canonical molecular dynamics or Monte Carlo. A detailed analysis of the melting transition is presented. The effects of interfaces and impurities are discussed. A method based on the velocity autocorrelation functions is proposed to determine the molar fractions of the ions present in the solid and liquid phases, as well as to produce colored snapshots of the phases in coexistence. The overall agreement of the estimated melting points and enthalpies of melting with experiment is fairly good. The estimated melting point and enthalpy of melting for KCl in particular are in excellent agreement with the experimental values. © 2001 John Wiley & Sons, Inc. Int J Quantum Chem 84: 169–180, 2001 [source] Experimental and theoretical study of the polarized infrared spectra of the hydrogen bond in 3-thiophenic acid crystal. JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 3 2010. Rekik Najeh. Abstract This article presents the results of experimental and theoretical studies of the νOH and νOD band shapes in the polarized infrared spectra of 3-thiophenic acid crystals measured at room temperature and at 77 K. The line shapes are studied theoretically within the framework of the anharmonic coupling theory, Davydov coupling, Fermi resonance, direct and indirect damping, as well as the selection-rule-breaking mechanism for forbidden transitions.
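The velocity autocorrelation functions proposed above for separating solid-like from liquid-like ions can be sketched for a scalar (1-D) velocity series. The oscillating toy signal is illustrative only, standing in for the back-and-forth motion of an ion bound in a lattice:

```python
def vacf(velocities, max_lag):
    """Normalized velocity autocorrelation function, averaged over time
    origins: C(t) = <v(0) v(t)> / <v(0) v(0)>.  Oscillatory C(t) indicates
    solid-like vibration; rapid decay indicates liquid-like diffusion."""
    n = len(velocities)
    c0 = sum(v * v for v in velocities) / n
    return [
        sum(velocities[t] * velocities[t + lag] for t in range(n - lag))
        / ((n - lag) * c0)
        for lag in range(max_lag + 1)
    ]

# Toy check: a pure oscillation keeps full (anti)correlation at every lag.
signal = [(-1.0) ** t for t in range(100)]
print(vacf(signal, 2))  # [1.0, -1.0, 1.0]
```

Classifying each ion by how quickly its individual VACF decays is one way to arrive at the solid and liquid molar fractions mentioned in the abstract.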
The adiabatic approximation, which separates the high-frequency motion from the slow motion of the H-bond bridge, is applied to each H-bond bridge of the dimer, and a strong nonadiabatic correction is introduced via the resonant exchange between the fast-mode excited states of the two moieties. The spectral density is obtained within linear response theory by Fourier transform of the damped autocorrelation functions. The approach correctly fits the experimental line shape of the hydrogenated compound and satisfactorily predicts the evolution of the line shapes with temperature and the change in the line shape upon isotopic substitution. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2010 [source] Detection of delayed density dependence in an orchid population. JOURNAL OF ECOLOGY, Issue 2 2000. M. P. Gillman. Summary 1. Annual censuses of Orchis morio (green-winged orchid) flowering spikes have been taken over a 27-year period in a replicated factorial experiment on the effects of fertilizer application. Census data, combined by block or treatment, were used in time-series analyses to test for density dependence. 2. Partial autocorrelation functions revealed the importance of positive correlations at lag 1 and negative correlations at lag 5. Stepwise multiple regressions provided evidence of delayed density dependence, again with a delay of about 5 years, with no evidence of direct (first-order) density dependence. 3. First-order autocorrelations and delayed density dependence were considered in the light of the known stage structure and generation time of the plant and the possibility of density dependence at different points in the life history. 4. Model structure affects the detection of density dependence, increasing the propensity for type I errors.
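The partial autocorrelation functions used above can be computed with the Durbin-Levinson recursion, which removes the influence of intermediate lags before reporting the correlation at each lag k. A stdlib-only sketch (the white-noise series is an illustrative check, not the census data):

```python
import random

def autocorr(x, max_lag):
    """Sample autocorrelation function up to max_lag."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    return [
        sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / (n * c0)
        for k in range(max_lag + 1)
    ]

def pacf(x, max_lag):
    """Partial autocorrelation via the Durbin-Levinson recursion:
    pacf[k] is the lag-k correlation after lags 1..k-1 are partialled out."""
    rho = autocorr(x, max_lag)
    out, phi = [1.0], []
    for k in range(1, max_lag + 1):
        if k == 1:
            phi_kk = rho[1]
            phi = [phi_kk]
        else:
            num = rho[k] - sum(phi[j] * rho[k - 1 - j] for j in range(k - 1))
            den = 1.0 - sum(phi[j] * rho[j + 1] for j in range(k - 1))
            phi_kk = num / den
            phi = [phi[j] - phi_kk * phi[k - 2 - j] for j in range(k - 1)] + [phi_kk]
        out.append(phi_kk)
    return out

# For white noise, every partial autocorrelation beyond lag 0 is near zero;
# in census data, a significant spike at lag 5 would be the signature of
# delayed density dependence described in the study above.
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]
partials = pacf(noise, 5)
```

In practice the partial autocorrelations are compared against approximate confidence bounds of about ±2/√n to decide which lags are significant.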
[source] The efficiency of natural gas futures markets. OPEC ENERGY REVIEW, Issue 2 2003. Ahmed El Hachemi Mazighi. Recent experience with the emergence of futures markets for natural gas has led to many questions about the drivers and functioning of these markets. Most often, however, studies lack strong statistical support. The objective of this article is to use some classical statistical tests to check whether futures markets for natural gas (NG) are efficient or not. The problem of NG market efficiency is closely linked to the debate on the value of NG. More precisely, if futures markets were really efficient, then: 1) spot prices would reflect the existence of a market assessment, which is proof that speculation and the manipulation of prices are absent; 2) as a consequence, spot prices could give clear signals about the value of NG; and 3) historical series of spot prices could serve as "clean" benchmarks in the pricing of NG in long-term contracts. On the whole, since the major share of NG is sold to power producers, the efficiency of futures markets implies that spot prices for NG are driven increasingly by power prices. On the other hand, if futures markets for natural gas fail the efficiency tests, this will reflect: 1) a lack of liquidity in futures markets and/or possibilities of an excess return in the short term; 2) a pass-through of the seasonality of power demand into the gas market; 3) the existence of a transitory process, before spot markets become efficient and give clear signals about the value of NG. Using monthly data on three segments of the futures markets, our findings show that efficiency is almost completely rejected on both the International Petroleum Exchange in London (UK market) and the New York Mercantile Exchange (US market). On the NYMEX, the principle of "co-movement" between spot and forward prices seems to be respected.
However, the autocorrelation functions of the first differences of prices show that price fluctuations are not random for three segments out of four. Further, both the NYMEX and the IPE fail the test of the hypothesis that the forward price is an optimal predictor of the spot price. Consequently, unless we see an increase in the liquidity of spot markets and in the relative share of NG spot trading, futures markets cannot be considered efficient. [source]
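The randomness check applied above to the first differences of prices amounts to testing whether their serial correlation is indistinguishable from zero. A stdlib-only sketch with a toy seasonal price path (illustrative, not market data) shows the opposite pattern, the strong autocorrelation that a seasonal pass-through would leave in the differences:

```python
import math

def first_differences(prices):
    """Period-to-period price changes."""
    return [b - a for a, b in zip(prices, prices[1:])]

def lag1_autocorr(x):
    """Lag-1 sample autocorrelation; under weak-form efficiency, the first
    differences of prices should be serially uncorrelated (near zero)."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x)
    c1 = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
    return c1 / c0

# A deterministic seasonal price path: its differences remain strongly
# correlated, the kind of non-randomness the efficiency tests reject.
seasonal = [10.0 + math.sin(0.2 * t) for t in range(200)]
r1 = lag1_autocorr(first_differences(seasonal))  # close to cos(0.2), about 0.98
```

For a genuinely random walk, the same statistic computed on its differences would hover near zero, which is the behaviour the NYMEX and IPE series fail to exhibit in three of the four segments.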