Convolution
Selected Abstracts

Preserving Log-Concavity Under Convolution: Comment
ECONOMETRICA, Issue 3 2002
Eugenio J. Miravete
No abstract is available for this article. [source]

Combinatorial development of polymer nanocomposites using transient processing conditions in twin screw extrusion
AICHE JOURNAL, Issue 7 2008
Arun K. Kota
Abstract: A new approach is presented for the combinatorial development of polymer nanocomposites with compositional gradients (CGs). The CGs were developed using transient processing conditions in twin screw extrusion with small quantities of expensive nanoscale fillers. Convolution of a step input with normalized residence volume distributions (RVDs) was used to establish the processing-structure relationship for the CGs. The normalized RVD was established as a process characteristic independent of processing conditions and was measured in situ using an optical probe. The CG determined nondestructively using the new combinatorial approach was validated through comparison with more time-consuming and destructive thermogravimetric analysis. The CG could also be established with relatively inexpensive microscale fillers using the normalized RVD obtained with nanoscale fillers, suggesting that the transient effects of the mixing process are independent of the size of the filler. Finally, the structure-property relationship of the combinatorially developed polymer nanocomposites was established by characterizing their dynamic mechanical behavior (storage modulus, G′, and loss modulus, G″). The dynamic mechanical behavior of the combinatorially developed composites correlated well with that of the batch-processed ones, indicating that the transient mixing conditions in extrusion do not affect the material properties. © 2008 American Institute of Chemical Engineers AIChE J, 2008 [source]

Gradient Estimation in Volume Data using 4D Linear Regression
COMPUTER GRAPHICS FORUM, Issue 3 2000
László Neumann
In this paper a new gradient estimation method is presented which is based on linear regression. Previous contextual shading techniques try to fit an approximate function to a set of surface points in the neighborhood of a given voxel; therefore a system of linear equations has to be solved using the computationally expensive Gaussian elimination. In contrast, our method approximates the density function itself in a local neighborhood with a 3D regression hyperplane. This approach also leads to a system of linear equations, but we will show that it can be solved with an efficient convolution. Our method provides at each voxel location the normal vector and the translation of the regression hyperplane, which are considered as a gradient and a filtered density value, respectively. Therefore this technique can be used for surface smoothing and gradient estimation at the same time. [source]
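A minimal sketch of the convolution idea described above (my own illustration with an assumed 3x3x3 neighborhood, not the authors' code): by symmetry of the neighborhood, the least-squares hyperplane fit decouples, and each gradient component reduces to a correlation of the volume with a kernel of coordinate offsets.

```python
# Sketch: fit f(x) ~ a + g.x over a 3x3x3 neighborhood. The normal equations
# decouple, so each gradient component is a correlation with a kernel of
# coordinate offsets divided by the sum of squared offsets (here 18).
import numpy as np
from scipy.ndimage import correlate

def regression_gradient(volume):
    z, y, x = np.mgrid[-1:2, -1:2, -1:2]       # offsets of the 27 neighbors
    gradients = []
    for offs in (x, y, z):
        kernel = offs / np.sum(offs**2)        # least-squares weights
        gradients.append(correlate(volume.astype(float), kernel, mode='nearest'))
    return gradients                           # [df/dx, df/dy, df/dz]

# Smoke test on a linear ramp: the estimated x-gradient should be ~2.
vol = np.fromfunction(lambda k, j, i: 2.0 * i, (8, 8, 8))
gx, gy, gz = regression_gradient(vol)
print(gx[4, 4, 4], gy[4, 4, 4])                # ~2.0, ~0.0
```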
Investigation of the Influence of Overvoltage, Auxiliary Glow Current and Relaxation Time on the Electrical Breakdown Time Delay Distributions in Neon
CONTRIBUTIONS TO PLASMA PHYSICS, Issue 2 2005
Č. A. Maluckov
Abstract: Results of the statistical analysis of the electrical breakdown time delay for a neon-filled tube at 13.3 mbar are presented in this paper. Experimental distributions of the breakdown time delay were established on the basis of 200 successive and independent measurements, for different overvoltages, relaxation times and auxiliary glows. The obtained experimental distributions deviate from the usual exponential distribution. Breakdown time delay distributions are numerically generated, using the Monte Carlo method, as compositions of two independent random variables with an exponential and a Gaussian distribution. The theoretical breakdown time delay distribution is obtained from the convolution of the exponential and Gaussian distributions. The performed analysis shows that the crucial parameter determining the complex structure of the time delay is the overvoltage: if it is of the order of a few per cent, the distribution of the time delay must be treated as a convolution of two random variables. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
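A short sketch of the model above (my illustration with hypothetical parameters): the sum of an exponential statistical delay and a Gaussian formative delay follows the ex-Gaussian distribution, which is exactly the convolution of the two densities.

```python
# Sketch: Monte Carlo composition of an exponential and a Gaussian random
# variable, compared with the analytic convolution (ex-Gaussian density).
# tau, mu, sigma are made-up illustrative values, not the paper's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tau, mu, sigma = 50.0, 20.0, 4.0

samples = rng.exponential(tau, 200_000) + rng.normal(mu, sigma, 200_000)

# scipy's exponnorm parameterizes the exponential mean as K * scale
t = np.linspace(0, 300, 601)
pdf = stats.exponnorm.pdf(t, K=tau / sigma, loc=mu, scale=sigma)

hist, edges = np.histogram(samples, bins=t, density=True)
print(np.max(np.abs(hist - pdf[:-1])))    # small: MC matches the convolution
```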
Hippocampal Malformations Do Not Necessarily Evolve into Hippocampal Sclerosis
EPILEPSIA, Issue 6 2005
Arjune Sen
Summary: Purpose: Hippocampal malformations have been proposed to underlie or evolve into hippocampal sclerosis, a common cause of refractory partial epilepsy. We report two patients with chronic epilepsy and developmental abnormalities of the hippocampus and cortex. We seek to address, in patients with recurrent convulsive seizures over many decades, whether hippocampal malformations necessarily progress to hippocampal sclerosis. Methods: The first patient died at age 76 years and had experienced convulsive seizures for 43 years. The second patient, aged 64 years at death, had experienced convulsive seizures for 49 years. The brains were processed routinely. Immunohistochemistry for dynorphin and neuropeptide Y was performed. Results: The first case exhibited bilateral perisylvian polymicrogyria. Both hippocampi demonstrated abnormal convolution in the CA1 subfield and subiculum. In the second case, periventricular heterotopia was found in the wall of the right lateral ventricle. The right hippocampus was abnormally oriented, with excessive convolutions of the pyramidal cell layer between CA1 and the subiculum. In neither patient did the hippocampi exhibit neuronal loss. Furthermore, dynorphin immunohistochemistry revealed no reactivity in the molecular layers, and staining with neuropeptide Y confirmed normal numbers of hilar interneurons. Conclusions: These two cases demonstrate histologically that, even in long-standing epilepsy, malformations of the hippocampus do not necessarily develop into hippocampal sclerosis. [source]

Neural responses to uninterrupted natural speech can be extracted with precise temporal resolution
EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 1 2010
Edmund C. Lalor
Abstract: The human auditory system has evolved to efficiently process individual streams of speech. However, obtaining temporally detailed responses to distinct continuous natural speech streams has hitherto been impracticable using standard neurophysiological techniques. Here a method is described which provides for the estimation of a temporally precise electrophysiological response to uninterrupted natural speech. We have termed this response AESPA (Auditory Evoked Spread Spectrum Analysis) and it represents an estimate of the impulse response of the auditory system. It is obtained by assuming that the recorded electrophysiological function represents a convolution of the amplitude envelope of a continuous speech stream with the to-be-estimated impulse response. We present examples of these responses using both scalp and intracranially recorded human EEG, which were obtained while subjects listened to a binaurally presented recording of a male speaker reading naturally from a classic work of fiction. This method expands the arsenal of stimulation types that can now be effectively used to derive auditory evoked responses and allows for the use of considerably more ecologically valid stimulation parameters. Some implications for future research efforts are presented. [source]
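A sketch of the underlying estimation problem (my reconstruction with simulated data, not the published AESPA code): if the recording is modelled as the speech envelope convolved with an unknown impulse response w, then w can be recovered by regularized least squares on a matrix of lagged copies of the envelope.

```python
# Sketch: eeg = envelope (*) w + noise; estimate w by ridge regression.
# fs, n_lags, the envelope and w_true are all hypothetical stand-ins.
import numpy as np

fs, n_lags = 128, 32                                  # Hz, response length
rng = np.random.default_rng(1)
env = np.abs(rng.normal(size=fs * 60))                # stand-in speech envelope
w_true = np.sin(np.linspace(0, np.pi, n_lags))        # hypothetical response
eeg = np.convolve(env, w_true)[:env.size] + rng.normal(scale=0.5, size=env.size)

X = np.zeros((env.size, n_lags))                      # column j = envelope lagged by j
for j in range(n_lags):
    X[j:, j] = env[:env.size - j]

w_hat = np.linalg.solve(X.T @ X + 100.0 * np.eye(n_lags), X.T @ eeg)
print(np.corrcoef(w_hat, w_true)[0, 1])               # close to 1
```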
Synthesisation of pulse shaping waveforms for spectral efficient digital modulations: some practical approaches
EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 1 2006
Hsiao-Hwa Chen
The bandwidth efficiency of all digital modulations is closely associated with the baseband pulse shaping waveforms (PSWs) applied before carrier modulation. Traditional PSW design approaches usually work in such a way that the design of PSWs in the time domain hardly reveals a direct connection with their spectral characteristics. In other words, the designing process of the PSWs offers too little degree of freedom to control the spectral properties of the resultant signals. This paper presents a methodology, based on which several new PSW synthesising approaches, including time-domain convolution (TDC), steepest sidelobe roll-off (SSR) and zero-point insertion (ZPI) methods, are proposed to design different PSWs with controllable spectral characteristics, such as sidelobe roll-off rate, main lobe width and null positions in their power spectral density functions. In particular, the SSR and ZPI methods are based on a truncated cosine function series, which can be further generalised to use other seed functions. The results show that the approaches can help us to generate a PSW database containing a wide collection of promising PSWs to suit diverse wireless applications, such as traditional digital modems as well as emerging ultra-wideband systems. Copyright © 2005 AEIT. [source]

Measuring finite-frequency body-wave amplitudes and traveltimes
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2006
Karin Sigloch
Summary: We have developed a method to measure finite-frequency amplitude and traveltime anomalies of teleseismic P waves. We use a matched filtering approach that models the first 25 s of a seismogram after the P arrival, which includes the depth phases pP and sP. Given a set of broad-band seismograms from a teleseismic event, we compute synthetic Green's functions using published moment tensor solutions. We jointly deconvolve global or regional sets of seismograms with their Green's functions to obtain the broad-band source time function. The matched filter of a seismogram is the convolution of the Green's function with the source time function. Traveltimes are computed by cross-correlating each seismogram with its matched filter. Amplitude anomalies are defined as the multiplicative factors that minimize the RMS misfit between matched filters and data. The procedure is implemented in an iterative fashion, which allows for joint inversion for the source time function, amplitudes, and a correction to the moment tensor. Cluster analysis is used to identify azimuthally distinct groups of seismograms when source effects with azimuthal dependence are prominent. We then invert for one source time function per group. We implement this inversion for a range of source depths to determine the most likely depth, as indicated by the overall RMS misfit, and by the non-negativity and compactness of the source time function. Finite-frequency measurements are obtained by filtering broad-band data and matched filters through a bank of passband filters. The method is validated on a set of 15 events of magnitude 5.8 to 6.9. Our focus is on the densely instrumented Western US. Quasi-duplet events ('quplets') are used to estimate measurement uncertainty on real data. Robust results are achieved for wave periods between 24 and 2 s. Traveltime dispersion is of the order of 0.5 s. Amplitude anomalies are of the order of 1 dB in the lowest bands and 3 dB in the highest bands, corresponding to amplification factors of 1.2 and 2.0, respectively. Measurement uncertainties for amplitudes and traveltimes depend mostly on station coverage, accuracy of the moment tensor estimate, and frequency band. We investigate the influence of those parameters in tests on synthetic data. Along the RISTRA array in the Western US, we observe amplitude and traveltime patterns that are coherent on scales of hundreds of kilometres. Below two sections of the array, we observe a combination of frequency-dependent amplitude and traveltime patterns that strongly suggests wavefront healing effects. [source]
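A toy version of the measurement step above (synthetic data, my illustration only): the matched filter is the Green's function convolved with the source time function; the traveltime anomaly is the lag of the peak cross-correlation, and the amplitude anomaly is the scale factor minimizing the RMS misfit.

```python
# Sketch: build a matched filter, then measure delay and amplitude of a
# shifted, scaled "observed" trace. All signals are made-up stand-ins.
import numpy as np

rng = np.random.default_rng(2)
stf = np.exp(-np.linspace(-2, 2, 40)**2)                       # source time function
green = rng.normal(size=400) * np.exp(-np.arange(400) / 60.0)  # toy Green's function

matched = np.convolve(green, stf)[:400]                        # matched filter
data = 1.5 * np.roll(matched, 7) + rng.normal(scale=0.05, size=400)

xcorr = np.correlate(data, matched, mode='full')
delay = np.argmax(xcorr) - (matched.size - 1)                  # lag in samples
aligned = np.roll(matched, delay)
amp = np.dot(data, aligned) / np.dot(aligned, aligned)         # least-squares amplitude
print(delay, amp)                                              # ~7, ~1.5
```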
A comparison of two spectral approaches for computing the Earth response to surface loads
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2000
E. Le Meur
Summary: When predicting the deformation of the Earth under surface loads, most models follow the same methodology, consisting of producing a unit response that is then convolved with the appropriate surface forcing. These models take into account the whole Earth and are generally spherical, computing a unit response in terms of its spherical harmonic representation through the use of load Love numbers. From these Love numbers, the spatial pattern of the bedrock response to any particular scenario can be obtained. Two different methods are discussed here. The first, which is related to convolution in the classical sense, appears to be very sensitive to the total number of degrees used when summing these Love numbers in the harmonic series in order to obtain the corresponding Green's function. We will see from the spectral properties of these Love numbers how to compute these series correctly and how, consequently, to eliminate in practice the sensitivity to the number of degrees (Gibbs phenomena). The second method relies on a preliminary harmonic decomposition of the load, which reduces the convolution to a simple product within Fourier space. The convergence properties of the resulting Fourier series make this approach less sensitive to any harmonic cut-off. However, this method can be more or less computationally expensive depending on the loading characteristics. This paper describes these two methods, shows how to eliminate Gibbs phenomena in the Green's function method, and shows how the load characteristics as well as the available computational resources can be determining factors in selecting one approach. [source]

Seismic interferometry, intrinsic losses and Q-estimation
GEOPHYSICAL PROSPECTING, Issue 3 2010
Deyan Draganov
Abstract: Seismic interferometry is the process of generating new seismic traces from the cross-correlation, convolution or deconvolution of existing traces. One of the starting assumptions for deriving the representations for seismic interferometry by cross-correlation is that there is no intrinsic loss in the medium where the recordings are performed. In practice, this condition is not always met. Here, we investigate the effect of intrinsic losses in the medium on the results retrieved from seismic interferometry by cross-correlation. First, we show results from a laboratory experiment in a homogeneous sand chamber with strong losses. Then, using numerical modelling results, we show that in the case of a lossy medium ghost reflections will appear in the cross-correlation result when internal multiple scattering occurs. We also show that if loss compensation is applied to the traces to be correlated, these ghosts in the retrieved result can be weakened, can disappear, or can reverse their polarity. This compensation process can be used to estimate the quality factor in the medium. [source]

Synthesis of a seismic virtual reflector
GEOPHYSICAL PROSPECTING, Issue 3 2010
Flavio Poletto
Abstract: We describe a method to process the seismic data generated by a plurality of sources and registered by an appropriate distribution of receivers, which provides new seismic signals as if in the position of the receivers (or sources) there were an ideal reflector, even if this reflector is not present there. The data provided by this method represent the signals of a virtual reflector. The proposed algorithm performs the convolution and the subsequent sum of the real traces without needing subsurface model information. The approach can be used in combination with seismic interferometry to separate wavefields and process the reflection events. The application is described with synthetic examples, including stationary phase analysis, and with real data in which the virtual reflector signal can be appreciated. [source]

Assessing the impact of mixing assumptions on the estimation of streamwater mean residence time
HYDROLOGICAL PROCESSES, Issue 12 2010
Fabrizio Fenicia
Abstract: Catchment streamwater mean residence time (Tmr) is an important descriptor of hydrological systems, reflecting their storage and flow pathway properties. Tmr is typically inferred from the composition of stable water isotopes (oxygen-18 and deuterium) in observed rainfall and discharge. Currently, lumped parameter models based on convolution and sine-wave functions are usually used for tracer simulation. These traditional models are based on simplistic assumptions that are often known to be unrealistic, in particular steady flow conditions, linearity, complete mixing and others. However, the effect of these assumptions on Tmr estimation is seldom evaluated. In this article, we build a conceptual model that overcomes several assumptions made in traditional mixing models. Using data from the experimental Maimai catchment (New Zealand), we compare a complete-mixing (CM) model, where rainfall water is assumed to mix completely and instantaneously with the total catchment storage, with a partial-mixing (PM) model, where the tracer input is divided between an 'active' and a 'dead' storage compartment. We show that the inferred distribution of Tmr is strongly dependent on the treatment of mixing processes and flow pathways. The CM model returns estimates of Tmr that are well identifiable and are in general agreement with previous studies of the Maimai catchment. On the other hand, the PM model, motivated by a priori catchment insights, provides Tmr estimates that appear exceedingly large and highly uncertain. This suggests that water isotope composition measurements in rainfall and discharge alone may be insufficient for inferring Tmr. Given our model hypothesis, we also analysed the effect of different controls on Tmr. It was found that Tmr is controlled primarily by the storage properties of the catchment, rather than by the speed of streamflow response. This provides guidance on the type of information necessary to improve Tmr estimation. Copyright © 2010 John Wiley & Sons, Ltd. [source]
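A sketch of the classical lumped-parameter approach that the paper above builds on (my illustration, hypothetical numbers): the stream tracer signal is the convolution of the rainfall tracer input with a transit-time distribution whose mean is the mean residence time.

```python
# Sketch: convolve a synthetic rainfall delta-18O series with an exponential
# transit-time density of mean Tmr. Tmr and the input series are made up.
import numpy as np

dt, n_days = 1.0, 2000
t = np.arange(n_days) * dt
Tmr = 120.0                                   # assumed mean residence time (days)
ttd = np.exp(-t / Tmr) / Tmr                  # exponential transit-time density

rng = np.random.default_rng(3)
rain_iso = -6.0 + 3.0 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 1.0, n_days)

stream_iso = np.convolve(rain_iso, ttd)[:n_days] * dt  # damped, lagged signal
print(rain_iso.std(), stream_iso[365:].std())          # stream cycle is much smoother
```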
Discrete singular convolution methodology for free vibration and stability analyses of arbitrary straight-sided quadrilateral plates
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 11 2008
Ömer Civalek
Abstract: A new discrete singular convolution (DSC) method is developed for vibration, buckling and static analyses of arbitrary straight-sided quadrilateral plates. The straight-sided quadrilateral domain is mapped into a square domain in the computational space using a four-node element. By using this geometric transformation, the governing equations and boundary conditions of the plate are transformed from the physical domain into a square computational domain. Numerical examples illustrating the accuracy and convergence of the DSC method for straight-sided quadrilateral thin plates such as rectangular, skew, trapezoidal and rhombic plates are presented. The results obtained by the DSC method were compared with those obtained by other numerical and analytical methods. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Vibration analysis of conical panels using the method of discrete singular convolution
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 3 2008
Ömer Civalek
Abstract: A discrete singular convolution (DSC) free vibration analysis of conical panels is presented. The regularized Shannon delta kernel (RSK) is selected as the singular convolution to illustrate the present algorithm. In the proposed approach, the derivatives in both the governing equations and the boundary conditions are discretized by the method of DSC. Effects of boundary conditions, vertex and subtended angle on the frequencies of the conical panel are investigated. The effect of the circumferential node number on the vibrational behaviour of the panel is also analysed. The obtained results are compared with those of other numerical methods. Numerical results indicate that the DSC is a simple and reliable method for vibration analysis of conical panels. Copyright © 2006 John Wiley & Sons, Ltd. [source]

A fast multi-level convolution boundary element method for transient diffusion problems
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 14 2005
C.-H. Wang
Abstract: A new algorithm is developed to evaluate the time convolution integrals that are associated with boundary element methods (BEM) for transient diffusion. This approach, which is based upon the multi-level multi-integration concepts of Brandt and Lubrecht, provides a fast, accurate and memory-efficient time-domain method for this entire class of problems. Conventional BEM approaches result in operation counts of order O(N^2) for the discrete time convolution over N time steps. Here we focus on the formulation for linear problems of transient heat diffusion and demonstrate reduced computational complexity to order O(N^(3/2)) for three two-dimensional model problems using the multi-level convolution BEM. Memory requirements are also significantly reduced, while maintaining the same level of accuracy as the conventional time-domain BEM approach. Copyright © 2005 John Wiley & Sons, Ltd. [source]
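For context, a sketch of the baseline cost that such fast schemes attack (an illustrative scalar version written by me; the paper's multi-level algorithm is not reproduced here): conventional time-domain BEM evaluates, at every step, a sum over the full history, giving O(N^2) work overall.

```python
# Sketch: the O(N^2) discrete time convolution with a decaying kernel,
# and the same result obtained in O(N log N) via FFT-based convolution.
import numpy as np

N = 1000
kernel = 1.0 / np.sqrt(np.arange(1, N + 1))       # decaying diffusion-like kernel
source = np.random.default_rng(4).normal(size=N)

history = np.zeros(N)
for n in range(N):                                # O(N) steps ...
    history[n] = kernel[:n + 1] @ source[n::-1]   # ... each summing O(n) terms

fast = np.convolve(source, kernel)[:N]            # same numbers, much cheaper
print(np.allclose(history, fast))                 # True
```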
DSC-Ritz method for high-mode frequency analysis of thick shallow shells
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2005
C. W. Lim
Abstract: This paper addresses a challenging problem in computational mechanics: the analysis of thick shallow shells vibrating at high modes. Existing methods encounter significant difficulties for such a problem due to numerical instability. A new numerical approach, the DSC-Ritz method, is developed by taking advantage of both the discrete singular convolution (DSC) wavelet kernels of the Dirichlet type and the Ritz method for the numerical solution of thick shells with all possible combinations of commonly occurring boundary conditions. As wavelets are localized in both the frequency and coordinate domains, they give rise to numerical schemes with optimal accuracy, stability and flexibility. Numerical examples are considered for Mindlin plates and shells with various edge supports. Benchmark solutions are obtained and analyzed in detail. Experimental results validate the convergence, stability, accuracy and reliability of the proposed approach. In particular, with a reasonable number of grid points, the new DSC-Ritz method is capable of producing highly accurate numerical results for high-mode vibration frequencies, which are hitherto unavailable to engineers. Moreover, the capability of predicting high modes allows us to reveal a discrepancy between natural higher-order vibration modes of a Mindlin plate and those calculated via an analytical relationship linking Kirchhoff and Mindlin plates. Copyright © 2004 John Wiley & Sons, Ltd. [source]
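Several of the DSC abstracts above rest on the same kernel construction; a sketch of it (the standard regularized Shannon kernel, coded by me for illustration, not taken from these papers): a derivative on a uniform grid becomes a convolution with the differentiated, Gaussian-regularized kernel.

```python
# Sketch: first derivative via the regularized Shannon delta kernel,
# d/dx [ sin(pi x/dx)/(pi x) * exp(-x^2/(2 sigma^2)) ], sampled and
# applied as a discrete convolution. half_width and sigma_factor are
# typical illustrative choices.
import numpy as np

def rsk_derivative(f, dx, half_width=30, sigma_factor=3.0):
    sigma = sigma_factor * dx
    x = np.arange(-half_width, half_width + 1) * dx
    w = np.zeros_like(x)
    nz = x != 0
    xn = x[nz]
    s, c = np.sin(np.pi * xn / dx), np.cos(np.pi * xn / dx)
    g = np.exp(-xn**2 / (2 * sigma**2))
    w[nz] = (c / (dx * xn) - s / (np.pi * xn**2) - s / (np.pi * sigma**2)) * g * dx
    return np.convolve(f, w, mode='same')     # kernel value at x = 0 is 0

xs = np.linspace(0.0, 2 * np.pi, 257)
df = rsk_derivative(np.sin(xs), xs[1] - xs[0])
print(np.abs(df - np.cos(xs))[30:-30].max())  # tiny away from the edges
```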
Compression of time-generated matrices in two-dimensional time-domain elastodynamic BEM analysis
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2004
D. Soares Jr
Abstract: This paper describes a new scheme to improve the efficiency of time-domain BEM algorithms. The discussion is focused on the two-dimensional elastodynamic formulation; however, the ideas presented apply equally to any step-by-step convolution-based algorithm whose kernels decay as time increases. The algorithm presented interpolates the time-domain matrices generated along the time-stepping process, for time steps sufficiently far from the current time. Two interpolation procedures are considered here (a large number of alternative approaches is possible): Chebyshev-Lagrange polynomials and linear. A criterion to indicate the discrete time at which interpolation should start is proposed. Two numerical examples and conclusions are presented at the end of the paper. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Discrete singular convolution and its application to the analysis of plates with internal supports. Part 1: Theory and algorithm
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2002
Abstract: This paper presents a novel computational approach, the discrete singular convolution (DSC) algorithm, for analysing plate structures. The basic philosophy behind the DSC algorithm for the approximation of functions and their derivatives is studied. Approximations to the delta distribution are constructed as either bandlimited reproducing kernels or approximate reproducing kernels. Unified features of the DSC algorithm for solving differential equations are explored. It is demonstrated that different methods of implementation for the present algorithm, such as global, local, Galerkin, collocation, and finite difference, can be deduced from a single starting point. The use of the algorithm for the vibration analysis of plates with internal supports is discussed. Detailed formulation is given to the treatment of different plate boundary conditions, including simply supported, elastically supported and clamped edges. This work paves the way for applying the DSC approach in the following paper to plates with complex support conditions, which have not been fully addressed in the literature yet. Copyright © 2002 John Wiley & Sons, Ltd. [source]

A comparison of modern data analysis methods for X-ray and neutron specular reflectivity data
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 5 2007
A. Van Der Lee
Data analysis methods for specular X-ray or neutron reflectivity are compared. The methods that have been developed over the years can be classified into different types. The so-called classical methods are based on Parratt's or Abelès' formalism and rely on minimization using more or less evolved Levenberg-Marquardt or simplex routines. A second class uses the same formalism, but optimization is carried out using simulated annealing or genetic algorithms. A third class uses alternative expressions for the reflectivity, such as the Born approximation or distorted Born approximation; this makes it easier to invert the specular data directly, coupled or not with classical least-squares or iterative methods using over-relaxation or charge-flipping techniques. A fourth class uses mathematical methods founded in scattering theory to determine the phase of the scattered waves, but has to be coupled in certain cases with (magnetic) reference layers. The strengths and weaknesses of a number of these methods are evaluated using simulated and experimental data. It is shown that genetic algorithms are by far superior to traditional and advanced least-squares methods, but that they fail when the layers are less well defined. In the latter case, the methods of the third or fourth class are the better choice, because they permit at least a first estimate of the density profile to be obtained that can be refined using the classical methods of the first class. It is also shown that different analysis programs may calculate different reflectivities for a similar chemical system. One reason for this is that the representation of the layers is described either by chemical composition or by scattering length or electronic densities, between which the conversion of the absorptive part is not straightforward. A second important reason is that the routines that describe the convolution with the instrumental resolution function are not identical. [source]

Calculation of the instrumental function in X-ray powder diffraction
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3 2006
A. D. Zuev
A new method for calculating the total instrumental function of a conventional Bragg-Brentano diffractometer has been developed. The method is based on an exact analytical solution, derived from diffraction optics, for the contribution of each incident ray to the intensity registered by a detector of limited size. Because an incident ray is determined by two points (one related to the source of the X-rays and the other to the sample), the effects of the coupling of specific instrumental functions, for example the equatorial and axial divergence instrumental functions, are treated together automatically. The intensity at any arbitrary point of the total instrumental profile is calculated by integrating the intensities over two simple rectangular regions: possible point positions on the source and possible point positions on the sample. The effects of Soller slits, a monochromator and sample absorption can also be taken into account.
The main difference between the proposed method and the convolutive approach (in which the line profile is synthesized by convolving the specific instrumental functions) lies in the fact that the former provides an exact solution for the total instrumental function (exact solutions for specific instrumental functions can be obtained as special cases), whereas the latter is based on approximations of the specific instrumental functions, and their coupling effects after the convolution are unknown. Unlike the ray-tracing method, in the proposed method the diffracted rays contributing to the registered intensity are considered as combined (part of the diffracted cone) and, correspondingly, the contribution to the instrumental line profile is obtained analytically for this part of the diffracted cone and not for a diffracted unit ray as in ray-tracing simulations. [source]

A deconvolution method for the reconstruction of underlying profiles measured using large sampling volumes
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3 2006
Y.-S. Xiong
A deconvolution method for diffraction measurements based on a statistical learning technique is presented. The radial-basis function network is used to model the underlying function. A full probabilistic description of the measurement is introduced, incorporating a Bayesian algorithm based on an evidence framework. This method allows predictions of both the convolution and the underlying function from noisy measurements. In addition, the method can provide an estimation of the prediction uncertainty, i.e. error bars. In order to assess the capability of the method, the model was tested first on synthetic data of controllable quality and sparsity; it is shown that the method works very well, even for inaccurately measured (noisy) data. Subsequently, the deconvolution method was applied to real data sets typical of neutron and synchrotron residual stress (strain) data, recovering features not immediately evident in the large-gauge-volume measurements themselves. Finally, the extent to which short-period components are lost as a function of the measurement gauge dimensions is discussed. The results seem to indicate that for a triangular sensor-sensitivity function, measurements are best made with a gauge of a width approximately equal to the wavelength of the expected strain variation, but with a significant level of overlap (~80%) between successive points; this is contrary to current practice for neutron strain measurements. [source]

Analysis of scattering from polydisperse structure using Mellin convolution
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 2 2006
Norbert Stribeck
This study extends a mathematical concept for the description of heterogeneity and polydispersity in the structure of materials to multiple dimensions. In one dimension, the description of heterogeneity by means of Mellin convolution is well known. In several papers by the author, the method has been applied to the analysis of data from materials with one-dimensional structure (layer stacks or fibrils along their principal axis). According to this concept, heterogeneous structures built from polydisperse ensembles of structural units are advantageously described by the Mellin convolution of a representative template structure with the size distribution of the templates. Hence, the polydisperse ensemble of similar structural units is generated by superposition of dilated templates. This approach is particularly attractive considering the advantageous mathematical properties enjoyed by the Mellin convolution. Thus, the average particle size, and the width and skewness of the particle size distribution, can be determined from scattering data without the need to model the size distributions themselves. The present theoretical treatment demonstrates that the concept is generally extensible to dilation in multiple dimensions. Moreover, in an analogous manner, a representative cluster of correlated particles (e.g. layer stacks or microfibrils) can be considered as a template on a higher level. Polydispersity of such clusters is, again, described by subjecting the template structure to the generalized Mellin convolution. The proposed theory leads to a simple pathway for the quantitative determination of polydispersity and heterogeneity parameters. Consistency with the established theoretical approach of polydispersity in scattering theory is demonstrated. The method is applied to the best advantage in the field of soft condensed matter when anisotropic nanostructured materials are to be characterized by means of small-angle scattering (SAXS, USAXS, SANS). [source]
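A numerical sketch of the operation itself (my illustration, not the paper's treatment): the Mellin convolution h(x) = ∫ t(x/s) p(s) ds/s superposes dilated copies of a unit template t weighted by a size distribution p, and Mellin moments multiply, so for instance the first moments obey M1(h) = M1(t) * M1(p) without any model of p.

```python
# Sketch: Mellin convolution of a box template with a log-normal size
# distribution by direct quadrature; verify that first moments multiply.
import numpy as np

def template(u):                                   # unit-size box template
    return np.where(np.abs(u - 1.0) < 0.5, 1.0, 0.0)

s = np.linspace(0.05, 10.0, 4000)                  # dilation (size) axis
ds = s[1] - s[0]
p = np.exp(-0.5 * (np.log(s) / 0.3)**2) / (s * 0.3 * np.sqrt(2 * np.pi))

x = np.linspace(0.01, 40.0, 8000)
dx = x[1] - x[0]
h = np.array([np.sum(template(xi / s) * p / s) * ds for xi in x])

m1 = lambda f, grid, d: np.sum(grid * f) * d       # first moment
print(m1(h, x, dx), m1(template(x), x, dx) * m1(p, s, ds))   # nearly equal
```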
Symmetrization of diffraction peak profiles measured with a high-resolution synchrotron X-ray powder diffractometer
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 1 2006
H. Hibino
The asymmetry of diffraction peak profiles observed with a high-resolution synchrotron powder X-ray diffractometer has been successfully removed by a double deconvolution method. In the first step, the asymmetry caused by the axial divergence aberration of the diffractometer is removed by a whole-pattern deconvolution method based on an a priori theoretical model for the aberration. In the second step, the residual asymmetry, the origin of which can be ascribed to the aberrations of the beamline optics, is also removed by a whole-pattern deconvolution method, based on an empirical model derived from the analysis of experimental diffraction peak profiles of a standard Si powder (NIST SRM640b). The beamline aberration has been modelled by the convolution of a pseudo-Voigt or Voigt function with an exponential distribution function. It has been found that the angular dependence of the asymmetry parameter in the exponential function is almost proportional to tan θ, which supports the idea that the residual asymmetry should be ascribed mainly to the intrinsic asymmetry in the spectroscopic distribution of the source X-rays supplied by the beamline optics of the synchrotron facility. Recently developed procedures of whole-pattern deconvolution have been improved to treat the singularity of the instrumental function in the measured angular range. Formulae for the whole-pattern deconvolution based on the Williamson-Hall-type dependence of the width parameter of the instrumental function have also been developed. The method was applied to the diffraction intensity data of a standard ZnO powder sample (NIST SRM674) measured with a high-resolution powder diffractometer on beamline BL4B2 at the Photon Factory. The structure parameters of ZnO were refined from the integrated peak intensities, which were extracted by an individual profile fitting method applying symmetric profile models. The refined structure parameters coincide fairly well with those obtained from single-crystal data. [source]
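A sketch of the asymmetric peak model above (illustrative parameters of my choosing; a Lorentzian stands in for the pseudo-Voigt for brevity): convolving a symmetric profile with a one-sided exponential shifts intensity to one side of the peak.

```python
# Sketch: asymmetric line profile = Lorentzian convolved with a one-sided
# exponential; the centroid moves from 0 to roughly the exponential mean.
import numpy as np

two_theta = np.linspace(-2, 2, 4001)            # degrees, relative to peak
d = two_theta[1] - two_theta[0]
gamma, alpha = 0.05, 0.15                       # Lorentzian HWHM, exp. decay

lorentz = (gamma / np.pi) / (two_theta**2 + gamma**2)
expo = np.where(two_theta >= 0, np.exp(-two_theta / alpha) / alpha, 0.0)

peak = np.convolve(lorentz, expo, mode='same') * d   # asymmetric profile
centroid = np.sum(two_theta * peak) / np.sum(peak)
print(centroid)                                 # ~alpha, not 0
```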
Deconvolution of instrumental aberrations for synchrotron powder X-ray diffractometry
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 2 2003
T. Ida
A method to remove the effects of instrumental aberrations from the whole powder diffraction pattern measured with a high-resolution synchrotron powder diffractometer is presented. Two types of asymmetry in the peak profiles, caused by (i) the axial-divergence aberration of the diffractometer (diffractometer aberration) and (ii) the aberration of the monochromator and focusing optics on the beamline (beamline aberration), are both taken into account. The method is based on whole-pattern deconvolution by a Fourier technique combined with an abscissa-scale transformation appropriate for each instrumental aberration. The experimental powder diffraction data of LaB6 (NIST SRM660) measured on beamline BL-4B2 at the Photon Factory in Tsukuba have been analysed by the method. The formula of the scale transformation for the diffractometer aberration has been derived a priori from the instrumental function with geometric parameters of the optics. The strongly deformed experimental peak profiles at low diffraction angles have been transformed to sharp peak profiles with less asymmetry by the deconvolution of the diffractometer aberration. The peak profiles obtained by the deconvolution of the diffractometer aberration were modelled by an asymmetric model profile function synthesized by the convolution of the extended pseudo-Voigt function and an asymmetric component function with an empirical asymmetry parameter, which were linearly dependent on the diffraction angle. Fairly symmetric peak profiles have been obtained by further deconvolution of the empirically determined asymmetric component of the beamline aberration. [source]

Instrument line-profile synthesis in high-resolution synchrotron powder diffraction
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 2 2003
O. Masson
An accurate method for synthesizing the instrumental line profile of high-resolution synchrotron powder diffraction instruments is presented. It is shown that the instrumental profile can be modelled by the convolution of four physical aberration functions: the equatorial intensity distribution, the monochromator and analyser transfer functions, and the axial divergence aberration function. Moreover, each equatorial aberration is related to an angle-independent function by a scale transform factor. The principles of the instrument line-profile calculation are general. They are applied in the case of the angle-dispersive powder X-ray diffraction beamline BM16 at the ESRF. The effects of each optical element on the overall instrument profile are discussed and the importance of the quality of the different optical elements of the instrument is emphasized. Finally, it is shown that the high resolution combined with precise modelling of the instrument profile shape gives access to particle sizes as large as 3 µm. [source]
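A toy version of the profile synthesis described above (the component shapes are made up by me and stand in for the real aberration functions): the instrument profile is built by successively convolving the individual aberration functions.

```python
# Sketch: instrument line profile as the convolution of four component
# functions, each renormalized to unit area after convolution.
import numpy as np

x = np.linspace(-0.2, 0.2, 2001)      # degrees
d = x[1] - x[0]

def norm(f):
    return f / (np.sum(f) * d)        # enforce unit area

equatorial = norm(np.where(np.abs(x) < 0.02, 1.0, 0.0))           # slit: top hat
mono = norm(1.0 / (x**2 + 0.005**2))                               # Lorentzian-like
analyser = norm(1.0 / (x**2 + 0.003**2))
axial = norm(np.where((x > -0.05) & (x < 0), 1.0 / np.sqrt(-x), 0.0))  # one-sided

profile = equatorial
for g in (mono, analyser, axial):
    profile = norm(np.convolve(profile, g, mode='same') * d)
print(np.sum(profile) * d)            # 1.0: a normalized, asymmetric profile
```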
Retrieval of spectral and dynamic properties from two-dimensional infrared pump-probe experiments
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 9 2008
Riccardo Chelli
Abstract: We have developed a fitting algorithm able to extract the spectral and dynamic properties of a three-level oscillator from a two-dimensional infrared (2D-IR) spectrum detected in time-resolved nonlinear experiments. These properties range from the frequencies of the ground-to-first and first-to-second vibrational transitions (and hence the anharmonicity) to the frequency-fluctuation correlation function. The latter is represented through a general expression that allows one to approach the various modeling strategies proposed in the literature. The model is based on the Kubo picture of stochastic fluctuations of the transition frequency as a result of perturbations by a fluctuating surrounding. To account for the line-shape broadening due to the pump pulse spectral width in double-resonance measurements, we supply the fitting algorithm with the option to perform a convolution of the spectral signal with a Lorentzian function in the pump-frequency dimension. The algorithm is tested here on 2D-IR pump-probe spectra of a Gly-Ala dipeptide recorded at various pump-probe delay times. Speedup benchmarks have been performed on a small Beowulf cluster. The program is written in FORTRAN for both serial and parallel architectures and is available free of charge to the interested reader. © 2008 Wiley Periodicals, Inc. J Comput Chem, 2008 [source]
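A sketch of the Kubo picture invoked above (textbook line-shape theory, not the authors' FORTRAN code; delta and tau_c are illustrative values): for a transition frequency with correlation function C(t) = Delta^2 exp(-t/tau_c), the line-shape function g(t) gives the absorption spectrum as the Fourier transform of exp(-g(t)).

```python
# Sketch: Kubo line shape. In the slow-modulation limit (delta*tau_c >> 1)
# the spectrum is Gaussian with FWHM ~ 2.355*delta.
import numpy as np

delta, tau_c = 5.0, 1.0                        # fluctuation amplitude and time
t = np.linspace(0.0, 20.0, 4096)
g = (delta * tau_c)**2 * (np.exp(-t / tau_c) + t / tau_c - 1.0)

spec = np.fft.fftshift(np.fft.fft(np.exp(-g))).real
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, t[1] - t[0]))

half = np.where(spec >= spec.max() / 2)[0]
print(omega[half[-1]] - omega[half[0]])        # ~2.355 * delta here
```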
Design and use of multi-affinity surfaces in biomolecular interaction analysis-mass spectrometry (BIA/MS): a step toward the design of SPR/MS arrays
JOURNAL OF MOLECULAR RECOGNITION, Issue 1 2003
Dobrin Nedelkov
Abstract: The feasibility of multi-affinity ligand surfaces in biomolecular interaction analysis-mass spectrometry (BIA/MS) was explored in this work. Multi-protein affinity surfaces were constructed by utilizing antibodies to beta-2-microglobulin, cystatin C, retinol binding protein, transthyretin, serum amyloid P and C-reactive protein. In the initial experiments, all six antibodies were immobilized on a single site (flow cell) on the sensor chip surface, followed by verification of the surface activity via separate injections of purified proteins. After an injection of a diluted human plasma aliquot over the antibody-derivatized surfaces, and subsequent MALDI-TOF MS analysis, signals representing five of the six targeted proteins were observed in the mass spectra. Further, to reduce the complexity of the spectra, the six proteins were divided into two groups (according to their molecular weight) and immobilized on two separate surfaces on a single sensor chip, followed by an injection of a human plasma aliquot. The resulting mass spectra showed signals from all proteins. Also, the convolution resulting from the multiply charged ion species was eliminated. The ability to create such multi-affinity surfaces indicates that smaller-size ligand areas/spots can be employed in BIA/MS protein interaction screening experiments, and opens up possibilities for the construction of novel multi-arrayed SPR-MS platforms and methods for high-throughput parallel protein interaction investigations. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Alterations in Brain Serotonin Synthesis in Male Alcoholics Measured Using Positron Emission Tomography
ALCOHOLISM, Issue 2 2009
Masami Nishikawa
Background: A consistent association between low endogenous 5-HT function and high alcohol preference has been observed, and a number of serotonergic manipulations (uptake blockers, agonists) alter alcohol consumption in animals and humans. Studies have also shown an inverse relationship between alcohol use and cerebrospinal fluid levels of serotonin metabolites, suggesting that chronic alcohol consumption produces alterations in serotonin synthesis or release. Methods: The objective of the study was to characterize regional brain serotonin synthesis in nondepressed chronic alcoholics at treatment entry in comparison to normal nonalcoholic controls using PET and the tracer α-[11C]-methyl-L-tryptophan. Results: Comparisons of the alcoholics and controls by SPM found significant differences in the rate of serotonin synthesis between the groups. Serotonin synthesis was significantly lower among alcoholics in Brodmann Areas (BA) 9, 10, and 32. However, serotonin synthesis among the alcoholic group was significantly higher than in controls at BA19 in the occipital lobe and around the transverse temporal convolution in the left superior temporal gyrus (BA41). In addition, there were correlations between regional serotonin synthesis and a quantity-frequency (QF) measure of alcohol consumption. Regions showing a significant negative correlation with QF included the bilateral rectus gyri (BA11) in the orbitofrontal area, the bilateral medial frontal area (BA6), and the right amygdala. Conclusions: Current alcoholism is associated with serotonergic abnormalities in brain regions that are known to be involved in planning, judgment, self-control, and emotional regulation. [source]

Numerical evaluation of pressure from experimentally measured film thickness in EHL point contact
LUBRICATION SCIENCE, Issue 1 2008
Michal Vaverka
Abstract: This paper is concerned with elastohydrodynamic lubrication, especially the determination of lubricant film thickness and contact pressure within a point contact of the friction surfaces of machine parts. A new solution technique for the numerical determination of contact pressure is introduced. Direct measurement of contact pressure is very difficult; hence, input data of lubricant film thickness obtained from an experiment based on colorimetric interferometry are used for the calculation of pressure using the inverse elasticity theory. The algorithm is enhanced by convolution in order to increase calculation speed. The approach described in this contribution gives reliable results for smooth contacts and will in the future be extended to enable the study of contacts between friction surfaces with asperities. Copyright © 2007 John Wiley & Sons, Ltd. [source]

An efficient gridding reconstruction method for multishot non-Cartesian imaging with correction of off-resonance artifacts
MAGNETIC RESONANCE IN MEDICINE, Issue 6 2010
Yuguang Meng
Abstract: An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, which is especially tailored for multiple-shot non-Cartesian imaging. The novelty of the method lies in the fact that the transformation matrix for gridding (T) is constructed as the convolution of two sparse matrices, of which the former is determined by the sampling interval and the spatial distribution of the off-resonance frequencies, and the latter by the sampling trajectory and the target grid in Cartesian space. The resulting T matrix is also sparse and can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More importantly, the proposed method allows a tradeoff between the accuracy and the computation time of reconstruction, making it possible to customize the use of such a method in different applications. The performance of the proposed method was demonstrated by numerical simulation and multiple-shot spiral imaging of rat brain at 4.7 T. Magn Reson Med, 2010. © 2010 Wiley-Liss, Inc. [source]
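A generic sketch of the iterative-gridding idea in the last abstract (a 1D analogue of my own construction, not the paper's algorithm): a sparse interpolation matrix T maps a Cartesian grid to non-uniform sample locations, and conjugate gradients solve the regularized normal equations.

```python
# Sketch: sparse gridding matrix (linear interpolation kernel) plus CG
# reconstruction of a grid signal from noisy non-uniform samples.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

n_grid, n_samp = 128, 300
rng = np.random.default_rng(5)
loc = rng.uniform(0, n_grid - 1, n_samp)        # non-Cartesian sample positions

i0 = np.floor(loc).astype(int)                  # two nonzeros per sample row
frac = loc - i0
rows = np.repeat(np.arange(n_samp), 2)
cols = np.stack([i0, i0 + 1], axis=1).ravel()
vals = np.stack([1 - frac, frac], axis=1).ravel()
T = sparse.csr_matrix((vals, (rows, cols)), shape=(n_samp, n_grid))

truth = np.sin(2 * np.pi * np.arange(n_grid) / 32.0)
data = T @ truth + rng.normal(scale=0.01, size=n_samp)

A = T.T @ T + 1e-3 * sparse.identity(n_grid)    # regularized normal equations
x, info = cg(A, T.T @ data)
print(info, np.abs(x - truth).max())            # 0 (converged), small error
```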