Good Convergence (good + convergence)
Selected Abstracts

String Fit: a new structurally oriented X-ray and neutron reflectivity evaluation technique
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3 2001
Erich Politsch

A novel method for the analysis of neutron and X-ray reflectivity measurements is presented. In contrast to existing methods, the new data-fitting approach is structurally oriented and therefore requires only information about the chemical structure of the studied molecules, with no other ad hoc assumptions. Apart from the inversion of reflectivity into a scattering length density profile, the inversion of the scattering length density profile into a molecular arrangement is addressed systematically for non-trivial molecular conformations for the first time. This includes the calculation of structural characteristics, such as the radius of gyration or chain order parameters, from measured reflectograms. Another important option is the simultaneous evaluation of neutron and X-ray reflectograms of a given sample. For better convergence, especially in complex simultaneous evaluations, an effective extension of the usual least-squares deviation function is introduced. Different simulated molecular ensembles are used to illustrate the features of the new approach; typically, excellent agreement between the simulated starting and the final deduced data sets is achieved.

Comparing MEG and fMRI views to naming actions and objects
HUMAN BRAIN MAPPING, Issue 6 2009
Mia Liljeström

Abstract: Most neuroimaging studies are performed using one imaging method only, either functional magnetic resonance imaging (fMRI), electroencephalography (EEG), or magnetoencephalography (MEG). Information on both location and timing has been sought by recording fMRI and EEG simultaneously, or MEG and fMRI in separate sessions. Such approaches assume similar active areas whether detected via hemodynamic or electrophysiological signatures.
Direct comparisons, after independent analysis of the data from each imaging modality, have been conducted primarily on low-level sensory processing. Here, we report MEG (timing and location) and fMRI (location) results in 11 subjects who named pictures depicting an action or an object. The experimental design was exactly the same for the two imaging modalities. The MEG data were analyzed with two standard approaches: a set of equivalent current dipoles and a distributed minimum norm estimate. The fMRI blood-oxygen-level-dependent (BOLD) data were subjected to the usual random-effects contrast analysis. At the group level, MEG and fMRI data showed fairly good convergence, with both overall activation patterns and task effects localizing to comparable cortical regions. There were some systematic discrepancies, however, and the correspondence was less compelling in individual subjects. The present analysis should be helpful in reconciling results of fMRI and MEG studies on high-level cognitive functions. Hum Brain Mapp, 2009. © 2009 Wiley-Liss, Inc.

A node-based agglomeration AMG solver for linear elasticity in thin bodies
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 3 2009
Prasad S. Sumant

Abstract: This paper describes the development of an efficient and accurate algebraic multigrid finite element solver for linear elasticity problems in two-dimensional thin bodies. Such problems are commonly encountered in the analysis of thin-film devices in micro-electro-mechanical systems. An algebraic multigrid based on element interpolation is adopted and streamlined for the development of the proposed solver. A new node-based agglomeration scheme is proposed for computationally efficient, aggressive and yet effective generation of coarse grids.
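The idea of node-based agglomeration can be sketched in a few lines: each not-yet-aggregated node seeds a new coarse-grid aggregate that absorbs its unaggregated neighbours. The following is a minimal illustrative sketch assuming a mesh given as an adjacency list; it is not the paper's actual scheme, and all names here are hypothetical.

```python
# Greedy node-based agglomeration sketch (illustrative only, not the
# paper's algorithm): each unvisited node seeds an aggregate that
# absorbs its still-unaggregated neighbours.

def agglomerate(adjacency):
    """Map each node to a coarse-grid aggregate index."""
    aggregate = {}  # node -> aggregate id
    next_id = 0
    for seed in adjacency:
        if seed in aggregate:
            continue
        aggregate[seed] = next_id
        for nbr in adjacency[seed]:
            if nbr not in aggregate:
                aggregate[nbr] = next_id
        next_id += 1
    return aggregate

# A 2x3 structured grid: nodes 0..5, edges to horizontal/vertical neighbours.
mesh = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
        3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
agg = agglomerate(mesh)  # 6 fine nodes collapse into 3 aggregates
```

Each aggregate becomes one coarse-grid unknown, which is what makes the resulting intergrid transfer operators sparse.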
It is demonstrated that the use of an appropriate finite element discretization along with the proposed algebraic multigrid process preserves the rigid-body modes that are essential for good convergence of the multigrid solution. Several case studies are taken up to validate the approach. The proposed node-based agglomeration scheme is shown to yield sparse intergrid transfer operators, making the overall multigrid solution process very efficient. The proposed solver is found to work well even for Poisson's ratios above 0.4. Finally, an application of the proposed solver is demonstrated through a simulation of a micro-electro-mechanical switch. Copyright © 2008 John Wiley & Sons, Ltd.

A rapidly converging filtered-error algorithm for multichannel active noise control
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 7 2007
A. P. Berkhoff

Abstract: In this paper, a multichannel adaptive control algorithm is described which has good convergence properties while having relatively small computational complexity, similar to that of the filtered-error algorithm. To obtain these properties, the algorithm preprocesses the actuator signals with a stable and causal inverse of the minimum-phase part of the transfer path between actuators and error sensors, the secondary path. This algorithm is known in the literature as the postconditioned filtered-error algorithm; it improves the convergence rate in cases where the minimum-phase part of the secondary path increases the eigenvalue spread. However, the convergence rate of this algorithm suffers from delays in the adaptation path, because the adaptation rate has to be reduced for larger delays. The contribution of this paper is to modify the postconditioned filtered-error scheme in such a way that the adaptation rate can be set to a higher value.
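To see why secondary-path delay limits the adaptation rate, consider the simplest relative of these algorithms: a single-channel filtered-reference (filtered-x) LMS controller. The sketch below is a toy stand-in, not the paper's multichannel postconditioned filtered-error scheme; the paths and signals are invented for illustration.

```python
import numpy as np

# Toy single-channel filtered-x LMS active-noise-control loop
# (illustrative only). The secondary path is a pure 2-sample delay;
# it is this delay in the adaptation loop that caps the usable step
# size mu in filtered-error-type algorithms.

rng = np.random.default_rng(1)
n, taps, mu = 20000, 4, 0.005
P = np.array([0.0, 0.0, 0.5, -0.3, 0.2])  # primary path to the sensor
S = np.array([0.0, 0.0, 1.0])             # secondary path: 2-sample delay

x = rng.standard_normal(n)                # reference signal
d = np.convolve(x, P)[:n]                 # disturbance at the error sensor
xf = np.convolve(x, S)[:n]                # reference filtered by the secondary path

w = np.zeros(taps)                        # adaptive control filter
y = np.zeros(n)
e = np.zeros(n)
for k in range(taps, n):
    y[k] = w @ x[k - taps + 1:k + 1][::-1]          # control output
    e[k] = d[k] + y[k - 2]                          # sensor sees y through the delay
    w -= mu * e[k] * xf[k - taps + 1:k + 1][::-1]   # filtered-reference update
```

With this setup the filter converges to w ≈ [-0.5, 0.3, -0.2, 0] and the residual error nearly vanishes; increasing mu much further, or lengthening the delay in S, destabilizes the loop, which is the trade-off the paper's modification targets.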
Consequently, the scheme also provides good convergence when the system contains significant delays. Furthermore, a regularized extension of the scheme is given which can be used to limit the actuator signals. Copyright © 2006 John Wiley & Sons, Ltd.

Comparing Two Alternative Measures of General Personality in the Assessment of Psychopathy: A Test of the NEO PI-R and the MPQ
JOURNAL OF PERSONALITY, Issue 4 2009
Eric T. Gaughan

Abstract: This study examined the interrelations between two measures of personality, the Revised NEO Personality Inventory (NEO PI-R; P. T. Costa & R. R. McCrae, 1992) and the Multidimensional Personality Questionnaire (MPQ; Tellegen & Waller, 2008), and their relations with psychopathy in a sample of undergraduates. Results revealed good convergence between conceptually related personality traits; however, the NEO PI-R facets accounted for more variance in the MPQ subscales (mean R² = .49) than the MPQ subscales did in the NEO PI-R facets (mean R² = .35). Both accounted for substantial proportions of variance in psychopathy scores, although the NEO PI-R accounted for larger proportions and showed greater incremental validity when the broader domains of each measure were used; the differences decreased when the narrower facets/subscales were used. The results suggest that, although both measures assess psychopathy-related traits, the NEO PI-R provides a more complete description because it assesses interpersonal antagonism, a construct central to psychopathy.

The temperature of the intergalactic medium and the Compton y parameter
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2004
Pengjie Zhang

ABSTRACT: The thermal Sunyaev–Zel'dovich (SZ) effect directly probes the thermal energy of the Universe. Its precision modelling and future high-accuracy measurements will provide a powerful way to constrain the thermal history of the Universe.
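For reference, the Compton y parameter along a line of sight is the standard electron-pressure integral (a textbook definition, not specific to this paper):

```latex
y \;=\; \frac{\sigma_{\mathrm{T}}\, k_{\mathrm{B}}}{m_{\mathrm{e}} c^{2}} \int n_{\mathrm{e}}\, T_{\mathrm{e}} \,\mathrm{d}l
```

so the mean y parameter discussed below is an integral measure of the thermal energy of the ionized gas.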
In this paper, we focus on the precision modelling of the gas-density-weighted temperature and the mean SZ Compton y parameter. We run high-resolution adiabatic hydrodynamic simulations adopting the WMAP cosmology to study the temperature and density distribution of the intergalactic medium (IGM). To quantify possible simulation limitations, we run n = −1 and n = −2 self-similar simulations. Our analytical model is based on energy conservation and matter clustering and has no free parameters. Combining the simulations and the analytical model thus provides precision modelling of both quantities. We find that the simulated temperature probability distribution function and density-weighted temperature show good convergence. For the WMAP cosmology, our highest-resolution simulation (1024³ cells, 100 h⁻¹ Mpc box size) reliably simulates the density-weighted temperature with better than 10 per cent accuracy for z ≳ 0.5. Toward z = 0, the simulation mass-resolution effect becomes stronger and causes the simulated temperature to be slightly underestimated (by ∼20 per cent at z = 0). Since the mean y parameter is mainly contributed by the IGM at z ≳ 0.5, this simulation effect on it is no larger than ∼10 per cent. Furthermore, our analytical model is capable of correcting this artefact. It passes all tests against the self-similar and WMAP simulations and is able to predict both quantities to several per cent accuracy. For a low-matter-density ΛCDM cosmology, the present density-weighted temperature is 0.32 (σ₈/0.84)(Ωm/0.268) keV, which accounts for 10⁻⁸ of the critical cosmological density and 0.024 per cent of the cosmic microwave background (CMB) energy. The mean y parameter is 2.6 × 10⁻⁶ (σ₈/0.84)(Ωm/0.268). The current upper limit of y < 1.5 × 10⁻⁵ measured by FIRAS has already ruled out combinations of high σ₈ ≳ 1.1 and high Ωm ≳ 0.5.
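The quoted scaling can be checked with simple arithmetic, using only the numbers given in the abstract; note that the linear scaling is quoted around the best-fitting model, so it should only be trusted near the fiducial parameter values.

```python
# Evaluate the abstract's scaling relation for the mean Compton y
# parameter and compare it with the FIRAS upper limit. All numbers are
# taken directly from the abstract.

def mean_y(sigma8, omega_m):
    """ybar = 2.6e-6 (sigma8/0.84)(Omega_m/0.268).

    A local scaling around the best-fitting model; not reliable far
    from the fiducial parameter values.
    """
    return 2.6e-6 * (sigma8 / 0.84) * (omega_m / 0.268)

FIRAS_LIMIT = 1.5e-5

ybar = mean_y(0.84, 0.268)   # fiducial model: 2.6e-6
margin = FIRAS_LIMIT / ybar  # fiducial ybar sits well below the bound
```

The fiducial model sits a factor of roughly six below the FIRAS bound, which is why only rather extreme combinations of σ₈ and Ωm are excluded.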
Gravity wave drag estimation from global analyses using variational data assimilation principles. I: Theory and implementation
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 609 2005

Abstract: A novel technique to estimate gravity wave drag from global-scale analyses is presented. It is based on the principles of four-dimensional variational data assimilation, using a dynamical model of the middle atmosphere and its adjoint. The global analyses are treated as observations. A cost function that measures the mismatch between the model state and the observations is defined. The control variables are the components of the three-dimensional gravity wave drag field, so that minimization of the cost function gives the optimal gravity wave drag field. The minimization is performed using a conjugate gradient method, with the adjoint model used to calculate the gradient of the cost function. In this work, we present the theory behind the new technique and extensively evaluate its ability to estimate gravity wave drag using so-called twin experiments, in which the 'observations' are given by the evolution of the dynamical model with a prescribed gravity wave drag. The results show that the technique can estimate the prescribed gravity wave drag accurately. When the cost function is suitably defined, there is good convergence of the minimization scheme under realistic atmospheric conditions. We also show that the cost function gradient is well approximated taking into account only adiabatic processes. We note some limitations of the technique for estimating gravity wave drag in tropical regions if satellite temperature measurements are the only observational information available. Copyright © 2005 Royal Meteorological Society.
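The structure of such a variational estimate can be illustrated with a toy version: a quadratic cost J(g) = ½‖A g − y‖² whose gradient is supplied by the adjoint operator (here simply the transpose of a linear "forward model" A), minimized by conjugate gradients, with twin-experiment observations generated from a prescribed control vector. This is a cartoon of the method under those stated assumptions, not the paper's middle-atmosphere model.

```python
import numpy as np

# Toy 4D-Var-style twin experiment: a linear forward model A maps a
# "drag" vector g to observations, the adjoint A.T supplies the
# gradient of J(g) = 0.5 * ||A g - y||^2, and conjugate gradients
# minimize J by solving the normal equations A.T A g = A.T y.

rng = np.random.default_rng(42)
A = rng.standard_normal((40, 10))   # linear stand-in for the forward model
g_true = rng.standard_normal(10)    # prescribed "drag", as in a twin experiment
y = A @ g_true                      # perfect twin-experiment observations

def grad_J(g):
    """Adjoint-computed gradient of J(g) = 0.5 * ||A g - y||^2."""
    return A.T @ (A @ g - y)

def conjugate_gradient(g0, tol=1e-10, max_iter=200):
    g = g0.copy()
    r = -grad_J(g)                  # residual = negative gradient
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Hp = A.T @ (A @ p)          # Hessian-vector product
        alpha = (r @ r) / (p @ Hp)
        g += alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return g

g_est = conjugate_gradient(np.zeros(10))  # recovers g_true
```

Because the observations are noise-free and the model is linear, the minimizer recovers the prescribed control exactly, which is what a successful twin experiment demonstrates.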