Computational Speed



Selected Abstracts


Comparison of methods to model the gravitational gradients from topographic data bases

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2006
Christopher Jekeli
SUMMARY A number of methods have been developed over the last few decades to model the gravitational gradients using digital elevation data. All methods are based on second-order derivatives of the Newtonian mass integral for the gravitational potential. Foremost are algorithms that divide the topographic masses into prisms or more general polyhedra and sum the corresponding gradient contributions. Other methods are designed for computational speed and make use of the fast Fourier transform (FFT); they require a regular rectangular grid of data and yield gradients on the entire grid, but only at constant altitude. We add to these the ordinary numerical integration (in horizontal coordinates) of the gradient integrals. In total we compare two prism, two FFT and two ordinary numerical integration methods using 1″ elevation data in two topographic regimes (rough and moderate terrain). Prism methods depend on the type of finite elements that are generated from the elevation data; in particular, alternative triangulations can yield significant differences in the gradients (up to tens of Eötvös). The FFT methods depend on a series development of the topographic heights, requiring terms up to 14th order in rough terrain; and one popular method has significant bias errors (e.g. 13 Eötvös in the vertical–vertical gradient) embedded in its practical realization. The straightforward numerical integrations, whether on a rectangular or triangulated grid, yield sub-Eötvös differences in the gradients when compared to the other methods (except near the edges of the integration area), and they are as efficient computationally as the finite element methods. [source]
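
The prism contribution mentioned here has a well-known closed form: for a right rectangular prism, the vertical–vertical gradient reduces to an arctangent sum over the eight prism corners. The following Python sketch illustrates that summation; the corner-sign convention (z taken positive downwards), the density of 2670 kg/m3 and the two-prism "DEM" are illustrative assumptions of the sketch, not the paper's code or data.

    import numpy as np

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    EOTVOS = 1e9           # 1 Eotvos = 1e-9 s^-2, so multiply s^-2 by 1e9

    def prism_tzz(x, y, z, rho):
        """Tzz of a rectangular prism with corner coordinates x=(x1,x2),
        y=(y1,y2), z=(z1,z2) relative to the computation point at (0,0,0).
        Closed-form corner sum; sign convention assumes z positive downwards
        (flip the overall sign for a z-up convention)."""
        t = 0.0
        for i, xi in enumerate(x):
            for j, yj in enumerate(y):
                for k, zk in enumerate(z):
                    r = np.sqrt(xi**2 + yj**2 + zk**2)
                    sign = (-1.0) ** (i + j + k)
                    t += sign * np.arctan2(xi * yj, zk * r)
        return G * rho * t * EOTVOS   # result in Eotvos

    # toy "DEM": two prisms of topographic mass below the station
    dem = [((15.0, 45.0), (-15.0, 15.0), (100.0, 160.0)),
           ((-45.0, -15.0), (-15.0, 15.0), (100.0, 130.0))]
    tzz = sum(prism_tzz(x, y, z, rho=2670.0) for x, y, z in dem)
    print(f"Tzz = {tzz:.3f} E")

A full implementation would loop such prism contributions over every DEM cell within the integration radius, which is what makes triangulation and finite-element choices matter in the comparison above.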


FLEXMG: A new library of multigrid preconditioners for a spectral/finite element incompressible flow solver

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2010
M. Rasquin
Abstract A new library called FLEXMG has been developed for a spectral/finite element incompressible flow solver called SFELES. FLEXMG allows the use of various types of iterative solvers preconditioned by algebraic multigrid methods. Two families of algebraic multigrid preconditioners have been implemented, namely smooth aggregation-type and non-nested finite element-type. Unlike pure gridless multigrid, both of these families use the information contained in the initial fine mesh. A hierarchy of coarse meshes is also needed for the non-nested finite element-type multigrid so that our approaches can be considered as hybrid. Our aggregation-type multigrid is smoothed with either a constant or a linear least-square fitting function, whereas the non-nested finite element-type multigrid is already smooth by construction. All these multigrid preconditioners are tested as stand-alone solvers or coupled with a GMRES method. After analyzing the accuracy of the solutions obtained with our solvers on a typical test case in fluid mechanics, their performance in terms of convergence rate, computational speed and memory consumption is compared with the performance of a direct sparse LU solver as a reference. Finally, the importance of using smooth interpolation operators is also underlined in the study. Copyright © 2010 John Wiley & Sons, Ltd. [source]
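
FLEXMG itself is coupled to SFELES, but the combination it studies, an algebraic multigrid preconditioner accelerating GMRES, can be sketched with generic off-the-shelf tools. Below is a minimal stand-in using PyAMG's smoothed-aggregation solver as a preconditioner for SciPy's GMRES on a model Poisson matrix; none of this reproduces FLEXMG's API.

    import numpy as np
    import pyamg
    from scipy.sparse.linalg import gmres

    # model problem: 2-D Poisson matrix as a stand-in for the flow-solver system
    A = pyamg.gallery.poisson((200, 200), format='csr')
    b = np.random.default_rng(0).standard_normal(A.shape[0])

    # smoothed-aggregation AMG hierarchy, wrapped as a preconditioner
    ml = pyamg.smoothed_aggregation_solver(A)
    M = ml.aspreconditioner(cycle='V')

    x, info = gmres(A, b, M=M, rtol=1e-8, restart=30)
    print("converged" if info == 0 else f"gmres flag {info}",
          "residual:", np.linalg.norm(b - A @ x))

Dropping M from the gmres call makes the cost of the unpreconditioned iteration easy to measure, which mirrors the stand-alone-versus-coupled comparison described in the abstract.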


Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 3 2009
Eui Chul Lee
Abstract With recent increases in security requirements, biometrics such as fingerprints, faces, and irises have been widely used in many recognition applications including door access control, personal authentication for computers, Internet banking, automatic teller machines, and border-crossing controls. Finger vein recognition uses the unique patterns of finger veins to identify individuals at a high level of accuracy. This article proposes a new finger vein recognition method using minutia-based alignment and local binary pattern (LBP)-based feature extraction. Our study offers three novelties compared with previous works. First, we use minutia points such as bifurcation and ending points of the finger vein region for image alignment. Second, instead of using the whole finger vein region, we use several extracted minutia points and a simple affine transform for alignment, which can be performed at high computational speed. Third, after aligning the finger vein image based on minutia points, we extract a unique finger vein code using an LBP, which significantly reduces the false rejection error and hence the equal error rate (EER). Our resulting EER was 0.081% with a total processing time of 118.6 ms. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 179–186, 2009 [source]
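
The alignment step described above is essentially a least-squares affine fit between matched minutia points, followed by LBP encoding of the aligned image. A rough sketch of both steps follows, with placeholder minutia correspondences, a synthetic image, and scikit-image's LBP standing in for the authors' implementation.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def fit_affine(src, dst):
        """Least-squares 2-D affine transform mapping src -> dst.
        src, dst: (N, 2) arrays of matched minutia coordinates, N >= 3."""
        n = src.shape[0]
        A = np.hstack([src, np.ones((n, 1))])      # rows are [x, y, 1]
        params, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return params                               # apply as [x, y, 1] @ params

    # placeholder matched minutiae (bifurcations/endings) from two images
    src = np.array([[10., 40.], [80., 52.], [45., 90.], [70., 20.]])
    dst = np.array([[12., 43.], [83., 54.], [46., 94.], [73., 21.]])
    T = fit_affine(src, dst)
    print("fit residual:",
          np.linalg.norm(np.hstack([src, np.ones((4, 1))]) @ T - dst))

    # LBP code of an (already aligned) vein image; 8 neighbours, radius 1
    rng = np.random.default_rng(1)
    img = (rng.random((64, 128)) * 255).astype(np.uint8)  # stand-in image
    lbp = local_binary_pattern(img, P=8, R=1, method='uniform')
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    print("LBP histogram:", np.round(hist, 3))

The paper matches binarized vein codes (e.g. by Hamming distance); the histogram here is just a compact way to show the LBP output.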


Adjoint network method applied to the performance sensitivities of microwave amplifiers

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 5 2006
F. Güneş
Abstract This work focuses on the performance sensitivities of microwave amplifiers using the "adjoint network and adjoint variable" method, via "wave" approaches, which includes sensitivities of the transducer power gain, noise figure, and magnitudes and phases of the input and output reflection coefficients. The method can be extended to sensitivities of the other performance-measure functions. The adjoint-variable methods for design-sensitivity analysis offer computational speed and accuracy; they can be used for efficient gradient-based optimization and in tolerance and yield analyses. In this work, an arbitrarily configured microwave amplifier is considered: firstly, each element in the network is modeled by the scattering-matrix formulation; then the topology of the network is taken into account using the connection scattering-matrix formulation. The wave approach is utilized in the evaluation of all the performance-measure functions, and sensitivity invariants are formulated using Tellegen's theorem. Performance sensitivities of the T- and Π-types of distributed-parameter amplifiers are considered as a worked example. The numerical results of the T- and Π-type amplifiers for the design targets of noise figure Freq = 0.46 dB (≅1.12), input VSWR Vireq = 1, and transducer gain GTreq = 12 dB (≅15.86) in the frequency range 2–11 GHz are given in comparison to each other. Furthermore, the analytical methods of "gain factorisation" and "chain sensitivity parameter" are applied to the gain and noise sensitivities as well. In addition, "numerical perturbation" is applied to the calculation of all the sensitivities. © 2006 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2006. [source]
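
The efficiency of adjoint-variable sensitivities comes from a single extra adjoint solve yielding derivatives with respect to every parameter at once. A generic numerical illustration (not the paper's wave formulation): for a linear system A(p)x = b and response J = c^T x, solving A^T λ = c gives dJ/dp = -λ^T (dA/dp) x. The sketch below checks this against numerical perturbation (finite differences), the same cross-check the abstract mentions; the matrices are random placeholders.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5
    A0 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned base
    dA = rng.standard_normal((n, n))                   # dA/dp for one parameter p
    b = rng.standard_normal(n)
    c = rng.standard_normal(n)

    def J(p):
        """Response functional J(p) = c^T x with A(p) x = b."""
        x = np.linalg.solve(A0 + p * dA, b)
        return c @ x

    p = 0.3
    x = np.linalg.solve(A0 + p * dA, b)
    lam = np.linalg.solve((A0 + p * dA).T, c)          # one adjoint solve
    dJ_adjoint = -lam @ (dA @ x)                       # adjoint sensitivity

    h = 1e-6                                           # numerical perturbation
    dJ_fd = (J(p + h) - J(p - h)) / (2 * h)
    print(dJ_adjoint, dJ_fd)                           # should agree closely

As an aside, the linear values paired with the dB design targets above follow from 10^(dB/10): 10^0.046 ≅ 1.12 and 10^1.2 ≅ 15.86.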


Kinetic study of methacrylate copolymerization systems by thermoanalysis methods

JOURNAL OF APPLIED POLYMER SCIENCE, Issue 5 2008
Ali Habibi
Abstract The free-radical solution copolymerization of isobutyl methacrylate with lauryl methacrylate in the presence of an inhibitor was studied with thermoanalysis methods. A set of inhibited polymerization experiments was designed. Four different levels of initial inhibitor/initiator molar ratios were considered. In situ polymerization experiments were carried out with differential scanning calorimetry. Furthermore, to determine the impact of the polymerization media on the rate of initiation, the kinetics of the initiator decomposition were followed with nonisothermal thermoanalysis methods, and the results were compared with their in situ polymerization counterparts. The robust M-estimation method was used to retrieve the kinetic parameters of the copolymerization system. This estimation method led to a reasonable prediction error for the dataset with strong multicollinearity. The model-free isoconversional method was employed to find the variation of the Arrhenius activation energy with the conversion. It was found that robust M-estimation outperformed existing methods of estimation in terms of statistical precision and computational speed, while maintaining good robustness. © 2008 Wiley Periodicals, Inc. J Appl Polym Sci 2008 [source]
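
M-estimation replaces the squared loss with one that grows more slowly for large residuals, which is what buys the robustness reported above. Below is a toy sketch of the idea for an Arrhenius-type fit, ln k = ln A - Ea/(RT), using SciPy's Huber loss on synthetic data; the paper's model is the full copolymerization scheme, not this single-reaction toy.

    import numpy as np
    from scipy.optimize import least_squares

    R = 8.314                                  # J mol^-1 K^-1
    T = np.linspace(330.0, 380.0, 12)          # K, synthetic temperatures
    lnA_true, Ea_true = 30.0, 1.1e5            # assumed "true" parameters
    rng = np.random.default_rng(3)
    lnk = lnA_true - Ea_true / (R * T) + 0.05 * rng.standard_normal(T.size)
    lnk[4] += 1.5                              # one gross outlier

    def resid(theta):
        """Residuals of the Arrhenius model ln k = ln A - Ea/(R T)."""
        lnA, Ea = theta
        return lnA - Ea / (R * T) - lnk

    # Huber loss = robust M-estimation; compare with plain least squares
    fit_robust = least_squares(resid, x0=[25.0, 9e4], loss='huber', f_scale=0.1)
    fit_plain = least_squares(resid, x0=[25.0, 9e4])
    print("robust:", fit_robust.x)             # close to the true parameters
    print("plain :", fit_plain.x)              # dragged by the outlier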


Complex molecular assemblies at hand via interactive simulations

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 15 2009
Olivier Delalande
Abstract Studying complex molecular assemblies interactively is becoming an increasingly appealing approach to molecular modeling. Here we focus on interactive molecular dynamics (IMD) as a textbook example for interactive simulation methods. Such simulations can be useful in exploring and generating hypotheses about the structural and mechanical aspects of biomolecular interactions. For the first time, we carry out low-resolution coarse-grain IMD simulations. Such simplified modeling methods currently appear to be more suitable for interactive experiments and represent a well-balanced compromise between an important gain in computational speed and a moderate loss in modeling accuracy compared to higher-resolution all-atom simulations. This is particularly useful for initial exploration and hypothesis development for rare molecular interaction events. We evaluate which applications are currently feasible using molecular assemblies from 1900 to over 300,000 particles. Three biochemical systems are discussed: the guanylate kinase (GK) enzyme, the outer membrane protease T and the soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) complex involved in membrane fusion. We induce large conformational changes, carry out interactive docking experiments, probe lipid–protein interactions and are able to sense the mechanical properties of a molecular model. Furthermore, such interactive simulations facilitate the exploration of modeling parameters for method improvement. For the purpose of these simulations, we have developed a freely available software library called MDDriver. It uses the IMD protocol from NAMD and facilitates the implementation and application of interactive simulations. With MDDriver it becomes very easy to render any particle-based molecular simulation engine interactive. Here we use its implementation in the Gromacs software as an example. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2009 [source]
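
At its core, IMD is an ordinary MD loop that periodically exchanges coordinates for user-applied forces over a socket. That control flow can be caricatured in a few lines: the sketch below steers one coarse-grain bead in a harmonic well with an injected "user" force. MDDriver's actual socket exchange and the IMD wire format are deliberately not reproduced; the user_force stub stands in for them.

    import numpy as np

    def user_force(step):
        """Stand-in for the force a user applies through an IMD client."""
        return 5.0 if 200 <= step < 400 else 0.0   # transient pull

    # one coarse-grain bead in a harmonic well, velocity-Verlet integration
    k, m, dt = 1.0, 1.0, 0.01
    x, v = 0.0, 0.0
    f = -k * x
    for step in range(1000):
        x += v * dt + 0.5 * (f / m) * dt**2
        f_new = -k * x + user_force(step)          # internal + steering force
        v += 0.5 * (f + f_new) / m * dt
        f = f_new
        if step % 200 == 0:                        # "send coordinates to client"
            print(f"step {step:4d}  x = {x:+.3f}")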


Surface deformation due to loading of a layered elastic half-space: a rapid numerical kernel based on a circular loading element

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007
E. Pan
SUMMARY This study is motivated by a desire to develop a fast numerical algorithm for computing the surface deformation field induced by surface pressure loading on a layered, isotropic, elastic half-space. The approach that we pursue here is based on a circular loading element. That is, an arbitrary surface pressure field applied within a finite surface domain will be represented by a large number of circular loading elements, all with the same radius, in which the applied downwards pressure (normal stress) is piecewise uniform: that is, the load within each individual circle is laterally uniform. The key practical requirement associated with this approach is that we need to be able to solve for the displacement field due to a single circular load, at very large numbers of points (or 'stations'), at very low computational cost. This elemental problem is axisymmetric, and so the displacement vector field consists of radial and vertical components, both of which are functions only of the radial coordinate r. We achieve high computational speeds using a novel two-stage approach that we call the sparse evaluation and massive interpolation (SEMI) method. First, we use a high-accuracy but computationally expensive method to compute the displacement vectors at a limited number of r values (called control points or knots), and then we use a variety of fast interpolation methods to determine the displacements at much larger numbers of intervening points. The accurate solutions achieved at the control points are framed in terms of cylindrical vector functions, Hankel transforms and propagator matrices. Adaptive Gauss quadrature is used to handle the oscillatory nature of the integrands in an optimal manner. To extend these exact solutions via interpolation we divide the r-axis into three zones, and employ a different interpolation algorithm in each zone. The magnitude of the errors associated with the interpolation is controlled by the number, M, of control points. For M = 54, the maximum RMS relative error associated with the SEMI method is less than 0.2 per cent, and it is possible to evaluate the displacement field at 100 000 stations about 1200 times faster than if the direct (exact) solution were evaluated at each station; for M = 99, which corresponds to a maximum RMS relative error less than 0.03 per cent, the SEMI method is about 700 times faster than the direct solution. [source]
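
The SEMI strategy is generic enough to condense into a few lines: evaluate an expensive kernel at M knots, then serve the many station queries by interpolation. The sketch below uses a synthetic oscillatory-integral kernel and a single cubic spline in place of the paper's exact layered-half-space solution and its three interpolation zones; all names and numbers here are illustrative.

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.integrate import quad

    def expensive_kernel(r):
        """Stand-in for the exact displacement solution at radius r
        (the real kernel involves Hankel transforms and propagator matrices)."""
        val, _ = quad(lambda t: np.exp(-t) * np.cos(r * t), 0.0, 50.0, limit=200)
        return val

    # stage 1: sparse evaluation at M control points (knots)
    M = 54
    knots = np.linspace(0.0, 10.0, M)
    exact_at_knots = np.array([expensive_kernel(r) for r in knots])

    # stage 2: massive interpolation at many stations
    spline = CubicSpline(knots, exact_at_knots)
    stations = np.linspace(0.0, 10.0, 100_000)
    approx = spline(stations)

    # error check against the direct solution on a small subsample
    sample = stations[::5000]
    direct = np.array([expensive_kernel(r) for r in sample])
    rel_rms = (np.sqrt(np.mean((spline(sample) - direct) ** 2))
               / np.max(np.abs(direct)))
    print(f"RMS relative error on subsample: {rel_rms:.2e}")

The speed-up reported in the abstract comes from exactly this asymmetry: the quadrature-based kernel dominates the cost, while the 100 000 spline evaluations are nearly free.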