Algorithms Used

Selected Abstracts


Scene Graph and Frame Update Algorithms for Smooth and Scalable 3D Visualization of Simulated Construction Operations

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2002
Vineet R. Kamat
One of the prime reasons inhibiting the widespread use of discrete-event simulation in construction planning is the absence of appropriate visual communication tools. Visualizing modeled operations in 3D is arguably the best form of communicating the logic and the inner working of simulation models and can be of immense help in establishing the credibility of analyses. New software development technologies that allow engineers and scientists to create novel, domain-specific applications emerge at incredible rates. The authors capitalized on a computer graphics technology based on the concept of the scene graph to design and implement a general-purpose 3D visualization system that is simulation- and CAD-software independent. This system, the Dynamic Construction Visualizer, enables realistic visualization of modeled construction operations and the resulting products and can be used in conjunction with a wide variety of simulation tools. This paper describes the scene graph architecture and the frame updating algorithms used in designing the Dynamic Construction Visualizer. [source]
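
The Dynamic Construction Visualizer's own data structures are not given in the abstract; as a rough illustration of the scene-graph and frame-update idea (hypothetical class and method names, translation-only transforms rather than full matrices), a minimal Python sketch:

# Minimal sketch (hypothetical names): a scene-graph node hierarchy whose world
# positions are refreshed once per rendered frame by interpolating keyframed
# activity trajectories, in the spirit of the approach the abstract describes.
import numpy as np

class SceneNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.keyframes = []          # list of (time, local position) pairs
        self.world_pos = np.zeros(3)
        if parent is not None:
            parent.children.append(self)

    def local_pos(self, t):
        """Interpolate the local position for simulation time t."""
        if not self.keyframes:
            return np.zeros(3)
        times = [k[0] for k in self.keyframes]
        if t <= times[0]:
            return np.asarray(self.keyframes[0][1], float)
        if t >= times[-1]:
            return np.asarray(self.keyframes[-1][1], float)
        i = np.searchsorted(times, t) - 1
        (t0, p0), (t1, p1) = self.keyframes[i], self.keyframes[i + 1]
        a = (t - t0) / (t1 - t0)
        return (1 - a) * np.asarray(p0, float) + a * np.asarray(p1, float)

    def update(self, t):
        """Frame update: recompute world positions for this subtree."""
        base = self.parent.world_pos if self.parent is not None else np.zeros(3)
        self.world_pos = base + self.local_pos(t)
        for child in self.children:
            child.update(t)

# Usage: a crane carrying a load; one update call per rendered frame.
root = SceneNode("site")
crane = SceneNode("crane", parent=root)
crane.keyframes = [(0.0, [0, 0, 0]), (10.0, [5, 0, 0])]   # travel along x
load = SceneNode("load", parent=crane)
load.keyframes = [(0.0, [0, 0, 8]), (10.0, [0, 0, 2])]    # lower the load
for frame_time in np.linspace(0.0, 10.0, 5):
    root.update(frame_time)
    print(frame_time, load.world_pos)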


Seine: a dynamic geometry-based shared-space interaction framework for parallel scientific applications

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2006
L. Zhang
Abstract While large-scale parallel/distributed simulations are rapidly becoming critical research modalities in academia and industry, their efficient and scalable implementations continue to present many challenges. A key challenge is that the dynamic and complex communication/coordination required by these applications (dependent on the state of the phenomenon being modeled) are determined by the specific numerical formulation, the domain decomposition and/or sub-domain refinement algorithms used, etc. and are known only at runtime. This paper presents Seine, a dynamic geometry-based shared-space interaction framework for scientific applications. The framework provides the flexibility of shared-space-based models and supports extremely dynamic communication/coordination patterns, while still enabling scalable implementations. The design and prototype implementation of Seine are presented. Seine complements and can be used in conjunction with existing parallel programming systems such as MPI and OpenMP. An experimental evaluation using an adaptive multi-block oil-reservoir simulation is used to demonstrate the performance and scalability of applications using Seine. Copyright © 2006 John Wiley & Sons, Ltd. [source]
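
Seine's actual programming interface is not described in the abstract; purely as an illustration of what a geometry-based shared space could look like (hypothetical class and method names, not Seine's API), a small Python sketch in which objects are put under a bounding box in the global index space and retrieved by any reader whose registered region overlaps it:

# Illustrative sketch only (hypothetical API): a shared space keyed by geometry,
# with matching done by bounding-box overlap in the global index space.
class GeometricSharedSpace:
    def __init__(self):
        self.entries = []                      # list of (box, tag, payload)

    @staticmethod
    def overlaps(a, b):
        # a, b are ((lo_x, lo_y), (hi_x, hi_y)) boxes in the global index space
        (alo, ahi), (blo, bhi) = a, b
        return all(alo[d] <= bhi[d] and blo[d] <= ahi[d] for d in range(len(alo)))

    def put(self, box, tag, payload):
        self.entries.append((box, tag, payload))

    def get(self, region, tag):
        """Return payloads whose box overlaps the caller's registered region."""
        return [p for (box, t, p) in self.entries
                if t == tag and self.overlaps(box, region)]

# Usage: two sub-domains exchange ghost-cell data discovered purely by geometry.
space = GeometricSharedSpace()
space.put(((8, 0), (9, 15)), "ghost", {"owner": 0, "values": [1.0, 2.0]})
print(space.get(((8, 0), (15, 15)), "ghost"))   # the neighbour finds the overlap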


Functional source separation applied to induced visual gamma activity

HUMAN BRAIN MAPPING, Issue 2 2008
Giulia Barbati
Abstract The objective of this work was to explore the performance of a recently introduced source extraction method, FSS (Functional Source Separation), in recovering induced oscillatory change responses from extra-cephalic magnetoencephalographic (MEG) signals. Unlike algorithms used to solve the inverse problem, FSS does not make any assumption about the underlying biophysical source model; instead, it makes use of task-related features (functional constraints) to estimate the source(s) of interest. FSS was compared with blind source separation (BSS) approaches such as Principal and Independent Component Analysis, PCA and ICA, which are not subject to any explicit forward solution or functional constraint, but require source uncorrelatedness (PCA) or independence (ICA). A visual MEG experiment with signals recorded from six subjects viewing a set of static horizontal black/white square-wave grating patterns at different spatial frequencies was analyzed. The beamforming technique Synthetic Aperture Magnetometry (SAM) was applied to localize task-related sources; the obtained spatial filters were used to automatically select BSS and FSS components in the spatial area of interest. Source spectral properties were investigated by using Morlet-wavelet time-frequency representations, and significant task-induced changes were evaluated by means of a resampling technique; the resulting spectral behaviours in the gamma frequency band of interest (20-70 Hz), as well as the spatial frequency-dependent gamma reactivity, were quantified and compared among methods. Among the tested approaches, only FSS was able to estimate the expected sustained gamma activity enhancement in primary visual cortex, throughout the whole duration of the stimulus presentation for all subjects, and to obtain sources comparable to invasively recorded data. Hum Brain Mapp 29:131-141, 2008. © 2007 Wiley-Liss, Inc. [source]
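
The study's pipeline cannot be reproduced from the abstract; as a generic illustration of the Morlet-wavelet time-frequency representation it mentions (all parameter values are arbitrary, not those of the study), a small numpy sketch:

# Illustrative sketch: Morlet-wavelet time-frequency power of a single source
# time course, computed by convolving the signal with complex Morlet wavelets.
import numpy as np

def morlet_tfr(signal, fs, freqs, n_cycles=7.0):
    """Return a (len(freqs), len(signal)) array of wavelet power."""
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)          # temporal width of the wavelet
        t = np.arange(-5 * sigma_t, 5 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))  # unit energy
        conv = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(conv) ** 2
    return power

# Usage: a toy "induced gamma" burst at 40 Hz riding on noise.
fs = 600.0
t = np.arange(0, 2.0, 1.0 / fs)
sig = np.random.randn(t.size) * 0.5
sig[(t > 0.8) & (t < 1.4)] += np.sin(2 * np.pi * 40 * t[(t > 0.8) & (t < 1.4)])
tfr = morlet_tfr(sig, fs, freqs=np.arange(20, 71, 2))
print(tfr.shape)            # (26, 1200): frequencies x time samples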


Shared challenges in object perception for robots and infants

INFANT AND CHILD DEVELOPMENT, Issue 1 2008
Paul Fitzpatrick
Abstract Robots and humans receive partial, fragmentary hints about the world's state through their respective sensors. These hints (tiny patches of light intensity, frequency components of sound, etc.) are far removed from the world of objects which we feel and perceive so effortlessly around us. The study of infant development and the construction of robots are both deeply concerned with how this apparent gap between the world and our experience of it is bridged. In this paper, we focus on some fundamental problems in perception which have attracted the attention of researchers in both robotics and infant development. Our goal was to identify points of contact already existing between the two fields, and also important questions identified in one field that could fruitfully be addressed in the other. We start with the problem of object segregation: how do infants and robots determine visually where one object ends and another begins? For object segregation, both fields have examined the idea of using 'key events' where perception is in some way simplified and the infant or robot acquires knowledge that can be exploited at other times. We propose that the identification of the key events themselves constitutes a point of contact between the fields. Although the specific algorithms used in robots do not necessarily map directly to infant strategies, the overall 'algorithmic skeleton' formed by the set of algorithms needed to identify and exploit key events may in fact form the basis for mutual dialogue. We then look more broadly at the role of embodiment in humans and robots, and see the opportunities it affords for development. Copyright © 2008 John Wiley & Sons, Ltd. [source]


An efficient out-of-core multifrontal solver for large-scale unsymmetric element problems

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2009
J. K. Reid
Abstract In many applications where the efficient solution of large sparse linear systems of equations is required, a direct method is frequently the method of choice. Unfortunately, direct methods have a potentially severe limitation: as the problem size grows, the memory needed generally increases rapidly. However, the in-core memory requirements can be limited by storing the matrix and its factors externally, allowing the solver to be used for very large problems. We have designed a new out-of-core package for the large sparse unsymmetric systems that arise from finite-element problems. The code, which is called HSL_MA78, implements a multifrontal algorithm and achieves efficiency through the use of specially designed code for handling the input/output operations and efficient dense linear algebra kernels. These kernels, which are available as a separate package called HSL_MA74, use high-level BLAS to perform the partial factorization of the frontal matrices and offer both threshold partial and rook pivoting. In this paper, we describe the design of HSL_MA78 and explain its user interface and the options it offers. We also describe the algorithms used by HSL_MA74 and illustrate the performance of our new codes using problems from a range of practical applications. Copyright © 2008 John Wiley & Sons, Ltd. [source]
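
HSL_MA78 and HSL_MA74 are Fortran library codes and are not reproduced here; as a rough illustration of what threshold partial pivoting means inside a dense partial-factorization kernel (a toy dense LU with an arbitrary threshold, not HSL code), a short numpy sketch:

# Illustrative sketch only: dense LU with threshold partial pivoting. A pivot is
# accepted if its magnitude is at least a fraction u of the largest entry in its
# column; a real solver would use the freedom this gives to limit fill-in.
import numpy as np

def lu_threshold(A, u=0.1):
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        col = np.abs(A[k:, k])
        cmax = col.max()
        if cmax == 0.0:
            continue                       # column already eliminated
        # first row passing the threshold test |a_ik| >= u * max_i |a_ik|
        p = k + int(np.argmax(col >= u * cmax))
        A[[k, p]] = A[[p, k]]
        perm[[k, p]] = perm[[p, k]]
        A[k+1:, k] /= A[k, k]              # multipliers (strictly lower part)
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return perm, A                          # A now holds L (below diag) and U

# Usage: verify P A = L U on a random unsymmetric matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
perm, LU = lu_threshold(M, u=0.1)
L = np.tril(LU, -1) + np.eye(5)
U = np.triu(LU)
print(np.allclose(M[perm], L @ U))          # True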


A robust methodology for RANS simulations of highly underexpanded jets

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2008
G. Lehnasch
Abstract This work aims at developing/combining numerical tools adapted to the simulation of the near field of highly underexpanded jets. An overview of the challenging numerical problems related to the complex shock/expansion structure encountered in these flows is given, and an efficient and low-cost numerical strategy is proposed to overcome them, even on short computational domains. Based on common upwinding algorithms used on unstructured meshes in a mixed finite-volume/finite-element approach, it relies on an appropriate utilization of zonal anisotropic remeshing algorithms. This methodology is validated for the whole near field of cold air jets issuing from axisymmetric convergent nozzles and yielding various underexpansion ratios. In addition, the most usual corrections of the k-ε model used to take into account the compressibility effects on turbulence are precisely assessed. Copyright © 2007 John Wiley & Sons, Ltd. [source]
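
The abstract does not list the specific corrections that were assessed; as one commonly cited example of this family (a Sarkar-type dilatational-dissipation term, written here in its usual textbook form with an illustrative coefficient, and not necessarily one of the corrections tested in the paper), a tiny Python sketch:

# Illustrative sketch: a Sarkar-type compressibility correction in which the
# solenoidal dissipation of the k-epsilon model is augmented by a dilatational
# term proportional to the square of the turbulent Mach number. Coefficient and
# inputs are illustrative only.
import math

def corrected_dissipation(k, eps_s, a_sound, alpha1=1.0):
    """Return (turbulent Mach number, total dissipation eps_s * (1 + alpha1*Mt^2))."""
    Mt = math.sqrt(2.0 * k) / a_sound          # turbulent Mach number
    return Mt, eps_s * (1.0 + alpha1 * Mt ** 2)

# Usage: the extra dissipation damps turbulence growth at high turbulent Mach
# number, which is the intended compressibility effect in jet shear layers.
print(corrected_dissipation(k=2000.0, eps_s=1.0e6, a_sound=340.0))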


What are daily maximum and minimum temperatures in observed climatology?

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 3 2008
X. Lin
Abstract Instrumental daily maximum and minimum temperatures are reported and archived from various surface thermometers, with different averaging algorithms, in historical and current U.S. surface climate networks. An instrumental bias in daily maximum and minimum temperatures, caused by the sensors' differing sampling rates, averaging algorithms, and time constants, was examined in simulation using a Gaussian-distributed function of surface air temperature fluctuations. Field observations were also included to examine the effects of the averaging algorithms used in reporting daily maximum and minimum temperatures. Compared with the longest-recorded, standard liquid-in-glass maximum and minimum thermometers, some surface climate networks produced a systematic warming (cooling) bias in daily maximum (minimum) temperature observations; the resulting biases make the diurnal temperature range (DTR) more biased in extreme climate studies. Our study clarifies the ambiguous definitions of daily maximum and minimum temperature observations given by the World Meteorological Organization (WMO) in terms of sensor time constants and averaging lengths, and an accurate description of daily maximum and minimum temperatures is recommended to avoid the uncertainties that arise in the observed climatology. Copyright © 2007 Royal Meteorological Society [source]
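
The paper's exact simulation set-up is not reproduced here; a minimal Python sketch of the underlying idea, in which a first-order sensor with time constant tau tracks fluctuating air temperature and the reported daily maximum is taken over block averages of its output (all numbers illustrative):

# Illustrative sketch: how a sensor time constant and an averaging window bias
# the reported daily maximum. A first-order sensor follows Gaussian air-temperature
# fluctuations; the "reported" maximum is taken over block averages of its output.
import numpy as np

rng = np.random.default_rng(1)
dt, tau, avg_len = 1.0, 30.0, 300           # seconds: sample step, time constant, averaging window
n = 86_400                                  # one day of 1-s samples
air = 20.0 + 5.0 * np.sin(2 * np.pi * np.arange(n) / n) + rng.normal(0.0, 0.8, n)

sensor = np.empty(n)                        # first-order response: dT/dt = (air - T) / tau
sensor[0] = air[0]
alpha = dt / tau
for i in range(1, n):
    sensor[i] = sensor[i - 1] + alpha * (air[i] - sensor[i - 1])

block_means = sensor[: n - n % avg_len].reshape(-1, avg_len).mean(axis=1)
print("true Tmax      :", air.max())
print("sensor Tmax    :", sensor.max())      # smoothed by the time constant
print("averaged Tmax  :", block_means.max()) # further altered by the averaging length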


Performance evaluation of adaptive routing algorithms in packet-switched intersatellite link networks

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 2 2002
Mihael Mohor
Abstract This paper addresses the performance evaluation of adaptive routing algorithms in non-geostationary packet-switched satellite communication systems. The dynamic topology of satellite networks and the variable traffic load in satellite coverage areas, due to the motion of satellites in their orbit planes, pose stringent requirements on routing algorithms. We have limited the scope of our interest to routing in the intersatellite link (ISL) segment. In order to analyse the applicability of different routing algorithms used in terrestrial networks, and to evaluate the performance of new algorithms designed for satellite networks, we have built a simulation model of a satellite communication system with intersatellite links. In the paper, we present simulation results considering a network-uniform source/destination distribution model and a uniform source-destination traffic flow, thus showing the inherent routing characteristics of a selected Celestri-like LEO satellite constellation. The updates of the routing tables are centrally calculated according to the Dijkstra shortest path algorithm. Copyright © 2002 John Wiley & Sons, Ltd. [source]
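
The simulation model itself is not available here; since the abstract states that routing tables are computed centrally with the Dijkstra shortest-path algorithm, a minimal Python sketch of that step on a toy ISL snapshot (node names and link costs are illustrative):

# Minimal sketch: centrally computed routing tables via Dijkstra's algorithm on
# a snapshot of the ISL topology.
import heapq

def dijkstra(graph, src):
    """graph: {node: {neighbour: cost}}. Returns (distance, predecessor) maps."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Usage: shortest-path costs from one satellite in a tiny four-node snapshot;
# next hops (routing table entries) follow from the predecessor map.
isl = {"S1": {"S2": 1.0, "S3": 2.5},
       "S2": {"S1": 1.0, "S4": 1.2},
       "S3": {"S1": 2.5, "S4": 1.0},
       "S4": {"S2": 1.2, "S3": 1.0}}
dist, prev = dijkstra(isl, "S1")
print(dist)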


Guidelines for assessment of bone microstructure in rodents using micro-computed tomography

JOURNAL OF BONE AND MINERAL RESEARCH, Issue 7 2010
Mary L Bouxsein
Abstract Use of high-resolution micro-computed tomography (µCT) imaging to assess trabecular and cortical bone morphology has grown immensely. There are several commercially available µCT systems, each with different approaches to image acquisition, evaluation, and reporting of outcomes. This lack of consistency makes it difficult to interpret reported results and to compare findings across different studies. This article addresses this critical need for standardized terminology and consistent reporting of parameters related to image acquisition and analysis, and key outcome assessments, particularly with respect to ex vivo analysis of rodent specimens. Thus the guidelines herein provide recommendations regarding (1) standardized terminology and units, (2) information to be included in describing the methods for a given experiment, and (3) a minimal set of outcome variables that should be reported. Whereas the specific research objective will determine the experimental design, these guidelines are intended to ensure accurate and consistent reporting of µCT-derived bone morphometry and density measurements. In particular, the methods section for papers that present µCT-based outcomes must include details of the following scan aspects: (1) image acquisition, including the scanning medium, X-ray tube potential, and voxel size, as well as clear descriptions of the size and location of the volume of interest and the method used to delineate trabecular and cortical bone regions, and (2) image processing, including the algorithms used for image filtration and the approach used for image segmentation. Morphometric analyses should be based on 3D algorithms that do not rely on assumptions about the underlying structure whenever possible. When reporting µCT results, the minimal set of variables that should be used to describe trabecular bone morphometry includes bone volume fraction and trabecular number, thickness, and separation. The minimal set of variables that should be used to describe cortical bone morphometry includes total cross-sectional area, cortical bone area, cortical bone area fraction, and cortical thickness. Other variables also may be appropriate depending on the research question and technical quality of the scan. Standard nomenclature, outlined in this article, should be followed for reporting of results. © 2010 American Society for Bone and Mineral Research [source]
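
As a minimal illustration of the simplest of the recommended trabecular outcome variables (not any scanner vendor's implementation), a Python sketch computing bone volume fraction by direct voxel counting on a segmented volume of interest:

# Illustrative sketch only: bone volume fraction (BV/TV) by voxel counting on a
# segmented (binary) trabecular volume of interest. Tb.Th, Tb.Sp and Tb.N would,
# per the guidelines, come from 3D model-independent (sphere-fitting) algorithms
# and are not implemented here.
import numpy as np

def bone_volume_fraction(binary_voi):
    """binary_voi: 3D boolean array, True = bone voxel inside the VOI."""
    bv = int(np.count_nonzero(binary_voi))     # bone voxels
    tv = int(binary_voi.size)                  # total voxels in the VOI
    return bv / tv

# Usage on a toy segmented volume (voxel size would also be reported, per the guidelines).
rng = np.random.default_rng(2)
voi = rng.random((64, 64, 64)) < 0.2           # ~20% "bone"
print("BV/TV =", round(bone_volume_fraction(voi), 3))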


Comparison Insight Bone Measurements by Histomorphometry and µCT

JOURNAL OF BONE AND MINERAL RESEARCH, Issue 7 2005
Daniel Chappard MD
Abstract Morphometric analysis of 70 bone biopsies was done in parallel by µCT and histomorphometry. µCT provided higher results for trabecular thickness and separation because of the 3D shape of these anatomical objects. Introduction: Bone histomorphometry is used to explore the various metabolic bone diseases. The technique is done on microscopic 2D sections, and several methods have been proposed to extrapolate 2D measurements to the 3D dimension. X-ray µCT is a recently developed imaging tool to appreciate 3D architecture. Recently the use of 2D histomorphometric measurements has been shown to provide discordant results compared with 3D values obtained directly. Material and Methods: Seventy human bone biopsies were removed from patients presenting with metabolic bone diseases. Complete bone biopsies were examined by µCT. Bone volume (BV/TV), Tb.Th, and Tb.Sp were measured on the 3D models. Tb.Th and Tb.Sp were measured by a method based on the sphere algorithm. In addition, six images were resliced and transferred to an image analyzer: bone volume and trabecular characteristics were measured after thresholding of the images. Bone cores were embedded undecalcified; histological sections were prepared and measured by routine histomorphometric methods, providing another set of values for bone volume and trabecular characteristics. Comparison between the different methods was done by using regression analysis, Bland-Altman, Passing-Bablok, and Mountain plots. Results: Correlations between all parameters were highly significant, but µCT overestimated bone volume. The osteoid volume had no influence in this series. Overestimation may have been caused by a double threshold used in µCT, giving trabecular boundaries less well defined than on histological sections. Correlations between Tb.Th and Tb.Sp values obtained by 3D or 2D measurements were lower, and 3D analysis always overestimated thickness by ~50%. These increases could be attributed to the 3D shape of the object because the number of nodes and the size of the marrow cavities were correlated with 3D values. Conclusion: In clinical practice, µCT seems to be an interesting method providing reliable morphometric results in less time than conventional histomorphometry. The correlation coefficient is not sufficient to study the agreement between techniques in histomorphometry. The architectural descriptors are influenced by the algorithms used in 3D. [source]
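
The study's data are not available here; as a sketch of the Bland-Altman style of agreement analysis used to compare the two techniques (numbers below are invented for the example), a short Python snippet:

# Illustrative sketch: Bland-Altman agreement statistics for paired measurements
# of the same quantity by two techniques (e.g. BV/TV by histomorphometry and by µCT).
import numpy as np

def bland_altman(x, y):
    """Return (mean difference, lower and upper 95% limits of agreement)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = y - x
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

histo = [12.1, 18.4, 25.0, 9.7, 30.2, 15.5]        # BV/TV (%), technique 1 (toy values)
mct   = [13.0, 20.1, 27.5, 10.9, 33.0, 16.8]       # BV/TV (%), technique 2 (toy values)
bias, lo, hi = bland_altman(histo, mct)
print(f"bias = {bias:.2f}%, limits of agreement = [{lo:.2f}%, {hi:.2f}%]")
# A systematic positive bias of this kind would echo the paper's finding that
# µCT overestimates bone volume relative to 2D histomorphometry.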


Protein-protein docking dealing with the unknown

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 2 2010
Irina S. Moreira
Abstract Protein-protein binding is one of the critical events in biology, and knowledge of the three-dimensional structures of protein complexes is of fundamental importance for the biochemical study of pharmacologic compounds. In the past two decades a large variety of algorithms has emerged to predict the structures of protein-protein complexes, a procedure named docking. Computational methods, if accurate and reliable, could play an important role, both to infer functional properties and to guide new experiments. Despite the outstanding progress of the methodologies developed in this area, a few problems still prevent protein-protein docking from being a widespread practice in the structural study of proteins. In this review we focus our attention on the principles that govern docking, namely the algorithms used for searching and scoring, which are usually referred to as the docking problem. We also focus our attention on the use of a flexible description of the proteins under study and the use of biological information such as the localization of the hot spots, the residues most important for protein-protein binding. The most common docking software packages are also described. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2010 [source]
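
None of the reviewed programs is reproduced here; as an illustration of the FFT shape-correlation idea behind many rigid-body docking searches (a Katchalski-Katzir-style score evaluated for every translation, on arbitrary toy grids), a small numpy sketch:

# Illustrative sketch (not any reviewed program): the FFT shape-correlation idea.
# Receptor and ligand are mapped onto 3D grids and the score for every relative
# translation is obtained as a circular cross-correlation computed with FFTs.
import numpy as np

def translation_scores(receptor_grid, ligand_grid):
    """Score all translations of the ligand grid against the receptor grid."""
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid)
    return np.real(np.fft.ifftn(R * np.conj(L)))   # correlation theorem

# Usage on toy grids: two small blobs whose best overlap the search recovers.
n = 32
receptor = np.zeros((n, n, n)); receptor[10:14, 10:14, 10:14] = 1.0
ligand   = np.zeros((n, n, n)); ligand[2:6, 2:6, 2:6] = 1.0
scores = translation_scores(receptor, ligand)
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best translation (voxels):", best)          # expected (8, 8, 8)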


On singular behaviors of impedance-based repeatable control for redundant robots

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 4 2001
Chau-Chang Wang
This article addresses the association between the unstiffening phenomena in structural mechanics and the algorithmic singularities encountered in the impedance-based repeatable control algorithms used to command redundant manipulators. It is well known that velocity control schemes such as pseudoinverse control do not guarantee repeatability for redundant manipulators. In other words, for a closed end-effector trajectory, the joints do not, in general, exhibit a closed trajectory. One way to overcome this problem is to model each joint with compliance and incorporate a second-order correction term for the pseudoinverse. With this model, the joint configuration adopted by the manipulator at a given point in task space is one which minimizes the artificial potential energy of the system and is locally unique. In terms of statics, this is equivalent to saying that the elastic structure reaches its static equilibrium under external load. Keeping this analogy in mind, we know that impedance control commands the manipulator to mimic the behavior of an elastic articulated chain. For any phenomenon observable on a real elastic structure, we should be able to find its counterpart embedded in the impedance control. In this article, we analyze the performance of such repeatable control algorithms from the point of view of structural mechanics. Singularities in the algorithm are examined and their significance in mechanics is also discussed. © 2001 John Wiley & Sons, Inc. [source]
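
The impedance-based correction itself is not reproduced here; as a minimal sketch of the baseline behaviour the article starts from, plain pseudoinverse (resolved-rate) control of a planar three-link arm around a closed end-effector loop, which in general leaves a nonzero joint drift (all numbers illustrative):

# Illustrative sketch: pseudoinverse (resolved-rate) control of a planar 3R arm
# tracking a closed end-effector path. The residual joint drift at the end of
# the loop is the non-repeatability that second-order/impedance-based schemes correct.
import numpy as np

L = np.array([1.0, 0.8, 0.6])                 # link lengths (illustrative)

def fk(q):
    s = np.cumsum(q)
    return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

def jacobian(q):
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for j in range(3):                        # column j: effect of joint j
        J[0, j] = -np.sum(L[j:] * np.sin(s[j:]))
        J[1, j] =  np.sum(L[j:] * np.cos(s[j:]))
    return J

q = np.array([0.3, 0.4, 0.5])
q0, x0 = q.copy(), fk(q)
steps = 2000
for k in range(steps):                        # closed circular path of radius 0.2
    phi = 2 * np.pi * (k + 1) / steps
    x_des = x0 + 0.2 * np.array([np.cos(phi) - 1.0, np.sin(phi)])
    dx = x_des - fk(q)
    q = q + np.linalg.pinv(jacobian(q)) @ dx  # minimum-norm joint update
print("end-effector error:", np.linalg.norm(fk(q) - x0))
print("joint drift       :", np.linalg.norm(q - q0))   # generally nonzero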


Linear instability of ideal flows on a sphere

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 3 2009
Yuri N. Skiba
Abstract A unified approach to the normal mode instability study of steady solutions to the vorticity equation governing the motion of an ideal incompressible fluid on a rotating sphere is considered. The four types of well-known solutions are considered, namely, the Legendre-polynomial (LP) flows, Rossby-Haurwitz (RH) waves, Wu-Verkley (WV) waves and modons. A conservation law for disturbances to each solution is derived and used to obtain a necessary condition for its exponential instability. By these conditions, Fjörtoft's (Tellus 1953; 5:225-230) average spectral number of the amplitude of an unstable mode must be equal to a special value. In the case of LP flows or RH waves, this value is related only to the basic flow degree. For the WV waves and modons, it depends both on the basic flow degree and on the spectral distribution of the mode energy in the inner and outer regions of the flow. Peculiarities of the instability conditions for different types of modons are discussed. The new instability conditions specify the spectral structure of growing disturbances, localizing them in the phase space. For the LP flows, this condition complements the well-known Rayleigh-Kuo and Fjörtoft conditions related to the zonal flow profile. Some analytical and numerical examples are considered. The maximum growth rate of unstable modes is also estimated, and the orthogonality of any unstable, decaying and non-stationary mode to the basic flow is shown in the energy inner product. The analytical instability results obtained here can also be applied for testing the accuracy of computational programs and algorithms used for the numerical stability study. It should be stressed that Fjörtoft's spectral number appearing both in the instability conditions and in the maximum growth rate estimates is the parameter of paramount importance in the linear instability problem of ideal flows on a sphere. Copyright © 2008 John Wiley & Sons, Ltd. [source]
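
The sphere-specific conditions derived in the paper are not reproduced here; for orientation only, the classical planar (beta-plane) forms of the Rayleigh-Kuo and Fjörtoft necessary conditions for instability of a zonal flow U(y), which the abstract says the new spectral conditions complement, read (standard textbook statements, in LaTeX notation):

\beta - U''(y) \ \text{must change sign at some } y = y_s \ \text{in the domain (Rayleigh-Kuo)},

\bigl(\beta - U''(y)\bigr)\,\bigl(U(y) - U(y_s)\bigr) > 0 \ \text{somewhere in the domain (Fj\"ortoft)},

where y_s denotes a point at which \beta - U''(y) vanishes.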


Are Mechanistic and Statistical QSAR Approaches Really Different?

MOLECULAR INFORMATICS, Issue 6-7 2010
MLR Studies on 158 Cycloalkyl-Pyranones
Abstract Two parallel approaches to quantitative structure-activity relationships (QSAR) are predominant in the literature, one guided by mechanistic methods (including read-across) and the other by statistical methods. To bridge the gap between these two approaches and to verify their main differences, a comparative study of mechanistically relevant and statistically relevant QSAR models was performed on a case study of 158 cycloalkyl-pyranones that are biologically active as inhibitors (Ki) of HIV protease. Firstly, Multiple Linear Regression (MLR) based models were developed starting from a limited number of molecular descriptors widely accepted as having a mechanistic interpretation. Then robust and predictive MLR models were developed on the same set using two different statistical approaches that are unbiased in their choice of input descriptors. Development of models by the Statistical I method was guided by stepwise addition of descriptors, while Genetic Algorithm based selection of descriptors was used for Statistical II. Internal validation, the standard error of the estimate, and Fisher's significance test were performed for both statistical models. In addition, external validation was performed for the Statistical II model, and the Applicability Domain was verified as normally practiced in this approach. The relationships between the activity and the important descriptors selected in all the models were analyzed and compared. It is concluded that, despite the different type and number of input descriptors, and the applied descriptor selection tools or the algorithms used for developing the final model, the mechanistic and statistical approaches are comparable to each other in terms of quality and also in the mechanistic interpretability of the modelling descriptors. Agreement can be observed between these two approaches, and the better result could be a consensus prediction from both models. [source]
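
The actual descriptor pools and models are not reproduced here; a toy numpy sketch of the two statistical ingredients named in the abstract, an MLR fit and forward stepwise descriptor selection driven by cross-validated error (random data; the Genetic Algorithm variant is not shown):

# Illustrative sketch (toy data, not the paper's descriptors): multiple linear
# regression with forward stepwise descriptor selection, choosing at each step
# the descriptor that most reduces a cross-validated prediction error.
import numpy as np

rng = np.random.default_rng(3)
n_mol, n_desc = 158, 20
X = rng.standard_normal((n_mol, n_desc))            # descriptor matrix
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(0, 0.5, n_mol)

def mlr_press(X, y):
    """5-fold cross-validated sum of squared prediction errors for an MLR fit."""
    idx = np.arange(len(y)); folds = np.array_split(idx, 5)
    press = 0.0
    for f in folds:
        train = np.setdiff1d(idx, f)
        A = np.column_stack([np.ones(train.size), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        B = np.column_stack([np.ones(f.size), X[f]])
        press += np.sum((y[f] - B @ coef) ** 2)
    return press

selected = []
for _ in range(3):                                   # pick three descriptors
    scores = {j: mlr_press(X[:, selected + [j]], y)
              for j in range(n_desc) if j not in selected}
    selected.append(min(scores, key=scores.get))
print("selected descriptors:", selected)             # typically recovers 0, 3 and 7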


Data and Graph Mining in Chemical Space for ADME and Activity Data Sets

MOLECULAR INFORMATICS, Issue 3 2006

Abstract We present a classification method based on a coordinate-free chemical space. Thus, it does not depend on the descriptor values commonly used by coordinate-based chemical space methods. In our method the molecular similarity of chemical structures is evaluated by a generalized maximum common graph isomorphism, which supports the use of numerical physicochemical atom property labels in addition to discrete atom-type labels. The Maximum Common Substructure (MCS) algorithm applies the Highest Scoring Common Substructure (HSCS) ranking of Sheridan and co-workers, which penalizes discontinuous fragments. For all classification algorithms compared in this work, we analyze their usefulness with respect to two objectives: first, we are interested in highly accurate and general hypotheses; second, interpretability is highly important for increasing our structural knowledge of the ADME data sets and the activity data set investigated in this work. [source]


The evolution of substructure in galaxy, group and cluster haloes - III.

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2005
Comparison with simulations
ABSTRACT In a previous paper, we described a new method for including detailed information about substructure in semi-analytic models of halo formation based on merger trees. In this paper, we compare the predictions of our model with results from self-consistent numerical simulations. We find that in general the two methods agree extremely well, particularly once numerical effects and selection effects in the choice of haloes are taken into account. As expected from the original analyses of the simulations, we see some evidence for artificial overmerging in the innermost regions of the simulated haloes, either because substructure is being disrupted artificially or because the group-finding algorithms used to identify substructure are not detecting all the bound clumps in the highest-density regions. Our analytic results suggest that greater mass and force resolution may be required before numerical overmerging becomes negligible in all current applications. We discuss the implications of this result for observational and experimental tests of halo substructure, such as the analysis of discrepant magnification ratios in strongly lensed systems, terrestrial experiments to detect dark matter particles directly or indirect detection experiments searching for positrons, gamma-rays, neutrinos or other dark matter decay or annihilation products. [source]


Analysis of a circulant based preconditioner for a class of lower rank extracted systems

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 1 2005
S. Salapaka
Abstract This paper proposes and studies the performance of a preconditioner suitable for solving a class of symmetric positive definite systems, A_p x = b, which we call lower rank extracted systems (LRES), by the preconditioned conjugate gradient method. These systems correspond to integral equations with convolution kernels defined on a union of many line segments, in contrast to only one line segment in the case of Toeplitz systems. The p × p matrix, A_p, is shown to be a principal submatrix of a larger N × N Toeplitz matrix, A_N. The preconditioner is provided in terms of the inverse of a 2N × 2N circulant matrix constructed from the elements of A_N. The preconditioner is shown to yield clustering in the spectrum of the preconditioned matrix similar to the clustering results for iterative algorithms used to solve Toeplitz systems. The analysis also demonstrates that the computational expense to solve LRE systems is reduced to O(N log N). Copyright © 2004 John Wiley & Sons, Ltd. [source]
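
The paper's construction for general lower rank extracted systems is not reproduced here; as a sketch of the core ingredient, a circulant preconditioner applied with FFTs inside the preconditioned conjugate gradient method, shown for a simple symmetric positive definite Toeplitz matrix with an arbitrary kernel:

# Illustrative sketch (arbitrary kernel, not the paper's test problems): CG with
# a circulant preconditioner whose inverse is applied with FFTs, so each
# preconditioner application costs O(N log N).
import numpy as np
from scipy.linalg import toeplitz

p = 200
t = np.r_[3.0, 0.5 ** np.arange(1, p)]          # first column: diagonally dominant, SPD
A = toeplitz(t)
b = np.ones(p)

# Strang-type circulant built from the Toeplitz first column; its eigenvalues
# are given by an FFT of that column.
c = t.copy()
c[p // 2 + 1:] = t[1:p - p // 2][::-1]
eig = np.fft.fft(c).real

def precond(r):
    return np.real(np.fft.ifft(np.fft.fft(r) / eig))

def pcg(A, b, M, tol=1e-10, maxit=500):
    x = np.zeros_like(b); r = b - A @ x; z = M(r); d = z.copy()
    for it in range(maxit):
        Ad = A @ d
        alpha = (r @ z) / (d @ Ad)
        x += alpha * d; r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x, it + 1
        z_new = M(r_new)
        beta = (r_new @ z_new) / (r @ z)
        d = z_new + beta * d
        r, z = r_new, z_new
    return x, maxit

x, iters = pcg(A, b, precond)
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))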


Mid-domain models as predictors of species diversity patterns: bathymetric diversity gradients in the deep sea

OIKOS, Issue 3 2005
Craig R. McClain
Geometric constraints represent a class of null models that describe how species diversity may vary between hard boundaries that limit geographic distributions. Recent studies have suggested that a number of large scale biogeographic patterns of diversity (e.g. latitude, altitude, depth) may reflect boundary constraints. However, few studies have rigorously tested the degree to which mid-domain null predictions match empirical patterns or how sensitive the null models are to various assumptions. We explore how variation in the assumptions of these models alter null depth ranges and consequently bathymetric variation in diversity, and test the extent to which bathymetric patterns of species diversity in deep sea gastropods, bivalves, and polychaetes match null predictions based on geometric constraints. Range,size distributions and geographic patterns of diversity produced by these null models are sensitive to the relative position of the hard boundaries, the specific algorithms used to generate range sizes, and whether species are continuously or patchily distributed between range end points. How well empirical patterns support null expectations is highly dependent on these assumptions. Bathymetric patterns of species diversity for gastropods, bivalves and polychaetes differ substantially from null expectations suggesting that geometric constraints do not account for diversity,depth patterns in the deep sea benthos. [source]