Able

Distribution by Scientific Domains

Kinds of Able

  • antibody able
  • being able
  • best able
  • cell able
  • compound able
  • culture able
  • good able
  • method able
  • model able
  • only able
  • species able
  • strain able
  • system able
  • technique able


  • Selected Abstracts


    High-Pressure Polymerization of Ethylene in Tubular Reactors: A Rigorous Dynamic Model Able to Predict the Full Molecular Weight Distribution

    MACROMOLECULAR REACTION ENGINEERING, Issue 7 2009
    Mariano Asteasuain
    Abstract A rigorous dynamic model of the high-pressure polymerization of ethylene in tubular reactors is presented. The model is capable of predicting the full molecular weight distribution (MWD), average branching indexes, monomer conversion and average molecular weights as a function of time and reactor length. The probability generating function method is applied to model the MWD. This technique allows easy and efficient calculation of the MWD, in spite of the complex mathematical description of the process. The reactor model is used to analyze the dynamic responses of the MWD and other process variables under different transition policies, as well as to predict the effects of process perturbations. The influence of the material recycle on the process dynamics is also shown. [source]
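
    To make the prediction targets concrete, the sketch below computes the number- and weight-average molecular weights and the polydispersity from an assumed Flory (most-probable) chain-length distribution; the distribution and its growth probability are illustrative stand-ins, not the paper's kinetic model.

    ```python
    import numpy as np

    # Sketch (not the authors' model): the averages an MWD model predicts,
    # computed here from an assumed Flory (most-probable) distribution.
    M0 = 28.05          # molar mass of the ethylene repeat unit, g/mol
    p = 0.999           # assumed probability of chain growth
    n = np.arange(1, 200_000)
    f = (1 - p) * p ** (n - 1)        # number fraction of chains of length n

    Mn = M0 * np.sum(n * f)                        # number-average molecular weight
    Mw = M0 * np.sum(n**2 * f) / np.sum(n * f)     # weight-average molecular weight
    print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, PDI = {Mw/Mn:.2f}")
    ```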


    PLANNING IN REACTIVE ENVIRONMENTS

    COMPUTATIONAL INTELLIGENCE, Issue 4 2007
    A. Milani
    The diffusion of domotic and ambient intelligence systems has introduced a new vision in which autonomous deliberative agents operate in environments where the reactive responses of devices can be cooperatively exploited to fulfill the agents' goals. In this article a model for automated planning in reactive environments, based on numerical planning, is introduced. A planner system, based on mixed integer linear programming techniques, which implements the model, is also presented. The planner is able to reason about the dynamic features of the environment and to produce solution plans that take into account reactive devices and their causal relations with the agent's goals, by exploitation and avoidance techniques, to reach a given goal state. The introduction of reactive domains in planning poses some issues concerning reasoning patterns, which are briefly outlined. Experiments with planning in reactive domains are also discussed. [source]


    MEMORY ORGANIZATION AS THE MISSING LINK BETWEEN CASE-BASED REASONING AND INFORMATION RETRIEVAL IN BIOMEDICINE

    COMPUTATIONAL INTELLIGENCE, Issue 3-4 2006
    Isabelle Bichindaritz
    Mémoire proposes a general framework for reasoning from cases in biology and medicine. Part of this project is to propose a memory organization capable of handling the large cases and case bases that occur in biomedical domains. This article presents the essential principles for an efficient memory organization based on pertinent work in information retrieval (IR). IR systems have been able to scale up to terabytes of data by taking advantage of large-database research to build Internet search engines. They search for pertinent documents to answer a query using term-based ranking and/or global ranking schemes. Similarly, case-based reasoning (CBR) systems search for pertinent cases using a scoring function for ranking the cases. Mémoire proposes a memory organization based on inverted indexes, which may be powered by databases, to search and rank efficiently through large case bases. It can be seen as a first step toward large-scale CBR systems and, in addition, provides a framework for tight cooperation between CBR and IR. [source]
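
    As a concrete illustration of the retrieval machinery described above, here is a minimal inverted-index sketch with a toy term-overlap score standing in for the paper's ranking function; the case data and scoring rule are invented and are not the Mémoire system.

    ```python
    from collections import defaultdict

    # Toy case base: each case is described by a set of terms.
    cases = {
        "case1": ["fever", "cough", "fatigue"],
        "case2": ["fever", "rash"],
        "case3": ["cough", "dyspnea"],
    }

    index = defaultdict(set)            # inverted index: term -> case ids
    for case_id, terms in cases.items():
        for term in terms:
            index[term].add(case_id)

    def retrieve(query_terms):
        """Rank cases by how many query terms they share (a toy score)."""
        scores = defaultdict(int)
        for term in query_terms:
            for case_id in index.get(term, ()):
                scores[case_id] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(retrieve(["fever", "cough"]))  # [('case1', 2), ('case2', 1), ('case3', 1)]
    ```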


    Augmented reality agents for user interface adaptation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2008
    István Barakonyi
    Abstract Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Curve skeleton skinning for human and creature characters

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006
    Xiaosong Yang
    Abstract The skeleton-driven skinning technique is still the most popular method for animating deformable human and creature characters. Although a de facto industry standard owing to its computational performance and intuitiveness, it suffers from problems such as the collapsing elbow and the candy-wrapper joint. To remedy these problems, one needs to properly formulate the non-linear relationship between the skeleton and the skin shape of a character, which proves mathematically very challenging. Placing additional joints where the skin bends increases the sampling rate and is an ad hoc way of approximating this non-linear relationship. In this paper, we propose a method that is able to accommodate the inherent non-linear relationships between the movement of the skeleton and the skin shape. We use so-called curve skeletons along with the joint-based skeletons to animate the skin shape. Since the deformation follows the tangent of the curve skeleton, and owing to the higher sampling rates received from the curve points, collapsing skin and other undesirable skin deformation problems are avoided. The curve skeleton retains the advantages of current skeleton-driven skinning: it is easy to use and allows full control over the animation process. As a further enhancement, it is also fairly simple to build realistic muscle and fat bulge effects. A practical implementation in the form of a Maya plug-in is created to demonstrate the viability of the technique. Copyright © 2006 John Wiley & Sons, Ltd. [source]
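
    For context, the sketch below implements plain linear blend skinning, the baseline whose collapsing-joint artifacts motivate the curve-skeleton approach; the bones, weights, and test vertex are made up, and the shrinking blended position illustrates the collapse effect.

    ```python
    import numpy as np

    def skin(vertices, weights, transforms):
        """Linear blend skinning: vertices (V,3), weights (V,B), B 4x4 matrices."""
        v_h = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous
        out = np.zeros_like(v_h)
        for b, T in enumerate(transforms):
            out += weights[:, b:b + 1] * (v_h @ T.T)   # blend per-bone transforms
        return out[:, :3]

    # One vertex influenced equally by two bones, one rotated 90 deg about z.
    R = np.eye(4); R[:2, :2] = [[0, -1], [1, 0]]
    v = np.array([[1.0, 0.0, 0.0]])
    w = np.array([[0.5, 0.5]])
    # Blended position shrinks toward the joint (norm 0.707 < 1): the collapse.
    print(skin(v, w, [np.eye(4), R]))   # [[0.5 0.5 0. ]]
    ```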


    Multi-resolution collision handling for cloth-like simulations

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2005
    Nitin Jain
    Abstract We present a novel multi-resolution algorithm for simulation of complex cloth-like deforming meshes. Our algorithm precomputes a multi-resolution hierarchy by using a combination of 'chromatic decomposition' and polygonal simplification of the underlying mesh. At runtime we selectively refine or coarsen the mesh based on the collision proximity of the mesh primitives with non-adjacent primitives. Our algorithm handles all kinds of contacts, including self-collisions among mesh primitives. The multi-resolution hierarchy is used to compute simplifications of contact manifolds and to accelerate collision detection and response computations. We have implemented our algorithm on a high-end PC and applied it to complex simulations with tens of thousands of polygons. In practice, our algorithm is able to achieve interactive performance while maintaining good visual fidelity. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Rendering natural waters taking fluorescence into account

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2004
    E. Cerezo
    Abstract The aim of the work presented here is to generalize a system developed to treat general participating media, making it capable of considering volumetric inelastic processes such as fluorescence. Our system, based on the discrete ordinates method, is well suited to a complex participating medium such as natural waters, since it deals not only with anisotropic but also with highly peaked phase functions, and it considers the spectral behaviour of the medium's characteristic parameters. It is also able to generate detailed quantitative illumination information, such as the amount of light that reaches the medium boundaries or the amount of light absorbed in each of the medium voxels. First, we present an extended form of the radiative transfer equation that incorporates inelastic volumetric phenomena. Then, we discuss the necessary changes in the general calculation scheme to include inelastic scattering. We have applied all of this to the most common inelastic effect in natural waters: fluorescence in chlorophyll-a. Copyright © 2004 John Wiley & Sons, Ltd. [source]
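
    A sketch, in our own notation rather than necessarily the paper's, of how an inelastic (fluorescence) source term extends the radiative transfer equation:

    ```latex
    % Elastic RTE plus an assumed inelastic (fluorescence) source term.
    \begin{align*}
    (\vec{\omega}\cdot\nabla)\,L(\vec{x},\vec{\omega},\lambda)
      = {}& -\sigma_t(\lambda)\,L(\vec{x},\vec{\omega},\lambda)
        + \sigma_s(\lambda)\int_{\Omega} p(\vec{\omega}',\vec{\omega})\,
          L(\vec{x},\vec{\omega}',\lambda)\,\mathrm{d}\vec{\omega}' \\
        & + \int_{\Lambda}\int_{\Omega}
          \sigma_f(\lambda'\!\rightarrow\!\lambda)\,p_f(\vec{\omega}',\vec{\omega})\,
          L(\vec{x},\vec{\omega}',\lambda')\,\mathrm{d}\vec{\omega}'\,\mathrm{d}\lambda'
    \end{align*}
    ```

    The last term redistributes radiance absorbed at excitation wavelengths λ' to the emission wavelength λ, which is what distinguishes fluorescence from elastic scattering.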


    Image modification for immersive projection display based on pseudo-projection models

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4 2003
    Toshio Moriya
    Abstract This paper describes a practical method that enables actual images to be converted so that they can be projected onto an immersive projection display (IPD) screen. IPD screens are particularly unique in that their angle of view is extremely wide; therefore, the images projected onto them need to be shot in a special format. In practice, however, it is generally very difficult to shoot images that completely satisfy the specifications of the target IPD environment, due to cost, technical problems, or other reasons. To overcome these problems, we developed a method that modifies the images by abandoning geometrical consistency. We were able to utilize this method by assuming that the given image was shot according to a special projection model. Because this model differed from the actual projection model with which the image was taken, we termed it the pseudo-projection model. Since our method uses simple geometry and can easily be expressed by a parametric function, the degree of modification or the time sequence for modification can readily be adjusted according to the features of each type of content. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Improving realism of a surgery simulator: linear anisotropic elasticity, complex interactions and force extrapolation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3 2002
    Guillaume Picinbono
    Abstract In this article, we describe the latest developments of the minimally invasive hepatic surgery simulator prototype developed at INRIA. The goal of this simulator is to provide a realistic training test bed to perform laparoscopic procedures. Therefore, its main functionality is to simulate the action of virtual laparoscopic surgical instruments for deforming and cutting tridimensional anatomical models. Throughout this paper, we present the general features of this simulator including the implementation of several biomechanical models and the integration of two force-feedback devices in the simulation platform. More precisely, we describe three new important developments that improve the overall realism of our simulator. First, we have developed biomechanical models, based on linear elasticity and finite element theory, that include the notion of anisotropic deformation. Indeed, we have generalized the linear elastic behaviour of anatomical models to 'transversally isotropic' materials, i.e. materials having a different behaviour in a given direction. We have also added to the volumetric model an external elastic membrane representing the 'liver capsule', a rather stiff skin surrounding the liver, which creates a kind of 'surface anisotropy'. Second, we have developed new contact models between surgical instruments and soft tissue models. For instance, after detecting a contact with an instrument, we define specific boundary constraints on deformable models to represent various forms of interactions with a surgical tool, such as sliding, gripping, cutting or burning. In addition, we compute the reaction forces that should be felt by the user manipulating the force-feedback devices. The last improvement is related to the problem of haptic rendering. Currently, we are able to achieve a simulation frequency of 25 Hz (visual real time) with anatomical models of complex geometry and behaviour. But to achieve a good haptic feedback requires a frequency update of applied forces typically above 300 Hz (haptic real time). Thus, we propose a force extrapolation algorithm in order to reach haptic real time. Copyright © 2002 John Wiley & Sons, Ltd. [source]
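
    The force-extrapolation idea can be sketched in a few lines: keep the last two simulated forces and extend their slope at the haptic rate. The rates and the scalar force are illustrative; the paper's actual extrapolation scheme may differ.

    ```python
    # Sketch: bridge a 25 Hz simulation and a ~1 kHz haptic loop by linear
    # extrapolation of the last simulated force slope. Constants are invented.
    SIM_DT = 1.0 / 25      # simulation period (visual real time)

    class ForceExtrapolator:
        def __init__(self):
            self.f_prev = 0.0
            self.f_curr = 0.0

        def update(self, f_new):
            """Called at 25 Hz with the newly simulated force."""
            self.f_prev, self.f_curr = self.f_curr, f_new

        def sample(self, t_since_update):
            """Called at haptic rate: extrapolate along the last force slope."""
            slope = (self.f_curr - self.f_prev) / SIM_DT
            return self.f_curr + slope * t_since_update

    ex = ForceExtrapolator()
    ex.update(1.0); ex.update(1.2)
    print(ex.sample(0.004))   # force estimate 4 ms after the last simulation step
    ```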


    A real-time computer-controlled simulator: For control systems

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2008
    I. H. Altas
    Abstract A real-time simulator to accompany automatic control system courses is introduced. The design and realization methods and processes are discussed. The simulator is basically a computer-controlled system that implements the developed user-friendly virtual interface software to control the speed of a small DC motor. The virtual interface includes digital implementation models of classical proportional, integral, and derivative controllers and all combinations of them, as well as a fuzzy logic controller. The user is able to select and adjust the parameters of any desired controller that is defined and represented virtually. © 2008 Wiley Periodicals, Inc. Comput Appl Eng Educ 16: 115–126, 2008; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20130 [source]
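
    A minimal sketch of the discrete PID law such a virtual controller implements, driving a toy first-order DC-motor speed model; the gains, sample time, and motor dynamics are placeholders, not the simulator's values.

    ```python
    # Discrete PID controller sketch with invented gains and plant.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measured):
            error = setpoint - measured
            self.integral += error * self.dt                 # I term accumulator
            derivative = (error - self.prev_error) / self.dt  # D term estimate
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Drive a crude first-order motor speed model toward 100 rpm.
    pid, speed = PID(kp=0.8, ki=2.0, kd=0.01, dt=0.01), 0.0
    for _ in range(500):
        u = pid.step(100.0, speed)
        speed += (u - 0.5 * speed) * 0.01    # toy motor dynamics
    print(round(speed, 1))                    # settles near the 100 rpm setpoint
    ```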


    Möbius Transformations For Global Intrinsic Symmetry Analysis

    COMPUTER GRAPHICS FORUM, Issue 5 2010
    Vladimir G. Kim
    The goal of our work is to develop an algorithm for automatic and robust detection of global intrinsic symmetries in 3D surface meshes. Our approach is based on two core observations. First, symmetry invariant point sets can be detected robustly using critical points of the Average Geodesic Distance (AGD) function. Second, intrinsic symmetries are self-isometries of surfaces and as such are contained in the low dimensional group of Möbius transformations. Based on these observations, we propose an algorithm that: 1) generates a set of symmetric points by detecting critical points of the AGD function, 2) enumerates small subsets of those feature points to generate candidate Möbius transformations, and 3) selects among those candidate Möbius transformations the one(s) that best map the surface onto itself. The main advantages of this algorithm stem from the stability of the AGD in predicting potential symmetric point features and the low dimensionality of the Möbius group for enumerating potential self-mappings. During experiments with a benchmark set of meshes augmented with human-specified symmetric correspondences, we find that the algorithm is able to find intrinsic symmetries for a wide variety of object types with moderate deviations from perfect symmetry. [source]
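
    The first step, locating critical points of the Average Geodesic Distance (AGD), can be sketched by treating the mesh as an edge-weighted graph; the tiny "mesh" below is a made-up square with one diagonal, and real meshes would use geodesics over triangulations.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import shortest_path

    # Toy edge-weighted graph standing in for a mesh: a unit square + diagonal.
    edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0, (0, 2): 2**0.5}
    rows, cols, w = zip(*[(i, j, d) for (i, j), d in edges.items()])
    graph = csr_matrix((w, (rows, cols)), shape=(4, 4))

    dist = shortest_path(graph, directed=False)  # all-pairs geodesic approximation
    agd = dist.mean(axis=1)                      # AGD per vertex
    print(agd)    # extrema of this function suggest symmetry-invariant points
    ```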


    A Hybrid Approach to Multiple Fluid Simulation using Volume Fractions

    COMPUTER GRAPHICS FORUM, Issue 2 2010
    Nahyup Kang
    Abstract This paper presents a hybrid approach to multiple fluid simulation that can handle miscible and immiscible fluids simultaneously. We combine distance functions and volume fractions to capture not only the discontinuous interface between immiscible fluids but also the smooth transition between miscible fluids. Our approach consists of four steps: velocity field computation, volume fraction advection, miscible fluid diffusion, and visualization. By providing a scheme for combining volume fractions and level set functions, we are able to take advantage of both representations of fluids. From the system point of view, our work is the first approach to Eulerian grid-based multiple fluid simulation including both miscible and immiscible fluids. From the technical point of view, our approach addresses the issues arising from variable density and viscosity together with material diffusion. We show the effectiveness of our approach in handling multiple miscible and immiscible fluids through experiments. [source]
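
    One ingredient, volume fraction advection, can be sketched in one dimension with a first-order upwind scheme; the grid, velocity, and initial fraction field are invented, and the paper's solver is more elaborate.

    ```python
    import numpy as np

    # 1D upwind advection of a volume fraction field (a sketch of fraction
    # transport only). CFL = u*dt/dx = 0.5 keeps the scheme stable.
    N, dx, dt, u = 100, 1.0, 0.5, 1.0
    frac = np.zeros(N); frac[10:30] = 1.0    # fluid A occupies cells 10..29

    for _ in range(40):
        flux = u * frac                       # upwind flux (u > 0: take left cell)
        frac[1:] -= dt / dx * (flux[1:] - flux[:-1])

    print(frac.argmax())   # the blob has advected ~u*t/dx = 20 cells downstream
    ```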


    Hierarchical Vortex Regions in Swirling Flow

    COMPUTER GRAPHICS FORUM, Issue 3 2009
    Christoph Petz
    Abstract We propose a new criterion to characterize hierarchical two-dimensional vortex regions induced by swirling motion. Central to the definition are closed loops that intersect the flow field at a constant angle. The union of loops belonging to the same area of swirling motion defines a vortex region. These regions are disjoint but may be nested, thus introducing a spatial hierarchy of vortex regions. We present a parameter-free algorithm for the identification of these regions. Since the regions are not restricted to star-shaped or convex geometries, we are also able to identify intricate regions, e.g., those of elongated vortices. Computing an integrated value for each loop and mapping these values to a vortex region introduces new ways of visualizing or filtering the vortex regions. As an example, an application based on the Rankine vortex model is presented. We apply our method to several CFD datasets and compare our results to existing approaches. [source]


    Fast GPU-based Adaptive Tessellation with CUDA

    COMPUTER GRAPHICS FORUM, Issue 2 2009
    Michael Schwarz
    Abstract Compact surface descriptions like higher-order surfaces are popular representations for both modeling and animation. However, for fast graphics-hardware-assisted rendering, they usually need to be converted to triangle meshes. In this paper, we introduce a new framework for performing on-the-fly crack-free adaptive tessellation of surface primitives completely on the GPU. Utilizing CUDA and its flexible memory write capabilities, we parallelize the tessellation task at the level of single surface primitives. We are hence able to derive tessellation factors, perform surface evaluation, and generate the tessellation topology in real time, even for large collections of primitives. We demonstrate the power of our framework by applying it to both bicubic rational Bézier patches and PN triangles. [source]


    Lighting and Occlusion in a Wave-Based Framework

    COMPUTER GRAPHICS FORUM, Issue 2 2008
    Remo Ziegler
    Abstract We present novel methods to enhance Computer Generated Holography (CGH) by introducing a complex-valued wave-based occlusion handling method. This offers a very intuitive and efficient interface to introduce optical elements featuring physically-based light interaction exhibiting depth-of-field, diffraction, and glare effects. Furthermore, an efficient and flexible evaluation of lit objects on a full-parallax hologram leads to more convincing images. Previous illumination methods for CGH are not able to change the illumination settings of rendered holograms. In this paper we propose a novel method for real-time lighting of rendered holograms in order to change the appearance of a previously captured holographic scene. These functionalities are features of a bigger wave-based rendering framework which can be combined with 2D framebuffer graphics. We present an algorithm which uses graphics hardware to accelerate the rendering. [source]


    Layered Performance Animation with Correlation Maps

    COMPUTER GRAPHICS FORUM, Issue 3 2007
    Michael Neff
    Abstract Performance has a spontaneity and "aliveness" that can be difficult to capture in more methodical animation processes such as keyframing. Access to performance animation has traditionally either been limited to low-degree-of-freedom characters or has required expensive hardware. We present a performance-based animation system for humanoid characters that requires no special hardware, relying only on mouse and keyboard input. We deal with the problem of controlling such a high-degree-of-freedom model with low-degree-of-freedom input through the use of correlation maps, which employ 2D mouse input to modify a set of expressively relevant character parameters. Control can be continuously varied by rapidly switching between these maps. We present flexible techniques for varying and combining these maps, and a simple process for defining them. The tool is highly configurable, presenting suitable defaults for novices and supporting a high degree of customization and control for experts. Animation can be recorded in a single pass, or multiple layers can be used to increase detail. Results from a user study indicate that novices are able to produce reasonable animations within their first hour of using the system. We also show more complicated results for walking and for a standing character that gestures and dances. [source]


    Texture Synthesis using Exact Neighborhood Matching

    COMPUTER GRAPHICS FORUM, Issue 2 2007
    M. Sabha
    Abstract In this paper we present an elegant pixel-based texture synthesis technique that is able to generate visually pleasing results from source textures of both stochastic and structured nature. Inspired by the observation that the most common artifacts in synthesized textures are high-frequency discontinuities, our technique tries to avoid these artifacts by forcing at least one of the direct neighboring pixels in each causal neighborhood to match within a predetermined threshold. This not only avoids deterioration of the visual quality, but also results in faster synthesis timings. We demonstrate our technique on a variety of stochastic and structured textures. [source]
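
    The acceptance test at the heart of the technique can be sketched as follows: a candidate source pixel passes only if at least one of its direct, already-synthesized neighbours matches within the threshold. The grayscale values and threshold are illustrative.

    ```python
    import numpy as np

    THRESH = 10  # assumed matching threshold (grayscale levels)

    def accept(candidate_neighbors, synthesized_neighbors):
        """True if at least one direct neighbour matches within THRESH.
        Both arguments list the candidate's and the synthesized causal
        neighbourhood's direct-neighbour values in the same order."""
        diff = np.abs(np.asarray(candidate_neighbors, float)
                      - np.asarray(synthesized_neighbors, float))
        return bool((diff <= THRESH).any())

    print(accept([120, 200], [118, 90]))  # True: the left neighbour matches closely
    print(accept([120, 200], [80, 150]))  # False: a high-frequency seam is likely
    ```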


    DiFi: Fast 3D Distance Field Computation Using Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Avneesh Sud
    We present an algorithm for fast computation of discretized 3D distance fields using graphics hardware. Given a set of primitives and a distance metric, our algorithm computes the distance field for each slice of a uniform spatial grid by rasterizing the distance functions of the primitives. We compute bounds on the spatial extent of the Voronoi region of each primitive. These bounds are used to cull and clamp the distance functions rendered for each slice. Our algorithm is applicable to all geometric models and does not make any assumptions about connectivity or a manifold representation. We have used our algorithm to compute distance fields of large models composed of tens of thousands of primitives on high-resolution grids. Moreover, we demonstrate its application to medial axis evaluation and proximity computations. Compared with earlier approaches, we are able to achieve an order of magnitude improvement in the running time. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Distance fields, Voronoi regions, graphics hardware, proximity computations [source]
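
    For reference, here is a brute-force CPU sketch of one 2D slice of a discretized distance field over point primitives; DiFi accelerates this same computation on the GPU with Voronoi-based culling and clamping. The grid resolution and primitives are made up.

    ```python
    import numpy as np

    # Brute-force distance field for one 2D slice over point primitives.
    res = 64
    ys, xs = np.mgrid[0:res, 0:res]
    grid = np.stack([xs, ys], axis=-1).astype(float)      # (res, res, 2) cells

    points = np.array([[10.0, 12.0], [50.0, 40.0], [30.0, 55.0]])  # primitives
    dists = np.linalg.norm(grid[:, :, None, :] - points[None, None, :, :], axis=-1)
    field = dists.min(axis=-1)        # per-cell distance to the closest primitive
    print(field.shape, field.min(), round(field.max(), 1))
    ```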


    Confidence Interval Calculation Methods Are Infrequently Reported in Emergency-medicine Literature

    ACADEMIC EMERGENCY MEDICINE, Issue 1 2007
    Amy Marr MD
    Abstract Background There are many different confidence interval calculation methods, each providing different as well as in some cases inadequate interval estimates. Readers who know which method is used are better able to understand potentially significant limitations in study reports. Objectives To quantify how often confidence interval calculation methods are disclosed by authors in four peer-reviewed North American emergency-medicine journals. Methods The authors independently performed searches of four journals for all studies in which comparisons were made between means, medians, proportions, odds ratios, or relative risks. Case reports, editorials, subject reviews, and letters were excluded. Using a standardized abstraction form developed on a spreadsheet, the authors evaluated each article for the reporting of confidence intervals and evaluated the description of methodology used to calculate the confidence intervals. Results A total of 212 articles met the inclusion criteria. Confidence intervals were reported in 123 articles (58%; 95% CI = 51% to 64%); of these, a description of methodology was reported in 12 (9.8%; 95% CI = 5.7% to 16%). Conclusions Confidence interval methods of calculation are disclosed infrequently in emergency medicine literature. [source]
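
    The point is easy to demonstrate: different calculation methods give different intervals for the same data. The sketch below computes Wald and Wilson 95% intervals for the article's own headline proportion (123 of 212); the choice of these two methods is ours, for illustration.

    ```python
    from math import sqrt

    z, k, n = 1.96, 123, 212
    p = k / n

    # Wald interval: simple normal approximation around p.
    wald = (p - z * sqrt(p * (1 - p) / n), p + z * sqrt(p * (1 - p) / n))

    # Wilson score interval: re-centered and better behaved for small n.
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    wilson = (center - half, center + half)

    print([round(x, 3) for x in wald])    # ~[0.514, 0.647]
    print([round(x, 3) for x in wilson])  # ~[0.513, 0.645]; close here, but the
                                          # methods diverge for small n or extreme p
    ```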


    Mobile Agent Computing Paradigm for Building a Flexible Structural Health Monitoring Sensor Network

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2010
    Bo Chen
    While the sensor network approach is a feasible solution for structural health monitoring, the design of wireless sensor networks presents a number of challenges, such as adaptability and limited communication bandwidth. To address these challenges, we explore the mobile agent approach to enhance flexibility and reduce raw data transmission in wireless structural health monitoring sensor networks. An integrated wireless sensor network consisting of a mobile-agent-based network middleware and distributed high-computational-power sensor nodes is developed. These embedded-computer-based sensor nodes run the Linux operating system, integrate open-source numerical libraries, and connect to multimodality sensors to support both active and passive sensing. The mobile agent middleware is built on a mobile agent system called Mobile-C. The middleware allows a sensor network to move computational programs to the data source. With mobile agent middleware, a sensor network is able to adopt newly developed diagnosis algorithms and make adjustments in response to operational or task changes. The presented mobile agent approach has been validated for structural damage diagnosis using a scaled steel bridge. [source]


    Grammatical Inference Techniques and Their Application in Ground Investigation

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2008
    Ian Morrey
    The data obtained from trial pits can be coded into a form that can be used as sample observations for input to a grammatical inference machine. A grammatical inference machine is a black box which, when presented with a sample of observations of some unknown source language, produces a grammar compatible with the sample. This article presents a heuristic model for a grammatical inference machine that takes as data sentences and non-sentences, identified as such, and is capable of inferring grammars in the class of context-free grammars expressed in Chomsky Normal Form. An algorithm and its corresponding software implementation have been developed based on this model. The software takes as input coded representations of ground investigation data and produces as output a grammar that describes and classifies the geotechnical data observed in the area; it also promises the ability to predict the likely configuration of strata across the site. [source]
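
    Membership testing for a Chomsky-Normal-Form grammar, the grammar class the inference machine targets, is classically done with the CYK algorithm; the toy grammar below is invented and unrelated to geotechnical coding.

    ```python
    # CYK membership test for a toy CNF grammar: S -> A B, A -> 'a', B -> 'b'.
    binary = {"S": [("A", "B")]}          # nonterminal -> list of (B, C) rules
    terminal = {"a": {"A"}, "b": {"B"}}   # token -> nonterminals deriving it

    def cyk(sentence):
        n = len(sentence)
        table = [[set() for _ in range(n + 1)] for _ in range(n)]
        for i, tok in enumerate(sentence):
            table[i][1] = set(terminal.get(tok, ()))
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                for split in range(1, span):
                    for lhs, rules in binary.items():
                        for b, c in rules:
                            if (b in table[i][split]
                                    and c in table[i + split][span - split]):
                                table[i][span].add(lhs)
        return "S" in table[0][n]

    print(cyk(["a", "b"]))   # True: a sentence of the language
    print(cyk(["b", "a"]))   # False: a non-sentence
    ```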


    Semi-Automatic 3D Reconstruction of Urban Areas Using Epipolar Geometry and Template Matching

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2006
    José Miguel Sales Dias
    The main challenge is to compute the relevant information (building height and volume, roof description, and texture) algorithmically, because producing it manually for large urban areas is very time consuming and thus expensive. The algorithm requires some initial calibration input and is able to compute the above-mentioned building characteristics from the stereo pair, given the availability of the 2D CAD and the digital elevation model of the same area, with no knowledge of the camera pose or its intrinsic parameters. To achieve this, we have used epipolar geometry, homography computation, and automatic feature extraction, and we have solved the feature correspondence problem in the stereo pair by using template matching. [source]
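
    One building block, homography estimation via the direct linear transform, can be sketched from four point correspondences; the correspondences below are synthetic, and the paper's pipeline combines this with epipolar geometry and template matching.

    ```python
    import numpy as np

    def homography(src, dst):
        """Direct linear transform: stack two equations per correspondence and
        take the SVD null-space vector as the homography entries."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, float))
        return Vt[-1].reshape(3, 3)

    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dst = [(0, 0), (2, 0), (2, 2), (0, 2)]    # a pure scaling by 2
    H = homography(src, dst)
    print(np.round(H / H[2, 2], 3))           # ~diag(2, 2, 1), as expected
    ```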


    Modeling the Dynamics of an Infrastructure Project

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2005
    Long Duy Nguyen
    These problems result in low project performance and poor project outcomes. A dynamic simulation model is proposed to capture the dynamics of construction projects in the construction phase. Eight key feedback structures, drawn from previous models of project dynamics and the unique characteristics of construction projects, are identified as dynamic hypotheses. They include the structures of labor, equipment, material, labor-equipment interaction, schedule, rework, safety, and quality. Subsequently, a formal simulation model is mathematically formulated in terms of stock and flow diagrams. The model is then calibrated to a real project under construction. Testing indicates that the simulated behavior of the model and the actual behavior of the project are similar. This implies that the model is able to simulate the dynamics of the project and, consequently, to enhance project monitoring and control. [source]
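
    The stock-and-flow formulation can be sketched with a simple Euler integration of a two-stock rework loop; all rates and the rework fraction are invented, and the calibrated model has eight interacting feedback structures rather than one.

    ```python
    # Euler integration of a toy stock-and-flow rework loop (invented rates).
    dt, t_end = 0.25, 40.0                        # weeks
    work_remaining, work_done = 1000.0, 0.0       # stocks (tasks)
    productivity, error_rate = 10.0, 0.15         # tasks/week, rework fraction

    t = 0.0
    while t < t_end:
        completion = min(productivity, work_remaining / dt)  # outflow of backlog
        rework = completion * error_rate                     # flows back in
        work_remaining += (rework - completion) * dt
        work_done += completion * (1 - error_rate) * dt
        t += dt

    print(round(work_done, 1), round(work_remaining, 1))     # 340.0 660.0
    ```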


    Feature Extraction for Traffic Incident Detection Using Wavelet Transform and Linear Discriminant Analysis

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2000
    A. Samant
    To eliminate false alarms, an effective traffic incident detection algorithm must be able to extract incident-related features from the traffic patterns. A robust feature-extraction algorithm also helps reduce the dimension of the input space for a neural network model without any significant loss of related traffic information, resulting in a substantial reduction in the network size, the effect of random traffic fluctuations, the number of required training samples, and the computational resources required to train the neural network. This article presents an effective traffic feature-extraction model using the discrete wavelet transform (DWT) and linear discriminant analysis (LDA). The DWT is first applied to raw traffic data, and the finest resolution coefficients representing the random fluctuations of traffic are discarded. Next, LDA is applied to the filtered signal for further feature extraction and to reduce the dimensionality of the problem. The results of the LDA are used as input to a neural network model for traffic incident detection. [source]
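
    A sketch of the DWT-then-LDA pipeline on synthetic "traffic" signals, assuming the PyWavelets and scikit-learn libraries are available; the wavelet, decomposition level, and class structure are illustrative choices, not the article's.

    ```python
    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n, length = 200, 64
    labels = rng.integers(0, 2, n)               # 0 = normal, 1 = incident
    signals = rng.normal(size=(n, length))
    signals[labels == 1, 20:40] += 2.0           # incidents shift the signal

    def dwt_features(sig):
        coeffs = pywt.wavedec(sig, "db4", level=3)
        return np.concatenate(coeffs[:-1])       # drop finest-scale (noise) band

    X = np.array([dwt_features(s) for s in signals])
    lda = LinearDiscriminantAnalysis(n_components=1)
    features = lda.fit_transform(X, labels)      # low-dimensional NN input
    print(X.shape, "->", features.shape)
    ```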


    Magnetic susceptibility: Further insights into macroscopic and microscopic fields and the sphere of Lorentz

    CONCEPTS IN MAGNETIC RESONANCE, Issue 1 2003
    C.J. Durrant
    Abstract To make certain quantitative interpretations of spectra from NMR experiments carried out on heterogeneous samples, such as cells and tissues, we must be able to estimate the magnetic and electric fields experienced by the resonant nuclei of atoms in the sample. Here, we analyze the relationships between these fields and the fields obtained by solving the Maxwell equations that describe the bulk properties of the materials present. This analysis separates the contribution to these fields of the molecule in which the atom in question is bonded, the "host" fields, from the contribution of all the other molecules in the system, the "external" fields. We discuss the circumstances under which the latter can be found by determining the macroscopic fields in the sample and then removing the averaged contribution of the host molecule. We demonstrate that the results produced by the so-called "sphere of Lorentz" construction are of general validity in both static and time-varying cases. This analytic construct, however, is not "mystical," and its justification rests not on any sphericity in the system but on the local uniformity and isotropy, i.e., spherical symmetry, of the medium when averaged over random microscopic configurations. This local averaging is precisely what defines the equations that describe the macroscopic fields. Hence, the external microscopic fields, in a suitably averaged sense, can be estimated from the macroscopic fields. We then discuss the calculation of the external fields and that at the resonant nucleus in NMR experiments. © 2003 Wiley Periodicals, Inc. Concepts Magn Reson Part A 18A: 72–95, 2003 [source]
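
    For orientation, the standard sphere-of-Lorentz result the article scrutinizes can be stated compactly; this is the textbook form (SI units, locally uniform isotropic medium with magnetization M), and the article's contribution is its careful justification:

    ```latex
    % Textbook sphere-of-Lorentz relation between the field at the nucleus
    % and the macroscopic field (SI units, isotropic medium):
    \[
      \vec{B}_{\mathrm{ext}} \;=\; \vec{B}_{\mathrm{macro}} \;-\; \tfrac{2}{3}\,\mu_0\,\vec{M}
    \]
    ```

    That is, the averaged contribution of the removed spherical region of host material, which for a uniformly magnetized sphere equals (2/3)μ0M, is subtracted from the macroscopic field.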


    Adaptive structured parallelism for distributed heterogeneous architectures: a methodological approach with pipelines and farms

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2010
    Horacio González-Vélez
    Abstract Algorithmic skeletons abstract commonly used patterns of parallel computation, communication, and interaction. Based on the algorithmic skeleton concept, structured parallelism provides a high-level parallel programming technique that allows the conceptual description of parallel programs while fostering platform independence and algorithm abstraction. This work presents a methodology to improve skeletal parallel programming in heterogeneous distributed systems by introducing adaptivity through resource awareness. As we hypothesise that a skeletal program should be able to adapt to the dynamic resource conditions over time using its structural forecasting information, we have developed adaptive structured parallelism (ASPARA). ASPARA is a generic methodology to incorporate structural information at compilation into a parallel program, which will help it to adapt at execution. ASPARA comprises four phases: programming, compilation, calibration, and execution. We illustrate the feasibility of this approach and its associated performance improvements using independent case studies based on two algorithmic skeletons (the task farm and the pipeline) evaluated in a non-dedicated heterogeneous multi-cluster system. Copyright © 2010 John Wiley & Sons, Ltd. [source]
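
    The task farm, one of the two skeletons evaluated, can be sketched with a worker pool: a farmer scatters independent tasks and gathers results. This toy uses Python's multiprocessing and is not the ASPARA implementation.

    ```python
    from multiprocessing import Pool

    def worker(x):
        return x * x          # stand-in for a real task body

    if __name__ == "__main__":
        # The farmer scatters tasks to 4 workers and gathers results in order.
        with Pool(processes=4) as farm:
            results = farm.map(worker, range(20))
        print(results)
    ```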


    Formation of virtual organizations in grids: a game-theoretic approach

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2010
    Thomas E. Carroll
    Abstract Applications require the composition of resources to execute in a grid computing environment. The grid service providers (GSPs), the owners of the computational resources, must form virtual organizations (VOs) to be able to provide the composite resource. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. Using game theory, we formulate the resource composition among GSPs as a coalition formation problem and propose a framework to model and solve it. Using this framework, we propose a resource management system that supports the VO formation among GSPs in a grid computing system. Copyright © 2008 John Wiley & Sons, Ltd. [source]
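
    The flavor of profit-driven coalition formation can be sketched by enumerating coalitions and letting each GSP prefer the one with the highest per-member profit; the value function and equal profit split are invented simplifications of the paper's game-theoretic framework.

    ```python
    from itertools import combinations

    providers = ["gsp1", "gsp2", "gsp3"]

    def value(coalition):
        """Assumed superadditive toy value: bigger VOs earn more overall."""
        return {1: 2.0, 2: 5.0, 3: 9.0}[len(coalition)]

    # Each GSP prefers the coalition maximizing its share (equal split assumed).
    best = max(
        (frozenset(c) for r in range(1, 4) for c in combinations(providers, r)),
        key=lambda c: value(c) / len(c),
    )
    print(sorted(best), value(best) / len(best))  # grand coalition: 3.0 each
    ```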


    Parallel heterogeneous CBIR system for efficient hyperspectral image retrieval using spectral mixture analysis

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2010
    Antonio J. Plaza
    Abstract The purpose of content-based image retrieval (CBIR) is to retrieve, from real data stored in a database, information that is relevant to a query. In remote sensing applications, the wealth of spectral information provided by latest-generation (hyperspectral) instruments has quickly introduced the need for parallel CBIR systems able to effectively retrieve features of interest from ever-growing data archives. To address this need, this paper develops a new parallel CBIR system that has been specifically designed to be run on heterogeneous networks of computers (HNOCs). These platforms have quickly become a standard computing architecture in remote sensing missions due to the distributed nature of data repositories. The proposed heterogeneous system first extracts an image feature vector able to characterize image content with sub-pixel precision using spectral mixture analysis concepts, and then uses the obtained feature as a search reference. The system is validated using a complex hyperspectral image database, and implemented on several networks of workstations and a Beowulf cluster at NASA's Goddard Space Flight Center. Our experimental results indicate that the proposed parallel system can efficiently retrieve hyperspectral images from complex image databases by efficiently adapting to the underlying parallel platform on which it is run, regardless of the heterogeneity in the compute nodes and communication links that form the platform. Copyright © 2009 John Wiley & Sons, Ltd. [source]
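
    The feature-extraction step, estimating per-pixel endmember abundances by spectral mixture analysis, can be sketched with least-squares unmixing; the endmember signatures and pixel are synthetic, and the paper's system adds constraints and parallelization.

    ```python
    import numpy as np

    # Synthetic linear mixing: a pixel is a weighted sum of endmember spectra.
    bands = 50
    rng = np.random.default_rng(1)
    E = rng.random((bands, 3))                   # columns: endmember signatures
    true_ab = np.array([0.6, 0.3, 0.1])
    pixel = E @ true_ab + rng.normal(0, 0.001, bands)

    ab, *_ = np.linalg.lstsq(E, pixel, rcond=None)   # unconstrained abundances
    feature = ab / ab.sum()                          # normalize to sum to one
    print(np.round(feature, 2))                      # ~[0.6, 0.3, 0.1]
    ```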


    Visualizing massively multithreaded applications with ThreadScope

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2010
    Kyle B. Wheeler
    Abstract As highly parallel multicore machines become commonplace, programs must exhibit more concurrency to exploit the available hardware. Many multithreaded programming models already encourage programmers to create hundreds or thousands of short-lived threads that interact in complex ways. Programmers need to be able to analyze, tune, and troubleshoot these large-scale multithreaded programs. To address this problem, we present ThreadScope: a tool for tracing, visualizing, and analyzing massively multithreaded programs. ThreadScope extracts the machine-independent program structure from execution trace data from a variety of tracing tools and displays it as a graph of dependent execution blocks and memory objects, enabling identification of synchronization and structural problems, even if they did not occur in the traced run. It also uses graph-based analysis to identify potential problems. We demonstrate the use of ThreadScope to view program structure, memory access patterns, and synchronization problems in three programming environments and seven applications. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Factors affecting the performance of parallel mining of minimal unique itemsets on diverse architectures

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2009
    D. J. Haglin
    Abstract Three parallel implementations of a divide-and-conquer search algorithm (called SUDA2) for finding minimal unique itemsets (MUIs) are compared in this paper. The identification of MUIs is used by national statistics agencies for statistical disclosure assessment. The first parallel implementation adapts SUDA2 to a symmetric multi-processor cluster using the message passing interface (MPI), which we call an MPI cluster; the second optimizes the code for the Cray MTA2 (a shared-memory, multi-threaded architecture) and the third uses a heterogeneous ,group' of workstations connected by LAN. Each implementation considers the parallel structure of SUDA2, and how the subsearch computation times and sequence of subsearches affect load balancing. All three approaches scale with the number of processors, enabling SUDA2 to handle larger problems than before. For example, the MPI implementation is able to achieve nearly two orders of magnitude improvement with 132 processors. Performance results are given for a number of data sets. Copyright © 2009 John Wiley & Sons, Ltd. [source]