Computational Resources

Selected Abstracts

Anatomics: the intersection of anatomy and bioinformatics

Jonathan B. L. Bard
Abstract Computational resources now use the tissue names of the major model organisms so that tissue-associated data can be archived in and retrieved from databases on the basis of developing and adult anatomy. For this to be done, the set of tissues in that organism (its anatome) has to be organized in a way that is computer-comprehensible. Indeed, such formalization is a necessary part of what is becoming known as systems biology, in which explanations of high-level biological phenomena are not only sought in terms of lower-level events, but are articulated within a computational framework. Lists of tissue names alone, however, turn out to be inadequate for this formalization because tissue organization is essentially hierarchical and thus cannot easily be put into tables, the natural format of relational databases. The solution now adopted is to organize the anatomy of each organism as a hierarchy of tissue names and linking relationships (e.g. the tibia is PART OF the leg, the tibia IS-A bone) within what are known as ontologies. In these, a unique ID is assigned to each tissue; this ID can be used within, for example, gene-expression databases to link data to tissue organization, and to query other data sources (interoperability), while inferences about the anatomy can be made within the ontology on the basis of the relationships. There are now about 15 such anatomical ontologies, many of which are linked to organism databases; these ontologies are publicly available at the Open Biological Ontologies website, from which they can be freely downloaded and viewed using standard tools. This review considers how anatomy is formalized within ontologies, together with the problems that have had to be solved for this to be done. It is suggested that the appropriate term for the analysis, computer formulation and use of the anatome is anatomics. [source]
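The PART-OF / IS-A organization described above can be made concrete with a small sketch. The class, the IDs, and the tissue names below are invented for illustration and do not come from any real anatomical ontology:

```python
class Ontology:
    """Toy tissue ontology: unique IDs linked by PART-OF and IS-A relations."""
    def __init__(self):
        self.terms = {}       # id -> tissue name
        self.part_of = {}     # id -> id of the containing tissue
        self.is_a = {}        # id -> id of the tissue class

    def add(self, tid, name, part_of=None, is_a=None):
        self.terms[tid] = name
        if part_of:
            self.part_of[tid] = part_of
        if is_a:
            self.is_a[tid] = is_a

    def ancestors(self, tid):
        """Names of all tissues this term is transitively PART OF."""
        names = []
        while tid in self.part_of:
            tid = self.part_of[tid]
            names.append(self.terms[tid])
        return names

onto = Ontology()
onto.add("T:0001", "body")
onto.add("T:0002", "leg", part_of="T:0001")
onto.add("T:0100", "bone")
onto.add("T:0003", "tibia", part_of="T:0002", is_a="T:0100")

print(onto.ancestors("T:0003"))  # ['leg', 'body']
```

Real ontologies are far richer, but the same ID-plus-relationship structure is what supports both the database linking and the inference described in the abstract.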

Implementation, performance, and science results from a 30.7 TFLOPS IBM BladeCenter cluster

Craig A. Stewart
Abstract This paper describes Indiana University's implementation, performance testing, and use of a large high-performance computing system. IU's Big Red, a 20.48 TFLOPS IBM e1350 BladeCenter cluster, appeared in the 27th Top500 list as the 23rd fastest supercomputer in the world in June 2006. In spring 2007, this computer was upgraded to 30.72 TFLOPS. The e1350 BladeCenter architecture, including two internal networks accessible to users and user applications and two networks used exclusively for system management, has enabled the system to provide good scalability on many important applications while remaining easy to manage. Implementing a system based on the JS21 Blade and PowerPC 970MP processor within the US TeraGrid presented certain challenges, given that Intel-compatible processors dominate the TeraGrid. However, the particular characteristics of the PowerPC have made it highly popular among certain application communities, particularly users of molecular dynamics and weather forecasting codes. A critical aspect of Big Red's implementation has been a focus on Science Gateways, which provide graphical interfaces to systems supporting end-to-end scientific workflows. Several Science Gateways have been implemented that access Big Red as a computational resource: some via the TeraGrid, some not affiliated with the TeraGrid. In summary, Big Red has been successfully integrated with the TeraGrid, and is used by many researchers locally at IU via grids and Science Gateways. It has been a success in terms of enabling scientific discoveries at IU and, via the TeraGrid, across the US. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Tom Carchrae
This paper addresses the question of how to allocate computational resources among a set of algorithms to achieve the best performance on scheduling problems. Our primary motivation in addressing this problem is to reduce the expertise needed to apply optimization technology. Therefore, we investigate algorithm control techniques that make decisions based only on observations of the improvement in solution quality achieved by each algorithm. We call our approach "low knowledge" since it does not rely on complex prediction models, either of the problem domain or of algorithm behavior. We show that a low-knowledge approach results in a system that achieves significantly better performance than any of the pure algorithms without requiring additional human expertise. Furthermore, the low-knowledge approach achieves performance equivalent to that of a perfect high-knowledge classification approach. [source]
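A minimal sketch of the low-knowledge idea, assuming each algorithm exposes only its current best solution cost when given a slice of computation. The callable interface and the epsilon-greedy policy below are our illustrative choices, not the authors' exact control method:

```python
import random

def low_knowledge_control(algorithms, total_steps, eps=0.2):
    """Allocate computation slices among anonymous algorithms using only
    the observed improvement in solution quality (no model of the problem
    domain or of algorithm behavior). Each algorithm is a callable that
    performs one step and returns its current best cost."""
    best = {a: float("inf") for a in algorithms}   # best cost seen per algorithm
    gain = {a: float("inf") for a in algorithms}   # last observed improvement
    incumbent = float("inf")
    for _ in range(total_steps):
        if random.random() < eps:
            a = random.choice(algorithms)               # keep exploring
        else:
            a = max(algorithms, key=lambda x: gain[x])  # exploit best improver
        cost = a()
        gain[a] = max(0.0, best[a] - cost)
        best[a] = min(best[a], cost)
        incumbent = min(incumbent, cost)
    return incumbent
```

Initializing `gain` to infinity means each algorithm is tried at least once before the controller starts favoring the recent best improver.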

TouchTone: Interactive Local Image Adjustment Using Point-and-Swipe

Chia-Kai Liang
The recent proliferation of camera phones, photo sharing, and social network services has significantly changed how we process our photos. Instead of going through the traditional download-edit-share cycle using desktop editors, an increasing number of photos are taken with camera phones and published through cellular networks. The immediacy of the sharing process means that on-device image editing, if needed, should be quick and intuitive. However, due to the limited computational resources and the vastly different user interaction model on small screens, most traditional local selection methods cannot be directly adapted to mobile devices. To address this issue, we present TouchTone, a new method for edge-aware image adjustment using simple finger gestures. Our method enables users to select regions within the image and adjust their corresponding photographic attributes simultaneously through a simple point-and-swipe interaction. To enable fast interaction, we develop a memory- and computation-efficient algorithm which samples a collection of 1D paths from the image, computes the adjustment solution along these paths, and interpolates the solutions to the entire image through bilateral filtering. Our system is intuitive to use, and can support several local editing tasks, such as brightness, contrast, and color balance adjustments, within a minute on a mobile device. [source]

Feature Extraction for Traffic Incident Detection Using Wavelet Transform and Linear Discriminant Analysis

A. Samant
To eliminate false alarms, an effective traffic incident detection algorithm must be able to extract incident-related features from the traffic patterns. A robust feature-extraction algorithm also helps reduce the dimension of the input space for a neural network model without any significant loss of related traffic information, resulting in a substantial reduction in the network size, the effect of random traffic fluctuations, the number of required training samples, and the computational resources required to train the neural network. This article presents an effective traffic feature-extraction model using the discrete wavelet transform (DWT) and linear discriminant analysis (LDA). The DWT is first applied to raw traffic data, and the finest-resolution coefficients representing the random fluctuations of traffic are discarded. Next, LDA is applied to the filtered signal for further feature extraction and to reduce the dimensionality of the problem. The results of the LDA are used as input to a neural network model for traffic incident detection. [source]
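The first step, discarding the finest-resolution coefficients, can be illustrated with a hand-rolled one-level Haar DWT. The abstract does not specify the wavelet; Haar is chosen here only because it keeps the sketch short, and the occupancy counts are invented:

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient lists."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2.0)
        detail.append((a - b) / 2.0)
    return approx, detail

def denoise_traffic(signal, levels=2):
    """Drop the finest-resolution detail coefficients (random traffic
    fluctuations) at each level, keeping the coarse approximation."""
    out = signal
    for _ in range(levels):
        out, _ = haar_dwt(out)   # details discarded at each level
    return out

raw = [10, 12, 11, 13, 40, 42, 41, 43]  # toy occupancy counts
print(denoise_traffic(raw))  # [11.5, 41.5]
```

The smoothed approximation (rather than the raw counts) would then be passed to the LDA stage for dimensionality reduction.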

Formation of virtual organizations in grids: a game-theoretic approach

Thomas E. Carroll
Abstract Applications require the composition of resources to execute in a grid computing environment. The grid service providers (GSPs), the owners of the computational resources, must form virtual organizations (VOs) to be able to provide the composite resource. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. Using game theory, we formulate the resource composition among GSPs as a coalition formation problem and propose a framework to model and solve it. Using this framework, we propose a resource management system that supports the VO formation among GSPs in a grid computing system. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Adaptive workflow processing and execution in Pegasus

Kevin Lee
Abstract Workflows are widely used in applications that require coordinated use of computational resources. Workflow definition languages typically abstract over some aspects of the way in which a workflow is to be executed, such as the level of parallelism to be used or the physical resources to be deployed. As a result, a workflow management system has the responsibility of establishing how best to execute a workflow given the available resources. The Pegasus workflow management system compiles abstract workflows into concrete execution plans, and has been widely used in large-scale e-Science applications. This paper describes an extension to Pegasus whereby resource allocation decisions are revised during workflow evaluation, in the light of feedback on the performance of jobs at runtime. The contributions of this paper include: (i) a description of how adaptive processing has been retrofitted to an existing workflow management system; (ii) a scheduling algorithm that allocates resources based on runtime performance; and (iii) an experimental evaluation of the resulting infrastructure using grid middleware over clusters. Copyright © 2009 John Wiley & Sons, Ltd. [source]
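The runtime-feedback idea can be sketched as a greedy re-planner that predicts completion times from observed throughput rather than static capacities. The function below is a hypothetical stand-in for illustration, not the Pegasus scheduling algorithm itself:

```python
def adaptive_allocate(jobs, sites, observed_rate):
    """Assign each pending job to the site with the lowest predicted
    completion time, where predictions come from observed per-site
    processing rates (work units per second) gathered at runtime."""
    finish = {s: 0.0 for s in sites}   # projected busy-until time per site
    plan = {}
    # Place the largest jobs first so they dominate the projection.
    for job, work in sorted(jobs.items(), key=lambda kv: -kv[1]):
        s = min(sites, key=lambda s: finish[s] + work / observed_rate[s])
        finish[s] += work / observed_rate[s]
        plan[job] = s
    return plan
```

Re-running this planner as fresh rate observations arrive is one simple way to revise resource allocation decisions during workflow evaluation.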

Parallel tiled QR factorization for multicore architectures

Alfredo Buttari
Abstract As multicore systems continue to gain ground in the high-performance computing world, linear algebra algorithms have to be reformulated or new algorithms have to be developed in order to take advantage of the architectural features on these new processors. Fine-grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data (referred to as 'tiles'). These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out-of-order execution of the tasks that will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithm for QR factorization, where parallelism can be exploited only at the level of the BLAS operations, and with vendor implementations. Copyright © 2008 John Wiley & Sons, Ltd. [source]
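Dependency-driven, out-of-order execution of the kind described can be sketched generically. The task names below echo tiled-QR kernel names, but the DAG and the scheduler are illustrative only, not the paper's implementation:

```python
def dynamic_schedule(tasks, deps, workers=2):
    """Execute tasks as soon as their dependencies are satisfied,
    regardless of program order. `deps` maps a task to the tasks it
    depends on. Returns one possible execution order."""
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    done, order = set(), []
    while len(done) < len(tasks):
        # A task is ready when everything it depends on has finished.
        ready = [t for t in tasks if t not in done and remaining[t] <= done]
        for t in ready[:workers]:   # run up to `workers` tasks concurrently
            done.add(t)
            order.append(t)
    return order

# Toy fragment of a tiled-factorization DAG (names are illustrative).
tasks = ["geqrt00", "unmqr01", "tsqrt10", "ssmqr11"]
deps = {"unmqr01": ["geqrt00"],
        "tsqrt10": ["geqrt00"],
        "ssmqr11": ["unmqr01", "tsqrt10"]}
print(dynamic_schedule(tasks, deps))
```

Because readiness is decided per task rather than per algorithm phase, independent tiles from later steps can overlap the intrinsically sequential panel tasks, which is the effect the abstract describes.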

Optimization of integrated Earth System Model components using Grid-enabled data management and computation

A. R. Price
Abstract In this paper, we present the Grid enabled data management system that has been deployed for the Grid ENabled Integrated Earth system model (GENIE) project. The database system is an augmented version of the Geodise Database Toolbox and provides a repository for scripts, binaries and output data in the GENIE framework. By exploiting the functionality available in the Geodise toolboxes we demonstrate how the database can be employed to tune parameters of coupled GENIE Earth System Model components to improve their match with observational data. A Matlab client provides a common environment for the project Virtual Organization and allows the scripting of bespoke tuning studies that can exploit multiple heterogeneous computational resources. We present the results of a number of tuning exercises performed on GENIE model components using multi-dimensional optimization methods. In particular, we find that it is possible to successfully tune models with up to 30 free parameters using Kriging and Genetic Algorithm methods. Copyright © 2006 John Wiley & Sons, Ltd. [source]
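A bare-bones genetic-algorithm tuner of the sort that could drive such a parameter study. The operators, population settings, and box-bounded encoding below are generic choices for illustration, not those used in the GENIE tuning exercises:

```python
import random

def ga_tune(objective, bounds, pop=20, gens=30, seed=1):
    """Minimize `objective` over box-bounded real parameters with a tiny
    elitist genetic algorithm: averaging crossover plus one-gene Gaussian
    mutation. Purely a sketch of the technique."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=objective)
        elite = P[: pop // 4]                 # keep the best quarter unchanged
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            j = rng.randrange(dim)                        # mutate one gene
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        P = elite + children
    return min(P, key=objective)
```

In a real study the objective would measure model-observation mismatch for a candidate parameter vector, with each evaluation farmed out to the heterogeneous computational resources mentioned above.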

Incentive-based scheduling in Grid computing

Yanmin Zhu
Abstract With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, Grid computing has emerged as an attractive computing paradigm. In typical Grid environments, there are two distinct parties: resource consumers and resource providers. Enabling an effective interaction between the two parties (i.e. scheduling jobs of consumers across the resources of providers) is particularly challenging due to the distributed ownership of Grid resources. In this paper, we propose an incentive-based peer-to-peer (P2P) scheduling scheme for Grid computing, with the goal of building a practical and robust computational economy. This goal is realized by building a computational market supporting fair and healthy competition among consumers and providers. Each participant in the market competes actively and behaves independently for its own benefit. A market is said to be healthy if every player in the market gets sufficient incentive for joining the market. To build this healthy computational market, we propose a P2P scheduling infrastructure, which takes advantage of P2P networks to efficiently support the scheduling. The proposed incentive-based algorithms are designed for consumers and providers, respectively, to ensure that every participant gets sufficient incentive. Simulation results show that our approach is successful in building a healthy and scalable computational economy. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Neuroscience instrumentation and distributed analysis of brain activity data: a case for eScience on global Grids

Rajkumar Buyya
Abstract The distribution of knowledge (by scientists) and data sources (advanced scientific instruments), and the need for large-scale computational resources for analyzing massive scientific data, are two major problems commonly observed in scientific disciplines. Two popular scientific disciplines of this nature are brain science and high-energy physics. The analysis of brain-activity data gathered from the MEG (magnetoencephalography) instrument is an important research topic in medical science, since it helps doctors identify symptoms of diseases. The data need to be analyzed exhaustively to efficiently diagnose and analyze brain functions, and this requires access to large-scale computational resources. The potential platform for solving such resource-intensive applications is the Grid. This paper presents the design and development of an MEG data analysis system by leveraging Grid technologies, primarily Nimrod-G, Gridbus, and Globus. It describes the composition of the neuroscience (brain-activity analysis) application as a parameter-sweep application and its on-demand deployment on global Grids for distributed execution. The results of economic-based scheduling of analysis jobs for three different optimization scenarios on the world-wide Grid testbed resources are presented along with their graphical visualization. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Hemispheric asymmetries in children's perception of nonlinguistic human affective sounds

Seth D. Pollak
In the present work, we developed a database of nonlinguistic sounds that mirror prosodic characteristics typical of language and thus carry affective information, but do not convey linguistic information. In a dichotic-listening task, we used these novel stimuli as a means of disambiguating the relative contributions of linguistic and affective processing across the hemispheres. This method was applied to both children and adults with the goal of investigating the role of developing cognitive resource capacity on affective processing. Results suggest that children's limited computational resources influence how they process affective information and rule out attentional biases as a factor in children's perceptual asymmetries for nonlinguistic affective sounds. These data further suggest that investigation of perception of nonlinguistic affective sounds is a valuable tool in assessing interhemispheric asymmetries in affective processing, especially in parceling out linguistic contributions to hemispheric differences. [source]

Scaling of water flow through porous media and soils

K. Roth
Summary Scaling of fluid flow in general is outlined and contrasted to the scaling of water flow in porous media. It is then applied to deduce Darcy's law, thereby demonstrating that stationarity of the flow field at the scale of the representative elementary volume (REV) is a critical prerequisite. The focus is on the implications of the requirement of stationarity, or local equilibrium, in particular on the validity of the Richards equation for the description of water movement through soils. Failure to satisfy this essential requirement may occur at the scale of the REV or, particularly in numerical simulations, at the scale of the model discretization. The latter can be alleviated by allocation of more computational resources and by working on a finer-grained representation. In contrast, the former is fundamental and leads to an irrevocable failure of the Richards equation as is observed with infiltration instabilities that lead to fingered flow. [source]
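For reference, the Richards equation under discussion can be written, assuming local equilibrium at the REV scale, as

```latex
\frac{\partial \theta}{\partial t}
  = \nabla \cdot \left[ K(\theta)\, \nabla \bigl( \psi_m(\theta) + z \bigr) \right],
```

where $\theta$ is the volumetric water content, $K(\theta)$ the hydraulic conductivity, $\psi_m$ the matric potential head, and $z$ the elevation head. It is the validity of this single-equation description that is lost when the local-equilibrium requirement fails, as in fingered flow.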

A comparison of two spectral approaches for computing the Earth response to surface loads

E. Le Meur
Summary When predicting the deformation of the Earth under surface loads, most models follow the same methodology, consisting of producing a unit response that is then convolved with the appropriate surface forcing. These models take into account the whole Earth, and are generally spherical, computing a unit response in terms of its spherical harmonic representation through the use of load Love numbers. From these Love numbers, the spatial pattern of the bedrock response to any particular scenario can be obtained. Two different methods are discussed here. The first, which is related to convolution in the classical sense, appears to be very sensitive to the total number of degrees used when summing these Love numbers in the harmonic series in order to obtain the corresponding Green's function. We show, from the spectral properties of these Love numbers, how to compute these series correctly and, consequently, how to eliminate in practice the sensitivity to the number of degrees (Gibbs phenomena). The second method relies on a preliminary harmonic decomposition of the load, which reduces the convolution to a simple product within Fourier space. The convergence properties of the resulting Fourier series make this approach less sensitive to any harmonic cut-off. However, this method can be more or less computationally expensive depending on the loading characteristics. This paper describes these two methods, shows how to eliminate Gibbs phenomena in the Green's function method, and shows how the load characteristics as well as the available computational resources can be determining factors in selecting one approach. [source]
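Schematically, the Green's function at angular distance $\theta$ from a point load is the Love-number series (shown here for vertical displacement; $a$ is the Earth's radius, $M_e$ its mass, $h_l$ the load Love numbers, and $P_l$ the Legendre polynomials):

```latex
G(\theta) = \frac{a}{M_e} \sum_{l=0}^{\infty} h_l \, P_l(\cos\theta).
```

Truncating this sum at a finite degree is what produces the Gibbs phenomena discussed above, since the $h_l$ tend to a non-zero asymptote rather than decaying.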

Dynamics of unsaturated soils using various finite element formulations

Nadarajah Ravichandran
Abstract Unsaturated soils are three-phase porous media consisting of a solid skeleton, pore liquid, and pore gas. The coupled mathematical equations representing the dynamics of unsaturated soils can be derived based on the theory of mixtures. Solution of these fully coupled governing equations for unsaturated soils requires tremendous computational resources because three individual phases and interactions between them have to be taken into account. The fully coupled equations governing the dynamics of unsaturated soils are first presented and then two finite element formulations of the governing equations are presented and implemented within a finite element framework. The finite element implementation of all the terms in the governing equations results in the complete formulation and is solved for the first time in this paper. A computationally efficient reduced formulation is obtained by neglecting the relative accelerations and velocities of liquid and gas in the governing equations to investigate the effects of fluid flow in the overall behavior. These two formulations are used to simulate the behavior of an unsaturated silty soil embankment subjected to base shaking and compared with the results from another commonly used partially reduced formulation that neglects the relative accelerations, but takes into account the relative velocities. The stress-strain response of the solid skeleton is modeled as both elastic and elastoplastic in all three analyses. In the elastic analyses no permanent deformations are predicted and the displacements of the partially reduced formulation are in between those of the reduced and complete formulations. The frequency of vibration of the complete formulation in the elastic analysis is closer to the predominant frequency of the base motion and smaller than the frequencies of vibration of the other two analyses. Proper consideration of damping due to fluid flows in the complete formulation is the likely reason for this difference. Permanent deformations are predicted by all three formulations for the elastoplastic analyses. The complete formulation, however, predicts reductions in pore fluid pressures following strong shaking resulting in somewhat smaller displacements than the reduced formulation. The results from complete and reduced formulations are otherwise comparable for elastoplastic analyses. For the elastoplastic analysis, the partially reduced formulation leads to stiffer response than the other two formulations. The likely reason for this stiffer response in the elastoplastic analysis is the interpolation scheme (linear displacement and linear pore fluid pressures) used in the finite element implementation of the partially reduced formulation. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Robust adaptive remeshing strategy for large deformation, transient impact simulations

Tobias Erhart
Abstract In this paper, an adaptive approach, with remeshing as an essential ingredient, towards robust and efficient simulation techniques for fast transient, highly non-linear processes including contact is discussed. The necessity for remeshing stems from two sources: the capability to deal with large deformations that might even require topological changes of the mesh, and the desire for an error-driven distribution of computational resources. The overall computational approach is sketched, the adaptive remeshing strategy is presented, and the crucial aspect, the choice of suitable error indicator(s), is discussed in more detail. Several numerical examples demonstrate the performance of the approach. Copyright © 2005 John Wiley & Sons, Ltd. [source]

A spectral projection method for the analysis of autocorrelation functions and projection errors in discrete particle simulation

André Kaufmann
Abstract Discrete particle simulation is a well-established tool for the simulation of particles and droplets suspended in turbulent flows in academic and industrial applications. The study of some properties, such as the preferential concentration of inertial particles in regions of high shear and low vorticity, requires the computation of autocorrelation functions. This can be a tedious task, as the discrete point particles need to be projected in some manner to obtain continuous autocorrelation functions. Projection of particle properties onto a computational grid, for instance the grid of the carrier phase, is furthermore an issue when quantities such as particle concentrations are to be computed or source terms between the carrier phase and the particles are exchanged. The errors committed by commonly used projection methods are often unknown and are difficult to analyse. Grid and sampling size limit the possibilities in terms of precision per computational cost. Here, we present a spectral projection method that is not affected by sampling issues and addresses all of the above issues. The technique is limited only by computational resources and is easy to parallelize. The only visible drawback is the restriction to simple geometries, which limits the method to academic applications. The spectral projection method consists of a discrete Fourier transform of the particle locations. The Fourier-transformed particle number density and momentum fields can then be used to compute the autocorrelation functions and the continuous physical-space fields for the evaluation of the projection method's error. The number of Fourier components used to discretize the projector kernel can be chosen such that the corresponding characteristic length scale is as small as needed. This makes it possible to study phenomena of particle motion, for example in a region of preferential concentration, that may be smaller than the cell size of the carrier phase grid. The precision of the spectral projection method depends, therefore, only on the number of Fourier modes considered. Copyright © 2008 John Wiley & Sons, Ltd. [source]
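The core operation, a discrete Fourier transform taken directly over particle locations rather than over a grid, can be sketched in one dimension. The function below is a toy reduction of the idea, not the authors' implementation:

```python
import cmath

def fourier_number_density(positions, n_modes, length=1.0):
    """Fourier coefficients of the particle number density, computed
    directly from 1D particle positions in a periodic domain of the
    given length, with no grid-based projection step."""
    coeffs = []
    for k in range(n_modes):
        c = sum(cmath.exp(-2j * cmath.pi * k * x / length) for x in positions)
        coeffs.append(c / length)
    return coeffs
```

Because the coefficients are exact sums over particles, the resolution of any reconstructed density field is set only by `n_modes`, mirroring the abstract's point that precision depends solely on the number of Fourier modes considered.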

A cyberenvironment for crystallography and materials science and an integrated user interface to the Crystallography Open Database and Predicted Crystallography Open Database

Jacob R. Fennick
With the advent and subsequent evolution of the Internet, the ways in which computational crystallographic research is conducted have dramatically changed. Consequently, secure, robust and efficient means of accessing remote data and computational resources have become a necessity. At present, scientists in computational crystallography access remote data and resources via separate technologies, namely SSH and Web services. Computational Science and Engineering Online (CSE-Online) combines these two methods into a single seamless environment while simultaneously addressing issues such as stability with regard to Internet interruption. Presently, CSE-Online contains several applications which are useful to crystallographers; however, continued development of new tools is necessary. Toward this end, a Java application capable of running in CSE-Online, namely the Crystallography Open Database User Interface (CODUI), has been developed, which allows users to search for crystal structures stored in the Crystallography Open Database and Predicted Crystallography Open Database, to export structural data for visualization, or to input structural data into other CSE-Online applications. [source]

A computationally inexpensive modification of the point dipole electrostatic polarization model for molecular simulations

George A. Kaminski
Abstract We present an approximation which allows a reduction of the computational resources needed to explicitly incorporate electrostatic polarization into molecular simulations utilizing empirical force fields. The proposed method is employed to compute three-body energies of molecular complexes with dipolar electrostatic probes, gas-phase dimerization energies, and pure liquid properties for five systems that are important in biophysical and organic simulations: water, methanol, methylamine, methanethiol, and acetamide. In all cases, the three-body energies agreed with high-level ab initio data within 0.07 kcal/mol, and dimerization energies within 0.43 kcal/mol (except for the special case of CH3SH), while computed heats of vaporization and densities differed from the experimental results by less than 2%. Moreover, because the presented method allows a significant reduction in computational cost, we were able to carry out the liquid-state calculations with the Monte Carlo technique. Comparison with the full-scale point dipole method showed that the computational time was reduced by 3.5 to more than 20 times, depending on the system in hand and on the desired level of the full-scale model accuracy, while the difference in energetic results between the full-scale and the presented approximate model was not great in most cases. Comparison with the nonpolarizable OPLS-AA force field for all the substances involved, and with the polarizable POL3 and q90 models for water and methanol, respectively, demonstrates that the presented technique allows a reduction of computational cost with no sacrifice of accuracy. We hope that the proposed method will be of benefit to research employing molecular modeling techniques in the biophysical and physical organic chemistry areas. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 267-276, 2003 [source]

Targeted driving using visual tracking on Mars: From research to flight

Won S. Kim
This paper presents the development, validation, and deployment of the visual target tracking capability onto the Mars Exploration Rover (MER) mission. Visual target tracking enables targeted driving, in which the rover approaches a designated target in a closed visual feedback loop, increasing the target position accuracy by an order of magnitude and resulting in fewer ground-in-the-loop cycles. As a result of an extensive validation, we developed a reliable normalized cross-correlation visual tracker. To enable tracking with the limited computational resources of a planetary rover, the tracker uses the vehicle motion estimation to scale and roll the template image, compensating for large image changes between rover steps. The validation showed that a designated target can be reliably tracked within several pixels or a few centimeters of accuracy over a 10-m traverse using a rover step size of 10% of the target distance in any direction. It also showed that the target is not required to have conspicuous features and can be selected anywhere on natural rock surfaces excluding rock boundary and shadowed regions. The tracker was successfully executed on the Opportunity rover near Victoria Crater on four distinct runs, including a single-sol instrument placement. We present the flight experiment data of the tracking performance and execution time. © 2009 Wiley Periodicals, Inc. [source]
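The essential operation, normalized cross-correlation between a template and candidate image windows, can be sketched as an exhaustive toy search. The flight tracker additionally scales and rolls the template using motion estimates, which this sketch omits:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches
    (lists of pixel rows); returns a score in [-1, 1]."""
    va = [p for row in a for p in row]
    vb = [p for row in b for p in row]
    ma, mb = sum(va) / len(va), sum(vb) / len(vb)
    num = sum((x - ma) * (y - mb) for x, y in zip(va, vb))
    da = sum((x - ma) ** 2 for x in va) ** 0.5
    db = sum((y - mb) ** 2 for y in vb) ** 0.5
    return num / (da * db) if da and db else 0.0

def track(image, template):
    """Exhaustive search: return the top-left corner (row, col) of the
    image window that best matches the template under NCC."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            window = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(window, template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Because NCC subtracts the mean and divides by the standard deviation of each patch, the match score is insensitive to uniform brightness and contrast changes, which is one reason correlation trackers work on natural rock surfaces.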

Optimal operation of GaN thin film epitaxy employing control vector parametrization

AICHE JOURNAL, Issue 4 2006
Amit Varshney
Abstract An approach that links nonlinear model reduction techniques with control vector parametrization-based schemes is presented, to efficiently solve dynamic constrained optimization problems arising in the context of spatially distributed processes governed by highly dissipative nonlinear partial differential equations (PDEs), utilizing standard nonlinear programming techniques. The method of weighted residuals with empirical eigenfunctions (obtained via Karhunen-Loève expansion) as basis functions is employed for spatial discretization, together with a control vector parametrization formulation for temporal discretization. The stimulus for this approach is provided by the presence of low-order dominant dynamics in the case of highly dissipative parabolic PDEs. Spatial discretization based on these few dominant modes (which are elegantly captured by empirical eigenfunctions) takes into account the actual spatiotemporal behavior of the PDE, which cannot be captured using finite difference or finite element techniques with a small number of discretization points/elements. The proposed approach is used to compute the optimal operating profile of a metallorganic vapor-phase epitaxy process for the production of GaN thin films, with the objective of minimizing the spatial nonuniformity of the deposited film across the substrate surface by adequately manipulating the spatiotemporal concentration profiles of Ga and N precursors at the reactor inlet. It is demonstrated that the reduced-order optimization problem thus formulated using the proposed approach for nonlinear order reduction results in considerable savings of computational resources while remaining accurate. It is demonstrated that by optimally changing the precursor concentration across the reactor inlet it is possible to reduce the thickness nonuniformity of the deposited film from a nominal 33% to 3.1%. © 2005 American Institute of Chemical Engineers AIChE J, 2006 [source]

Deprotonation and radicalization of glycine neutral structures

Gang Yang
Abstract Ab initio calculations at the MP2/6-311++G(d,p) theoretical level were performed to study the deprotonation and radicalization processes of 13 glycine neutral structures (A. G. Császár, J. Am. Chem. Soc. 1992; 114: 9568). Deprotonation of the glycine neutral structures takes place at the carboxylic sites rather than the α-C or amido sites. Two carboxylic deprotonated structures were obtained, with deprotonation energies calculated within the range of 1413.27–1460.03 kJ·mol⁻¹, which are consistent with the experimental results. Radicalization, in contrast, takes place at the α-C rather than the carboxylic O or amido sites, again agreeing with the experimental results. Seven α-C radicals were obtained, with radical stabilization energies calculated within the range of 44.87–111.78 kJ·mol⁻¹. The population analyses revealed that the main conformations of the neutral or radical state are constituted by a few stable structures; that is, the other structures can be excluded from future consideration, thus saving computational resources. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Real-time accelerated interactive MRI with adaptive TSENSE and UNFOLD

Michael A. Guttman
Abstract Reduced field-of-view (FOV) acceleration using time-adaptive sensitivity encoding (TSENSE) or unaliasing by Fourier encoding the overlaps using the temporal dimension (UNFOLD) can improve the depiction of motion in real-time MRI. However, increased computational resources are required to maintain a high frame rate and low latency in image reconstruction and display. A high-performance software system has been implemented to perform TSENSE and UNFOLD reconstructions for real-time MRI with interactive, on-line display. Images were displayed in the scanner room to investigate image-guided procedures. Examples are shown for normal volunteers and cardiac interventional experiments in animals using a steady-state free precession (SSFP) sequence. In order to maintain adequate image quality for interventional procedures, the imaging rate was limited to seven frames per second after an acceleration factor of 2 with a voxel size of 1.8 × 3.5 × 8 mm. Initial experiences suggest that TSENSE and UNFOLD can each improve the compromise between spatial and temporal resolution in real-time imaging, and can function well in interactive imaging. UNFOLD places no additional constraints on receiver coils, and is therefore more flexible than SENSE methods; however, the temporal image filtering can blur motion and reduce the effective acceleration. Methods are proposed to overcome the challenges presented by the use of TSENSE in interactive imaging. TSENSE may be temporarily disabled after changing the imaging plane to avoid transient artifacts as the sensitivity coefficients adapt. For imaging with a combination of surface and interventional coils, a hybrid reconstruction approach is proposed whereby UNFOLD is used for the interventional coils, and TSENSE with or without UNFOLD is used for the surface coils. Magn Reson Med 50:315–321, 2003. Published 2003 Wiley-Liss, Inc. [source]
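The UNFOLD principle can be illustrated on a toy pixel time-series: with two-fold interleaved undersampling, the aliased signal in each pixel is modulated at the temporal Nyquist frequency, so a notch (or low-pass) filter along time separates the true signal from the alias. Everything below is synthetic and simplified, not the paper's reconstruction code:

```python
import numpy as np

n_frames = 64
t = np.arange(n_frames)

true_signal = 1.0 + 0.2 * np.sin(2 * np.pi * t / 16)  # slow, cardiac-like variation
alias = 0.5 * np.ones(n_frames)                        # overlapping pixel intensity

# Interleaved 2x undersampling modulates the alias at Nyquist (+1, -1, +1, ...):
measured = true_signal + alias * (-1.0) ** t

# UNFOLD-style reconstruction: remove the Nyquist bin of the temporal spectrum.
spectrum = np.fft.fft(measured)
spectrum[n_frames // 2] = 0.0          # aliased energy lives here
unfolded = np.fft.ifft(spectrum).real

print(np.max(np.abs(unfolded - true_signal)))  # near-zero residual
```

In this idealized case the alias is static, so a single-bin notch recovers the signal exactly; with moving overlapped tissue the filter must be wider, which is the source of the motion blurring the abstract mentions.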

Simple models for predicting transmission properties of photonic crystal fibers

Rachad Albandakji
Abstract Simple, fast, and efficient 1D models for evaluating the transmission properties of photonic crystal fibers are proposed. Using these models, the axial propagation constant, chromatic dispersion, effective area, and leakage loss can be predicted with reasonable accuracy, much faster, and with far fewer computational resources, than with the often time-consuming 2D analytical and numerical techniques. It is shown that the results are in good agreement with the published data available in the literature. © 2006 Wiley Periodicals, Inc. Microwave Opt Technol Lett 48: 1286–1290, 2006; Published online in Wiley InterScience. DOI 10.1002/mop.21624 [source]
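Once such a 1D model supplies the effective index n_eff(λ), chromatic dispersion follows from the standard relation D = −(λ/c)·d²n_eff/dλ². A minimal numerical sketch, using a made-up quadratic index model purely as a stand-in (not a real fiber profile):

```python
import numpy as np

c = 2.99792458e8  # speed of light, m/s

def n_eff(lam):
    # Toy effective-index model (illustrative only), lam in metres.
    return 1.45 - 3.0e3 * lam + 1.0e9 * lam**2

lam = np.linspace(1.2e-6, 1.8e-6, 601)           # 1 nm spacing
d2n = np.gradient(np.gradient(n_eff(lam), lam), lam)

D = -(lam / c) * d2n        # dispersion in s/m^2
D_ps_nm_km = D * 1.0e6      # convert to the conventional ps/(nm km)

i = np.searchsorted(lam, 1.55e-6)
print(f"D at 1550 nm: {D_ps_nm_km[i]:.2f} ps/(nm km)")
```

Differentiating a modeled n_eff twice is exactly where fast 1D models pay off: the 2D solvers would need a full mode solution at every wavelength sample.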

Doppler broadening of annihilation radiation spectroscopy study using Richardson-Lucy, Maximum Entropy and Huber methods

D. P. Yu
Abstract The Richardson-Lucy, Maximum Entropy and Huber regularization methods are popularly used in solving ill-posed inverse problems. This paper considers the use of these three methods in deconvoluting DBARS (Doppler Broadening of Annihilation Radiation Spectroscopy) data. As DBARS data have a constant background on the high-energy side and a long exponential tail on the low-energy side, we check the different deconvolution schemes, paying specific attention to the quality of the deconvolution at the peak and tail positions. Comparison of the three methods is made by testing on Monte-Carlo simulated data, in terms of both deconvolution quality and the computational resources required. Finally, we apply these methods to experimental DBARS data taken on polycrystalline metal samples. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
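For reference, the Richardson-Lucy scheme alternates a forward blur with a multiplicative correction, preserving positivity at every step. The sketch below applies it to a synthetic 1D peak; the kernel width, peak shape, and iteration count are illustrative assumptions, not the paper's DBARS data:

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=200):
    """Iteratively estimate the true spectrum, given measured = psf * true."""
    psf_flip = psf[::-1]
    estimate = np.full_like(measured, measured.mean())  # flat positive start
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

x = np.arange(200, dtype=float)
true = np.exp(-0.5 * ((x - 100) / 2.0) ** 2)            # narrow annihilation-like peak
psf = np.exp(-0.5 * (np.arange(-20, 21) / 6.0) ** 2)    # broad detector response
psf /= psf.sum()
measured = np.convolve(true, psf, mode="same")

recovered = richardson_lucy(measured, psf)
# The deconvolved peak should be markedly taller/narrower than the measured one.
print(measured.max(), recovered.max())
```

The same iteration applied to data with a flat background and an exponential tail is where the three regularizers compared in the paper start to behave differently.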

Predicting deflagration to detonation transition in hydrogen explosions

Prankul Middha
Abstract Owing to the growth in computational resources, Computational Fluid Dynamics (CFD) has assumed increasing importance in recent years as a tool for predicting the consequences of accidents in the petrochemical and process industries. CFD has also been used increasingly for explosion predictions as input to risk assessments and design load specifications. The CFD software FLACS has been developed and experimentally validated continuously for more than 25 years. As a result, it is established as a tool for simulating hydrocarbon gas deflagrations with reasonable precision and is widely used in the petrochemical industry and elsewhere. In recent years the focus on predicting hydrogen explosions has increased, and with the latest release the validation status for hydrogen deflagrations is considered good. However, in many of these scenarios, especially those involving reactive gases such as hydrogen, deflagration to detonation transition (DDT) may be a significant threat. In previous work, FLACS was extended to identify whether DDT is likely in a given scenario and to indicate the regions where it might occur. The likelihood of DDT has been expressed in terms of spatial pressure gradients across the flame front. This parameter indicates when the flame front catches up with the pressure front, as occurs when fast deflagrations transition to detonation. Reasonable agreement was obtained with experimental observations in terms of explosion pressures, transition times, and flame speeds. The DDT model has now been extended to develop a more meaningful criterion for estimating the likelihood of DDT by comparing the geometric dimensions with the detonation cell size. This article discusses the new models to predict DDT and compares predictions with relevant experiments. © 2007 American Institute of Chemical Engineers Process Saf Prog 2008 [source]

A series of molecular dynamics and homology modeling computer labs for an undergraduate molecular modeling course

Donald E. Elmore
Abstract As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations, and homology modeling. These labs were created as part of a one-semester course on the molecular modeling of biochemical systems. Students who completed these activities felt that they were an effective component of the course, reporting improved comfort with the conceptual background and practical implementation of the computational methods. Although created as a component of a larger course, these activities could be readily adapted for a variety of other educational contexts. In addition, all of these labs utilize software that is freely available in an academic environment and can be run on fairly common computer hardware, making them accessible to teaching environments without extensive computational resources. [source]

Recent progress in engineering α/β hydrolase-fold family members

Zhen Qian
Abstract The members of the α/β hydrolase-fold family represent a functionally versatile group of enzymes with many important applications in biocatalysis. Given the technical significance of α/β hydrolases in processes ranging from the kinetic resolution of enantiomeric precursors for pharmaceutical compounds to bulk products such as laundry detergent, optimizing and tailoring enzymes for these applications presents an ongoing challenge to chemists, biochemists, and engineers alike. A review of the recent literature on α/β hydrolase engineering suggests that the early successes of "random processes" such as directed evolution are now being slowly replaced by more hypothesis-driven, focused library approaches. These developments reflect a better understanding of the enzymes' structure-function relationship and improved computational resources, which allow for more sophisticated search and prediction algorithms, as well as, in a very practical sense, the realization that bigger is not always better. [source]

Genome Sequencing and Comparative Genomics of Tropical Disease Pathogens

Jane M. Carlton
Summary The sequencing of eukaryotic genomes has lagged behind sequencing of organisms in the other domains of life, archaea and bacteria, primarily due to their greater size and complexity. With recent advances in high-throughput technologies such as robotics and improved computational resources, the number of eukaryotic genome sequencing projects has increased significantly. Among these are a number of sequencing projects of tropical pathogens of medical and veterinary importance, many of which are responsible for causing widespread morbidity and mortality among people in developing countries. Uncovering the complete gene complement of these organisms is proving to be of immense value in the development of novel methods of parasite control, such as antiparasitic drugs and vaccines, as well as the development of new diagnostic tools. Combining pathogen genome sequences with the host and vector genome sequences is promising to be a robust method for the identification of host–pathogen interactions. Finally, comparative sequencing of related species, especially of organisms used as model systems in the study of the disease, is beginning to realize its potential in the identification of genes, and the evolutionary forces that shape the genes, that are involved in evasion of the host immune response. [source]