Computing Power

Selected Abstracts


Checkpointing BSP parallel applications on the InteGrade Grid middleware

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2006
Raphael Y. de Camargo
Abstract InteGrade is a Grid middleware infrastructure that enables the use of idle computing power from user workstations. One of its goals is to support the execution of long-running parallel applications that present a considerable amount of communication among application nodes. However, in an environment composed of shared user workstations spread across many different LANs, machines may fail, become inaccessible, or switch from idle to busy very rapidly, compromising the execution of the parallel application on some of its nodes. Thus, providing a mechanism for fault tolerance becomes a major requirement for such a system. In this paper, we describe the support for checkpoint-based rollback recovery of Bulk Synchronous Parallel (BSP) applications running over the InteGrade middleware. This mechanism consists of periodically saving application state so that the application can restart its execution from an intermediate execution point in case of failure. A precompiler automatically instruments the source code of a C/C++ application, adding code for saving and recovering application state. A failure detector monitors the application execution. In case of failure, the application is restarted from the last saved global checkpoint. Copyright © 2005 John Wiley & Sons, Ltd. [source]
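The abstract describes the mechanism only in prose; the following is a minimal Python sketch of the periodic checkpoint-and-restart idea, assuming a hypothetical `compute_superstep` function and a local checkpoint file in place of InteGrade's distributed checkpoint storage.

```python
import os
import pickle

CHECKPOINT_FILE = "app_state.ckpt"  # hypothetical local stand-in for a checkpoint repository

def save_checkpoint(state):
    # Write atomically so a crash mid-write cannot corrupt the last good checkpoint.
    tmp = CHECKPOINT_FILE + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT_FILE)

def load_checkpoint():
    # Return the last saved state, or None if the application starts fresh.
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE, "rb") as f:
            return pickle.load(f)
    return None

def compute_superstep(state):
    # Placeholder for one BSP superstep (local computation + synchronization).
    state["x"] = state.get("x", 0) + 1
    return state

def run(n_supersteps=100, checkpoint_every=10):
    # Resume from the last global checkpoint if one exists, otherwise start fresh.
    state = load_checkpoint() or {"step": 0}
    for step in range(state["step"], n_supersteps):
        state = compute_superstep(state)
        state["step"] = step + 1
        if state["step"] % checkpoint_every == 0:
            save_checkpoint(state)  # periodic global checkpoint

if __name__ == "__main__":
    run()
```

In the actual middleware, a failure detector triggers the restart and the state is saved remotely; here the restart is simply re-running the script.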


Using Data from Hospital Information Systems to Improve Emergency Department Care

ACADEMIC EMERGENCY MEDICINE, Issue 11 2004
Gregg Husk MD
Abstract The ubiquity of computerized hospital information systems, and of inexpensive computing power, has led to an unprecedented opportunity to use electronic data for quality improvement projects and for research. Although hospitals and emergency departments vary widely in their degree of integration of information technology into clinical operations, most have computer systems that manage emergency department registration, admission-discharge-transfer information, billing, and laboratory and radiology data. These systems are designed for specific tasks, but contain a wealth of detail that can be used to educate staff and improve the quality of care emergency physicians offer their patients. In this article, the authors describe five such projects that they have performed and use these examples as a basis for discussion of some of the methods and logistical challenges of undertaking such projects. [source]
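As a purely illustrative sketch of the kind of quality-improvement analysis described, the following computes length of stay from registration timestamps; all column names and values are hypothetical, not drawn from the article.

```python
import pandas as pd

# Illustrative only: these columns are hypothetical stand-ins for fields an
# ED registration/admission-discharge-transfer extract might contain.
visits = pd.DataFrame({
    "arrival_time": pd.to_datetime(["2004-01-01 08:10", "2004-01-01 09:30", "2004-01-01 11:45"]),
    "departure_time": pd.to_datetime(["2004-01-01 12:40", "2004-01-01 10:55", "2004-01-02 02:15"]),
    "disposition": ["admitted", "discharged", "admitted"],
})

# Length of stay in hours, a common quality-improvement metric.
visits["los_hours"] = (visits["departure_time"] - visits["arrival_time"]).dt.total_seconds() / 3600

# Summarize by disposition to spot patterns or outliers.
print(visits.groupby("disposition")["los_hours"].mean())
```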


Evaluation of model complexity and space-time resolution on the prediction of long-term soil salinity dynamics, western San Joaquin Valley, California

HYDROLOGICAL PROCESSES, Issue 13 2006
G. Schoups
Abstract The numerical simulation of long-term, large-scale (field to regional) variably saturated subsurface flow and transport remains a computational challenge, even with today's computing power. Therefore, it is appropriate to develop and use simplified models that focus on the main processes operating at the pertinent time and space scales, as long as the error introduced by the simpler model is small relative to the uncertainties associated with the spatial and temporal variation of boundary conditions and parameter values. This study investigates the effects of various model simplifications on the prediction of long-term soil salinity and salt transport in irrigated soils. Average root-zone salinity and cumulative annual drainage salt load were predicted for a 10-year period using a one-dimensional numerical flow and transport model (i.e. UNSATCHEM) that accounts for solute advection, dispersion and diffusion, and complex salt chemistry. The model uses daily values for rainfall, irrigation, and potential evapotranspiration rates. Model simulations consist of benchmark scenarios for different hypothetical cases that include shallow and deep water tables, different leaching fractions and soil gypsum content, and shallow groundwater salinity, with and without soil chemical reactions. These hypothetical benchmark simulations are compared with the results of various model simplifications that considered (i) annual average boundary conditions, (ii) coarser spatial discretization, and (iii) reduced complexity of the salt-soil reaction system. Based on the 10-year simulation results, we conclude that salt transport modelling does not require daily boundary conditions, a fine spatial resolution, or complex salt chemistry. Instead, if the focus is on long-term salinity, then a simplified modelling approach can be used, with annually averaged boundary conditions, a coarse spatial discretization, and soil chemistry that accounts only for cation exchange and gypsum dissolution-precipitation. We also demonstrate that prediction errors due to these model simplifications may be small compared with the effects of parameter uncertainty on model predictions. The proposed model simplifications lead to larger time steps and reduce computer simulation times by a factor of 1000. Copyright © 2006 John Wiley & Sons, Ltd. [source]
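A minimal sketch of the boundary-condition simplification the study evaluates: daily forcing replaced by its annual average so that the cumulative water balance is preserved. The daily series below are synthetic placeholders, not the study's data.

```python
import numpy as np

# Hypothetical daily boundary-condition series for one simulation year (mm/day).
rng = np.random.default_rng(0)
days = 365
rainfall = rng.gamma(shape=0.3, scale=5.0, size=days)
irrigation = np.where(np.arange(days) % 7 == 0, 60.0, 0.0)  # weekly irrigation events
et_potential = 3.0 + 2.0 * np.sin(2 * np.pi * np.arange(days) / days)

# The simplification in the abstract: replace daily forcing with its annual
# average, keeping the same cumulative water input/output over the year.
bc_daily = rainfall + irrigation - et_potential
bc_annual = np.full(days, bc_daily.mean())

# The annual water balance is preserved; only the temporal detail is lost.
print(f"cumulative daily forcing:    {bc_daily.sum():8.1f} mm")
print(f"cumulative averaged forcing: {bc_annual.sum():8.1f} mm")
```

With constant forcing, the flow model can take much larger time steps, which is where the reported factor-of-1000 speed-up comes from.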


The evolution of mathematical immunology

IMMUNOLOGICAL REVIEWS, Issue 1 2007
Yoram Louzoun
Summary: The types of mathematical models used in immunology and their scope have changed drastically in the past 10 years. Classical models were based on ordinary differential equations (ODEs), difference equations, and cellular automata. These models focused on the 'simple' dynamics obtained between a small number of reagent types (e.g. one type of receptor and one type of antigen, or two T-cell populations). With the advent of high-throughput methods, genomic data, and unlimited computing power, immunological modeling shifted toward the informatics side. Many current applications of mathematical models in immunology are now focused around the concepts of high-throughput measurements and system immunology (immunomics), as well as the bioinformatics analysis of molecular immunology. The types of models have shifted from mainly ODEs of simple systems to the extensive use of Monte Carlo simulations. The transition to a more molecular and more computer-based attitude is similar to the one occurring across all fields of complex systems analysis. An interesting additional aspect in theoretical immunology is the transition from an extreme focus on the adaptive immune system (which was considered more interesting from a theoretical point of view) to a more balanced focus that also takes the innate immune system into account. Here we review the origin and evolution of mathematical modeling in immunology and the contribution of such models to many important immunological concepts. [source]
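For illustration, here is a minimal ODE model in the classical style the review describes: one antigen population and one responding T-cell population. The equations and parameter values are generic placeholders, not taken from the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants: antigen growth, clearance by T cells,
# T-cell proliferation on antigen contact, and T-cell decay.
r, k, a, d = 1.0, 0.05, 0.8, 0.1

def rhs(t, y):
    A, T = y
    dA = r * A - k * A * T            # antigen grows, is cleared by T cells
    dT = a * A * T / (1 + A) - d * T  # T cells proliferate on contact, decay
    return [dA, dT]

sol = solve_ivp(rhs, (0, 50), [1.0, 0.1])
print(f"final antigen load: {sol.y[0, -1]:.3f}, final T-cell level: {sol.y[1, -1]:.3f}")
```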


Numerical methods for large-eddy simulation in general co-ordinates

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2004
Gefeng Tang
Abstract Large-scale unsteady motions play a very important role in many practical engineering flows, and it is very unlikely that these unsteady flow features can be captured within the framework of the Reynolds-averaged Navier-Stokes approach. Large-eddy simulation (LES) has become, arguably, the only practical numerical tool for predicting those flows more accurately, since it is still not realistic to apply direct numerical simulation (DNS) to practical engineering flows with the computing power available now and in the near future. Numerical methods for the LES of turbulent flows in complex geometry have been developed and applied to predict practical engineering flows successfully. The method is based on body-fitted curvilinear co-ordinates, with the contravariant velocity components of the general Navier-Stokes equations discretized on a staggered orthogonal mesh. For incompressible flow simulations, the main source of computational expense is the solution of a Poisson equation for pressure. This is especially true for flows in complex geometry. A multigrid 3D pressure solver is developed to speed up the solution. In addition, the Poisson equation for pressure takes a simpler form with no cross-derivatives when an orthogonal mesh is used, which increases the convergence rate and produces more accurate solutions. Copyright © 2004 John Wiley & Sons, Ltd. [source]
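As a hedged illustration of the multigrid idea behind the pressure solver, here is a textbook 1D multigrid V-cycle for the Poisson equation -u'' = f with homogeneous Dirichlet boundaries; the paper's solver is 3D and curvilinear, so this is only a structural sketch.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2/3):
    # Weighted Jacobi smoothing on interior points (RHS uses old values only).
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full weighting onto the coarse grid (every other point).
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return rc

def prolong(ec):
    # Linear interpolation back to the fine grid.
    n = 2 * (len(ec) - 1)
    e = np.zeros(n + 1)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if len(u) <= 3:  # coarsest grid: solve the single interior unknown exactly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = jacobi(u, f, h)                       # pre-smoothing
    rc = restrict(residual(u, f, h))          # restrict residual
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)  # coarse-grid correction
    u += prolong(ec)
    return jacobi(u, f, h)                    # post-smoothing

n = 128
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = np.pi**2 * np.sin(np.pi * x)  # exact solution is u = sin(pi x)
u = np.zeros(n + 1)
for cycle in range(8):
    u = v_cycle(u, f, h)
    print(f"cycle {cycle}: max error = {np.abs(u - np.sin(np.pi * x)).max():.2e}")
```

The error drops by a roughly constant factor per V-cycle independent of grid size, which is what makes multigrid attractive for the pressure Poisson equation.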


A method for fast simulation of multiple catastrophic faults in analogue circuits

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 3 2010
Michał Tadeusiewicz
Abstract The paper offers an efficient method for the simulation of multiple catastrophic faults in linear AC circuits. The faulty elements are either open circuits or short circuits. The method exploits the well-known Householder formula of matrix theory to find the node voltage deviations due to the perturbations of some circuit elements. The main achievement of the paper is a systematic method for performing the simulation of all combinations of the multiple catastrophic faults. The method includes two new procedures that enable the node impedance matrix of the nominal circuit, and the inverses of the matrices corresponding to different fault combinations, to be found very efficiently. These procedures are the crucial point of this approach and make it very efficient. Consequently, the amount of computing power needed to carry out all the simulations is significantly reduced. Numerical examples illustrating the proposed approach are provided. Copyright © 2008 John Wiley & Sons, Ltd. [source]
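The Householder formula referred to here is the low-rank inverse-update identity (also known as the Sherman-Morrison-Woodbury formula). A minimal NumPy sketch of that identity follows, with a random matrix standing in for the node impedance matrix; it is not the paper's circuit-specific procedure.

```python
import numpy as np

def perturbed_inverse(A_inv, U, S, V):
    # Householder / Sherman-Morrison-Woodbury identity:
    # (A + U S V^T)^-1 = A^-1 - A^-1 U (S^-1 + V^T A^-1 U)^-1 V^T A^-1
    # Only a small k x k system is inverted, instead of the full n x n matrix.
    core = np.linalg.inv(np.linalg.inv(S) + V.T @ A_inv @ U)
    return A_inv - A_inv @ U @ core @ V.T @ A_inv

rng = np.random.default_rng(1)
n, k = 6, 2                                   # matrix size and perturbation rank
A = rng.normal(size=(n, n)) + n * np.eye(n)   # stand-in for the nominal matrix
U = rng.normal(size=(n, k))
V = rng.normal(size=(n, k))
S = np.diag(rng.uniform(0.5, 1.5, size=k))    # rank-k perturbation (e.g. faulted elements)

A_inv = np.linalg.inv(A)                      # computed once for the nominal circuit
direct = np.linalg.inv(A + U @ S @ V.T)       # full re-inversion, for comparison
updated = perturbed_inverse(A_inv, U, S, V)   # cheap update per fault combination
print("max deviation from direct inversion:", np.abs(direct - updated).max())
```

Reusing the nominal inverse across all fault combinations is what keeps the simulation of every combination cheap.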


Artificial intelligence advancements applied in off-the-shelf controllers

PROCESS SAFETY PROGRESS, Issue 2 2002
Edward M. Marszal P.E.
Since the earliest process units were built, CPI engineers have employed artificial intelligence to prevent losses. The expanding use of computer-based systems for process control has allowed the amount of intelligence applied in these expert systems to increase drastically. Standard methods for performing expert-system tasks are being formalized by numerous researchers in industry and academia. Work products from these groups include designs that present process hazards knowledge in a structured, hierarchical, and modular manner. Advancements in programmable logic controller (PLC) technology have created systems with substantial computing power that are robust and fault-tolerant enough to be used in safety-critical applications. In addition, IEC 1131-3 standardized the programming languages available in virtually every new controller. The function block language defined in IEC 1131-3 is particularly well suited to performing modular tasks, which makes it an ideal platform for representing knowledge. This paper begins by describing some of the advancements in knowledge-based systems for loss prevention applications. It then explores how standard IEC 1131-3 programming techniques can be used to build function blocks that represent knowledge of the hazards posed by equipment items. The paper goes on to develop a sample function block that represents the hazards of a pressure vessel, using knowledge developed in the API 14-C standard. [source]
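As an illustration of the function-block idea, here is a sketch written in Python rather than an IEC 1131-3 language; the block structure, thresholds, and I/O names are hypothetical and not taken from API 14-C.

```python
from dataclasses import dataclass

@dataclass
class PressureVesselHazardBlock:
    # Encapsulates overpressure-hazard knowledge for one vessel, analogous to a
    # reusable IEC 1131-3 function block. All values are illustrative.
    design_pressure: float           # vessel design pressure (barg)
    high_alarm_fraction: float = 0.90
    trip_fraction: float = 1.00

    def evaluate(self, measured_pressure: float) -> dict:
        # Mirrors a function block's inputs -> outputs mapping: one scan
        # evaluates the hazard state from the current measurement.
        return {
            "high_pressure_alarm": measured_pressure >= self.high_alarm_fraction * self.design_pressure,
            "overpressure_trip": measured_pressure >= self.trip_fraction * self.design_pressure,
        }

vessel = PressureVesselHazardBlock(design_pressure=10.0)
print(vessel.evaluate(measured_pressure=9.5))  # alarm True, trip False
```

The point of the modular structure is that one validated block can be instantiated for every vessel in the plant, just as a function block is reused across a PLC program.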


Application of general-purpose computing on graphics processing units (GPGPU) in CFD techniques for fire safety simulations

BAUPHYSIK, Issue 4 2009
Hendrik C. Belaschk Dipl.-Ing.
Keywords: calculation methods; fire protection engineering

Abstract The use of fire simulation programs based on computational fluid dynamics (CFD) techniques is becoming more and more widespread in practice. The increase in available computing power enables the effects of possible fire scenarios to be modelled in order to derive useful information for practical applications (e.g. analysis of the reliability of fire protection concepts). However, despite this progress, the performance of currently available computers is far from adequate for simulating a building fire including all relevant physical and chemical processes with maximum accuracy. The models for calculating the spread of fire and smoke implemented in these computer programs therefore always represent a compromise between practical computing efficiency and level of modelling detail. This paper illustrates the reasons for the high computing demand of CFD techniques and describes potential problems and sources of error resulting from the simplifications applied in the models. In addition, it presents a new technology approach that massively increases the computing power of a personal computer using special software and off-the-shelf 3D graphics cards. Using the Fire Dynamics Simulator (FDS) as an example, it is demonstrated that the required calculation time for a fire simulation on a personal computer can be reduced by a factor of 20 and more. [source]
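As a hedged illustration of the GPGPU idea, the following runs the same explicit diffusion stencil (a typical building block of CFD codes) on the graphics card via CuPy when available, falling back to NumPy on the CPU. CuPy is an assumption here; the abstract does not name the software layer actually used.

```python
import numpy as np

try:
    import cupy as xp   # runs on the graphics card if CuPy + CUDA are available
except ImportError:
    xp = np             # falls back to the CPU otherwise (CuPy mirrors the NumPy API)

def diffuse(T, alpha=0.1, steps=100):
    # Explicit finite-difference update of a 2D temperature field.
    # The same array expression executes on CPU or GPU depending on xp.
    for _ in range(steps):
        T[1:-1, 1:-1] += alpha * (
            T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:] - 4 * T[1:-1, 1:-1]
        )
    return T

T = xp.zeros((512, 512), dtype=xp.float32)
T[256, 256] = 1000.0     # a hot spot, e.g. a fire source
T = diffuse(T)
print(float(T.max()))
```

Stencil updates like this are embarrassingly parallel across grid cells, which is why mapping them onto the thousands of GPU cores yields speed-ups of the magnitude the paper reports.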