Scientific Computing
Selected Abstracts

Modeling with Data: Tools and Techniques for Scientific Computing by Ben Klemens
INTERNATIONAL STATISTICAL REVIEW, Issue 1 2009. Antony Unwin
No abstract is available for this article. [source]

A nearly optimal preconditioner for the Navier–Stokes equations
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 4 2001. Lina Hemmingsson-Frändén
Abstract We present a preconditioner for the linearized Navier–Stokes equations based on the combination of a fast-transform approximation of an advection–diffusion problem with the recently introduced 'BFBt' preconditioner of Elman (SIAM Journal on Scientific Computing, 1999; 20:1299–1316). When combined with an appropriate Krylov subspace iteration method, the resulting preconditioner yields the solution in a number of iterations that appears to be independent of the Reynolds number, provided a mesh Péclet number restriction holds, and that depends only mildly on the mesh size. The preconditioner is particularly appropriate for problems involving a primary flow direction. Copyright © 2001 John Wiley & Sons, Ltd. [source]
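The pattern this abstract describes, a Krylov iteration accelerated by an approximate solve of a related, easier problem, can be illustrated with a small sketch. The example below is not the paper's fast-transform preconditioner: it solves a 2D convection–diffusion system with SciPy's GMRES, using an exact sparse LU solve of the diffusion part alone as the preconditioner. The grid size, viscosity and right-hand side are arbitrary illustrative choices.

    # Minimal sketch of a preconditioned Krylov solve (illustrative only;
    # the paper's preconditioner is a fast-transform advection-diffusion
    # approximation, stood in for here by an LU solve of the diffusion part).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 64                       # interior grid points per direction
    h = 1.0 / (n + 1)
    I = sp.identity(n)
    # 1D central-difference stencils for diffusion and advection
    D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    D1 = sp.diags([-1.0, 1.0], [-1, 1], shape=(n, n)) / (2 * h)

    nu = 0.01                    # viscosity; smaller => advection-dominated
    L = sp.kron(I, D2) + sp.kron(D2, I)        # 2D Laplacian
    A = (-nu * L + sp.kron(I, D1)).tocsc()     # add a wind in one direction
    b = np.ones(n * n)

    # Preconditioner: exact solve with the diffusion term only
    P_lu = spla.splu((-nu * L).tocsc())
    M = spla.LinearOperator(A.shape, P_lu.solve, dtype=A.dtype)

    x, info = spla.gmres(A, b, M=M)
    print("gmres info flag:", info)            # 0 means converged

The point of the pattern is that the preconditioner M need only approximate the inverse of A well enough that the iteration count stays small as the mesh is refined.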
High-speed network and Grid computing for high-end computation: application in geodynamics ensemble simulations
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2007. S. Zhou
Abstract High-speed networking and Grid computing have been actively investigated, and their capabilities are being demonstrated. However, their application to high-end scientific computing and modeling is still to be explored. In this paper we discuss the related issues and present our prototype work on applying XCAT3 framework technology to geomagnetic data assimilation development with distributed computers, connected through an up-to-10-Gigabit Ethernet network. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Bridging the language gap in scientific computing: the Chasm approach
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2006. C. E. Rasmussen
Abstract Chasm is a toolkit providing seamless language interoperability between Fortran 95 and C++. Language interoperability is important to scientific programmers because scientific applications are predominantly written in Fortran, while software tools are mostly written in C++. Two design features differentiate Chasm from other related tools. First, it avoids the common-denominator type systems and programming models found in most Interface Definition Language (IDL)-based interoperability systems: instead of an IDL, Chasm uses the intermediate representation generated by a compiler front-end for each supported language as its source of interface information. Second, bridging code is generated for each pairwise language binding, removing the need for a common intermediate data representation and multiple levels of indirection between caller and callee. These features make Chasm a simple system that performs well, requires minimal user intervention and, in most instances, generates bridging code automatically. Chasm is also easily extensible and highly portable. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Compiling data-parallel programs for clusters of SMPs
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2-3 2004. Siegfried Benkner
Abstract Clusters of shared-memory multiprocessors (SMPs) have become the most promising parallel computing platforms for scientific computing. However, SMP clusters significantly increase the complexity of application development when the low-level programming interfaces MPI and OpenMP are used directly, forcing users to deal with both distributed-memory and shared-memory parallelization details. In this paper we present extensions of High Performance Fortran (HPF) for SMP clusters that enable the compiler to adopt a hybrid parallelization strategy, efficiently combining distributed-memory with shared-memory parallelism. A small set of new language features allows the hierarchical structure of an SMP cluster to be specified; the compiler uses this information to derive inter-node data mappings, which control distributed-memory parallelization across the nodes of a cluster, and intra-node data mappings, which extract shared-memory parallelism within nodes. Additional mechanisms are proposed for specifying inter- and intra-node data mappings explicitly, for controlling specific shared-memory parallelization issues, and for integrating OpenMP routines into HPF applications. The proposed features have been realized within the ADAPTOR and VFC compilers. We discuss the parallelization strategy these compilers adopt for clusters of SMPs, together with a hybrid-parallel execution model based on a combination of MPI and OpenMP. Experimental results indicate the effectiveness of the proposed features. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Component-based problem-solving environments for large-scale scientific computing
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13-15 2002. Chris Johnson
Abstract In this paper we discuss three scientific-computing problem-solving environments: SCIRun, BioPSE and Uintah. We begin with an overview of the systems, describe their underlying software architectures, discuss implementation issues, and give examples of their use in computational science and engineering applications. We conclude by discussing future research and development plans for the three problem-solving environments. Copyright © 2002 John Wiley & Sons, Ltd. [source]

A review of reliable numerical models for three-dimensional linear parabolic problems
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2007. I. Faragó
Abstract The preservation of characteristic qualitative properties of different phenomena is an increasingly important requirement in the construction of reliable numerical models. For phenomena that can be described mathematically by linear partial differential equations of parabolic type (such as heat conduction, diffusion and the pricing of options), the most important qualitative properties are the maximum–minimum principle, non-negativity preservation and maximum-norm contractivity. In this paper we analyse the discrete analogues of these properties for finite difference and finite element models, and we give a systematic overview of conditions that guarantee the required properties a priori. We have chosen the heat conduction process to illustrate the main concepts, but engineers and scientists involved in scientific computing can easily reformulate the results for other problems, too. Copyright © 2006 John Wiley & Sons, Ltd. [source]
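The qualitative properties this review lists can be seen in the simplest possible setting. The sketch below is standard textbook material rather than anything drawn from the paper: it integrates the 1D heat equation u_t = alpha * u_xx with the explicit central-difference scheme, for which the classical condition r = alpha*dt/dx^2 <= 1/2 makes every update a convex combination of the old neighbouring values, so the discrete maximum–minimum principle and non-negativity preservation hold a priori. All parameter values are illustrative.

    # Explicit finite-difference scheme for u_t = alpha * u_xx on [0, 1]
    # with homogeneous Dirichlet boundaries. With r <= 1/2, each new value
    # is a convex combination of old neighbours, so the discrete solution
    # stays within its initial bounds and remains non-negative.
    import numpy as np

    alpha = 1.0
    nx = 101
    dx = 1.0 / (nx - 1)
    r = 0.5                               # at the monotonicity limit
    dt = r * dx**2 / alpha

    x = np.linspace(0.0, 1.0, nx)
    u = np.sin(np.pi * x)                 # non-negative initial profile

    for _ in range(2000):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])

    # Qualitative checks: no undershoot below 0, no overshoot above the
    # initial maximum (discrete maximum-minimum principle)
    assert u.min() >= 0.0 and u.max() <= 1.0
    print("max after diffusion:", round(u.max(), 4))

Taking r slightly above 1/2 breaks the convex-combination structure and, with rougher initial data, produces the oscillations and negative values that the review's a-priori conditions are designed to exclude.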
LARGE-SCALE SIMULATION OF THE HUMAN ARTERIAL TREE
CLINICAL AND EXPERIMENTAL PHARMACOLOGY AND PHYSIOLOGY, Issue 2 2009. L. Grinberg
SUMMARY
1. Full-scale simulations of the virtual physiological human (VPH) will require significant advances in modelling, multiscale mathematics and scientific computing, as well as further advances in medical imaging. Herein, we review some of the main issues that need to be resolved in order to make three-dimensional (3D) simulations of blood flow in the human arterial tree feasible in the near future.
2. A straightforward approach is computationally prohibitive even on the emerging petaflop supercomputers, so a three-level hierarchical approach based on vessel size is required, consisting of: (i) a macrovascular network (MaN); (ii) a mesovascular network (MeN); and (iii) a microvascular network (MiN). We present recent simulations of MaN obtained by solving the 3D Navier–Stokes equations on arterial networks with tens of arteries and bifurcations, accounting for the neglected dynamics through proper boundary conditions.
3. A multiscale simulation coupling MaN, MeN and MiN and running on hundreds of thousands of processors on petaflop computers will require no more than a few CPU hours per cardiac cycle within the next 5 years. The rapidly growing capacity of supercomputing centres opens up the possibility of simulation studies of cardiovascular diseases, drug delivery, perfusion in the brain and other pathologies. [source]
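The phrase "accounting for the neglected dynamics through proper boundary conditions" refers to replacing the truncated downstream vasculature with a reduced model at each outflow. As a toy illustration of that idea, entirely separate from the authors' 3D solver and with made-up geometry, the sketch below computes steady Poiseuille flow through a single bifurcation treated as a lumped resistor network; lumped models of this kind are one common way to close the outflow boundaries of a 3D macrovascular simulation.

    # Toy lumped model: steady Poiseuille flow through one bifurcation.
    # Each vessel is a resistor R = 8*mu*L/(pi*r**4); conservation of flow
    # at the junction gives the unknown junction pressure. All geometry
    # and pressure values are illustrative, not physiological data from
    # the paper.
    import numpy as np

    mu = 3.5e-3                            # blood viscosity [Pa s]
    # vessels: (length [m], radius [m])
    vessels = [(0.05, 4.0e-3),             # parent artery
               (0.04, 3.0e-3),             # daughter branch 1
               (0.04, 2.5e-3)]             # daughter branch 2
    R = [8 * mu * L / (np.pi * r**4) for (L, r) in vessels]
    g = [1.0 / Ri for Ri in R]             # conductances

    p_in, p_out = 13000.0, 0.0             # prescribed pressures [Pa]
    # Flow balance at the junction:
    #   (p_in - p1)*g0 = (p1 - p_out)*g1 + (p1 - p_out)*g2
    p1 = (g[0] * p_in + (g[1] + g[2]) * p_out) / (g[0] + g[1] + g[2])

    q = [(p_in - p1) * g[0], (p1 - p_out) * g[1], (p1 - p_out) * g[2]]
    assert abs(q[0] - q[1] - q[2]) < 1e-9  # mass conservation at junction
    print("junction pressure [Pa]:", round(p1, 1))
    print("flows [ml/s]:", [round(qi * 1e6, 2) for qi in q])

In a full MaN–MeN–MiN coupling, networks like this (or impedance/Windkessel variants) stand in for the millions of smaller vessels that the 3D Navier–Stokes domain cannot resolve.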