Computational Grids (computational + grid)

Selected Abstracts


Resource discovery and management in computational GRID environments

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2006
Alan Bradley
Abstract Corporations are currently using computational GRIDs to improve their operations. Future GRIDs will allow an organization to take advantage of computational GRIDs without having to develop a custom in-house solution. GRID resource providers (GRPs) make resources available on the GRID so that others may subscribe to and use these resources. GRPs will allow companies to make use of a range of resources such as processing power or mass storage. However, simply providing resources is not enough to ensure the success of a computational GRID: access to these resources must be controlled; otherwise computational GRIDs will simply become victims of their own success, unable to offer a suitable quality of service (QoS) to any user. The task of providing a standard querying mechanism for computational GRID environments (CGE) has already seen considerable work from groups such as the Globus project, which has delivered the Metacomputing Directory Service (MDS), a means to query devices attached to the GRID. This paper presents a review of existing resource discovery mechanisms within CGE. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Towards an integrated GIS-based coastal forecast workflow

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2008
Gabrielle Allen
Abstract The SURA Coastal Ocean Observing and Prediction (SCOOP) program is using geographical information system (GIS) technologies to visualize and integrate distributed data sources from across the United States and Canada. Hydrodynamic models are run at different sites on a developing multi-institutional computational Grid. Some of these predictive simulations of storm surge and wind waves are triggered by tropical and subtropical cyclones in the Atlantic and the Gulf of Mexico. Model predictions and observational data need to be merged and visualized in a geospatial context for a variety of analyses and applications. A data archive at LSU aggregates the model outputs from multiple sources, and a data-driven workflow triggers remotely performed conversion of a subset of model predictions to georeferenced data sets, which are then delivered to a Web Map Service located at Texas A&M University. Other nodes in the distributed system aggregate the observational data. This paper describes the use of GIS within the SCOOP program for the 2005 hurricane season, along with details of the data-driven distributed dataflow and workflow that result in geospatial products. We also discuss future plans for the complementary use of GIS and Grid technologies in the SCOOP program, through which we hope to provide a wider range of tools that can enhance the capabilities of earth science research and hazard planning. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Job completion prediction using case-based reasoning for Grid computing environments

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2007
Lilian Noronha Nassif
Abstract One of the main focuses of Grid computing is solving resource-sharing problems in multi-institutional virtual organizations. In such heterogeneous and distributed environments, selecting the best resource to run a job is a complex task. The solutions currently employed still present numerous challenges, and one of them is how to let users know when a job will finish. Consequently, advance reservation remains unavailable. This article presents a new approach, which predicts job execution time on the Grid by applying the case-based reasoning paradigm. The work includes the development of a new case retrieval algorithm involving relevance-sequence and similarity-degree calculations. The prediction model is part of a multi-agent system that selects the best resource of a computational Grid to run a job. Agents representing candidate resources for job execution make predictions in a distributed and parallel manner. The technique presented here can be used in Grid environments at operation time to assist users with batch job submissions. Experimental results validate the prediction accuracy of the proposed mechanisms and the performance of our case retrieval algorithm. Copyright © 2006 John Wiley & Sons, Ltd. [source]
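The retrieval-and-predict step of case-based runtime estimation can be sketched as follows. This is a minimal illustration under stated assumptions: the feature names, weights, and inverse-distance similarity are hypothetical stand-ins, not the paper's relevance-sequence algorithm.

```python
import math

def similarity(a, b, weights):
    """Inverse-distance similarity between two job feature vectors."""
    d = math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))
    return 1.0 / (1.0 + d)

def predict_runtime(case_base, query, weights, k=3):
    """Similarity-weighted average runtime of the k most similar past cases."""
    scored = sorted(((similarity(f, query, weights), t) for f, t in case_base),
                    reverse=True)[:k]
    total = sum(s for s, _ in scored)
    return sum(s * t for s, t in scored) / total

# Hypothetical past cases: (features = [input size MB, CPU speed GHz], runtime s)
cases = [([100, 2.0], 50.0), ([200, 2.0], 100.0), ([100, 3.0], 35.0)]
print(predict_runtime(cases, [150, 2.0], weights=[0.01, 1.0]))
```

In the multi-agent setting described above, each candidate resource's agent would run such a prediction locally over its own case base, so the estimates are produced in parallel.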


A peer-to-peer decentralized strategy for resource management in computational Grids

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2007
Antonella Di Stefano
Abstract This paper presents a peer-to-peer (P2P) approach for managing, in a computational Grid, those resources that are expressed as a numerical quantity and thus characterized by a coefficient of utilization, such as percentage of CPU time, disk space, or memory space. The proposed approach exploits spatial computing concepts and models a Grid by means of a flat P2P architecture consisting of nodes connected by an overlay network; such a network topology, together with the quantity of resource available at each node, forms a three-dimensional surface, where valleys correspond to nodes with a large quantity of available resource. In this scenario, the paper proposes an algorithm for resource discovery based on navigating this surface in search of the deepest valley (the global minimum, that is, the best node). The algorithm, which aims at fairly distributing the quantity of leased resource among nodes, is based on heuristics that mimic the laws of kinematics. Experimental results show the effectiveness of the algorithm. Copyright © 2006 John Wiley & Sons, Ltd. [source]
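The valley-seeking idea can be sketched as a walk over the overlay graph. This simplifies the paper's kinematics-inspired heuristics to a plain greedy descent (no momentum), with a toy overlay and made-up utilization values:

```python
def discover(overlay, load, start):
    """Greedy 'downhill' walk over the overlay: at each step move to the
    unvisited neighbour with the lowest utilization, stopping at a local
    minimum. A simplification of the kinematics-based heuristics."""
    current = start
    visited = {start}
    while True:
        candidates = [n for n in overlay[current] if n not in visited]
        if not candidates:
            return current
        best = min(candidates, key=load.get)
        if load[best] >= load[current]:
            return current          # no deeper valley among the neighbours
        visited.add(best)
        current = best

# Toy overlay: node -> neighbours; load = coefficient of utilization (0..1)
overlay = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
load = {"a": 0.9, "b": 0.6, "c": 0.7, "d": 0.2}
print(discover(overlay, load, "a"))  # walks a -> b -> d
```

A greedy walk can get trapped in a local valley; the momentum-like heuristics in the paper exist precisely to escape such traps and reach the global minimum.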


Studying protein folding on the Grid: experiences using CHARMM on NPACI resources under Legion

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2004
Anand Natrajan
Abstract One benefit of a computational Grid is the ability to run high-performance applications over distributed resources simply and securely. We demonstrated this benefit with an experiment in which we studied the protein-folding process with the CHARMM molecular simulation package over a Grid managed by Legion, a Grid operating system. High-performance applications can take advantage of Grid resources if the Grid operating system provides both low-level functionality and high-level services. We describe the nature of services provided by Legion for high-performance applications. Our experiences indicate that human factors continue to play a crucial role in the configuration of Grid resources, underlying resources can be problematic, Grid services must tolerate underlying problems or inform the user, and high-level services must continue to evolve to meet user requirements. Our experiment not only helped a scientist perform an important study, but also showed the viability of an integrated approach such as Legion's for managing a Grid. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Performance of computationally intensive parameter sweep applications on Internet-based Grids of computers: the mapping of molecular potential energy hypersurfaces

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2007
S. Reyes
Abstract This work focuses on the use of computational Grids for processing the large set of jobs arising in parameter sweep applications. In particular, we tackle the mapping of molecular potential energy hypersurfaces. For computationally intensive parameter sweep problems, performance models are developed to compare the parallel computation in a multiprocessor system with the computation on an Internet-based Grid of computers. We find that the relative performance of the Grid approach increases with the number of processors, being independent of the number of jobs. The experimental data, obtained using electronic structure calculations, fit the proposed performance expressions accurately. To automate the mapping of potential energy hypersurfaces, an application based on GRID superscalar is developed. It is tested on the prototypical case of the internal dynamics of acetone. Copyright © 2006 John Wiley & Sons, Ltd. [source]
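A toy version of such a performance comparison can be written down directly. The job counts, durations, processor counts, and per-job overhead below are hypothetical illustrations, not the paper's model or data:

```python
import math

def makespan(n_jobs, t_job, n_proc, overhead=0.0):
    """Idealized makespan for n_jobs independent, equal-length sweep jobs:
    jobs are dealt out in rounds over n_proc processors, each job paying a
    fixed scheduling/transfer overhead (hypothetical parameters)."""
    return math.ceil(n_jobs / n_proc) * (t_job + overhead)

# 1000 sweep points of 60 s each: a dedicated 32-processor system versus an
# Internet-based Grid with 5 s per-job overhead but four times the machines.
cluster = makespan(1000, 60.0, 32)
grid = makespan(1000, 60.0, 128, overhead=5.0)
print(cluster, grid)
```

Even this crude model reflects the abstract's observation: the Grid's relative advantage grows with the number of processors, while the number of jobs only scales both makespans together.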


An EasyGrid portal for scheduling system-aware applications on computational Grids

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2006
C. Boeres
Abstract One of the objectives of computational Grids is to offer applications the collective computational power of distributed but typically shared heterogeneous resources. Unfortunately, efficiently harnessing the performance potential of such systems (i.e. how and where applications should execute on the Grid) is a challenging endeavor due principally to the very distributed, shared and heterogeneous nature of the resources involved. A crucial step towards solving this problem is the need to identify both an appropriate scheduling model and scheduling algorithm(s). This paper presents a tool to aid the design and evaluation of scheduling policies suitable for efficient execution of system-aware parallel applications on computational Grids. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Ibis: a flexible and efficient Java-based Grid programming environment

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7-8 2005
Rob V. van Nieuwpoort
Abstract In computational Grids, performance-hungry applications need to simultaneously tap the computational power of multiple, dynamically available sites. The crux of designing Grid programming environments stems exactly from the dynamic availability of compute cycles: Grid programming environments (a) need to be portable to run on as many sites as possible, (b) need to be flexible to cope with different network protocols and dynamically changing groups of compute nodes, while (c) they need to provide efficient (local) communication that enables high-performance computing in the first place. Existing programming environments are either portable (Java), flexible (Jini, Java Remote Method Invocation (RMI)), or highly efficient (Message Passing Interface). No system combines all three properties that are necessary for Grid computing. In this paper, we present Ibis, a new programming environment that combines Java's 'run everywhere' portability both with flexible treatment of dynamically available networks and processor pools, and with highly efficient, object-based communication. Ibis can transfer Java objects very efficiently by combining streaming object serialization with a zero-copy protocol. Using RMI as a simple test case, we show that Ibis outperforms existing RMI implementations, achieving up to nine times higher throughputs with trees of objects. Copyright © 2005 John Wiley & Sons, Ltd. [source]


A spectral projection method for the analysis of autocorrelation functions and projection errors in discrete particle simulation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 7 2008
André Kaufmann
Abstract Discrete particle simulation is a well-established tool for the simulation of particles and droplets suspended in turbulent flows of academic and industrial applications. The study of some properties, such as the preferential concentration of inertial particles in regions of high shear and low vorticity, requires the computation of autocorrelation functions. This can be a tedious task, as the discrete point particles need to be projected in some manner to obtain continuous autocorrelation functions. Projection of particle properties onto a computational grid, for instance the grid of the carrier phase, is furthermore an issue when quantities such as particle concentrations are to be computed or source terms are exchanged between the carrier phase and the particles. The errors committed by commonly used projection methods are often unknown and difficult to analyse. Grid and sampling size limit the possibilities in terms of precision per computational cost. Here, we present a spectral projection method that is not affected by sampling issues and addresses all of the above issues. The technique is limited only by computational resources and is easy to parallelize. The only visible drawback is the restriction to simple geometries, which limits the method to academic applications. The spectral projection method consists of a discrete Fourier transform of the particle locations. The Fourier-transformed particle number density and momentum fields can then be used to compute the autocorrelation functions and the continuous physical-space fields for the evaluation of the projection methods' error. The number of Fourier components used to discretize the projector kernel can be chosen such that the corresponding characteristic length scale is as small as needed. This makes it possible to study particle motion, for example, in a region of preferential concentration that may be smaller than the cell size of the carrier-phase grid. The precision of the spectral projection method therefore depends only on the number of Fourier modes considered. Copyright © 2008 John Wiley & Sons, Ltd. [source]
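The core idea can be sketched in one dimension with NumPy: evaluate the Fourier coefficients of the particle number density directly from the particle locations (no grid), then obtain the autocorrelation from the power spectrum. A minimal sketch with synthetic, clustered positions in a periodic box; the paper's formulation is for turbulent two-phase flows and includes momentum fields as well.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_p, n_k = 2 * np.pi, 4000, 64          # box length, particles, modes

# Synthetic particle positions in a periodic 1-D box (half of them clustered,
# to mimic preferential concentration).
x = np.concatenate([rng.normal(np.pi, 0.3, n_p // 2) % L,
                    rng.uniform(0, L, n_p // 2)])

# Spectral projection: Fourier coefficients of the particle number density,
# computed directly from the particle locations -- no projection grid.
k = np.arange(n_k)
n_hat = np.exp(-1j * np.outer(k, x)).sum(axis=1) / n_p

# Wiener-Khinchin: the density autocorrelation is the inverse transform of
# the power spectrum |n_hat|^2, resolved down to the smallest retained mode.
autocorr = np.fft.irfft(np.abs(n_hat) ** 2, n=2 * n_k)
print(autocorr[:4])
```

Increasing `n_k` sharpens the resolved length scale without any grid refinement, which is exactly the property the abstract emphasizes.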


Reparallelization techniques for migrating OpenMP codes in computational grids

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 3 2009
Michael Klemm
Typical computational grid users target only a single cluster and have to estimate the runtime of their jobs. Job schedulers prefer short-running jobs to maintain a high system utilization. If the user underestimates the runtime, premature termination causes computation loss; overestimation is penalized by long queue times. As a solution, we present an automatic reparallelization and migration of OpenMP applications. A reparallelization is dynamically computed for an OpenMP work distribution when the number of CPUs changes. The application can be migrated between clusters when an allocated time slice is exceeded. Migration is based on a coordinated, heterogeneous checkpointing algorithm. Both reparallelization and migration enable the user to freely use computing time at more than a single point of the grid. Our demo applications successfully adapt to the changed CPU setting and smoothly migrate between, for example, clusters in Erlangen, Germany, and Amsterdam, the Netherlands, that use different kinds and numbers of processors. Benchmarks show that reparallelization and migration impose average overheads of about 4 and 2%, respectively. Copyright © 2008 John Wiley & Sons, Ltd. [source]
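The reparallelization step for a static OpenMP work distribution amounts to recomputing contiguous iteration blocks for the new thread count. A minimal sketch, assuming equal-cost iterations and an OpenMP-style static schedule (the function name and the example CPU counts are illustrative):

```python
def block_distribution(n_iters, n_threads):
    """OpenMP-style static schedule: split n_iters loop iterations into
    contiguous [start, end) blocks, the first (n_iters % n_threads)
    blocks being one iteration larger."""
    base, extra = divmod(n_iters, n_threads)
    bounds, start = [], 0
    for t in range(n_threads):
        size = base + (1 if t < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

# A migration from an 8-CPU to a 6-CPU cluster recomputes the blocks; each
# thread then resumes from the checkpointed global iteration state.
print(block_distribution(100, 8))
print(block_distribution(100, 6))
```

The heterogeneous checkpointing described above is what makes the second call meaningful: the saved state is independent of the old processor count, so the redistributed blocks can be resumed on different hardware.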


Statistical downscaling of daily precipitation from observed and modelled atmospheric fields

HYDROLOGICAL PROCESSES, Issue 8 2004
Stephen P. Charles
Abstract Statistical downscaling techniques have been developed to address the spatial scale disparity between the horizontal computational grids of general circulation models (GCMs), typically 300–500 km, and point-scale meteorological observations. This has been driven, predominantly, by the need to determine how enhanced greenhouse projections of future climate may impact at regional and local scales. As point-scale precipitation is a common input to hydrological models, there is a need for techniques that reproduce the characteristics of multi-site, daily gauge precipitation. This paper investigates the ability of the extended nonhomogeneous hidden Markov model (extended-NHMM) to reproduce observed interannual and interdecadal precipitation variability when driven by observed and modelled atmospheric fields. Previous studies have shown that the extended-NHMM can successfully reproduce the at-site and intersite statistics of daily gauge precipitation, such as the frequency characteristics of wet days, dry- and wet-spell length distributions, amount distributions, and intersite correlations in occurrence and amounts. Here, the extended-NHMM, as fitted to 1978–92 observed 'winter' (May–October) daily precipitation and atmospheric data for 30 rain gauge sites in southwest Western Australia, is driven by atmospheric predictor sets extracted from National Centers for Environmental Prediction–National Center for Atmospheric Research reanalysis data for 1958–98 and an atmospheric GCM hindcast run forced by observed 1955–91 sea-surface temperatures (SSTs). Downscaling from the reanalysis-derived predictors reproduces the 1958–98 interannual and interdecadal variability of winter precipitation. Downscaling from the SST-forced GCM hindcast only reproduces the precipitation probabilities of the recent 1978–91 period, with poor performance for earlier periods attributed to inadequacies in the forcing SST data. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Parallel computing of high-speed compressible flows using a node-based finite-element method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2003
T. Fujisawa
Abstract An efficient parallel computing method for high-speed compressible flows is presented. The numerical analysis of flows with shocks requires very fine computational grids, and grid generation requires a great deal of time. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed seamlessly in parallel in terms of nodes. Local finite-element mesh is generated robustly around each node, even for severe boundary shapes such as cracks. The algorithm and the data structure of finite-element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. The inter-processor communication is minimized by renumbering the nodal identification number using ParMETIS. The numerical scheme for high-speed compressible flows is based on the two-step Taylor–Galerkin method. The proposed method is implemented on distributed memory systems, such as an Alpha PC cluster, and a parallel supercomputer, Hitachi SR8000. The performance of the method is illustrated by the computation of supersonic flows over a forward-facing step. The numerical examples show that crisp shocks are effectively computed on multiprocessors at high efficiency. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Performance of very-high-order upwind schemes for DNS of compressible wall-turbulence

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 7 2010
G. A. Gerolymos
Abstract The purpose of the present paper is to evaluate very-high-order upwind schemes for the direct numerical simulation (DNS) of compressible wall-turbulence. We study upwind-biased (UW) and weighted essentially nonoscillatory (WENO) schemes of increasingly higher order-of-accuracy (J. Comput. Phys. 2000; 160:405–452), extended up to WENO17 (AIAA Paper 2009-1612, 2009). Analysis of the advection–diffusion equation, both as Δx→0 (consistency) and for fixed finite cell-Reynolds-number ReΔx (grid-resolution), indicates that the very-high-order upwind schemes have satisfactory resolution in terms of points-per-wavelength (PPW). Computational results for compressible channel flow (Reτ ∈ [180, 230]; MCL ∈ [0.35, 1.5]) are examined to assess the influence of the spatial order of accuracy and the computational grid-resolution on predicted turbulence statistics, by comparison with existing compressible and incompressible DNS databases. Despite the use of baseline O(Δt²) time-integration and O(Δx²) discretization of the viscous terms, comparative studies of various orders-of-accuracy for the convective terms demonstrate that very-high-order upwind schemes can reproduce all the DNS details obtained by pseudospectral schemes, on computational grids of only slightly higher density. Copyright © 2009 John Wiley & Sons, Ltd. [source]
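The points-per-wavelength notion can be illustrated with the modified wavenumber of the first-order upwind difference, a deliberately low-order stand-in for the UW/WENO schemes analysed in the paper (the 1% error threshold is an arbitrary choice for the sketch):

```python
import numpy as np

# Modified wavenumber of the first-order upwind difference
# (u_j - u_{j-1})/dx applied to exp(i k x): k' = (1 - exp(-i k dx))/(i dx).
dx = 1.0
k = np.linspace(1e-6, np.pi, 400)            # resolvable range: k*dx in (0, pi]
k_mod = (1 - np.exp(-1j * k * dx)) / (1j * dx)

# Points-per-wavelength needed to keep the dispersive error below 1%.
err = np.abs(k_mod.real - k) / k
ppw = 2 * np.pi / (k[err < 0.01].max() * dx)
print(round(ppw, 1))
```

A first-order scheme needs a few dozen points per wavelength at this tolerance; the point of the analysis in the abstract is that very-high-order UW/WENO schemes drive this requirement down toward the handful of points per wavelength achieved by pseudospectral methods.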


Influence of reaction mechanisms, grid spacing, and inflow conditions on the numerical simulation of lifted supersonic flames

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2010
P. Gerlinger
Abstract The simulation of supersonic combustion requires finite-rate chemistry because chemical and fluid mechanical time scales may be of the same order of magnitude. The size of the chosen reaction mechanism (number of species and reactions involved) has a strong influence on the computational time and thus should be chosen carefully. This paper investigates several hydrogen/air reaction mechanisms frequently used in supersonic combustion. It is shown that at low flight Mach numbers of a supersonic combustion ramjet (scramjet), some kinetic schemes can cause highly erroneous results. Moreover, extremely fine computational grids are required in the lift-off region of supersonic flames to obtain grid-independent solutions. The fully turbulent Mach 2 combustion experiment of Cheng et al. (Comb. Flame 1994; 99:157–173) is chosen to investigate the influences of different reaction mechanisms, grid spacing, and inflow conditions (contaminations caused by precombustion). A detailed analysis of the experiment will be given and errors of previous simulations are identified. Thus, the paper provides important information for an accurate simulation of the Cheng et al. experiment. The importance of this experiment results from the fact that it is the only supersonic combustion test case where temperature and species fluctuations have been measured simultaneously. Such data are needed for the validation of probability density function methods. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Comparative study of the continuous phase flow in a cyclone separator using different turbulence models,

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2005
H. Shalaby
Abstract Numerical calculations were carried out at the apex cone and various axial positions of a gas cyclone separator for industrial applications. Two different NS-solvers (a commercial one (CFX 4.4, ANSYS GmbH, Munich, Germany, CFX Solver Documentation, 1998) and a research code (Post-doctoral Thesis, Technical University of Chemnitz, Germany, September 2002)), based on a pressure correction algorithm of the SIMPLE method, have been applied to predict the flow behaviour. The flow was assumed to be unsteady, incompressible and isothermal. A k–ε turbulence model was applied first, using the commercial code, to investigate the gas flow. Due to the nature of cyclone flows, which exhibit highly curved streamlines and anisotropic turbulence, advanced turbulence models such as the Reynolds stress model (RSM) and large eddy simulation (LES) have been used as well. The RSM simulation was performed using the commercial package activating the Launder et al. approach (J. Fluid Mech. 1975; 68(3):537–566), while for the LES calculations the research code was applied utilizing the Smagorinsky model. It was found that the k–ε model cannot properly predict the flow phenomena inside the cyclone due to the strong curvature of the streamlines. The RSM results are comparable with the LES results in the area of the apex cone plane. However, the application of LES reveals qualitative agreement with the experimental data, but requires higher computer capacity and longer running times than RSM. This paper is organized into five sections. The first section consists of an introduction and a summary of previous work. Section 2 deals with turbulence modelling, including the governing equations and the three turbulence models used. In Section 3, computational parameters such as computational grids, boundary conditions and the solution algorithm are discussed with respect to the use of MISTRAL/PartFlow-3D. In Section 4, predicted profiles of the gas flow at axial and apex cone positions are presented and discussed. Section 5 summarizes and concludes the paper. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Employment of the second-moment turbulence closure on arbitrary unstructured grids

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2004
B. Basara
Abstract The paper presents a finite-volume calculation procedure using a second-moment turbulence closure. The proposed method is based on a collocated variable arrangement and is especially adapted for unstructured grids consisting of 'polyhedral' calculation volumes. The inclusion of ⅔k in the pressure is analysed and the impact of such an approach on the employment of the constant static pressure boundary is addressed. It is shown that this approach allows the removal of a standard but cumbersome velocity–pressure–Reynolds-stress coupling procedure known as an extension of the Rhie–Chow method (AIAA J. 1983; 21:1525–1532) for the Reynolds stresses. A novel wall treatment for the Reynolds-stress equations and 'polyhedral' calculation volumes is presented. Important issues related to the treatment of diffusion terms in the momentum and Reynolds-stress equations are also discussed and a new approach is proposed. Special interpolation practices, implemented in a deferred-correction fashion and related to all equations, are explained in detail. Computational results are compared with available experimental data for four very different applications: the flow in a two-dimensional 180° turned U-bend, the vortex-shedding flow around a square cylinder, the flow around the Ahmed body, and in-cylinder engine flow. Additionally, the performance of the methodology is assessed by applying it to different computational grids. For all test cases, predictions with the second-moment closure are compared to those of the k–ε model. The second-moment turbulence closure always achieves closer agreement with the measurements. A moderate increase in computing time is required for the calculations with the second-moment closure. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Fibonacci grids: A novel approach to global modelling

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 619 2006
Richard Swinbank
Abstract Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude–longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges); (iv) ease of parallelization. One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as 'Fibonacci grids'. These grids possess virtually uniform and isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of this type of grid in shallow-water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional weather prediction or climate models. © Crown copyright, 2006. Royal Meteorological Society [source]
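The equal-area, near-isotropic property can be illustrated with the closely related Fibonacci (golden-ratio) lattice on the sphere. This is the common spiral-lattice construction, given here as an illustration only; it is not the exact grid arrangement of the paper.

```python
import math

def fibonacci_grid(n):
    """Fibonacci lattice: n points with near-uniform, isotropic coverage of
    the unit sphere, each point representing an equal area (4*pi/n)."""
    golden = (1 + math.sqrt(5)) / 2
    pts = []
    for i in range(n):
        lat = math.asin(2 * (i + 0.5) / n - 1)       # equal-area latitude bands
        lon = (2 * math.pi * i / golden) % (2 * math.pi)
        pts.append((lat, lon))
    return pts

pts = fibonacci_grid(1000)
print(len(pts), 4 * math.pi / len(pts))   # points and steradians per point
```

Because each latitude band carries equal area and successive longitudes advance by the golden angle, the points avoid the meridian convergence that afflicts regular latitude–longitude grids; only the two polar caps need special numerical treatment, matching the two compact singular regions noted above.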