Grid
Selected Abstracts

SEMANTICS-ASSISTED PROBLEM SOLVING ON THE SEMANTIC GRID
COMPUTATIONAL INTELLIGENCE, Issue 2 2005
Liming Chen

In this paper we propose a distributed knowledge management framework for semantics and knowledge creation, population, and reuse on the grid. Its objective is to evolve the Grid toward the Semantic Grid with the ultimate purpose of facilitating problem solving in e-Science. The framework uses ontology as the conceptual backbone and adopts the service-oriented computing paradigm for information- and knowledge-level computation. We further present a semantics-based approach to problem solving, which exploits the rich semantic information of grid resource descriptions for resource discovery, instantiation, and composition. The framework and approach have been applied to a UK e-Science project, Grid Enabled Engineering Design Search and Optimisation in Engineering (GEODISE). An ontology-enabled problem solving environment (PSE) has been developed in GEODISE to leverage the semantic content of GEODISE resources and the Semantic Grid infrastructure for engineering design. Implementation and initial experimental results are reported.

BUILDING A DATA-MINING GRID FOR MULTIPLE HUMAN BRAIN DATA ANALYSIS
COMPUTATIONAL INTELLIGENCE, Issue 2 2005
Ning Zhong

E-Science is about global collaboration in key areas of science, such as cognitive science and brain science, and the next generation of infrastructure, such as the Wisdom Web and Knowledge Grids. As a case study, we investigate the human multiperception mechanism by cooperatively using various psychological experiments, physiological measurements, and data-mining techniques to develop artificial systems that match human ability in specific aspects. In particular, we observe fMRI (functional magnetic resonance imaging) and EEG (electroencephalogram) brain activations from the viewpoint of peculiarity-oriented mining and propose a way of peculiarity-oriented mining for knowledge discovery in multiple human brain data. Based on such experience and needs, we concentrate on the architectural aspect of a brain-informatics portal from the perspective of the Wisdom Web and Knowledge Grids. We describe how to build a data-mining grid on the Wisdom Web for multiaspect human brain data analysis. The proposed methodology attempts to change the perspective of cognitive scientists from a single type of experimental data analysis toward a holistic view over a long-term, global field of vision.
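A brief illustration of the peculiarity-oriented mining idea above: a value's peculiarity factor grows with its distance from all other values in the data set, and conspicuously peculiar values are selected for further analysis. The Python sketch below is a minimal rendering of that idea; the selection threshold and parameter values are illustrative assumptions, not the authors' exact procedure.

import numpy as np

def peculiarity_factor(values, alpha=0.5):
    # PF(x_i) = sum over k of |x_i - x_k|^alpha: larger means more peculiar.
    v = np.asarray(values, dtype=float)
    return np.array([np.sum(np.abs(x - v) ** alpha) for x in v])

def select_peculiar(values, beta=1.0):
    # Illustrative threshold: flag values whose PF exceeds mean + beta * std.
    pf = peculiarity_factor(values)
    return np.where(pf > pf.mean() + beta * pf.std())[0]

# Example: one voxel's activation across trials; the outlier trial is flagged.
print(select_peculiar([0.9, 1.0, 1.1, 0.95, 3.2]))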
GENEALOGIES OF THE GRID: REVISITING STANISLAWSKI'S SEARCH FOR THE ORIGIN OF THE GRID-PATTERN TOWN
GEOGRAPHICAL REVIEW, Issue 1 2008
ROSE-REDWOOD, REUBEN S.

As a spatial form, the grid pattern has influenced a range of human activities, from urban planning, architecture, and modern art to graphic design, archaeology, and cartography. Scholars from different disciplines have generally explored the role of the grid within their respective fields of inquiry. One of the earliest geographical attempts to systematically trace the origin and diffusion of the grid-pattern town was provided by Dan Stanislawski in the mid-twentieth century. In this article I critically examine the limitations of Stanislawski's theory of the grid's origin as a means of challenging the doctrine of diffusionism more generally. I then provide a selective overview of recent approaches to understanding the grid and call for a comparative genealogy of gridded spaces and places.

An efficient MAC protocol for multi-channel mobile ad hoc networks based on location information
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2006
Yu-Chee Tseng

This paper considers the channel assignment problem in a multi-channel MANET environment. We propose a scheme called GRID, by which a mobile host can easily determine which channel to use based on its current location. In fact, following the GSM style, our GRID spends no communication cost to allocate channels to mobile hosts, since channel assignment is purely determined by hosts' physical locations. We show that this can improve the channel reuse ratio. We then propose a multi-channel MAC protocol, which integrates GRID. Our protocol is characterized by the following features: (i) it follows an 'on-demand' style to access the medium, and thus a mobile host will occupy a channel only when necessary; (ii) the number of channels required is independent of the network topology; and (iii) no form of clock synchronization is required. On the other hand, most existing protocols assign channels to a host statically even if it has no intention to transmit [IEEE/ACM Trans. Networks 1995; 3(4):441–449; 1993; 1(6):668–677; IEEE J. Selected Areas Commun. 1999; 17(8):1345–1352], require a number of channels that is a function of the maximum connectivity [IEEE/ACM Trans. Networks 1995; 3(4):441–449; 1993; 1(6):668–677; Proceedings of IEEE MILCOM'97, November 1997; IEEE J. Selected Areas Commun. 1999; 17(8):1345–1352], or necessitate clock synchronization among all hosts in the MANET [IEEE J. Selected Areas Commun. 1999; 17(8):1345–1352; Proceedings of IEEE INFOCOM'99, October 1999]. Through simulations, we demonstrate the advantages of our protocol. Copyright © 2005 John Wiley & Sons, Ltd.
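The appeal of GRID's assignment is that it needs no signalling: a host derives its channel from its own coordinates alone. The Python sketch below is a toy location-to-channel mapping in that spirit; the cell size, channel count, and reuse pattern are illustrative assumptions, not the paper's exact scheme.

def grid_channel(x, y, cell=100.0, n_channels=9):
    # Locate the host's grid cell from its coordinates...
    col = int(x // cell)
    row = int(y // cell)
    # ...then map the cell to a channel so each channel repeats only in
    # sufficiently distant cells (GSM-like reuse; assumes a 3x3 cluster).
    return (3 * row + col) % n_channels

# Any host at this position picks the same channel, with no message exchange.
print(grid_channel(250.0, 430.0))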
Resource discovery and management in computational GRID environments
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2006
Alan Bradley

Corporations are currently using computational GRIDs to improve their operations. Future GRIDs will allow an organization to take advantage of computational GRIDs without having to develop a custom in-house solution. GRID resource providers (GRPs) make resources available on the GRID so that others may subscribe to and use these resources. GRPs will allow companies to make use of a range of resources, such as processing power or mass storage. However, simply providing resources is not enough to ensure the success of a computational GRID: access to these resources must be controlled, otherwise computational GRIDs will simply become victims of their own success, unable to offer a suitable quality of service (QoS) to any user. The task of providing a standard querying mechanism for computational GRID environments (CGE) has already witnessed considerable work from groups such as the Globus project, which has delivered the Metacomputing Directory Service (MDS), a means to query devices attached to the GRID. This paper presents a review of existing resource discovery mechanisms within CGE. Copyright © 2005 John Wiley & Sons, Ltd.

A Three-Dimensional Quantitative Structure-Activity Relationship (3D-QSAR) Model for Predicting the Enantioselectivity of Candida antarctica Lipase B
ADVANCED SYNTHESIS & CATALYSIS (PREVIOUSLY: JOURNAL FUER PRAKTISCHE CHEMIE), Issue 9 2009
Paolo Braiuca

Computational techniques involving molecular modeling coupled with multivariate statistical analysis were used to evaluate and predict quantitatively the enantioselectivity of lipase B from Candida antarctica (CALB). In order to allow the mathematical and statistical processing of the experimental data largely available in the literature (namely, the enantiomeric ratio E), a novel class of GRID-based molecular descriptors was developed (differential molecular interaction fields, or DMIFs). These descriptors proved efficient in providing the structural information needed for computing the regression model. Multivariate statistical methods based on PLS (partial least squares, projection to latent structures) were used for the analysis of data available from the literature and for the construction of the first three-dimensional quantitative structure-activity relationship (3D-QSAR) model able to predict the enantioselectivity of CALB. Our results indicate that the model is statistically robust and predictive.
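At its statistical core, such a model is a PLS regression from many collinear field-based descriptors to the measured selectivity. A minimal scikit-learn sketch on synthetic data follows; the matrix shapes and names stand in for the paper's DMIF descriptors and are purely illustrative.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))   # 30 substrates x 200 probe points (stand-in for DMIFs)
y = X @ rng.normal(size=200) * 0.01 + rng.normal(scale=0.1, size=30)  # synthetic log E

# Project the correlated descriptors onto a few latent structures, then regress.
model = PLSRegression(n_components=3).fit(X, y)
print(round(model.score(X, y), 3))   # R^2 of the fitted model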
Interior Design at a Crossroads: Embracing Specificity through Process, Research, and Knowledge
JOURNAL OF INTERIOR DESIGN, Issue 3 2008
Tiiu Poldma, Ph.D.

Tiiu Poldma is Vice Dean of Graduate Studies and Research in the Faculty of Environmental Design and associate professor at the School of Industrial Design at the University of Montreal. She received a BID from Ryerson (Toronto) in 1982, an MA in Culture and Values in Education in 1999, and a Doctor of Philosophy in 2003, the latter two from McGill University in Montreal, Canada. She teaches interior design studio and theory within the Bachelor of Interior Design program at the University of Montreal, and advanced research methodologies in the Masters of Science and Ph.D. programs at the Faculty of Environmental Design. She is currently the Director of the research group GRID (Group for Research in Illumination and Design) and heads the Colour, Light and Form Lab (Laboratoire Forme*couleur*lumière) at the faculty. She accredits design programs as a site evaluator for CIDA both in Canada and the United States, is a member of the Editorial Board of Inderscience, where she is the Regional Editor of the Journal of Design Research (JDR), and serves on the Editorial Board of Design/Science/Planning (Techne Press, Amsterdam).

Particle Level Set Advection for the Interactive Visualization of Unsteady 3D Flow
COMPUTER GRAPHICS FORUM, Issue 3 2008
Nicolas Cuntz

Typically, flow volumes are visualized by defining their boundary as an iso-surface of a level set function. Grid-based level sets offer a good global representation but suffer from numerical diffusion of surface detail, whereas particle-based methods preserve details more accurately but introduce the problem of unequal global representation. The particle level set (PLS) method combines the advantages of both approaches by interchanging the information between the grid and the particles. Our work demonstrates that the PLS technique can be adapted to volumetric dye advection via streak volumes, and to visualization by time surfaces and path volumes. We achieve this with a modified and extended PLS, including a model for dye injection. A new algorithmic interpretation of PLS is introduced to exploit the efficiency of the GPU, leading to interactive visualization. Finally, we demonstrate the high quality and usefulness of PLS flow visualization by providing quantitative results on volume preservation and by discussing typical applications of 3D flow visualization.
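The exchange at the heart of PLS: marker particles that end up on the wrong side of the zero level set ("escaped" particles) locally rebuild the grid values, restoring detail that grid advection diffused away. The 1D Python sketch below is a heavily simplified illustration of that correction step, not the paper's 3D GPU formulation.

import numpy as np

def pls_correct(phi, xs, particles):
    # particles: (position, sign, radius); sign is +1/-1 for the side of the
    # interface the particle belongs to, radius its distance to the interface.
    phi = phi.copy()
    for p, s, r in particles:
        if np.sign(np.interp(p, xs, phi)) != s:      # particle has escaped
            local = s * (r - np.abs(xs - p))          # level set implied by it
            phi = np.maximum(phi, local) if s > 0 else np.minimum(phi, local)
    return phi

xs = np.linspace(0.0, 1.0, 11)
smeared = (xs - 0.5) * 0.2            # interface at x = 0.5, flattened by diffusion
print(pls_correct(smeared, xs, [(0.4, 1, 0.05)]))   # escaped particle repairs phi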
Maximizing revenue in Grid markets using an economically enhanced resource manager
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2010
M. Macías

Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit, and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness. Copyright © 2008 John Wiley & Sons, Ltd.
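One way to picture revenue maximization across multiple SLAs is as an admission decision: with finite capacity, prefer the requests with the highest revenue per unit of resource. The Python sketch below is a toy greedy illustration of that framing only; it is not the EERM's actual policy, and all names and numbers are assumptions.

def admit(requests, cpu_capacity):
    # requests: (sla_id, revenue, cpu_hours); rank by revenue density.
    ranked = sorted(requests, key=lambda r: r[1] / r[2], reverse=True)
    accepted, used = [], 0.0
    for sla_id, revenue, cpu in ranked:
        if used + cpu <= cpu_capacity:   # accept only while capacity remains
            accepted.append(sla_id)
            used += cpu
    return accepted

print(admit([("sla-a", 90.0, 10.0), ("sla-b", 50.0, 2.0), ("sla-c", 60.0, 8.0)], 12.0))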
Managing very large distributed data sets on a data grid
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2010
Miguel Branco

In this work we address the management of very large data sets, which need to be stored and processed across many computing sites. The motivation for our work is the ATLAS experiment for the Large Hadron Collider (LHC), where the authors have been involved in the development of the data management middleware. This middleware, called DQ2, has been used for the last several years by the ATLAS experiment for shipping petabytes of data to research centres and universities worldwide. We describe our experience in developing and deploying DQ2 on the Worldwide LHC Computing Grid, a production Grid infrastructure formed of hundreds of computing sites. From this operational experience, we have identified an important degree of uncertainty that underlies the behaviour of large Grid infrastructures. This uncertainty is subjected to a detailed analysis, leading us to present novel modelling and simulation techniques for Data Grids. In addition, we discuss what we perceive as practical limits to the development of data distribution algorithms for Data Grids given the underlying infrastructure uncertainty, and propose future research directions. Copyright © 2009 John Wiley & Sons, Ltd.

A decentralized and fault-tolerant Desktop Grid system for distributed applications
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 3 2010
Heithem Abbes

This paper proposes a decentralized and fault-tolerant software system for managing Desktop Grid resources. Its main design principle is to eliminate the need for a centralized server, and therefore to remove the single point of failure and bottleneck of existing Desktop Grids. Instead, each node can play alternately the role of client or server. Our main contribution is the design of the PastryGrid protocol (based on Pastry) for Desktop Grids, in order to support a wider class of applications, especially distributed applications with precedence between tasks. We evaluate our approach against a centralized system, over 205 machines executing 2500 tasks. The results we obtain show that our decentralized system outperforms XtremWeb-CH configured as master/slave, with respect to turnaround time. Copyright © 2009 John Wiley & Sons, Ltd.

Grids challenged by a Web 2.0 and multicore sandwich
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 3 2009
Geoffrey Fox

We discuss the application of Web 2.0 to support scientific research (e-Science) and related 'e-more or less anything' applications. Web 2.0 offers interesting technical approaches (protocols, message formats, and programming tools) to build core e-infrastructure (cyberinfrastructure), as well as many interesting services (Facebook, YouTube, Amazon S3/EC2, and Google Maps) that can add value to e-infrastructure projects. We discuss why some of the original Grid goals of linking the world's computer systems may not be so relevant today, and why interoperability is needed at the data level but not always at the infrastructure level. Web 2.0 may also support Parallel Programming 2.0, a better parallel computing software environment motivated by the need to run commodity applications on multicore chips. A 'Grid on the chip' will be a common use of future chips with tens or hundreds of cores. Copyright © 2008 John Wiley & Sons, Ltd.

Toward replication in grids for digital libraries with freshness and correctness guarantees
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 17 2008
Fuat Akal

Building digital libraries (DLs) on top of data grids while facilitating data access and minimizing access overheads is challenging. To achieve this, replication in a Grid has to provide dedicated features that are only partly supported by existing Grid environments. First, it must provide transparent and consistent access to distributed data. Second, it must dynamically control the creation and maintenance of replicas. Third, it should allow higher replication granularities, i.e. beyond individual files. Fourth, users should be able to specify their freshness demands, i.e. whether they need the most recent data or are satisfied with slightly outdated data. Finally, all these tasks must be performed efficiently. This paper presents an approach that will finally allow one to build a fully integrated and self-managing replication subsystem for data grids that provides all the above features. Our approach is to start with an accepted replication protocol for database clusters, namely PDBREP, and to adapt it to the grid. Copyright © 2008 John Wiley & Sons, Ltd.
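The fourth requirement above suggests a simple decision rule: serve each read from the cheapest replica whose staleness is within the user's stated bound. The Python sketch below illustrates that rule only; the replica fields and cost model are assumptions for illustration, not part of PDBREP.

def pick_replica(replicas, max_staleness):
    # replicas: (site, staleness_seconds, access_cost); honour the user's
    # freshness demand first, then minimize access cost among fresh copies.
    fresh = [r for r in replicas if r[1] <= max_staleness]
    return min(fresh, key=lambda r: r[2]) if fresh else None  # None: go to master

print(pick_replica([("siteA", 0, 9.0), ("siteB", 45, 2.5), ("siteC", 600, 1.0)], 60))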
Security in distributed metadata catalogues
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 17 2008
Nuno Santos

Catalogue services provide the discovery and location mechanisms that allow users and applications to locate data on Grids. Replication is a highly desirable feature in these services, since it provides the scalability and reliability required on large data Grids and is the basis for federating catalogues from different organizations. Grid catalogues are often used to store sensitive data and must have access control mechanisms to protect their data. Replication has to take this security policy into account, making sure that replicated information cannot be abused, while allowing some flexibility, such as selective replication for sites depending on the level of trust in them. In this paper we discuss the security requirements and implications of several replication scenarios for Grid catalogues, based on experiences gained within the EGEE project. Using the security infrastructure of the EGEE Grid as a basis, we then propose a security architecture for replicated Grid catalogues, which, among other features, supports partial and total replication of the security mechanisms on the master. The implementation of this architecture in the AMGA metadata catalogue of the EGEE project is then described, including its application to a complex scenario in a biomedical application. Copyright © 2008 John Wiley & Sons, Ltd.

A context- and role-driven scientific workflow development pattern
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2008
Wanchun Dou

Scientific workflow execution often demands data-centric and computation-intensive collaboration, which differs from process-centric workflow execution with fixed execution specifications. Scientific workflow execution thus challenges the traditional workflow development strategy in dynamic context management and role definition. In view of this observation, application context spectrums are first distinguished among the different profiles of scientific workflow development. Then, a role enactment strategy is proposed for enabling workflow execution in a given application context. Together, these enhance the validity of scientific workflow development by clearly articulating the correlation between the computational subjects and computational objects engaged in a scientific workflow system. Furthermore, a novel context- and role-driven scientific workflow development pattern is proposed for enacting a scientific workflow system on the Grid. Finally, a case study is presented to demonstrate the generic nature of the methods in this paper. Copyright © 2008 John Wiley & Sons, Ltd.

The development of a geospatial data Grid by integrating OGC Web services with Globus-based Grid technology
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2008
Liping Di

Geospatial science is the science and art of acquiring, archiving, manipulating, analyzing, communicating, modeling with, and utilizing spatially explicit data for understanding physical, chemical, biological, and social systems on or near the Earth's surface. In order to share distributed geospatial resources and facilitate interoperability, the Open Geospatial Consortium (OGC), an industry-government-academia consortium, has developed a set of widely accepted Web-based interoperability standards and protocols. Grid is the technology enabling resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations. A geospatial Grid is an extension and application of Grid technology in the geospatial discipline. This paper discusses the problems associated with directly using Globus-based Grid technology in the geospatial disciplines, the needs for geospatial Grids, and the features of geospatial Grids. The paper then presents a research project that develops and deploys a geospatial Grid by integrating the Web-based geospatial interoperability standards and technology developed by OGC with Globus-based Grid technology. The geospatial Grid technology developed by this project makes interoperable, personalized, on-demand data access and services a reality at large geospatial data archives. Such a technology can significantly reduce the problems associated with archiving, manipulating, analyzing, and utilizing large volumes of geospatial data at distributed locations. Copyright © 2008 John Wiley & Sons, Ltd.

Towards an integrated GIS-based coastal forecast workflow
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2008
Gabrielle Allen

The SURA Coastal Ocean Observing and Prediction (SCOOP) program is using geographical information system (GIS) technologies to visualize and integrate distributed data sources from across the United States and Canada. Hydrodynamic models are run at different sites on a developing multi-institutional computational Grid. Some of these predictive simulations of storm surge and wind waves are triggered by tropical and subtropical cyclones in the Atlantic and the Gulf of Mexico. Model predictions and observational data need to be merged and visualized in a geospatial context for a variety of analyses and applications. A data archive at LSU aggregates the model outputs from multiple sources, and a data-driven workflow triggers remotely performed conversion of a subset of model predictions to georeferenced data sets, which are then delivered to a Web Map Service located at Texas A&M University. Other nodes in the distributed system aggregate the observational data. This paper describes the use of GIS within the SCOOP program for the 2005 hurricane season, along with details of the data-driven distributed dataflow and workflow that result in geospatial products. We also discuss future plans for the complementary use of GIS and Grid technologies in the SCOOP program, through which we hope to provide a wider range of tools that can enhance the capabilities of earth science research and hazard planning. Copyright © 2008 John Wiley & Sons, Ltd.
Dynamic data replication in LCG 2008
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2008
C. Nicholson

To provide high-performance access to data from high-energy physics experiments such as the Large Hadron Collider (LHC), controlled replication of files among grid sites is required. Dynamic, automated replication in response to jobs may also be useful and has been investigated using the grid simulator OptorSim. In this paper, results are presented from simulations of the LHC Computing Grid in 2008, in a physics analysis scenario. These show, first, that dynamic replication does give improved job throughput; second, that for this complex grid system, simple replication strategies such as Least Recently Used and Least Frequently Used are as effective as more advanced economic models; third, that grid site policies that allow maximum resource sharing are more effective; and lastly, that dynamic replication is particularly effective when data access patterns include some files being accessed more often than others, such as with a Zipf-like distribution. Copyright © 2008 John Wiley & Sons, Ltd.
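The strategies compared above behave like cache policies applied to a site's replica store. As a toy illustration, the Python sketch below drives Least Recently Used eviction with a Zipf-like access stream; the store capacity, file count, and skew parameter are arbitrary illustrative choices.

import random
from collections import OrderedDict

def zipf_file(n_files, s=1.0):
    # File k is requested with weight 1 / k^s: a few files dominate accesses.
    weights = [1.0 / k ** s for k in range(1, n_files + 1)]
    return random.choices(range(n_files), weights=weights)[0]

class LRUReplicaStore:
    def __init__(self, capacity):
        self.capacity, self.files = capacity, OrderedDict()
    def access(self, f):
        if f in self.files:
            self.files.move_to_end(f)        # refresh recency on a hit
            return True
        if len(self.files) >= self.capacity:
            self.files.popitem(last=False)   # evict least recently used replica
        self.files[f] = True                 # replicate the file locally
        return False

store, hits = LRUReplicaStore(50), 0
for _ in range(10_000):
    hits += store.access(zipf_file(500))
print(hits / 10_000)   # skewed (Zipf-like) access keeps the hit ratio high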
GOLD infrastructure for virtual organizations
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2008
P. Periorellis

The paper discusses the GOLD project (Grid-based Information Models to Support the Rapid Innovation of New High Value-Added Chemicals), whose principal aim is to carry out research and development into enabling technologies that support the formation, operation, and termination of virtual organizations. The paper discusses the outcome of this research, the GOLD middleware infrastructure. The infrastructure has been implemented as a set of middleware components that address issues such as trust, security, contract monitoring and enforcement, information management, and coordination. We discuss each of these issues in turn and, more importantly, demonstrate how current WS standards can be used to implement them. The paper follows a top-down approach, starting with a brief outline of the architectural elements derived during the requirements engineering phase, and demonstrates how these elements were mapped onto actual services implemented according to service-oriented architecture principles and related technologies. Copyright © 2008 John Wiley & Sons, Ltd.

Distributed end-host multicast algorithms for the Knowledge Grid
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2007
Wanqing Tu

The Knowledge Grid built on top of the peer-to-peer (P2P) network has been studied to implement scalable, available and semantic-based querying. In order to improve the efficiency and scalability of querying, this paper studies the problem of multicasting queries in the Knowledge Grid. An m-dimensional irregular mesh is a popular overlay topology for P2P networks. We present a set of novel distributed algorithms on top of an m-dimensional irregular mesh overlay for short-delay, low-network-resource-consumption end-host multicast services. Our end-host multicast fully utilizes the advantages of an m-dimensional mesh to construct a two-layer architecture. Compared to previous approaches, the novelty and contribution here are: (1) cluster formation, which partitions the group members into clusters in the lower layer, where each cluster consists of a small number of members; (2) cluster core selection, which searches for a core with the minimum sum of overlay hops to all other cluster members in each cluster; (3) weighted shortest-path tree construction, which guarantees that the minimum number of shortest paths is occupied by the multicast traffic; (4) distributed multicast routing, which directs the multicast messages to be efficiently distributed along the two-layer multicast architecture in parallel, without global control; the routing scheme enables packets to be transmitted to remote end hosts within short delays through common shortest paths; and (5) multicast path maintenance, which restores normal communication once the membership alters. Simulation results show that our end-host multicast can distributively achieve shorter-delay, lower-network-resource-consumption multicast services compared with some well-known end-host multicast systems. Copyright © 2006 John Wiley & Sons, Ltd.
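Step (2) of the algorithm above amounts to choosing, per cluster, the member that minimizes the summed overlay distance to all other members. A direct Python sketch follows; the hop-count table is an illustrative stand-in for distances measured on the real mesh overlay.

def select_core(members, hops):
    # hops[a][b]: overlay hop count between members a and b.
    return min(members, key=lambda m: sum(hops[m][o] for o in members if o != m))

hops = {"a": {"b": 1, "c": 2}, "b": {"a": 1, "c": 1}, "c": {"a": 2, "b": 1}}
print(select_core(["a", "b", "c"], hops))   # "b": smallest summed distance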
MyCoG.NET: a multi-language CoG toolkit
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2007
A. Paventhan

Grid application developers utilize Commodity Grid (CoG) toolkits to access Globus Grid services. Existing CoG toolkits are language-specific and have, for example, been developed for Java, Python and the Matlab scripting environment. In this paper we describe MyCoG.NET, a CoG toolkit supporting multi-language programmability under the Microsoft .NET framework. MyCoG.NET provides a set of classes and APIs to access Globus Grid services from languages supported by the .NET Common Language Runtime. We demonstrate its programmability using FORTRAN, C++, C# and Java, and discuss its performance over LAN and WAN infrastructures. We present a Grid application in the field of experimental aerodynamics as a case study to show how MyCoG.NET can be exploited. We demonstrate how scientists and engineers can create and use domain-specific workflow activity sets for rapid application development using Windows Workflow Foundation, and show how users can easily extend and customize these activities. Copyright © 2006 John Wiley & Sons, Ltd.

Job completion prediction using case-based reasoning for Grid computing environments
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2007
Lilian Noronha Nassif

One of the main focuses of Grid computing is solving resource-sharing problems in multi-institutional virtual organizations. In such heterogeneous and distributed environments, selecting the best resource to run a job is a complex task. The solutions currently employed still present numerous challenges, one of which is letting users know when a job will finish; consequently, advance reservation remains unavailable. This article presents a new approach that predicts job execution times on the Grid by applying the case-based reasoning paradigm. The work includes the development of a new case retrieval algorithm involving relevance sequence and similarity degree calculations. The prediction model is part of a multi-agent system that selects the best resource of a computational Grid to run a job. Agents representing candidate resources for job execution make predictions in a distributed and parallel manner. The technique presented here can be used in Grid environments at operation time to assist users with batch job submissions. Experimental results validate the prediction accuracy of the proposed mechanisms and the performance of our case retrieval algorithm. Copyright © 2006 John Wiley & Sons, Ltd.
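In outline, case-based prediction retrieves the past runs most similar to an incoming job and combines their measured times, weighted by similarity degree. The Python sketch below shows that outline only; the features, weights, and similarity form are illustrative assumptions, not the paper's retrieval algorithm.

def similarity(job, case, weights):
    # Weighted closeness over numeric features normalized to [0, 1].
    return sum(w * (1.0 - abs(job[f] - case[f])) for f, w in weights.items())

def predict_runtime(job, cases, weights, k=3):
    # Retrieve the k most similar cases, then similarity-weight their runtimes.
    top = sorted(cases, key=lambda c: similarity(job, c, weights), reverse=True)[:k]
    total = sum(similarity(job, c, weights) for c in top)
    return sum(similarity(job, c, weights) * c["runtime"] for c in top) / total

cases = [{"input_gb": 0.2, "cpu_load": 0.5, "runtime": 120},
         {"input_gb": 0.8, "cpu_load": 0.4, "runtime": 460},
         {"input_gb": 0.3, "cpu_load": 0.9, "runtime": 200}]
print(predict_runtime({"input_gb": 0.25, "cpu_load": 0.6}, cases,
                      {"input_gb": 0.7, "cpu_load": 0.3}, k=2))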
A peer-to-peer decentralized strategy for resource management in computational Grids
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2007
Antonella Di Stefano

This paper presents a peer-to-peer (P2P) approach for managing, in a computational Grid, those resources that are expressed as numerical quantities and are thus characterized by a coefficient of utilization, such as percentage of CPU time, disk space, or memory space. The proposed approach exploits spatial computing concepts and models a Grid by means of a flat P2P architecture consisting of nodes connected by an overlay network; such a network topology, together with the quantity of resource available at each node, forms a three-dimensional surface, where valleys correspond to nodes with a large quantity of available resource. In this scenario, this paper proposes an algorithm for resource discovery that is based on navigating such a surface in search of the deepest valley (the global minimum, that is, the best node). The algorithm, which aims at fairly distributing the quantity of leased resource among nodes, is based on heuristics that mimic the laws of kinematics. Experimental results show the effectiveness of the algorithm. Copyright © 2006 John Wiley & Sons, Ltd.
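The surface navigation just described can be pictured, at its simplest, as a greedy walk: hop to whichever neighbour has more available resource, and stop at a local minimum; the paper's kinematics-like heuristics exist precisely to coast past shallow local dips. The Python sketch below shows only the plain greedy walk, with an illustrative graph encoding.

def descend(start, neighbours, load):
    # load[n]: coefficient of utilization at node n; lower = deeper valley.
    current = start
    while True:
        best = min(neighbours[current], key=lambda n: load[n], default=current)
        if load[best] >= load[current]:
            return current             # local minimum reached: best nearby node
        current = best

neighbours = {"n1": ["n2"], "n2": ["n1", "n3"], "n3": ["n2"]}
load = {"n1": 0.9, "n2": 0.6, "n3": 0.2}
print(descend("n1", neighbours, load))   # walks downhill to n3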
The LEAD Portal: a TeraGrid gateway and application service architecture
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2007
Marcus Christie

The Linked Environments for Atmospheric Discovery (LEAD) Portal is a science application portal designed to enable effective use of Grid resources in exploring mesoscale meteorological phenomena. The aim of the LEAD Portal is to provide a more productive interface for experimental work by the meteorological research community, as well as to bring weather research to a wider class of users, namely pre-college students in grades 6–12 and undergraduate college students. In this paper, we give an overview of the LEAD project and the role the LEAD Portal is playing in reaching its goals. We then describe the various technologies we are using to bring powerful and complex scientific tools to educational and research users. These technologies (a fine-grained, capability-based authorization framework; an application service factory toolkit; and a Web services-based workflow execution engine with supporting tools) enable our team to deploy these once inaccessible, stovepipe scientific codes onto a Grid where they can be collectively utilized. Copyright © 2006 John Wiley & Sons, Ltd.

Science gateways made easy: the In-VIGO approach
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2007
Andréa M. Matsunaga

Science gateways require the easy enabling of legacy scientific applications on computing Grids and the generation of user-friendly interfaces that hide the complexity of the Grid from the user. This paper presents the In-VIGO approach to the creation and management of science gateways. First, we discuss the virtualization of machines, networks, and data to facilitate the dynamic creation of secure execution environments that meet application requirements. Then we discuss the virtualization of applications, i.e. the execution on shared resources of multiple isolated application instances with customized behavior, in the context of In-VIGO. A Virtual Application Service (VAS) architecture for automatically generating, customizing, deploying, and using virtual applications as Grid services is then described. Starting with a grammar-based description of the command-line syntax, the automated process generates the VAS description and the VAS implementation (code for application encapsulation and data binding), which is deployed and made available through a Web interface. A VAS can be customized on a per-user basis by restricting the capabilities of the original application or by adding features such as parameter sweeping. This is a scalable approach to integrating scientific applications as services into Grids, and it can be applied to any tool with an arbitrarily complex command-line syntax. Copyright © 2006 John Wiley & Sons, Ltd.

High-speed network and Grid computing for high-end computation: application in geodynamics ensemble simulations
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2007
S. Zhou

High-speed networks and Grid computing have been actively investigated, and their capabilities are being demonstrated. However, their application to high-end scientific computing and modeling is still to be explored. In this paper we discuss the related issues and present our prototype work on applying the XCAT3 framework technology to geomagnetic data assimilation development with distributed computers, connected through an up to 10 Gigabit Ethernet network. Copyright © 2006 John Wiley & Sons, Ltd.

Performance of computationally intensive parameter sweep applications on Internet-based Grids of computers: the mapping of molecular potential energy hypersurfaces
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2007
S. Reyes

This work focuses on the use of computational Grids for processing the large sets of jobs arising in parameter sweep applications. In particular, we tackle the mapping of molecular potential energy hypersurfaces. For computationally intensive parameter sweep problems, performance models are developed to compare parallel computation on a multiprocessor system with computation on an Internet-based Grid of computers. We find that the relative performance of the Grid approach increases with the number of processors, and is independent of the number of jobs. The experimental data, obtained using electronic structure calculations, fit the proposed performance expressions accurately. To automate the mapping of potential energy hypersurfaces, an application based on GRID superscalar was developed. It is tested on the prototypical case of the internal dynamics of acetone. Copyright © 2006 John Wiley & Sons, Ltd.

Developing LHCb Grid software: experiences and advances
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2007
I. Stokes-Rees

The LHCb Grid software has been used for two Physics Data Challenges, with the latter producing over 98 TB of data and consuming over 650 processor-years of computing power. This paper discusses the experience of developing a Grid infrastructure, interfacing to an existing Grid (LCG) and traditional computing centres simultaneously, running LHCb experiment software and jobs on the Grid, and integrating a number of new technologies into the Grid infrastructure. Our experience with and utilization of the following core technologies are discussed: OGSI, XML-RPC, Grid services, LCG middleware, and instant messaging. Specific attention is given to analysing the behaviour of over 100,000 jobs executed through the LCG Grid environment, providing insight into the performance, failure modes, and scheduling efficiency over a period of several months for a large computational Grid incorporating over 40 sites and thousands of nodes. © Crown copyright 2006. Reproduced with the permission of Her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.