Computing Environment (computing + environment)
Selected Abstracts

Immersive Integration of Physical and Virtual Environments
COMPUTER GRAPHICS FORUM, Issue 3 2004
Henry Fuchs
We envision future work and play environments in which the user's computing interface is more closely integrated with the physical surroundings than today's conventional computer display screens and keyboards. We are working toward realizable versions of such environments, in which multiple video projectors and digital cameras enable every visible surface to be both measured in 3D and used for display. If the 3D surface positions were transmitted to a distant location, they could also enable distant collaborations to become more like working in adjacent offices connected by large windows. With collaborators at the University of Pennsylvania, Brown University, Advanced Network and Services, and the Pittsburgh Supercomputing Center, we at Chapel Hill have been working to bring these ideas to reality. In one system, depth maps are calculated from streams of video images and the resulting 3D surface points are displayed to the user in head-tracked stereo. Among the applications we are pursuing for this tele-presence technology is advanced training for trauma surgeons by immersive replay of recorded procedures. Other applications display onto physical objects to allow more natural interaction with them: "painting" a dollhouse, for example. More generally, we hope to demonstrate that the principal interface of a future computing environment need not be limited to a screen the size of one or two sheets of paper. Just as a useful physical environment is all around us, so too can the increasingly ubiquitous computing environment be all around us, integrated seamlessly with our physical surroundings. [source]

Formation of virtual organizations in grids: a game-theoretic approach
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2010
Thomas E. Carroll
Abstract: Applications require the composition of resources to execute in a grid computing environment. The grid service providers (GSPs), the owners of the computational resources, must form virtual organizations (VOs) to be able to provide the composite resource. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that organize themselves into VOs, with every GSP aiming to maximize its profit. Using game theory, we formulate the resource composition among GSPs as a coalition formation problem and propose a framework to model and solve it. Using this framework, we propose a resource management system that supports VO formation among GSPs in a grid computing system. Copyright © 2008 John Wiley & Sons, Ltd. [source]
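The coalition-formation idea above can be made concrete with a small sketch. Everything below is an illustrative assumption, not the paper's model: revenue grows superlinearly with a coalition's pooled capacity, a coordination cost grows with coalition size, profit is split in proportion to contributed capacity, and self-interested GSPs accept a greedy pairwise merge only when no party's payoff rate drops.

```python
# Greedy coalition formation among grid service providers (GSPs).
# Hypothetical profit model: revenue = capacity ** 1.5, minus a
# coordination cost that grows with coalition size; payoff is split
# in proportion to each member's contributed capacity.
from itertools import combinations

def capacity(coalition):
    return sum(gsp["capacity"] for gsp in coalition)

def profit(coalition):
    n = len(coalition)
    return capacity(coalition) ** 1.5 - 3.0 * n * (n - 1)

def payoff_rate(coalition):
    # profit earned per unit of contributed capacity (proportional split)
    return profit(coalition) / capacity(coalition)

def form_virtual_organizations(gsps):
    coalitions = [[gsp] for gsp in gsps]       # start from singletons
    merged = True
    while merged:
        merged = False
        for a, b in combinations(range(len(coalitions)), 2):
            union = coalitions[a] + coalitions[b]
            # self-interested GSPs accept a merge only if nobody loses
            if payoff_rate(union) >= payoff_rate(coalitions[a]) and \
               payoff_rate(union) >= payoff_rate(coalitions[b]):
                coalitions[a], merged = union, True
                del coalitions[b]
                break
    return coalitions

gsps = [{"name": f"GSP{i}", "capacity": c} for i, c in enumerate([4, 7, 3, 9, 5])]
for vo in form_virtual_organizations(gsps):
    print([g["name"] for g in vo], f"profit={profit(vo):.1f}")
```

With these numbers the high-capacity providers pool into one VO while the small ones stay alone, illustrating how a payoff rule shapes the resulting partition.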
DRIVE: Dispatching Requests Indirectly through Virtual Environment
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2010
Hyung Won Choi
Abstract: Dispatching a large number of dynamically changing requests directly to a small number of servers exposes the disparity between the requests and the machines. In this paper, we present a novel approach that dispatches requests to servers through virtual machines, called Dispatching Requests Indirectly through Virtual Environment (DRIVE). Client requests are first dispatched to virtual machines, which are in turn dispatched to physical machines. This buffering of requests reduces the complexity of dispatching a large number of requests to a small number of machines. To demonstrate the effectiveness of the DRIVE framework, we set up an experimental environment consisting of a PC cluster and four benchmark suites. The experimental results demonstrate that the use of virtual machines indeed abstracts away the client requests and thus helps to improve the overall performance of a dynamically changing computing environment. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Using Web 2.0 for scientific applications and scientific communities
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2009
Marlon E. Pierce
Abstract: Web 2.0 approaches are revolutionizing the Internet, blurring lines between developers and users and enabling collaboration and social networks that scale into the millions of users. As discussed in our previous work, the core technologies of Web 2.0 effectively define a comprehensive distributed computing environment that parallels many of the more complicated service-oriented systems such as Web service and Grid service architectures. In this paper we build upon this previous work to discuss the applications of Web 2.0 approaches to four different scenarios: client-side JavaScript libraries for building and composing Grid services; integrating server-side portlets with 'rich client' AJAX tools and Web services for analyzing Global Positioning System data; building and analyzing folksonomies of scientific user communities through social bookmarking; and applying microformats and GeoRSS to problems in scientific metadata description and delivery. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Towards workflow simulation in service-oriented architecture: an event-based approach
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2008
Yanchong Zheng
Abstract: The emergence of service-oriented architecture (SOA) has brought about a loosely coupled computing environment that enables flexible integration and reuse of heterogeneous systems. In building SOAs for application systems, research has increasingly focused on service composition, where workflow and simulation techniques have shown great potential. Simulating the interaction of services is important because the services ecosystem is dynamic and in continuous evolution. However, research on service simulation is lacking, particularly models, methods and systems that support simulating the interaction behavior of composite services. In this paper, an enhanced workflow simulation method supported by an interactive event mechanism is proposed to fulfill this requirement. At build time, we introduce an event sub-model into the workflow meta-model, and our simulation engine supports the event-based interaction pattern at run time. With an example simulated in a prototype system built according to our method, we also highlight the advantages of the method for model verification and QoS evaluation of service compositions. Copyright © 2007 John Wiley & Sons, Ltd. [source]
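A minimal discrete-event sketch can illustrate the kind of event-driven interaction the abstract describes: activities are driven by timestamped events drawn from a priority queue, and one service posts events to another to mimic composite-service interaction. The Simulator API, service names and durations below are hypothetical, not the paper's meta-model.

```python
# A tiny discrete-event engine: handlers are scheduled as timestamped
# events on a heap; each handler may post further events, which is how
# one "service" drives the next in a composed workflow.
import heapq

class Simulator:
    def __init__(self):
        self.clock = 0.0
        self.queue = []   # (time, seq, handler, payload); seq breaks ties
        self.seq = 0

    def post(self, delay, handler, payload=None):
        heapq.heappush(self.queue, (self.clock + delay, self.seq, handler, payload))
        self.seq += 1

    def run(self):
        while self.queue:
            self.clock, _, handler, payload = heapq.heappop(self.queue)
            handler(self, payload)

def order_service(sim, payload):
    print(f"[{sim.clock:5.1f}] order received, invoking payment service")
    sim.post(2.0, payment_service, {"order": payload})

def payment_service(sim, payload):
    print(f"[{sim.clock:5.1f}] payment done, invoking shipping service")
    sim.post(5.0, shipping_service, payload)

def shipping_service(sim, payload):
    print(f"[{sim.clock:5.1f}] shipment scheduled for {payload['order']}")

sim = Simulator()
sim.post(0.0, order_service, "order-42")
sim.run()
```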
GridBLAST: a Globus-based high-throughput implementation of BLAST in a Grid computing framework
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2005
Arun Krishnan
Abstract: Improvements in the performance of processors and networks have made it feasible to treat collections of workstations, servers, clusters and supercomputers as integrated computing resources, or Grids. However, the very heterogeneity that is the strength of computational and data Grids can also make application development for such an environment extremely difficult. Application development in a Grid computing environment faces significant challenges in the form of problem granularity, latency and bandwidth issues, as well as job scheduling. Current Grid technologies limit the development of Grid applications to certain classes, namely embarrassingly parallel, hierarchically parallel, workflow and database applications. Of these classes, embarrassingly parallel applications are the easiest to develop in a Grid computing framework. The work presented here deals with creating a Grid-enabled, high-throughput, standalone version of a bioinformatics application, BLAST, using Globus as the Grid middleware. BLAST is a sequence alignment and search technique that is embarrassingly parallel in nature and thus amenable to adaptation to a Grid environment. A detailed methodology for creating the Grid-enabled application is presented, which can be used as a template for the development of similar applications. The application has been tested on a 'mini-Grid' testbed, and the results presented here show that for large problem sizes a distributed, Grid-enabled version can significantly reduce execution times. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Simulation of resource synchronization in a dynamic real-time distributed computing environment
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2004
Chen Zhang
Abstract: Today, more and more distributed computer applications are being modeled and constructed using real-time principles and concepts. In 1989, the Object Management Group (OMG) formed a Real-Time Special Interest Group (RT SIG) with the goal of extending the Common Object Request Broker Architecture (CORBA) standard to include real-time specifications. This group's most recent efforts have focused on the requirements of dynamic distributed real-time systems. One open problem in this area is resource access synchronization for tasks employing dynamic priority scheduling. This paper presents two resource synchronization protocols that the authors have developed which meet the requirements of dynamic distributed real-time systems as specified by Dynamic Scheduling Real-Time CORBA (DSRT CORBA). The proposed protocols can be applied to both Earliest Deadline First (EDF) and Least Laxity First (LLF) dynamic scheduling algorithms, allow distributed nested critical sections, and avoid unnecessary runtime overhead. To evaluate the performance of the proposed protocols, we analyzed each protocol's schedulability. Since the schedulability of the system is affected by numerous system configuration parameters, we designed simulation experiments to isolate and illustrate the impact of each individual system parameter. The simulation experiments show that the proposed protocols perform better than a scheme that relies on dynamic priority ceiling updates. Copyright © 2004 John Wiley & Sons, Ltd. [source]
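As background, the EDF policy that the proposed protocols target can be sketched in a few lines: at each scheduling point, the ready task with the nearest absolute deadline runs. The sketch below is deliberately simplified, non-preemptive, with resource synchronization omitted entirely, and the task parameters are invented for illustration.

```python
# Earliest Deadline First dispatching, non-preemptive sketch:
# released tasks wait in a heap keyed by absolute deadline, and the
# task with the nearest deadline runs next.
import heapq

def edf_schedule(tasks):
    """tasks: list of (release, deadline, exec_time, name) tuples."""
    time = 0.0
    pending = sorted(tasks)            # ordered by release time
    ready = []                         # heap keyed by absolute deadline
    i = 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= time:
            r, d, c, name = pending[i]
            heapq.heappush(ready, (d, c, name))
            i += 1
        if not ready:                  # idle until the next release
            time = pending[i][0]
            continue
        d, c, name = heapq.heappop(ready)
        time += c                      # run to completion (no preemption)
        status = "ok" if time <= d else "MISSED"
        print(f"{name}: finished at {time:4.1f}, deadline {d:4.1f} [{status}]")

edf_schedule([(0, 7, 2, "T1"), (0, 4, 1, "T2"), (1, 10, 3, "T3")])
```

A preemptive variant, and the priority inversion that arises once tasks share resources, is exactly the territory the paper's synchronization protocols address.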
A performance study of job management systems
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2004
Tarek El-Ghazawi
Abstract: Job Management Systems (JMSs) efficiently schedule and monitor jobs in parallel and distributed computing environments. They are therefore critical for improving the utilization of expensive resources in high-performance computing systems and centers, and an important component of Grid software infrastructure. With many JMSs available commercially and in the public domain, it is difficult to choose the optimal JMS for a given computing environment. In this paper, we present the results of the first empirical study of JMSs reported in the literature. Four commonly used systems, LSF, PBS Pro, Sun Grid Engine/CODINE, and Condor, were considered. The study revealed important strengths and weaknesses of these JMSs under different operational conditions. For example, LSF exhibited excellent throughput for a wide range of job types and submission rates, whereas CODINE appeared to outperform the other systems in average turn-around time for small jobs, and PBS appeared to excel in turn-around time for relatively larger jobs. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Grids of agents for computer and telecommunication network management
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2004
M. D. Assunção
Abstract: The centralized approach to computer and telecommunication network management has shown scalability problems as the amount and diversity of managed equipment grow. Moreover, the increasing complexity of the services offered through the networks adds extra workload to the management station. The amount of data that must be handled and processed by a single administration point can exceed the processing and storage capacity available to do the job efficiently. In this work we present an alternative approach: creating a highly distributed computing environment through the use of Grids of autonomous agents to analyze large amounts of data, which reduces processing costs by optimizing load distribution and resource utilization. Copyright © 2004 John Wiley & Sons, Ltd. [source]

The Polder Computing Environment: a system for interactive distributed simulation
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13-15 2002
K. A. Iskra
Abstract: The paper provides an overview of an experimental, Grid-like computing environment, Polder, and its components. Polder offers high-performance computing and interactive simulation facilities to computational science. It was successfully implemented on a wide-area cluster system, the Distributed ASCI Supercomputer. An important issue is the efficient management of resources, in particular multi-level scheduling and migration of tasks that use PVM or sockets. The system can be applied to interactive simulation, where a cluster is used for high-performance computations while a dedicated immersive interactive environment (CAVE) provides visualization and user interaction. Design considerations for the construction of dynamic exploration environments using such a system are discussed, in particular the use of intelligent agents for coordination. A case study of simulated abdominal vascular reconstruction is subsequently presented: the results of computed tomography or magnetic resonance imaging of a patient are displayed in the CAVE, and a surgeon can evaluate possible treatments by performing the surgeries virtually and analysing the resulting blood flow, which is simulated using the lattice-Boltzmann method. Copyright © 2002 John Wiley & Sons, Ltd. [source]
An analysis of VI Architecture primitives in support of parallel and distributed communication
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2002
Andrew Begel
Abstract: We present the results of a detailed study of the Virtual Interface (VI) paradigm as a communication foundation for a distributed computing environment. Using Active Messages and the Split-C global memory model, we analyze the inherent costs of using VI primitives to implement these high-level communication abstractions. We demonstrate a minimum mapping cost (i.e. the host processing required to map one abstraction to a lower abstraction) of 5.4 μs for both Active Messages and Split-C, using four-way 550 MHz Pentium III SMPs and the Myrinet network. We break this cost down to the use of individual VI primitives in supporting flow control, buffer management and event processing, and identify the completion queue as the source of the highest overhead. Bulk transfer performance plateaus at 44 Mbytes/s for both implementations due to the added fragmentation requirements. Based on this analysis, we present the implications for the VI successor, InfiniBand. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Large-scale parallel finite-element analysis using the internet: a performance study
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2005
Ryuji Shioya
Abstract: This paper describes a parallel finite-element system implemented using the domain decomposition method on a cluster of remote computers connected via the Internet. This technique is also readily applicable to a grid computing environment. A three-dimensional finite-element elastic analysis involving more than one million degrees of freedom was solved using this system, and a good approximate solution was obtained with high parallel efficiency of over 90% using remote computers located in three different countries. Copyright © 2005 John Wiley & Sons, Ltd. [source]

On-Line Control Architecture for Enabling Real-Time Traffic System Operations
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2004
Srinivas Peeta
Critical to the effectiveness of real-time traffic systems are the control architectures that provide a blueprint for the efficient transmission and processing of large amounts of real-time data, and the consistency-checking and fault-tolerance mechanisms that ensure seamless automated functioning. However, the lack of low-cost, high-performance, and easy-to-build computing environments is a key impediment to the widespread deployment of such architectures in the real-time traffic operations domain. This article proposes an Internet-based on-line control architecture that uses a Beowulf cluster as its computational backbone and provides an automated mechanism for real-time route guidance to drivers. To investigate this concept, the computationally intensive optimization modules are implemented on a low-cost 16-processor Beowulf cluster and a commercially available supercomputer, and the performance of these systems on representative computations is measured. The results highlight the effectiveness of the cluster in delivering substantial, scalable computational performance, and suggest that its performance is comparable to that of the more expensive supercomputer. [source]

Scheduling time-critical requests for multiple data objects in on-demand broadcast
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2010
Victor C. S. Lee
Abstract: On-demand broadcast is an effective data dissemination approach in mobile computing environments. Most recent studies on on-demand data broadcast assume that clients request only a single data object at a time. This assumption may not be practical for increasingly sophisticated mobile applications. In this paper, we investigate the problem of scheduling time-critical requests for multiple data objects in on-demand broadcast environments and observe that existing scheduling algorithms designed for single-data-object requests perform unsatisfactorily in this new setting. Based on our analysis, we propose new algorithms to improve system performance. Copyright © 2010 John Wiley & Sons, Ltd. [source]
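To illustrate the problem setting, a toy scheduler for multi-object requests might look like the following. Here each request names a set of objects and a deadline, one object is broadcast per slot, and a request succeeds only if all of its objects go out before its deadline. The selection heuristic, weighting each object by demand over urgency, is an assumption made for illustration, not the algorithm proposed in the paper.

```python
# Toy on-demand broadcast scheduler for multi-object, time-critical
# requests: each slot, broadcast the object most wanted by the most
# urgent live requests.
def broadcast_schedule(requests, horizon):
    """requests: list of dicts {"items": set of objects, "deadline": int slot}."""
    satisfied, expired = 0, 0
    for slot in range(horizon):
        expired += sum(1 for r in requests if r["items"] and r["deadline"] <= slot)
        live = [r for r in requests if r["items"] and r["deadline"] > slot]
        requests = live
        if not live:
            break
        # score each candidate object by demand, weighted by urgency
        scores = {}
        for r in live:
            for item in r["items"]:
                scores[item] = scores.get(item, 0) + 1.0 / (r["deadline"] - slot)
        chosen = max(scores, key=scores.get)
        print(f"slot {slot}: broadcasting {chosen}")
        for r in live:
            r["items"].discard(chosen)     # one broadcast serves all waiters
        satisfied += sum(1 for r in live if not r["items"])
    print(f"satisfied={satisfied}, expired={expired}")

broadcast_schedule(
    [{"items": {"A", "B"}, "deadline": 3},
     {"items": {"B"}, "deadline": 2},
     {"items": {"A", "C"}, "deadline": 5}],
    horizon=6,
)
```

Note how one broadcast of "B" serves two requests at once; it is this sharing across partially overlapping multi-object requests that single-object scheduling policies fail to exploit.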
Job completion prediction using case-based reasoning for Grid computing environments
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2007
Lilian Noronha Nassif
Abstract: One of the main focuses of Grid computing is solving resource-sharing problems in multi-institutional virtual organizations. In such heterogeneous and distributed environments, selecting the best resource to run a job is a complex task. The solutions currently employed still present numerous challenges, one of which is letting users know when a job will finish. Consequently, advance reservation remains unavailable. This article presents a new approach that predicts job execution times in the Grid by applying the case-based reasoning paradigm. The work includes the development of a new case retrieval algorithm involving relevance sequence and similarity degree calculations. The prediction model is part of a multi-agent system that selects the best resource of a computational Grid to run a job. Agents representing candidate resources for job execution make predictions in a distributed and parallel manner. The technique presented here can be used in Grid environments at operation time to assist users with batch job submissions. Experimental results validate the prediction accuracy of the proposed mechanisms and the performance of our case retrieval algorithm. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Middleware for real-time distributed simulationsCONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2004
Thom McLean
Abstract: Distributed simulation applications often rely on middleware to provide services supporting their execution over distributed computing environments. Such middleware spans many levels, ranging from low-level support for data transmission through object request brokers to higher-level, simulation-specific functionality such as time management. We discuss design alternatives for realizing such middleware for hard real-time distributed simulations such as hardware-in-the-loop applications. We present results from tests of a prototype implementation of real-time Run-Time Infrastructure middleware and compare its performance with that of a non-real-time implementation. The context for this work is the High Level Architecture standard defined by the U.S. Department of Defense. Copyright © 2004 John Wiley & Sons, Ltd. [source]
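The core case-based reasoning step, retrieving the most similar past jobs and predicting from their recorded runtimes, can be sketched as below. The features, weights, and similarity measure are hypothetical stand-ins for the paper's relevance-sequence and similarity-degree calculations.

```python
# Case-based runtime prediction: score historical cases against the
# incoming job with a weighted per-feature similarity, then average the
# runtimes of the k nearest cases.
def similarity(case, query, weights):
    score = 0.0
    for feature, w in weights.items():
        a, b = case[feature], query[feature]
        score += w * (1.0 - abs(a - b) / max(a, b, 1e-9))
    return score / sum(weights.values())

def predict_runtime(history, query, k=3):
    weights = {"input_mb": 0.5, "cpu_load": 0.3, "nodes": 0.2}  # assumed
    ranked = sorted(history, key=lambda c: similarity(c, query, weights),
                    reverse=True)
    nearest = ranked[:k]
    return sum(c["runtime_s"] for c in nearest) / len(nearest)

history = [
    {"input_mb": 100, "cpu_load": 0.2, "nodes": 4, "runtime_s": 120},
    {"input_mb": 480, "cpu_load": 0.7, "nodes": 4, "runtime_s": 610},
    {"input_mb": 520, "cpu_load": 0.6, "nodes": 8, "runtime_s": 340},
    {"input_mb": 90,  "cpu_load": 0.9, "nodes": 2, "runtime_s": 300},
]
query = {"input_mb": 500, "cpu_load": 0.65, "nodes": 8}
print(f"predicted runtime: {predict_runtime(history, query):.0f} s")
```

In the paper's multi-agent setting, each candidate resource would run such a prediction against its own local case base, in parallel, and report the estimate back to the resource selector.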
UbiXML: programmable management of ubiquitous computing resources
INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 6 2007
Dimitris Alexopoulos
XML technologies provide proven benefits for the configuration management of complex heterogeneous multi-vendor networks. These benefits have recently been manifested in numerous research, industrial and standardization efforts, including the XMLNET architecture. In this paper we present UbiXML, a system for programmable management of ubiquitous computing resources. UbiXML extends the benefits of XML technologies to the broader class of ubiquitous computing environments, which are inherently complex, distributed, heterogeneous and multi-vendor. In UbiXML, management applications are structured as XML documents that incorporate programming constructs. Thus, UbiXML allows administrators to build sophisticated management applications with little or no programming effort. While UbiXML builds on several XMLNET concepts, it significantly augments XMLNET to handle management of sensors, perceptual components and actuating devices. Moreover, UbiXML is extensible towards additional ubiquitous computing elements. UbiXML has been exploited in implementing realistic management applications for a smart space. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Parallel Algorithms for Dynamic Shortest Path Problems
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 3 2002
Ismail Chabini
The development of intelligent transportation systems (ITS) and the resulting need for the solution of a variety of dynamic traffic network models and management problems require faster-than-real-time computation of shortest path problems in dynamic networks. Recently, a sequential algorithm was developed to compute shortest paths in discrete-time dynamic networks from all nodes and all departure times to one destination node. The algorithm, known as algorithm DOT, has an optimal worst-case running-time complexity, which implies that no algorithm with a better worst-case computational complexity can be discovered. Consequently, to derive faster algorithms for all-to-one shortest path problems in dynamic networks, one needs to explore avenues other than the design of sequential solution algorithms; the use of commercially available high-performance computing platforms to develop parallel implementations of sequential algorithms is one such avenue. This paper reports on the design, implementation, and computational testing of parallel dynamic shortest path algorithms. We develop two shared-memory and two message-passing dynamic shortest path algorithm implementations, derived from algorithm DOT using two parallelization strategies: decomposition by destination and decomposition by transportation network topology. The algorithms are coded in two types of parallel computing environments: a message-passing environment based on the Parallel Virtual Machine (PVM) library and a multi-threading environment based on the SUN Microsystems Multi-Threads (MT) library. We also develop a time-based parallel version of algorithm DOT for the case of minimum-time paths in FIFO networks, and a theoretical parallelization of algorithm DOT on an 'ideal' theoretical parallel machine. The performance of the implementations is analyzed and evaluated using large transportation networks and two types of parallel computing platforms: a distributed network of Unix workstations and a SUN shared-memory machine containing eight processors. Satisfactory speed-ups over the running times of the sequential algorithms are achieved, in particular on the shared-memory machine. Numerical results indicate that shared-memory computers constitute the most appropriate type of parallel computing platform for the computation of dynamic shortest paths for real-time ITS applications. [source]
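The decomposition-by-destination strategy is easy to sketch: each all-to-one computation is independent of the others, so destinations can be farmed out to a pool of workers. Below, a small static graph and plain Dijkstra stand in for algorithm DOT and the time-dependent networks treated in the paper; a thread pool keeps the sketch simple, whereas real speedups in CPython would call for a process pool or message passing.

```python
# Decomposition by destination: one independent all-to-one shortest-path
# computation per destination, distributed over a worker pool. Running
# Dijkstra on the reversed graph yields every node's distance TO the
# destination.
import heapq
from concurrent.futures import ThreadPoolExecutor

GRAPH = {  # node -> list of (neighbor, cost); a small hypothetical network
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [("A", 3)],
}

def reverse(graph):
    rev = {n: [] for n in graph}
    for u, edges in graph.items():
        for v, w in edges:
            rev[v].append((u, w))
    return rev

def all_to_one(dest, rev):
    dist = {dest: 0.0}
    heap = [(0.0, dest)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                    # stale heap entry
        for v, w in rev[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dest, dist

rev = reverse(GRAPH)
with ThreadPoolExecutor(max_workers=4) as pool:
    for dest, dist in pool.map(lambda d: all_to_one(d, rev), GRAPH):
        print(f"to {dest}: {dist}")
```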
The seven deadly sins of comparative analysis
JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 7 2009
R. P. Freckleton
Abstract: Phylogenetic comparative methods are very widely used in evolutionary biology. In this paper, I highlight some of the problems that are frequently encountered in comparative analyses and review how they can be fixed. In broad terms, the problems boil down to a lack of appreciation of the underlying assumptions of comparative methods, as well as problems with implementing methods in a manner akin to more familiar statistical approaches. I highlight that the advent of more flexible computing environments should improve matters and allow researchers greater scope to explore methods and data. [source]