Scheduling


Kinds of Scheduling

  • packet scheduling

Terms modified by Scheduling

  • scheduling algorithm
  • scheduling algorithms
  • scheduling approach
  • scheduling method
  • scheduling policy
  • scheduling problem
  • scheduling scheme
  • scheduling strategy
  • scheduling system

Selected Abstracts


    DRUG SCHEDULING, SCIENCE AND CULTURAL PERSPECTIVE

    ADDICTION, Issue 7 2010
    RAJAT RAY
    No abstract is available for this article. [source]


    A MARKET UTILITY-BASED MODEL FOR CAPACITY SCHEDULING IN MASS SERVICES

    PRODUCTION AND OPERATIONS MANAGEMENT, Issue 2 2003
    JOHN C. GOODALE
    Only a small set of employee scheduling articles has considered an objective of profit or contribution maximization, as opposed to the traditional objective of cost (including opportunity costs) minimization. In this article, we present one such formulation: a market utility-based model for planning and scheduling in mass services (MUMS). MUMS is a holistic approach to market-based service capacity scheduling. The MUMS framework provides the structure for modeling the consequences of aligning competitive priorities and service attributes with an element of the firm's service infrastructure. We developed a new linear programming formulation for the shift-scheduling problem that uses market share information generated by customer preferences for service attributes. The shift-scheduling formulation within the framework of MUMS provides a business-level model that predicts the economic impact of the employee schedule. We illustrated the shift-scheduling model with empirical data, and then compared its results with models using service standard and productivity standard approaches. The result of the empirical analysis provides further justification for the development of the market-based approach. Lastly, we discuss implications of this methodology for future research. [source]


    Multimode Project Scheduling Based on Particle Swarm Optimization

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2006
    Hong Zhang
    This article introduces a methodology for solving the multimode resource-constrained project scheduling problem (MRCPSP) based on particle swarm optimization (PSO), which has not previously been utilized for this or other construction-related problems. The framework of the PSO-based methodology is developed. A particle representation formulation is proposed to represent a potential solution to the MRCPSP in terms of a priority combination and a mode combination for the activities. Each particle-represented solution is checked for nonrenewable resource infeasibility, which is handled by adjusting the mode combination. A feasible particle-represented solution is transformed into a schedule through a serial generation scheme. Experimental analyses are presented to investigate the performance of the proposed methodology. [source]
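    The serial generation scheme mentioned above can be sketched as follows: activities are taken in priority order and each is placed at the earliest precedence- and resource-feasible start time. The toy activity network, single renewable resource, and capacity below are illustrative assumptions, not the article's MRCPSP formulation.

```python
# Sketch of a serial schedule generation scheme (SSGS): activities are taken
# in priority order and placed at the earliest precedence- and
# resource-feasible start time. All data below is illustrative.

def serial_sgs(activities, priority, capacity):
    """activities: {name: (duration, resource_demand, [predecessors])}"""
    start, finish = {}, {}
    usage = {}  # time slot -> resource units in use
    for a in priority:
        dur, demand, preds = activities[a]
        t = max((finish[p] for p in preds), default=0)
        # push the start right until every occupied slot has spare capacity
        while any(usage.get(s, 0) + demand > capacity for s in range(t, t + dur)):
            t += 1
        start[a], finish[a] = t, t + dur
        for s in range(t, t + dur):
            usage[s] = usage.get(s, 0) + demand
    return start, finish

acts = {
    "A": (3, 2, []),
    "B": (2, 2, ["A"]),
    "C": (4, 1, ["A"]),
    "D": (1, 2, ["B", "C"]),
}
start, finish = serial_sgs(acts, ["A", "B", "C", "D"], capacity=3)
print(start)                  # {'A': 0, 'B': 3, 'C': 3, 'D': 7}
print(max(finish.values()))   # project makespan: 8
```

    In the full PSO methodology, each particle would encode the priority list and the mode choices, and this decoding step turns the particle into a schedule whose makespan serves as its fitness.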


    Work Design for Flexible Work Scheduling: Barriers and Gender Implications

    GENDER, WORK & ORGANISATION, Issue 1 2000
    Ann M. Brewer
    The purpose of this article is to examine the nature of work design in relation to flexible work scheduling (FWS), particularly in respect to participation by women and men. There is a paucity of research evidence on this topic. Work design, essentially an artefact of enterprise culture, is constructed by the social rules of place, distance and time. Work practices that assume that work tasks are only conducted in the workplace during standard work time in the proximity of co-workers and managers do not, in the main, support FWS. While there is no significant evidence in this study that women and men perceive the barriers differently when considering taking up the option to engage in FWS options, the study addresses the reasons for this using a large survey of the Australian workforce. This article concludes that it is time to redefine these critical work design dimensions, in relation to existing power structures, in order to inject real flexibility into the workplace. [source]


    Scheduling and power control for MAC layer design in multihop IR-UWB networks

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 1 2010
    Reena Pilakkat
    Recently, a number of researchers have proposed media access control (MAC) designs for ultra-wideband (UWB) networks. Among them, designs based on scheduling and power control seem the most promising, particularly for quality-of-service (QoS) traffic. We investigate the efficiency of many different choices for scheduling and power allocation for QoS traffic in a multihop impulse radio (IR)-UWB network, with the objective of achieving both high spectral efficiency and low transmission power. Specifically, we compare different scheduling schemes employing a protocol interference-based contention graph as well as a physical interference-based contention graph. We propose a relative-distance measure to determine adjacency in the protocol interference-based contention graph. Using our improved protocol interference model with graph-based scheduling, we obtain better performance than the physical interference-based approach employing link-by-link scheduling. Copyright © 2009 John Wiley & Sons, Ltd. [source]
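    The graph-based approach under a protocol interference model can be sketched as follows: links whose endpoints fall within a distance threshold of each other become adjacent in a contention graph, and a greedy vertex colouring assigns conflict-free time slots. The node positions, the threshold, and the highest-degree-first heuristic are illustrative assumptions, not the article's scheme.

```python
# Sketch of protocol-interference scheduling: build a contention graph from a
# distance-based adjacency rule, then greedily colour it with time slots.

import math

def contention_graph(links, positions, threshold):
    """links: [(tx, rx)]; two links conflict if any endpoint pair is close."""
    def close(a, b):
        (x1, y1), (x2, y2) = positions[a], positions[b]
        return math.hypot(x1 - x2, y1 - y2) <= threshold
    n = len(links)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if any(close(u, v) for u in links[i] for v in links[j]):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_slots(adj):
    slot = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):  # highest degree first
        taken = {slot[u] for u in adj[v] if u in slot}
        slot[v] = next(s for s in range(len(adj)) if s not in taken)
    return slot

pos = {"a": (0, 0), "b": (1, 0), "c": (10, 0), "d": (11, 0)}
links = [("a", "b"), ("c", "d")]
slots = greedy_slots(contention_graph(links, pos, threshold=3))
print(slots)  # far-apart links share slot 0: {0: 0, 1: 0}
```

    Tightening or relaxing the relative-distance threshold directly trades spatial reuse (links sharing a slot) against interference, which is the knob the article's adjacency rule adjusts.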


    Errors in completion of referrals among older urban adults in ambulatory care

    JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 1 2010
    Michael Weiner MD MPH
    Abstract Rationale, aims and objectives: Clinical care often requires referrals, but many referrals never result in completed evaluations. We determined the extent to which referral-based consultations were completed in a US medical institution. Factors associated with completion were identified. Method: In a cross-sectional analysis, we analysed billing records and electronic and paper-based medical records for patients aged 65 years or older receiving health care between July 2000 and June 2002 in an integrated, urban, tax-supported medical institution on an academic campus. All referrals in ambulatory care, scheduling of consultation within 180 days, and completion were assessed. We conducted a multivariate survival analysis to identify factors associated with completion. Results: We identified 6785 patients with encounters. Mean age was 72 years, and, of the participants, 66% were women, 55% were African-American and 32% were Medicaid eligible. Of the 81% with at least one primary-care visit in ambulatory care, 63% had at least one referral. About 8% of referrals required multiple orders before an appointment was scheduled. Among 7819 orders for specialty consultation in ambulatory care, 71% led to appointments, and 70% of appointments were kept (completed = 0.71 × 0.70, or 50%). Scheduling of consultations varied (12% to 90%) by specialty. Medicare, singular orders, location of referral and lack of hospitalization were independently significantly associated with scheduling of appointments. Conclusions: Among older adults studied, half of medical specialty referrals were not completed. Multiple process errors, including missing information, misguided referrals and faulty communications, likely contribute to these results. Information systems offer important opportunities to improve the referrals process. [source]


    Against Time: Scheduling, Momentum, and Moral Order at Wartime Los Alamos

    JOURNAL OF HISTORICAL SOCIOLOGY, Issue 1 2004
    Charles Thorpe
    As well as allowing coordination of the large and geographically dispersed sites of the atomic bomb project, the scheduling regime operated as a system of social control, suppressing opposition to the use of the weapon. The analysis suggests the importance of historical and ethnographic attention to how schedules inscribe instrumental rationality in the quotidian life of modern organizations. [source]


    Scheduling with variable time slot costs

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 2 2010
    Guohua Wan
    Abstract In this article, we study a class of new scheduling models in which time slot costs must be taken into consideration. In such models, processing a job incurs a certain cost determined by the time slots the job occupies in a schedule. The models apply when operational costs vary over time. The objective is to minimize the total time slot cost plus a traditional scheduling performance measure. We consider the following performance measures: total completion time, maximum lateness/tardiness, total weighted number of tardy jobs, and total tardiness. We prove the intractability of the models under general parameters and provide polynomial-time algorithms for special cases with non-increasing time slot costs. © 2010 Wiley Periodicals, Inc. Naval Research Logistics, 2010 [source]
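    The composite objective in these models can be made concrete with a small evaluator: a schedule's value is a traditional measure (here, total completion time) plus the cost of every time slot a job occupies. Jobs run back to back from slot 0; the durations and slot costs are illustrative numbers, not data from the article.

```python
# Sketch of the time-slot-cost objective: total completion time plus the
# summed cost of the slots each job occupies. Numbers are illustrative.

def schedule_cost(durations, slot_cost):
    """durations: job processing times in sequence order.
    slot_cost: cost of each unit-length time slot."""
    t, total_completion, total_slot_cost = 0, 0, 0
    for d in durations:
        total_slot_cost += sum(slot_cost[t:t + d])  # cost of occupied slots
        t += d
        total_completion += t                       # this job's completion time
    return total_completion + total_slot_cost

slot_cost = [5, 4, 3, 2, 1, 1]              # non-increasing slot costs
print(schedule_cost([1, 2], slot_cost))     # shortest job first: 16
print(schedule_cost([2, 1], slot_cost))     # longest job first: 17
```

    With non-increasing slot costs, the shorter-jobs-first sequence wins in this toy instance, which matches the intuition behind the polynomial special cases mentioned above.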


    Scheduling a maintenance activity on parallel identical machines

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 1 2009
    Asaf Levin
    Abstract We study a problem of scheduling a maintenance activity on parallel identical machines, under the assumption that all the machines must be maintained simultaneously. One example of this setting is a situation where the entire system must be stopped for maintenance because of a required electricity shut-down. The objective is minimum flow-time. The problem is shown to be NP-hard, and moreover impossible to approximate unless P = NP. We introduce a pseudo-polynomial dynamic programming algorithm, and show how to convert it into a bicriteria FPTAS for this problem. We also present an efficient heuristic and a lower bound. Our numerical tests indicate that the heuristic provides very close-to-optimal schedules in most cases. © 2008 Wiley Periodicals, Inc. Naval Research Logistics 2009 [source]
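    The trade-off at the heart of this problem can be illustrated on a stripped-down single-machine version (the article's parallel-machine model is richer): jobs run in shortest-processing-time order, a maintenance period of length L is inserted after the k-th job, and maintenance must start by a deadline. All numbers below are illustrative assumptions.

```python
# Toy single-machine reduction: evaluate total flow time when a maintenance
# block of length L is inserted after the k-th job in SPT order.

def flow_time_with_maintenance(jobs, maint_len, k):
    order = sorted(jobs)          # SPT order
    t, total = 0, 0
    for i, d in enumerate(order):
        if i == k:
            t += maint_len        # machine stops for maintenance here
        t += d
        total += t                # accumulate this job's flow time
    return total

jobs, L, deadline = [3, 1, 2], 4, 3   # maintenance must start by time 3
order = sorted(jobs)
feasible = [k for k in range(len(jobs) + 1) if sum(order[:k]) <= deadline]
best_k = min(feasible, key=lambda k: flow_time_with_maintenance(jobs, L, k))
print(best_k)  # 2: run the two short jobs, maintain, then the longest job
```

    Without a deadline one would trivially push the maintenance to the end; the start-time constraint is what makes placing it a genuine scheduling decision.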


    Scheduling of depalletizing and truck loading operations in a food distribution system

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 3 2003
    Zhi-Long Chen
    Abstract This paper studies a scheduling problem arising in a beef distribution system where pallets of various types of beef products in the warehouse are first depalletized and then individual cases are loaded via conveyors to the trucks which deliver beef products to various customers. Given each customer's demand for each type of beef, the problem is to find a depalletizing and truck loading schedule that fills all the demands at a minimum total cost. We first show that the general problem where there are multiple trucks and each truck covers multiple customers is strongly NP-hard. Then we propose polynomial-time algorithms for the case where there are multiple trucks, each covering only one customer, and the case where there is only one truck covering multiple customers. We also develop an optimal dynamic programming algorithm and a heuristic for solving the general problem. By comparing to the optimal solutions generated by the dynamic programming algorithm, the heuristic is shown to be capable of generating near optimal solutions quickly. 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2003 [source]


    Evolution of the reverse link of CDMA-based systems to support high-speed data

    BELL LABS TECHNICAL JOURNAL, Issue 3 2002
    Nandu Gopalakrishnan
    Development of an upcoming release of the CDMA2000* family of standards is expected to focus on enhancing the reverse link (RL) operation to support high-speed packet data applications. The challenge is to design a system that yields substantial throughput gain while causing only minimal perturbations to the existing standard. We propose a system that evolves features already present in the CDMA2000 Release B and IS-856 (1xEV-DO) standards and reuses concepts and capabilities that were introduced for high-speed packet data support on the forward link (FL) in Release C of the CDMA2000 standard. The RL of Release C of the CDMA2000 standard supports a relatively slow scheduled operation of this link using signaling messages. Scheduling with shorter latencies can be achieved by moving this functionality to the physical layer. Concurrently, both the FL and RL channel conditions may be tracked, and users may be scheduled based on this knowledge. To further manage the power and bandwidth cost on the FL of scheduling users' transmissions on the RL, the mobile station (MS) is permitted to operate in either a scheduled mode or an autonomous mode. A capability is provided for the MS to switch between modes of operation. The performance impact of, and gain from, some of the system features is characterized through simulation results. © 2003 Lucent Technologies Inc. [source]


    SIMDE: An educational simulator of ILP architectures with dynamic and static scheduling

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2007
    I. Castilla
    Abstract This article presents SIMDE, a cycle-by-cycle simulator to support teaching of Instruction-Level Parallelism (ILP) architectures. The simulator covers dynamic and static instruction scheduling by using a shared structure for both approaches. Dynamic scheduling is illustrated by means of a simple superscalar processor based on Tomasulo's algorithm. A basic Very Long Instruction Word (VLIW) processor has been designed for static scheduling. The simulator is intended as an aid for teaching theoretical content in Computer Architecture and Organization courses. The students are provided with an easy-to-use common environment to perform different simulations and comparisons between superscalar and VLIW processors. Furthermore, the simulator has been tested by students in a Computer Architecture course in order to assess its real usefulness. © 2007 Wiley Periodicals, Inc. Comput Appl Eng Educ 14: 226–239, 2007; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20154 [source]


    Risk Modeling of Dependence among Project Task Durations

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2007
    I-Tung Yang
    Assessments of project schedule risk, however, can be strongly influenced by the dependence between task durations. In light of the need to address this dependence, the present study proposes a computer simulation model that incorporates and augments NORTA, a method for multivariate random number generation. The proposed model allows arbitrarily specified marginal distributions for task durations (which need not be members of the same distribution family) and any desired correlation structure. This level of flexibility is of great practical value when systematic data are not available and planners have to rely on experts' subjective estimation. The application of the proposed model is demonstrated through scheduling a road pavement project. The proposed model is validated by showing that the sample correlation coefficients between task durations closely match the originally specified ones. Empirical comparisons between the proposed model and two conventional approaches, PERT and conventional simulation (without correlations), are used to illustrate the usefulness of the proposed model. [source]


    A formalized approach for designing a P2P-based dynamic load balancing scheme

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2010
    Hengheng Xie
    Abstract Quality of service (QoS) is attracting more and more attention in many areas, including entertainment, emergency services, and transaction services. Therefore, the study of QoS-aware systems is becoming an important research topic in the area of distributed systems. In terms of load balancing, most of the existing QoS-related load balancing algorithms focus on routing mechanisms and traffic engineering. However, research on QoS-aware task scheduling and service migration is very limited. In this paper, we propose a task scheduling algorithm using dynamic QoS properties, and we develop a genetic algorithm-based service migration scheme aiming to optimize the performance of our proposed QoS-aware distributed service-based system. In order to verify the efficiency of our scheme, we implement a prototype of our algorithm using the P2P-based JXTA technology, and run emulation and simulation tests to analyze our proposed solution. We compare our service-migration-based algorithm with non-migration and non-load-balancing approaches, and find that our solution is much better than the other two in terms of QoS success rate. Furthermore, in order to provide more solid evidence for our research, we use DEVS to validate our system design. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Concepts for computer center power management

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2010
    A. DiRienzo
    Abstract Electrical power usage contributes significantly to the operational costs of large computer systems. At the Hypersonic Missile Technology Research and Operations Center (HMT-ROC) our system usage patterns provide a significant opportunity to reduce operating costs since there are a small number of dedicated users. The relatively predictable nature of our usage patterns allows for the scheduling of computational resource availability. We take advantage of this predictability to shut down systems during periods of low usage to reduce power consumption. With interconnected computer cluster systems, reducing the number of online nodes is more than a simple matter of throwing the power switch on a portion of the cluster. The paper discusses these issues and an approach for power reduction strategies for a computational system with a heterogeneous system mix that includes a large (1560-node) Apple Xserve PowerPC supercluster. In practice, the average load on computer systems may be much less than the peak load, although the infrastructure supporting the operation of large computer systems in a computer or data center must still be designed with the peak loads in mind. Given that system loads can be less than full peak a significant portion of the time, an opportunity exists for cost savings if idle systems can be dynamically throttled back, slept, or shut off entirely. The paper describes two separate strategies that meet the requirements for both power conservation and system availability at HMT-ROC. The first approach, for legacy systems, is little more than a brute-force approach to power management, which we call Time-Driven System Management (TDSM). The second approach, which we call Dynamic-Loading System Management (DLSM), is applicable to more current systems with 'Wake-on-LAN' capability and takes a more granular approach to the management of system resources. The paper details the rule sets that we have developed and implemented in the two approaches to system power management and discusses some results with these approaches. Copyright © 2009 John Wiley & Sons, Ltd. [source]
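    A time-driven rule set of the TDSM kind can be sketched as a fixed weekly calendar that maps the day and hour to a fraction of the cluster kept online. The window boundaries, fractions, and node count below are illustrative assumptions, not the rules from the paper.

```python
# Sketch of a time-driven power-management rule: a calendar of expected
# demand decides how many nodes stay powered. All values are illustrative.

SCHEDULE = [
    # (day_kind, start_hour, end_hour, fraction_of_cluster_online)
    ("weekday", 8, 20, 1.00),   # business hours: everything on
    ("weekday", 20, 24, 0.25),  # evenings: keep a quarter warm
    ("weekday", 0, 8, 0.10),
    ("weekend", 0, 24, 0.10),
]

def nodes_online(total_nodes, weekday, hour):
    kind = "weekday" if weekday < 5 else "weekend"
    for k, lo, hi, frac in SCHEDULE:
        if k == kind and lo <= hour < hi:
            return max(1, int(total_nodes * frac))  # never fully dark
    return 1

print(nodes_online(1560, weekday=2, hour=10))  # Wednesday 10:00 -> 1560
print(nodes_online(1560, weekday=6, hour=3))   # Sunday 03:00 -> 156
```

    A DLSM-style scheme would replace the fixed calendar with the observed queue depth or load average, waking additional 'Wake-on-LAN' nodes only when demand actually materializes.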


    Trust-based robust scheduling and runtime adaptation of scientific workflow

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 16 2009
    Mingzhong Wang
    Abstract Robustness and reliability with respect to the successful completion of a schedule are crucial requirements for scheduling in scientific workflow management systems because service providers are becoming autonomous. We introduce a model to incorporate trust, which indicates the probability that a service agent will comply with its commitments to improve the predictability and stability of the schedule. To deal with exceptions during the execution of a schedule, we adapt and evolve the schedule at runtime by interleaving the processes of evaluating, scheduling, executing and monitoring in the life cycle of the workflow management. Experiments show that schedules maximizing participants' trust are more likely to survive and succeed in open and dynamic environments. The results also prove that the proposed approach of workflow evaluation can find the most robust execution flow efficiently, thus avoiding the need of scheduling every possible execution path in the workflow definition. Copyright 2009 John Wiley & Sons, Ltd. [source]


    Performance evaluation of an autonomic network-aware metascheduler for Grids

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2009
    A. Caminero
    Abstract Grid technologies have enabled the aggregation of geographically distributed resources in the context of a particular application. The network remains an important requirement for any Grid application, as entities involved in a Grid system (such as users, services, and data) need to communicate with each other over a network. The performance of the network must therefore be considered when carrying out tasks such as scheduling, migration or monitoring of jobs. Surprisingly, many existing quality of service efforts ignore the network and focus instead on processor workload and disk access time. Making use of the network in an efficient and fault-tolerant manner is challenging. In a previous contribution, we proposed an autonomic network-aware scheduling architecture that is capable of adapting its behavior to the current status of the environment. Now, we present a performance evaluation in which our proposal is compared with a conventional scheduling strategy. We present simulation results that show the benefits of our approach. Copyright 2009 John Wiley & Sons, Ltd. [source]


    Efficient and fair scheduling for two-level information broadcasting systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2008
    Byoung-Hoon Lee
    Abstract In a ubiquitous environment, there are many applications where a server disseminates information of common interest to pervasive clients and devices. For example, an advertisement server sends information from a broadcast server to display devices. We propose an efficient information scheduling scheme for information broadcast systems that reduces the average waiting time for information access while maintaining fairness between information items. Our scheme allocates information items adaptively according to their relative popularity at each local server. Simulation results show that our scheme can reduce the waiting time by up to 30% compared with the round-robin scheme while maintaining cost-effective fairness. Copyright © 2008 John Wiley & Sons, Ltd. [source]
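    Popularity-adaptive allocation can be illustrated with the classic square-root rule from broadcast scheduling, in which an item's share of the broadcast cycle is proportional to the square root of its access probability; this is a standard result used here for illustration, and the article's scheme may weight items differently.

```python
# Square-root rule sketch: broadcast shares proportional to sqrt(popularity),
# which minimizes mean waiting time in the classic flat-broadcast model.

import math

def broadcast_shares(popularity):
    """popularity: {item: access probability} -> {item: share of cycle}."""
    w = {item: math.sqrt(p) for item, p in popularity.items()}
    total = sum(w.values())
    return {item: w[item] / total for item in w}

shares = broadcast_shares({"ad1": 0.64, "ad2": 0.16, "ad3": 0.16, "ad4": 0.04})
print({k: round(v, 2) for k, v in shares.items()})
# sqrt weights 0.8 : 0.4 : 0.4 : 0.2 -> shares 0.44, 0.22, 0.22, 0.11
```

    Compared with round robin (equal shares of 0.25), the hot item is broadcast nearly twice as often while cold items still appear regularly, which is the waiting-time/fairness balance the abstract describes.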


    Java multithreading-based parallel approximate arrow-type inverses

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2008
    George A. Gravvanis
    Abstract A new parallel shared memory Java multithreaded design and implementation of the explicit approximate inverse preconditioning, for efficiently solving arrow-type linear systems on symmetric multiprocessor systems (SMPs), is presented. A new parallel algorithm for computing a class of optimized approximate arrow-type inverse matrix is introduced. The performance on an SMP, using Java multithreading, is investigated by solving arrow-type linear systems and numerical results are given. The parallel performance of the construction of the optimized approximate inverse and the explicit preconditioned generalized conjugate gradient square scheme, using a dynamic workload scheduling, is also presented. Copyright 2007 John Wiley & Sons, Ltd. [source]


    Tunable scheduling in a GridRPC framework

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2008
    A. Amar
    Abstract Among existing grid middleware approaches, one simple, powerful, and flexible approach consists of using servers available in different administrative domains through the classic client–server or remote procedure call paradigm. Network Enabled Servers (NES) implement this model, also called GridRPC. Clients submit computation requests to a scheduler, whose goal is to find a server available on the grid using some performance metric. The aim of this paper is to give an overview of an NES middleware developed in the GRAAL team, called the Distributed Interactive Engineering Toolbox (DIET), and to describe recent developments around plug-in schedulers, workflow management, and tools. DIET is a hierarchical set of components used for the development of applications based on computational servers on the grid. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Segregation and scheduling for P2P applications with the interceptor middleware system

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2008
    Cosimo Anglano
    Abstract Very large peer-to-peer systems are often required to implement efficient and scalable services, but usually they can be built only by assembling resources contributed by many independent users. Among the guarantees that must be provided to convince these users to join the P2P system, particularly important is the ability to ensure that P2P applications and services running on their nodes will not unacceptably degrade the performance of their own applications because of excessive resource consumption. In this paper we present the Interceptor, a middleware-level application segregation and scheduling system, which is able to strictly enforce quantitative limitations on node resource usage and, at the same time, to make P2P applications achieve satisfactory performance even in the face of these limitations. A proof-of-concept implementation has been carried out for the Linux operating system, and has been used to perform an extensive experimentation aimed at quantitatively evaluating the Interceptor. The results we obtained clearly demonstrate that the Interceptor is able to strictly enforce quantitative limitations on node resource usage, and at the same time to effectively schedule P2P applications. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Incentive-based scheduling in Grid computing

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2006
    Yanmin Zhu
    Abstract With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, Grid computing has emerged as an attractive computing paradigm. In typical Grid environments, there are two distinct parties: resource consumers and resource providers. Enabling an effective interaction between the two parties (i.e. scheduling jobs of consumers across the resources of providers) is particularly challenging due to the distributed ownership of Grid resources. In this paper, we propose an incentive-based peer-to-peer (P2P) scheduling scheme for Grid computing, with the goal of building a practical and robust computational economy. This goal is realized by building a computational market supporting fair and healthy competition among consumers and providers. Each participant in the market competes actively and behaves independently for its own benefit. A market is said to be healthy if every player in the market gets sufficient incentive for joining the market. To build a healthy computational market, we propose a P2P scheduling infrastructure, which takes advantage of P2P networks to efficiently support scheduling. The proposed incentive-based algorithms are designed for consumers and providers, respectively, to ensure that every participant gets sufficient incentive. Simulation results show that our approach is successful in building a healthy and scalable computational economy. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Optimal integrated code generation for VLIW architectures

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2006
    Christoph Kessler
    Abstract We present a dynamic programming method for optimal integrated code generation for basic blocks that minimizes execution time. It can be applied to single-issue pipelined processors, in-order-issue superscalar processors, VLIW architectures with a single homogeneous register set, and clustered VLIW architectures with multiple register sets. For the case of a single register set, our method simultaneously copes with instruction selection, instruction scheduling, and register allocation. For clustered VLIW architectures, we also integrate the optimal partitioning of instructions, allocation of registers for temporary variables, and scheduling of data transfer operations between clusters. Our method is implemented in the prototype of a retargetable code generation framework for digital signal processors (DSPs), called OPTIMIST. We present results for the processors ARM9E, TI C62x, and a single-cluster variant of C62x. Our results show that the method can produce optimal solutions for small and (in the case of a single register set) medium-sized problem instances with a reasonable amount of time and space. For larger problem instances, our method can be seamlessly changed into a heuristic. Copyright 2006 John Wiley & Sons, Ltd. [source]


    Distributed loop-scheduling schemes for heterogeneous computer systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2006
    Anthony T. Chronopoulos
    Abstract Distributed computing systems are a viable and less expensive alternative to parallel computers. However, a serious difficulty in concurrent programming of a distributed system is how to deal with the scheduling and load balancing of such a system, which may consist of heterogeneous computers. In the past, some distributed scheduling schemes suitable for parallel loops with independent iterations on heterogeneous computer clusters have been designed. In this work we study self-scheduling schemes for parallel loops with independent iterations which have previously been applied to multiprocessor systems. We extend one important scheme of this type to a distributed version suitable for heterogeneous distributed systems. We implement our new scheme on a network of computers and make performance comparisons with other existing schemes. Copyright © 2005 John Wiley & Sons, Ltd. [source]
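    The heterogeneous extension can be sketched as a weighted variant of factoring-style self-scheduling: a shrinking batch of the remaining iterations is handed out each round, with each machine's chunk scaled by its relative speed. The speed ratios and the halving rule below are illustrative assumptions, not the specific scheme studied in the article.

```python
# Sketch of weighted self-scheduling for heterogeneous machines: chunks
# shrink over time (factoring-style) and are scaled by machine speed.

def weighted_chunks(total_iters, speeds, batch_fraction=0.5):
    """Yield (machine, chunk_size) pairs until all iterations are assigned."""
    remaining = total_iters
    total_speed = sum(speeds.values())
    order = sorted(speeds, key=speeds.get, reverse=True)  # fastest first
    while remaining > 0:
        batch = max(1, int(remaining * batch_fraction))   # shrinking batch
        for m in order:
            share = min(max(1, int(batch * speeds[m] / total_speed)), remaining)
            yield m, share
            remaining -= share
            if remaining == 0:
                break

assignments = list(weighted_chunks(100, {"fast": 3, "slow": 1}))
print(assignments[:2])  # the fast machine gets a ~3x larger early chunk
```

    Large early chunks keep scheduling overhead low, while the small late chunks absorb the load imbalance caused by unequal (or mispredicted) machine speeds.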


    Neuroscience instrumentation and distributed analysis of brain activity data: a case for eScience on global Grids

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2005
    Rajkumar Buyya
    Abstract The distribution of knowledge (by scientists) and data sources (advanced scientific instruments), and the need for large-scale computational resources for analyzing massive scientific data are two major problems commonly observed in scientific disciplines. Two popular scientific disciplines of this nature are brain science and high-energy physics. The analysis of brain-activity data gathered from the MEG (magnetoencephalography) instrument is an important research topic in medical science since it helps doctors in identifying symptoms of diseases. The data needs to be analyzed exhaustively to efficiently diagnose and analyze brain functions and requires access to large-scale computational resources. The potential platform for solving such resource-intensive applications is the Grid. This paper presents the design and development of an MEG data analysis system built by leveraging Grid technologies, primarily Nimrod-G, Gridbus, and Globus. It describes the composition of the neuroscience (brain-activity analysis) application as a parameter-sweep application and its on-demand deployment on global Grids for distributed execution. The results of economic-based scheduling of analysis jobs for three different optimization scenarios on the world-wide Grid testbed resources are presented along with their graphical visualization. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    GridBLAST: a Globus-based high-throughput implementation of BLAST in a Grid computing framework

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2005
    Arun Krishnan
    Article first published online: 24 JUN 200
    Abstract Improvements in the performance of processors and networks have made it feasible to treat collections of workstations, servers, clusters and supercomputers as integrated computing resources or Grids. However, the very heterogeneity that is the strength of computational and data Grids can also make application development for such an environment extremely difficult. Application development in a Grid computing environment faces significant challenges in the form of problem granularity, latency and bandwidth issues as well as job scheduling. Currently existing Grid technologies limit the development of Grid applications to certain classes, namely, embarrassingly parallel, hierarchical parallelism, work flow and database applications. Of all these classes, embarrassingly parallel applications are the easiest to develop in a Grid computing framework. The work presented here deals with creating a Grid-enabled, high-throughput, standalone version of a bioinformatics application, BLAST, using Globus as the Grid middleware. BLAST is a sequence alignment and search technique that is embarrassingly parallel in nature and thus amenable to adaptation to a Grid environment. A detailed methodology for creating the Grid-enabled application is presented, which can be used as a template for the development of similar applications. The application has been tested on a ,mini-Grid' testbed and the results presented here show that for large problem sizes, a distributed, Grid-enabled version can help in significantly reducing execution times. Copyright 2005 John Wiley & Sons, Ltd. [source]


    Advanced eager scheduling for Java-based adaptive parallel computing

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7-8 2005
    Michael O. Neary
    Abstract Javelin 3 is a software system for developing large-scale, fault-tolerant, adaptively parallel applications. When all or part of their application can be cast as a master–worker or branch-and-bound computation, Javelin 3 frees application developers from concerns about inter-processor communication and fault tolerance among networked hosts, allowing them to focus on the underlying application. The paper describes a fault-tolerant task scheduler and its performance analysis. The task scheduler integrates work stealing with an advanced form of eager scheduling. It enables dynamic task decomposition, which improves host load-balancing in the presence of tasks whose non-uniform computational load is evident only at execution time. Speedup measurements of actual performance on up to 1000 hosts are presented. We analyze the expected performance degradation due to unresponsive hosts and measure the degradation actually observed. Copyright © 2005 John Wiley & Sons, Ltd. [source]
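The core idea of eager scheduling — an uncompleted task may be handed out again to another idle host, and the first result wins, so unresponsive hosts cannot stall the computation — can be sketched in a few lines. The class and method names below are illustrative assumptions, not Javelin 3's API:

```python
class EagerScheduler:
    """Minimal sketch of eager scheduling: while any task is unfinished,
    idle hosts are given the least-replicated pending task, duplicating
    work already in flight on possibly unresponsive hosts."""

    def __init__(self, tasks):
        self.pending = {t: [] for t in tasks}  # task -> hosts running it
        self.done = {}

    def request_task(self, host):
        if not self.pending:
            return None
        # Prefer a task nobody is running; otherwise eagerly duplicate
        # the pending task with the fewest hosts assigned to it.
        task = min(self.pending, key=lambda t: len(self.pending[t]))
        self.pending[task].append(host)
        return task

    def report_result(self, task, result):
        # First completion wins; duplicate or late reports are ignored.
        if task in self.pending:
            self.done[task] = result
            del self.pending[task]
```

A third idle host therefore re-receives a task that is already running elsewhere, which is exactly the redundancy that masks host failures.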


    Simulation of resource synchronization in a dynamic real-time distributed computing environment

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2004
    Chen Zhang
    Abstract Today, more and more distributed computer applications are being modeled and constructed using real-time principles and concepts. In 1989, the Object Management Group (OMG) formed a Real-Time Special Interest Group (RT SIG) with the goal of extending the Common Object Request Broker Architecture (CORBA) standard to include real-time specifications. This group's most recent efforts have focused on the requirements of dynamic distributed real-time systems. One open problem in this area is resource access synchronization for tasks employing dynamic priority scheduling. This paper presents two resource synchronization protocols that the authors have developed which meet the requirements of dynamic distributed real-time systems as specified by Dynamic Scheduling Real-Time CORBA (DSRT CORBA). The proposed protocols can be applied to both Earliest Deadline First (EDF) and Least Laxity First (LLF) dynamic scheduling algorithms, allow distributed nested critical sections, and avoid unnecessary runtime overhead. In order to evaluate the performance of the proposed protocols, we analyzed each protocol's schedulability. Since the schedulability of the system is affected by numerous system configuration parameters, we designed simulation experiments to isolate and illustrate the impact of each individual system parameter. The simulation experiments show that the proposed protocols perform better than a scheme based on dynamic priority ceiling updates. Copyright © 2004 John Wiley & Sons, Ltd. [source]
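The difference between the two dynamic priority rules the protocols target can be shown in a few lines. The task dictionaries and field names below are illustrative assumptions; `deadline` is an absolute deadline and `remaining` is the remaining execution time:

```python
def edf_pick(tasks, now):
    """Earliest Deadline First: dispatch the ready task whose absolute
    deadline is nearest (the current time is irrelevant to the ordering)."""
    return min(tasks, key=lambda t: t["deadline"])

def llf_pick(tasks, now):
    """Least Laxity First: laxity = deadline - now - remaining work;
    dispatch the task with the least slack before it would miss its deadline."""
    return min(tasks, key=lambda t: t["deadline"] - now - t["remaining"])
```

The two rules can disagree: a task with a later deadline but much more remaining work has less laxity, so LLF runs it first while EDF does not.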


    Sequence alignment on the Cray MTA-2

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2004
    Shahid H. Bokhari
    Abstract Several variants of standard algorithms for DNA sequence alignment have been implemented on the Cray Multithreaded Architecture-2 (MTA-2). We describe the architecture of the MTA-2 and discuss how its hardware and software enable efficient implementation of parallel algorithms with little or no regard for issues of partitioning, mapping or scheduling. We describe how we ported variants of the naive algorithm for exact alignment and the dynamic programming algorithm for approximate alignment to the MTA-2, and provide detailed performance measurements. It is shown that, for the dynamic programming algorithm, the use of the MTA's 'Full/Empty' synchronization bits leads to almost perfect speedup for large problems on one to eight processors. These results illustrate the versatility of the MTA's architecture and demonstrate its potential for providing a high-productivity platform for parallel processing. Copyright © 2004 John Wiley & Sons, Ltd. [source]
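The dynamic programming recurrence behind approximate alignment is the standard unit-cost edit-distance table; this sequential sketch (an assumption standing in for the paper's MTA-2 variants) shows the left/up/diagonal dependencies that the MTA's Full/Empty bits enforce, which is what lets cells along an anti-diagonal be filled concurrently:

```python
def edit_distance(a, b):
    """Unit-cost edit distance via dynamic programming. Each cell d[i][j]
    depends only on d[i-1][j], d[i][j-1] and d[i-1][j-1]; on the MTA-2,
    Full/Empty bits on the cells enforce exactly these dependencies so
    that an entire anti-diagonal can be computed in parallel. Here we
    simply fill the table row by row."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                     # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match/substitution
    return d[m][n]
```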


    Managing distributed shared arrays in a bulk-synchronous parallel programming environment

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2-3 2004
    Christoph W. Kessler
    Article first published online: 7 JAN 200
    Abstract NestStep is a parallel programming language for the BSP (bulk-synchronous parallel) programming model. In this article we describe the concept of distributed shared arrays in NestStep and its implementation on top of MPI. In particular, we present a novel method for runtime scheduling of irregular, direct remote accesses to sections of distributed shared arrays. Our method, which is fully parallelized, uses conventional two-sided message passing and thus avoids the overhead of a standard implementation of direct remote memory access based on one-sided communication. The main prerequisite is that the given program is structured in a BSP-compliant way. Copyright © 2004 John Wiley & Sons, Ltd. [source]
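Resolving irregular remote reads with two-sided messages inside a BSP superstep can be sketched as a request/reply exchange. The sketch below simulates the two communication phases in plain Python (the function name and data layout are assumptions for illustration, not NestStep's implementation, which does this with MPI point-to-point messages):

```python
def bsp_remote_reads(local_arrays, requests):
    """Sketch: serve irregular remote array reads with two-sided
    messaging in a BSP superstep, with no one-sided remote memory access.

    local_arrays: {rank: list} -- the array block each process owns.
    requests: {rank: [(owner, index), ...]} -- reads issued this superstep.
    """
    # Phase 1: every process sends its read requests to the owners
    # (the "send" half of two-sided message passing).
    inbox = {rank: [] for rank in local_arrays}
    for requester, reqs in requests.items():
        for owner, idx in reqs:
            inbox[owner].append((requester, idx))
    # Phase 2: owners look the values up and send replies back.
    answers = {}
    for owner, msgs in inbox.items():
        for requester, idx in msgs:
            answers[(requester, owner, idx)] = local_arrays[owner][idx]
    # Each requester reassembles its replies in the order it issued the reads.
    return {req: [answers[(req, o, i)] for o, i in reqs]
            for req, reqs in requests.items()}
```

Batching all of a superstep's reads into one exchange is what makes the two-sided scheme competitive with one-sided remote memory access: the matching receives are known in advance, so no extra synchronization is needed.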