Distributed


Kinds of Distributed

  • heterogeneously distributed
  • homogeneously distributed
  • log-normally distributed
  • plant distributed
  • plot distributed
  • population distributed
  • questionnaire distributed
  • site distributed
  • species distributed
  • survey distributed
  • ubiquitously distributed

Terms modified by Distributed

  • distributed amplifier
  • distributed application
  • distributed approach
  • distributed Bragg reflector
  • distributed compensation
  • distributed computing
  • distributed data
  • distributed delay
  • distributed environment
  • distributed error
  • distributed feedback
  • distributed generation
  • distributed generators
  • distributed hydrological model
  • distributed lag
  • distributed lag model
  • distributed lag models
  • distributed manner
  • distributed network
  • distributed observation
  • distributed only
  • distributed parameter system
  • distributed population
  • distributed process
  • distributed questionnaire
  • distributed resource
  • distributed response
  • distributed species
  • distributed system
  • distributed team
  • distributed worldwide

Selected Abstracts


    The Grid Resource Broker workflow engine

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2008
    M. Cafaro
    Abstract Increasingly, complex scientific applications are structured in terms of workflows. These applications are usually computationally and/or data intensive and thus are well suited for execution in grid environments. Distributed, geographically spread computing and storage resources are made available to scientists belonging to virtual organizations sharing resources across multiple administrative domains through established service-level agreements. Grids provide an unprecedented opportunity for distributed workflow execution; indeed, many applications are well beyond the capabilities of a single computer, and partitioning the overall computation into components whose execution may benefit from running on different architectures could provide better performance. In this paper we describe the design and implementation of the Grid Resource Broker (GRB) workflow engine. Copyright © 2008 John Wiley & Sons, Ltd. [source]
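
    As a rough illustration of what any workflow engine of this kind must do, the hedged sketch below executes a task DAG: tasks with no unmet dependencies run in parallel, and each completion may release further tasks. All names are hypothetical (this is not the GRB API), and a local thread pool stands in for remote grid resources.

```java
import java.util.*;
import java.util.concurrent.*;

// Hedged sketch of DAG-style workflow execution (hypothetical names, not the
// GRB API): tasks with no unmet dependencies run in parallel on a thread pool,
// and each completion may release further tasks. Assumes the graph is acyclic.
class WorkflowSketch {
    record Task(String name, List<String> deps, Runnable action) {}

    static void run(List<Task> tasks) throws InterruptedException {
        Map<String, Integer> unmet = new HashMap<>();         // unmet deps per task
        Map<String, List<Task>> dependents = new HashMap<>();
        Queue<Task> ready = new ArrayDeque<>();
        for (Task t : tasks) {
            unmet.put(t.name(), t.deps().size());
            for (String d : t.deps())
                dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(t);
            if (t.deps().isEmpty()) ready.add(t);
        }
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<Task> cs = new ExecutorCompletionService<>(pool);
        int done = 0;
        try {
            while (done < tasks.size()) {
                for (Task t; (t = ready.poll()) != null; ) cs.submit(t.action(), t);
                Task finished = cs.take().get();              // wait for any completion
                done++;
                for (Task dep : dependents.getOrDefault(finished.name(), List.of()))
                    if (unmet.merge(dep.name(), -1, Integer::sum) == 0) ready.add(dep);
            }
        } catch (ExecutionException e) {
            throw new RuntimeException(e.getCause());
        } finally {
            pool.shutdown();
        }
    }
}
```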


    ChemInform Abstract: Polyphenylene Dendrimers with Different Fluorescent Chromophores Asymmetrically Distributed at the Periphery.

    CHEMINFORM, Issue 49 2001
    Tanja Weil
    Abstract ChemInform is a weekly Abstracting Service, delivering concise information at a glance that was extracted from about 100 leading journals. To access a ChemInform Abstract of an article which was published elsewhere, please select a "Full Text" option. The original article is trackable via the "References" option. [source]


    A RULE-BASED APPROACH FOR SEMANTIC ANNOTATION EVOLUTION

    COMPUTATIONAL INTELLIGENCE, Issue 3 2007
    P.-H. Luong
    An approach for managing knowledge in an organization in the new infrastructure of the Semantic Web consists of building a corporate semantic web (CSW). The main components of a CSW are (i) evolving resources distributed over an intranet and indexed using (ii) semantic annotations expressed with the vocabulary provided by (iii) a shared ontology. However, changes in the operating environment may introduce inconsistencies into the system and thus require modifications of the CSW components; these changes must be detected and properly managed. In this paper we present a rule-based approach that allows us to detect and correct semantic annotation inconsistencies. This approach is implemented in the CoSWEM system, which manages the evolution of such a CSW, in particular the evolution of semantic annotations when the underlying ontologies change. [source]
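
    A minimal sketch of what one corrective rule can look like, assuming a simple ontology change (a deleted concept with a known surviving superclass); class and method names are illustrative, not the CoSWEM API:

```java
import java.util.*;

// Hedged sketch of a repair rule for semantic annotations after an ontology
// change (illustrative names, not CoSWEM): an annotation referring to a deleted
// concept is rewritten to the concept's surviving superclass, or flagged.
class AnnotationRepair {
    record Change(String deletedConcept, String superClass) {}

    static List<String> repair(List<String> annotationConcepts, Change change,
                               Set<String> ontologyConcepts) {
        List<String> repaired = new ArrayList<>();
        for (String c : annotationConcepts) {
            if (c.equals(change.deletedConcept())) {
                if (ontologyConcepts.contains(change.superClass()))
                    repaired.add(change.superClass());          // corrective rule
                else
                    System.err.println("inconsistent annotation: " + c); // detection only
            } else {
                repaired.add(c);
            }
        }
        return repaired;
    }
}
```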


    Virtual laboratory: A distributed collaborative environment

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 1 2004
    Tiranee Achalakul
    Abstract This article proposes the design framework of a distributed, real-time collaborative architecture. The architecture concept allows information to be fused, disseminated, and interpreted collaboratively, in real time, among researchers on different continents. The architecture is designed on top of the distributed object technology DCOM. In our framework, every module can be viewed as an object. Each of these objects communicates and exchanges data with the others via a set of interfaces and connection points. We constructed the virtual laboratory based on the proposed architecture. The laboratory allows multiple analysts to work collaboratively through a standard web browser using a set of tools, namely chat, whiteboard, audio/video exchange, file transfer and application sharing. Several existing technologies, such as NetMeeting, are integrated to provide these collaborative functions. Finally, the virtual laboratory quality evaluation is described with an example application of remote collaboration in satellite image fusion and analysis. © 2004 Wiley Periodicals, Inc. Comput Appl Eng Educ 12: 44-53, 2004; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20008 [source]


    Manifold Homotopy via the Flow Complex

    COMPUTER GRAPHICS FORUM, Issue 5 2009
    Bardia Sadri
    Abstract It is known that the critical points of the distance function induced by a dense sample P of a submanifold Σ of ℝ^n are distributed into two groups: one lying close to Σ itself, called the shallow critical points, and the other close to the medial axis of Σ, called the deep critical points. We prove that under a (uniform) sampling assumption, the union of the stable manifolds of the shallow critical points has the same homotopy type as Σ itself, and the union of the stable manifolds of the deep critical points has the homotopy type of the complement of Σ. The separation of critical points under uniform sampling entails a separation in terms of the distance of critical points to the sample. This means that if a given sample is dense enough with respect to two or more submanifolds of ℝ^n, the homotopy types of all such submanifolds, together with those of their complements, are captured as unions of stable manifolds of shallow versus deep critical points, in a filtration of the flow complex based on the distance of critical points to the sample. This results in an algorithm for homotopic manifold reconstruction when the target dimension is unknown. [source]
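
    The two homotopy equivalences claimed in the abstract can be stated compactly. The notation below (S and D for the sets of shallow and deep critical points, W^s(c) for the stable manifold of a critical point c) is ours, introduced only for this restatement:

```latex
% union of stable manifolds of shallow critical points ~ the submanifold;
% union of stable manifolds of deep critical points ~ its complement
\bigcup_{c \in S} W^s(c) \;\simeq\; \Sigma,
\qquad
\bigcup_{c \in D} W^s(c) \;\simeq\; \mathbb{R}^n \setminus \Sigma .
```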


    Out-of-Core and Dynamic Programming for Data Distribution on a Volume Visualization Cluster

    COMPUTER GRAPHICS FORUM, Issue 1 2009
    S. Frank
    I.3.2 [Computer Graphics]: Distributed/network graphics; C.2.4 [Distributed Systems]: Distributed applications

    Abstract Ray-directed volume-rendering algorithms are well suited for parallel implementation in a distributed cluster environment. For distributed ray casting, the scene must be partitioned between nodes for good load balancing, and a strict view-dependent priority order is required for image composition. In this paper, we define the load balanced network distribution (LBND) problem and map it to the NP-complete precedence constrained job-shop scheduling problem. We introduce a kd-tree solution and a dynamic programming solution. To process a massive data set, either a parallel or an out-of-core approach is required. Parallel preprocessing is performed by render nodes on data that are allocated using a static data structure. Volumetric data sets often contain a large portion of voxels that will never be rendered: empty space. Parallel preprocessing fails to take advantage of this. Our slab-projection slice, introduced in this paper, tracks empty space across consecutive slices of data to reduce the amount of data distributed and rendered. It is used to facilitate out-of-core bricking and kd-tree partitioning. Load balancing using each of our approaches is compared with traditional methods using several segmented regions of the Visible Korean data set. [source]
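
    The sketch below illustrates the general idea behind empty-space tracking (not the paper's slab-projection slice data structure): a brick of consecutive slices is distributed and rendered only if it contains at least one voxel above the opacity threshold.

```java
// Illustrative sketch (not the paper's code) of empty-space detection across
// consecutive slices: a brick of voxels is shipped to a render node only if
// some slice inside it contains a non-transparent voxel.
class EmptySpaceSketch {
    // volume[z][y][x]; a voxel counts as empty below the opacity threshold
    static boolean brickIsEmpty(float[][][] volume, int z0, int z1, float threshold) {
        for (int z = z0; z < z1; z++)             // consecutive slices of the brick
            for (float[] row : volume[z])
                for (float v : row)
                    if (v >= threshold) return false;
        return true;                               // never rendered: skip distribution
    }
}
```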


    Interactive Visualization with Programmable Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Thomas Ertl
    One of the main scientific goals of visualization is the development of algorithms and appropriate data models which facilitate interactive visual analysis and direct manipulation of the increasingly large data sets which result from simulations running on massive parallel computer systems, from measurements employing fast high-resolution sensors, or from large databases and hierarchical information spaces. This task can only be achieved with the optimization of all stages of the visualization pipeline: filtering, compression, and feature extraction of the raw data sets, adaptive visualization mappings which allow the users to choose between speed and accuracy, and exploiting new graphics hardware features for fast and high-quality rendering. The recent introduction of advanced programmability in widely available graphics hardware has already led to impressive progress in the area of volume visualization. However, besides the acceleration of the final rendering, flexible graphics hardware is increasingly being used also for the mapping and filtering stages of the visualization pipeline, thus giving rise to new levels of interactivity in visualization applications. The talk will present recent results of applying programmable graphics hardware in various visualization algorithms covering volume data, flow data, terrains, NPR rendering, and distributed and remote applications. [source]


    An MPI Parallel Implementation of Newmark's Method

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2000
    Ali Namazifard
    The standard message-passing interface (MPI) is used to parallelize Newmark's method. The linear matrix equation encountered at each time step is solved using a preconditioned conjugate gradient algorithm. Data are distributed over the processors of a given parallel computer on a degree-of-freedom basis; this produces effective load balance between the processors and leads to a highly parallelized code. The portability of the implementation of this scheme is tested by solving some simple problems on two different machines: an SGI Origin2000 and an IBM SP2. The measured times demonstrate the efficiency of the approach and highlight the maintenance advantages that arise from using a standard parallel library such as MPI. [source]
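
    A distribution on a degree-of-freedom basis is, at its simplest, an even block split of the unknowns over the ranks. The helper below is a hedged sketch of that split; a real MPI code would take rank and size from the communicator.

```java
// Hedged sketch: an even block split of the unknowns over nRanks processors,
// i.e. distribution on a degree-of-freedom basis. The first (nDofs % nRanks)
// ranks receive one extra unknown, so the load differs by at most one.
class DofPartition {
    // returns { firstDof, lastDofExclusive } owned by `rank`
    static int[] ownedRange(int nDofs, int rank, int nRanks) {
        int base = nDofs / nRanks, extra = nDofs % nRanks;
        int first = rank * base + Math.min(rank, extra);
        int count = base + (rank < extra ? 1 : 0);
        return new int[] { first, first + count };
    }
}
```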


    An optimal multimedia object allocation solution in multi-powermode storage systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2010
    Yingwei Jin
    Abstract Given a set of multimedia objects R = {o1, o2, ..., ok}, each of which has a set of multiple versions oi.v = {Ai.0, Ai.1, ..., Ai.m}, i = 1, 2, ..., k, there is a problem of distributing these objects in a server system so that user requests for accessing specified multimedia objects can be fulfilled with minimum energy consumption and without significantly degrading system performance. This paper considers the allocation problem of multimedia objects in multi-powermode storage systems, where the objects are distributed among multi-powermode storages based on the access pattern to the objects. We design an underlying storage-system infrastructure, propose a dynamic multimedia object allocation policy based on it, and prove the optimality of the proposed policy. Copyright © 2010 John Wiley & Sons, Ltd. [source]
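
    One simple allocation policy of this flavor (illustrative only; the paper designs, and proves optimal, its own dynamic policy) ranks objects by access frequency and keeps only the hottest ones on full-power storage, so rarely accessed objects do not keep disks at full power:

```java
import java.util.*;

// Hedged sketch of access-pattern-driven placement: the most frequently
// accessed objects go to full-power storage, the rest to low-power devices.
// Names and the policy itself are illustrative, not the paper's.
class PowerModeAllocator {
    record Obj(String id, long accessesPerDay) {}

    static Map<String, String> allocate(List<Obj> objects, int fullPowerSlots) {
        objects.sort(Comparator.comparingLong(Obj::accessesPerDay).reversed());
        Map<String, String> placement = new LinkedHashMap<>();
        for (int i = 0; i < objects.size(); i++)
            placement.put(objects.get(i).id(),
                          i < fullPowerSlots ? "full-power" : "low-power");
        return placement;
    }
}
```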


    A Grid-enabled problem-solving environment for advanced reservoir uncertainty analysis

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2008
    Zhou Lei
    Abstract Uncertainty analysis is critical for conducting reservoir performance prediction. However, it is challenging because it relies on (1) massive modeling-related, geographically distributed, terabyte- or even petabyte-scale data sets (geoscience and engineering data), (2) the need to rapidly perform hundreds or thousands of flow simulations (identical runs with different models) to calculate the impacts of various uncertainty factors, and (3) an integrated, secure, and easy-to-use problem-solving toolkit to assist the analysis. We leverage Grid computing technologies to address these challenges. We design and implement an integrated problem-solving environment, ResGrid, to effectively improve reservoir uncertainty analysis. The ResGrid consists of data management, execution management, and a Grid portal. Data Grid tools, such as metadata, replica, and transfer services, are used to handle the massive size and geographic distribution of the data sets. Workflow, task farming, and resource allocation are used to support large-scale computation. A Grid portal integrates the data management and the computation solution into a unified, easy-to-use interface, enabling reservoir engineers to specify uncertainty factors of interest and perform large-scale reservoir studies through a web browser. The ResGrid has been used in petroleum engineering. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    On the effectiveness of runtime techniques to reduce memory sharing overheads in distributed Java implementations

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2008
    Marcelo Lobosco
    Abstract Distributed Java virtual machine (dJVM) systems enable concurrent Java applications to run transparently on clusters of commodity computers. This is achieved by supporting Java's shared-memory model over multiple JVMs distributed across the cluster's computer nodes. In this work, we describe and evaluate selective dynamic diffing and lazy home allocation, two new runtime techniques that enable dJVMs to efficiently support memory sharing across the cluster. Specifically, the two proposed techniques, either in isolation or in combination, can help reduce the overheads due to message traffic, extra memory space, and the high latency of remote memory accesses that such dJVM systems incur in implementing their memory-coherence protocol. In order to evaluate the performance benefits of dynamic diffing and lazy home allocation, we implemented both techniques in Cooperative JVM (CoJVM), a basic dJVM system we developed in previous work. We then carried out performance comparisons between the basic CoJVM and versions modified to use our proposed techniques for five representative concurrent Java applications (matrix multiply, LU, Radix, fast Fourier transform, and SOR). Our experimental results show that dynamic diffing and lazy home allocation significantly reduce memory sharing overheads. The reduction resulted in considerable gains in the CoJVM system's performance, ranging from 9% up to 20%, in four out of the five applications, with resulting speedups varying from 6.5 up to 8.1 on an 8-node cluster of computers. Copyright © 2007 John Wiley & Sons, Ltd. [source]
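
    Diffing in software DSM systems generally means comparing a working copy against a "twin" saved earlier and shipping only the bytes that changed. The sketch below shows that core mechanism; the names and the byte-level granularity are illustrative, not CoJVM's implementation.

```java
import java.util.*;

// Illustrative twin-based diffing sketch (the general DSM mechanism, not
// CoJVM's API): only bytes that differ from the twin taken at acquire time
// are shipped to the home node, reducing coherence message traffic.
class DiffSketch {
    record Patch(int offset, byte value) {}

    static List<Patch> diff(byte[] twin, byte[] working) {
        List<Patch> patches = new ArrayList<>();
        for (int i = 0; i < twin.length; i++)
            if (twin[i] != working[i]) patches.add(new Patch(i, working[i]));
        return patches;              // typically far smaller than the full object
    }

    static void apply(byte[] home, List<Patch> patches) {
        for (Patch p : patches) home[p.offset()] = p.value();
    }
}
```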


    Babylon: middleware for distributed, parallel, and mobile Java applications

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2008
    Willem van Heiningen
    Abstract Babylon is a collection of tools and services that provide a 100% Java-compatible environment for developing, running and managing parallel, distributed and mobile Java applications. It incorporates features such as object migration, asynchronous method invocation, and remote class loading, while providing an easy-to-use interface. Additionally, Babylon enables Java applications to seamlessly create and interact with remote objects, while protecting those objects from other applications by implementing access restrictions and separate namespaces. The implementation of Babylon centers around dynamic proxies, a feature first available in Java 1.3 that allows proxy objects to be created at runtime. Dynamic proxies play a key role in achieving the goals of Babylon. The potential cluster computing benefits of the system are demonstrated with experimental results, which show that sequential Java applications can achieve significant performance benefits from using Babylon to parallelize their work across a cluster of workstations. Copyright © 2008 John Wiley & Sons, Ltd. [source]
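
    The dynamic proxy feature the abstract refers to is java.lang.reflect.Proxy: an object implementing a given interface is manufactured at runtime, and every call on it is routed through an InvocationHandler. The demo below only intercepts and forwards locally; a middleware like Babylon can put remote dispatch inside such a handler, but that part is not shown here.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Minimal dynamic-proxy demo: the proxy implements Worker at runtime and
// funnels every call through the handler. The forwarding here is local and
// purely illustrative; remote dispatch would replace the method.invoke call.
interface Worker { int compute(int x); }

class ProxyDemo {
    public static void main(String[] args) {
        Worker real = x -> x * x;
        InvocationHandler h = (proxy, method, methodArgs) -> {
            System.out.println("intercepted: " + method.getName());
            return method.invoke(real, methodArgs);
        };
        Worker proxied = (Worker) Proxy.newProxyInstance(
                Worker.class.getClassLoader(), new Class<?>[] { Worker.class }, h);
        System.out.println(proxied.compute(7));   // "intercepted: compute", then 49
    }
}
```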


    Design and analysis of a scalable algorithm to monitor chord-based p2p systems at runtime

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2008
    Andreas Binzenhöfer
    Abstract Peer-to-peer (p2p) systems are a highly decentralized, fault-tolerant, and cost-effective alternative to the classic client-server architecture. Yet companies hesitate to use p2p algorithms to build new applications. Due to the decentralized nature of such a p2p system, the carrier does not know anything about the current size, performance, and stability of its application. In this paper, we present an entirely distributed and scalable algorithm to monitor a running p2p network. The snapshot of the system enables a telecommunication carrier to gather information about the current performance parameters of the running system as well as to react to discovered errors. Copyright © 2007 John Wiley & Sons, Ltd. [source]
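
    A hedged sketch of the snapshot idea (not the paper's algorithm): a token travels once around the ring via successor pointers, each peer adding its local measurements, so the initiator obtains a global view without any central component. It assumes the ring stays stable for the duration of the walk.

```java
// Illustrative ring-walk snapshot for a Chord-like overlay: aggregate local
// statistics along successor pointers until the token returns to the initiator.
class RingSnapshot {
    record Stats(int peers, long storedKeys) {
        Stats add(long keys) { return new Stats(peers + 1, storedKeys + keys); }
    }

    interface Peer { Peer successor(); long localKeys(); }

    static Stats snapshot(Peer initiator) {
        Stats s = new Stats(1, initiator.localKeys());
        for (Peer p = initiator.successor(); p != initiator; p = p.successor())
            s = s.add(p.localKeys());   // each peer contributes its measurements
        return s;                        // global view at the initiator
    }
}
```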


    OpenUH: an optimizing, portable OpenMP compiler

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2007
    Chunhua Liao
    Abstract OpenMP has gained wide popularity as an API for parallel programming on shared memory and distributed shared memory platforms. Despite its broad availability, there remains a need for a portable, robust, open source, optimizing OpenMP compiler for C/C++/Fortran 90, especially for teaching and research, for example, into its use on new target architectures, such as SMPs with chip multi-threading, as well as for learning how to translate it for clusters of SMPs. In this paper, we present our efforts to design and implement such an OpenMP compiler on top of Open64, an open source compiler framework, by extending its existing analysis and optimization and adopting a source-to-source translator approach where a native back end is not available. The compilation strategy we have adopted and the corresponding runtime support are described. The OpenMP validation suite is used to determine the correctness of the translation. The compiler's behavior is evaluated using benchmark tests from the EPCC microbenchmarks and the NAS parallel benchmark. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    JaMP: an implementation of OpenMP for a Java DSM

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2007
    Michael Klemm
    Abstract Although OpenMP is a widely agreed-upon standard for the C/C++ and Fortran programming languages for the semi-automatic parallelization of programs for shared memory machines, not much has been done on the binding of OpenMP to Java that targets clusters with distributed memory. This paper presents three major contributions: (1) JaMP is an adaptation of the OpenMP standard to Java that implements a large subset of the OpenMP specification with an expressiveness comparable to that of OpenMP; (2) we suggest a set of extensions that allow a better integration of OpenMP into the Java language; (3) we present our prototype implementation of JaMP in the research compiler Jackal, a software-based distributed shared memory implementation for Java. We evaluate the performance of JaMP with a set of micro-benchmarks and with OpenMP versions of the parallel Java Grande Forum (JGF) benchmarks. The micro-benchmarks show that OpenMP for Java can be implemented without much overhead. The JGF benchmarks achieve a good speed-up of 5-8 on eight nodes. Copyright © 2007 John Wiley & Sons, Ltd. [source]
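
    At its simplest, an OpenMP-style parallel loop lowers to splitting the iteration space among threads. The sketch below uses a static cyclic schedule on plain Java threads; JaMP's actual translation and its interaction with the underlying DSM are far more involved.

```java
// Sketch of what a "parallel for" can lower to on the JVM (illustrative, not
// JaMP's generated code): the iteration space is split statically and each
// chunk runs on its own thread.
class ParallelFor {
    interface Body { void run(int i); }

    static void parallelFor(int n, int nThreads, Body body) throws InterruptedException {
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int tid = t;
            workers[t] = new Thread(() -> {
                for (int i = tid; i < n; i += nThreads)   // static cyclic schedule
                    body.run(i);
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();                // implicit barrier at loop end
    }
}
```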


    Distributed end-host multicast algorithms for the Knowledge Grid

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2007
    Wanqing Tu
    Abstract The Knowledge Grid built on top of the peer-to-peer (P2P) network has been studied to implement scalable, available and semantic-based querying. In order to improve the efficiency and scalability of querying, this paper studies the problem of multicasting queries in the Knowledge Grid. An m-dimensional irregular mesh is a popular overlay topology of P2P networks. We present a set of novel distributed algorithms on top of an m-dimensional irregular mesh overlay for end-host multicast services with short delay and low network resource consumption. Our end-host multicast fully utilizes the advantages of an m-dimensional mesh to construct a two-layer architecture. Compared to previous approaches, the novelty and contribution here are: (1) cluster formation that partitions the group members into clusters in the lower layer, where each cluster consists of a small number of members; (2) cluster core selection that searches for a core with the minimum sum of overlay hops to all other cluster members of each cluster; (3) weighted shortest path tree construction that guarantees the minimum number of shortest paths to be occupied by the multicast traffic; (4) distributed multicast routing that directs the multicast messages to be efficiently distributed along the two-layer multicast architecture in parallel, without global control; the routing scheme enables the packets to be transmitted to remote end hosts within short delays through some common shortest paths; and (5) multicast path maintenance that restores normal communication when membership changes occur. Simulation results show that our end-host multicast achieves, in a distributed fashion, shorter delays and lower network resource consumption than some well-known end-host multicast systems. Copyright © 2006 John Wiley & Sons, Ltd. [source]
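
    Step (2), cluster core selection, reduces to a small optimization that is easy to state in code: pick the member whose total overlay-hop distance to all other members is minimal. In the sketch, hops[i][j] is assumed to be precomputed from the mesh overlay; everything else is illustrative.

```java
// Hedged sketch of cluster core selection: the core is the member with the
// minimum sum of overlay hops to all other cluster members. hops[i][j] is the
// (assumed precomputed) overlay hop count between members i and j.
class CoreSelection {
    static int selectCore(int[][] hops) {
        int best = -1;
        long bestSum = Long.MAX_VALUE;
        for (int i = 0; i < hops.length; i++) {
            long sum = 0;
            for (int j = 0; j < hops[i].length; j++) sum += hops[i][j];
            if (sum < bestSum) { bestSum = sum; best = i; }
        }
        return best;   // index of the selected cluster core
    }
}
```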


    Job completion prediction using case-based reasoning for Grid computing environments

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2007
    Lilian Noronha Nassif
    Abstract One of the main focuses of Grid computing is solving resource-sharing problems in multi-institutional virtual organizations. In such heterogeneous and distributed environments, selecting the best resource to run a job is a complex task. The solutions currently employed still present numerous challenges, and one of them is how to let users know when a job will finish. Consequently, advance reservation remains unavailable. This article presents a new approach that makes predictions of job execution time in Grids by applying the case-based reasoning paradigm. The work includes the development of a new case retrieval algorithm involving relevance sequence and similarity degree calculations. The prediction model is part of a multi-agent system that selects the best resource of a computational Grid to run a job. Agents representing candidate resources for job execution make predictions in a distributed and parallel manner. The technique presented here can be used in Grid environments at operation time to assist users with batch job submissions. Experimental results validate the prediction accuracy of the proposed mechanisms and the performance of our case retrieval algorithm. Copyright © 2006 John Wiley & Sons, Ltd. [source]
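
    At its core, case-based prediction retrieves the most similar past job and reuses its observed runtime. The sketch below uses a weighted squared distance over job/resource features; the paper's retrieval algorithm additionally computes relevance sequences, which are not modeled here.

```java
import java.util.*;

// Illustrative nearest-case prediction (not the paper's retrieval algorithm):
// predict a job's execution time as the runtime of the most similar past case
// under a weighted distance over normalized job/resource features.
class CbrPredictor {
    record Case(double[] features, double runtimeSeconds) {}

    static double predict(double[] query, List<Case> caseBase, double[] weights) {
        Case best = null;
        double bestDist = Double.MAX_VALUE;
        for (Case c : caseBase) {
            double d = 0;
            for (int i = 0; i < query.length; i++) {
                double diff = query[i] - c.features()[i];
                d += weights[i] * diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best.runtimeSeconds();   // assumes a non-empty case base
    }
}
```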


    An efficient concurrent implementation of a neural network algorithm

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2006
    R. Andonie
    Abstract The focus of this study is how we can efficiently implement the neural network backpropagation algorithm on a network of computers (NOC) for concurrent execution. We assume a distributed system with heterogeneous computers and that the neural network is replicated on each computer. We propose an architecture model with efficient pattern allocation that takes into account the speed of processors and overlaps the communication with computation. The training pattern set is distributed among the heterogeneous processors with the mapping being fixed during the learning process. We provide a heuristic pattern allocation algorithm minimizing the execution time of backpropagation learning. The computations are overlapped with communications. Under the condition that each processor has to perform a task directly proportional to its speed, this allocation algorithm has polynomial-time complexity. We have implemented our model on a dedicated network of heterogeneous computers using Sejnowski's NetTalk benchmark for testing. Copyright © 2005 John Wiley & Sons, Ltd. [source]
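
    The allocation constraint described above (each processor performs work directly proportional to its speed) can be sketched as follows; rounding remainders are handed out one pattern at a time. This is the proportionality rule only, not the paper's full heuristic, which also overlaps communication with computation.

```java
// Hedged sketch of speed-proportional pattern allocation: processor p receives
// roughly nPatterns * speeds[p] / sum(speeds) training patterns, so all
// processors finish an epoch at about the same time.
class PatternAllocation {
    static int[] allocate(int nPatterns, double[] speeds) {
        double total = 0;
        for (double s : speeds) total += s;
        int[] counts = new int[speeds.length];
        int assigned = 0;
        for (int p = 0; p < speeds.length; p++) {
            counts[p] = (int) Math.floor(nPatterns * speeds[p] / total);
            assigned += counts[p];
        }
        for (int p = 0; assigned < nPatterns; p = (p + 1) % counts.length) {
            counts[p]++;                 // distribute rounding remainders
            assigned++;
        }
        return counts;
    }
}
```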


    An EasyGrid portal for scheduling system-aware applications on computational Grids

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2006
    C. Boeres
    Abstract One of the objectives of computational Grids is to offer applications the collective computational power of distributed but typically shared heterogeneous resources. Unfortunately, efficiently harnessing the performance potential of such systems (i.e. how and where applications should execute on the Grid) is a challenging endeavor due principally to the very distributed, shared and heterogeneous nature of the resources involved. A crucial step towards solving this problem is the need to identify both an appropriate scheduling model and scheduling algorithm(s). This paper presents a tool to aid the design and evaluation of scheduling policies suitable for efficient execution of system-aware parallel applications on computational Grids. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    On coordination and its significance to distributed and multi-agent systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2006
    Sascha Ossowski
    Abstract Coordination is one of those words: it appears in most science and social fields, in politics and warfare, and it is even the subject of sports talk. While the usage of the word may convey different ideas to different people, the definition of coordination in all fields is quite similar: it relates to the control, planning, and execution of activities that are performed by distributed (perhaps independent) actors. Computer scientists involved in the field of distributed systems and agents focus on the distribution aspect of this concept. They see coordination as a separate field from all the others, one that complements standard fields such as those mentioned above. This paper focuses on explaining the term coordination in relation to distributed and multi-agent systems. Several approaches to coordination are described and put in perspective. The paper finishes with a look at what we call emergent coordination and its potential for efficiently handling coordination in open environments. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    GridBLAST: a Globus-based high-throughput implementation of BLAST in a Grid computing framework

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2005
    Arun Krishnan
    Abstract Improvements in the performance of processors and networks have made it feasible to treat collections of workstations, servers, clusters and supercomputers as integrated computing resources or Grids. However, the very heterogeneity that is the strength of computational and data Grids can also make application development for such an environment extremely difficult. Application development in a Grid computing environment faces significant challenges in the form of problem granularity, latency and bandwidth issues as well as job scheduling. Currently existing Grid technologies limit the development of Grid applications to certain classes, namely, embarrassingly parallel, hierarchical parallelism, work flow and database applications. Of all these classes, embarrassingly parallel applications are the easiest to develop in a Grid computing framework. The work presented here deals with creating a Grid-enabled, high-throughput, standalone version of a bioinformatics application, BLAST, using Globus as the Grid middleware. BLAST is a sequence alignment and search technique that is embarrassingly parallel in nature and thus amenable to adaptation to a Grid environment. A detailed methodology for creating the Grid-enabled application is presented, which can be used as a template for the development of similar applications. The application has been tested on a 'mini-Grid' testbed and the results presented here show that for large problem sizes, a distributed, Grid-enabled version can help in significantly reducing execution times. Copyright © 2005 John Wiley & Sons, Ltd. [source]
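
    The embarrassingly parallel structure comes from splitting the query set: each chunk becomes an independent BLAST job whose reports are simply concatenated afterwards. The sketch below shows only the split; job submission via Globus is outside its scope, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative query splitting for an embarrassingly parallel BLAST run:
// each chunk of query sequences becomes one independent grid job.
class QuerySplit {
    static List<List<String>> split(List<String> querySequences, int nJobs) {
        List<List<String>> chunks = new ArrayList<>();
        int chunkSize = (querySequences.size() + nJobs - 1) / nJobs;
        for (int i = 0; i < querySequences.size(); i += chunkSize)
            chunks.add(querySequences.subList(
                    i, Math.min(querySequences.size(), i + chunkSize)));
        return chunks;   // note: subList returns views of the original list
    }
}
```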


    Distributed computing with Triana on the Grid

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2005
    Ian Taylor
    Abstract In this paper, we describe Triana, a distributed problem-solving environment that makes use of the Grid to enable a user to compose applications from a set of components, select resources on which the composed application can be distributed and then execute the application on those resources. We describe Triana's current pluggable architecture that can support many different modes of operation by the use of flexible writers for many popular Web service choreography languages. We further show that the Triana architecture is middleware-independent through the use of the Grid Application Toolkit (GAT) API and demonstrate this through the use of a GAT binding to JXTA. We describe how other bindings being developed to Grid infrastructures, such as OGSA, can seamlessly be integrated within the current prototype by using the switching capability of the GAT. Finally, we outline an experiment we conducted using this prototype and discuss its current status. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Towards enabling peer-to-peer Grids

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7-8 2005
    Geoffrey Fox
    Abstract In this paper we propose a peer-to-peer (P2P) Grid comprising resources such as relatively static clients, high-end resources and a dynamic collection of multiple P2P subsystems. We investigate the architecture of the messaging and event service that will support such a hybrid environment. We designed a distributed publish-subscribe system, NaradaBrokering, for XML-specified messages. NaradaBrokering provides support for centralized, distributed and P2P (via JXTA) interactions. Here we investigate and present our strategy for the integration of JXTA into NaradaBrokering. The resultant system naturally scales with multiple Peer Groups linked by NaradaBrokering. Copyright © 2005 John Wiley & Sons, Ltd. [source]
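
    The interaction pattern NaradaBrokering generalizes is topic-based publish-subscribe. A minimal single-JVM sketch follows; the real system adds broker networks, routing and richer matching, and nothing here is its API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal topic-based publish-subscribe sketch: subscribers register handlers
// per topic; publish delivers a message to every handler for that topic.
class MiniBroker {
    private final Map<String, List<Consumer<String>>> subs = new ConcurrentHashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subs.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    void publish(String topic, String message) {
        for (Consumer<String> h : subs.getOrDefault(topic, List.of()))
            h.accept(message);   // deliver to every subscriber of the topic
    }
}
```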


    Simulation of resource synchronization in a dynamic real-time distributed computing environment

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2004
    Chen Zhang
    Abstract Today, more and more distributed computer applications are being modeled and constructed using real-time principles and concepts. In 1989, the Object Management Group (OMG) formed a Real-Time Special Interest Group (RT SIG) with the goal of extending the Common Object Request Broker Architecture (CORBA) standard to include real-time specifications. This group's most recent efforts have focused on the requirements of dynamic distributed real-time systems. One open problem in this area is resource access synchronization for tasks employing dynamic priority scheduling. This paper presents two resource synchronization protocols that the authors have developed to meet the requirements of dynamic distributed real-time systems as specified by Dynamic Scheduling Real-Time CORBA (DSRT CORBA). The proposed protocols can be applied to both Earliest Deadline First (EDF) and Least Laxity First (LLF) dynamic scheduling algorithms, allow distributed nested critical sections, and avoid unnecessary runtime overhead. In order to evaluate the performance of the proposed protocols, we analyzed each protocol's schedulability. Since the schedulability of the system is affected by numerous system configuration parameters, we designed simulation experiments to isolate and illustrate the impact of each individual system parameter. The simulation experiments show that the proposed protocols perform better than a scheme that utilizes dynamic priority ceiling updates. Copyright © 2004 John Wiley & Sons, Ltd. [source]
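
    Under EDF, the dynamic priority of a task is its absolute deadline, so the dispatcher is essentially a priority queue ordered by deadline. The sketch below shows only that much; the paper's synchronization protocols sit on top of such a scheduler and are not modeled here.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Illustrative EDF dispatcher: the ready task with the earliest absolute
// deadline runs first, so priorities change at runtime (unlike static ones).
class EdfQueue {
    record Task(String name, long absoluteDeadlineMillis) {}

    private final PriorityQueue<Task> ready =
            new PriorityQueue<>(Comparator.comparingLong(Task::absoluteDeadlineMillis));

    void release(Task t) { ready.add(t); }     // task becomes ready
    Task dispatch()      { return ready.poll(); } // earliest deadline first
}
```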


    Impact of mixed-parallelism on parallel implementations of the Strassen and Winograd matrix multiplication algorithms

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8 2004
    F. Desprez
    Abstract In this paper we study the impact of the simultaneous exploitation of data- and task-parallelism, so-called mixed-parallelism, on the Strassen and Winograd matrix multiplication algorithms. This work takes place in the context of Grid computing and, in particular, in the Client-Agent(s)-Server(s) model, where data can already be distributed on the platform. For each of those algorithms, we propose two mixed-parallel implementations. The former follows the phases of the original algorithms while the latter has been designed as the result of a list scheduling algorithm. We give a theoretical comparison, in terms of memory usage and execution time, between our algorithms and classical data-parallel implementations. This analysis is corroborated by experiments. Finally, we give some hints about heterogeneous and recursive versions of our algorithms. Copyright © 2004 John Wiley & Sons, Ltd. [source]
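
    For reference, Strassen's one-level recursion on 2x2 block matrices computes seven block products instead of eight; it is this task structure (seven independent products, each itself data-parallel) that makes mixed parallelism attractive:

```latex
\begin{aligned}
M_1 &= (A_{11}+A_{22})(B_{11}+B_{22}) & M_2 &= (A_{21}+A_{22})B_{11}\\
M_3 &= A_{11}(B_{12}-B_{22})          & M_4 &= A_{22}(B_{21}-B_{11})\\
M_5 &= (A_{11}+A_{12})B_{22}          & M_6 &= (A_{21}-A_{11})(B_{11}+B_{12})\\
M_7 &= (A_{12}-A_{22})(B_{21}+B_{22}) & &\\[2pt]
C_{11} &= M_1+M_4-M_5+M_7 & C_{12} &= M_3+M_5\\
C_{21} &= M_2+M_4         & C_{22} &= M_1-M_2+M_3+M_6
\end{aligned}
```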


    OpenMP-oriented applications for distributed shared memory architectures

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2004
    Ami Marowka
    Abstract The rapid rise of OpenMP as the preferred parallel programming paradigm for small-to-medium scale parallelism could slow unless OpenMP can show capabilities for becoming the model of choice for large-scale high-performance parallel computing in the coming decade. The main stumbling block for the adaptation of OpenMP to distributed shared memory (DSM) machines, which are based on architectures like cc-NUMA, stems from the lack of capabilities for data placement among processors and threads for achieving data locality. The absence of such a mechanism causes remote memory accesses and inefficient cache memory use, both of which lead to poor performance. This paper presents a simple software programming approach called copy-inside-copy-back (CC) that exploits the data privatization mechanism of OpenMP for data placement and replacement. This technique enables one to distribute data manually without taking away control and flexibility from the programmer, and is thus an alternative to the automatic and implicit approaches. Moreover, the CC approach improves on the OpenMP-SPMD style of programming, making the development process of an OpenMP application more structured and simpler. The CC technique was tested and analyzed using the NAS Parallel Benchmarks on SGI Origin 2000 multiprocessor machines. This study shows that OpenMP improves performance of coarse-grained parallelism, although a fast copy mechanism is essential. Copyright © 2004 John Wiley & Sons, Ltd. [source]
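
    A hedged sketch of the CC idea, with Java threads standing in for OpenMP threads: each thread privatizes the slice of the shared array it works on (copy inside), computes on the private copy, and writes the result back (copy back). The computation shown is a placeholder.

```java
import java.util.Arrays;

// Illustrative copy-inside-copy-back: privatize a slice, compute locally so
// accesses hit memory local to the executing processor, then copy back.
class CopyInsideCopyBack {
    static void process(double[] shared, int nThreads) throws InterruptedException {
        Thread[] ts = new Thread[nThreads];
        int chunk = (shared.length + nThreads - 1) / nThreads;
        for (int t = 0; t < nThreads; t++) {
            int lo = t * chunk, hi = Math.min(shared.length, lo + chunk);
            ts[t] = new Thread(() -> {
                double[] local = Arrays.copyOfRange(shared, lo, hi);   // copy inside
                for (int i = 0; i < local.length; i++) local[i] *= 2.0; // placeholder work
                System.arraycopy(local, 0, shared, lo, local.length);   // copy back
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
    }
}
```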


    Managing distributed shared arrays in a bulk-synchronous parallel programming environment

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2-3 2004
    Christoph W. Kessler
    Abstract NestStep is a parallel programming language for the BSP (bulk-synchronous parallel) programming model. In this article we describe the concept of distributed shared arrays in NestStep and its implementation on top of MPI. In particular, we present a novel method for runtime scheduling of irregular, direct remote accesses to sections of distributed shared arrays. Our method, which is fully parallelized, uses conventional two-sided message passing and thus avoids the overhead of a standard implementation of direct remote memory access based on one-sided communication. The main prerequisite is that the given program is structured in a BSP-compliant way. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Features of the Java Commodity Grid Kit

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13-15 2002
    Gregor von Laszewski
    Abstract In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Lesser Bear: A lightweight process library for SMP computers - scheduling mechanism without a lock operation

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2002
    Hisashi Oguma
    Abstract We have designed and implemented a lightweight process (thread) library called 'Lesser Bear' for SMP computers. Lesser Bear provides thread-level parallelism and high portability. It executes threads in parallel by creating UNIX processes as virtual processors and a memory-mapped file as a huge shared-memory space. To schedule threads in parallel, the shared-memory space has been divided into working spaces for each virtual processor, and a ready queue has been distributed. However, the previous version of Lesser Bear sometimes requires a lock operation for dequeueing. We therefore propose a scheduling mechanism that does not require a lock operation. To achieve this, each divided space forms a link topology through the queues, and we use a lock-free algorithm for the queue operations. This mechanism has been applied to Lesser Bear and evaluated experimentally. Copyright © 2002 John Wiley & Sons, Ltd. [source]
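
    Why a link topology can eliminate locks: if each queue has exactly one enqueuing and one dequeuing virtual processor, a single-producer/single-consumer ring buffer suffices, and it needs only ordered reads and writes of two indices. The sketch below is such a queue in Java; it is illustrative only, not the library's actual data structure.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative single-producer/single-consumer lock-free ring buffer: offer()
// is called by one producer only, poll() by one consumer only. The volatile
// semantics of AtomicInteger order the slot write before the index publish.
class SpscQueue<T> {
    private final Object[] buf;
    private final AtomicInteger head = new AtomicInteger(0); // consumer index
    private final AtomicInteger tail = new AtomicInteger(0); // producer index

    SpscQueue(int capacity) { buf = new Object[capacity + 1]; } // one slot wasted

    boolean offer(T item) {                        // producer only
        int t = tail.get(), next = (t + 1) % buf.length;
        if (next == head.get()) return false;      // full
        buf[t] = item;
        tail.set(next);                             // publish after the write
        return true;
    }

    @SuppressWarnings("unchecked")
    T poll() {                                      // consumer only
        int h = head.get();
        if (h == tail.get()) return null;           // empty
        T item = (T) buf[h];
        buf[h] = null;
        head.set((h + 1) % buf.length);
        return item;
    }
}
```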


    A flexible framework for consistency management

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2002
    S. Weber
    Abstract Recent distributed shared memory (DSM) systems provide increasingly more support for the sharing of objects rather than portions of memory. However, like earlier DSM systems, these distributed shared object (DSO) systems still force developers to use a single protocol, or a small set of given protocols, for the sharing of application objects. This limitation prevents applications from optimizing their communication behaviour and results in unnecessary overhead. A current general trend in software systems development is towards customizable systems; for example, frameworks, reflection, and aspect-oriented programming all aim to give the developer greater flexibility and control over the functionality and performance of their code. This paper describes a novel object-oriented framework that defines a DSM system in terms of a consistency model and an underlying coherency protocol. Different consistency models and coherency protocols can be used within a single application because they can be customized, by the application programmer, on a per-object basis. This allows application-specific semantics to be exploited at a very fine level of granularity, with a resulting improvement in performance. The framework is implemented in Java, and the speed-up obtained by a number of applications that use the framework is reported. Copyright © 2002 John Wiley & Sons, Ltd. [source]
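
    The framework's central idea can be sketched as a strategy interface chosen per object, so two shared objects in one application can follow different consistency protocols. All names below are hypothetical, not the paper's API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of per-object protocol customization (hypothetical names):
// the coherency protocol is a strategy attached to each shared object, so
// different objects can use different consistency models in one application.
interface CoherencyProtocol {
    void onRead(String objectId);    // e.g., fetch or validate a cached copy
    void onWrite(String objectId);   // e.g., invalidate or update other replicas
}

class SharedObjectRegistry {
    private final Map<String, CoherencyProtocol> protocols = new ConcurrentHashMap<>();

    void share(String objectId, CoherencyProtocol p) { protocols.put(objectId, p); }

    void read(String objectId)  { protocols.get(objectId).onRead(objectId); }
    void write(String objectId) { protocols.get(objectId).onWrite(objectId); }
}
```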