Computing

Kinds of Computing

  • distributed computing
  • parallel computing
  • quantum computing
  • scientific computing
  • ubiquitous computing

Terms modified by Computing

  • computing application
  • computing environment
  • computing framework
  • computing paradigm
  • computing platform
  • computing power
  • computing resource
  • computing system
  • computing techniques
  • computing technology
  • computing time

Selected Abstracts


    A comparative study of student performance in traditional mode and online mode of learning

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 1 2007
    Qiping Shen
    Abstract There has been interest for many decades in comparing the effectiveness of technology-delivered instruction with traditional face-to-face teaching, and measurable student outcomes have been an important indicator. Having pointed to salient aspects of the current academic environment and to some of the key literature in this area, this article analyses the performance of two groups of students studying in the traditional mode and the online mode in a master's program delivered by a Department of Computing at a university in Hong Kong. More than 2,000 students participated in the study between 2000 and 2004. This article includes a comparison of the results between different delivery modes of study each year as well as between different classes over the 4-year period. Although traditional mode students achieved slightly better examination performance than online mode students, the article concludes that there are no significant differences in overall performance between the students. With the impact of technologies on higher education and the demands of a complex and rapidly changing society in the 21st century, this Hong Kong study contributes to the literature that finds mode of study is not a key determinant of success. © 2007 Wiley Periodicals, Inc. Comput Appl Eng Educ 15: 30–40, 2007; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20092 [source]
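
    The study's headline finding is a non-significant difference in overall performance between modes. As a hedged illustration only (the score data below are synthetic placeholders, not the study's), a two-sample comparison of exam results might be carried out as follows.

```python
# Hypothetical two-sample comparison of exam scores for two delivery modes.
# The data are synthetic placeholders; only the testing pattern is illustrated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
traditional = rng.normal(loc=68.0, scale=10.0, size=200)   # exam scores (%)
online = rng.normal(loc=66.5, scale=10.0, size=200)

t_stat, p_value = stats.ttest_ind(traditional, online, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference between modes at the 5% level.")
```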


    A reference model for grid architectures and its validation

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2010
    Wil van der Aalst
    Abstract Computing and data-intensive applications in physics, medicine, biology, graphics, and business intelligence require large and distributed infrastructures to address the challenges of the present and the future. For example, process mining applications are faced with terabytes of event data and computationally expensive algorithms. Computer grids are increasingly being used to deal with such challenges. However, grid computing is often approached in an ad hoc and engineering-like manner. Despite the availability of many software packages for grid applications, a good conceptual model of the grid is missing. This paper provides a formal description of the grid in terms of a colored Petri net (CPN). This CPN can be seen as a reference model for grids as it clarifies the basic concepts at the conceptual level. Moreover, the CPN allows for various kinds of analyses ranging from verification to performance analysis. We validate our model based on real-life experiments using a testbed grid architecture available in our group and we show how the model can be used for the estimation of throughput times for scientific workflows. Copyright © 2009 John Wiley & Sons, Ltd. [source]
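
    To make the Petri-net idea concrete, here is a minimal token-game sketch in which a transition binds a job token to a free-node token. It only illustrates coloured-Petri-net-style modelling under assumed place and token names; it is not the paper's reference model.

```python
# Toy coloured-Petri-net-style token game: places hold coloured tokens and a
# transition fires while its input places can supply tokens. Illustrative only.
from collections import defaultdict

places = defaultdict(list)                 # place name -> list of coloured tokens
places["job_queue"] = [{"job": "wf-1", "cpu_s": 120}, {"job": "wf-2", "cpu_s": 45}]
places["free_nodes"] = [{"node": "n01"}, {"node": "n02"}]

def fire_schedule():
    """Transition: consume one job token and one node token, produce a 'running' token."""
    if places["job_queue"] and places["free_nodes"]:
        job = places["job_queue"].pop(0)
        node = places["free_nodes"].pop(0)
        places["running"].append({**job, **node})
        return True
    return False

while fire_schedule():                      # fire until no binding is enabled
    pass
print(places["running"])
```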


    Special Issue: Grid and Cooperative Computing (GCC2004)

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2006
    Hai Jin
    First page of article [source]


    Estimating the number of alcohol-attributable deaths: methodological issues and illustration with French data for 2006

    ADDICTION, Issue 6 2010
    Grégoire Rey
    ABSTRACT Aims Computing the number of alcohol-attributable deaths requires a series of hypotheses. Using French data for 2006, the potential biases are reviewed and the sensitivity of estimates to various hypotheses evaluated. Methods Self-reported alcohol consumption data were derived from large population-based surveys. The risks of occurrence of diseases associated with alcohol consumption and relative risks for all-cause mortality were obtained through literature searches. All-cause and cause-specific population alcohol-attributable fractions (PAAFs) were calculated. In order to account for potential under-reporting, the impact of adjustment on sales data was tested. The 2006 mortality data were restricted to people aged between 15 and 75 years. Results When alcohol consumption distribution was adjusted for sales data, the estimated number of alcohol-attributable deaths, the sum of the cause-specific estimates, was 20 255. Without adjustment, the estimate fell to 7158. Using an all-cause mortality approach, the adjusted number of alcohol-attributable deaths was 15 950, while the non-adjusted estimate was a negative number. Other methodological issues, such as computation based on risk estimates for all causes for 'all countries' or only 'European countries', also influenced the results, but to a lesser extent. Discussion The estimates of the number of alcohol-attributable deaths varied greatly, depending upon the hypothesis used. The most realistic and evidence-based estimate seems to be obtained by adjusting the consumption data for national alcohol sales, and by summing the cause-specific estimates. However, interpretation of the estimates must be cautious in view of their potentially large imprecision. [source]
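
    The cause-specific calculation described here rests on population attributable fractions. A minimal sketch of the generic Levin-type formula is given below; the prevalences, relative risks and death count are hypothetical placeholders, not the French 2006 inputs.

```python
# Generic population alcohol-attributable fraction (PAAF) via Levin's formula:
# PAAF = sum(p_i * (RR_i - 1)) / (sum(p_i * (RR_i - 1)) + 1).
# All numbers below are illustrative, not the paper's data.
def paaf(prevalence, relative_risk):
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalence, relative_risk))
    return excess / (excess + 1.0)

p = [0.30, 0.15, 0.05]        # hypothetical prevalence of three drinking levels
rr = [1.2, 1.8, 3.5]          # hypothetical relative risks for one cause of death
fraction = paaf(p, rr)
deaths_for_cause = 10_000     # hypothetical cause-specific death count
print(f"PAAF = {fraction:.3f}, attributable deaths ~ {fraction * deaths_for_cause:.0f}")
```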


    Modeling with Data: Tools and Techniques for Scientific Computing by Ben Klemens

    INTERNATIONAL STATISTICAL REVIEW, Issue 1 2009
    Antony Unwin
    No abstract is available for this article. [source]


    Body Computing: How Networked Medical Devices Can Solve Problems Facing Health Care Today

    JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 12 2007
    LESLIE A. SAXON M.D.
    No abstract is available for this article. [source]


    Collaboration Online: The Example of Distributed Computing

    JOURNAL OF COMPUTER-MEDIATED COMMUNICATION, Issue 4 2005
    Anne Holohan
    Distributed Computing is a new form of online collaboration; such projects divide a large computational problem into small tasks that are sent out over the Internet to be completed on personal computers. Millions of people all over the world participate voluntarily in such projects, providing computing resources that would otherwise cost millions of dollars. However, Distributed Computing only works if many people participate. The technical challenge is to slice a problem into thousands of tiny pieces that can be solved independently, and then to reassemble the solutions. The social problem is how to find all those widely dispersed computers and persuade their owners to participate. This article examines what makes a collaborative Distributed Computing project successful. We report on data from a quantitative survey and a qualitative study of participants on several online forums, and discuss and analyze Distributed Computing using Arquilla and Ronfeldt's (2001) five-level network organization framework. [source]
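
    The split/compute/reassemble pattern the article describes can be sketched in a few lines. The toy below runs work units on local processes rather than on volunteers' machines over the Internet, so it only illustrates the decomposition idea, not a real Distributed Computing project.

```python
# Toy split/compute/reassemble pattern: divide a problem into independent
# work units, solve them separately, then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def work_unit(chunk):
    """An independently solvable piece: here, a partial sum of squares."""
    return sum(x * x for x in chunk)

def split(data, n_chunks):
    size = max(1, len(data) // n_chunks)
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, 8)
    with ProcessPoolExecutor() as pool:          # stand-in for volunteer PCs
        partial_results = list(pool.map(work_unit, chunks))
    print("reassembled result:", sum(partial_results))
```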


    A nearly optimal preconditioner for the Navier–Stokes equations

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 4 2001
    Lina Hemmingsson-Frändén
    Abstract We present a preconditioner for the linearized Navier–Stokes equations which is based on the combination of a fast transform approximation of an advection–diffusion problem together with the recently introduced 'BFBTT' preconditioner of Elman (SIAM Journal of Scientific Computing, 1999; 20:1299–1316). The resulting preconditioner, when combined with an appropriate Krylov subspace iteration method, yields the solution in a number of iterations which appears to be independent of the Reynolds number provided a mesh Péclet number restriction holds, and depends only mildly on the mesh size. The preconditioner is particularly appropriate for problems involving a primary flow direction. Copyright © 2001 John Wiley & Sons, Ltd. [source]
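
    The general pattern of pairing a Krylov iteration with a preconditioner can be sketched as follows. The matrix is a simple one-dimensional convection-diffusion-like operator and the preconditioner an incomplete LU factorization, both stand-ins; this is not the paper's BFBTT construction or its fast-transform approximation.

```python
# Sketch of a preconditioned Krylov solve with SciPy's GMRES.
# The operator and the ILU preconditioner are illustrative stand-ins only.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 200
A = diags([-1.0, 2.2, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A)                                    # incomplete LU as preconditioner
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)                        # preconditioned Krylov iteration
print("gmres info:", info, "residual norm:", np.linalg.norm(A @ x - b))
```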


    Essentials of Biostatistics in Public Health & Essentials of Biostatistics Workbook: Statistical Computing Using Excel

    AUSTRALIAN AND NEW ZEALAND JOURNAL OF PUBLIC HEALTH, Issue 2 2009
    Article first published online: 7 APR 200
    No abstract is available for this article. [source]


    The policy and management of information technology in Jordanian schools

    BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY, Issue 2 2001
    Mohammad Tawalbeh
    During the past two decades the introduction of personal computers has been a major innovation in the Hashimite Kingdom of Jordan. This new technology continues to offer an exciting challenge to educationalists. This paper reviews the developments of Information Technology (IT) in Jordanian public schools in the period 1984–1998. My contention is that the highly centralised nature of the Jordanian educational system and the comprehensive policy of the Directorate of Educational Computing (DEC) have made the introduction of computers in schools less difficult. [source]


    A survey of mobile and wireless technologies for augmented reality systems

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2008
    George Papagiannakis
    Abstract Recent advances in hardware and software for mobile computing have enabled a new breed of mobile augmented reality (AR) systems and applications. A new breed of computing called 'augmented ubiquitous computing' has resulted from the convergence of wearable computing, wireless networking, and mobile AR interfaces. In this paper, we provide a survey of different mobile and wireless technologies and how they have impacted AR. Our goal is to place them into different categories so that it becomes easier to understand the state of the art and to help identify new directions of research. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Augmented reality agents for user interface adaptation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2008
    István Barakonyi
    Abstract Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Modeling human affective postures: an information theoretic characterization of posture features

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2004
    P. Ravindra De Silva
    One of the challenging issues in affective computing is to give a machine the ability to recognize the mood of a person. Efforts in that direction have mainly focused on facial and oral cues. Gestures have been recently considered as well, but with less success. Our aim is to fill this gap by identifying and measuring the saliency of posture features that play a role in affective expression. As a case study, we collected affective gestures from human subjects using a motion capture system. We first described these gestures with spatial features, as suggested in studies on dance. Through standard statistical techniques, we verified that there was a statistically significant correlation between the emotion intended by the acting subjects, and the emotion perceived by the observers. We used Discriminant Analysis to build affective posture predictive models and to measure the saliency of the proposed set of posture features in discriminating between 4 basic emotional states: angry, fear, happy, and sad. An information theoretic characterization of the models shows that the set of features discriminates well between emotions, and also that the models built outperform the human observers. Copyright © 2004 John Wiley & Sons, Ltd. [source]
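
    As a hedged sketch of the classification step (not the authors' models or data), discriminant analysis over posture features for four emotion classes might look like the following, with synthetic features standing in for the motion-capture measurements.

```python
# Illustrative discriminant analysis over synthetic "posture features" for
# four emotion classes; a stand-in for the study's motion-capture data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class, n_features = 40, 12                  # e.g. joint angles/distances
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(4)])
y = np.repeat(["angry", "fear", "happy", "sad"], n_per_class)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("cross-validated accuracy:", round(scores.mean(), 3))
```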


    Principles and Applications of Computer Graphics in Medicine

    COMPUTER GRAPHICS FORUM, Issue 1 2006
    F.P. Vidal
    Abstract The medical domain provides excellent opportunities for the application of computer graphics, visualization and virtual environments, with the potential to help improve healthcare and bring benefits to patients. This survey paper provides a comprehensive overview of the state-of-the-art in this exciting field. It has been written from the perspective of both computer scientists and practising clinicians and documents past and current successes together with the challenges that lie ahead. The article begins with a description of the software algorithms and techniques that allow visualization of and interaction with medical data. Example applications from research projects and commercially available products are listed, including educational tools; diagnostic aids; virtual endoscopy; planning aids; guidance aids; skills training; computer augmented reality and use of high performance computing. The final section of the paper summarizes the current issues and looks ahead to future developments. [source]


    A Polymorphic Dynamic Network Loading Model

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2008
    Nie Yu (Marco)
    The polymorphism, realized through a general node-link interface and proper discretization, offers several prominent advantages. First of all, PDNL allows road facilities in the same network to be represented by different traffic flow models based on the tradeoff of efficiency and realism and/or the characteristics of the targeted problem. Second, new macroscopic link/node models can be easily plugged into the framework and compared against existing ones. Third, PDNL decouples links and nodes in network loading, and thus opens the door to parallel computing. Finally, PDNL keeps track of individual vehicular quanta of arbitrary size, which makes it possible to replicate analytical loading results as closely as desired. PDNL, thus, offers an ideal platform for studying both analytical dynamic traffic assignment problems of different kinds and macroscopic traffic simulation. [source]
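
    The node-link polymorphism can be pictured as a common interface behind which different link models coexist in one loading loop. The two link models below are deliberately simplistic placeholders, not the macroscopic traffic flow models used in PDNL.

```python
# Sketch of a polymorphic link interface: different link models share one
# advance() contract, so a single loading loop can mix them. Placeholders only.
from abc import ABC, abstractmethod

class Link(ABC):
    @abstractmethod
    def advance(self, vehicles_in: float) -> float:
        """Accept vehicles entering during one step; return vehicles leaving."""

class PointQueueLink(Link):
    def __init__(self, max_out_per_step: float):
        self.max_out, self.queue = max_out_per_step, 0.0
    def advance(self, vehicles_in):
        self.queue += vehicles_in
        out = min(self.queue, self.max_out)
        self.queue -= out
        return out

class PassThroughLink(Link):
    def advance(self, vehicles_in):
        return vehicles_in                      # trivial placeholder model

network = [PointQueueLink(max_out_per_step=5.0), PassThroughLink()]
flow = 10.0
for link in network:                            # the same loop loads every model
    flow = link.advance(flow)
print("vehicles leaving the network this step:", flow)
```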


    A large-scale monitoring and measurement campaign for web services-based applications

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2010
    Riadh Ben Halima
    Abstract Web Services (WS) can be considered the most influential enabling technology for the next generation of web applications. WS-based application providers will face challenging features related to nonfunctional properties in general and to performance and QoS in particular. Moreover, WS-based developers have to provide solutions to extend such applications with self-healing (SH) mechanisms as required for autonomic computing to face the complexity of interactions and to improve availability. Such solutions should be applicable whether the components implementing SH mechanisms are deployed on the WS provider side, the requester side, or both, depending on the deployment constraints. Associating application-specific performance requirements and monitoring-specific constraints will lead to complex configurations where fine tuning is needed to provide SH solutions. To contribute to enhancing the design and the assessment of such solutions for WS technology, we designed and implemented a monitoring and measurement framework, which is part of a larger Self-Healing Architectures (SHA) developed during the European WS-DIAMOND project. We implemented the Conference Management System (CMS), a real WS-based complex application. We carried out a large-scale experimentation campaign by deploying CMS on top of SHA on the French grid Grid5000. We experienced the problem as if we were a service provider who has to tune reconfiguration strategies. Our results are available on the web in a structured database for external use by the WS community. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Design and implementation of a high-performance CCA event service,

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2009
    Ian Gorton
    Abstract Event services based on publish–subscribe architectures are well-established components of distributed computing applications. Recently, an event service has been proposed as part of the common component architecture (CCA) for high-performance computing (HPC) applications. In this paper we describe our implementation, experimental evaluation, and initial experience with a high-performance CCA event service that exploits efficient communications mechanisms commonly used on HPC platforms. We describe the CCA event service model and briefly discuss the possible implementation strategies of the model. We then present the design and implementation of the event service using the aggregate remote memory copy interface as an underlying communication layer for this mechanism. Two alternative implementations are presented and evaluated on a Cray XD-1 platform. The performance results demonstrate that event delivery latencies are low and that the event service is able to achieve high-throughput levels. Finally, we describe the use of the event service in an application for high-speed processing of data from a mass spectrometer and conclude by discussing some possible extensions to the event service for other HPC applications. Published in 2009 by John Wiley & Sons, Ltd. [source]
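
    The underlying publish-subscribe pattern can be shown with an in-process toy; the real CCA event service delivers events over HPC communication layers (aggregate remote memory copy), which this sketch does not attempt to model.

```python
# Minimal in-process publish-subscribe event service, to illustrate the
# pattern only; no HPC transport or remote memory copy is involved.
from collections import defaultdict
from typing import Any, Callable

class EventService:
    def __init__(self):
        self._subscribers = defaultdict(list)    # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, event: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(event)

svc = EventService()
svc.subscribe("spectra", lambda e: print("consumer received", e))
svc.publish("spectra", {"scan_id": 42, "peaks": 1318})
```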


    Parallel programming on a high-performance application-runtime

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2008
    Wojtek James Goscinski
    Abstract High-performance application development remains challenging, particularly for scientists making the transition to a heterogeneous grid environment. In general areas of computing, virtual environments such as Java and .Net have proved to be successful in fostering application development, allowing users to target and compile to a single environment, rather than a range of platforms, instruction sets and libraries. However, existing runtime environments are focused on business and desktop computing and they do not support the necessary high-performance computing (HPC) abstractions required by e-Scientists. Our work is focused on developing an application-runtime that can support these services natively. The result is a new approach to the development of an application-runtime for HPC: the Motor system has been developed by integrating a high-performance communication library directly within a virtual machine. The Motor message passing library is integrated alongside and in cooperation with other runtime libraries and services while retaining a strong message passing performance. As a result, the application developer is provided with a common environment for HPC application development. This environment supports both procedural languages, such as C, and modern object-oriented languages, such as C#. This paper describes the unique Motor architecture, presents its implementation and demonstrates its performance and use. Copyright © 2008 John Wiley & Sons, Ltd. [source]
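
    The message-passing style such a runtime exposes can be illustrated with standard MPI bindings; the snippet below uses mpi4py rather than Motor's own integrated library, whose API is not given in the abstract.

```python
# Plain MPI-style message passing with mpi4py (an illustrative substitute for
# Motor's integrated library). Run with: mpiexec -n 2 python script.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"task": "integrate", "n": 10_000}, dest=1, tag=7)
    result = comm.recv(source=1, tag=8)
    print("rank 0 received:", result)
elif rank == 1:
    task = comm.recv(source=0, tag=7)
    comm.send({"result": sum(range(task["n"]))}, dest=0, tag=8)
```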


    The Grid Resource Broker workflow engine

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2008
    M. Cafaro
    Abstract Increasingly, complex scientific applications are structured in terms of workflows. These applications are usually computationally and/or data intensive and thus are well suited for execution in grid environments. Distributed, geographically spread computing and storage resources are made available to scientists belonging to virtual organizations sharing resources across multiple administrative domains through established service-level agreements. Grids provide an unprecedented opportunity for distributed workflow execution; indeed, many applications are well beyond the capabilities of a single computer, and partitioning the overall computation across different components whose execution may benefit from runs on different architectures could provide better performance. In this paper we describe the design and implementation of the Grid Resource Broker (GRB) workflow engine. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    APEX-Map: a parameterized scalable memory access probe for high-performance computing systems,

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 17 2007
    Erich Strohmaier
    Abstract The memory wall between the peak performance of microprocessors and their memory performance has become the prominent performance bottleneck for many scientific application codes. New benchmarks measuring data access speeds locally and globally in a variety of different ways are needed to explore the ever-increasing diversity of architectures for high-performance computing. In this paper, we introduce a novel benchmark, APEX-Map, which focuses on global data movement and measures how fast global data can be fed into computational units. APEX-Map is a parameterized, synthetic performance probe and integrates concepts for temporal and spatial locality into its design. Our first parallel implementation in MPI and various results obtained with it are discussed in detail. By measuring the APEX-Map performance with parameter sweeps for a whole range of temporal and spatial localities, performance surfaces can be generated. These surfaces are ideally suited to study the characteristics of the computational platforms and are useful for performance comparison. Results on a global-memory vector platform and distributed-memory superscalar platforms clearly reflect the design differences between these different architectures. Published in 2007 by John Wiley & Sons, Ltd. [source]
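
    A rough feel for such a probe can be given with a toy access kernel in which one parameter sets the contiguous block length (spatial locality) and another the probability of reusing the current region (temporal locality). These parameter definitions are assumptions for illustration and do not reproduce APEX-Map's actual design.

```python
# Toy locality-parameterized memory access probe (not the APEX-Map code):
# `block` controls spatial locality, `reuse_prob` biases temporal reuse.
import random
import time
import numpy as np

def probe(n_words=1 << 22, n_accesses=1 << 20, block=64, reuse_prob=0.5):
    data = np.zeros(n_words)
    start, checksum = 0, 0.0
    t0 = time.perf_counter()
    for _ in range(n_accesses // block):
        if random.random() > reuse_prob:                 # jump to a new region
            start = random.randrange(0, n_words - block)
        checksum += data[start:start + block].sum()      # walk one contiguous block
    elapsed = time.perf_counter() - t0
    return (n_accesses / elapsed) / 1e6                  # million accesses per second

print(f"{probe(block=8, reuse_prob=0.1):8.1f} M accesses/s (low locality)")
print(f"{probe(block=1024, reuse_prob=0.9):8.1f} M accesses/s (high locality)")
```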


    Towards an autonomic approach for edge computing

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2007
    Mikael Desertot
    Abstract Nowadays, one of the biggest challenges for companies is to cope with the high cost of their information technologies infrastructure. Edge computing is a new computing paradigm designed to allocate on-demand computing and storage resources. Those resources are Web cache servers scattered over the ISP backbones. We argue that this paradigm could be applied for on-demand full application hosting, helping to reduce costs. In this paper, we present a J2EE (Java Enterprise Edition) dynamic server able to deploy/host J2EE applications on demand and its autonomic manager. For this, we reengineer and experiment with JOnAS, an open-source J2EE static server. Two management policies of the autonomic manager were stressed by a simulation of a worldwide ISP network. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Parallelization and scalability of a spectral element channel flow solver for incompressible Navier–Stokes equations

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2007
    C. W. Hamman
    Abstract Direct numerical simulation (DNS) of turbulent flows is widely recognized to demand fine spatial meshes, small timesteps, and very long runtimes to properly resolve the flow field. To overcome these limitations, most DNS is performed on supercomputing machines. With the rapid development of terascale (and, eventually, petascale) computing on thousands of processors, it has become imperative to consider the development of DNS algorithms and parallelization methods that are capable of fully exploiting these massively parallel machines. A highly parallelizable algorithm for the simulation of turbulent channel flow that allows for efficient scaling on several thousand processors is presented. A model that accurately predicts the performance of the algorithm is developed and compared with experimental data. The results demonstrate that the proposed numerical algorithm is capable of scaling well on petascale computing machines and thus will allow for the development and analysis of high Reynolds number channel flows. Copyright © 2007 John Wiley & Sons, Ltd. [source]
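
    A generic strong-scaling model of the kind referred to here splits runtime into a compute term that shrinks with the processor count and a communication term that grows slowly with it. The coefficients below are placeholders chosen for illustration, not the fitted values from the paper.

```python
# Generic strong-scaling estimate: T(p) = T_compute/p + communication(p).
# Coefficients are illustrative placeholders, not the paper's measured model.
import math

def predicted_runtime(p, t_compute=1.0e4, t_msg=5.0e-5, msgs_per_step=2, steps=1_000):
    compute = t_compute / p
    communication = steps * msgs_per_step * t_msg * math.log2(p) if p > 1 else 0.0
    return compute + communication

for p in (1, 64, 1024, 4096):
    t = predicted_runtime(p)
    print(f"p={p:5d}  runtime {t:9.2f} s  speedup {predicted_runtime(1) / t:8.1f}")
```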


    Job completion prediction using case-based reasoning for Grid computing environments

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2007
    Lilian Noronha Nassif
    Abstract One of the main focuses of Grid computing is solving resource-sharing problems in multi-institutional virtual organizations. In such heterogeneous and distributed environments, selecting the best resource to run a job is a complex task. The solutions currently employed still present numerous challenges and one of them is how to let users know when a job will finish. Consequently, advance reservation remains unavailable. This article presents a new approach, which makes predictions for job execution time in Grid by applying the case-based reasoning paradigm. The work includes the development of a new case retrieval algorithm involving relevance sequence and similarity degree calculations. The prediction model is part of a multi-agent system that selects the best resource of a computational Grid to run a job. Agents representing candidate resources for job execution make predictions in a distributed and parallel manner. The technique presented here can be used in Grid environments at operation time to assist users with batch job submissions. Experimental results validate the prediction accuracy of the proposed mechanisms, and the performance of our case retrieval algorithm. Copyright © 2006 John Wiley & Sons, Ltd. [source]
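
    The case-based step can be sketched as retrieving the most similar stored job and reusing its observed runtime as the prediction. The features, weights and similarity measure below are assumptions for illustration, not the article's relevance-sequence algorithm.

```python
# Toy case-based runtime prediction: retrieve the most similar past job case
# and reuse its observed runtime. Features and weights are made up.
cases = [   # (job features, observed runtime in seconds)
    ({"cpu_req": 4, "input_mb": 200, "app": "blast"}, 310.0),
    ({"cpu_req": 8, "input_mb": 800, "app": "blast"}, 1150.0),
    ({"cpu_req": 2, "input_mb": 50,  "app": "render"}, 95.0),
]
weights = {"cpu_req": 0.4, "input_mb": 0.4, "app": 0.2}

def similarity(a, b):
    s = weights["cpu_req"] * (1 - abs(a["cpu_req"] - b["cpu_req"]) / 16)
    s += weights["input_mb"] * (1 - abs(a["input_mb"] - b["input_mb"]) / 1000)
    s += weights["app"] * (1.0 if a["app"] == b["app"] else 0.0)
    return s

new_job = {"cpu_req": 6, "input_mb": 600, "app": "blast"}
best_features, predicted = max(cases, key=lambda c: similarity(new_job, c[0]))
print(f"predicted completion time ~ {predicted:.0f} s (from most similar case)")
```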


    High-speed network and Grid computing for high-end computation: application in geodynamics ensemble simulations

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2007
    S. Zhou
    Abstract High-speed network and Grid computing have been actively investigated, and their capabilities are being demonstrated. However, their application to high-end scientific computing and modeling is still to be explored. In this paper we discuss the related issues and present our prototype work on applying XCAT3 framework technology to geomagnetic data assimilation development with distributed computers, connected through an up to 10 Gigabit Ethernet network. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Developing LHCb Grid software: experiences and advances

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2007
    I. Stokes-Rees
    Abstract The LHCb Grid software has been used for two Physics Data Challenges, with the latter producing over 98 TB of data and consuming over 650 processor-years of computing power. This paper discusses the experience of developing a Grid infrastructure, interfacing to an existing Grid (LCG) and traditional computing centres simultaneously, running LHCb experiment software and jobs on the Grid, and the integration of a number of new technologies into the Grid infrastructure. Our experience and utilization of the following core technologies will be discussed: OGSI, XML-RPC, Grid services, LCG middleware and instant messaging. Specific attention will be given to analysing the behaviour of over 100,000 jobs executed through the LCG Grid environment, providing insight into the performance, failure modes and scheduling efficiency over a period of several months for a large computational Grid incorporating over 40 sites and thousands of nodes. © Crown copyright 2006. Reproduced with the permission of Her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd. [source]


    Experimental analysis of a mass storage system

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2006
    Shahid Bokhari
    Abstract Mass storage systems (MSSs) play a key role in data-intensive parallel computing. Most contemporary MSSs are implemented as redundant arrays of independent/inexpensive disks (RAID) in which commodity disks are tied together with proprietary controller hardware. The performance of such systems can be difficult to predict because most internal details of the controller behavior are not public. We present a systematic method for empirically evaluating MSS performance by obtaining measurements on a series of RAID configurations of increasing size and complexity. We apply this methodology to a large MSS at Ohio Supercomputer Center that has 16 input/output processors, each connected to four 8 + 1 RAID5 units, and provides 128 TB of storage (of which 116.8 TB are usable when formatted). Our methodology permits storage-system designers to evaluate empirically the performance of their systems with considerable confidence. Although we have carried out our experiments in the context of a specific system, our methodology is applicable to all large MSSs. The measurements obtained using our methods permit application programmers to be aware of the limits to the performance of their codes. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Incentive-based scheduling in Grid computing

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2006
    Yanmin Zhu
    Abstract With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, Grid computing has emerged as an attractive computing paradigm. In typical Grid environments, there are two distinct parties, resource consumers and resource providers. Enabling an effective interaction between the two parties (i.e. scheduling jobs of consumers across the resources of providers) is particularly challenging due to the distributed ownership of Grid resources. In this paper, we propose an incentive-based peer-to-peer (P2P) scheduling for Grid computing, with the goal of building a practical and robust computational economy. The goal is realized by building a computational market supporting fair and healthy competition among consumers and providers. Each participant in the market competes actively and behaves independently for its own benefit. A market is said to be healthy if every player in the market gets sufficient incentive for joining the market. To build the healthy computational market, we propose the P2P scheduling infrastructure, which takes the advantages of P2P networks to efficiently support the scheduling. The proposed incentive-based algorithms are designed for consumers and providers, respectively, to ensure every participant gets sufficient incentive. Simulation results show that our approach is successful in building a healthy and scalable computational economy. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    GAUGE: Grid Automation and Generative Environment,

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2006
    Francisco Hernández
    Abstract The Grid has proven to be a successful paradigm for distributed computing. However, constructing applications that exploit all the benefits that the Grid offers is still not optimal for both inexperienced and experienced users. Recent approaches to solving this problem employ a high-level abstract layer to ease the construction of applications for different Grid environments. These approaches help facilitate construction of Grid applications, but they are still tied to specific programming languages or platforms. A new approach is presented in this paper that uses concepts of domain-specific modeling (DSM) to build a high-level abstract layer. With this DSM-based abstract layer, the users are able to create Grid applications without knowledge of specific programming languages or being bound to specific Grid platforms. An additional benefit of DSM provides the capability to generate software artifacts for various Grid environments. This paper presents the Grid Automation and Generative Environment (GAUGE). The goal of GAUGE is to automate the generation of Grid applications to allow inexperienced users to exploit the Grid fully. At the same time, GAUGE provides an open framework that experienced users can build upon and extend to tailor their applications to particular Grid environments or specific platforms. GAUGE employs domain-specific modeling techniques to accomplish this challenging task. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Applying fuzzy logic and genetic algorithms to enhance the efficacy of the PID controller in buffer overflow elimination for better channel response timeliness over the Internet

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2006
    Wilfred W. K. Lin
    Abstract In this paper two novel intelligent buffer overflow controllers: the fuzzy logic controller (FLC) and the genetic algorithm controller (GAC) are proposed. In the FLC the extant algorithmic PID controller (PIDC) model, which combines the proportional (P), derivative (D) and integral (I) control elements, is augmented with fuzzy logic for higher control precision. The fuzzy logic divides the PIDC control domain into finer control regions. Every region is then defined either by a fuzzy rule or a ,don't care' state. The GAC combines the PIDC model with the genetic algorithm, which manipulates the parametric values of the PIDC as genes in a chromosome. The FLC and GAC operations are based on the objective function . The principle is that the controller should adaptively maintain the safety margin around the chosen reference point (represent by the ,0' of ) at runtime. The preliminary experimental results for the FLC and GAC prototypes indicate that they are both more effective and precise than the PIDC. After repeated timing analyses with the Intel's VTune Performer Analyzer, it was confirmed that the FLC can better support real-time computing than the GAC because of its shorter execution time and faster convergence without any buffer overflow. Copyright © 2005 John Wiley & Sons, Ltd. [source]