Execution

Kinds of Execution

  • task execution

Terms modified by Execution

  • execution cost
  • execution delay
  • execution phase
  • execution time

Selected Abstracts


    THE SHORT-TERM EFFECTS OF EXECUTIONS ON HOMICIDES: DETERRENCE, DISPLACEMENT, OR BOTH?

    CRIMINOLOGY, Issue 4 2009
    KENNETH C. LAND
    Does the death penalty save lives? In recent years, a new round of research has used annual time-series panel data from the 50 U.S. states for 25 or so years from the 1970s to the late 1990s and claims to find many lives saved through reductions in subsequent homicide rates after executions. This research, in turn, has produced a round of critiques, which conclude that these findings are not robust: even small changes in model specifications yield dramatically different results. A principal reason for this sensitivity of the findings is that few state-years exist (about 1 percent of all state-years) in which six or more executions have occurred. To provide a different perspective, we focus on Texas, a state that has used the death penalty with sufficient frequency to make possible relatively stable estimates of the homicide response to executions. In addition, we narrow the observation intervals for recording executions and homicides from the annual calendar year to monthly intervals. Based on time-series analyses and independent-validation tests, our best-fitting model shows that, from January 1994 through December 2005, evidence exists of modest, short-term reductions in homicides in Texas in the first and fourth months that follow an execution, about 2.5 fewer homicides in total. Another model suggests, however, that in addition to homicide reductions, some displacement of homicides may be possible from one month to another in the months after an execution, which reduces the total reduction in homicides after an execution to about 0.5 during a 12-month period. Implications for further research and replication are discussed. [source]
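
    The monthly lagged comparison at the heart of this design can be illustrated with a toy computation. This is a sketch only: the data below are fabricated, and the authors' actual analysis uses fitted time-series models and independent-validation tests, not this naive averaging.

```python
import random

# Toy illustration only: fabricated monthly data, NOT the authors' fitted
# time-series model. It shows the lagged-month comparison in its simplest
# form: homicide counts k months after an execution month vs. the overall
# monthly mean.
random.seed(1)
months = 144                                    # Jan 1994 - Dec 2005
executions = [random.random() < 0.5 for _ in range(months)]  # hypothetical
homicides = [random.gauss(120, 10) for _ in range(months)]   # hypothetical

baseline = sum(homicides) / months
for lag in range(1, 5):
    after = [homicides[t] for t in range(lag, months) if executions[t - lag]]
    diff = sum(after) / len(after) - baseline
    print(f"lag {lag}: {diff:+.2f} homicides vs. baseline (n={len(after)})")
```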


    Goethe, His Duke and Infanticide: New Documents and Reflections on a Controversial Execution

    GERMAN LIFE AND LETTERS, Issue 1 2008
    W. Daniel Wilson
    ABSTRACT It has been known since the 1930s that in 1783 Goethe cast his vote as a member of the governing Privy Council ('Geheimes Consilium') of Saxe-Weimar to retain the death penalty for infanticide. This decision, which followed a request by Duke Carl August for his councillors' advice on the matter, has moved to the centre of controversies over the political Goethe, since it meant that Johanna Höhn of Tannroda, who had been convicted of infanticide, was subsequently executed. The issue draws its special poignancy from Goethe's empathetic portrayal of the infanticide committed by Margarete in the earliest known version of Faust. The simultaneous publication in 2004 of two editions documenting the wider issue of infanticide and other crimes relating to sexual morality in Saxe-Weimar has re-ignited the controversy. The present article re-examines the issues, presenting new evidence that establishes the discourse on the question of the death penalty for infanticide in books that Duke Carl August and Goethe purchased, and presenting the script of the public trial re-enactment ('Halsgericht') on the market square in Weimar directly preceding the execution. It concludes that this discourse ran heavily against the death penalty, and it counters attempts in recent scholarship to draw attention away from the Höhn execution. [source]


    The 'Halsgericht' for the Execution of Johanna Höhn in Weimar, 28 November 1783

    GERMAN LIFE AND LETTERS, Issue 1 2008
    W. Daniel Wilson
    ABSTRACT This previously unpublished document, found in the papers of the Weimar publisher, industrialist and court official F. J. Bertuch, represents the script for the public ceremony preceding the execution of the infanticide Johanna Catharina Höhn. Since Goethe, as a member of the powerful 'Geheimes Consilium' of Saxe-Weimar-Eisenach, had recently cast his vote to retain the death penalty for infanticide, the script has some significance for an evaluation of his administrative activities and his political ethos. The execution took place against a background of tension concerning its legitimacy at a time when the punishment of women who had committed infanticide was hotly contested. [source]


    Common Coding of Observation and Execution of Action in 9-Month-Old Infants

    INFANCY, Issue 1 2006
    Matthew R. Longo
    Do 9-month-old infants motorically simulate actions they perceive others perform? Two experiments tested whether action observation, like overt reaching, is sufficient to elicit the Piagetian A-not-B error. Infants recovered a toy hidden at location A or observed an experimenter recover the toy. After the toy was hidden at location B, infants in both conditions perseverated in reaching to A, demonstrating that active search by the infant is not necessary for the A-not-B error. Consistent with prior research, infants displayed an ipsilateral bias when reaching, the so-called mysterious midline barrier. A similar ipsilateral bias was also observed depending on the manner in which the experimenter reached; infants perseverated following observation of ipsi- but not contralateral reaches by the experimenter. Thus, infants perseverated only following observation of actions they themselves were able to perform, suggesting that they coded others' actions in terms of motor simulation. [source]


    The Effects of Fetal Alcohol Syndrome on Response Execution and Inhibition: An Event-Related Potential Study

    ALCOHOLISM, Issue 11 2009
    Matthew J. Burden
    Background: Both executive function deficits and slower processing speed are characteristic of children with fetal alcohol exposure, but the temporal dynamics of neural activity underlying cognitive processing deficits in fetal alcohol spectrum disorder have rarely been studied. To this end, event-related potentials (ERPs) were used to examine the nature of alcohol-related effects on response inhibition by identifying differences in neural activation during task performance. Methods: We recorded ERPs during a Go/No-go response inhibition task in 2 groups of children in Cape Town, South Africa (M age = 11.7 years; range = 10 to 13): one diagnosed with fetal alcohol syndrome (FAS) or partial FAS (FAS/PFAS; n = 7); the other, a control group whose mothers abstained or drank only minimally during pregnancy (n = 6). Children were instructed to press a "Go" response button to all letter stimuli presented except for the letter "X," the "No-go" stimulus, which occurred relatively infrequently. Results: Task performance accuracy and reaction time did not differ between groups, but differences emerged for three ERP components: P2, N2, and P3. The FAS/PFAS group showed a slower latency to peak P2, suggesting less efficient processing of visual information at a relatively early stage (~200 ms after stimulus onset). Moreover, controls showed a larger P2 amplitude to Go versus No-go, indicating an early discrimination between conditions that was not seen in the FAS/PFAS group. Consistent with previous literature on tasks related to cognitive control, the control group showed a well-defined, larger N2 to No-go versus Go, which was not evident in the FAS/PFAS group. Both groups showed the expected larger P3 amplitude to No-go versus Go, but this condition difference persisted in a late slow wave for the FAS/PFAS group, suggesting increased cognitive effort. Conclusions: The timing and amplitude differences in the ERP measures suggest that slower, less efficient processing characterizes the FAS/PFAS group during initial stimulus identification. Moreover, the exposed children showed less sharply defined components throughout the stimulus and response evaluation processes involved in successful response inhibition. Although both groups were able to inhibit their responses equally well, the level of neural activation in the children with FAS/PFAS was greater, suggesting more cognitive effort. The specific deficits in response inhibition processing at discrete stages of neural activation may have implications for understanding the nature of alcohol-related deficits in other cognitive domains as well. [source]


    Eligible for Execution: The Story of the Daryl Atkins Case.

    LAW & SOCIETY REVIEW, Issue 4 2009
    By Thomas G. Walker
    No abstract is available for this article. [source]


    Program Execution in Connectionist Networks

    MIND & LANGUAGE, Issue 4 2005
    Martin Roth
    This paper examines one such connectionist model and argues that it does execute a program. The argument proceeds by showing that what is essential to running a program is preserving the functional structure of the program. It has generally been assumed that this can only be done by systems possessing a certain temporal-causal organization. However, counterfactual-preserving functional architecture can be instantiated in other ways, for example geometrically, that are realizable by connectionist networks. [source]


    Death for a Terrorist: Media Coverage of the McVeigh Execution as a Case Study in Interorganizational Partnering between the Public and Private Sectors

    PUBLIC ADMINISTRATION REVIEW, Issue 5 2003
    Linda Wines Smith
    In June 2001, the Federal Bureau of Prisons helped to carry out the execution of Timothy McVeigh for his role in the infamous 1995 bombing of the Murrah Federal Building in Oklahoma City. The intense national and international media attention that the execution received was virtually unprecedented in the bureau's history, and it put the bureau in the difficult position of having to carry out two potentially conflicting responsibilities: facilitating coverage of the execution by hundreds of reporters, producers, and technicians, while maintaining the safety and security of the maximum security penitentiary in which the execution was held. Historically, the Bureau of Prisons has preferred to maintain a low media profile and had no experience managing a large-scale media event. This article examines how the bureau met this challenge by forming a partnership with the news media through the creation of a Media Advisory Group. It analyzes the goals, functions, and achievements of the Media Advisory Group by employing the Dawes model of interorganizational relationships. [source]


    Tyburn's Martyrs: Execution in England, 1675–1775 – By Andrea McKenzie

    THE HISTORIAN, Issue 4 2009
    Nick Groom
    No abstract is available for this article. [source]


    A Power Efficient Electronic Implant for a Visual Cortical Neuroprosthesis

    ARTIFICIAL ORGANS, Issue 3 2005
    Jonathan Coulombe
    Abstract: An integrated microstimulator designed for a cortical visual prosthesis is presented, along with a pixel reordering algorithm, together minimizing the peak total current and voltage required for stimulation of large numbers of electrodes at a high rate. In order to maximize the available voltage for stimulation at a given supply voltage for generating biphasic pulses, the device uses monopolar stimulation, where the return electrode voltage is dynamically varied. Thus, the voltage available for stimulation is maximized, as opposed to the conventional fixed return voltage monopolar approach, and impedance is significantly lower than can be achieved using bipolar stimulation with microelectrodes. This enables the use of a low voltage power supply, minimizing power consumption of the device. An important constraint resulting from this stimulation strategy, however, is that current generation needs to be simultaneous and in-phase for all active parallel channels, imposing heavy stress on the wireless power recovery and regulation circuitry in large electrode count systems such as a visual prosthesis. An ordering algorithm to be implemented in the external controller of the prosthesis is then proposed. Based on the data for each frame of the video signal to be transmitted to the implant, the algorithm minimizes the standard deviation of the total generated current between time-multiplexed stimulations by determining the most appropriate combination of parallel stimulation channels to be activated simultaneously. A stimulator prototype has been implemented in CMOS technology and successfully tested. Execution of the external controller reordering algorithm on an application-specific hardware architecture has been verified using a System-On-Chip development platform. A near 75% decrease in the total stimulation current standard deviation was observed with a one-pass algorithm, whereas a recursive variation of the algorithm resulted in a greater than 95% decrease of the same variable. [source]
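
    A minimal sketch of the one-pass reordering idea, assuming the goal is simply to balance per-slot total current across time-multiplexed stimulation groups. The currents, slot count, and greedy heuristic below are illustrative, not the authors' algorithm:

```python
import statistics

def reorder(currents, n_slots, per_slot):
    """Greedy one-pass heuristic: place each current (largest first) into
    the slot with the smallest running total that still has capacity."""
    slots = [[] for _ in range(n_slots)]
    totals = [0.0] * n_slots
    for c in sorted(currents, reverse=True):
        candidates = [i for i in range(n_slots) if len(slots[i]) < per_slot]
        best = min(candidates, key=lambda i: totals[i])
        slots[best].append(c)
        totals[best] += c
    return slots, totals

currents = [5, 42, 17, 8, 33, 21, 9, 28, 14, 3, 39, 11]  # per-pixel µA, hypothetical
slots, totals = reorder(currents, n_slots=4, per_slot=3)
# Compare against the naive frame order (every 4th pixel per slot).
print("naive std :", statistics.pstdev([sum(currents[i::4]) for i in range(4)]))
print("greedy std:", statistics.pstdev(totals))
```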


    Stability analysis for a three-dimensional slope system: calculation with a composed rigid-body failure mechanism and model test in an open site. Part 2: Execution of the calculation and results

    BAUTECHNIK, Issue 1 2004
    Michael Goldscheider Dr.-Ing.
    To determine the safety against a three-dimensional graben rupture at a high rim slope of an open-cast lignite mine, a three-dimensional composed rigid-body failure mechanism is developed and calculated. The underlying relations and basic equations were presented in Part 1 of this paper. Part 2 describes the execution of the calculation, the optimization of the failure mechanism, and typical results. [source]


    A New Fluorous/Organic Amphiphilic Ether Solvent, F-626: Execution of Fluorous and High Temperature Classical Reactions with Convenient Biphase Workup to Separate Product from High Boiling Solvent.

    CHEMINFORM, Issue 39 2002
    Hiroshi Matsubara
    Abstract For Abstract see ChemInform Abstract in Full Text. [source]


    Measure for Measure and the Executions of Catholics in 1604

    ENGLISH LITERARY RENAISSANCE, Issue 1 2003
    JAMES ELLISON
    First page of article [source]


    Executions, Deterrence, and Homicide: A Tale of Two Cities

    JOURNAL OF EMPIRICAL LEGAL STUDIES, Issue 1 2010
    Franklin E. Zimring
    We compare homicide rates in two quite similar cities with vastly different execution risks. Singapore had an execution rate close to one per million per year until an explosive 20-fold increase in 1994, 1995, and 1996 to a level that we show was probably the highest in the world. Then, over the next 11 years, Singapore executions dropped by about 95 percent. Hong Kong, by contrast, had no executions at all during the last generation and abolished capital punishment in 1993. Homicide levels and trends are remarkably similar in these two cities over the 35 years after 1973, with neither the surge in Singapore executions nor the more recent steep drop producing any differential impact. By comparing two closely matched places with huge contrasts in actual execution but no differences in homicide trends, we have generated a unique test of the exuberant claims of deterrence that have been produced over the past decade in the United States. [source]


    Lethal Punishment: Lynchings and Legal Executions in the South by Margaret Vandiver

    LAW & SOCIETY REVIEW, Issue 1 2007
    Timothy W. Clark
    No abstract is available for this article. [source]


    To Commit or Not to Commit: Modeling Agent Conversations for Action

    COMPUTATIONAL INTELLIGENCE, Issue 2 2002
    Roberto A. Flores
    Conversations are sequences of messages exchanged among interacting agents. For conversations to be meaningful, agents ought to follow commonly known specifications limiting the types of messages that can be exchanged at any point in the conversation. These specifications are usually implemented using conversation policies (which are rules of inference) or conversation protocols (which are predefined conversation templates). In this article we present a semantic model for specifying conversations using conversation policies. This model is based on the principles that the negotiation and uptake of shared social commitments entail the adoption of obligations to action, which indicate the actions that agents have agreed to perform. In the same way, obligations are retracted based on the negotiation to discharge their corresponding shared social commitments. Based on these principles, conversations are specified as interaction specifications that model the ideal sequencing of agent participations negotiating the execution of actions in a joint activity. These specifications capture not only the adoption and discharge of shared commitments and obligations during an activity, but also the commitments and obligations that are required (as preconditions) or that outlive a joint activity (as postconditions). We model the Contract Net Protocol as an example of the specification of conversations in a joint activity. [source]
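
    The commitment-to-obligation principle can be sketched as a toy state machine. The class, method names, and messages are hypothetical illustrations, not the authors' formal semantics:

```python
class Conversation:
    def __init__(self):
        self.obligations = set()            # (debtor, action) pairs owed

    def accept(self, debtor, action):
        # uptake of a proposed shared commitment adopts an obligation
        self.obligations.add((debtor, action))

    def discharge(self, debtor, action):
        # negotiated release retracts the corresponding obligation
        self.obligations.discard((debtor, action))

c = Conversation()
c.accept("contractor", "deliver-report")     # contractor agrees to act
print(c.obligations)                         # {('contractor', 'deliver-report')}
c.discharge("contractor", "deliver-report")  # commitment fulfilled/released
print(c.obligations)                         # set()
```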


    Toward A Formalism for Conversation Protocols Using Joint Intention Theory

    COMPUTATIONAL INTELLIGENCE, Issue 2 2002
    Sanjeev Kumar
    Conversation protocols are used to achieve certain goals or to bring about certain states in the world. Therefore, one may identify the landmarks, or states that must be brought about, during the goal-directed execution of a protocol. Accordingly, the landmarks, each characterized by the propositions that are true in the state it represents, are the most important aspect of a protocol. Families of conversation protocols can be expressed formally as partially ordered landmarks once the landmarks necessary to achieve a goal have been identified. Concrete protocols represented as joint action expressions can then be derived from the partially ordered landmarks and executed directly by joint intention interpreters. This approach of applying Joint Intention theory to protocols also supports flexibility in the actions used to reach landmarks, shortcutting of protocol execution, automatic exception handling, and a correctness criterion for protocols and protocol compositions. [source]
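
    A small sketch of the landmark idea, assuming a protocol family is a partial order over landmarks that any concrete execution must respect. The landmark names are illustrative (loosely Contract-Net-like):

```python
# landmark -> landmarks that must precede it (a partial order as a DAG)
before = {
    "task-announced": set(),
    "bids-received": {"task-announced"},
    "bid-accepted": {"bids-received"},
    "result-delivered": {"bid-accepted"},
}

def respects(order, execution):
    """Check that an observed execution visits every landmark in an order
    consistent with the partial order."""
    seen = set()
    for lm in execution:
        if not order[lm] <= seen:    # some required predecessor missing
            return False
        seen.add(lm)
    return seen == set(order)        # every landmark was reached

ok = ["task-announced", "bids-received", "bid-accepted", "result-delivered"]
bad = ["task-announced", "bid-accepted", "bids-received", "result-delivered"]
print(respects(before, ok))   # True
print(respects(before, bad))  # False
```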


    A Windows-based interface for teaching image processing

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2010
    Melvin Ayala
    Abstract The use of image processing in research represents a challenge to scientists who are interested in its various applications but are not familiar with this area of expertise. In academia as well as in industry, fundamental concepts such as image transformations, filtering, noise removal, morphology, and convolution/deconvolution, among others, require extra effort to be understood. Additionally, algorithms for image reading and visualization on computers are not always easy for inexperienced researchers to develop. This environment has led to an adverse situation in which most students and researchers write their own image processing code for operations that are already standard in image processing, a redundant effort that only exacerbates the situation. To resolve this dilemma, this article proposes a user-friendly computer interface with a dual objective: to free students and researchers from the learning time needed to understand and apply diverse imaging techniques, while also providing them with the option to enhance or reprogram such algorithms through direct access to the software code. The interface was thus developed to assist in understanding and performing common image processing operations through simple commands that can be executed mostly by mouse clicks. The visualization of pseudo code after each command execution makes the interface attractive, while saving time and helping users learn these practical concepts. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 213–224, 2010; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20171 [source]


    Experimenting with a computer-mediated collaborative interaction model to support engineering courses

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2004
    David A. Fuller
    Abstract Many engineering lecture courses are taught only with the support of a board or transparencies. In both cases, the students have to copy the material presented in class, including additional annotations and comments. We performed a controlled experiment to measure the impact of introducing a computer-mediated collaborative interaction model to support the teaching/learning process in such scenarios, using a Web-based computer application. Our experiment was carried out during two consecutive semesters of a first-year programming engineering course, with 447 enrolled students of whom 234 were surveyed. In this paper, we describe the design and execution of the experiment and present the results obtained. Based on these results, we conclude that there are advantages to using a collaborative interaction model supported by a collaborative software tool in an engineering course such as the one studied here. © 2004 Wiley Periodicals, Inc. Comput Appl Eng Educ 12: 175–188, 2004; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20012 [source]


    Tactics-Based Behavioural Planning for Goal-Driven Rigid Body Control

    COMPUTER GRAPHICS FORUM, Issue 8 2009
    Stefan Zickler
    Computer Graphics [I.3.7]: Animation; Artificial Intelligence [I.2.8]: Plan execution, formation, and generation; Computer Graphics [I.3.5]: Physically based modelling. Abstract Controlling rigid body dynamic simulations can pose a difficult challenge when constraints exist on the bodies' goal states and the sequence of intermediate states in the resulting animation. Manually adjusting individual rigid body control actions (forces and torques) can become a very labour-intensive and non-trivial task, especially if the domain includes a large number of bodies or if it requires complicated chains of inter-body collisions to achieve the desired goal state. Furthermore, there are some interactive applications that rely on rigid body models where no control guidance by a human animator can be offered at runtime, such as video games. In this work, we present techniques to automatically generate intelligent control actions for rigid body simulations. We introduce sampling-based motion planning methods that allow us to model goal-driven behaviour through the use of non-deterministic Tactics, which consist of intelligent, sampling-based control blocks called Skills. We introduce and compare two variations of a Tactics-driven planning algorithm, namely Behavioural Kinodynamic Rapidly Exploring Random Trees (BK-RRT) and Behavioural Kinodynamic Balanced Growth Trees (BK-BGT). We show how our planner can be applied to automatically compute the control sequences for challenging physics-based domains and that it scales to control problems involving several hundred interacting bodies, each carrying unique goal constraints. [source]
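
    A minimal kinodynamic-RRT sketch in the spirit of BK-RRT, with a toy point mass standing in for the rigid-body simulator and random force samples standing in for Skills. All parameters are illustrative and success is stochastic; this is not the authors' planner:

```python
import math, random

random.seed(0)
DT, GOAL, TOL = 0.2, (5.0, 5.0), 1.0

def step(state, force):
    """One forward-simulated physics step for a toy point mass."""
    x, y, vx, vy = state
    fx, fy = force
    return (x + vx * DT, y + vy * DT, vx + fx * DT, vy + fy * DT)

def dist(s, p):
    return math.hypot(s[0] - p[0], s[1] - p[1])

def rrt(start, iters=4000):
    tree = {start: None}                      # state -> parent state
    for _ in range(iters):
        target = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(tree, key=lambda s: dist(s, target))
        # "skill": sample a few candidate forces, keep the best extension
        best = min((step(near, (random.uniform(-10, 10),
                                random.uniform(-10, 10)))
                    for _ in range(5)), key=lambda s: dist(s, target))
        tree[best] = near
        if dist(best, GOAL) < TOL:            # goal reached: rebuild path
            path = [best]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return list(reversed(path))
    return None

path = rrt((0.0, 0.0, 0.0, 0.0))
print("no path found" if path is None else f"goal reached in {len(path)} states")
```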


    Novel software architecture for rapid development of magnetic resonance applications

    CONCEPTS IN MAGNETIC RESONANCE, Issue 3 2002
    Josef Debbins
    Abstract As the pace of clinical magnetic resonance (MR) procedures grows, the need for an MR scanner software platform on which developers can rapidly prototype, validate, and produce product applications becomes paramount. A software architecture has been developed for a commercial MR scanner that employs state-of-the-art software technologies including Java, C++, DICOM, XML, and so forth. This system permits graphical (drag and drop) assembly of applications built on simple processing building blocks, including pulse sequences, a user interface, reconstruction and postprocessing, and database control. The application developer (researcher or commercial) can assemble these building blocks to create custom applications. The developer can also write source code directly to create new building blocks and add these to the collection of components, which can be distributed worldwide over the internet. The application software and its components are developed in Java, which assures platform portability across any host computer that supports a Java Virtual Machine. The downloaded executable portion of the application is executed in compiled C++ code, which assures mission-critical real-time execution during fast MR acquisition and data processing on dedicated embedded hardware that supports C or C++. This combination permits flexible and rapid MR application development across virtually any combination of computer configurations and operating systems, and yet it allows for very high performance execution on actual scanner hardware. Applications, including prescan, are inherently real-time enabled and can be aggregated and customized to form "superapplications," wherein one or more applications work with another to accomplish the clinical objective with a very high transition speed between applications. © 2002 Wiley Periodicals, Inc. Concepts in Magnetic Resonance (Magn Reson Engineering) 15: 216–237, 2002 [source]


    Adaptive structured parallelism for distributed heterogeneous architectures: a methodological approach with pipelines and farms

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2010
    Horacio González-Vélez
    Abstract Algorithmic skeletons abstract commonly used patterns of parallel computation, communication, and interaction. Based on the algorithmic skeleton concept, structured parallelism provides a high-level parallel programming technique that allows the conceptual description of parallel programs while fostering platform independence and algorithm abstraction. This work presents a methodology to improve skeletal parallel programming in heterogeneous distributed systems by introducing adaptivity through resource awareness. As we hypothesise that a skeletal program should be able to adapt to the dynamic resource conditions over time using its structural forecasting information, we have developed adaptive structured parallelism (ASPARA). ASPARA is a generic methodology to incorporate structural information at compilation into a parallel program, which will help it to adapt at execution. ASPARA comprises four phases: programming, compilation, calibration, and execution. We illustrate the feasibility of this approach and its associated performance improvements using independent case studies based on two algorithmic skeletons, the task farm and the pipeline, evaluated in a non-dedicated heterogeneous multi-cluster system. Copyright © 2010 John Wiley & Sons, Ltd. [source]
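
    The calibration-then-execution idea can be sketched for the task-farm skeleton. The worker function, probe count, and candidate farm widths below are hypothetical, and ASPARA itself adapts using structural forecasting information rather than this naive timing probe:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(x):                        # stand-in for a real farm worker function
    time.sleep(0.001)
    return x * x

def farm(inputs, width):
    """Task-farm skeleton: apply task() to inputs with `width` workers."""
    with ThreadPoolExecutor(max_workers=width) as pool:
        return list(pool.map(task, inputs))

def calibrate(candidates, probes=64):
    """Calibration phase: time a small probe run per candidate width."""
    timings = {}
    for w in candidates:
        t0 = time.perf_counter()
        farm(range(probes), w)
        timings[w] = time.perf_counter() - t0
    return min(timings, key=timings.get)

width = calibrate([1, 2, 4, 8])
print("calibrated width:", width)
results = farm(range(1000), width)  # execution phase at the chosen width
print(results[:5])
```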


    Dynamic scratch-pad memory management with data pipelining for embedded systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2010
    Yanqin Yang
    Abstract In this paper, we propose an effective data pipelining technique, SPDP (Scratch-Pad Data Pipelining), for dynamic scratch-pad memory (SPM) management with DMA (Direct Memory Access). Our basic idea is to overlap the execution of CPU instructions and DMA operations. In SPDP, based on the iteration access patterns of arrays, we group multiple iterations into a block to improve the data locality of regular array accesses. We allocate the data of multiple iterations into different portions of the SPM. In this way, when the CPU executes instructions and accesses data from one portion of the SPM, DMA operations can be performed to transfer data between the off-chip memory and another portion of SPM simultaneously. We perform code transformation to insert DMA instructions to achieve the data pipelining. We have implemented our SPDP technique with the IMPACT compiler, and conduct experiments using a set of loop kernels from DSPstone, Mibench, and Mediabench on the cycle-accurate VLIW simulator of Trimaran. The experimental results show that our technique achieves performance improvement compared with the previous work. Copyright © 2010 John Wiley & Sons, Ltd. [source]
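
    A schematic sketch of the double-buffering idea behind SPDP, with a background thread standing in for the DMA engine and two lists standing in for the SPM partitions. The block size and threading model are illustrative, not the IMPACT-generated code:

```python
import threading

off_chip = list(range(32))          # pretend off-chip array
BLOCK = 8

def dma_fetch(dst, start):          # stand-in for a DMA transfer
    dst[:] = off_chip[start:start + BLOCK]

spm = [[], []]                      # two SPM partitions (double buffer)
dma_fetch(spm[0], 0)                # prime the first buffer
total = 0
for b in range(0, len(off_chip), BLOCK):
    cur, nxt = (b // BLOCK) % 2, ((b // BLOCK) + 1) % 2
    dma = None
    if b + BLOCK < len(off_chip):   # start the next transfer early...
        dma = threading.Thread(target=dma_fetch, args=(spm[nxt], b + BLOCK))
        dma.start()
    total += sum(x * x for x in spm[cur])  # ...while the "CPU" computes
    if dma:
        dma.join()                  # wait for the DMA to complete
print("sum of squares:", total)     # 10416, same as the sequential result
```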


    A comparison of using Taverna and BPEL in building scientific workflows: the case of caGrid

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2010
    Wei Tan
    Abstract With the emergence of 'service-oriented science', the need arises to orchestrate multiple services to facilitate scientific investigation, that is, to create 'science workflows'. We present here our findings in providing a workflow solution for the caGrid service-based grid infrastructure. We choose BPEL and Taverna as candidates, and compare their usability in the lifecycle of a scientific workflow, including workflow composition, execution, and result analysis. Our experience shows that BPEL, as an imperative language, offers a comprehensive set of modeling primitives for workflows of all flavors, whereas Taverna offers a dataflow model and a more compact set of primitives that facilitates dataflow modeling and pipelined execution. We hope that this comparison study not only helps researchers to select a language or tool that meets their specific needs, but also offers some insight into how a workflow language and tool can fulfill the requirements of the scientific community. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Scheduling dense linear algebra operations on multicore processors

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2010
    Jakub Kurzak
    Abstract State-of-the-art dense linear algebra software, such as the LAPACK and ScaLAPACK libraries, suffers performance losses on multicore processors due to its inability to fully exploit thread-level parallelism. At the same time, the coarse-grain dataflow model gains popularity as a paradigm for programming multicore architectures. This work looks at implementing classic dense linear algebra workloads, the Cholesky factorization, the QR factorization, and the LU factorization, using dynamic data-driven execution. Two emerging approaches to implementing coarse-grain dataflow are examined: the model of nested parallelism, represented by the Cilk framework, and the model of parallelism expressed through an arbitrary Directed Acyclic Graph, represented by the SMP Superscalar framework. Performance and coding effort are analyzed and compared against code manually parallelized at the thread level. Copyright © 2009 John Wiley & Sons, Ltd. [source]
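
    The task-DAG view can be sketched for a tiled Cholesky factorization: each kernel invocation (POTRF/TRSM/SYRK/GEMM) becomes a node, and any node whose dependences are satisfied may run. This is a minimal dependency construction under standard tiled-Cholesky assumptions; no actual arithmetic is performed and it is not the SMP Superscalar runtime:

```python
from collections import defaultdict

T = 3                                       # 3x3 grid of tiles
deps = defaultdict(set)                     # task -> tasks it must wait for
for k in range(T):
    for j in range(k):                      # all updates to A[k][k] first
        deps[("POTRF", k)].add(("SYRK", k, j))
    for i in range(k + 1, T):
        deps[("TRSM", i, k)].add(("POTRF", k))
        for j in range(k):                  # all updates to A[i][k] first
            deps[("TRSM", i, k)].add(("GEMM", i, k, j))
        deps[("SYRK", i, k)].add(("TRSM", i, k))
        for j in range(k + 1, i):           # update of off-diagonal A[i][j]
            deps[("GEMM", i, j, k)] |= {("TRSM", i, k), ("TRSM", j, k)}

tasks = set(deps)
for s in deps.values():
    tasks |= s
done, step = set(), 0
while done < tasks:                         # dataflow: run whatever is ready
    ready = sorted(t for t in tasks - done if deps[t] <= done)
    print(f"step {step}: {ready}")
    done |= set(ready)
    step += 1
```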


    First experience of compressible gas dynamics simulation on the Los Alamos roadrunner machine

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 17 2009
    Paul R. Woodward
    Abstract We report initial experience with gas dynamics simulation on the Los Alamos Roadrunner machine. In this initial work, we have restricted our attention to flows in which the flow Mach number is less than 2. This permits us to use a simplified version of the PPM gas dynamics algorithm that has been described in detail by Woodward (2006). We follow a multifluid volume fraction using the PPB moment-conserving advection scheme, enforcing both pressure and temperature equilibrium between two monatomic ideal gases within each grid cell. The resulting gas dynamics code has been extensively restructured for efficient multicore processing and implemented for scalable parallel execution on the Roadrunner system. The code restructuring and parallel implementation are described and performance results are discussed. For a modest grid size, sustained performance of 3.89 Gflop/s per CPU core is delivered by this code on 36 Cell processors in 9 triblade nodes of a single rack of Roadrunner hardware. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Adaptive workflow processing and execution in Pegasus

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 16 2009
    Kevin Lee
    Abstract Workflows are widely used in applications that require coordinated use of computational resources. Workflow definition languages typically abstract over some aspects of the way in which a workflow is to be executed, such as the level of parallelism to be used or the physical resources to be deployed. As a result, a workflow management system has the responsibility of establishing how best to execute a workflow given the available resources. The Pegasus workflow management system compiles abstract workflows into concrete execution plans, and has been widely used in large-scale e-Science applications. This paper describes an extension to Pegasus whereby resource allocation decisions are revised during workflow evaluation, in the light of feedback on the performance of jobs at runtime. The contributions of this paper include: (i) a description of how adaptive processing has been retrofitted to an existing workflow management system; (ii) a scheduling algorithm that allocates resources based on runtime performance; and (iii) an experimental evaluation of the resulting infrastructure using grid middleware over clusters. Copyright © 2009 John Wiley & Sons, Ltd. [source]
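
    A toy sketch of the runtime-feedback idea: an initial allocation is revised between jobs in light of observed runtimes, shifting work toward the resources that performed best. The site names and speed factors are simulated, and this is not the Pegasus scheduling algorithm itself:

```python
import random

random.seed(2)
sites = {"siteA": 1.0, "siteB": 3.0}        # hidden per-site slowdown factors
observed = {s: [] for s in sites}

def run_job(site):                          # simulated runtime with noise
    t = sites[site] * random.uniform(0.8, 1.2)
    observed[site].append(t)
    return t

def pick_site():
    # revise the allocation using observed mean runtimes so far
    means = {s: sum(v) / len(v) for s, v in observed.items() if v}
    return min(means, key=means.get)

allocation = []
for job in range(20):
    site = list(sites)[job % 2] if job < 4 else pick_site()  # probe first
    run_job(site)
    allocation.append(site)
print(allocation)    # later jobs converge on the faster site
```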


    Trust-based robust scheduling and runtime adaptation of scientific workflow

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 16 2009
    Mingzhong Wang
    Abstract Robustness and reliability with respect to the successful completion of a schedule are crucial requirements for scheduling in scientific workflow management systems, because service providers are becoming autonomous. We introduce a model that incorporates trust, the probability that a service agent will comply with its commitments, to improve the predictability and stability of the schedule. To deal with exceptions during the execution of a schedule, we adapt and evolve the schedule at runtime by interleaving the processes of evaluating, scheduling, executing, and monitoring in the life cycle of workflow management. Experiments show that schedules maximizing participants' trust are more likely to survive and succeed in open and dynamic environments. The results also prove that the proposed approach of workflow evaluation can find the most robust execution flow efficiently, thus avoiding the need to schedule every possible execution path in the workflow definition. Copyright © 2009 John Wiley & Sons, Ltd. [source]
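
    The trust-maximizing selection step can be sketched as follows, assuming independent compliance probabilities per agent. The task names, agents, and trust values are illustrative, not the paper's model:

```python
from math import prod

candidates = {                      # task -> {agent: trust (compliance prob.)}
    "align":   {"svcA": 0.95, "svcB": 0.80},
    "cluster": {"svcC": 0.70, "svcD": 0.90},
    "render":  {"svcE": 0.85},
}

# pick the most trusted agent per task, maximizing schedule survival odds
schedule = {task: max(agents, key=agents.get)
            for task, agents in candidates.items()}
p_success = prod(candidates[t][a] for t, a in schedule.items())
print(schedule)
print(f"P(all commitments honored) = {p_success:.3f}")   # 0.95*0.90*0.85
```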


    Clock synchronization in Cell/B.E. traces

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2009
    M. Biberstein
    Abstract Cell/B.E. is a heterogeneous multicore processor that was designed for the efficient execution of parallel and vectorizable applications with high computation and memory requirements. The transition to multicores introduces the challenge of providing tools that help programmers tune the code running on these architectures. Tracing tools, in particular, often help locate performance problems related to thread and process communication. A major impediment to implementing tracing on Cell is the absence of a common clock that can be accessed at low cost from all cores. The OS clock is costly to access from the auxiliary cores, and the hardware timers cannot be simultaneously set on all the cores. In this paper, we describe an offline trace analysis algorithm that assigns wall-clock time to trace records based on their thread-local time stamps and event order. Our experiments on several Cell SDK workloads show that the indeterminism in assigning wall-clock time to events is low, on average 20–40 clock ticks (translating into 1.4–2.8 µs on the system used in our experiments). We also show how various practical problems, such as the imprecision of time measurement, can be overcome. Copyright © 2009 John Wiley & Sons, Ltd. [source]
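
    A small sketch of how event-order constraints can bound a cross-core clock offset offline, using the fact that a message must be received after it is sent. The trace tuples below are fabricated, and the authors' algorithm handles many more event types than this:

```python
# Map core1's local clock onto core0's via wall(t1) = t1 + offset.
c0_to_c1 = [(100, 412), (220, 531), (340, 655)]   # (send on c0, recv on c1)
c1_to_c0 = [(470, 165), (590, 282), (700, 398)]   # (send on c1, recv on c0)

# send0 <= recv1 + offset   =>   offset >= send0 - recv1
lower = max(s - r for s, r in c0_to_c1)
# send1 + offset <= recv0   =>   offset <= recv0 - send1
upper = min(r - s for s, r in c1_to_c0)
offset = (lower + upper) / 2        # midpoint of the feasible interval
print(f"offset in [{lower}, {upper}] ticks; estimate {offset}")
```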


    Network-aware selective job checkpoint and migration to enhance co-allocation in multi-cluster systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2009
    William M. Jones
    Abstract Multi-site parallel job schedulers can improve average job turn-around time by making use of fragmented node resources available throughout the grid. By mapping jobs across potentially many clusters, jobs that would otherwise wait in the queue for local resources can begin execution much earlier; thereby improving system utilization and reducing average queue waiting time. Recent research in this area of scheduling leverages user-provided estimates of job communication characteristics to more effectively partition the job across system resources. In this paper, we address the impact of inaccuracies in these estimates on system performance and show that multi-site scheduling techniques benefit from these estimates, even in the presence of considerable inaccuracy. While these results are encouraging, there are instances where these errors result in poor job scheduling decisions that cause network over-subscription. This situation can lead to significantly degraded application performance and turnaround time. Consequently, we explore the use of job checkpointing, termination, migration, and restart (CTMR) to selectively stop offending jobs to alleviate network congestion and subsequently restart them when (and where) sufficient network resources are available. We then characterize the conditions and the extent to which the process of CTMR improves overall performance. We demonstrate that this technique is beneficial even when the overhead of doing so is costly. Copyright © 2009 John Wiley & Sons, Ltd. [source]