Computer Systems (computer + systems)

Selected Abstracts


A software player for providing hints in problem-based learning according to a new specification

COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2009
Pedro J. Muñoz-Merino
Abstract The provision of hints during problem solving has been a successful strategy in the learning process. There exist several computer systems that provide hints to students during problem solving, covering some specific issues of hinting. This article presents a novel software player module for providing hints in problem-based learning. We have implemented it into the XTutor Intelligent Tutoring System using its XDOC extension mechanism and the Python programming language. This player includes some of the functionalities that are present in different state-of-the-art systems, and also other new relevant functionalities based on our own ideas and teaching experience. The article explains each feature for providing hints and it also gives a pedagogical justification or explanation. We have created an XML binding, so any combination of the model hints functionalities can be expressed as an XML instance, enabling interoperability and reusability. The implemented player tool together with the XTutor server-side XDOC processor can interpret and run XML files according to this newly defined hints specification. Finally, the article presents several running examples of use of the tool, the subjects where it is in use, and results that lead to the conclusion of the positive impact of this hints tool in the learning process based on quantitative and qualitative analysis. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 17: 272–284, 2009; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20240 [source]
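
To give a concrete sense of what an XML hints binding of this kind might look like, the short Python sketch below parses a small hints instance and releases hints one at a time together with a score penalty. The element and attribute names (problem, hint, order, penalty) are hypothetical stand-ins, since the abstract does not reproduce the actual specification, and no XTutor/XDOC integration is shown.

```python
# Illustrative sketch only: the article's actual XML binding is not reproduced in the
# abstract, so the element and attribute names below (<problem>, <hint>, order, penalty)
# are hypothetical stand-ins, not the published specification.
import xml.etree.ElementTree as ET

HINTS_XML = """
<problem id="ohms-law-1">
  <statement>Find the current through a 10 ohm resistor at 5 V.</statement>
  <hint order="1" penalty="0.1">Recall the relation between V, I and R.</hint>
  <hint order="2" penalty="0.2">Ohm's law: I = V / R.</hint>
  <hint order="3" penalty="0.4">I = 5 V / 10 ohm = 0.5 A.</hint>
</problem>
"""

def next_hint(xml_text: str, hints_already_seen: int) -> tuple[str, float] | None:
    """Return the next unseen hint and its score penalty, or None if exhausted."""
    root = ET.fromstring(xml_text)
    hints = sorted(root.findall("hint"), key=lambda h: int(h.get("order")))
    if hints_already_seen >= len(hints):
        return None
    h = hints[hints_already_seen]
    return h.text, float(h.get("penalty"))

print(next_hint(HINTS_XML, 0))   # ("Recall the relation between V, I and R.", 0.1)
```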


Fast and Scalable CPU/GPU Collision Detection for Rigid and Deformable Surfaces

COMPUTER GRAPHICS FORUM, Issue 5 2010
Simon Pabst
Abstract We present a new hybrid CPU/GPU collision detection technique for rigid and deformable objects based on spatial subdivision. Our approach efficiently exploits the massive computational capabilities of modern CPUs and GPUs commonly found in off-the-shelf computer systems. The algorithm is specifically tailored to be highly scalable on both the CPU and the GPU sides. We can compute discrete and continuous external and self-collisions of non-penetrating rigid and deformable objects consisting of many tens of thousands of triangles in a few milliseconds on a modern PC. Our approach is orders of magnitude faster than earlier CPU-based approaches and up to twice as fast as the most recent GPU-based techniques. [source]
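
The abstract gives no implementation detail, but the core idea of spatial subdivision can be illustrated with a minimal, CPU-only broad phase: hash each triangle's bounding box into a uniform grid and report triangles that share a cell as candidate pairs. The triangle data below is made up; the paper's hybrid CPU/GPU pipeline, continuous collision tests and exact narrow phase go far beyond this sketch.

```python
# Minimal CPU-only sketch of a uniform-grid broad phase, the kind of spatial
# subdivision the paper builds on; the actual hybrid CPU/GPU pipeline, continuous
# collision tests and self-collision handling are far more involved.
from collections import defaultdict
from itertools import combinations

def aabb(tri):
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def candidate_pairs(triangles, cell_size=1.0):
    """Hash each triangle's AABB into grid cells; triangles sharing a cell become
    candidate collision pairs for the (omitted) exact narrow-phase test."""
    grid = defaultdict(set)
    for idx, tri in enumerate(triangles):
        (x0, y0, z0), (x1, y1, z1) = aabb(tri)
        for i in range(int(x0 // cell_size), int(x1 // cell_size) + 1):
            for j in range(int(y0 // cell_size), int(y1 // cell_size) + 1):
                for k in range(int(z0 // cell_size), int(z1 // cell_size) + 1):
                    grid[(i, j, k)].add(idx)
    pairs = set()
    for members in grid.values():
        pairs.update(combinations(sorted(members), 2))
    return pairs

tris = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
        [(0.5, 0.2, 0), (1.5, 0.2, 0), (0.5, 1.2, 0)],
        [(5, 5, 5), (6, 5, 5), (5, 6, 5)]]
print(candidate_pairs(tris))   # {(0, 1)} -- the distant third triangle is pruned
```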


Interactive Visualization with Programmable Graphics Hardware

COMPUTER GRAPHICS FORUM, Issue 3 2002
Thomas Ertl
One of the main scientific goals of visualization is the development of algorithms and appropriate data models which facilitate interactive visual analysis and direct manipulation of the increasingly large data sets which result from simulations running on massive parallel computer systems, from measurements employing fast high-resolution sensors, or from large databases and hierarchical information spaces. This task can only be achieved with the optimization of all stages of the visualization pipeline: filtering, compression, and feature extraction of the raw data sets, adaptive visualization mappings which allow the users to choose between speed and accuracy, and exploiting new graphics hardware features for fast and high-quality rendering. The recent introduction of advanced programmability in widely available graphics hardware has already led to impressive progress in the area of volume visualization. However, besides the acceleration of the final rendering, flexible graphics hardware is increasingly being used also for the mapping and filtering stages of the visualization pipeline, thus giving rise to new levels of interactivity in visualization applications. The talk will present recent results of applying programmable graphics hardware in various visualization algorithms covering volume data, flow data, terrains, NPR rendering, and distributed and remote applications. [source]


The Scalasca performance toolset architecture

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2010
Markus Geimer
Abstract Scalasca is a performance toolset that has been specifically designed to analyze parallel application execution behavior on large-scale systems with many thousands of processors. It offers an incremental performance-analysis procedure that integrates runtime summaries with in-depth studies of concurrent behavior via event tracing, adopting a strategy of successively refined measurement configurations. Distinctive features are its ability to identify wait states in applications with very large numbers of processes and to combine these with efficiently summarized local measurements. In this article, we review the current toolset architecture, emphasizing its scalable design and the role of the different components in transforming raw measurement data into knowledge of application execution behavior. The scalability and effectiveness of Scalasca are then surveyed from experience measuring and analyzing real-world applications on a range of computer systems. Copyright © 2010 John Wiley & Sons, Ltd. [source]
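
One wait-state pattern that trace-based analysis of this kind can detect is the "late sender", where a receive is posted before the matching send begins. The toy sketch below computes that waiting time from simplified, hypothetical event records; it is not Scalasca's data model or implementation.

```python
# Toy illustration of one wait-state pattern ("late sender"): a receive is posted
# before the matching send starts, so the receiver waits. Event records here are
# simplified stand-ins, not Scalasca's trace format.
from dataclasses import dataclass

@dataclass
class Msg:
    sender: int
    receiver: int
    send_start: float   # time the send was entered on the sender
    recv_start: float   # time the receive was entered on the receiver
    recv_end: float     # time the receive completed

def late_sender_wait(messages):
    """Sum, per receiving process, the time spent waiting for sends posted late."""
    waiting = {}
    for m in messages:
        if m.send_start > m.recv_start:                   # receiver arrived first
            wait = min(m.recv_end, m.send_start) - m.recv_start
            waiting[m.receiver] = waiting.get(m.receiver, 0.0) + wait
    return waiting

trace = [Msg(sender=0, receiver=1, send_start=5.0, recv_start=2.0, recv_end=6.0),
         Msg(sender=1, receiver=0, send_start=1.0, recv_start=1.5, recv_end=2.0)]
print(late_sender_wait(trace))   # {1: 3.0} -- rank 1 waited 3 time units
```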


Concepts for computer center power management

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2010
A. DiRienzo
Abstract Electrical power usage contributes significantly to the operational costs of large computer systems. At the Hypersonic Missile Technology Research and Operations Center (HMT-ROC) our system usage patterns provide a significant opportunity to reduce operating costs since there are a small number of dedicated users. The relatively predictable nature of our usage patterns allows for the scheduling of computational resource availability. We take advantage of this predictability to shut down systems during periods of low usage to reduce power consumption. With interconnected computer cluster systems, reducing the number of online nodes is more than a simple matter of throwing the power switch on a portion of the cluster. The paper discusses these issues and an approach for power reduction strategies for a computational system with a heterogeneous system mix that includes a large (1560-node) Apple Xserve PowerPC supercluster. In practice, the average load on computer systems may be much less than the peak load although the infrastructure supporting the operation of large computer systems in a computer or data center must still be designed with the peak loads in mind. Given that a significant portion of the time, system loads can be less than full peak, an opportunity exists for cost savings if idle systems can be dynamically throttled back, slept, or shut off entirely. The paper describes two separate strategies that meet the requirements for both power conservation and system availability at HMT-ROC. The first approach, for legacy systems, is not much more than a brute force approach to power management which we call Time-Driven System Management (TDSM). The second approach, which we call Dynamic-Loading System Management (DLSM), is applicable to more current systems with 'Wake-on-LAN' capability and takes a more granular approach to the management of system resources. The paper details the rule sets that we have developed and implemented in the two approaches to system power management and discusses some results with these approaches. Copyright © 2009 John Wiley & Sons, Ltd. [source]
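
Two ingredients of such a strategy can be sketched briefly: the standard Wake-on-LAN "magic packet" (six 0xFF bytes followed by the target MAC address repeated sixteen times) and a trivial time-driven rule for deciding when idle nodes may be powered down. The node names, MAC address and low-usage window below are invented; the TDSM and DLSM rule sets described in the paper are considerably richer.

```python
# Sketch of two ingredients such a policy needs: a standard Wake-on-LAN "magic
# packet" (6 x 0xFF followed by the MAC address repeated 16 times) and a trivial
# time-driven rule. Node names, the MAC and the 22:00-06:00 window are made up; the
# paper's actual TDSM/DLSM rule sets are more elaborate.
import socket
from datetime import datetime, time

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

def nodes_to_power_off(now: datetime, idle_nodes: list[str]) -> list[str]:
    """Time-driven rule: only power idle nodes down inside the low-usage window."""
    in_window = now.time() >= time(22, 0) or now.time() < time(6, 0)
    return idle_nodes if in_window else []

if __name__ == "__main__":
    print(nodes_to_power_off(datetime(2009, 5, 1, 23, 30), ["node017", "node042"]))
    # send_wol("00:11:22:33:44:55")   # would wake a sleeping node the next morning
```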


Grids challenged by a Web 2.0 and multicore sandwich

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 3 2009
Geoffrey Fox
Abstract We discuss the application of Web 2.0 to support scientific research (e-Science) and related 'e-more or less anything' applications. Web 2.0 offers interesting technical approaches (protocols, message formats, and programming tools) to build core e-infrastructure (cyberinfrastructure) as well as many interesting services (Facebook, YouTube, Amazon S3/EC2, and Google maps) that can add value to e-infrastructure projects. We discuss why some of the original Grid goals of linking the world's computer systems may not be so relevant today and that interoperability is needed at the data and not always at the infrastructure level. Web 2.0 may also support Parallel Programming 2.0, a better parallel computing software environment motivated by the need to run commodity applications on multicore chips. A 'Grid on the chip' will be a common use of future chips with tens or hundreds of cores. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Parallel divide-and-conquer scheme for 2D Delaunay triangulation

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2006
Min-Bin Chen
Abstract This work describes a parallel divide-and-conquer Delaunay triangulation scheme. This algorithm finds the affected zone, which covers the triangulation and may be modified when two sub-block triangulations are merged. Finding the affected zone can reduce the amount of data required to be transmitted between processors. The time complexity of the divide-and-conquer scheme remains O(n log n), and the affected region can be located in O(n) time steps, where n denotes the number of points. The code was implemented with C, FORTRAN and MPI, making it portable to many computer systems. Experimental results on an IBM SP2 show that a parallel efficiency of 44–95% for general distributions can be attained on a 16-node distributed memory system. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Distributed loop-scheduling schemes for heterogeneous computer systems

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2006
Anthony T. Chronopoulos
Abstract Distributed computing systems are a viable and less expensive alternative to parallel computers. However, a serious difficulty in concurrent programming of a distributed system is how to deal with scheduling and load balancing of such a system which may consist of heterogeneous computers. Some distributed scheduling schemes suitable for parallel loops with independent iterations on heterogeneous computer clusters have been designed in the past. In this work we study self-scheduling schemes for parallel loops with independent iterations which have been applied to multiprocessor systems in the past. We extend one important scheme of this type to a distributed version suitable for heterogeneous distributed systems. We implement our new scheme on a network of computers and make performance comparisons with other existing schemes. Copyright © 2005 John Wiley & Sons, Ltd. [source]
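
As a rough illustration of the family of schemes discussed (not the specific scheme the authors extend), the sketch below hands out loop iterations in chunks sized in proportion to each worker's relative speed, subject to a minimum chunk size. The speeds and the half-of-remaining rule are purely illustrative.

```python
# Sketch of the general idea behind weighted self-scheduling of a parallel loop on a
# heterogeneous cluster: faster workers request proportionally larger chunks of the
# remaining iterations. The specific scheme extended in the paper is not reproduced
# here; the speeds and chunk rule are illustrative.
def weighted_chunks(total_iters: int, speeds: dict[str, float], min_chunk: int = 4):
    """Yield (worker, start, size) assignments until all iterations are handed out."""
    remaining, start = total_iters, 0
    order = sorted(speeds, key=speeds.get, reverse=True)
    total_speed = sum(speeds.values())
    while remaining > 0:
        for worker in order:                       # workers "request" work in turn
            if remaining <= 0:
                break
            share = speeds[worker] / total_speed
            size = max(min_chunk, int(remaining * share / 2))  # half-remaining rule
            size = min(size, remaining)
            yield worker, start, size
            start += size
            remaining -= size

for assignment in weighted_chunks(100, {"fast-node": 2.0, "slow-node": 1.0}):
    print(assignment)
```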


Using Data from Hospital Information Systems to Improve Emergency Department Care

ACADEMIC EMERGENCY MEDICINE, Issue 11 2004
Gregg Husk MD
Abstract The ubiquity of computerized hospital information systems, and of inexpensive computing power, has led to an unprecedented opportunity to use electronic data for quality improvement projects and for research. Although hospitals and emergency departments vary widely in their degree of integration of information technology into clinical operations, most have computer systems that manage emergency department registration, admission–discharge–transfer information, billing, and laboratory and radiology data. These systems are designed for specific tasks, but contain a wealth of detail that can be used to educate staff and improve the quality of care emergency physicians offer their patients. In this article, the authors describe five such projects that they have performed and use these examples as a basis for discussion of some of the methods and logistical challenges of undertaking such projects. [source]


From generative fit to generative capacity: exploring an emerging dimension of information systems design and task performance

INFORMATION SYSTEMS JOURNAL, Issue 4 2009
Michel Avital
Abstract Information systems (IS) research has long been concerned with improving task-related performance. The concept of fit is often used to explain how system design can improve performance and overall value. So far, the literature has focused mainly on performance evaluation criteria that are based on measures of task efficiency, accuracy, or productivity. However, nowadays, productivity gain is no longer the single evaluation criterion. In many instances, computer systems are expected to enhance our creativity, reveal opportunities and open new vistas of uncharted frontiers. To address this void, we introduce the concept of generativity in the context of IS design and develop two corresponding design considerations: 'generative capacity', which refers to one's ability to produce something ingenious or at least new in a particular context, and 'generative fit', which refers to the extent to which an IT artefact is conducive to evoking and enhancing that generative capacity. We offer an extended view of the concept of fit and realign the prevailing approaches to human–computer interaction design with current leading-edge applications and users' expectations. Our findings guide systems designers who aim to enhance creative work, unstructured syntheses, serendipitous discoveries, and any other form of computer-aided tasks that involve unexplored outcomes or aim to enhance our ability to go boldly where no one has gone before. In this paper, we explore the underpinnings of 'generative capacity' and argue that it should be included in the evaluation of task-related performance. Then, we briefly explore the role of fit in IS research, position 'generative fit' in that context, explain its role and impact on performance, and provide key design considerations that enhance generative fit. Finally, we demonstrate our thesis with an illustrative vignette of good generative fit, and conclude with ideas for further research. [source]


The story of socio-technical design: reflections on its successes, failures and potential

INFORMATION SYSTEMS JOURNAL, Issue 4 2006
Enid Mumford
Abstract This paper traces the history of socio-technical design, emphasizing the set of values it embraces, the people espousing its theory and the organizations that practise it. Its role in the implementation of computer systems and its impact in a number of different countries are stressed. It also shows its relationship with action research, as a humanistic set of principles aimed at increasing human knowledge while improving practice in work situations. Its evolution in the 1960s and 1970s, evidencing improved working practices and joint agreements between workers and management, is contrasted with the much harsher economic climate of the 1980s and 1990s when such principled practices, with one or two notable exceptions, gave way to lean production, downsizing and cost cutting in a global economy, partly reflecting the impact of information and communications technology. Different future scenarios are discussed where socio-technical principles might return in a different guise to humanize the potential impact of technology in a world of work where constant organizational and economic change is the norm. [source]


ParCYCLIC: finite element modelling of earthquake liquefaction response on parallel computers

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 12 2004
Jun Peng
Abstract This paper presents the computational procedures and solution strategy employed in ParCYCLIC, a parallel non-linear finite element program developed based on an existing serial code CYCLIC for the analysis of cyclic seismically-induced liquefaction problems. In ParCYCLIC, finite elements are employed within an incremental plasticity, coupled solid–fluid formulation. A constitutive model developed for simulating liquefaction-induced deformations is a main component of this analysis framework. The elements of the computational strategy, designed for distributed-memory message-passing parallel computer systems, include: (a) an automatic domain decomposer to partition the finite element mesh; (b) nodal ordering strategies to minimize storage space for the matrix coefficients; (c) an efficient scheme for the allocation of sparse matrix coefficients among the processors; and (d) a parallel sparse direct solver. Application of ParCYCLIC to simulate 3-D geotechnical experimental models is demonstrated. The computational results show excellent parallel performance and scalability of ParCYCLIC on parallel computers with a large number of processors. Copyright © 2004 John Wiley & Sons, Ltd. [source]


An eddy current integral formulation on parallel computer systems

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 9 2005
Raffaele Fresa
Abstract In this paper, we show how an eddy current volume integral formulation can be used to analyse complex 3D conducting structures, achieving a substantial benefit from the use of a parallel computer system. To this end, the different steps of the numerical algorithms are outlined in view of their parallelization, highlighting the merits and the limitations of the proposed approach. Numerical examples are developed in a parallel environment to show the effectiveness of the method. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Comparison of intensity modulated radiation therapy (IMRT) treatment techniques for nasopharyngeal carcinoma

INTERNATIONAL JOURNAL OF CANCER, Issue 2 2001
Jason Chia-Hsien Cheng M.D.
Abstract We studied target volume coverage and normal tissue sparing of serial tomotherapy intensity modulated radiation therapy (IMRT) and fixed-field IMRT for nasopharyngeal carcinoma (NPC), as compared with those of conventional beam arrangements. Twelve patients with NPC (T2-4N1-3M0) at Mallinckrodt Institute of Radiology underwent computed tomography simulation. Images were then transferred to a virtual simulation workstation computer for target contouring. Target gross tumor volumes (GTV) were primary nasopharyngeal tumor (GTVNP) with a prescription of 70 Gy, grossly enlarged cervical nodes (GTVLN) with a prescription of 70 Gy, and the uninvolved cervical lymphatics [designated as the clinical tumor volume (CTV)] with a prescription of 60 Gy. Critical organs, including the parotid gland, spinal cord, brain stem, mandible, and pituitary gland, were also delineated. Conventional beam arrangements were designed following the guidelines of Intergroup (SWOG, RTOG, ECOG) NPC Study 0099 in which the dose was prescribed to the central axis and the target volumes were aimed to receive the prescribed dose ± 10%. Similar dosimetric criteria were used to assess the target volume coverage capability of IMRT. Serial tomotherapy IMRT was planned using a 0.86-cm wide multivane collimator, while a dynamic multileaf collimator system with five equally spaced fixed gantry angles was designated for fixed-beam IMRT. The fractional volume of each critical organ that received a certain predefined threshold dose was obtained from dose-volume histograms of each organ in either the three-dimensional or IMRT treatment planning computer systems. Statistical analysis (paired t-test) was used to examine statistical significance. We found that serial tomotherapy achieved similar target volume coverage as conventional techniques (97.8 ± 2.3% vs. 98.9 ± 1.3%). The static-field IMRT technique (five equally spaced fields) was inferior, with 92.1 ± 8.6% fractional GTVNP receiving 70 Gy ± 10% dose (P < 0.05). However, GTVLN coverage of 70 Gy was significantly better with both IMRT techniques (96.1 ± 3.2%, 87.7 ± 10.6%, and 42.2 ± 21% for tomotherapy, fixed-field IMRT, and conventional therapy, respectively). CTV coverage of 60 Gy was also significantly better with the IMRT techniques. Parotid gland sparing was quantified by evaluating the fractional volume of parotid gland receiving more than 30 Gy; 66.6 ± 15%, 48.3 ± 4%, and 93 ± 10% of the parotid volume received more than 30 Gy using tomotherapy, fixed-field IMRT, and conventional therapy, respectively (P < 0.05). Fixed-field IMRT technique had the best parotid-sparing effect despite less desirable target coverage. The pituitary gland, mandible, spinal cord, and brain stem were also better spared by both IMRT techniques. These encouraging dosimetric results substantiate the theoretical advantage of inverse-planning IMRT in the management of NPC. We showed that target coverage of the primary tumor was maintained and nodal coverage was improved, as compared with conventional beam arrangements. The ability of IMRT to spare the parotid glands is exciting, and a prospective clinical study is currently underway at our institution to address the optimal parotid dose-volume that needs to be spared to prevent xerostomia and to improve the quality of life in patients with NPC. © 2001 Wiley-Liss, Inc. [source]
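
The comparison rests on a simple dose-volume metric: the fraction of an organ's volume receiving more than a threshold dose (for example, parotid volume above 30 Gy). The sketch below computes that fraction from hypothetical per-voxel dose samples rather than from a treatment-planning system.

```python
# Minimal sketch of the dose-volume metric used in the comparison: the fraction of an
# organ's volume receiving more than a threshold dose (e.g. parotid volume above
# 30 Gy), computed here from hypothetical per-voxel dose samples rather than a real
# treatment-planning system.
def fractional_volume_above(voxel_doses_gy: list[float], threshold_gy: float) -> float:
    """Fraction of voxels (assumed equal volume) whose dose exceeds the threshold."""
    if not voxel_doses_gy:
        raise ValueError("empty dose list")
    above = sum(1 for d in voxel_doses_gy if d > threshold_gy)
    return above / len(voxel_doses_gy)

parotid_doses = [12.0, 25.5, 31.0, 44.2, 28.9, 35.1, 18.3, 52.0]   # made-up samples
print(f"V30 = {fractional_volume_above(parotid_doses, 30.0):.0%}")  # V30 = 50%
```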


Phylogeographic information systems: putting the geography into phylogeography

JOURNAL OF BIOGEOGRAPHY, Issue 11 2006
David M. Kidd
Abstract Phylogeography is concerned with the observation, description and analysis of the spatial distribution of genotypes and the inference of historical scenarios. In the past, the discipline has concentrated on the historical 'phylo-' component through the utilization of phylogenetic analyses. In contrast, the spatial '-geographic' component is not a prominent feature of many existing phylogenetic approaches and has often been dealt with in a relatively naive fashion. Recently, there has been a resurgence of interest in the importance of geography in evolutionary biology. Thus, we believe that it is time to assess how geographic information is currently handled and incorporated into phylogeographical analysis. Geographical information systems (GISs) are computer systems that facilitate the integration and interrelation of different geographically referenced data sets; however, so far they have been little utilized by the phylogeographical community to manage, analyse and disseminate phylogeographical data. However, the growth in individual studies and the resurgence of interest in the geographical components of genetic pattern and biodiversity should stimulate further uptake. Some advantages of GIS are the integration of disparate data sets via georeferencing, dynamic data base design and update, visualization tools and data mining. An important step in linking GIS to existing phylogeographical and historical biogeographical analysis software and the dissemination of spatial phylogenies will be the establishment of 'GeoPhylo' data standards. We hope that this paper will further stimulate the resurgence of geography as an equal partner in the symbiosis that is phylogeography as well as advertise some benefits that can be obtained from the application of GIS practices and technologies. [source]


Rigid-body dynamics in the isothermal-isobaric ensemble: A test on the accuracy and computational efficiency

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 8 2003
Wataru Shinoda
Abstract We have developed a time-reversible rigid-body (rRB) molecular dynamics algorithm in the isothermal-isobaric (NPT) ensemble. The algorithm is an extension of rigid-body dynamics [Matubayasi and Nakahara, J Chem Phys 1999, 110, 3291] to the NPT ensemble on the basis of non-Hamiltonian statistical mechanics [Martyna, G. J. et al., J Chem Phys 1994, 101, 4177]. A series of MD simulations of water as well as fully hydrated lipid bilayer systems have been undertaken to investigate the accuracy and efficiency of the algorithm. The rRB algorithm was shown to be superior to the state-of-the-art constraint-dynamics algorithm SHAKE/RATTLE/ROLL, with respect to computational efficiency. However, it was revealed that both algorithms produced accurate trajectories of molecules in the NPT as well as NVT ensembles, as long as a reasonably short time step was used. A couple of multiple time-step (MTS) integration schemes were also examined. The advantage of the rRB algorithm for computational efficiency increased when the MD simulation was carried out using MTS on parallel processing computer systems; total computer time for MTS-MD of a lipid bilayer using 64 processors was reduced by about 40% using rRB instead of SHAKE/RATTLE/ROLL. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 920–930, 2003 [source]


Multivariate Bayesian regression applied to the problem of network security

JOURNAL OF FORECASTING, Issue 8 2002
Kostas Triantafyllopoulos
Abstract An Erratum has been published for this article in Journal of Forecasting 23(6): 461 (2004). This paper examines the problem of intrusion in computer systems that causes major breaches or allows unauthorized information manipulation. A new intrusion-detection system using Bayesian multivariate regression is proposed to predict such unauthorized invasions before they occur and to take further action. We develop and use a multivariate dynamic linear model based on a unique approach leaving the unknown observational variance matrix distribution unspecified. The result is simultaneous forecasting free of the Wishart limitations, which proves faster and more reliable. Our proposed system uses software agent technology. The distributed software agent environment places an agent in each of the computer system workstations. The agent environment creates a user profile for each user. Every user has his or her profile monitored by the agent system and, according to our statistical model, prediction is possible. Implementation aspects are discussed using real data and an assessment of the model is provided. Copyright © 2002 John Wiley & Sons, Ltd. [source]
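
The flavour of model-based monitoring can be conveyed with a much simpler stand-in than the paper's multivariate model: a univariate local-level dynamic linear model with Kalman-style updates that flags observations falling far outside the one-step-ahead forecast. The variances, threshold and activity series below are illustrative, and the Wishart-free multivariate treatment of the paper is not reproduced.

```python
# Highly simplified, univariate stand-in for the paper's multivariate dynamic linear
# model: a local-level model with Kalman-style updates that flags an observation as
# suspicious when it falls far outside the one-step-ahead forecast. All variances and
# the threshold are illustrative.
def monitor(series, obs_var=1.0, state_var=0.1, threshold=3.0):
    level, level_var = series[0], 1.0
    alerts = []
    for t, y in enumerate(series[1:], start=1):
        # one-step-ahead forecast distribution
        f_mean = level
        f_var = level_var + state_var + obs_var
        if abs(y - f_mean) > threshold * f_var ** 0.5:
            alerts.append((t, y))       # possible intrusion-like anomaly
            continue                    # don't let the outlier corrupt the state
        # Kalman update of the level estimate
        gain = (level_var + state_var) / f_var
        level = level + gain * (y - f_mean)
        level_var = (1 - gain) * (level_var + state_var)
    return alerts

cpu_usage = [5, 6, 5, 7, 6, 5, 6, 48, 6, 5]       # made-up per-hour activity metric
print(monitor(cpu_usage))                          # [(7, 48)]
```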


Constructing robust crew schedules with bicriteria optimization

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 3 2002
Matthias Ehrgott
Abstract Optimization-based computer systems are used by many airlines to solve crew planning problems by constructing minimal cost tours of duty. However, today airlines not only require cost-effective solutions, but are also very interested in robust solutions. A more robust solution is understood to be one where disruptions in the schedule (due to delays) are less likely to be propagated into the future, causing delays of subsequent flights. Current scheduling systems based solely on cost do not automatically provide robust solutions. These considerations lead to a multiobjective framework, as the maximization of robustness will be in conflict with the minimization of cost. For example, a crew changing aircraft within a duty period is discouraged if inadequate ground time is provided. We develop a bicriteria optimization framework to generate Pareto optimal schedules for the domestic airline. A Pareto optimal schedule is one which does not allow an improvement in cost and robustness at the same time. We developed a method to solve the bicriteria problem, implemented it and tested it with actual airline data. Our results show that considerable gain in robustness can be achieved with a small increase in cost. The additional cost is mainly due to an increase in overnights, which allows for a reduction of the number of aircraft changes. Copyright © 2003 John Wiley & Sons, Ltd. [source]
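
The Pareto notion used here is easy to state in code: a candidate schedule survives only if no other candidate is at least as good on both criteria and strictly better on one. The candidate (cost, robustness) values in the sketch below are made up, and generating the candidates, which is the actual crew-pairing optimization, is not shown.

```python
# Sketch of the Pareto notion the paper optimizes over: keep only those candidate
# schedules for which no other candidate is both cheaper and more robust. The
# candidates below are made up; producing them (the crew-pairing optimization itself)
# is the hard part and is not shown.
def pareto_front(schedules):
    """schedules: list of (name, cost, robustness); higher robustness is better."""
    front = []
    for name, cost, robust in schedules:
        dominated = any(c <= cost and r >= robust and (c < cost or r > robust)
                        for _, c, r in schedules)
        if not dominated:
            front.append((name, cost, robust))
    return front

candidates = [("A", 100_000, 0.70), ("B", 103_000, 0.85),
              ("C", 108_000, 0.84), ("D", 101_000, 0.80)]
print(pareto_front(candidates))   # A, B and D survive; C is dominated by B
```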


Word Sense Disambiguation: An Overview

LINGUISTICS & LANGUAGE COMPASS (ELECTRONIC), Issue 2 2009
Diana McCarthy
Word sense disambiguation is a subfield of computational linguistics in which computer systems are designed to determine the appropriate meaning of a word as it appears in the linguistic context. This article provides a survey of what has been done in this area: the ways that word meaning can be represented in the computer, the approaches taken by systems, how performance is evaluated and an overview of the intended applications that might benefit from this technology. One of the major issues has been, and still remains, that of finding an appropriate computational representation of word meaning as this is fundamental to the performance and utility of systems. [source]
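
As one concrete example of a classic knowledge-based approach covered by such surveys (not necessarily this article's own focus), a simplified Lesk-style method picks the sense whose gloss shares the most words with the surrounding context. The two-sense inventory for "bank" below is hand-made; real systems draw on resources such as WordNet and use far stronger models.

```python
# Tiny illustration of one classic WSD idea (a simplified Lesk-style overlap measure):
# pick the sense whose dictionary gloss shares the most words with the surrounding
# context. The two-sense inventory for "bank" is hand-made for the example.
def lesk_like(context: str, sense_glosses: dict[str, str]) -> str:
    context_words = set(context.lower().split())
    def overlap(gloss: str) -> int:
        return len(context_words & set(gloss.lower().split()))
    return max(sense_glosses, key=lambda s: overlap(sense_glosses[s]))

senses = {
    "bank/financial": "an institution that accepts deposits and lends money",
    "bank/river": "the sloping land alongside a river or stream",
}
print(lesk_like("she sat on the bank of the river watching the stream", senses))
# -> bank/river
```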


Web-based Health Survey Systems in Outcome Assessment and Management of Pain

PAIN MEDICINE, Issue 2007
Vinod K. Podichetty MD
ABSTRACT Pain is a complex phenomenon lacking a well-defined paradigm for diagnosis and management across medical disciplines. This is due in part to inconsistencies in the assessment of pain as well as in the measurement of related social and psychological states. Efforts to evaluate and measure pain through objective tests have been hindered by challenges such as methodological differences in data acquisition, and the lack of common, universally accepted information systems. Physicians and hospital administrators have expressed mixed reactions to the costs that inevitably accompany advances in medical technology. Nonetheless, computer systems are currently being developed for use in the quantitative assessment and management of pain, which can advance our understanding of the public health impact of pain, improve the care individual patients receive, and educate providers. The description of an interdisciplinary, integrated, health survey system illustrates the approach and highlights the advantages of using information technology in pain evaluation and management. [source]


The prescribed duration algorithm: utilising 'free text' from multiple primary care electronic systems

PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 9 2010
Caroline J. Brooks
Abstract Purpose To develop and test an algorithm that translates total dose and daily regimen, inputted as 'free text' on a prescription, into numerical values to calculate the prescribed treatment duration. Method The algorithm was developed using antibiotic prescriptions (n = 711,714) from multiple primary care computer systems. For validation, the prescribed treatment duration of an independent sample of antibiotic scripts was calculated in two ways: (a) by the computer algorithm, and (b) by manual review by a researcher blinded to the results of (a). The outputs of the two methods were compared and the level of agreement assessed, using confidence intervals for differences in proportions. This was repeated on a sample of antidepressant scripts to test the generalisability of the algorithm. Results For the antibiotic prescriptions, the algorithm processed 98.5% with an accuracy of 99.8% and the manual review processed 98.5% with 98.9% accuracy. The differences between these proportions are 0.0% (95% CI of −0.9, 0.9%) and 1.0% (95% CI of −0.1, 2.3%), respectively. For the antidepressant prescriptions, the algorithm processed 91.5% with an accuracy of 96.6% compared to the manual review with 96.4% processed and 99.8% accuracy; the differences between these proportions are 4.9% (95% CI of 2.0, 8.0%) and 3.2% (95% CI of 1.6, 5.3%), respectively. Conclusion The algorithm proved to be applicable and efficient for assessing prescribed duration, with sensitivity and specificity values close to the manual review, but with the added advantage that the computer can process a large volume of scripts rapidly and automatically. Copyright © 2010 John Wiley & Sons, Ltd. [source]
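
The idea behind such an algorithm can be sketched with a couple of regular expressions: extract a total quantity and a daily frequency from the dosage free text and divide to obtain a duration in days. The patterns and example strings below are invented and cover only a fraction of real prescribing language; they are not the published algorithm.

```python
# Illustrative sketch only of the idea behind such an algorithm: pull a total quantity
# and a daily frequency out of the dosage free text and divide to get a duration in
# days. The regular expressions and example strings are made up; the published
# algorithm handles far more phrasing variants and edge cases.
import re

WORD_FREQ = {"once": 1, "twice": 2, "three times": 3, "four times": 4}

def prescribed_duration_days(quantity_text: str, regimen_text: str) -> float | None:
    qty_match = re.search(r"(\d+)", quantity_text)
    if not qty_match:
        return None
    quantity = int(qty_match.group(1))

    regimen = regimen_text.lower()
    per_day = None
    m = re.search(r"(\d+)\s*(?:times?\s*)?(?:a|per)\s*day", regimen)
    if m:
        per_day = int(m.group(1))
    else:
        for phrase, n in WORD_FREQ.items():
            if phrase in regimen and ("daily" in regimen or "a day" in regimen):
                per_day = n
                break
    # "take N ... X times a day" style: multiply units per dose by doses per day
    dose_match = re.search(r"take\s+(\d+)", regimen)
    units_per_dose = int(dose_match.group(1)) if dose_match else 1
    if per_day is None:
        return None
    return quantity / (units_per_dose * per_day)

print(prescribed_duration_days("28 capsules", "take 1 capsule twice daily"))   # 14.0
print(prescribed_duration_days("21 tablets", "take 1 tablet 3 times a day"))   # 7.0
```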


Prospective cohort study of adverse events monitored by hospital pharmacists

PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 2 2001
Angela Emerson BPharm, MRPharmS
Abstract Purpose To examine the feasibility of pharmacist-led intensive hospital monitoring of adverse events (AEs) associated with newly marketed drugs. Subjects/setting 303 patients admitted to Southampton Hospitals who were prescribed selected newly marketed drugs during their inpatient stay in 1998. Methods Prospective observational study. Patients were identified from computerized pharmacy records, clinical pharmacist ward rounds, dispensary records or via nursing staff. The pharmacist reviewed medical notes and recorded AEs, suspected adverse drug reactions (ADRs) and reasons for stopping drugs. Outcomes Incidence of AEs, ADRs; proportionate agreement between the physician's and pharmacist's event recording. Results 303 patients were monitored. Of the patients taking newly marketed drugs 92% were identifiable using pharmacy computer systems and pharmacist ward visits. There were 21 (7%) suspected ADRs detected during this pilot study. The types of adverse events detected were broadly similar to those identified by general practice-based prescription event monitoring. However, biochemical changes featured more frequently than in general practice. Differences between adverse events recorded by pharmacist and physician were systematic and attributed to differences in event coding. Conclusion Pharmacist-led monitoring in a typical NHS hospital setting was effective at detecting ADRs in newly marketed drugs. However, this effort might have been substantially less time-consuming and more effective were electronic patient records (EPRs) available. Pharmacy computer systems are not designed to be patient focused and are therefore unable to identify patients taking newly marketed drugs. It is argued that future EPR and computerised patient-specific prescribing systems should be designed to capture this data in the same way as some US systems are currently able to do. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Fault-tolerant procedures for redundant computer systems

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 1 2009
Refik Samet
Abstract Real-time computer systems deployed in life-critical control applications must be designed to meet stringent reliability specifications. The minimum acceptable degree of reliability for systems of this type is '7 nines', which is not generally achieved. This paper aims at contributing to the achievement of that degree of reliability. To this end, this paper proposes a classification scheme of the fault-tolerant procedures for redundant computer systems (RCSs). The proposed classification scheme is developed on the basis of the number of counteracted fault types. Table I is created to relate the characteristics of the RCSs to the characteristics of the fault-tolerant procedures. A selection algorithm is proposed, which allows designers to select the optimal type of fault-tolerant procedures according to the system characteristics and capabilities. The fault-tolerant procedure, which is selected by this algorithm, provides the required degree of reliability for a given RCS. According to the proposed graphical model, only a part of the fault-tolerant procedure is executed depending on the absence or presence (type and sort) of faults. The proposed methods allow designers to counteract Byzantine and non-Byzantine fault types during degradation of RCSs from N to 3, and only the non-Byzantine fault type during degradation from 3 to 1 with optimal checkpoint time period. Copyright © 2008 John Wiley & Sons, Ltd. [source]
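
The most basic building block of a redundant computer system can be shown in a few lines: majority voting over the outputs of N replicated channels, which masks up to floor((N-1)/2) channels delivering wrong values. The paper's classification scheme, Table I, selection algorithm and checkpointing strategy go well beyond this sketch and are not reproduced here.

```python
# Toy sketch of the most basic fault-tolerance ingredient in a redundant system:
# majority voting over the outputs of N replicated channels, which masks up to
# floor((N-1)/2) channels delivering wrong values.
from collections import Counter

def vote(channel_outputs: list[float], tolerance: float = 0.0):
    """Return the majority value (agreement within tolerance) or raise if no majority."""
    counts = Counter()
    for value in channel_outputs:
        # group values that agree within the tolerance
        matched = next((v for v in counts if abs(v - value) <= tolerance), None)
        counts[matched if matched is not None else value] += 1
    value, n = counts.most_common(1)[0]
    if n <= len(channel_outputs) // 2:
        raise RuntimeError("no majority: fault assumption violated")
    return value

print(vote([42.0, 42.0, 41.9, 7.3], tolerance=0.2))   # 42.0 despite one faulty channel
```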


Assessing computer systems against 21 CFR Part 11: developing a checklist

QUALITY ASSURANCE JOURNAL, Issue 2 2002
Angela Carter
Abstract A 21 CFR Part 11 Checklist can satisfy many business, process, and educational needs of companies that use computer systems that must comply with Part 11. This article identifies a strategy for creating your own Part 11 Checklist. Suggestions are presented for: analyzing and sorting the regulations into manageable units; organizing the Checklist; adding supportive information to help users understand and navigate through the Checklist; and including enhancement features such as information mapping and special software. Finally, a sample excerpt from our own Checklist is provided. Copyright © 2002 John Wiley & Sons, Ltd. [source]