Error Reduction (error + reduction)


Selected Abstracts


Clinical Information Systems: Instant Ubiquitous Clinical Data for Error Reduction and Improved Clinical Outcomes

ACADEMIC EMERGENCY MEDICINE, Issue 11 2004
Craig F. Feied MD
Abstract Immediate access to existing clinical information is inadequate in current medical practice; lack of access to existing information causes or contributes to many classes of medical error, including diagnostic and treatment error. A review of the literature finds ample evidence documenting the problems caused by data that are missing or unavailable, but little evidence to support one proposed solution over another. A primary recommendation of the Consensus Committee is that hospitals and departments should adopt systems that provide fast, ubiquitous, and unified access to all types of existing data. Additional recommendations cover a variety of related functions and operational concepts, from backups and biosurveillance to speed, training, and usability. [source]


Echo combination to reduce proton resonance frequency (PRF) thermometry errors from fat

JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 3 2008
Viola Rieke PhD
Abstract Purpose: To validate echo combination as a means to reduce errors caused by fat in temperature measurements with the proton resonance frequency (PRF) shift method. Materials and Methods: Computer simulations were performed to study the behavior of temperature measurement errors introduced by fat as a function of echo time. Error reduction by combining temperature images acquired at different echo times was investigated. For experimental verification, three echoes were acquired in a refocused gradient echo acquisition. Temperature images were reconstructed with the PRF shift method for each of the three echoes and then combined in a weighted average. Temperature measurement errors in the combined image and in the individual echoes were compared for pure water and different fractions of fat in a computer simulation, and for a phantom containing a homogeneous mixture with 20% fat in an MR experiment. Results: In both the simulation and the MR measurement, the presence of fat caused severe temperature underestimation or overestimation in the individual echoes. The errors were substantially reduced after echo combination; residual errors were about 0.3°C for 10% fat and 1°C for 20% fat. Conclusion: Echo combination substantially reduces temperature measurement errors caused by small fractions of fat, eliminating the need for fat suppression in tissues such as the liver. J. Magn. Reson. Imaging 2007. © 2007 Wiley-Liss, Inc. [source]
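
The combination step lends itself to a compact illustration. The Python sketch below forms a PRF temperature map for each echo and merges them in a weighted average; the constants and the TE-times-magnitude weighting are illustrative assumptions, not the weighting published in the paper.

```python
import numpy as np

GAMMA = 42.576e6   # proton gyromagnetic ratio (Hz/T)
ALPHA = -0.01e-6   # PRF thermal coefficient (-0.01 ppm/degC)
B0 = 1.5           # main field strength (T), assumed

def prf_temperature(dphase, te):
    """Temperature change from a phase-difference image at echo time te (s)."""
    return dphase / (2 * np.pi * GAMMA * ALPHA * B0 * te)

def combine_echoes(dphases, tes, mags):
    """Merge per-echo PRF temperature maps in a weighted average.

    Weighting each echo by TE * |signal| favours echoes with high
    phase SNR; it is an illustrative choice, not necessarily the
    paper's weighting scheme.
    """
    temps = [prf_temperature(dp, te) for dp, te in zip(dphases, tes)]
    weights = [te * np.abs(m) for te, m in zip(tes, mags)]
    return sum(w * t for w, t in zip(weights, temps)) / sum(weights)
```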


Random error reduction in analytic hierarchies: a comparison of holistic and decompositional decision strategies

JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 3 2001
Osvaldo F. Morera
Abstract The principle of 'divide and conquer' (DAC) suggests that complex decision problems should be decomposed into smaller, more manageable parts, and that these parts should be logically aggregated to derive an overall value for each alternative. Decompositional procedures have been contrasted with holistic evaluations that require decision makers to simultaneously consider all the relevant attributes of the alternatives under consideration (Fischer, 1977). One area where decompositional procedures have a clear advantage over holistic procedures is in the reduction of random error (Ravinder, 1992; Ravinder and Kleinmuntz, 1991; Kleinmuntz, 1990). Adopting the framework originally developed by Ravinder and colleagues, this paper details the results of a study of the random error variances associated with another popular multi-criteria decision-making technique, the Analytic Hierarchy Process (AHP; Saaty, 1977, 1980), as well as the random error variances of a holistic version of the Analytic Hierarchy Process (Jensen, 1983). In addition, data concerning various psychometric properties (e.g., convergent validity and temporal stability) and values of AHP inconsistency are reported for both the decompositional and holistic evaluations. The results of the study show that the Ravinder and Kleinmuntz (1991) error-propagation framework extends to the AHP, and that decompositional AHP judgments are more consistent than their holistic counterparts. Copyright © 2001 John Wiley & Sons, Ltd. [source]
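
For readers unfamiliar with the decompositional mechanics under study, a minimal sketch of the standard AHP step follows: priorities are the principal eigenvector of a reciprocal pairwise-comparison matrix, and Saaty's consistency ratio quantifies the kind of inconsistency the paper measures. The example matrix is hypothetical.

```python
import numpy as np

# Saaty's random consistency indices for matrix sizes 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priorities(A):
    """Priority weights and consistency ratio for a positive,
    reciprocal pairwise-comparison matrix A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                        # principal eigenvector, normalized
    n = A.shape[0]
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)        # consistency index
    cr = ci / RI[n]                     # consistency ratio (< 0.10 is acceptable)
    return w, cr

# Hypothetical example: three criteria compared on Saaty's 1-9 scale
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_priorities(A)
```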


Efficiency-based h- and hp-refinement strategies for finite element methods

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 2-3 2008
H. De Sterck
Abstract Two efficiency-based grid refinement strategies are investigated for adaptive finite element solution of partial differential equations. In each refinement step, the elements are ordered in terms of decreasing local error, and the optimal fraction of elements to be refined is determined based on efficiency measures that take both error reduction and work into account. The goal is to reach a pre-specified bound on the global error with a minimal amount of work. Two efficiency measures are discussed, 'work times error' and 'accuracy per computational cost'. The resulting refinement strategies are first compared for a one-dimensional (1D) model problem that may have a singularity. Modified versions of the efficiency strategies are proposed for the singular case, and the resulting adaptive methods are compared with a threshold-based refinement strategy. Next, the efficiency strategies are applied to the case of hp-refinement for the 1D model problem. The use of the efficiency-based refinement strategies is then explored for problems with spatial dimension greater than one. The 'work times error' strategy is inefficient when the spatial dimension, d, is larger than the finite element order, p, but the 'accuracy per computational cost' strategy provides an efficient refinement mechanism for any combination of d and p. Copyright © 2008 John Wiley & Sons, Ltd. [source]
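
To make the selection rule concrete, here is a hedged sketch of the 'work times error' idea: elements are sorted by local error indicator, and the refined fraction is chosen to minimize predicted global error times predicted problem size. The error-decay factor gamma and the work model are illustrative assumptions; the paper's measures are more detailed.

```python
import numpy as np

def choose_refinement_set(local_errors, gamma=0.5, children=2):
    """Pick elements to refine by minimizing a 'work times error'
    measure: predicted global error after refinement multiplied by
    the predicted number of elements.

    Assumptions (illustrative, not the paper's exact models):
    refining an element multiplies its error indicator by gamma
    and replaces it with `children` elements.
    """
    err = np.asarray(local_errors, float)
    n = len(err)
    order = np.argsort(err)[::-1]            # largest local error first
    sorted_sq = err[order] ** 2
    total_sq = sorted_sq.sum()

    best_k, best_obj = 1, np.inf
    for k in range(1, n + 1):
        # squared global error if the k worst elements are refined
        new_sq = total_sq - (1 - gamma**2) * sorted_sq[:k].sum()
        size = n - k + k * children          # elements after refinement
        obj = np.sqrt(new_sq) * size         # 'work times error'
        if obj < best_obj:
            best_obj, best_k = obj, k
    return order[:best_k]                    # indices of elements to refine
```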


The value of observations. II: The value of observations located in singular-vector-based target areas

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 628 2007
Abstract Data-assimilation experiments have been run in seven different configurations for two seasons to assess the value of observations taken in target regions identified either using singular vectors (SVs) or randomly, and located over the Pacific or the Atlantic Oceans. The value has been measured by the relative short-range forecast error reduction in downstream areas, specifically a North American region for observations taken in the Pacific Ocean, and a European region for observations taken in the Atlantic Ocean. Overall, results have indicated (1) that observations taken in SV-target areas are on average more valuable than observations taken in randomly selected areas, (2) that it is important that the daily set of singular vectors is used to compute the target areas, and (3) that the value of targeted observations depends on the region, the season and the baseline observing system. If the baseline observing system is data-void over the ocean, then the average value of observations taken in SV-target areas is very high. Considering, for example, winter 2004, SV-targeted observations over the Pacific (Atlantic) reduce the day-2 forecast error of 500 hPa geopotential height in the verification region by 27.5% (19.1%), compared to 15.7% (14.9%) for observations taken in random areas. By contrast, if the baseline observing system is data-rich over the ocean, then the average value of observations taken in SV-target areas is rather small. Considering, for example, winter 2004, it has been estimated that adding SV-targeted observations over the Pacific (Atlantic) would reduce, on average, the day-2 forecast error in the verification region by 4.0% (2.0%), compared to 0.5% (1.7%) for observations in random areas. These average results have been confirmed by single-case investigations and by a careful examination of time series of forecast errors. These results indicate that more accurate assimilation systems that can exploit the potential value of localized observations are needed to increase the average return of investments in targeting field experiments. Copyright © 2007 Royal Meteorological Society [source]
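
The headline percentages are relative error reductions against a baseline run. A one-line sketch of the metric follows; the RMS-error values in the usage example are invented purely to reproduce one quoted figure, not taken from the paper.

```python
def relative_error_reduction(err_baseline, err_experiment):
    """Percentage short-range forecast error reduction relative to a
    baseline, e.g. RMS day-2 500 hPa geopotential-height error over
    a verification region."""
    return 100.0 * (err_baseline - err_experiment) / err_baseline

# Invented illustration: a baseline RMS error of 40 m reduced to 29 m
# corresponds to the 27.5% reduction quoted for Pacific SV targeting.
print(relative_error_reduction(40.0, 29.0))  # 27.5
```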


Nonlinear parametric predictive control. Application to a continuous stirred tank reactor

ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 6 2009
Abstract This paper presents a nonlinear model-based controller, built on the ideas of parametric predictive control, applied to a continuous stirred tank reactor (CSTR) process unit. The controller design aims at avoiding the complexity of implementation and long computational times associated with conventional nonlinear model predictive control (NMPC), while retaining its main advantage of taking into account the process nonlinearities that are relevant for control. The design of the parametric predictive controller is based on a rather simplified process model whose parameters are instrumental in determining the changes to the manipulated variables required for error reduction. The nonlinear controller is easy to tune and can operate successfully over a wide range of operating conditions. The use of an estimator of unmeasured disturbances and process-model mismatch further enhances the behavior of the controller. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
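
The flavor of such a controller can be shown with a hedged sketch: a first-order parametric model and a one-step predictive law that drives the tracking error toward zero along a reference trajectory. The model form and the parameters K, tau, h, and lam are illustrative assumptions, not the paper's CSTR model.

```python
import numpy as np

def ppc_control(y, y_sp, K, tau, h, lam):
    """One-step parametric predictive control law for a first-order
    model  dy/dt = (-y + K*u) / tau  (an illustrative stand-in for
    the paper's simplified process model).

    y    current output             y_sp  setpoint
    K    model gain (assumed)       tau   model time constant (assumed)
    h    prediction horizon (s)     lam   desired error-decay constant (s)
    """
    # Reference trajectory: where the error should be after h seconds.
    target = y_sp - (y_sp - y) * np.exp(-h / lam)
    # Invert the model prediction y(t+h) = y*a + K*u*(1-a) for u.
    a = np.exp(-h / tau)
    return (target - y * a) / (K * (1.0 - a))
```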


Revisiting the Emergency Medical Services for Children Research Agenda: Priorities for Multicenter Research in Pediatric Emergency Care

ACADEMIC EMERGENCY MEDICINE, Issue 4 2008
Steven Zane Miller MD
Abstract Objectives: To describe the creation of an Emergency Medical Services for Children (EMSC) research agenda specific to multicenter research. Given the need for multicenter research in EMSC and the unique opportunity afforded by the creation of the Pediatric Emergency Care Applied Research Network (PECARN), the authors revisited existing EMSC research agendas to develop a PECARN-specific agenda, seeking to prioritize PECARN research efforts and to guide investigators planning to conduct research in PECARN. Methods: The authors used the Nominal Group Process and the Hanlon Process of Prioritization (HPP), recognized research prioritization methods that incorporate both quantitative and qualitative data collection in group settings. The formula used to generate the final priority list heavily weighted practicality of conduct in a multicenter research network. By scoring size, seriousness, and practicality measures for each health priority individually and weighting them relative to each other, PECARN was able to rank the candidate topics. Results: The prioritization process resulted in a ranked list of 16 multicenter EMSC research topics. Top among these priorities were (1) respiratory illnesses/asthma, (2) prediction rules for high-stakes/low-likelihood diseases, (3) medication error reduction, (4) injury prevention, and (5) urgency and acuity scaling. Conclusions: The PECARN prioritization process identified high-priority EMSC research topics specific to multicenter research. PECARN has the capacity to answer long-standing, important clinical controversies in EMSC, largely due to its ability to conduct randomized controlled trials and observational studies on a large scale. [source]
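
As a rough illustration of a Hanlon-style scoring step, the sketch below ranks topics by (size + seriousness) scaled by practicality, with an exponent standing in for the heavy practicality weighting the abstract mentions. The formula, weight, and component scores are all hypothetical, not the committee's.

```python
def priority_score(size, seriousness, practicality, w_practical=2.0):
    """Hanlon-style priority from component scores on a 0-10 scale.
    The practicality exponent is a hypothetical stand-in for the
    heavy weighting of multicenter practicality."""
    return (size + seriousness) * practicality ** w_practical

# Hypothetical component scores for three of the ranked topics.
topics = {
    "respiratory illnesses/asthma": (9, 8, 9),
    "medication error reduction": (7, 9, 8),
    "injury prevention": (8, 8, 7),
}
for name in sorted(topics, key=lambda t: priority_score(*topics[t]),
                   reverse=True):
    print(name, priority_score(*topics[name]))
```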


Optimizing precision of rotating compensator ellipsometry

PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 5 2008
L. Broch
Abstract We present a method especially adapted to the RCE ellipsometer in the PCSA configuration (Polarizer, Compensator, Sample, Analyzer) when measurements with a high degree of accuracy are required. Optimum precision can be achieved for any sample by an adjustment of the azimuths of the analyzer and the polarizer. This new tracking method is used to minimize random errors, and the appropriate analyzer and polarizer positions for a given sample are provided. In these cases, the variances of tan Ψ and tan Δ can be reduced by a factor of up to five. Experimental verification of the error reductions is also presented. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Issues in targeted observing

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 613 2005
(Invited paper for the Q. J. R. Meteorol. Soc.)
Abstract This paper summarizes successes and limitations of targeted observing field programmes, from the Fronts and Atlantic Storm-Track Experiment in 1997 through recent programmes targeting winter storms and tropical cyclones. These field programmes have produced average reductions in short-range forecast errors of about 10 per cent over regional verification areas, and maximum forecast error reductions as large as 50 per cent in certain cases. The majority of targeting cases investigated so far involve sets of dropsondes and other observation data that provide partial coverage of target areas. The primary scientific challenges for targeting include the refinement of objective methods that can identify optimal times and locations for targeted observations, as well as identify the specific types of satellite and in situ measurements that are required for the improvement of numerical weather forecasts. The most advanced targeting procedures at present include the ensemble transform Kalman filter, Hessian singular vectors, and observation-space targeting using the adjoint of a variational data assimilation procedure. Targeted observing remains an active research topic in numerical weather prediction, with plans for continued refinement of objective targeting procedures, and field tests of new satellite and in situ observing systems. Copyright © 2005 Royal Meteorological Society [source]