Residual Error (residual + error)


Selected Abstracts


Echo combination to reduce proton resonance frequency (PRF) thermometry errors from fat

JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 3 2008
Viola Rieke PhD
Abstract Purpose To validate echo combination as a means to reduce errors caused by fat in temperature measurements with the proton resonance frequency (PRF) shift method. Materials and Methods Computer simulations were performed to study the behavior of temperature measurement errors introduced by fat as a function of echo time. Error reduction by combining temperature images acquired at different echo times was investigated. For experimental verification, three echoes were acquired in a refocused gradient echo acquisition. Temperature images were reconstructed with the PRF shift method for the three echoes and then combined in a weighted average. Temperature measurement errors in the combined image and the individual echoes were compared for pure water and different fractions of fat in a computer simulation, and for a phantom containing a homogeneous mixture with 20% fat in an MR experiment. Results In both the simulation and the MR measurement, the presence of fat caused severe temperature underestimation or overestimation in the individual echoes. The errors were substantially reduced after echo combination. Residual errors were about 0.3°C for 10% fat and 1°C for 20% fat. Conclusion Echo combination substantially reduces temperature measurement errors caused by small fractions of fat, eliminating the need for fat suppression in tissues such as the liver. J. Magn. Reson. Imaging 2007. © 2007 Wiley-Liss, Inc. [source]
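The weighted echo combination described above can be sketched as follows. The TE-proportional weighting, the field strength, and the PRF thermal coefficient are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Hypothetical sketch of PRF-shift thermometry with echo combination.
# Constants and the weighting scheme are illustrative only.
GAMMA = 42.577e6      # gyromagnetic ratio of 1H, Hz/T
ALPHA = -0.01e-6      # assumed PRF thermal coefficient, ppm/degC
B0 = 1.5              # assumed field strength, T

def prf_temperature(dphase, te):
    """Temperature change from the phase difference of one echo (TE in s)."""
    return dphase / (2 * np.pi * GAMMA * ALPHA * B0 * te)

def combine_echoes(dphases, tes):
    """Weighted average of per-echo temperature maps.

    Longer echoes accumulate more temperature-induced phase, so we weight
    by TE here; the paper's exact weighting may differ."""
    temps = np.array([prf_temperature(dp, te) for dp, te in zip(dphases, tes)])
    w = np.array(tes, dtype=float)
    # Contract over the echo axis, so image-shaped inputs also work.
    return np.tensordot(w / w.sum(), temps, axes=1)
```

With noiseless synthetic phases, each echo and the combination recover the same temperature; the benefit shown in the paper appears when fat perturbs the per-echo phases differently.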


On the reliability of a dental OSCE, using SEM: effect of different days

EUROPEAN JOURNAL OF DENTAL EDUCATION, Issue 3 2008
M. Schoonheim-Klein
Abstract Aim: The first aim was to study the reliability of a dental objective structured clinical examination (OSCE) administered over multiple days; the second was to assess the number of test stations required for a sufficiently reliable decision in three score interpretation perspectives of a dental OSCE administered over multiple days. Materials and methods: In four OSCE administrations, 463 students in 2005 and 2006 took the summative OSCE after a dental course in comprehensive dentistry. Each OSCE had 16–18 5-min stations (scores 1–10) and was administered on four different days of 1 week. ANOVA was used to test for examinee performance variation across days. Generalizability theory was used for reliability analyses. Reliability was studied from three interpretation perspectives: for relative (norm) decisions, and for absolute (domain) and pass–fail (mastery) decisions. As an indicator of the reproducibility of test scores in this dental OSCE, the standard error of measurement (SEM) was used. The benchmark for the SEM was set at <0.51, corresponding to a 95% confidence interval (CI) of <1 on the original scoring scale, which ranged from 1 to 10. Results: The mean weighted total OSCE score was 7.14 on a 10-point scale. With the pass–fail score set at 6.2 for the four OSCEs, 90% of the 463 students passed. There was no significant increase in scores over the different days on which the OSCE was administered. 'Wished' variance owing to students was 6.3%. Variance owing to the interaction between students and stations plus residual error was 66.3%, more than twice the variance owing to station difficulty (27.4%). The SEM for norm decisions was 0.42 (CI ±0.83) and the SEM for domain decisions was 0.50 (CI ±0.98). To make reliable relative decisions (SEM <0.51), a minimum of 12 stations is necessary; for reliable absolute and pass–fail decisions, a minimum of 17 stations is necessary in this dental OSCE.
Conclusions: When testing large numbers of students, it appeared reliable to administer the OSCE on different days. To make reliable decisions for this dental OSCE, a minimum of 17 stations is needed. Clearly, wide sampling of stations is at the heart of obtaining reliable scores in an OSCE, in dental education as elsewhere. [source]
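The generalizability arithmetic behind such SEM benchmarks can be sketched as follows. The variance components used in the example are hypothetical absolute values chosen only to show how station counts fall out of the SEM formula (the study reports its components as percentages of total variance):

```python
import math

def sem_relative(var_interaction, n_stations):
    """Relative (norm-referenced) SEM: only the student-by-station
    interaction/residual variance counts as error."""
    return math.sqrt(var_interaction / n_stations)

def sem_absolute(var_station, var_interaction, n_stations):
    """Absolute (domain-referenced) SEM: station difficulty also counts
    as error, so more stations are needed to reach the same benchmark."""
    return math.sqrt((var_station + var_interaction) / n_stations)

def stations_needed(var_error, target_sem=0.51):
    """Smallest number of stations bringing the SEM under the benchmark."""
    return math.ceil(var_error / target_sem**2)
```

Because the absolute-error variance includes station difficulty on top of the interaction term, the absolute and pass–fail perspectives always demand at least as many stations as the relative one.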


Accuracy and precision of different sampling strategies and flux integration methods for runoff water: comparisons based on measurements of the electrical conductivity

HYDROLOGICAL PROCESSES, Issue 2 2006
Patrick Schleppi
Abstract Because of their fast response to hydrological events, small catchments show strong quantitative and qualitative variations in their water runoff. Fluxes of solutes or suspended material can be estimated from water samples only if an appropriate sampling scheme is used. We used continuous in-stream measurements of the electrical conductivity of the runoff in a small subalpine catchment (64 ha) in central Switzerland and in a very small (0·16 ha) subcatchment. Different sampling and flux integration methods were simulated for weekly water analyses. Fluxes calculated directly from grab samples are strongly biased towards the high conductivities observed at low discharges. Several regressions and weighted averages have been proposed to correct for this bias. Their accuracy and precision are better, but none of these integration methods gives a consistently low bias and a low residual error. Different methods of peak sampling were also tested. Like the regressions, they produce substantial residual errors, and their bias is variable. This variability (both between methods and between catchments) does not allow one to tell a priori which sampling scheme and integration method would be more accurate. Only discharge-proportional sampling methods were found to give essentially unbiased flux estimates. Programmed samplers with a fraction collector allow proportional pooling and are appropriate for short-term studies. For long-term monitoring or experiments, sampling at a frequency proportional to the discharge appears to be the best way to obtain accurate and precise flux estimates. Copyright © 2006 John Wiley & Sons, Ltd. [source]
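The contrast between grab-sample and discharge-proportional flux estimates can be illustrated on synthetic data. The hourly record and the dilution relationship (concentration falling as discharge rises, as the abstract describes) are invented for the sketch:

```python
import numpy as np

# Synthetic hourly record: lognormal discharge, diluting concentration.
rng = np.random.default_rng(0)
q = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # discharge, L/s
c = 10.0 / (1.0 + q)                                  # concentration, mg/L

# Reference flux from the continuous record, mg/s.
true_flux = np.mean(q * c)

# Weekly grab samples (every 168th hour): concentration is sampled
# regardless of discharge, then multiplied by mean discharge. Low-flow,
# high-concentration conditions dominate, biasing the estimate high.
grab = np.mean(c[::168]) * np.mean(q)

# Discharge-proportional sampling: weighting each sample by discharge
# is equivalent to pooling sample volumes proportional to flow, and
# recovers the true flux without bias.
proportional = np.average(c, weights=q) * np.mean(q)
```

The discharge-weighted mean concentration times the mean discharge equals the mean of q·c by construction, which is why proportional pooling is unbiased regardless of how concentration co-varies with flow.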


Using Image and Curve Registration for Measuring the Goodness of Fit of Spatial and Temporal Predictions

BIOMETRICS, Issue 4 2004
Cavan Reilly
Summary Conventional measures of model fit for indexed data (e.g., time series or spatial data) summarize errors in y, for instance by integrating (or summing) the squared difference between predicted and measured values over a range of x. We propose an approach which recognizes that errors can occur in the x-direction as well. Instead of just measuring the difference between the predictions and observations at each site (or time), we first "deform" the predictions, stretching or compressing along the x-direction or directions, so as to improve the agreement between the observations and the deformed predictions. Error is then summarized by (a) the amount of deformation in x, and (b) the remaining difference in y between the data and the deformed predictions (i.e., the residual error in y after the deformation). A parameter, λ, controls the tradeoff between (a) and (b), so that as λ → ∞ no deformation is allowed, whereas for λ = 0 the deformation minimizes the errors in y. In some applications, the deformation itself is of interest because it characterizes the (temporal or spatial) structure of the errors. The optimal deformation can be computed by solving a system of nonlinear partial differential equations, or, for a unidimensional index, by using a dynamic programming algorithm. We illustrate the procedure with examples from nonlinear time series and fluid dynamics. [source]
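For a unidimensional index, the dynamic programming idea can be sketched with a toy monotone-warping version. The discrete cost structure below (squared y-residual plus λ times squared index displacement, over monotone index mappings) is a plausible discretization of the tradeoff, not the authors' algorithm:

```python
import numpy as np

def register_1d(obs, pred, lam):
    """Toy 1D registration: find a monotone mapping j(i) of prediction
    indices onto observation indices minimizing
        sum_i (obs[i] - pred[j(i)])**2 + lam * (j(i) - i)**2.
    Returns (total_cost, mapping). Purely illustrative."""
    n, m = len(obs), len(pred)
    cost = np.full((n, m), float("inf"))
    back = np.zeros((n, m), dtype=int)
    for j in range(m):
        cost[0, j] = (obs[0] - pred[j]) ** 2 + lam * j ** 2
    for i in range(1, n):
        for j in range(m):
            # Monotonicity: the previous prediction index must be <= j.
            k = int(np.argmin(cost[i - 1, : j + 1]))
            cost[i, j] = cost[i - 1, k] + (obs[i] - pred[j]) ** 2 \
                + lam * (j - i) ** 2
            back[i, j] = k
    # Backtrack the optimal mapping.
    j = int(np.argmin(cost[-1]))
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i, j]
        path.append(j)
    return float(cost[-1].min()), path[::-1]
```

With λ = 0 the warp freely shifts indices to eliminate the y-error; with large λ it is pinned to the identity mapping, reproducing the conventional sum of squared residuals.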


Evaluating MT3DMS for Heat Transport Simulation of Closed Geothermal Systems

GROUND WATER, Issue 5 2010
Jozsef Hecht-Méndez
Owing to the mathematical similarities between heat and mass transport, the multi-species transport model MT3DMS should be able to simulate heat transport if the effects of buoyancy and changes in viscosity are small. Although solute models have been successfully applied to simulate heat transport in several studies, those studies did not provide a rigorous test of this approach. In the current study, we carefully evaluate simulations of a single-borehole ground source heat pump (GSHP) system in three scenarios: a pure conduction situation, an intermediate case, and a convection-dominated case. Two evaluation approaches are employed: first, MT3DMS heat transport results are compared with analytical solutions. Second, simulations by the finite difference code MT3DMS are compared with those by the finite element code FEFLOW and the finite difference code SEAWAT, both of which are designed to simulate heat flow. For each comparison, the computed results are examined based on residual errors. MT3DMS and the analytical solutions compare satisfactorily. MT3DMS and SEAWAT results show very good agreement for all cases. MT3DMS and FEFLOW two-dimensional (2D) and three-dimensional (3D) results show good to very good agreement, except that in 3D the agreement deteriorates somewhat close to the heat source, where the difference in numerical methods is thought to influence the solution. The results suggest that MT3DMS can be successfully applied to simulate GSHP systems, and likely other systems with similar temperature ranges and gradients in saturated porous media. [source]
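The heat/solute analogy underlying this approach can be sketched via the standard parameter mapping: heat conduction plays the role of molecular diffusion, and heat storage in the solid matrix plays the role of sorption (retardation). The property values in the usage example are typical of saturated sand and are not taken from the paper:

```python
# Sketch of the standard heat-to-solute parameter mapping that lets a
# solute code such as MT3DMS stand in for heat transport.
RHO_W, C_W = 1000.0, 4186.0   # water density (kg/m3), specific heat (J/kg/K)

def equivalent_solute_params(k_bulk, rho_solid, c_solid, porosity):
    """Map thermal properties onto solute-transport parameters.

    k_bulk     bulk thermal conductivity of the saturated medium, W/m/K
    rho_solid  solid density, kg/m3
    c_solid    solid specific heat, J/kg/K
    Returns (D_m, R): effective 'diffusion' coefficient (m2/s) standing
    in for conduction, and a retardation factor standing in for solid
    heat storage.
    """
    d_m = k_bulk / (porosity * RHO_W * C_W)
    k_d = c_solid / (RHO_W * C_W)          # thermal 'distribution coefficient'
    rho_bulk = (1.0 - porosity) * rho_solid
    retardation = 1.0 + rho_bulk * k_d / porosity
    return d_m, retardation
```

For sandy aquifer material (k_bulk ≈ 2.5 W/m/K, porosity 0.3) this yields a thermal retardation factor somewhat above 2, i.e., the temperature front travels roughly half as fast as the groundwater, which a solute model reproduces through the equivalent sorption term.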


Evaluation of megavoltage CT imaging protocols in patients with lung cancer

JOURNAL OF MEDICAL IMAGING AND RADIATION ONCOLOGY, Issue 1 2010
S Smith
Summary Currently, megavoltage CT studies in most centres with tomotherapy units are performed prior to every treatment for patient set-up verification and position correction. However, daily imaging adds to the total treatment time, which may cause patient discomfort, and increases the imaging dose. In this study, four alternative megavoltage CT imaging protocols (images obtained during the first five fractions, once per week, on alternating fractions, and daily on alternate weeks) were evaluated retrospectively using the daily position correction data for 42 patients with lung cancer. The additional uncertainty introduced by using a specific protocol with respect to daily imaging, or residual uncertainty, was analysed on a patient and a population basis. The impact of less frequent imaging schedules on treatment margin calculation was also analysed. Systematic deviations were reduced with increased imaging frequency, while random deviations were largely unaffected. Mean population systematic errors were small for all protocols evaluated. In the protocol showing the greatest error, the treatment margins necessary to accommodate residual errors were 1.2, 1.3 and 1.7 mm larger in the left–right, superior–inferior and anterior–posterior directions, respectively, compared with the margins calculated using the daily imaging data. The increased uncertainty due to the use of less frequent imaging protocols may be acceptable when compared with other sources of uncertainty in lung cancer cases, such as target volume delineation and respiratory motion. Further work is needed to establish the impact of increased residual errors on the dose distribution. [source]
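One common way residual set-up errors translate into margins is the van Herk recipe, M = 2.5Σ + 0.7σ, combining population systematic (Σ) and random (σ) errors. Whether this study used that exact recipe is an assumption; the sketch simply shows why residual systematic errors inflate margins far more than random ones:

```python
import math

def van_herk_margin(sigma_systematic, sigma_random):
    """CTV-to-PTV margin from the widely used recipe
    M = 2.5 * Sigma + 0.7 * sigma (units follow the inputs, e.g. mm).
    Systematic error carries a 2.5x weight, random error only 0.7x."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

def quadrature_add(*components):
    """Combine independent error sources (e.g. set-up and delineation)
    in quadrature."""
    return math.sqrt(sum(c * c for c in components))
```

Because the systematic term is weighted more than three times as heavily as the random term, a protocol that leaves residual systematic error (as the less frequent imaging schedules above do) pays a direct price in margin width, while its effect on random error barely matters.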