Overestimation

Selected Abstracts


    Atopic dermatitis: quality of life of young Italian children and their families and correlation with severity score

    PEDIATRIC ALLERGY AND IMMUNOLOGY, Issue 3 2007
    Giampaolo Ricci
    The aim of this study was to determine the ways in which atopic dermatitis (AD) affects the lives of young Italian children and their families, in terms of quality of life, and to correlate this with AD severity and the family's perception of severity. The parents of 45 children aged 3–84 months affected by AD were asked to complete two validated questionnaires after clinical examination. The first questionnaire was about the child's quality of life (Infants' Dermatitis Quality of Life Index); the second regarded the family's quality of life (Dermatitis Family Impact questionnaire). In a further question, parents were asked to estimate the severity of the child's disease. Children's quality of life appeared slightly to moderately altered (mean score 10.2) compared with the value of a control group (3.3); itching, sleep problems and the influence of the disease on the child's mood were the causes of greatest discomfort for the child. Family quality of life appeared moderately altered (mean score 11) compared with the value of the control group (7.4). The greatest problem was the disturbed sleep of family members. Other important problems were the economic cost of managing the disease and the tiredness and irritability it caused in parents. Analysis of the responses confirms that families estimate the severity of the disease incorrectly. In our opinion, the two questionnaires may be useful in clinical practice to better understand the difficulties suffered by a family with a child affected by AD. They also provide data that may help to improve the clinical approach for the child and the family, and to assess the degree of under-/overestimation of the disease by the family. [source]


    Overestimation of Left Ventricular Mass and Misclassification of Ventricular Geometry in Heart Failure Patients by Two-Dimensional Echocardiography in Comparison with Three-Dimensional Echocardiography

    ECHOCARDIOGRAPHY, Issue 3 2010
    Dmitry Abramov M.D.
    Background: Accurate assessment of left ventricular hypertrophy (LVH) and ventricular geometry is important, especially in patients with heart failure (HF). The aim of this study was to compare the assessment of ventricular size and geometry by two-dimensional (2DE) and three-dimensional (3DE) echocardiography in normotensive controls and among HF patients with a normal and a reduced ejection fraction. Methods: One hundred eleven patients, including 42 normotensive patients without cardiac disease, 41 hypertensive patients with HF and a normal ejection fraction (HFNEF), and 28 patients with HF and a low ejection fraction (HFLEF), underwent 2DE and freehand 3DE. The differences between 2DE- and 3DE-derived LV mass were evaluated by use of a Bland–Altman plot. Differences in classification of geometric types between 2DE and 3DE were determined across the cohort. Results: 2DE overestimated ventricular mass compared to 3DE among normal (166 ± 36 vs. 145 ± 20 gm, P = 0.002), HFNEF (258 ± 108 vs. 175 ± 47 gm, P < 0.001), and HFLEF (444 ± 136 vs. 259 ± 77 gm, P < 0.001) patients. The overestimation of mass by 2DE increased in patients with larger ventricular size. The use of 3DE to assess ventricular geometry resulted in reclassification of ventricular geometric patterns in 76% of patients with HFNEF and in 21% of patients with HFLEF. Conclusion: 2DE overestimates ventricular mass when compared to 3DE among patients with heart failure with both normal and low ejection fractions and leads to significant misclassification of ventricular geometry in many heart failure patients. (Echocardiography 2010;27:223-229) [source]
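The Bland–Altman comparison used in studies like the one above is easy to reproduce. The sketch below uses made-up paired left-ventricular mass readings (not the study's data) to compute the mean bias and the 95% limits of agreement:

```python
import numpy as np

def bland_altman(a, b):
    """Return mean bias and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)              # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired LV-mass readings (gm): 2DE tends to read higher than 3DE,
# and the gap widens for larger ventricles, as the abstract reports.
mass_2de = [166, 180, 210, 250, 320, 400]
mass_3de = [145, 160, 175, 200, 240, 290]
bias, (lo, hi) = bland_altman(mass_2de, mass_3de)
print(f"bias = {bias:.1f} gm, limits of agreement = ({lo:.1f}, {hi:.1f}) gm")
```

A plot of the per-pair differences against the per-pair means, with horizontal lines at `bias`, `lo`, and `hi`, gives the usual Bland–Altman figure.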


    Perceived peer smoking prevalence and its association with smoking behaviours and intentions in Hong Kong Chinese adolescents

    ADDICTION, Issue 9 2004
    Man Kin Lai
    ABSTRACT Background Among the many personal, social and environmental risk factors for adolescent smoking, normative beliefs stand out for their potential to be modified with factual information on smoking prevalence. Aims To study perceived peer smoking prevalence and its association with smoking behaviours in Hong Kong Chinese adolescents. Design and setting Cross-sectional territory-wide school-based survey conducted in 64 randomly selected secondary schools in Hong Kong. Participants A total of 13 280 forms 1–3 students (equivalent to grades 7–9 in the United States) aged 12–16 years. Measurements Perceived peer smoking prevalence, smoking status, intention to smoke in future, other smoking-related factors and demographic information. Findings Overestimation of peer smoking prevalence was observed regardless of gender and smoking status, and was more common in girls (69.4%) than boys (61.0%), and in experimental (74.3%) and current smokers (85.4%) than in never smokers (60.7%). Boys who overestimated and grossly overestimated (over two times) peer smoking were more likely to be current smokers, with adjusted odds ratios and 95% confidence intervals (95% CI) of 1.95 (1.24–3.07) and 3.52 (2.37–5.24) (P for trend <0.001). Similarly, boys who grossly overestimated peer smoking were 76% (95% CI: 41–120%) more likely to have ever smoked. Conclusion Overestimation of peer smoking prevalence was common in Hong Kong Chinese boys and girls, and was associated with current and ever smoking in boys. These findings have important implications for normative education in adolescent smoking prevention programmes. [source]


    Comparison Insight Bone Measurements by Histomorphometry and µCT

    JOURNAL OF BONE AND MINERAL RESEARCH, Issue 7 2005
    Daniel Chappard MD
    Abstract Morphometric analysis of 70 bone biopsies was done in parallel by µCT and histomorphometry. µCT provided higher results for trabecular thickness and separation because of the 3D shape of these anatomical objects. Introduction: Bone histomorphometry is used to explore the various metabolic bone diseases. The technique is done on microscopic 2D sections, and several methods have been proposed to extrapolate 2D measurements to the third dimension. X-ray µCT is a recently developed imaging tool for appreciating 3D architecture. Recently, 2D histomorphometric measurements have been shown to provide discordant results compared with 3D values obtained directly. Material and Methods: Seventy human bone biopsies were removed from patients presenting with metabolic bone diseases. Complete bone biopsies were examined by µCT. Bone volume (BV/TV), Tb.Th, and Tb.Sp were measured on the 3D models; Tb.Th and Tb.Sp were measured by a method based on the sphere algorithm. In addition, six images were resliced and transferred to an image analyzer: bone volume and trabecular characteristics were measured after thresholding of the images. Bone cores were embedded undecalcified; histological sections were prepared and measured by routine histomorphometric methods, providing another set of values for bone volume and trabecular characteristics. Comparison between the different methods was done by using regression analysis, Bland–Altman, Passing–Bablok, and Mountain plots. Results: Correlations between all parameters were highly significant, but µCT overestimated bone volume. The osteoid volume had no influence in this series. Overestimation may have been caused by a double threshold used in µCT, giving trabecular boundaries less well defined than on histological sections. Correlations between Tb.Th and Tb.Sp values obtained by 3D or 2D measurements were lower, and 3D analysis always overestimated thickness by ~50%. These increases could be attributed to the 3D shape of the object, because the number of nodes and the size of the marrow cavities were correlated with 3D values. Conclusion: In clinical practice, µCT seems to be an interesting method providing reliable morphometric results in less time than conventional histomorphometry. The correlation coefficient is not sufficient to study the agreement between techniques in histomorphometry. The architectural descriptors are influenced by the algorithms used in 3D. [source]


    Underestimation and overestimation of personal weight status: associations with socio-demographic characteristics and weight maintenance intentions

    JOURNAL OF HUMAN NUTRITION & DIETETICS, Issue 4 2006
    J. Brug
    Abstract Objective: Unwarranted underestimation and overestimation of personal weight status may prevent weight maintenance behaviour. The present study reports on correlates of under- and overestimation of personal weight status and their association with weight maintenance intentions and self-reported action. Design: Comparison of three cross-sectional surveys representing different population groups. Subjects: Survey 1: 1694 adolescents 13–19 years of age; survey 2: 979 non-obese adults 25–35 years of age; survey 3: 617 adults 21–62 years of age. Measurements: Self-administered written questionnaires (surveys 1 and 3) and telephone-administered questionnaires (survey 2); self-reported BMI, self-rated weight status, intentions and self-reported actions to avoid weight gain or to lose weight, sex, age, education and ethnic background. Respondents were classified as realistic about their body weight status or as under- or overestimating it, based on BMI and self-rated weight status. Results: Most respondents in the three survey populations were realistic about their weight status. Overestimation of weight status was consistently more likely among women, whereas underestimation was more likely among men, older respondents and respondents from ethnic minorities. Self-rated weight status was a stronger correlate of intentions and self-reported actions to avoid weight gain than weight status based on body mass index. Conclusions: Relevant proportions of the study populations underestimated or overestimated their body weight status. Overestimation of personal weight status may lead to unwarranted weight maintenance actions, whereas underestimation may result in a lack of motivation to avoid further weight gain. [source]
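The classification into realistic respondents, underestimators, and overestimators rests on cross-tabulating the BMI-derived category against the self-rated one. A minimal sketch, assuming WHO-style adult cut-offs (the surveys' exact bands are not reported in the abstract):

```python
def bmi_category(bmi):
    """WHO-style adult cut-offs (assumed; not the surveys' own bands)."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    return "overweight"

# Ordinal ranking of categories, so self-rating can be compared to BMI.
ORDER = {"underweight": 0, "normal": 1, "overweight": 2}

def estimation(bmi, self_rated):
    """Classify a respondent by comparing self-rated status to BMI category."""
    actual = bmi_category(bmi)
    if ORDER[self_rated] > ORDER[actual]:
        return "overestimates"
    if ORDER[self_rated] < ORDER[actual]:
        return "underestimates"
    return "realistic"

# A normal-weight respondent (BMI 23) who rates themselves overweight:
print(estimation(23.0, "overweight"))
```

Note that self-reported BMI, as used in these surveys, is itself subject to reporting error, which is one reason the realistic/misestimating split is only approximate.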


    Heavy Episodic Drinking and Alcohol Consumption in French Colleges: The Role of Perceived Social Norms

    ALCOHOLISM, Issue 1 2010
    Lionel Riou França
    Background: The effect of normative perceptions (social norms) on heavy episodic drinking (HED) behavior is well known in the U.S. college setting, but little work is available in other cultural contexts. The objective of this study is therefore to assess whether social norms of alcohol use are related to HED in France, taking account of other influential predictors. Methods: A cross-sectional survey was carried out among 731 second-year university students in the Paris region to explore the role of 29 potential alcohol use risk factors. The probability of heavy episodic drinking and the frequency of HED among heavy episodic drinkers were modeled independently. Monthly alcohol consumption was also assessed. Results: Of the students, 56% overestimate peer student prevalence of HED (37% for alcohol drinking prevalence). HED frequency rises with perceived peer student prevalence of HED. Other social norms associated with HED are perceived friends' approval of HED (increasing both HED probability and HED frequency) and perceived friend prevalence of alcohol drinking (increasing HED probability only). Cannabis and tobacco use, academic discipline, gender, and the number of friends are also identified as being associated with HED. Conclusions: Overestimation of peer student prevalence is not uncommon among French university students. Furthermore, perceived peer student prevalence of HED is linked to HED frequency, even after adjusting for other correlates. Interventions correcting misperceived prevalences of HED among peer students therefore have the potential to reduce the frequency of HED in this population. [source]


    Preventable Deaths from Quality Failures in Emergency Department Care for Pneumonia and Myocardial Infarction: An Overestimation

    ACADEMIC EMERGENCY MEDICINE, Issue 3 2008
    Christopher Fee MD
    No abstract is available for this article. [source]


    High-Quality Adaptive Soft Shadow Mapping

    COMPUTER GRAPHICS FORUM, Issue 3 2007
    Gaël Guennebaud
    Abstract The recent soft shadow mapping technique [GBP06] allows the real-time rendering of convincing soft shadows on complex and dynamic scenes using a single shadow map. While attractive, this method suffers from shadow overestimation and becomes both expensive and approximate when dealing with large penumbrae. This paper proposes new solutions removing these limitations and hence providing an efficient and practical technique for soft shadow generation. First, we propose a new visibility computation procedure based on the detection of occluder contours, which is more accurate and faster while reducing aliasing. Secondly, we present a shadow map multi-resolution strategy that keeps the computational complexity almost independent of the light size while maintaining high-quality rendering. Finally, we propose a view-dependent adaptive strategy that automatically reduces the screen resolution in regions of large penumbrae, thus allowing us to keep very high frame rates in any situation. [source]


    Reparallelization techniques for migrating OpenMP codes in computational grids

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 3 2009
    Michael Klemm
    Typical computational grid users target only a single cluster and have to estimate the runtime of their jobs. Job schedulers prefer short-running jobs to maintain a high system utilization. If the user underestimates the runtime, premature termination causes computation loss; overestimation is penalized by long queue times. As a solution, we present an automatic reparallelization and migration of OpenMP applications. A reparallelization is dynamically computed for an OpenMP work distribution when the number of CPUs changes. The application can be migrated between clusters when an allocated time slice is exceeded. Migration is based on a coordinated, heterogeneous checkpointing algorithm. Both reparallelization and migration enable the user to freely use computing time at more than a single point of the grid. Our demo applications successfully adapt to the changed CPU setting and smoothly migrate between, for example, clusters in Erlangen, Germany, and Amsterdam, the Netherlands, that use different kinds and numbers of processors. Benchmarks show that reparallelization and migration impose average overheads of about 4 and 2%, respectively. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    A New Method of Electron Temperature Determination in Unmagnetized and Magnetized RF Plasmas without RF Compensating Circuit

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 7-8 2004
    Y.-S. Choi
    Abstract The collected current versus applied voltage (I-V) curve of a Langmuir probe in an RF plasma is severely distorted by RF fluctuations, leading to overestimation of the electron temperature. An RF compensation circuit has been used to obtain the undistorted I-V curve, yet it produces a time-averaged one. A new and simple method is proposed to obtain a time-resolved I-V curve by picking up the synchronized RF signals with a digital oscilloscope and a LabVIEW program. This technique is tested in magnetized helicon plasmas and unmagnetized capacitively coupled RF plasmas. [source]


    Evaluation of a bedside blood ketone sensor: the effects of acidosis, hyperglycaemia and acetoacetate on sensor performance

    DIABETIC MEDICINE, Issue 7 2004
    A. S. A. Khan
    Abstract Aims To assess the performance of a handheld bedside ketone sensor in the face of likely metabolic disturbances in diabetic ketoacidosis, namely: pH, glucose and acetoacetate. Methods The effects of pH (7.44–6.83), glucose (5–50 mmol/l) and acetoacetate (0–5 mmol/l) were examined in venous blood to investigate the accuracy of betahydroxybutyrate measurement (0–5 mmol/l) by a handheld ketone sensor. Sensor results were compared with a reference method. Linear regression models were fitted to the difference between the methods with the concentration of metabolite as the explanatory factor. Results Decreasing pH and increasing glucose had no effect on the accuracy of the handheld ketone sensor; the gradients of the fitted lines were −0.14 and −0.003, respectively. The 95% confidence intervals were −0.7 to 0.4 and −0.01 to 0.004, respectively (P = 0.59 and 0.4, respectively). In the acetoacetate study, a positive relationship between the sensor and reference method results was found; the gradient was 0.09. The 95% confidence interval was 0.05–0.14 (P ≤ 0.001), indicating that high concentrations of acetoacetate interfere with the sensor performance. Conclusions Acidosis and hyperglycaemia have minimal effects on the sensor performance. However, high concentrations of acetoacetate result in some overestimation of betahydroxybutyrate. This bedside ketone sensor provides useful data over a broad range of conditions likely to be encountered during moderate to severe diabetic ketoacidosis. [source]


    Evaluation of the SWEEP model during high winds on the Columbia Plateau

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 11 2009
    G. Feng
    Abstract A standalone version of the Wind Erosion Prediction System (WEPS) erosion submodel, the Single-event Wind Erosion Evaluation Program (SWEEP), was released in 2007. A limited number of studies exist that have evaluated SWEEP in simulating soil loss subject to different tillage systems under high winds. The objective of this study was to test SWEEP under contrasting tillage systems employed during the summer fallow phase of a winter wheat–summer fallow rotation within eastern Washington. Soil and PM10 (particulate matter ≤10 µm in diameter) loss and soil and crop residue characteristics were measured in adjacent fields managed using conventional and undercutter tillage during summer fallow in 2005 and 2006. While differences in soil surface conditions resulted in measured differences in soil and PM10 loss between the tillage treatments, SWEEP failed to simulate any difference in soil or PM10 loss between conventional and undercutter tillage. In fact, the model simulated zero erosion for all high wind events observed over the two years. The reason for the lack of simulated erosion is complex owing to the number of parameters and the interaction of these parameters in erosion processes. A possible reason might be overestimation of the threshold friction velocity in SWEEP, since friction velocity must exceed the threshold to initiate erosion. Although many input parameters are involved in the estimation of the threshold velocity, internal empirical coefficients and equations may affect the simulation. Calibration methods might be useful in adjusting the internal coefficients and empirical equations. Additionally, the lack of uncertainty analysis is an important gap in providing reliable output from this model. Published in 2009 by John Wiley & Sons, Ltd. [source]


    The effects of torsion and motion coupling in site response estimation

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 5 2003
    Mohammad R. Ghayamghamian
    Abstract Soil amplification characteristics are investigated using data from the Chibaken-Toho-Oki earthquake and its aftershocks recorded at the Chiba dense array in Japan. The frequency-dependent amplification function of soil is calculated using uphole-to-downhole spectral ratio analysis, considering the horizontal components of the shear wave. The identified spectral ratios consistently demonstrate splitting of peaks at their resonance frequencies and low amplification values in comparison with a 1D model. Torsional behaviour and horizontal ground motion coupling are identified as the reasons for these phenomena at the site. To test this hypothesis, the torsional motion is directly evaluated using data from the horizontal dense array at different depths at the site. The comparison between Fourier spectra of torsional motion and identified transfer functions reveals peaks at the same frequencies. The wave equation including torsion and horizontal motion coupling is introduced and solved for the layered media by applying wave propagation theory. Using the developed model, the effects of torsional motion coupled with horizontal motion on the soil transfer function are numerically examined. Splitting and low amplification at resonance frequencies are confirmed by the results of the numerical analysis. Furthermore, the ground motion in two horizontal directions at the site is simulated using the site's geotechnical specification and optimizing the model parameters. The simulated and recorded motions demonstrate good agreement, which is used to validate the hypothesis. In addition, the spectral density of torsional ground motion is compared with the calculated one and found to be well predicted by the model. Finally, the results are used to explain the overestimation of damping in back-calculation of dynamic soil properties using vertical array data at small strain levels. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Comparison of Tissue Doppler Velocities Obtained by Different Types of Echocardiography Systems: Are They Compatible?

    ECHOCARDIOGRAPHY, Issue 3 2010
    Mónika Dénes M.D.
    Background: Both systolic and diastolic tissue Doppler (TD) velocities have an important diagnostic and prognostic role in cardiology. We aimed to compare TD velocities between two different echocardiography systems. Patients: Thirty-one consecutive patients (mean age: 65.2 ± 17.5 years; 12 males) were enrolled. Methods: Systolic (Sa), early (Ea), and late (Aa) diastolic velocities were measured by TD at the lateral mitral annulus by a Sonos 2000 (Hewlett-Packard, Andover, MA, USA) and a Philips iE33 system. The E/Ea ratio was calculated. Results: Ea, Aa, and Sa velocities were higher when measured by the Sonos system (Ea: 13.2 ± 4.1 cm/s vs. 8.3 ± 3.6 cm/s; Aa: 14.8 ± 3.8 cm/s vs. 9.3 ± 2.3 cm/s; Sa: 15.2 ± 3.6 cm/s vs. 8.4 ± 2.0 cm/s; P < 0.0001 for all). A significant correlation was found between the machines in Ea and in Ea/Aa (r = 0.84 and r = 0.85, respectively; P < 0.0001 for both), and a weaker one in Aa (r = 0.43; P = 0.02). The Bland–Altman analysis showed broad limits of agreement between the measurements for Ea, Aa, and Sa (mean difference: 4.95 cm/s; 5.52 cm/s; 6.73 cm/s, respectively; limits: 0.64 to 9.25 cm/s; −1.39 to 12.39 cm/s; −0.37 to 13.83 cm/s, respectively). An E/Ea ratio >5.6 by the Sonos system showed 75% sensitivity and 79% specificity for elevated left ventricular filling pressure, defined as E/Ea >10 by the reference Philips system. Conclusions: Although diastolic TD velocities had excellent correlations between the two machines, there was a systematic overestimation by the Sonos system. Since the limits of agreement do not allow replacing the measurements, we suggest using the same echocardiographic equipment at patient follow-up. (Echocardiography 2010;27:230-235) [source]


    Discrepancy between Gradients Derived by Cardiac Catheterization and by Doppler Echocardiography in Aortic Stenosis: How Often Does Pressure Recovery Play a Role?

    ECHOCARDIOGRAPHY, Issue 9 2009

    Studies have shown very good correlation between Doppler-derived gradients and gradients obtained by cardiac catheterization (cath) in aortic stenosis (AS). However, the phenomenon of pressure recovery may lead to significant overestimation of aortic valve (AV) gradients by Doppler echocardiography (echo). We hypothesized that echo-derived gradients would be higher in mild-to-moderate AS because of pressure recovery. We studied 94 patients who had echo and cardiac caths within a span of 1 week. The mean age was 72 ± 13 years, 54% were male, 79% had coronary artery disease, and the mean left ventricular ejection fraction was 45 ± 22%. The mean cardiac output and cardiac index were 5.1 ± 1.4 l/min and 2.7 ± 0.6 l/min/m2, respectively. For those with mild AS, echo overestimated gradients in 9.5% of patients (4/42) by an average of 19 mmHg, thus misclassifying the degree of stenosis. In those with moderate AS, 14% (3/21) were misclassified as severe AS (gradient overestimation by an average of 13.6 mmHg). In those with severe AS, echo underestimated gradients in 13% (4/31) by an average of 22.7 mmHg. The aorta at the sinotubular junction was 2.8 cm in those patients with mild AS in whom gradients were overestimated by more than 20 mmHg, compared to a sinotubular junction diameter of 3.12 cm in those with mild AS and no overestimation of gradients. The AV area/aortic root ratio was 0.4 in those with mild AS and 0.2 in those with severe AS (P < 0.05). [source]
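The pressure-recovery effect invoked here is commonly quantified in the echocardiography literature (this formula is background knowledge, not given in the abstract) as PR = 4v² · (EOA/A_a) · (1 − EOA/A_a), where v is the peak transvalvular velocity, EOA the valve effective orifice area, and A_a the aortic cross-sectional area at the sinotubular junction. A sketch with illustrative numbers:

```python
import math

def pressure_recovery(v_max, eoa, stj_diam):
    """Recovered pressure (mmHg) downstream of the vena contracta.

    v_max: peak transvalvular velocity (m/s); 4*v^2 is the simplified
    Bernoulli gradient. eoa: effective orifice area (cm^2).
    stj_diam: sinotubular-junction diameter (cm). Values are illustrative.
    """
    aorta_area = math.pi * (stj_diam / 2.0) ** 2   # cm^2
    ratio = eoa / aorta_area
    return 4.0 * v_max ** 2 * ratio * (1.0 - ratio)

# A smaller aorta (2.8 cm, as in the overestimated mild-AS group) recovers
# more pressure than a larger one (3.5 cm), so Doppler exceeds the net
# catheter gradient by more in small aortas.
print(pressure_recovery(4.0, 1.0, 2.8))
print(pressure_recovery(4.0, 1.0, 3.5))
```

This is why the Doppler-cath discrepancy concentrated in patients with smaller sinotubular junctions: the recovered pressure is "invisible" to Doppler, which measures at the vena contracta, but not to the catheter pulled back into the aorta.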


    Variable reporting and quantitative reviews: a comparison of three meta-analytical techniques

    ECOLOGY LETTERS, Issue 5 2003
    Marc J. Lajeunesse
    Abstract Variable reporting of results can influence quantitative reviews by limiting the number of studies available for analysis, and thereby influencing both the type of analysis and the scope of the review. We performed a Monte Carlo simulation to determine statistical errors for three meta-analytical approaches and related how such errors were affected by the number of constituent studies. Hedges' d and effect sizes based on item response theory (IRT) had similarly improved error rates with increasing numbers of studies when there was no true effect, but IRT was conservative when there was a true effect. The log response ratio had low precision for detecting null effects as a result of overestimation of effect sizes, but high ability to detect true effects, largely irrespective of the number of studies. Traditional meta-analyses based on Hedges' d are preferred; however, quantitative reviews should use various methods in concert to improve representation and inferences from summaries of published data. [source]
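Two of the three effect sizes compared above are straightforward to compute from group summaries; a sketch using the standard formulas (pooled SD and the usual small-sample correction J):

```python
import math

def hedges_d(m1, m2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference (Hedges' d)."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    j = 1.0 - 3.0 / (4 * df - 1)       # small-sample bias correction
    return j * (m1 - m2) / s_pooled

def log_response_ratio(m1, m2):
    """ln of the ratio of treatment mean to control mean (requires m1, m2 > 0)."""
    return math.log(m1 / m2)

# Illustrative group summaries: treatment mean 12, control mean 10,
# common SD 3, 20 replicates per group.
d = hedges_d(12.0, 10.0, 3.0, 3.0, 20, 20)
lnrr = log_response_ratio(12.0, 10.0)
print(d, lnrr)
```

The contrast the abstract draws follows from the formulas: Hedges' d is bounded by the within-group variation, whereas the log response ratio depends only on the two means, so small or noisy control means can inflate it.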


    Effects of the estrogen agonist 17β-estradiol and antagonist tamoxifen in a partial life-cycle assay with zebrafish (Danio rerio)

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 1 2007
    Leo T. M. van der Ven
    Abstract A partial life-cycle assay (PLC) with zebrafish (Danio rerio) was conducted to identify endocrine-disrupting effects of 17β-estradiol (E2) and tamoxifen (TMX) as reference for estrogen agonist and antagonist activity. Adult zebrafish were exposed for 21 d and offspring for another 42 d, allowing differentiation of gonads in control animals. The assessed end points included reproductive variables (egg production, fertilization, and hatching), gonad differentiation of juveniles, histopathology, and vitellogenin (VTG) expression. With E2, the most sensitive end points were feminization of offspring (at 0.1 nM) and increased VTG production in males (at 0.32 nM). At 1 nM, decreased F1 survival, increased F1 body length and weight, VTG-related edema and kidney lesions, and inhibited spermatogenesis were observed. Oocyte atresia occurred at even higher concentrations. Exposure to TMX resulted in specific effects at an intermediate test concentration (87 nM), including oocyte atresia with granulosa cell transformation and disturbed spermatogenesis (asynchrony within cysts). In F1, decreased hatching, survival, and body weight and length as well as decreased feminization were observed. Decreased vitellogenesis and egg production in females and clustering of Leydig cells in males occurred at higher concentrations. Toxicological profiles of estrogen agonists and antagonists are complex and specific; a valid and refined characterization of endocrine activity of field samples therefore can be obtained only by using a varied set of end points, including histology, as applied in the presented PLC. Evaluation of only a single end point can easily produce under- or overestimation of the actual hazard. [source]


    Phase distribution of synthetic pyrethroids in runoff and stream water

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 1 2004
    Weiping Liu
    Abstract Synthetic pyrethroids (SPs) are a group of hydrophobic compounds with significant aquatic toxicity. Their strong affinity to suspended solids and humic materials suggests that SPs in natural surface water are distributed in solid-adsorbed, dissolved organic matter (DOM)-adsorbed, and freely dissolved phases. The freely dissolved phase is of particular importance because of its mobility and bioavailability. In the present study, we used solid-phase microextraction to detect the freely dissolved phase, and we evaluated the phase distribution of bifenthrin and permethrin in stream and runoff waters. In stream water, most SPs were associated with the suspended solids and, to a lesser extent, with DOM. The freely dissolved phase contributed only 0.4% to 1.0%. In runoff effluents, the freely dissolved concentration was 10% to 27% of the overall concentration. The predominant partitioning into the adsorbed phases implies that the toxicity of SPs in surface water is reduced because of decreased bioavailability. This also suggests that monitoring protocols that do not selectively define the freely dissolved phase can lead to significant overestimation of toxicity or water-quality impacts by SPs. [source]


    Determining toxicity of lead and zinc runoff in soils: Salinity effects on metal partitioning and on phytotoxicity

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 12 2003
    Daryl P. Stevens
    Abstract When assessing cationic metal toxicity in soils, metals are often added to soil as chloride, nitrate, or sulfate salts. In many studies, the effects of these anions are ignored or discounted; rarely are appropriate controls included. This study used five soils varying in pH, clay content, and organic matter to determine whether salinity from counter-ions contributed to or confounded metal phytotoxicity. Varying rates of Pb and Zn were applied to soils with or without a leaching treatment to remove the metal counter-ion (NO3−). Lactuca sativa (lettuce) plants were grown in metal-treated soils, and plant dry weights were used to determine median effective concentrations producing a 50% reduction in yield (EC50s) on the basis of total metals measured in the soil after harvest. In two of the five soils, leaching significantly increased the EC50s for Zn by 1.4- to 3.7-fold. In three of the five soils, leaching significantly increased the EC50s for Pb by 1.6- to 3.0-fold. The shift in EC50s was not a direct result of toxicity of the nitrate ion but was an indirect effect of the salinity increasing metal concentrations in soil solution and increasing bioavailability for a given total metal concentration. In addition, calculation of potential salinity changes in toxicological studies from the addition of metals exhibiting strong sorption to soil suggested that if the anion associated with the metal is not leached from the soil, direct salinity responses could also lead to significant overestimation of the EC50 for those metals. These findings question the relevance of applying single-metal salts to soils as a method of assessing metal phytotoxicity when, in many cases in our environment, Zn and Pb accumulate in soil over a period of time and the associated counter-ions are commonly removed from the soil during the accumulation process (e.g., roof and galvanized tower runoff). [source]
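An EC50 of the kind estimated here (the concentration halving plant dry weight) is typically obtained by fitting a dose-response curve to the yield data. A sketch on noise-free synthetic data, using an assumed log-logistic model (the study's exact fitting procedure is not given in the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, top, ec50, slope):
    """Dry weight declining from 'top'; yield falls to top/2 at dose == ec50."""
    return top / (1.0 + (dose / ec50) ** slope)

# Synthetic dry-weight data generated from known parameters, so the fit
# should recover them; units are arbitrary (e.g., mg metal per kg soil).
doses = np.array([10.0, 30.0, 100.0, 300.0, 1000.0, 3000.0])
weights = log_logistic(doses, top=100.0, ec50=300.0, slope=1.5)

popt, _ = curve_fit(log_logistic, doses, weights, p0=[90.0, 200.0, 1.0])
top_fit, ec50_fit, slope_fit = popt
print(f"fitted EC50 = {ec50_fit:.1f}")
```

Comparing `ec50_fit` between leached and unleached treatments, as the study does, isolates the counter-ion salinity effect from the metal's own toxicity.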


    Cadmium leaching from some New Zealand pasture soils

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 1 2003
    C. W. Gray
    Summary Cadmium (Cd) inputs and losses from agricultural soils are of great importance because of the potential adverse effects Cd can pose to food quality, soil health and the environment in general. One important pathway for Cd losses from soil systems is leaching. We investigated loss of Cd from a range of contrasting New Zealand pasture soils that had received Cd predominantly from repeated applications of phosphate fertilizer. Annual leaching losses of Cd ranged between 0.27 and 0.86 g ha−1, which is less than most losses recorded elsewhere. These losses equate to between 5 and 15% of the Cd added to soil through a typical annual application of single superphosphate, which in New Zealand contains on average 280 mg Cd kg−1 P. It appears that Cd added to soil from phosphate fertilizer is fairly immobile, and Cd tends to accumulate in the topsoil. The pH of the leachate and the total volume of drainage to some extent control the amount of Cd leached. Additional factors, such as the soil sorption capacity, are also important in controlling Cd movement in these pasture soils. Prediction of the amount of Cd leached using the measured concentrations of Cd in the soil solution and rainfall data resulted in an overestimation of Cd losses. Cadmium concentrations in drainage water are substantially less than the current maximum acceptable value of 3 µg l−1 for drinking water in New Zealand set by the Ministry of Health. [source]


    COMPARING STRENGTHS OF DIRECTIONAL SELECTION: HOW STRONG IS STRONG?

    EVOLUTION, Issue 10 2004
    Joe Hereford
    Abstract The fundamental equation in evolutionary quantitative genetics, the Lande equation, describes the response to directional selection as a product of the additive genetic variance and the selection gradient of trait value on relative fitness. Comparisons of both genetic variances and selection gradients across traits or populations require standardization, as both are scale dependent. The Lande equation can be standardized in two ways. Standardizing by the variance of the selected trait yields the response in units of standard deviation as the product of the heritability and the variance-standardized selection gradient. This standardization conflates selection and variation because the phenotypic variance is a function of the genetic variance. Alternatively, one can standardize the Lande equation using the trait mean, yielding the proportional response to selection as the product of the squared coefficient of additive genetic variance and the mean-standardized selection gradient. Mean-standardized selection gradients are particularly useful for summarizing the strength of selection because the mean-standardized gradient for fitness itself is one, a convenient benchmark for strong selection. We review published estimates of directional selection in natural populations using mean-standardized selection gradients. Only 38 published studies provided all the necessary information for calculation of mean-standardized gradients. The median absolute value of multivariate mean-standardized gradients shows that selection is on average 54% as strong as selection on fitness. Correcting for the upward bias introduced by taking absolute values lowers the median to 31%, still very strong selection. Such large estimates clearly cannot be representative of selection on all traits. 
Some possible sources of overestimation of the strength of selection include confounding environmental and genotypic effects on fitness, the use of fitness components as proxies for fitness, and biases in publication or choice of traits to study. [source]
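The two standardizations discussed above can be written out explicitly. In the notation below (R for the response, σ²_A for the additive genetic variance, β for the selection gradient, σ_P for the phenotypic standard deviation, z̄ for the trait mean), which is standard quantitative-genetics usage rather than necessarily the paper's own symbols:

```latex
% Lande equation: response to directional selection
R = \sigma^2_A \,\beta
% Variance-standardized form: response in phenotypic standard deviations
% equals heritability times the variance-standardized gradient
\frac{R}{\sigma_P} = \frac{\sigma^2_A}{\sigma^2_P}\,(\sigma_P \beta) = h^2 \beta_\sigma
% Mean-standardized form: proportional response equals the squared
% coefficient of additive genetic variance times the mean-standardized gradient
\frac{R}{\bar{z}} = \frac{\sigma^2_A}{\bar{z}^2}\,(\bar{z}\beta) = \mathrm{CV}^2_A \,\beta_\mu
```

The benchmark mentioned in the abstract follows directly: relative fitness has mean one and regresses on itself with slope one, so its mean-standardized gradient is exactly one.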


    Emergency Medicine Resident Documentation: Results of the 1999 American Board of Emergency Medicine In-Training Examination Survey

    ACADEMIC EMERGENCY MEDICINE, Issue 10 2000
    John Howell MD
    Abstract. Objectives: To assess how emergency medicine (EM) residents perform medical record documentation, and how well they comply with Health Care Financing Administration (HCFA) Medicare charting guidelines. In addition, the study investigated their abilities and confidence with billing and coding of patient care visits and procedures performed in the emergency department (ED). Finally, the study assessed their exposure to both online faculty instruction and formal didactic experience with this component of their curriculum. Methods: A survey was conducted consisting of closed-ended questions investigating medical record documentation in the ED. The survey was distributed to all EM, EM-internal medicine, and EM-pediatrics residents taking the 1999 American Board of Emergency Medicine (ABEM) In-Training examination. Five EM residents and the Society for Academic Emergency Medicine (SAEM) board of directors prevalidated the survey. Summary statistics were calculated and resident levels were compared for each question using either chi-square or Fisher's exact test. Alpha was 0.05 for all comparisons. Results: Completed surveys were returned from 88.5% of the respondents. A small minority of the residents code their own charts (6%). Patient encounters are most frequently documented on free-form handwritten charts (38%), and a total of 76% of the respondents reported using handwritten forms as a portion of the patient's final chart. Twenty-nine percent reported delays of more than 30 minutes to access medical record information for a patient evaluated in their ED within the previous 72 hours. Twenty-five percent "never" record their supervising faculty's involvement in patient care, and another 25% record that information "1-25%" of the time. Seventy-nine percent are "never" or "rarely" requested by their faculty to clarify or add to medical records for billing purposes.
Only 4% of the EM residents were "extremely confident" in their ability to perform billing and coding, and more than 80% reported not knowing the physician charges for services or procedures performed in the ED. Conclusions: The handwritten chart is the most widely used method of patient care documentation, either entirely or as a component of a templated chart. Most EM residents do not document their faculty's participation in the care of patients. This could lead to overestimation of faculty noncompliance with HCFA billing guidelines. Emergency medicine residents are not confident in their knowledge of medical record documentation and coding procedures, nor of charges for services rendered in the ED. [source]


    Quantification and correction of bias in tagging SNPs caused by insufficient sample size and marker density by means of haplotype-dropping,

    GENETIC EPIDEMIOLOGY, Issue 1 2008
    Mark M. Iles
    Abstract Tagging single nucleotide polymorphisms (tSNPs) are commonly used to capture genetic diversity cost-effectively. It is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be inadequate and studies underpowered. Using data simulated under a coalescent model, we show that insufficient sample size can lead to overestimation of tSNP efficacy. Quantifying this, we find that even when insufficient marker density is adjusted for, estimates of tSNP efficacy are up to 45% higher than the true values. Even with as many as 100 individuals, estimates of tSNP efficacy may be 9% higher than the true value. We describe a novel method for estimating tSNP efficacy that accounts for limited sample size. The method is based on exclusion of haplotypes, incorporating a previous adjustment for insufficient marker density. We show that this method outperforms an existing bootstrap approach. We compare the efficacy of multimarker and pairwise tSNP selection methods on real data. These confirm our findings with simulated data and suggest that pairwise methods are less sensitive to sample size, but more sensitive to marker density. We conclude that a combination of insufficient sample size and overfitting may cause overestimation of tSNP efficacy and underpowering of studies based on tSNPs. Our novel method corrects much of this bias and is superior to a previous method. However, sample sizes larger than previously suggested may be required for accurate estimation of tSNP efficacy. This has obvious ramifications for tSNP selection both in candidate regions and using HapMap or SNP chips for genome-wide studies. [source]


    Controllable Molecular Doping and Charge Transport in Solution-Processed Polymer Semiconducting Layers

    ADVANCED FUNCTIONAL MATERIALS, Issue 12 2009
    Yuan Zhang
    Abstract Here, controlled p-type doping of poly(2-methoxy-5-(2′-ethylhexyloxy)-p-phenylene vinylene) (MEH-PPV) deposited from solution, using tetrafluoro-tetracyanoquinodimethane (F4-TCNQ) as a dopant, is presented. By using a co-solvent, aggregation in solution can be prevented and doped films can be deposited. Upon doping, the currents of MEH-PPV-based hole-only devices increase by several orders of magnitude and clear Ohmic behavior is observed at low bias. Taking the density dependence of the hole mobility into account, the free hole concentration due to doping can be derived. It is found that a molar doping ratio of 1 F4-TCNQ dopant per 600 repeat units of MEH-PPV leads to a free carrier density of 4 × 10²² m⁻³. Neglecting the density-dependent mobility would lead to an overestimation of the free hole density by an order of magnitude. The free hole densities are further confirmed by impedance measurements on Schottky diodes based on F4-TCNQ-doped MEH-PPV and a silver electrode. [source]


    Site-level evaluation of satellite-based global terrestrial gross primary production and net primary production monitoring

    GLOBAL CHANGE BIOLOGY, Issue 4 2005
    David P. Turner
    Abstract Operational monitoring of global terrestrial gross primary production (GPP) and net primary production (NPP) is now underway using imagery from the satellite-borne Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Evaluation of MODIS GPP and NPP products will require site-level studies across a range of biomes, with close attention to numerous scaling issues that must be addressed to link ground measurements to the satellite-based carbon flux estimates. Here, we report results of a study aimed at evaluating MODIS NPP/GPP products at six sites varying widely in climate, land use, and vegetation physiognomy. Comparisons were made for twenty-five 1 km² cells at each site, with 8-day averages for GPP and an annual value for NPP. The validation data layers were made with a combination of ground measurements, relatively high-resolution satellite data (Landsat Enhanced Thematic Mapper Plus at ~30 m resolution), and process-based modeling. There was strong seasonality in the MODIS GPP at all sites, and mean NPP ranged from 80 g C m⁻² yr⁻¹ at an arctic tundra site to 550 g C m⁻² yr⁻¹ at a temperate deciduous forest site. There was not a consistent over- or underprediction of NPP across sites relative to the validation estimates. The closest agreements in NPP and GPP were at the temperate deciduous forest, arctic tundra, and boreal forest sites. There was moderate underestimation in the MODIS products at the agricultural field site, and strong overestimation at the desert grassland and at the dry coniferous forest sites. Analyses of specific inputs to the MODIS NPP/GPP algorithm (notably the fraction of photosynthetically active radiation absorbed by the vegetation canopy, the maximum light use efficiency (LUE), and the climate data) revealed the causes of the over- and underestimates.
Suggestions for algorithm improvement include selectively altering values for maximum LUE (based on observations at eddy covariance flux towers) and parameters regulating autotrophic respiration. [source]


    Estimates of CO2 uptake and release among European forests based on eddy covariance data

    GLOBAL CHANGE BIOLOGY, Issue 9 2004
    Albert I. J. M. Van Dijk
    Abstract The net ecosystem exchange (NEE) of forests represents the balance of gross primary productivity (GPP) and respiration (R). Methods to estimate these two components from eddy covariance flux measurements are usually based on a functional relationship between respiration and temperature that is calibrated for night-time (respiration) fluxes and subsequently extrapolated using daytime temperature measurements. However, respiration fluxes originate from different parts of the ecosystem, each of which experiences its own course of temperature. Moreover, if the temperature-respiration function is fitted to combined data from different stages of biological development or seasons, a spurious temperature effect may be included that will lead to overestimation of the direct effect of temperature and therefore to overestimates of daytime respiration. We used the EUROFLUX eddy covariance data set for 15 European forests and pooled data per site, month and for conditions of low and sufficient soil moisture, respectively. We found that using air temperature (measured above the canopy) rather than soil temperature (measured 5 cm below the surface) yielded the most reliable and consistent exponential (Q10) temperature-respiration relationship. A fundamental difference in air temperature-based Q10 values for different sites, times of year or soil moisture conditions could not be established; all were in the range 1.6-2.5. However, base respiration (R0, i.e. respiration rate scaled to 0°C) did vary significantly among sites and over the course of the year, with increased base respiration rates during the growing season. We used the overall mean Q10 of 2.0 to estimate annual GPP and R. Testing suggested that the uncertainty in total GPP and R associated with the method of separation was generally well within 15%.
For the sites investigated, we found a positive relationship between GPP and R, indicating that there is a latitudinal trend in NEE because the absolute decrease in GPP towards the pole is greater than in R. [source]
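The exponential (Q10) relationship referred to above has the standard form R(T) = R0 · Q10^(T/10). The sketch below fits it to synthetic night-time fluxes by least squares on the log scale, the usual flux-partitioning step; the parameter values, noise level, and temperature range are illustrative assumptions, not EUROFLUX values.

```python
# Sketch of the standard Q10 respiration model used in flux partitioning:
# R(T) = R0 * Q10**(T/10), fitted to night-time fluxes via linear
# regression of log(R) on T. Synthetic, illustrative data only.
import math
import random

def q10_model(temp_c, r0, q10):
    """Respiration at air temperature temp_c; r0 is the rate at 0 degrees C."""
    return r0 * q10 ** (temp_c / 10.0)

def fit_q10(temps, fluxes):
    """Least-squares fit of log R = log r0 + (T/10) * log Q10."""
    xs = [t / 10.0 for t in temps]
    ys = [math.log(r) for r in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), math.exp(slope)  # (r0, q10)

random.seed(1)
temps = [random.uniform(0.0, 25.0) for _ in range(200)]   # "night-time" T
fluxes = [q10_model(t, 2.5, 2.0) * math.exp(random.gauss(0.0, 0.05))
          for t in temps]                                 # noisy "fluxes"

r0, q10 = fit_q10(temps, fluxes)
daytime_r = q10_model(18.0, r0, q10)  # extrapolation to a daytime temperature
print(r0, q10, daytime_r)
```

This also makes the abstract's warning concrete: pooling data across seasons inflates the fitted slope (and hence Q10) whenever R0 itself varies seasonally, because the seasonal R0 signal is absorbed into the temperature term.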


    Uncertainties in interpretation of isotope signals for estimation of fine root longevity: theoretical considerations

    GLOBAL CHANGE BIOLOGY, Issue 7 2003
    Yiqi Luo
    Abstract This paper examines uncertainties in the interpretation of isotope signals when estimating fine root longevity, particularly in forests. The isotope signals are depleted δ13C values from elevated CO2 experiments and enriched δ14C values from bomb 14C in atmospheric CO2. For the CO2 experiments, I explored the effects of six root mortality patterns (on-off, proportional, constant, normal, left-skew, and right-skew distributions), five levels of nonstructural carbohydrate (NSC) reserves, and increased root growth on root δ13C values after CO2 fumigation. My analysis indicates that fitting a linear equation to δ13C data provides unbiased estimates of longevity only if root mortality follows an on-off model, without dilution of isotope signals by pretreatment NSC reserves, and under a steady state between growth and death. If root mortality follows the other patterns, the linear extrapolation considerably overestimates root longevity. In contrast, fitting an exponential equation to δ13C data underestimates longevity with all the mortality patterns except the proportional one. With either linear or exponential extrapolation, dilution of isotope signals by pretreatment NSC reserves could result in overestimation of root longevity by several-fold. Root longevity is underestimated if elevated CO2 stimulates fine root growth. For the bomb 14C approach, I examined the effects of four mortality patterns (on-off, proportional, constant, and normal distribution) on root δ14C values. For a given δ14C value, the proportional pattern usually provides a shorter estimate of root longevity than the other patterns. Overall, we have to improve our understanding of root growth and mortality patterns and to measure NSC reserves in order to reduce uncertainties in estimated fine root longevity from isotope data. [source]
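The linear-extrapolation bias described above is easy to demonstrate. In the sketch below roots die at a constant proportional rate, so the pretreatment-carbon fraction decays as f(t) = exp(-t/tau) with tau the mean longevity; extrapolating a straight line fitted to the observations out to f = 0 then overestimates tau. The time units and sampling scheme are illustrative assumptions, not values from the paper.

```python
# Sketch of the linear-extrapolation bias under a proportional mortality
# pattern: the pretreatment-carbon fraction decays as f(t) = exp(-t / tau),
# where tau is the true mean root longevity. Illustrative values only.
import math

tau = 1.0                                     # true mean longevity (arbitrary units)
times = [0.1 * k for k in range(11)]          # observation times over [0, 1]
fracs = [math.exp(-t / tau) for t in times]   # proportional mortality decay

# Ordinary least-squares line f ~ a + b*t through the observations:
n = len(times)
mt, mf = sum(times) / n, sum(fracs) / n
b = (sum((t - mt) * (f - mf) for t, f in zip(times, fracs))
     / sum((t - mt) ** 2 for t in times))
a = mf - b * mt

linear_estimate = -a / b   # time at which the fitted line reaches f = 0
print(linear_estimate)     # exceeds tau: linear extrapolation overestimates
```

With these settings the straight-line intercept lands around 1.5 tau, roughly a 50% overestimate of longevity; fitting an exponential to the same data would recover tau exactly here, because the mortality pattern is proportional.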


    Drawdown and Stream Depletion Produced by Pumping in the Vicinity of a Partially Penetrating Stream

    GROUND WATER, Issue 5 2001
    James J. Butler Jr.
    Commonly used analytical approaches for estimation of pumping-induced drawdown and stream depletion are based on a series of idealistic assumptions about the stream-aquifer system. A new solution has been developed for estimation of drawdown and stream depletion under conditions that are more representative of those in natural systems (finite width stream of shallow penetration adjoining an aquifer of limited lateral extent). This solution shows that the conventional assumption of a fully penetrating stream will lead to a significant overestimation of stream depletion (> 100%) in many practical applications. The degree of overestimation will depend on the value of the stream leakance parameter and the distance from the pumping well to the stream. Although leakance will increase with stream width, a very wide stream will not necessarily be well represented by a model of a fully penetrating stream. The impact of lateral boundaries depends upon the distance from the pumping well to the stream and the stream leakance parameter. In most cases, aquifer width must be on the order of hundreds of stream widths before the assumption of a laterally infinite aquifer is appropriate for stream-depletion calculations. An important assumption underlying this solution is that stream-channel penetration is negligible relative to aquifer thickness. However, an approximate extension to the case of nonnegligible penetration provides reasonable results for the range of relative penetrations found in most natural systems (up to 85%). Since this solution allows consideration of a much wider range of conditions than existing analytical approaches, it could prove to be a valuable new tool for water management design and water rights adjudication purposes. [source]


    Diagnosis of pancreatic cancer

    HPB, Issue 5 2006
    Fumihiko Miura
    Abstract The ability to diagnose pancreatic carcinoma has been rapidly improving with the recent advances in diagnostic techniques such as contrast-enhanced Doppler ultrasound (US), helical computed tomography (CT), enhanced magnetic resonance imaging (MRI), and endoscopic US (EUS). Each technique has advantages and limitations, making the selection of the proper diagnostic technique, in terms of purpose and characteristics, especially important. Abdominal US is the modality often used first to identify a cause of abdominal pain or jaundice, while the accuracy of conventional US for diagnosing pancreatic tumors is only 50-70%. CT is the most widely used imaging examination for the detection and staging of pancreatic carcinoma. Pancreatic adenocarcinoma is generally depicted as a hypoattenuating area on contrast-enhanced CT. The reported sensitivity of helical CT in revealing pancreatic carcinoma is high, ranging between 89% and 97%. Multi-detector-row (MD) CT may offer an improvement in the early detection and accurate staging of pancreatic carcinoma. It should be taken into consideration that some pancreatic adenocarcinomas are depicted as isoattenuating and that pancreatitis accompanying pancreatic adenocarcinoma might occasionally result in overestimation of staging. T1-weighted spin-echo images with fat suppression and dynamic gradient-echo MR images enhanced with gadolinium have been reported to be superior to helical CT for detecting small lesions. However, chronic pancreatitis and pancreatic carcinoma cannot be distinguished on the basis of degree and timing of enhancement on dynamic gadolinium-enhanced MRI. EUS is superior to spiral CT and MRI in the detection of small tumors, and can also localize lymph node metastases or vascular tumor infiltration with high sensitivity. EUS-guided fine-needle aspiration biopsy is a safe and highly accurate method for tissue diagnosis in patients with suspected pancreatic carcinoma.
18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) has been suggested as a promising modality for noninvasive differentiation between benign and malignant lesions. Previous studies reported the sensitivity and specificity of FDG-PET for detecting malignant pancreatic tumors as 71-100% and 64-90%, respectively. FDG-PET does not replace, but is complementary to, morphologic imaging; therefore, in doubtful cases, it must be combined with other imaging modalities. [source]


    Limitations of previously published systematic reviews evaluating the outcome of endodontic treatment

    INTERNATIONAL ENDODONTIC JOURNAL, Issue 8 2009
    M-K. Wu
    Abstract The aim of this work was to identify the limitations of previously published systematic reviews evaluating the outcome of root canal treatment. Traditionally, periapical radiography has been used to assess the outcome of root canal treatment, with the absence of a periapical radiolucency being considered confirmation of a healthy periapex. However, a high percentage of cases confirmed as healthy by radiographs revealed apical periodontitis on cone beam computed tomography (CBCT) and by histology. In teeth where reduced size of the existing radiolucency was diagnosed by radiographs and considered to represent periapical healing, enlargement of the lesion was frequently confirmed by CBCT. In clinical studies, two additional factors may have further contributed to the overestimation of successful outcomes after root canal treatment: (i) extractions and re-treatments were rarely recorded as failures; and (ii) the recall rate was often lower than 50%. The periapical index (PAI), frequently used for determination of success, was based on radiographic and histological findings in the periapical region of maxillary incisors. The validity of using the PAI for all tooth positions might be questionable, as the thickness of the cortical bone and the position of the root tip in relation to the cortex vary with tooth position. In conclusion, the serious limitations of longitudinal clinical studies restrict the correct interpretation of root canal treatment outcomes. Systematic reviews reporting the success rates of root canal treatment without referring to these limitations may mislead readers. The outcomes of root canal treatment should be re-evaluated in long-term longitudinal studies using CBCT and stricter evaluation criteria. [source]