Percentile
Selected Abstracts

The influences of data precision on the calculation of temperature percentile indices. INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 3 2009. Xuebin Zhang
Abstract: Percentile-based temperature indices are part of the suite of indices developed by the WMO CCl/CLIVAR/JCOMM Expert Team on Climate Change Detection and Indices. They have been used to analyse changes in temperature extremes for various parts of the world. We identify a bias in percentile-based indices which consist of annual counts of threshold exceedance. This bias occurs when there is insufficient precision in temperature data, and affects the estimation of the means and trends of percentile-based indices. Such imprecision occurs when temperature observations are truncated or rounded prior to being recorded and archived. The impacts on the indices depend upon the type of relation (i.e. temperature greater than, or greater than or equal to) used to determine the exceedance rate. This problem can be solved, when the loss of precision is not overly severe, by adding a small random number to artificially restore data precision. While these adjustments do not improve the accuracy of individual observations, the exceedance rates that are computed from data adjusted in this way have properties, such as long-term mean and trend, which are similar to those directly estimated from data that are originally of the same precision as the adjusted data. Copyright © 2008 Royal Meteorological Society [source]

Percentile-based spread: a more accurate way to compare crystallographic models. ACTA CRYSTALLOGRAPHICA SECTION D, Issue 9 2010. Edwin Pozharski
The comparison of biomacromolecular crystal structures is traditionally based on the root-mean-square distance between corresponding atoms. This measure is sensitive to the presence of outliers, which inflate it disproportionately to their fraction. An alternative measure, the percentile-based spread (p.b.s.), is proposed and is shown to represent the average variation in atomic positions more adequately. It is discussed in the context of isomorphous crystal structures, conformational changes and model ensembles generated by repetitive automated rebuilding. [source]

A Bootstrap Control Chart for Weibull Percentiles. QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 2 2006. Michele D. Nichols
Abstract: The problem of detecting a shift of a percentile of a Weibull population in a process monitoring situation is considered. The parametric bootstrap method is used to establish lower and upper control limits for monitoring percentiles when process measurements have a Weibull distribution. Small percentiles are of importance when observing tensile strength, and it is desirable to detect their downward shift. The performance of the proposed bootstrap percentile charts is considered based on computer simulations, and some comparisons are made with an existing Weibull percentile chart. The new bootstrap chart indicates a shift in the process percentile substantially quicker than the previously existing chart, while maintaining comparable average run lengths when the process is in control. An illustrative example concerning the tensile strength of carbon fibers is presented. Copyright © 2005 John Wiley & Sons, Ltd. [source]
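
The control-chart abstract above describes setting limits for a monitored Weibull percentile via a parametric bootstrap. The Python sketch below is a rough, hypothetical illustration of that general idea, not the authors' actual procedure; the monitored percentile, subgroup size, bootstrap count and false-alarm rate are arbitrary choices for the example.

    import numpy as np
    from scipy.stats import weibull_min

    def bootstrap_percentile_limits(x, p=0.10, B=2000, alpha=0.0027, seed=1):
        """Parametric-bootstrap control limits for the p-th percentile of a Weibull process."""
        rng = np.random.default_rng(seed)
        # Fit a two-parameter Weibull (location fixed at zero) to in-control data.
        shape, _, scale = weibull_min.fit(x, floc=0)
        n = len(x)
        boot = np.empty(B)
        for b in range(B):
            # Resample a subgroup from the fitted model and re-estimate the percentile.
            xb = weibull_min.rvs(shape, scale=scale, size=n, random_state=rng)
            sh_b, _, sc_b = weibull_min.fit(xb, floc=0)
            boot[b] = weibull_min.ppf(p, sh_b, scale=sc_b)
        # Control limits from the bootstrap distribution of the estimated percentile.
        return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

    # Example with simulated tensile strengths (made-up Weibull parameters).
    strengths = weibull_min.rvs(3.0, scale=2.5, size=100, random_state=0)
    print(bootstrap_percentile_limits(strengths))
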
Following in mother's footsteps? Mother, cardiovascular disease 15 years after gestational diabetes, daughter risks for insulin resistance. DIABETIC MEDICINE, Issue 3 2010. Diabet. Med. 27, 257–265 (2010)
Abstract. Aims: To determine effects on mothers and daughters of gestational diabetes mellitus/gestational impaired glucose tolerance (GDM/GIGT) on their future metabolic and cardiovascular risks. Methods: Case mothers who had GDM/GIGT in pregnancy (cases; n = 90), normoglycaemic control women (n = 99) and their daughters underwent lifestyle assessment and metabolic tests 15 years post-partum. Results: Prevalence of glucose intolerance (GI) in daughters was 1.1%. Maternal prevalence was 44.4% in cases compared to 13.1% in controls, with conversion best predicted by weight gain. Case daughters had higher insulin resistance (IR) and greater waist circumference (WC) (51.2%) relative to control daughters (36.4%, p < 0.05), made worse if case mothers became GI at follow-up (65%) (relative risk = 1.8; 95% confidence interval 1.2–2.9). In multivariable linear regression analyses adjusting for daughters' birthweight, maternal obesity (>30.0 kg/m2) at 15 years and mothers' case-control status were strong predictors of daughters' WC (p < 0.01; p < 0.01, respectively). For daughters' body mass index (BMI) percentile and percentage of body fat, maternal obesity was a stronger predictor (p < 0.01; p < 0.001) than mothers' case-control status (p < 0.01; p = 0.09). Conclusions: GDM/GIGT pregnancies led to increased conversion to GI in mothers, but minimal conversion in daughters. Case daughters have increased risk of central adiposity and insulin resistance, whereas maternal obesity strongly predicted daughters' BMI percentile and per cent of body fat. Controlling hyperglycaemia in pregnancy and family weight management may provide the key to preventing offspring obesity and glucose intolerance post GDM/GIGT. [source]

Phobia of self-injecting and self-testing in insulin-treated diabetes patients: opportunities for screening. DIABETIC MEDICINE, Issue 8 2001. E. D. Mollema
Abstract. Aims: To define clinically relevant cut-off points for severe fear of self-injecting (FSI) and self-testing (FST) (phobia) in insulin-treated patients with diabetes, and to estimate the magnitude of these phobias in our research population. Methods: FSI and FST were assessed in a cross-sectional survey using the Diabetes Fear of Injecting and Self-testing Questionnaire (D-FISQ). A sample of 24 insulin-treated adult diabetic patients was selected from the high-scorers on FSI and/or FST (≥95th percentile). FSI and FST were re-assessed, after which patients participated in a behavioural avoidance test (BAT), thereby determining the current level of avoidance of either self-injecting or self-testing. FSI and FST scores were linked to the outcome of the BATs. Cut-off scores for severe FSI/FST were determined and extrapolated to the total study population (n = 1275). Results: Seven patients participated in the self-injecting BAT: two patients refused to perform an extra injection. In the self-testing BAT (n = 17) four patients declined to perform the extra blood glucose self-test. Extrapolation of FSI and FST cut-off scores to the total research population showed that 0.2–1.3% of the population scored in the severe FSI range. In FST, 0.6–0.8% of the total study population obtained scores in the cut-off range. Conclusions: Severe FSI and FST, characterized by emotional distress and avoidance behaviour, seem to occur in a small group of insulin-treated patients with diabetes. The D-FISQ can be of use to health care professionals (physicians, nurse specialists) in quickly providing valuable information on levels of FSI and FST in diabetes patients. Diabet. Med. 18, 671–674 (2001) [source]
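
The gestational diabetes abstract above reports effect sizes such as a relative risk of 1.8 with a 95% confidence interval of 1.2–2.9. As a generic, hypothetical illustration (invented counts, not the study's data), a relative risk and an approximate confidence interval can be computed from a 2×2 table like this:

    import numpy as np

    def relative_risk(events_exposed, n_exposed, events_control, n_control, z=1.96):
        """Relative risk with an approximate 95% CI (log-RR normal approximation)."""
        p1 = events_exposed / n_exposed
        p0 = events_control / n_control
        rr = p1 / p0
        se_log_rr = np.sqrt(1 / events_exposed - 1 / n_exposed
                            + 1 / events_control - 1 / n_control)
        lower = np.exp(np.log(rr) - z * se_log_rr)
        upper = np.exp(np.log(rr) + z * se_log_rr)
        return rr, lower, upper

    # Invented counts, purely illustrative.
    print(relative_risk(events_exposed=30, n_exposed=60, events_control=25, n_control=90))
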
Early home-based intervention in the Netherlands for children at familial risk of dyslexia. DYSLEXIA, Issue 3 2009. Sandra G. van Otterloo
Abstract: Dutch children at higher familial risk of reading disability received a home-based intervention programme before formal reading instruction started, to investigate whether this would reduce the risk of dyslexia. The experimental group (n = 23) received a specific training in phoneme awareness and letter knowledge. A control group (n = 25) received a non-specific training in morphology, syntax, and vocabulary. Both interventions were designed to take 10 min a day, 5 days a week, for 10 weeks. Most parents were sufficiently able to work with the programme properly. At post-test the experimental group had gained more on phoneme awareness than the control group. The control group gained more on one of the morphology measures. On average, these specific training results did not lead to significant group differences in first-grade reading and spelling measures. However, fewer experimental children scored below the 10th percentile on word recognition. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Ventricular Asynchrony of Time-to-Peak Systolic Velocity in Structurally Normal Heart by Tissue Doppler Imaging. ECHOCARDIOGRAPHY, Issue 7 2010. Hakimeh Sadeghian M.D.
Background: Echocardiographic measurements of time-to-peak systolic velocities (Ts) are helpful for assessing the degree of cardiac asynchrony. We assessed the degree of ventricular asynchrony in structurally normal hearts according to Ts by tissue Doppler imaging. Methods: We performed conventional echocardiography and tissue velocity imaging for 65 healthy adult volunteers to measure the Ts of 12 left ventricular segments at the mid and basal levels, the delay of Ts, and the standard deviation (SD) of Ts in all and basal segments. Six frequently used markers of dyssynchrony were measured and were also compared between men and women. Data are presented as median (25th and 75th percentile). Results: Septal-lateral and anteroseptal-posterior delays were 50 (20, 90) and 20 (0, 55) ms. The delays between the longest and the shortest Ts in basal and all segments were 100 (80, 120) and 110 (83, 128) ms, respectively. SD of Ts was 39 (24, 52) ms for basal and 41 (28, 51) ms for all segments. Overall, 76.9% of cases had at least one marker of dyssynchrony. Frequencies of dyssynchrony markers were almost significantly higher in women compared to men. The most frequently observed dyssynchrony marker was SD of Ts of all segments (70.8%) and the lowest was anteroseptal-posterior delay (21.5%). Conclusions: The normal population commonly had dyssynchrony by previously described markers, and many of these markers were more frequent in women. Conducting more studies on normal populations with other tissue Doppler modalities may give a better description of cardiac synchronicity. (Echocardiography 2010;27:823-830) [source]

Predictors of hangover during a week of heavy drinking on holiday. ADDICTION, Issue 3 2010. Morten Hesse
ABSTRACT Aims: To investigate predictors of hangover during a week of heavy drinking in young adults. Design: Observational prospective study. Methods: A total of 112 young Danish tourists were interviewed on three occasions during their holiday.
They completed the Acute Hangover Scale and answered questions about their alcohol consumption and rest duration. The incidence of hangover was analysed as the proportion of heavy drinkers (i.e. those reporting drinking more than 12 standard units of alcohol during the night before) scoring above the 90th percentile of light drinkers (i.e. those who had consumed fewer than seven standard units the night before). We estimated the course and predictors of hangover using random effects regression. Results: The incidence of hangover was 68% after drinking more than 12 standard units in the whole sample. The severity of hangover increased significantly during a week of heavy drinking and there was a time × number of drinks interaction, indicating that the impact of alcohol consumed on hangover became more pronounced later in the week. Levels of drinking before the holiday did not predict hangover. Conclusions: Hangovers after heavy drinking during holidays appear to be related both to the amount drunk and to time into the holiday. [source]

Finnegan neonatal abstinence scoring system: normal values for first 3 days and weeks 5–6 in non-addicted infants. ADDICTION, Issue 3 2010. Urs Zimmermann-Baer
ABSTRACT Objective: The neonatal abstinence scoring system proposed by Finnegan is used widely in neonatal units to initiate and to guide therapy in babies of opiate-dependent mothers. The purpose of this study was to assess the variability of the scores in newborns and infants not exposed to opiates during the first 3 days of life and during 3 consecutive days in weeks 5 or 6. Patients and methods: Healthy neonates born after 34 completed weeks of gestation, whose parents denied opiate consumption and gave informed consent, were included in this observational study. Infants with signs or symptoms of disease or with feeding problems were excluded. A modified scoring system was used every 8 hours during 72 hours by trained nurses; 102 neonates were observed for the first 3 days of life and 26 neonates in weeks 5–6. A meconium sample and a urine sample at weeks 5–6 were stored from all infants, to be analysed for drugs when the baby scored high. Given a non-Gaussian distribution, the scores were represented as percentiles. Results: During the first 3 days of life median scores remained stable at 2 but the variability increased, with the 95th percentile rising from 5.5 on day 1 to 7 on day 2. At weeks 5–6 median values were higher during daytime (50th percentile = 5, 95th percentile = 8) than night-time (50th percentile = 2, 95th percentile = 6, P = 0.02). Conclusion: Scores increase from days 1–3 to weeks 5–6 and show day–night cycles at weeks 5–6. Values above 8 can be considered pathological. These data may help to raise suspicion of narcotic withdrawal and to guide therapy. [source]
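
The hangover abstract above defines incidence by comparing heavy drinkers' scores against a percentile threshold taken from light drinkers. A minimal numpy sketch of that kind of computation, using invented scores rather than the study's data:

    import numpy as np

    rng = np.random.default_rng(0)
    # Invented Acute Hangover Scale scores, for illustration only.
    light_scores = rng.normal(loc=1.0, scale=0.8, size=200).clip(min=0)
    heavy_scores = rng.normal(loc=3.0, scale=1.5, size=150).clip(min=0)

    # Threshold: 90th percentile of the light drinkers' scores.
    threshold = np.percentile(light_scores, 90)

    # Incidence: proportion of heavy drinkers scoring above that threshold.
    incidence = np.mean(heavy_scores > threshold)
    print(f"threshold = {threshold:.2f}, incidence = {incidence:.0%}")
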
The cost-effectiveness of antidepressants for smoking cessation in chronic obstructive pulmonary disease (COPD) patients. ADDICTION, Issue 12 2009. Constant P. Van Schayck
ABSTRACT Objectives: In healthy smokers, antidepressants can double the odds of cessation. Because of its four times lower costs and comparable efficacy in healthy smokers, nortriptyline appears to be favourable compared to bupropion. We assessed which of the two drugs was most effective and cost-effective in stopping smoking after 1 year, compared with placebo, among smokers at risk of or with existing chronic obstructive pulmonary disease (COPD). Methods: A total of 255 participants, aged 30–70 years, received smoking cessation counselling and were randomly assigned bupropion, nortriptyline or placebo for 12 weeks. Prolonged abstinence from smoking was defined as a participant's report of no cigarettes from week 4 to week 52, validated by urinary cotinine. Costs were calculated using a societal perspective and uncertainty was assessed using the bootstrap method. Results: The prolonged abstinence rate was 20.9% with bupropion, 20.0% with nortriptyline and 13.5% with placebo. The differences between bupropion and placebo [relative risk (RR) = 1.6; 95% confidence interval (CI) 0.8–3.0] and between nortriptyline and placebo (RR = 1.5; 95% CI 0.8–2.9) were not significant. Severity of airway obstruction did not influence abstinence significantly. Societal costs were €1368 (2.5th–97.5th percentile 193–5260) with bupropion, €1906 (2.5th–97.5th percentile 120–17 761) with nortriptyline and €1212 (2.5th–97.5th percentile 96–6602) with placebo. Were society willing to pay more than €2000 for a quitter, bupropion would be most likely to be cost-effective. Conclusions: Bupropion and nortriptyline seem to be equally effective, but bupropion appears to be more cost-effective when compared to placebo and nortriptyline. This impression holds when using only health care costs. As the cost-effectiveness analyses carry some uncertainty, the results should be interpreted with care and future studies are needed to replicate the findings. [source]

Comparing the effects of entertainment media and tobacco marketing on youth smoking in Germany. ADDICTION, Issue 5 2009. James D. Sargent
ABSTRACT Aims: To examine differential effects of smoking in films and tobacco advertising on adolescent smoking. We hypothesize that movie smoking will have greater effects on smoking initiation, whereas tobacco advertising receptivity will primarily affect experimentation. Design: Longitudinal observational study of adolescents. Setting: School-based surveys conducted in Schleswig-Holstein, Germany. Participants: A total of 4384 adolescents aged 11–15 years at baseline and re-surveyed 1 year later; ever-smoking prevalence was 38% at time 1. Measurements: The main outcome variable combined two items assessing life-time and current smoking (alpha = 0.87). Baseline never smokers were analyzed separately from those who had tried smoking (ever smokers). Exposure to smoking in 398 internationally distributed US movies was modeled as a continuous variable, with 0 corresponding to the 5th percentile and 1 to the 95th percentile of exposure. Tobacco marketing receptivity consisted of naming a brand for a favorite tobacco advertisement. Ordinal logistic regressions controlled for socio-demographics, other social influences, personality characteristics of the adolescent and parenting style. Findings: Whereas 34% of ever smokers were receptive to tobacco marketing at time 1, only 6% of never smokers were. Among time 1 never smokers, exposure to movie smoking was a significantly stronger predictor of higher time 2 smoking level [adjusted proportional odds ratio = 2.76, 95% confidence interval (1.84, 4.15)] than was tobacco marketing receptivity (1.53 [1.07, 2.20]). Among time 1 ever smokers, both tobacco marketing receptivity and exposure to movie smoking predicted higher levels of time 2 smoking [2.17 (1.78, 2.63) and 1.62 (1.18, 2.23), respectively], and the two estimates were not significantly different.
Conclusions: In this longitudinal study, exposure to movie smoking was a stronger predictor of smoking initiation than tobacco marketing receptivity, which was more common among ever smokers. The results suggest that entertainment media smoking should be emphasized in programs aimed at preventing onset, and both exposures should be emphasized in programs aimed at experimental smokers. [source]

Characterization and uncertainty analysis of VOCs emissions from industrial wastewater treatment plants. ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 3 2010. Kaishan Zhang
Abstract: Air toxics from industrial wastewater treatment plants (IWTPs) impose serious health concerns on their surrounding residential neighborhoods. To address such health concerns, one of the key challenges is to quantify the air emissions from the IWTPs. The objective here is to characterize the air emissions from the IWTPs and quantify their associated uncertainty. An IWTP receiving the wastewaters from an airplane maintenance facility is used for illustration, with a focus on the quantification of air emissions for benzyl alcohol, phenol, methylene chloride, 2-butanone, and acetone. Two general fate models, i.e., WATER9 and TOXCHEM+V3.0, were used to model the IWTP and quantify the air emissions. Monte Carlo and bootstrap simulation were used for uncertainty analysis. On average, air emissions from the IWTP were estimated to range from 0.003 lb/d to approximately 16 lb/d, with phenol being the highest and benzyl alcohol the least. However, the emissions are associated with large uncertainty. The ratio of the 97.5th percentile to the 2.5th percentile air emissions ranged from 5 to 50 depending on the pollutant. This indicates that point estimates of air emissions might fail to capture the worst scenarios, leading to inaccurate conclusions when used for health risk assessment. © 2009 American Institute of Chemical Engineers Environ Prog, 2010 [source]

Exposure and effects assessment of resident mink (Mustela vison) exposed to polychlorinated dibenzofurans and other dioxin-like compounds in the Tittabawassee River basin, Midland, Michigan, USA. ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 10 2008. Matthew J. Zwiernik
Abstract: Historically, sediments and floodplain soils of the Tittabawassee River (TR; MI, USA) have been contaminated with polychlorinated dibenzofurans (PCDFs), polychlorinated dibenzo-p-dioxins (PCDDs), and polychlorinated biphenyls (PCBs). Median concentrations of 2,3,7,8-tetrachlorodibenzo-p-dioxin equivalents (TEQs), based on 2006 World Health Organization tetrachlorodibenzo-p-dioxin toxic equivalency factors (TEFs), in the diet of mink (Mustela vison) ranged from 6.8 × 10⁻¹ ng TEQ/kg wet weight upstream of the primary source of PCDF to 3.1 × 10¹ ng TEQ/kg wet weight downstream. Estimates of toxicity reference values (TRVs) derived from laboratory studies with individual PCDD/PCDF and PCB congeners or mixtures of those congeners, as well as application of TEFs, were compared to site-specific measures of mink exposure. Hazard quotients based on exposures expressed as concentrations of TEQs in the 95th percentile of the mink diet or liver and the no-observable-adverse-effect TRVs were determined to be 1.7 and 8.6, respectively. The resident mink survey, however, including number of mink present, morphological measures, sex ratios, population age structure, and gross and histological tissue examination, indicated no observable adverse effects.
This resulted for multiple reasons: first, the exposure estimate was conservative; and second, the predominantly PCDF congener mixture present in the TR appeared to be less potent than predicted from TEQs, based on dose–response comparisons. Given this, there appears to be great uncertainty in comparing the measured concentrations of TEQs at this site to TRVs derived from different congeners or congener mixtures. Based on the lack of negative outcomes for any of the measurement endpoints examined, including jaw lesions, a sentinel indicator of possible adverse effects, and direct measures of effects on individual mink and their population, it was concluded that current concentrations of PCDDs/PCDFs were not causing adverse effects on resident mink of the TR. [source]

Risk assessment of Magnacide® H herbicide at Río Colorado irrigation channels (Argentina). Tier 3: Studies on native species. ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 1 2007. Andrés Venturino
Abstract: We evaluated the potential environmental risk of the herbicide Magnacide® (Baker Petrolite, TX, USA) using native species from Argentina, representing the ecosystem at the Irrigation Corporation (CORFO) channels at the Colorado River mouth, Buenos Aires, Argentina. Six species including fish, toads, snails, crustaceans, and insects were selected to perform studies on acute toxicity and repeated exposure effects. The Magnacide H susceptibility ranking was Bufo arenarum (lethal concentration 50 [LC50] = 0.023 mg/L), Oncorhynchus mykiss (LC50 = 0.038 mg/L), Heleobia parchappii (LC50 = 0.21 mg/L), Hyalella curvispina (LC50 = 0.24 mg/L), Simulium spp. (LC50 = 0.60 mg/L), and Chironomus spp. (LC50 = 2.83 mg/L). The risk limit at the 10th percentile (0.013 mg/L), determined by probit analysis on the sensitivity distribution, was similar to the one calculated from literature data. Risk assessment based on field application data suggested lethal exposures for more than 70 to 90% of the species up to 20 km downstream from the application point. Repeated exposures of amphibian larvae to Magnacide H at the lowest-observed-effect concentration caused some effects during the first exposure, but without cumulative effects. Amphipods were insensitive to repeated exposures, showing no cumulative effects. Whether short-term exposures may result in long-term sublethal effects on the organism's life history was not addressed by these laboratory tests. In conclusion, tier 3 studies indicate that the Magnacide H application schedule is extremely toxic for most native species at the CORFO–Río Colorado channels, representing a high potential risk in the environment. The real environmental impact must be addressed by field studies at tier 4, giving more information on population effects and communities. [source]
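
The Magnacide abstract above derives a protective concentration as a low percentile of a species sensitivity distribution fitted to the LC50 values. Below is a simplified Python recomputation that fits a log-normal distribution to the six LC50s listed above and takes its 10th percentile; this sketch will not necessarily reproduce the paper's reported 0.013 mg/L, since the authors' probit procedure may differ in detail.

    import numpy as np
    from scipy.stats import norm

    # LC50 values (mg/L) for the six species listed in the abstract above.
    lc50 = np.array([0.023, 0.038, 0.21, 0.24, 0.60, 2.83])

    # Fit a log-normal species sensitivity distribution and take its 10th percentile.
    log_lc50 = np.log10(lc50)
    mu, sigma = log_lc50.mean(), log_lc50.std(ddof=1)
    hc10 = 10 ** norm.ppf(0.10, loc=mu, scale=sigma)

    print(f"Estimated 10th-percentile risk limit: {hc10:.3f} mg/L")
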
Applicability of spraints for monitoring organic contaminants in free-ranging otters (Lutra lutra). ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 11 2006. Nico W. van den Brink
Abstract: In the current study, the use of spraints for monitoring levels of polychlorinated biphenyls (PCBs) in individual otters was experimentally validated. On the basis of a detailed pattern analysis, it is concluded that, in the current study, PCB concentrations in spraints that contain relatively high concentrations of non-metabolizable PCB congeners (PCB 138 and 153 > 42.5% of total PCB concentrations) reflect the internal PCB concentrations of the otter that produced the spraint. In general, however, spraints should be selected that contain relative concentrations of PCB 138 and PCB 153 above the 95th percentile of these congeners in samples from local food items of otters. On the basis of relationships between levels in spraints and internal levels, and on earlier reported effect concentrations, a threshold level range of 1.0 to 2.3 µg/g (lipid normalized) in such spraints is proposed. The validated methods to monitor PCBs in otters may be combined with genetic marker techniques that can assess the identity of the otter that produced the spraints. In such a design, it is feasible to monitor PCB levels in individual free-ranging otters in a truly animal-friendly way. [source]

Approaches for linking whole-body fish tissue residues of mercury or DDT to biological effects thresholds. ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 8 2005. Nancy Beckvar
Abstract: A variety of methods have been used by numerous investigators attempting to link tissue concentrations with observed adverse biological effects. This paper is the first to evaluate in a systematic way different approaches for deriving protective (i.e., unlikely to have adverse effects) tissue residue-effect concentrations in fish using the same datasets. Guidelines for screening papers and a set of decision rules were formulated to provide guidance on selecting studies and obtaining data in a consistent manner. Paired no-effect (NER) and low-effect (LER) whole-body residue concentrations in fish were identified for mercury and DDT from the published literature. Four analytical approaches of increasing complexity were evaluated for deriving protective tissue residues. The four methods were: simple ranking, empirical percentile, tissue threshold-effect level (t-TEL), and cumulative distribution function (CDF). The CDF approach did not yield reasonable tissue residue thresholds based on comparisons to synoptic control concentrations. Of the four methods evaluated, the t-TEL approach best represented the underlying data. A whole-body mercury t-TEL of 0.2 mg/kg wet weight, based largely on sublethal endpoints (growth, reproduction, development, behavior), was calculated to be protective of juvenile and adult fish. For DDT, protective whole-body concentrations of 0.6 mg/kg wet weight in juvenile and adult fish, and 0.7 mg/kg wet weight for early life-stage fish, were calculated. However, these DDT concentrations are considered provisional for reasons discussed in this paper (e.g., paucity of sublethal studies). [source]

Migrating Partial Seizures in Infancy: Expanding the Phenotype of a Rare Seizure Syndrome. EPILEPSIA, Issue 4 2005. Eric Marsh
Summary. Purpose: The constellation of early-onset, unprovoked, alternating electroclinical seizures and neurodevelopmental devastation was first described by Coppola et al. We report six new patients and the prospect of a more optimistic developmental outcome. Methods: Retrospective chart reviews were performed on six infants evaluated at the Children's Hospital of Philadelphia (five patients) and at Hershey Medical Center (one patient) who had electroclinically alternating seizures before 6 months of age. Electroclinical characteristics and long-term follow-up were recorded. Results: All had unprovoked, early-onset (range, 1 day to 3 months; mean, 25 days) intractable electroclinical seizures that alternated between the two hemispheres. Each patient underwent comprehensive brain imaging and neurometabolic workups, which were unrevealing.
In all patients, intractable partial seizures subsequently developed, and a progressive decline of head circumference percentile often occurred with age. Three demonstrated severe developmental delay and hypotonia. All survived, and 7-year follow-up on one patient was quite favorable. Conclusions: Our patients satisfied the seven major diagnostic criteria first described by Coppola et al. The prognosis of this rare neonatal-onset epilepsy syndrome, from the original description and subsequent case reports, was very poor, with 28% mortality, and the majority of survivors were profoundly retarded and nonambulatory. Our patient data validate the diagnostic criteria of this syndrome and further quantify a previously described observation of progressive decline of head circumference percentiles with age. Our data also suggest that the prognosis of this syndrome, although poor, is not as uniformly grim as in the cases reported previously in the literature. [source]

Anti-oxidized low-density lipoprotein antibody levels are associated with the development of type 2 diabetes mellitus. EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 9 2008. L. Garrido-Sánchez
Abstract. Background: Anti-oxidized low-density lipoprotein (LDL) antibodies are associated with the oxidative capacity of plasma, but whether they protect against or promote diabetes is unknown. We undertook a prospective study to determine the predictive capacity of anti-oxidized LDL antibodies for the onset of type 2 diabetes mellitus (T2DM). Materials and methods: We selected 391 non-diabetic women aged 18–65 years. The subjects were classified as being normal (oral glucose tolerance test normal, OGTT-N), or as having impaired glucose tolerance (IGT), impaired fasting glucose (IFG) or T2DM, according to their baseline glucose levels and after an OGTT. The same subjects were studied six years later. The levels of anti-oxidized LDL antibodies were classified as above or below the 50th percentile. Results: Of the women who were OGTT-N at the start of the study and who had anti-oxidized LDL antibody levels below the 50th percentile, only 65·1% were still OGTT-N after 6 years, versus 79·5% of those who had anti-oxidized LDL antibody levels above the 50th percentile (P = 0·015). Women who had IGT or IFG at the start of the study and whose anti-oxidized LDL antibody levels were below the 50th percentile had a relative risk of 9·79 (95% confidence interval, 1·40–68·45) of developing diabetes (P < 0·001). Logistic regression analysis showed that the variables predicting the development of a carbohydrate metabolism disorder in the women after 6 years were body mass index (P < 0·001) and the levels of anti-oxidized LDL antibodies (P = 0·042). Conclusions: Levels of anti-oxidized LDL antibodies are independent predictors for the development of T2DM in women. [source]

Impaired nutritional status in common variable immunodeficiency patients correlates with reduced levels of serum IgA and of circulating CD4+ T lymphocytes. EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 6 2001. M. Muscaritoli
Background: Common variable immunodeficiency (CVI) is a primary defect of the immune system. Infections, persistent diarrhoea and malabsorption may result in malnutrition, which may in turn contribute to increased morbidity. In this paper, the prevalence of malnutrition in CVI was evaluated. Patients and methods: Forty CVI patients (20 male, 20 female, aged 17–75 years) underwent anthropometric measurements from which body mass index, arm fat and muscle area were calculated.
Body mass index values <18·5 and arm fat and muscle area values below the 10th percentile were considered indicative of malnutrition. Patients were divided into four groups according to circulating CD4+ T cells (lower or greater than 300 µL⁻¹) and serum immunoglobulin A (IgA) levels (detectable and undetectable). Results: Body mass index <18·5, and arm fat and muscle area below the 10th percentile, were observed in 23%, 58% and 44% of patients, respectively. Lower values of body mass index, arm fat and muscle area were more frequent in patients with low CD4+ cells and undetectable IgA. Low arm fat values were more frequent in patients with diarrhoea (P = 0·03). Infectious episodes were more frequent in undetectable-IgA than in detectable-IgA patients (P = 0·04). Conclusions: Anthropometric measurements revealed an increased rate of malnutrition in CVI patients, particularly in those with low CD4+ and undetectable IgA, suggesting that selected CVI subjects could be considered for standard or specialized nutritional support. [source]

Haemoglobin and anaemia in a gender perspective: The Tromsø Study. EUROPEAN JOURNAL OF HAEMATOLOGY, Issue 5 2005. Tove Skjelbakken
Abstract. Objectives: To examine the gender-specific distribution of haemoglobin (Hb) and the World Health Organization (WHO) criteria for anaemia compared with the 2.5 percentile for Hb. Methods: A population-based study from Tromsø, Northern Norway. All inhabitants above 24 yr were invited. In total, 26 530 (75%) had their Hb analysed. Results: The 2.5–97.5 percentile of Hb was 129–166 and 114–152 g/L for all men and women, respectively. In men, mean Hb decreased from 148 to 137 g/L between 55–64 and 85+ yr. In women, mean Hb increased from 132 to 137 g/L between 35–44 and 65–74 yr and then decreased to 131 g/L among the oldest. Using the WHO criteria for anaemia (Hb <130 and <120 g/L, men and women respectively), the prevalence of anaemia in men increased with age from 0.6% aged 25–34 to 29.6% aged 85+. For women, the prevalence of anaemia was 9.1%, 2.2% and 16.5% in the age groups of 35–44, 55–64 and 85+ yr, respectively. The WHO criteria gave a two to three times higher prevalence of anaemia compared with the 2.5 percentile of Hb in women, but the difference was small in men. Poor self-rated health was not associated with low values of Hb in women. In men, there was an association in some age groups. Conclusion: The WHO criteria for anaemia and the 2.5 percentile for Hb corresponded well for men, but not for women. The WHO criteria for anaemia may result in medicalization of healthy women. [source]
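
The Tromsø abstract above contrasts fixed WHO cut-offs with a percentile-based reference interval. A small, hypothetical illustration (invented haemoglobin values, not the study's data) of computing the 2.5–97.5 percentile interval and the prevalence below a fixed cut-off:

    import numpy as np

    rng = np.random.default_rng(42)
    # Invented haemoglobin values (g/L) for men, illustration only.
    hb = rng.normal(loc=148, scale=10, size=5000)

    # Percentile-based reference interval: 2.5th to 97.5th percentile.
    lower, upper = np.percentile(hb, [2.5, 97.5])

    # Prevalence of "anaemia" under a fixed WHO-style cut-off for men (<130 g/L).
    prevalence = np.mean(hb < 130)

    print(f"reference interval: {lower:.0f}-{upper:.0f} g/L; below 130 g/L: {prevalence:.1%}")
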
Iron status in Danish men 1984–94: a cohort comparison of changes in iron stores and the prevalence of iron deficiency and iron overload. EUROPEAN JOURNAL OF HAEMATOLOGY, Issue 6 2002. Nils Milman
Abstract. Background and objectives: From 1954 to 1987, flour in Denmark was fortified with 30 mg carbonyl iron per kg. This mandatory fortification was abolished in 1987. The aim of this study was to compare iron status in Danish men before and after abolition of iron fortification. Methods: Iron status (serum ferritin, haemoglobin) was assessed in population surveys in Copenhagen County during 1983–84, comprising 1324 Caucasian men (1024 non-blood-donors, 300 blood donors), and in 1993–94, comprising 1288 Caucasian men (1103 non-blood-donors, 185 donors), equally distributed in age cohorts of 40, 50, 60 and 70 yr. Results: In the 1984 survey, median serum ferritin values in the four age cohorts in non-blood-donors were 136, 141, 133 and 111 µg/L, and in the 1994 survey 177, 173, 186 and 148 µg/L, respectively. The difference was significant in all age groups (P < 0.001). There was no significant difference between the two surveys concerning the prevalence of small iron stores (ferritin 16–32 µg/L), depleted iron stores (ferritin <16 µg/L) or iron-deficiency anaemia (ferritin <13 µg/L and Hb below the 5th percentile for iron-replete men). However, from 1984 to 1994, the prevalence of iron overload (ferritin >300 µg/L) increased from 11.3% to 18.9% (P < 0.0001). During the study period there was an increase in body mass index (P < 0.0001), alcohol consumption (P < 0.03) and use of non-steroidal anti-inflammatory drugs (NSAIDs) (P < 0.0001), and a decrease in the use of vitamin–mineral supplements (P < 0.04) and in the prevalence of tobacco smoking (P < 0.0001). In contrast, median ferritin in blood donors showed a significant fall from 1984 to 1994 (103 vs. 74 µg/L, P < 0.02). Conclusion: Abolition of iron fortification reduced the iron content of the Danish diet by an average of 0.24 mg/MJ, and the median dietary iron intake in men from 17 to 12 mg/d. From 1984 to 1994, body iron stores and the prevalence of iron overload in Danish men increased significantly, despite the abolition of food iron fortification. The reason appears to be changes in dietary habits, with a lower consumption of dairy products and eggs, which inhibit iron absorption, and a higher consumption of alcohol, meat, and poultry, containing haem iron and enhancing iron absorption. The high prevalence of iron overload in men may constitute a health risk. [source]

Thermally oxidized palm olein exposure increases triglyceride polymer levels in rat small intestine. EUROPEAN JOURNAL OF LIPID SCIENCE AND TECHNOLOGY, Issue 9 2010. Raul Olivero David
Abstract: The origin and presence of triglyceride polymers in the small intestine have been poorly studied. The present study combined a short in vivo absorption experiment with high-performance size-exclusion chromatography determination. Groups of six male Wistar rats were administered, by esophageal probe, 1 g/100 g body weight of unused palm olein and of palm oleins used in 40 and 90 potato frying operations. Small intestines were dissected, cleaned of luminal fat, and analyzed for the presence of triglyceride polymers (oligomers and/or dimers) 4 h after oil administration. The intestinal fat content did not change, but the polymer content was positively and significantly correlated (r = 0.5983; p < 0.01) with the amount of polymers present in the oil tested. The small intestine contained 5.05 mg of polymers [median; percentile 25 (1.57 mg), percentile 75 (10.40 mg)] after 4-h exposure to palm olein used for frying 90 times. The results suggested that 2.7–4.9% of the triglyceride polymers administered were present in the small intestine 4 h after ingestion. TBARS levels (p < 0.05) and the redox index (oxidized glutathione/total glutathione) (p < 0.01) in the small intestine increased significantly after exposure to the palm olein used in 90 frying operations. In conclusion, administration of altered oil increased the presence of resynthesized polymers in the small intestine, thus contributing to small intestine oxidative stress. [source]
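
Several studies in this listing (the Finnegan scores and the palm olein polymers, for instance) summarize non-Gaussian data as a median with the 25th and 75th percentiles rather than a mean and SD. A trivial illustration with made-up values:

    import numpy as np

    # Made-up, right-skewed measurements (e.g. polymer amounts in mg), illustration only.
    values = np.random.default_rng(5).lognormal(mean=1.3, sigma=0.9, size=60)

    p25, median, p75 = np.percentile(values, [25, 50, 75])
    print(f"median {median:.2f} (25th percentile {p25:.2f}, 75th percentile {p75:.2f})")
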
Price Movers on the Stock Exchange of Thailand: Evidence from a Fully Automated Order-Driven Market. FINANCIAL REVIEW, Issue 3 2010. Charlie Charoenwong. G12; G14; G15
Abstract: This study examines which trade sizes move stock prices on the Stock Exchange of Thailand (SET), a pure limit order market, over two distinct market conditions of bull and bear. Using intraday data, the study finds that large-sized trades (i.e., those larger than the 75th percentile) account for a disproportionately large impact on changes in traded and quoted prices. The finding remains even after it has been subjected to a battery of robustness checks. In contrast, the results of studies conducted in the United States show that informed traders employ trade sizes falling between the 40th and 95th percentiles (Barclay and Warner, 1993; Chakravarty, 2001). Our results support the hypothesis that informed traders in a pure limit order market, such as the SET, where there are no market makers, also use larger-size trades than those employed by informed traders in the United States. [source]

Existing in plenty: abundance, biomass and diversity of ciliates and meiofauna in small streams. FRESHWATER BIOLOGY, Issue 4 2008. Julia Reiss
Summary: 1. The ciliate and metazoan meiofaunal assemblages of two contrasting lowland streams in south-east England were examined over the period of a year, using a high taxonomic resolution. Monthly samples were taken from an oligotrophic, acid stream (Lone Oak) and a circumneutral, nutrient-rich stream (Pant) between March 2003 and February 2004. 2. We assessed the relative importance of ciliates and rotifers within the small-sized benthic assemblage with respect to their abundance, biomass and species richness. In addition, we examined the influence of abiotic and biotic parameters and season on the assemblage composition at two levels of taxonomic resolution (species and groups). 3. Ciliates dominated the assemblages numerically, with maximum densities of over 900 000 and 6 000 000 ind. m⁻² in Lone Oak and the Pant, respectively. Rotifers and nematodes dominated meiofaunal densities, although their contribution to total meiofaunal biomass (maxima of 71.9 mgC m⁻² in Lone Oak and of 646.8 mgC m⁻² in the Pant) was low, and rotifer biomass equalled that of ciliates. 4. Although the two streams differed in terms of total abundance of ciliates and meiofauna and shared only 7% of species, the relative proportion of groups was similar. Sediment grain size distribution (the percentile representing the 0.5–1 mm fraction) was correlated with assemblage structure at the species level, revealing the tight coupling between these small organisms and their physical environment. Seasonal changes in the relative abundance of groups followed similar patterns in both streams, and were correlated with the abundance of cyclopoid copepods and temperature. 5. Information on these highly abundant but often overlooked faunal groups is essential for estimates of overall abundance, biomass, species richness and productivity in the benthos, and as such has important implications for several areas of aquatic research, e.g. for those dealing with trophic dynamics. [source]

Prospective Validation of a Modified Thrombolysis In Myocardial Infarction Risk Score in Emergency Department Patients With Chest Pain and Possible Acute Coronary Syndrome. ACADEMIC EMERGENCY MEDICINE, Issue 4 2010. Erik P. Hess MD
Abstract. Objectives: This study attempted to prospectively validate a modified Thrombolysis In Myocardial Infarction (TIMI) risk score that classifies patients with either ST-segment deviation or cardiac troponin elevation as high risk. The objectives were to determine the ability of the modified score to risk-stratify emergency department (ED) patients with chest pain and to identify patients safe for early discharge. Methods: This was a prospective cohort study in an urban academic ED over a 9-month period. Patients over 24 years of age with a primary complaint of chest pain were enrolled. On-duty physicians completed standardized data collection forms prior to diagnostic testing. Cardiac troponin T values above the 99th percentile (≥0.01 ng/mL) were considered elevated. The primary outcome was acute myocardial infarction (AMI), revascularization, or death within 30 days. The overall diagnostic accuracy of the risk scores was compared by generating receiver operating characteristic (ROC) curves and comparing the area under the curve. The performance of the risk scores at potential decision thresholds was assessed by calculating the sensitivity and specificity at each potential cut-point. Results: The study enrolled 1,017 patients with the following characteristics: mean (±SD) age 59.3 (±13.8) years, 60.6% male, 17.9% with a history of diabetes, and 22.4% with a history of myocardial infarction. A total of 117 (11.5%) experienced a cardiac event within 30 days (6.6% AMI, 8.9% revascularization, 0.2% death of cardiac or unknown cause). The modified TIMI risk score outperformed the original with regard to overall diagnostic accuracy (area under the ROC curve = 0.83 vs. 0.79; p = 0.030; absolute difference 0.037; 95% confidence interval [CI] = 0.004 to 0.071). The specificity of the modified score was lower at all cut-points of >0. Sensitivity and specificity at potential decision thresholds were: >0 = sensitivity 96.6%, specificity 23.7%; >1 = sensitivity 91.5%, specificity 54.2%; and >2 = sensitivity 80.3%, specificity 73.4%. The lowest cut-point (TIMI/modified TIMI >0) was the only cut-point to predict cardiac events with sufficient sensitivity to consider early discharge. The sensitivity and specificity of the modified and original TIMI risk scores at this cut-point were identical. Conclusions: The modified TIMI risk score outperformed the original with regard to overall diagnostic accuracy. However, it had lower specificity at all cut-points of >0, suggesting suboptimal risk stratification in high-risk patients. It also lacked sufficient sensitivity and specificity to safely guide patient disposition. Both scores are insufficiently sensitive and specific to recommend as the sole means of determining disposition in ED chest pain patients. ACADEMIC EMERGENCY MEDICINE 2010; 17:368–375 © 2010 by the Society for Academic Emergency Medicine [source]
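
As a generic illustration of the cut-point analysis reported above (synthetic scores and outcomes, not the study's data), the sensitivity and specificity of an integer risk score at successive thresholds can be computed like this:

    import numpy as np

    def sens_spec_at_cutpoints(scores, outcomes, cutpoints):
        """Sensitivity/specificity of the rule 'score > c' for each cut-point c.
        scores   : integer risk scores
        outcomes : 1 if the patient had an event, 0 otherwise
        """
        scores, outcomes = np.asarray(scores), np.asarray(outcomes)
        results = {}
        for c in cutpoints:
            positive = scores > c
            sens = np.mean(positive[outcomes == 1])   # events correctly flagged
            spec = np.mean(~positive[outcomes == 0])  # non-events correctly cleared
            results[c] = (sens, spec)
        return results

    # Synthetic example: higher scores loosely associated with more events.
    rng = np.random.default_rng(7)
    scores = rng.integers(0, 8, size=1000)
    outcomes = rng.random(1000) < (0.02 + 0.04 * scores)
    for c, (se, sp) in sens_spec_at_cutpoints(scores, outcomes, [0, 1, 2]).items():
        print(f">{c}: sensitivity {se:.1%}, specificity {sp:.1%}")
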
Variability in clinical phenotype of severe haemophilia: the role of the first joint bleed. HAEMOPHILIA, Issue 5 2005. K. van Dijk
Summary: To quantify variation in the clinical phenotype of severe haemophilia, we performed a single-centre cohort study among 171 severe haemophilia patients. Age at first joint bleed, treatment requirement (i.e. annual clotting factor use), annual bleeding frequency and arthropathy were documented. Because treatment strategies intensified during follow-up, patients were stratified into two age groups: patients born 1968–1985 (n = 91) or 1985–2002 (n = 80). A total of 2166 patient-years of follow-up were available (median 12.0 years per patient). Age at first joint bleed ranged from 0.2 to 5.8 years. Patients who had their first joint bleed later needed less treatment and developed less arthropathy. In patients born 1968–1985, during both on-demand and prophylactic treatment, the 75th percentile of annual joint bleed frequency was consistently four times as high as the 25th percentile. In both age groups, the variation in annual clotting factor use between the 25th and 75th percentiles was 1.4–1.5 times for prophylaxis and 3.8 times for on-demand treatment. To conclude, the onset of joint bleeding is inversely related to treatment requirement and arthropathy and may serve as an indicator of clinical phenotype, thus providing a starting point for aetiological research and individualization of treatment. [source]

Age at Acquisition of Helicobacter pylori in a Pediatric Canadian First Nations Population. HELICOBACTER, Issue 2 2002. Samir K. Sinha
Abstract. Background: Few data exist regarding the epidemiology of Helicobacter pylori infections in aboriginal populations of North America, including the First Nations (Indian) and Inuit (Eskimo) populations. We have previously found 95% of the adults in Wasagamack, a First Nations community in Northeastern Manitoba, Canada, are seropositive for H. pylori. We aimed to determine the age at acquisition of H. pylori among the children of this community, and whether any association existed with stool occult blood or demographic factors. Materials and Methods: We prospectively enrolled children resident in the Wasagamack First Nation in August 1999. A demographic questionnaire was administered. Stool was collected, frozen and batch analyzed by enzyme-linked immunosorbent assay (ELISA) for H. pylori antigen and for the presence of occult blood. Questionnaire data were analyzed and correlated with the presence or absence of H. pylori. Results: 163 (47%) of the estimated 350 children aged 6 weeks to 12 years resident in the community were enrolled. Stool was positive for H. pylori in 92 (56%). By the second year of life 67% were positive for H. pylori. The youngest to test positive was 6 weeks old. There was no correlation of a positive H. pylori status with gender, presence of pets, serum Hgb, or stool occult blood. Forty-three percent of H. pylori-positive and 24% of H. pylori-negative children were below the 50th percentile for height (p = 0.024). Positive H. pylori status was associated with the use of indoor pail toileting (86/143) compared with outhouse toileting (6/20) (p = 0.01). Conclusions: In a community with widespread H. pylori infection, overcrowded housing and primitive toileting, H. pylori is acquired as early as 6 weeks of age, and by the second year of life 67% of children test positive for H. pylori. [source]

The effect of combination antiretroviral therapy on CD5 B-cells, B-cell activation and hypergammaglobulinaemia in HIV-1-infected patients. HIV MEDICINE, Issue 5 2005. BE Redgrave
Objectives: This study assessed B-cell activation, CD5 B-cells and circulating immunoglobulin levels in HIV-infected patients treated with combination antiretroviral therapy (CART). Methods: Measurement of plasma immunoglobulin levels and electrophoresis of plasma proteins, and analyses of total numbers of B-cells and B-cells expressing CD38 and CD5 in whole blood, were undertaken in 47 consecutive HIV-1-infected patients attending an out-patient clinic. Results: All HIV-infected patients had similar percentages and numbers of B-cells.
Proportions of CD5 B-cells in all HIV-infected patients were significantly lower than those in HIV-negative controls. Aviraemic HIV-infected patients on CART had lower percentages of CD5, CD38 and CD5 CD38 B-cell subsets and lower plasma levels of immunoglobulin G (IgG) and immunoglobulin A (IgA) than viraemic HIV-infected patients (untreated or on CART). However, 33–37% of aviraemic HIV-infected patients had IgG and IgA levels above the 95th percentile of the normal range defined in HIV-seronegative donors. In aviraemic HIV-infected patients, plasma IgA levels correlated only with proportions of activated (CD38) B-cells. IgG levels did not correlate with the proportions of B-cell subsets or any marker of HIV disease activity. Monoclonal immunoglobulins were not detected in any plasma sample. Conclusions: Aviraemic HIV-infected patients on CART have lower plasma levels of IgG and IgA than viraemic HIV-infected patients, but levels are often above the normal range. CD5 B-cell numbers are depressed, so these cells are unlikely to contribute to hypergammaglobulinaemia in HIV-infected patients. [source]

Evidence for prolonged clinical benefit from initial combination antiretroviral therapy: Delta extended follow-up. HIV MEDICINE, Issue 3 2001. Delta Coordinating Committee
Background: The findings from therapeutic trials in HIV infection with surrogate endpoints based on laboratory markers are only partially relevant for clinical decisions on treatment. Although the collection of clinical follow-up data from such a trial would be relatively straightforward, this rarely occurs. An important reason for this may be the perception that such data have little value because the number of participants remaining on their original allocated therapy has usually fallen substantially. Methods: Delta was an international, multicentre trial in which 3207 HIV-infected individuals were randomly allocated to treatment with zidovudine (ZDV) alone, ZDV combined with didanosine (ddI) or ZDV combined with zalcitabine (ddC). Although the trial closed in September 1995, information on vital status, AIDS events, treatment changes and CD4 counts was still collected every 12 months until at least March 1997. This has allowed analyses of the longer-term clinical effect of treatment. Results: The median follow-up to date of death or last known vital status was 43 months (10th percentile 18 months; 90th percentile 55 months). The proportion of participants remaining on their allocated treatment fell steadily over time; by 4 years after trial entry, 3% remained on ZDV, 20% on ZDV + ddI and 21% on ZDV + ddC. Changes mainly involved the stopping, addition or switching of a nucleoside reverse transcriptase inhibitor (NRTI). There was little use of protease inhibitors (PIs) or non-nucleoside reverse transcriptase inhibitors (NNRTIs) before the third year of the trial. Between the third and fourth years, regimens included a drug from one of these classes for approximately 17% of person-time in all treatment groups. Relative to ZDV monotherapy, the beneficial effects of combination therapy on mortality and disease progression rates increased significantly with time since randomization. The maximum effects on mortality were observed between 2 and 3 years, with a 48% reduction for ZDV + ddI and a 26% reduction for ZDV + ddC. These rates were observed when the original allocated treatment was received 42% and 47% of the time in the ZDV + ddI and ZDV + ddC groups, respectively.
The mean CD4 count remained significantly higher (by approximately 50 cells/µL) in the combination therapy groups 4 years after randomization, suggesting a projection of clinical benefit beyond this time point. Conclusions: The sustained clinical effect of the initial allocation to combination therapy, particularly ZDV + ddI, was remarkable in light of the convergence of drug regimens actually received across the three treatment groups. Interpretation of this finding is not straightforward. One possible explanation is that the effectiveness of ddI and ddC is diminished if first used later in infection or with greater prior exposure to ZDV, although the data do not clearly support either hypothesis. This analysis highlights the value of long-term clinical follow-up of therapeutic trials in HIV infection, which should be considered in the planning of all new studies. [source]

Probabilistic risk evaluation for triclosan in surface water, sediments, and aquatic biota tissues. INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 3 2010. Jennifer Lyndall
Abstract: Triclosan, an antimicrobial compound used in personal care products, occurs in the aquatic environment due to residual concentrations in municipal wastewater treatment effluent. We evaluate triclosan-related risks to the aquatic environment, for aquatic and sediment-dwelling organisms and for aquatic-feeding wildlife, based on measured and modeled exposure concentrations. Triclosan concentrations in surface water, sediment, and biota tissue are predicted using a fugacity model parameterized to run probabilistically, to supplement the limited available measurements of triclosan in sediment and tissue. Aquatic toxicity is evaluated based on a species sensitivity distribution, which is extrapolated to sediment and tissues assuming equilibrium partitioning. A probabilistic wildlife exposure model is also used, and estimated doses are compared with wildlife toxicity benchmarks identified from a review of published and proprietary studies. The 95th percentiles of measured and modeled triclosan concentrations in surface water, sediment, and biota tissues are consistently below the 5th percentile of the respective species sensitivity distributions, indicating that, under most scenarios, adverse effects due to triclosan are unlikely. Integr Environ Assess Manag 2010;6:419–440. © 2010 SETAC [source]
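
A toy illustration of the percentile-to-percentile screening comparison described in the triclosan abstract above (95th percentile of exposure against the 5th percentile of the species sensitivity distribution); the distributions and parameters below are invented, not the assessment's values:

    import numpy as np

    rng = np.random.default_rng(3)

    # Invented lognormal distributions, for illustration only.
    exposure = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=100_000)  # ug/L
    ssd = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=100_000)        # ug/L

    exposure_p95 = np.percentile(exposure, 95)
    hc5 = np.percentile(ssd, 5)   # concentration hazardous to 5% of species

    print(f"95th percentile exposure: {exposure_p95:.3f} ug/L")
    print(f"5th percentile of SSD (HC5): {hc5:.3f} ug/L")
    print("screening outcome:", "risk unlikely" if exposure_p95 < hc5 else "potential risk")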