Standard Error
Terms modified by Standard Error
Selected Abstracts

Heteroskedasticity–Autocorrelation Robust Standard Errors Using The Bartlett Kernel Without Truncation
ECONOMETRICA, Issue 5 2002
Nicholas M. Kiefer
No abstract is available for this article. [source]

The Effects of a Student Sampling Plan on Estimates of the Standard Errors for Student Passing Rates
JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2003
Guemin Lee
Examined in this study were three procedures for estimating the standard errors of school passing rates using a generalizability theory model. Also examined was how these procedures behaved for student samples that differed in size. The procedures differed in terms of their assumptions about the populations from which students were sampled, and it was found that student sample size generally had a notable effect on the size of the standard error estimates they produced. Also, the three procedures produced markedly different standard error estimates when student sample size was small. [source]

Standard Errors for EM Estimates in Generalized Linear Models with Random Effects
BIOMETRICS, Issue 3 2000
Herwig Friedl
Summary. A procedure is derived for computing standard errors of EM estimates in generalized linear models with random effects. Quadrature formulas are used to approximate the integrals in the EM algorithm, where two different approaches are pursued, i.e., Gauss-Hermite quadrature in the case of Gaussian random effects and nonparametric maximum likelihood estimation for an unspecified random effect distribution. An approximation of the expected Fisher information matrix is derived from an expansion of the EM estimating equations. This allows for inferential arguments based on EM estimates, as demonstrated by an example and simulations. [source]

Changes in Quality of Life in Epilepsy: How Large Must They Be to Be Real?
EPILEPSIA, Issue 1 2001
Samuel Wiebe
Summary: Purpose: The study goal was to assess the magnitude of change in generic and epilepsy-specific health-related quality-of-life (HRQOL) instruments needed to exclude chance or error at various levels of certainty in patients with medically refractory epilepsy. Methods: Forty patients with temporal lobe epilepsy and clearly defined criteria of clinical stability received HRQOL measurements twice, 3 months apart, using the Quality of Life in Epilepsy Inventory-89 and -31 (QOLIE-89 and QOLIE-31), Liverpool Impact of Epilepsy, adverse drug events, seizure severity scales, and the Generic Health Utilities Index (HUI-III). Standard error of measurement and test-retest reliability were obtained for all scales and for QOLIE-89 subscales. Using the Reliable Change Index described by Jacobson and Truax, we assessed the magnitude of change required by HRQOL instruments to be 90 and 95% certain that real change has occurred, as opposed to change due to chance or measurement error. Results: Clinical features, point estimates and distribution of HRQOL measures, and test-retest reliability (all > 0.70) were similar to those previously reported. Score changes of ±13 points in QOLIE-89, ±15 in QOLIE-31, ±6.3 in Liverpool seizure severity–ictal, ±11 in Liverpool adverse drug events, ±0.25 in HUI-III, and ±9.5 in impact of epilepsy exclude chance or measurement error with 90% certainty. These correspond, respectively, to 13, 15, 17, 18, 25, and 32% of the potential range of change of each instrument. Conclusions: Threshold values for real change varied considerably among HRQOL tools but were relatively small for QOLIE-89, QOLIE-31, Liverpool Seizure Severity, and adverse drug events. In some instruments, even relatively large changes cannot rule out chance or measurement error. The relation between the Reliable Change Index and other measures of change and its distinction from measures of minimum clinically important change are discussed. [source]
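The Reliable Change Index used above follows directly from classical test theory: the standard error of measurement comes from the baseline standard deviation and the test-retest reliability, and the change threshold scales it by √2 and a z value for the chosen level of certainty. A minimal sketch; the example numbers are hypothetical, not figures from the study:

```python
import math

def reliable_change_threshold(sd_baseline, test_retest_r, certainty_z=1.645):
    """Jacobson-Truax style threshold: smallest score change unlikely to be measurement error."""
    sem = sd_baseline * math.sqrt(1 - test_retest_r)   # standard error of measurement
    s_diff = math.sqrt(2) * sem                        # SE of a difference between two scores
    return certainty_z * s_diff

# Hypothetical instrument with baseline SD 15 and retest reliability 0.80,
# at 90% certainty (z = 1.645) and 95% certainty (z = 1.96).
print(round(reliable_change_threshold(15, 0.80, 1.645), 1))
print(round(reliable_change_threshold(15, 0.80, 1.96), 1))
```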
Are cervical physical outcome measures influenced by the presence of symptomatology?
PHYSIOTHERAPY RESEARCH INTERNATIONAL, Issue 3 2002
Michele Sterling
Abstract. Background and Purpose: Outcome measures must be repeatable over time to judge changes as a result of treatment. It is unknown whether the presence of neck pain can affect measurement reliability over a time period when some change could be expected as a result of an intervention. The present study investigated the reliability of two measures, active cervical range of movement (AROM) and pressure pain thresholds (PPTs), in symptomatic and asymptomatic subjects. Method: A repeated-measures study design with one week between testing sessions was used. Nineteen healthy asymptomatic subjects and 19 subjects with chronic neck pain participated in the study. The neck movements measured were: flexion, extension, right and left lateral flexion, and axial rotation. PPTs were measured over six bilateral sites, both local and remote to the cervical spine. Results: The between-week intra-class correlation coefficients (ICC(2,1)) for AROM ranged from 0.67 to 0.93 (asymptomatic group) and from 0.64 to 0.88 (chronic neck pain group). Standard error of measurement (SEM) was similar in both groups, from 2.66° to 5.59° (asymptomatic group) and from 2.36° to 6.72° (chronic neck pain group). ICC(2,1) values for PPTs ranged from 0.70 to 0.91 (asymptomatic group) and from 0.69 to 0.92 (chronic neck pain group). SEM ranged from 11.14 to 87.71 kPa (asymptomatic group) and from 14.25 to 102.95 kPa (chronic neck pain group). Conclusions: The findings of moderate to very high between-week reliability of measures of AROM and PPTs in both asymptomatic and chronic neck pain subjects suggest the presence of symptomatology does not adversely affect reliability of these measures. The results support the use of these measures for monitoring change in chronic neck pain conditions. Copyright © 2002 Whurr Publishers Ltd. [source]

Optimal designs for estimating penetrance of rare mutations of a disease-susceptibility gene
GENETIC EPIDEMIOLOGY, Issue 3 2003
Gail Gong
Abstract. Many clinical decisions require accurate estimates of disease risks associated with mutations of known disease-susceptibility genes. Such risk estimation is difficult when the mutations are rare. We used computer simulations to compare the performance of estimates obtained from two types of designs based on family data. In the first (clinic-based designs), families are ascertained because they meet certain criteria concerning multiple disease occurrences among family members. In the second (population-based designs), families are sampled through a population-based registry of affected individuals called probands, with oversampling of probands whose families are more likely to segregate mutations. We generated family structures, genotypes, and phenotypes using models that reflect the frequencies and penetrances of mutations of the BRCA1/2 genes. We studied the effects of risk heterogeneity due to unmeasured, shared risk factors by including risk variation due to unmeasured genotypes of another gene. The simulations were chosen to mimic the ascertainment and selection processes commonly used in the two types of designs. We found that penetrance estimates from both designs are nearly unbiased in the absence of unmeasured shared risk factors, but are biased upward in the presence of such factors. The bias increases with increasing variation in risks across genotypes of the second gene. However, it is small compared to the standard error of the estimates. Standard errors from population-based designs are roughly twice those from clinic-based designs with the same number of families. Using the root-mean-square error as a measure of performance, we found that in all instances, the clinic-based designs gave more accurate estimates than did the population-based designs with the same numbers of families. Rough variance calculations suggest that clinic-based designs give more accurate estimates because they include more identified mutation carriers. Genet Epidemiol 24:173–180, 2003. © 2003 Wiley-Liss, Inc. [source]
Modeling of transport phenomena and melting kinetics of starch in a co-rotating twin-screw extruder
ADVANCES IN POLYMER TECHNOLOGY, Issue 1 2006
Lijun Wang
A mathematical model was developed to simulate fluid flow, heat transfer, and melting kinetics of starch in a co-rotating intermeshing twin-screw extruder (TSE). The partial differential equations governing the transport phenomena of the biomaterial in the extruder were solved by a finite element scheme. For validating the model, the predicted product pressure, bulk temperature at the entrance of the die, and minimum residence time of the biomaterial in the extruder were compared with experimental data. Standard errors of product pressure, bulk temperature at the die entrance, and minimum residence time were about 8.8, 2.8, and 17.3%. Simulations were carried out to investigate profiles of product pressure, bulk temperature, and melt fraction within the extruder during extrusion. © 2006 Wiley Periodicals, Inc. Adv Polym Techn 25: 22–40, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/adv.20055 [source]

PREDICTION OF TEXTURE IN GREEN ASPARAGUS BY NEAR INFRARED SPECTROSCOPY (NIRS)
JOURNAL OF FOOD QUALITY, Issue 4 2002
D. PEREZ
NIR spectroscopy was used to estimate three textural parameters of green asparagus: maximum cutting force, energy and toughness. An Instron 1140 Texturometer provided reference data. A total of 199 samples from two asparagus varieties (Taxara and UC-157) were used to obtain the calibration models between the reference data and the NIR spectral data. Standard errors of cross validation (SECV) and r² were (5.73, 0.84) for maximum cutting force, (0.58, 0.66) for toughness, and (0.04, 0.85) for cutting energy. The mathematical models developed as calibration models were tested using independent validation samples (n = 20); the resulting standard errors of prediction (SEP) and r² for the same parameters were (6.73, 0.82), (0.61, 0.57) and (0.04, 0.89), respectively. For toughness, r² improved substantially (to 0.85) and SEP (to 0.36) when four samples exhibiting large residual values were removed. The results indicated that NIRS could accurately predict texture parameters of green asparagus. [source]

Standard errors for EM estimation
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2000
M. Jamshidian
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and algorithms that differentiate the EM operator are identified. [source]
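The first of the two numerical-differentiation approaches described above (differentiating the score vector to obtain the Hessian of the log-likelihood) can be sketched generically. The toy normal-sample model below stands in for an EM application and is an assumption for illustration only; note how the asymmetry of the numerical Hessian acts as the symmetry diagnostic mentioned in the abstract.

```python
import numpy as np

def numerical_hessian_from_score(score, theta, eps=1e-5):
    """Central-difference Jacobian of the score vector, i.e. the Hessian of the log-likelihood."""
    p = len(theta)
    H = np.zeros((p, p))
    for j in range(p):
        step = np.zeros(p)
        step[j] = eps
        H[:, j] = (score(theta + step) - score(theta - step)) / (2 * eps)
    return (H + H.T) / 2   # symmetrize; the size of the asymmetry diagnoses numerical error

# Toy model: normal sample with parameters (mu, sigma); the MLE could come from EM or otherwise.
rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=200)

def score(theta):
    mu, sigma = theta
    d_mu = np.sum(x - mu) / sigma**2
    d_sigma = -len(x) / sigma + np.sum((x - mu)**2) / sigma**3
    return np.array([d_mu, d_sigma])

theta_hat = np.array([x.mean(), x.std()])                      # MLE of (mu, sigma)
observed_info = -numerical_hessian_from_score(score, theta_hat)
se = np.sqrt(np.diag(np.linalg.inv(observed_info)))            # standard errors
print(se)
```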
Comparing Travel Cost Models And The Precision Of Their Consumer Surplus Estimates: Albert Park And Maroondah Reservoir
AUSTRALIAN ECONOMIC PAPERS, Issue 4 2003
Nicola Lansdell
This study examines different types of Travel Cost Models to estimate and compare the recreational values of two parks in Victoria, Australia: Maroondah Reservoir and Albert Park. Zonal Travel Cost models and a number of different functional forms are used in this study. Standard errors are used to estimate upper and lower bounds for the recreational value estimates, enabling comparison between the precision of the different types of Travel Cost Models and functional forms estimated. The double log functional form city zone Travel Cost Model was chosen as the best estimate for Albert Park's recreational value at $22.9 million per year. Maroondah Reservoir's best estimate is provided by the double log functional form regional zone Travel Cost Model at a value of $2.5 million per year, considerably less than that of Albert Park. Albert Park is found to have a comparatively larger 'proximity power' (attracting many more visitors), while Maroondah Reservoir exhibited a larger degree of 'pulling power' (a higher proportion of its visitors travel further distances). [source]

Estimation of Centres and Radial Intensity Profiles of Spherical Nano-Particles in Digital Microscopy
BIOMETRICAL JOURNAL, Issue 2 2007
Mats Kvarnström
Abstract. Control of the microscopic characteristics of colloidal systems is critical in a wealth of application areas, ranging from food to pharmaceuticals. To assist in estimating these characteristics, we present a method for estimating the positions of spherical nano-particles in digital microscopy images. The radial intensity profiles of particles, which depend on the distances of the particles from the focal plane of the light microscope and have no closed functional form, are modelled using a local quadratic kernel estimate. We also allow for the case where pixel values are censored at an upper limit of 255. Standard errors of centre estimates are obtained using a sandwich estimator which takes into account spatial autocorrelation in the errors. The approach is validated by a simulation study. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
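A sandwich covariance of the kind mentioned above can be illustrated in its simplest, independent-errors (HC0) form for least squares. The paper's estimator additionally models spatial autocorrelation among neighbouring pixels, which this sketch deliberately omits; the data are synthetic.

```python
import numpy as np

def ols_with_sandwich_se(X, y):
    """OLS coefficients with a heteroscedasticity-robust (HC0) sandwich covariance."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * resid[:, None] ** 2)     # X' diag(e_i^2) X
    cov = XtX_inv @ meat @ XtX_inv             # bread * meat * bread
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200) * (1 + np.abs(X[:, 1]))  # heteroscedastic noise
print(ols_with_sandwich_se(X, y))
```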
Fisher Information Matrix of the Dirichlet-multinomial Distribution
BIOMETRICAL JOURNAL, Issue 2 2005
Sudhir R. Paul
Abstract. In this paper we derive explicit expressions for the elements of the exact Fisher information matrix of the Dirichlet-multinomial distribution. We show that exact calculation is based on the beta-binomial probability function rather than that of the Dirichlet-multinomial, and this makes the exact calculation quite easy. The exact results are expected to be useful for the calculation of standard errors of the maximum likelihood estimates of the beta-binomial parameters and those of the Dirichlet-multinomial parameters for data that arise in practice in toxicology and other similar fields. Standard errors of the maximum likelihood estimates of the beta-binomial parameters and those of the Dirichlet-multinomial parameters, based on the exact and the asymptotic Fisher information matrix based on the Dirichlet distribution, are obtained for a set of data from Haseman and Soares (1976), a dataset from Mosimann (1962) and a more recent dataset from Chen, Kodell, Howe and Gaylor (1991). There is substantial difference between the standard errors of the estimates based on the exact Fisher information matrix and those based on the asymptotic Fisher information matrix. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Maximum Likelihood Methods for Nonignorable Missing Responses and Covariates in Random Effects Models
BIOMETRICS, Issue 4 2003
Amy L. Stubbendick
Summary. This article analyzes quality of life (QOL) data from an Eastern Cooperative Oncology Group (ECOG) melanoma trial that compared treatment with ganglioside vaccination to treatment with high-dose interferon. The analysis of this data set is challenging due to several difficulties, namely, nonignorable missing longitudinal responses and baseline covariates. Hence, we propose a selection model for estimating parameters in the normal random effects model with nonignorable missing responses and covariates. Parameters are estimated via maximum likelihood using the Gibbs sampler and a Monte Carlo expectation maximization (EM) algorithm. Standard errors are calculated using the bootstrap. The method allows for nonmonotone patterns of missing data in both the response variable and the covariates. We model the missing data mechanism and the missing covariate distribution via a sequence of one-dimensional conditional distributions, allowing the missing covariates to be either categorical or continuous, as well as time-varying. We apply the proposed approach to the ECOG quality-of-life data and conduct a small simulation study evaluating the performance of the maximum likelihood estimates. Our results indicate that a patient treated with the vaccine has a higher QOL score on average at a given time point than a patient treated with high-dose interferon. [source]

Initialization Strategies in Simulation-Based SFE Eigenvalue Analysis
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2005
Song Du
Poor initializations often result in slow convergence, and in certain instances may lead to an incorrect or irrelevant answer. The problem of selecting an appropriate starting vector becomes even more complicated when the structure involved is characterized by properties that are random in nature. Here, a good initialization for one sample could be poor for another sample. Thus, the proper eigenvector initialization for uncertainty analysis involving Monte Carlo simulations is essential for efficient random eigenvalue analysis. Most simulation procedures to date have been sequential in nature, that is, a random vector to describe the structural system is simulated, a FE analysis is conducted, the response quantities are identified by post-processing, and the process is repeated until the standard error in the response of interest is within desired limits. A different approach is to generate all the sample (random) structures prior to performing any FE analysis, sequentially rank order them according to some appropriate measure of distance between the realizations, and perform the FE analyses in similar rank order, using the results from the previous analysis as the initialization for the current analysis. The sample structures may also be ordered into a tree-type data structure, where each node represents a random sample, the traverse of the tree starts from the root of the tree until every node in the tree is visited exactly once. This approach differs from the sequential ordering approach in that it uses the solution of the "closest" node to initialize the iterative solver. The computational efficiencies that result from such orderings (at a modest expense of additional data storage) are demonstrated through a stability analysis of a system with closely spaced buckling loads and the modal analysis of a simply supported beam. [source]
Albumin enhanced morphometric image analysis in CLL
CYTOMETRY, Issue 1 2004
Matthew A. Lunning
Abstract. BACKGROUND: The heterogeneity of lymphocytes from patients with chronic lymphocytic leukemia (CLL) and blood film artifacts make morphologic subclassification of this disease difficult. METHODS: We reviewed paired blood films prepared from ethylene-diamine-tetraacetic acid (EDTA) samples with and without bovine serum albumin (BSA) from 82 CLL patients. Group 1 adhered to NCCLS specifications for the preparation of EDTA blood films. Group 2 consisted of blood films containing EDTA and a 1:12 dilution of 22% BSA. Eight patients were selected for digital photomicroscopy and statistical analysis. Approximately 100 lymphocytes from each slide were digitally captured. RESULTS: The mean cell area ± standard error was 127.8 μm² ± 1.42 (n = 793) for group 1 versus 100.7 μm² ± 1.39 (n = 831) for group 2. The nuclear area was 88.9 μm² ± 0.85 for group 1 versus 76.4 μm² ± 0.83 for group 2. For nuclear transmittance, the values were 97.6 ± 0.85 for group 1 and 104.1 ± 0.83 for group 2. The nuclear:cytoplasmic ratios were 0.71 ± 0.003 for group 1 and 0.78 ± 0.003 for group 2. All differences were statistically significant (P < 0.001). CONCLUSIONS: BSA addition results in the reduction of atypical lymphocytes and a decrease in smudge cells. BSA also decreases the lymphocyte area and nuclear area, whereas nuclear transmittance and nuclear:cytoplasmic ratio are increased. A standardized method of slide preparation would allow accurate interlaboratory comparison. The use of BSA may permit better implementation of the blood film-based subclassification of CLL and lead to a better correlation of morphology with cytogenetics and immunophenotyping. Published 2003 Wiley-Liss, Inc. [source]
Reproducibility evaluation of gross and net walking efficiency in children with cerebral palsy
DEVELOPMENTAL MEDICINE & CHILD NEUROLOGY, Issue 1 2007
Merel-Anne Brehm MSc
In evaluating energy cost (EC) of walking, referred to as walking efficiency, the use of net measurement protocols (i.e. net = gross − resting) has recently been recommended. However, nothing is known about the comparative reproducibility of net protocols and the commonly used gross protocols. Ten minutes of resting and 5 minutes of walking at a self-selected speed were used to determine gross and net EC in 13 children with spastic cerebral palsy (CP; seven males, six females; mean age 8y 7mo [SD 3y 4mo], range 4y 1mo–13y) and in 10 children (three males, seven females) with typical development. In the former, their Gross Motor Function Classification System levels ranged from Level I to Level III; seven had hemiplegia and six diplegia. There were four repeated sessions on different days, with periods of 1 week between sessions. Reproducibility was assessed for speed, and gross and net EC, by using the standard error of measurement. The results of this preliminary study showed that EC measurements were more variable for children with CP than for children with typical development. Furthermore, in both groups there was considerably more variability in the net measurements than in the gross measurements. We conclude that, on the basis of the methodology used, the use of gross EC, rather than net EC, seems a more sensitive measure of walking efficiency to detect clinically relevant changes in an individual child with CP. [source]

Interobserver agreement in endoscopic evaluation of reflux esophagitis using a modified Los Angeles classification incorporating grades N and M: A validation study in a cohort of Japanese endoscopists
DISEASES OF THE ESOPHAGUS, Issue 4 2008
H. Miwa
SUMMARY. The Los Angeles classification system is the most widely employed criteria associated with the greatest interobserver agreement among endoscopists. In Japan, the Los Angeles classification system has been modified (modified LA system) to include minimal changes as a distinct grade of reflux esophagitis, rather than as auxiliary findings. This adds a further grade, M, defined as minimal changes to the mucosa, such as erythema and/or whitish turbidity. The modified LA system has come to be used widely in Japan. However, there have been few reports to date that have evaluated the interobserver agreement in diagnosis when using the modified LA classification system incorporating these minimal changes as an additional grade. A total of 100 endoscopists from university hospitals and community hospitals, as well as private practices in the Osaka-Kobe area, participated in the study. A total of 30 video clips of 30–40 seconds duration, mostly showing the esophagocardiac junction, were created and shown to 100 endoscopists using a video projector. The participating endoscopists completed a questionnaire regarding their clinical experience and rated the reflux esophagitis as shown in the video clips using the modified LA classification system. Agreement was assessed employing kappa (κ) statistics for multiple raters. The κ-value for all 91 endoscopists was 0.094, with a standard error of 0.002, indicating poor interobserver agreement. The endoscopists showed the best agreement on diagnosing grade A esophagitis (0.167), and the poorest agreement when diagnosing grade M esophagitis (0.033). The κ-values for the diagnoses of grades N, M, and A esophagitis on identical video pairs were 0.275–0.315, with a standard error of 0.083–0.091, indicating fair intraobserver reproducibility among the endoscopists. The study results consistently indicate poor agreement regarding diagnoses as well as fair reproducibility of these diagnoses by endoscopists using the modified LA classification system, regardless of age, type of practice, past endoscopic experience, or current workload. However, grade M reflux esophagitis may not necessarily be irrelevant, as it may suggest an early form of reflux disease or an entirely new form of reflux esophagitis. Further research is required to elucidate the pathophysiological basis of minimal change esophagitis. [source]
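One common multi-rater formulation of kappa is Fleiss' kappa; the abstract does not state which multi-rater variant the authors used, so the sketch below is illustrative only, and the rating counts are randomly generated rather than taken from the study.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_subjects, n_categories) matrix of how many raters chose each category."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                               # raters per subject (assumed constant)
    p_j = counts.sum(axis=0) / counts.sum()                 # overall category proportions
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))   # per-subject observed agreement
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()               # mean observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# e.g. 30 video clips, 10 raters, 5 grades; purely random ratings give kappa near 0
rng = np.random.default_rng(3)
counts = rng.multinomial(10, [0.2] * 5, size=30)
print(round(fleiss_kappa(counts), 3))
```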
Measurement error: implications for diagnosis and discrepancy models of developmental dyslexia
DYSLEXIA, Issue 3 2005
Sue M. Cotton
Abstract. The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and standard error of measurement (SEM) in classic test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid with the interpretation of a simple discrepancy-based formula of DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects reproducibility and generalizability of findings. This, in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Can cocaine use be evaluated through analysis of wastewater? A nation-wide approach conducted in Belgium
ADDICTION, Issue 5 2009
ABSTRACT. Aims: Cocaine is the second most-used illicit drug world-wide and its consumption is increasing significantly, especially in western Europe. Until now, the annual prevalence has been estimated indirectly by means of interviews. A recently introduced and direct nation-wide approach based on measurements of the major urinary excreted metabolite of cocaine, benzoylecgonine, in wastewater is proposed. Design: Wastewater samples from 41 wastewater treatment plants (WWTPs) in Belgium, covering approximately 3,700,000 residents, were collected. Each WWTP was sampled on Wednesdays and Sundays during two sampling campaigns in 2007–08. Samples were analysed for cocaine (COC) and its metabolites, benzoylecgonine (BE) and ecgonine methylester (EME), by a validated procedure based on liquid chromatography coupled with tandem mass spectrometry. Concentrations of BE were used to calculate cocaine consumption (g/day per 1000 inhabitants) for each WWTP region and for both sampling campaigns (g/year per 1000 inhabitants). Findings: Weekend days showed significantly higher cocaine consumption compared with weekdays. The highest cocaine consumption was observed for WWTPs receiving wastewater from large cities, such as Antwerp, Brussels and Charleroi. Results were extrapolated for the total Belgian population and an estimation of a yearly prevalence of cocaine use was made based on various assumptions. An amount of 1.88 tonnes (t) per year [standard error (SE) 0.05 t] of cocaine is consumed in Belgium, corresponding to a yearly prevalence of 0.80% (SE 0.02%) for the Belgian population aged 15–64 years. This result is in agreement with an earlier reported estimate of the Belgian prevalence of cocaine use conducted through socio-epidemiological studies (0.9% for people aged 15–64 years). Conclusions: Wastewater analysis is a promising tool to evaluate cocaine consumption at both local and national scale. This rapid and direct estimation of the prevalence of cocaine use in Belgium corresponds with socio-epidemiological data. However, the strategy needs to be refined further to allow a more exact calculation of cocaine consumption from concentrations of BE in wastewater. [source]
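The back-calculation from benzoylecgonine (BE) loads to cocaine consumption typically combines the measured concentration, the plant flow, a molar-mass correction and an assumed urinary excretion fraction. A minimal sketch; the excretion fraction and the example inputs are assumptions for illustration, not values from this study.

```python
# Molecular weights are physical constants; the excretion fraction is a literature-style
# assumption and the inputs below are made up.
MW_COC, MW_BE = 303.4, 289.3        # g/mol, cocaine and benzoylecgonine
F_EXCRETED_AS_BE = 0.29             # assumed fraction of a cocaine dose excreted as BE

def cocaine_g_per_day_per_1000(be_ng_per_l, flow_l_per_day, population):
    be_load_g = be_ng_per_l * flow_l_per_day * 1e-9           # g of BE entering the plant per day
    coc_g = be_load_g * (MW_COC / MW_BE) / F_EXCRETED_AS_BE   # back-calculate the parent drug
    return coc_g / population * 1000

# hypothetical plant: 500 ng/L BE, 120 million L/day inflow, 400,000 connected inhabitants
print(round(cocaine_g_per_day_per_1000(500, 1.2e8, 400_000), 2))
```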
Concentrations of selenium and mercury in eared grebes (Podiceps nigricollis) from Utah's Great Salt Lake, USA
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 6 2009
Michael R. Conover
Abstract. We examined selenium and mercury concentrations in eared grebes (Podiceps nigricollis) that spent the fall of 2006 on the Great Salt Lake (UT, USA), where their diet consisted mainly of brine shrimp (Artemia franciscana). Selenium concentrations in livers varied based on when the grebes were collected (lower in September [mean ± standard error, 9.4 ± 0.7 μg/g dry wt] than in November [14.5 ± 1.4 μg/g]), on where the birds were collected on the Great Salt Lake (Antelope Island, 8.6 ± 0.5 μg/g; Stansbury Island, 15.2 ± 1.4 μg/g), and on the grebe's age (juveniles, 8.5 ± 1.5 μg/g; adults, 15.8 ± 1.3 μg/g), but not by sex. Selenium concentrations in blood differed only by collection site (Antelope Island, 16.8 ± 2.3 μg/g; Stansbury Island, 25.4 ± 3.0 μg/g). Mercury concentration in the blood of grebes varied by when the grebes were collected (September, 5.6 ± 0.5 μg/g; November, 8.4 ± 1.2 μg/g), where the birds were collected (Antelope Island, 4.3 ± 0.5 μg/g; Stansbury Island, 10.1 ± 2.6 μg/g), and the grebe's age (juveniles, 5.5 ± 0.8 μg/g; adults, 8.4 ± 1.0 μg/g), but not by sex. Selenium concentrations in blood were correlated with selenium concentrations in the liver and with mercury concentrations in both blood and liver. Body mass of grebes increased dramatically from September (381 ± 14 g wet wt) to November (591 ± 11 g). Body, liver, and spleen mass either were not correlated with selenium or mercury concentrations or the relationship was positive. These results suggest that high mercury and selenium levels were not preventing grebes from increasing or maintaining mass. [source]

Selenium and mercury concentrations in California gulls breeding on the Great Salt Lake, Utah, USA
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 2 2009
Michael R. Conover
Abstract. We examined selenium (Se) and mercury (Hg) concentrations in adult California gulls (Larus californicus) nesting on the Great Salt Lake, Utah, USA, during 2006 and 2007. During 2006, the mean Se concentration (± standard error) was 18.1 ± 1.5 μg/g in blood on a dry-weight basis and 8.1 ± 0.4 μg/g in liver. During 2007, Se concentrations were 15.7 ± 1.5 μg/g in blood and 8.3 ± 0.4 μg/g in liver; Hg concentrations were 2.4 ± 0.3 μg/g in blood and 4.1 ± 0.5 μg/g in liver. Gulls collected from a freshwater colony located within the watershed of the Great Salt Lake had similar levels of Se in the blood and liver as gulls collected on the Great Salt Lake but lower Hg concentrations. Body mass of adult gulls was not correlated with Se or Hg concentrations in their blood or liver. Selenium concentration in California gull eggs collected during 2006 was 3.0 ± 0.10 μg/g. Of 72 eggs randomly collected from Great Salt Lake colonies, only one was infertile, and none of the embryos exhibited signs of malposition or deformities. We examined 100 newly hatched California gull chicks from Great Salt Lake colonies for teratogenesis; all chicks appeared normal. Hence, the elevated Se and Hg concentrations in adult gulls nesting on the Great Salt Lake did not appear to impair gulls' health or reproductive ability. [source]
Bioavailability and biodegradation of nonylphenol in sediment determined with chemical and bioanalysis
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2008
Jasperien de Weert
Abstract. The surfactant nonylphenol (NP) is an endocrine-disrupting compound that is widely spread throughout the environment. Although environmental risk assessments are based on total NP concentrations, only the bioavailable fraction poses an environmental risk. The present study describes the bioavailability and biodegradability of NP over time in contaminated river sediment of a tributary of the Ebro River in Spain. The bioavailable fraction was collected with Tenax TA® beads, and biodegradation was determined in aerobic batch experiments. The presence of NP was analyzed chemically using gas chromatography-mass spectrometry and indirectly as estrogenic potency using an in vitro reporter gene assay (ERα-luc assay). Of the total extractable NP in the sediment, 95% ± 1.5% (mean ± standard error) desorbed quickly into the water phase. By aerobic biodegradation, the total extractable NP concentration and the estrogenic activity were reduced by 97% ± 0.5% and 94% ± 2%, respectively. The easily biodegradable fraction equals the potential bioavailable fraction. Only 43 to 86% of the estrogenic activity in the total extractable fraction, as detected in the ERα-luc assay, could be explained by the NP concentration present. This indicates that other estrogenic compounds were present and that their bioavailability and aerobic degradation were similar to that of NP. Therefore, we propose to use NP as an indicator compound to monitor estrogenicity of this Ebro River sediment. To what extent this conclusion holds for other river sediments depends on the composition of the contaminants and/or the nature of these sediments and requires further testing. [source]

Dietary accumulation of hexabromocyclododecane diastereoisomers in juvenile rainbow trout (Oncorhynchus mykiss) I: Bioaccumulation parameters and evidence of bioisomerization
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 7 2006
Kerri Law
Juvenile rainbow trout (Oncorhynchus mykiss) were exposed to three diastereoisomers (α, β, γ) of hexabromocyclododecane (C12H18Br6) via their diet for 56 d, followed by 112 d of untreated food, to examine bioaccumulation parameters and test the hypothesis of in vivo bioisomerization. Four groups of 70 fish were used in the study. Three groups were exposed to food fortified with known concentrations of an individual diastereoisomer, while a fourth group was fed unfortified food. Bioaccumulation of the γ-diastereoisomer was linear during the uptake phase, while the α- and β-diastereoisomers were found to increase exponentially with respective doubling times of 8.2 and 17.1 d. Both the α- and the β-diastereoisomers followed first-order depuration kinetics with calculated half-lives of 157 ± 71 and 144 ± 60 d (±1 × standard error), respectively. The biomagnification factor (BMF) for the α-diastereoisomer (BMF = 9.2) was two times greater than that of the β-diastereoisomer (BMF = 4.3); the large BMF for the α-diastereoisomer is consistent with this diastereoisomer dominating higher-trophic-level organisms. Although the BMF of the β-diastereoisomer suggests that it will biomagnify, it is rarely detected in environmental samples because it is present in small quantities in commercial mixtures. Results from these studies also provide evidence of bioisomerization of the β- and γ-diastereoisomers. Most importantly, the α-diastereoisomer, which was recalcitrant to bioisomerization by juvenile rainbow trout in this study and is known to be the dominant diastereoisomer in fish, was bioformed from both the β- and the γ-diastereoisomers. To our knowledge, this is the first report of bioisomerization of a halogenated organic pollutant in biota. [source]
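First-order depuration parameters like those reported above (rate constant and half-life) are usually obtained from a log-linear fit of concentration against time, with half-life = ln 2 / k. A small sketch on hypothetical depuration data, not the study's measurements:

```python
import numpy as np

# Hypothetical depuration data: concentration in fish tissue (ng/g) at days after exposure ends.
days = np.array([0, 14, 28, 56, 112])
conc = np.array([100.0, 94.0, 88.5, 78.0, 61.0])

slope, _ = np.polyfit(days, np.log(conc), 1)   # slope of ln(concentration) vs. time is -k
k = -slope                                     # first-order depuration rate constant (1/day)
half_life = np.log(2) / k
print(f"k = {k:.4f} per day, half-life = {half_life:.0f} days")
```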
Bootstrap simulations for evaluating the uncertainty associated with peaks-over-threshold estimates of extreme wind velocity
ENVIRONMETRICS, Issue 1 2003
M. D. Pandey
Abstract. In the peaks-over-threshold (POT) method of extreme quantile estimation, the selection of a suitable threshold is critical to estimation accuracy. In practical applications, however, the threshold selection is not so obvious due to erratic variation of quantile estimates with minor changes in threshold. To address this issue, the article investigates the variation of quantile uncertainty (bias and variance) as a function of threshold using a semi-parametric bootstrap algorithm. Furthermore, the article compares the performance of L-moment and de Haan methods that are used for fitting the Pareto distribution to peak data. The analysis of simulated and actual U.S. wind speed data illustrates that the L-moment method can lead to almost unbiased quantile estimates for certain thresholds. A threshold corresponding to minimum standard error appears to provide reasonable estimates of wind speed extremes. It is concluded that the quantification of uncertainty associated with a quantile estimate is necessary for selecting a suitable threshold and estimating the design wind speed. For this purpose, semi-parametric bootstrap method has proved to be a simple, practical and effective tool. [source]
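A simplified version of this threshold-sensitivity analysis can be sketched with a maximum-likelihood generalized Pareto fit to exceedances and an ordinary nonparametric bootstrap for the quantile standard error; the authors' semi-parametric scheme, L-moment and de Haan fits, and wind data are not reproduced here, and the sample below is synthetic.

```python
import numpy as np
from scipy.stats import genpareto

def pot_quantile(x, u, q):
    """Quantile x_q with P(X <= x_q) = q, from a GPD fitted to exceedances over threshold u."""
    exc = x[x > u] - u
    zeta = len(exc) / len(x)                      # probability of exceeding the threshold
    c, _, scale = genpareto.fit(exc, floc=0)      # MLE with location fixed at 0
    return u + genpareto.ppf(1 - (1 - q) / zeta, c, loc=0, scale=scale)

def bootstrap_se(x, u, q, n_boot=200, seed=1):
    rng = np.random.default_rng(seed)
    est = [pot_quantile(rng.choice(x, size=len(x), replace=True), u, q) for _ in range(n_boot)]
    return np.std(est, ddof=1)

rng = np.random.default_rng(0)
speeds = rng.gumbel(loc=20, scale=4, size=3650)   # synthetic daily maximum wind speeds
for u in (28, 30, 32):                            # compare candidate thresholds
    print(u, round(pot_quantile(speeds, u, 0.999), 1), round(bootstrap_se(speeds, u, 0.999), 2))
```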
Hyperhomocysteinemia in epileptic patients on new antiepileptic drugs
EPILEPSIA, Issue 2 2010
Vincenzo Belcastro
Summary. Purpose: Older enzyme-inducing antiepileptic drugs (AEDs) may induce supraphysiologic plasma concentrations of total (t) homocysteine (Hcy). The aim of the present study was to investigate the effect of new AEDs on plasma tHcy levels. Methods: Patients 18–50 years of age, on AED monotherapy, with no other known cause of hyper-tHcy were enrolled. Plasma tHcy, folate, vitamin B12, and AED levels were determined by standard high-performance liquid chromatography (HPLC) methods. Methylenetetrahydrofolate-reductase (MTHFR) polymorphisms were checked using the Puregene genomic DNA purification system (Gentra, Celbio, Italy). A group of healthy volunteers matched for age and sex was taken as control. Results: Two hundred fifty-nine patients (151 on newer and 108 on older AEDs) and 231 controls were enrolled. Plasma tHcy levels were significantly higher [mean values (SE) 16.8 (0.4) vs. 9.1 (0.2) μM; physiologic range 5–13 μM] and folate lower [6.3 (0.1) vs. 9.3 (0.1) nM; normal > 6.8 nM] in patients compared to controls. Patients treated with oxcarbazepine, topiramate, carbamazepine, and phenobarbital exhibited mean plasma tHcy levels above the physiologic range [mean values (SE) 16 (0.8), 19.1 (0.8), 20.5 (1.0), and 18.5 (1.5) μM, respectively]. Conversely, normal tHcy concentrations were observed in the lamotrigine and levetiracetam groups [both 11.1 (0.5) μM]. Discussion: Oxcarbazepine and topiramate might cause hyper-tHcy, most likely because of the capacity of these agents to induce the hepatic enzymes. Because literature data suggest that hyper-tHcy may contribute to the development of cerebrovascular diseases and brain atrophy, a supplement of folate can be considered in these patients to normalize plasma tHcy. [source]

On the reliability of a dental OSCE, using SEM: effect of different days
EUROPEAN JOURNAL OF DENTAL EDUCATION, Issue 3 2008
M. Schoonheim-Klein
Abstract. Aim: The first aim was to study the reliability of a dental objective structured clinical examination (OSCE) administered over multiple days, and the second was to assess the number of test stations required for a sufficiently reliable decision in three score interpretation perspectives of a dental OSCE administered over multiple days. Materials and methods: In four OSCE administrations, 463 students of the years 2005 and 2006 took the summative OSCE after a dental course in comprehensive dentistry. The OSCE had 16–18 five-minute stations (scores 1–10), and each OSCE was administered on four different days of 1 week. ANOVA was used to test for examinee performance variation across days. Generalizability theory was used for reliability analyses. Reliability was studied from three interpretation perspectives: for relative (norm) decisions, for absolute (domain) decisions and for pass–fail (mastery) decisions. As an indicator of reproducibility of test scores in this dental OSCE, the standard error of measurement (SEM) was used. The benchmark of SEM was set at <0.51, corresponding to a 95% confidence interval (CI) of <1 on the original scoring scale that ranged from 1 to 10. Results: The mean weighted total OSCE score was 7.14 on a 10-point scale. With the pass–fail score set at 6.2 for the four OSCEs, 90% of the 463 students passed. There was no significant increase in scores over the different days the OSCE was administered. 'Wished' variance owing to students was 6.3%. Variance owing to interaction between student and stations and residual error was 66.3%, more than two times larger than variance owing to stations' difficulty (27.4%). The SEM for norm decisions was 0.42 with a CI of ±0.83, and the SEM for domain decisions was 0.50 with a CI of ±0.98. In order to make reliable relative decisions (SEM < 0.51), a minimum of 12 stations is necessary, and for reliable absolute and pass–fail decisions, a minimum of 17 stations is necessary in this dental OSCE. Conclusions: It appeared reliable, when testing large numbers of students, to administer the OSCE on different days. In order to make reliable decisions for this dental OSCE, a minimum of 17 stations is needed. Clearly, wide sampling of stations is at the heart of obtaining reliable scores in OSCEs, also in dental education. [source]
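The decision-study logic behind these station counts can be reproduced from variance components: relative error uses only the person-by-station plus residual component, while absolute error also charges station difficulty. The component values below are assumptions chosen to be consistent with the percentages and SEMs quoted in the abstract, not numbers taken from the paper.

```python
import numpy as np

# Illustrative variance components on the 1-10 score scale (assumed): person 0.29,
# person-by-station + residual 3.01, station 1.24 (roughly the 6.3 / 66.3 / 27.4% split).
var_person, var_ps_resid, var_station = 0.29, 3.01, 1.24

def sem(n_stations, absolute=True):
    err = var_ps_resid / n_stations
    if absolute:                           # absolute (domain) and pass-fail decisions
        err += var_station / n_stations    # station difficulty also counts as error
    return np.sqrt(err)

def stations_needed(target=0.51, absolute=True):
    n = 1
    while sem(n, absolute) >= target:
        n += 1
    return n

print(stations_needed(absolute=False))   # relative (norm) decisions  -> about 12 stations
print(stations_needed(absolute=True))    # absolute / pass-fail       -> about 17 stations
```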
Modelling the Influence of Age, Body Size and Sex on Maximum Oxygen Uptake in Older Humans
EXPERIMENTAL PHYSIOLOGY, Issue 2 2000
Patrick J. Johnson
The purpose of this study was to describe the influence of body size and sex on the decline in maximum oxygen uptake (VO2max) in older men and women. A stratified random sample of 152 men and 146 women, aged 55–86 years, was drawn from the study population. The influence of age on VO2max, independent of differences in body mass (BM) or fat-free mass (FFM), was investigated using the following allometric model: VO2max = BM^b (or FFM^b) × exp(a + (c × age) + (d × sex)) × ε. The model was linearised and parameters identified using standard multiple regression. The BM model explained 68.8% of the variance in VO2max. The parameters (± s.e.e., standard error of the estimate) for ln BM (0.563 ± 0.070), age (-0.0154 ± 0.0012), sex (0.242 ± 0.024) and the intercept (-1.09 ± 0.32) were all significant (P < 0.001). The FFM model explained 69.3% of the variance in VO2max, and the parameters (± s.e.e.) ln FFM (0.772 ± 0.090), age (-0.0159 ± 0.0012) and the intercept (-1.57 ± 0.36) were significant (P < 0.001), while sex (0.077 ± 0.038) was significant at P = 0.0497. Regardless of the model used, the age-associated decline was similar, with a relative decline of 15% per decade (0.984^age) in VO2max in older humans being estimated. The study has demonstrated that, for a randomly drawn sample, the age-related loss in VO2max is determined, in part, by the loss of fat-free body mass. When this factor is accounted for, the loss of VO2max across age is similar in older men and women. [source]
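The linearised allometric model can be fitted by ordinary least squares on the log scale, and the per-decade decline recovered from the age coefficient as 1 − exp(10c). The data and coefficients below are simulated stand-ins loosely echoing the reported FFM model, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
age = rng.uniform(55, 86, n)
sex = rng.integers(0, 2, n)                    # 1 = male, 0 = female (assumed coding)
ffm = rng.normal(48 + 12 * sex, 6, n)          # fat-free mass, kg (simulated)
ln_vo2 = -1.6 + 0.77 * np.log(ffm) - 0.016 * age + 0.08 * sex + rng.normal(0, 0.12, n)

# Linearised model: ln(VO2max) = a + b*ln(FFM) + c*age + d*sex + ln(eps)
X = np.column_stack([np.ones(n), np.log(ffm), age, sex])
coef, *_ = np.linalg.lstsq(X, ln_vo2, rcond=None)
a, b, c, d = coef
print(f"FFM exponent b = {b:.3f}, age coefficient c = {c:.4f}")
print(f"estimated decline per decade = {(1 - np.exp(10 * c)) * 100:.1f}%")
```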
Exoenzyme activities as indicators of dissolved organic matter composition in the hyporheic zone of a floodplain river
FRESHWATER BIOLOGY, Issue 8 2010
SANDRA M. CLINTON
Summary. 1. We measured the hyporheic microbial exoenzyme activities in a floodplain river to determine whether dissolved organic matter (DOM) bioavailability varied with overlying riparian vegetation patch structure or position along flowpaths. 2. Particulate organic matter (POM), dissolved organic carbon (DOC), dissolved oxygen (DO), electrical conductivity and temperature were sampled from wells in a riparian terrace on the Queets River, Washington, U.S.A. on 25 March, 15 May, 20 July and 09 October 1999. Dissolved nitrate, ammonium and soluble reactive phosphorus were also collected on 20 July and 09 October 1999. Wells were characterised by their associated overlying vegetation: bare cobble/young alder, mid-aged alder (8–20 years) and old alder/old-growth conifer (25 to >100 years). POM was analysed for the ash-free dry mass and the activities of eight exoenzymes (α-glucosidase, β-glucosidase, β-N-acetylglucosaminidase, xylosidase, phosphatase, leucine aminopeptidase, esterase and endopeptidase) using fluorogenic substrates. 3. Exoenzyme activities in the Queets River hyporheic zone indicated the presence of an active microbial community metabolising a diverse array of organic molecules. Individual exoenzyme activity (mean ± standard error) ranged from 0.507 ± 0.1547 to 22.8 ± 5.69 μmol MUF (g AFDM)⁻¹ h⁻¹, was highly variable among wells and varied seasonally, with the lowest rates occurring in March. Exoenzyme activities were weakly correlated with DO, DOC and inorganic nutrient concentrations. 4. Ratios of leucine aminopeptidase : β-glucosidase were low in March, May and October and high in July, potentially indicating a switch from polysaccharides to proteins as the dominant component of microbial metabolism. 5. Principal components analysis indicated that there were patch effects and that these effects were strongest in the summer. 6. DOM degradation patterns did not change systematically along hyporheic flowpaths but varied with overlying forest patch type in the Queets River hyporheic zone, suggesting that additional carbon inputs exist. We hypothesise that the most likely input is the downward movement of DOM from overlying riparian soils. Understanding this movement of DOM from soils to subsurface water is essential for understanding both the hyporheic metabolism and the carbon budget of streams and rivers. [source]

Measuring metabolic rate in the field: the pros and cons of the doubly labelled water and heart rate methods
FUNCTIONAL ECOLOGY, Issue 2 2004
P. J. Butler
Summary. 1. Measuring the metabolic rate of animals in the field (FMR) is central to the work of ecologists in many disciplines. In this article we discuss the pros and cons of the two most commonly used methods for measuring FMR. 2. Both methods are constantly under development, but at the present time can only accurately be used to estimate the mean rate of energy expenditure of groups of animals. The doubly labelled water method (DLW) uses stable isotopes of hydrogen and oxygen to trace the flow of water and carbon dioxide through the body over time. From these data, it is possible to derive a single estimate of the rate of oxygen consumption (VO2) for the duration of the experiment. The duration of the experiment will depend on the rate of flow of isotopes of oxygen and hydrogen through the body, which in turn depends on the animal's size, ranging from 24 h for small vertebrates to up to 28 days in humans. 3. This technique has been used widely, partly as a result of its relative simplicity and potential low cost, though there is some uncertainty over the determination of the standard error of the estimate of mean VO2. 4. The heart rate (fH) method depends on the physiological relationship between heart rate and VO2. 5. If these two quantities are calibrated against each other under controlled conditions, fH can then be measured in free-ranging animals and used to estimate VO2. 6. The latest generation of small implantable data loggers means that it is possible to measure fH for over a year on a very fine temporal scale, though the current size of the data loggers limits the size of experimental animals to around 1 kg. However, externally mounted radio-transmitters are now sufficiently small to be used with animals of less than 40 g body mass. This technique is gaining in popularity owing to its high accuracy and versatility, though the logistic constraint of performing calibrations can make its use a relatively extended process. [source]

Upward bias in odds ratio estimates from genome-wide association studies
GENETIC EPIDEMIOLOGY, Issue 4 2007
Chad Garner
Abstract. Genome-wide association studies are carried out to identify unknown genes for a complex trait. Polymorphisms showing the most statistically significant associations are reported and followed up in subsequent confirmatory studies. In addition to the test of association, the statistical analysis provides point estimates of the relationship between the genotype and phenotype at each polymorphism, typically an odds ratio in case-control association studies. The statistical significance of the test and the estimator of the odds ratio are completely correlated. Selecting the most extreme statistics is equivalent to selecting the most extreme odds ratios. The value of the estimator, given the value of the statistical significance, depends on the standard error of the estimator and the power of the study. This report shows that when power is low, estimates of the odds ratio from a genome-wide association study, or any large-scale association study, will be upwardly biased. Genome-wide association studies are often underpowered given the low α levels required to declare statistical significance and the small individual genetic effects known to characterize complex traits. Factors such as low allele frequency, inadequate sample size and weak genetic effects contribute to large standard errors in the odds ratio estimates, low power and upwardly biased odds ratios. Studies that have high power to detect an association with the true odds ratio will have little or no bias, regardless of the statistical significance threshold. The results have implications for the interpretation of genome-wide association analysis and the planning of subsequent confirmatory stages. Genet Epidemiol. 2007. © 2007 Wiley-Liss, Inc. [source]
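The selection effect described in this last abstract (often called the winner's curse) is easy to reproduce by simulation: draw log odds ratio estimates around a modest true effect, keep only those passing a stringent significance threshold, and compare the mean of the survivors with the truth. All numbers below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
true_log_or, se, z_crit = np.log(1.15), 0.08, 5.33     # z_crit roughly genome-wide significance

est = rng.normal(true_log_or, se, size=1_000_000)      # sampling distribution of log-OR estimates
significant = np.abs(est / se) > z_crit                # only the "hits" get reported

print(f"power = {significant.mean():.4f}")
print(f"true OR = {np.exp(true_log_or):.2f}, "
      f"mean reported OR among hits = {np.exp(est[significant].mean()):.2f}")
```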