Unbiased
Terms modified by "Unbiased"

Selected Abstracts

Revised δ34S reference values for IAEA sulfur isotope reference materials S-2 and S-3
RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 8 2009. Jacqueline L. Mann
Revised δ34S reference values with associated expanded uncertainties (95% confidence interval (C.I.)) are presented for the sulfur isotope reference materials IAEA-S-2 (22.62 ± 0.16‰) and IAEA-S-3 (−32.49 ± 0.16‰). These revised values are determined using two relative-difference measurement techniques, gas source isotope ratio mass spectrometry (GIRMS) and double-spike multi-collector thermal ionization mass spectrometry (MC-TIMS). Gas analyses have traditionally been considered the most robust for relative isotopic difference measurements of sulfur. The double-spike MC-TIMS technique provides an independent method for value-assignment validation and produces revised values that are both unbiased and more precise than previous value assignments. Unbiased δ34S values are required to anchor the positive and negative end members of the sulfur delta (δ) scale because they are the basis for reporting both δ34S values and the derived mass-independent Δ33S and Δ36S values. Published in 2009 by John Wiley & Sons, Ltd. [source]

Bootstrapping regression models with BLUS residuals
THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2000. Michèle Grenier
Abstract: To bootstrap a regression problem, pairs of response and explanatory variables or residuals can be resampled, according to whether we believe that the explanatory variables are random or fixed. In the latter case, different residuals have been proposed in the literature, including the ordinary residuals (Efron 1979), standardized residuals (Bickel & Freedman 1983) and Studentized residuals (Weber 1984). Freedman (1981) has shown that the bootstrap from ordinary residuals is asymptotically valid when the number of cases increases and the number of variables is fixed. Bickel & Freedman (1983) have shown the asymptotic validity for ordinary residuals when the number of variables and the number of cases both increase, provided that the ratio of the two converges to zero at an appropriate rate. In this paper, the authors introduce the use of BLUS (Best Linear Unbiased with Scalar covariance matrix) residuals in bootstrapping regression models. The main advantage of the BLUS residuals, introduced in Theil (1965), is that they are uncorrelated. The main disadvantage is that only n − p residuals can be computed for a regression problem with n cases and p variables. The asymptotic results of Freedman (1981) and Bickel & Freedman (1983) for the ordinary (and standardized) residuals are generalized to the BLUS residuals. A small simulation study shows that even though only n − p residuals are available, in small samples bootstrapping BLUS residuals can be as good as, and sometimes better than, bootstrapping from standardized or Studentized residuals. [source]
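For readers who want to see the resampling scheme in miniature, here is a hedged sketch of a fixed-design residual bootstrap for a regression slope using ordinary residuals (Efron 1979). It does not implement the BLUS transformation the paper studies, and the data and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fixed-design regression data (illustrative only).
n = 40
x = np.linspace(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

# Fit OLS and form ordinary residuals.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

slopes = []
for _ in range(2000):
    # Resample residuals with replacement and rebuild responses
    # around the fitted line (fixed-X residual bootstrap).
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    slopes.append(b_star[1])

print("bootstrap SE of slope:", np.std(slopes, ddof=1))
```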
Off-Target Decoding of a Multitarget Kinase Inhibitor by Chemical Proteomics
CHEMBIOCHEM, Issue 7 2009. Enrico Missner
Abstract: Unbiased: Chemical proteomics was used to profile compound interactions in an unbiased fashion. We present here the application of different compound-immobilization routes for decoding nonprotein kinase off-targets of the multitarget kinase inhibitor C1, which interacts with distinct compound moieties. Since the approval of the first selective tyrosine kinase inhibitor, imatinib, various drugs have been developed to target protein kinases. However, due to a high degree of structural conservation of the ATP binding site, off-target effects have been reported for several drugs. Here, we report on off-target decoding for a multitarget protein kinase inhibitor by chemical proteomics, focusing on interactions with nonprotein kinases. We tested two different routes for the immobilization of the inhibitor on a carrier matrix, and thus identified off-targets that interact with distinct compound moieties. Besides several of the kinases known to bind to the compound, the pyridoxal kinase (PDXK), which has been described to interact with the CDK inhibitor (R)-roscovitine, was captured. The PDXK–inhibitor interaction was shown to occur at the substrate binding site rather than at the ATP binding site. In addition, carbonic anhydrase 2 (CA2) binding was demonstrated, and determination of the IC50 revealed enzyme inhibition in the submicromolar range. The data demonstrate that different compound-immobilization routes for chemical proteomics approaches are a valuable method to improve knowledge about the off-target profile of a compound. [source]

Replica Exchange Light Transport
COMPUTER GRAPHICS FORUM, Issue 8 2009. Shinya Kitaoka
I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.3 [Computer Graphics]: Picture/Image Generation
Abstract: We solve the light transport problem by introducing a novel unbiased Monte Carlo algorithm called replica exchange light transport, inspired by the replica exchange Monte Carlo method in the fields of computational physics and statistical information processing. The replica exchange Monte Carlo method is a sampling technique whose operation resembles simulated annealing in optimization algorithms, using a set of sampling distributions. We apply it to the solution of light transport integration by extending the probability density function of an integrand of the integration to a set of distributions. That set of distributions is composed of combinations of the path densities of different path generation types: uniform distributions in the integral domain, explicit and implicit paths in light (particle/photon) tracing, indirect paths in bidirectional path tracing, explicit and implicit paths in path tracing, and implicit caustics paths seen through specular surfaces including the delta function in path tracing. The replica-exchange light transport algorithm generates a sequence of path samples from each distribution and samples the simultaneous distribution of those distributions as a stationary distribution by using the Markov chain Monte Carlo method. Then the algorithm combines the obtained path samples from each distribution using multiple importance sampling. We compare the images generated with our algorithm to those generated with bidirectional path tracing and Metropolis light transport based on the primary sample space. Our proposed algorithm has better convergence properties than bidirectional path tracing and Metropolis light transport, and it is easy to implement by extending Metropolis light transport. [source]
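For context, the sampler this algorithm borrows from can be sketched generically. The toy below runs replica exchange (parallel tempering) Monte Carlo on a bimodal one-dimensional target; it is not the authors' rendering algorithm, and the target density and temperature ladder are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Bimodal toy density: unnormalized mixture of two Gaussians.
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

temps = [1.0, 2.0, 4.0, 8.0]          # temperature ladder (assumed)
x = np.zeros(len(temps))              # one chain per temperature
samples = []

for step in range(20000):
    # Metropolis update within each tempered chain.
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(scale=1.0)
        if np.log(rng.uniform()) < (log_target(prop) - log_target(x[i])) / T:
            x[i] = prop
    # Propose swapping the states of a random adjacent pair of chains.
    i = rng.integers(len(temps) - 1)
    delta = (1 / temps[i] - 1 / temps[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
    if np.log(rng.uniform()) < delta:
        x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])              # the T=1 chain targets the true density

print("mean |x| under the T=1 chain:", np.mean(np.abs(samples[5000:])))
```

The hot chains move freely between modes and hand well-mixed states down to the cold chain via the swap moves, which is the property the paper exploits for path sampling.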
Comparing weighted and unweighted analyses applied to data with a mix of pooled and individual observations
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 5 2010. Sarah G. Anderson
Abstract: Smaller organisms may have too little tissue to allow assaying as individuals. To get a sufficient sample for assaying, a collection of smaller individual organisms is pooled together to produce a single observation for modeling and analysis. When a dataset contains a mix of pooled and individual organisms, the variances of the observations are not equal. An unweighted regression method is no longer appropriate because it assumes equal precision among the observations. A weighted regression method is more appropriate and yields more precise estimates because it incorporates a weight for the pooled observations. To demonstrate the benefits of using a weighted analysis when some observations are pooled, the bias and confidence interval (CI) properties were compared using an ordinary least squares and a weighted least squares t-based confidence interval. The slope and intercept estimates were unbiased for both weighted and unweighted analyses. While CIs for the slope and intercept achieved nominal coverage, the CI lengths were smaller using a weighted analysis instead of an unweighted analysis, implying that a weighted analysis will yield greater precision. Environ. Toxicol. Chem. 2010;29:1168–1171. © 2010 SETAC [source]
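A minimal sketch of the weighting idea: a pooled observation that averages k individuals has error variance σ²/k, so weighted least squares with weights equal to pool size restores equal precision. All values below are simulated; this is not the paper's dataset or exact analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pool sizes: 1 = individual organism, >1 = pooled composite sample.
k = np.array([1, 1, 1, 1, 5, 5, 10, 10, 10, 20])
x = np.linspace(1, 10, k.size)
# A pooled response averages k individuals, so its noise scale is sigma / sqrt(k).
y = 1.0 + 0.3 * x + rng.normal(scale=1.0 / np.sqrt(k))

X = np.column_stack([np.ones_like(x), x])

# Unweighted OLS treats all observations as equally precise.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# WLS: weight each observation by its pool size (inverse variance).
W = np.diag(k.astype(float))
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("OLS:", beta_ols, " WLS:", beta_wls)
```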
Ratio estimators in adaptive cluster sampling
ENVIRONMETRICS, Issue 6 2007. Arthur L. Dryver
Abstract: In most surveys data are collected on many items rather than just the one variable of primary interest. Making the most use of the information collected is an issue of both practical and theoretical interest. Ratio estimates for the population mean or total are often more efficient. Unfortunately, ratio estimation is straightforward with simple random sampling, but this is often not the case when more complicated sampling designs are used, such as adaptive cluster sampling. A serious concern with ratio estimates introduced with many complicated designs is lack of independence, a necessary assumption. In this article, we propose two new ratio estimators under adaptive cluster sampling, one of which is unbiased for adaptive cluster sampling designs. The efficiencies of the new estimators relative to existing unbiased estimators, which do not utilize the auxiliary information, for adaptive cluster sampling and relative to conventional ratio estimation under simple random sampling without replacement are compared in this article. Related results show that the proposed estimators can be considered a robust alternative to the conventional ratio estimator, especially when the correlation between the variable of interest and the auxiliary variable is not high enough for the conventional ratio estimator to have satisfactory performance. Copyright © 2007 John Wiley & Sons, Ltd. [source]
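As background, the conventional ratio estimator under simple random sampling that the paper uses as its benchmark can be sketched as follows; the adaptive-cluster estimators proposed in the article are not reproduced here, and the population is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical finite population: y correlated with the auxiliary variable x.
N = 1000
x_pop = rng.gamma(shape=2.0, scale=5.0, size=N)
y_pop = 1.5 * x_pop + rng.normal(scale=2.0, size=N)
X_bar = x_pop.mean()                    # known population mean of x

# Simple random sample without replacement.
idx = rng.choice(N, size=50, replace=False)
y_s, x_s = y_pop[idx], x_pop[idx]

mean_srs = y_s.mean()                                 # plain sample mean
mean_ratio = (y_s.mean() / x_s.mean()) * X_bar        # ratio estimator

print("true mean:", y_pop.mean())
print("sample mean:", mean_srs, " ratio estimate:", mean_ratio)
```

When y and x are strongly correlated, the ratio estimate typically sits much closer to the true mean than the plain sample mean, which is the efficiency gain the abstract refers to.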
Occupational exposure to methyl tertiary butyl ether in relation to key health symptom prevalence: the effect of measurement error correction
ENVIRONMETRICS, Issue 6 2003. Aparna P. Keshaviah
Abstract: In 1995, White et al. reported that methyl tertiary butyl ether (MTBE), an oxygenate added to gasoline, was significantly associated with key health symptoms, including headaches, eye irritation, and burning of the nose and throat, among 44 people occupationally exposed to the compound and for whom serum MTBE measurements were available (odds ratio (OR) = 8.9, 95% CI = [1.2, 75.6]). However, these serum MTBE measurements were available for only 29 per cent of the 150 subjects enrolled. Around the same time, Mannino et al. conducted a similar study among individuals occupationally exposed to low levels of MTBE and did not find a significant association between exposure to MTBE and the presence of one or more key health symptoms among the 264 study participants (OR = 0.60, 95% CI = [0.3, 1.21]). In this article, we evaluate the effect of MTBE on the prevalence of key health symptoms by applying a regression calibration method to White et al.'s and Mannino et al.'s data. Unlike White et al., who classified exposure using actual MTBE levels among a subset of the participants, and Mannino et al., who classified exposure based on job category among all participants, we use all of the available data to obtain an estimate of the effect of MTBE in units of serum concentration, adjusted for measurement error due to using job category instead of measured exposure. After adjusting for age, gender and smoking status, MTBE exposure was found to be significantly associated with a 50 per cent increase in the prevalence of one or more key health symptoms per order of magnitude increase in blood concentration on the log10 scale, using data from the 409 study participants with complete information on the covariates (95% CI = [1.00, 2.25]). Simulation results indicated that under conditions similar to those observed in these data, the estimator is unbiased and has a coverage probability close to the nominal value. The methodology illustrated in this article is advantageous because all of the available data were used in the analysis, obtaining a more precise estimate of exposure effect on health outcome, and the estimate is adjusted for measurement error due to using job category instead of measured exposure. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Variance estimation for spatially balanced samples of environmental resources
ENVIRONMETRICS, Issue 6 2003. Don L. Stevens Jr
Abstract: The spatial distribution of a natural resource is an important consideration in designing an efficient survey or monitoring program for the resource. We review a unified strategy for designing probability samples of discrete, finite resource populations, such as lakes within some geographical region; linear populations, such as a stream network in a drainage basin; and continuous, two-dimensional populations, such as forests. The strategy can be viewed as a generalization of spatial stratification. In this article, we develop a local neighborhood variance estimator based on that perspective, and examine its behavior via simulation. The simulations indicate that the local neighborhood estimator is unbiased and stable. The Horvitz–Thompson variance estimator based on assuming independent random sampling (IRS) may be two times the magnitude of the local neighborhood estimate. An example using data from a generalized random-tessellation stratified design on the Oahe Reservoir resulted in local variance estimates being 22 to 58 percent smaller than Horvitz–Thompson IRS variance estimates. Variables with stronger spatial patterns had greater reductions in variance, as expected. Copyright © 2003 John Wiley & Sons, Ltd. [source]

A STATISTICAL TEST OF UNBIASED EVOLUTION OF BODY SIZE IN BIRDS
EVOLUTION, Issue 12 2002. Folmer Bokma
Abstract: Of the approximately 9500 bird species, the vast majority is small-bodied. That is a general feature of evolutionary lineages, also observed for instance in mammals and plants. The avian interspecific body size distribution is right-skewed even on a logarithmic scale. That has previously been interpreted as evidence that body size evolution has been biased. However, a procedure to test for unbiased evolution from the shape of body size distributions was lacking. In the present paper unbiased body size evolution is defined precisely, and a statistical test is developed based on Monte Carlo simulation of unbiased evolution. Application of the test to birds suggests that it is highly unlikely that avian body size evolution has been unbiased as defined. Several possible explanations for this result are discussed. A plausible explanation is that the general model of unbiased evolution assumes that population size and generation time do not affect the evolutionary variability of body size; that is, that micro- and macroevolution are decoupled, which theory suggests is not likely to be the case. [source]
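A loose illustration of the style of Monte Carlo test described here (not the paper's exact simulation model): simulate unbiased, driftless evolution of log body size on a random birth tree many times, then compare the skewness of an observed size distribution with the simulated null distribution. Every number below is a toy assumption.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(4)

def simulate_tip_sizes(n_tips, sigma=0.1):
    """Unbiased (driftless) random walk of log body size on a random
    birth tree: start with one lineage, let all lineages drift between
    speciation events, and split a randomly chosen lineage each event."""
    sizes = [0.0]
    while len(sizes) < n_tips:
        sizes = [s + rng.normal(scale=sigma) for s in sizes]  # drift step
        i = rng.integers(len(sizes))
        sizes.append(sizes[i])                                # speciation
    return np.array(sizes)

# Null distribution of skewness under unbiased evolution.
null_skew = [skew(simulate_tip_sizes(200)) for _ in range(500)]

observed_skew = 0.9   # hypothetical skewness of a real log-size distribution
p_value = np.mean(np.abs(null_skew) >= abs(observed_skew))
print("two-sided Monte Carlo p-value:", p_value)
```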
Semiparametric variance-component models for linkage and association analyses of censored trait data
GENETIC EPIDEMIOLOGY, Issue 7 2006. G. Diao
Abstract: Variance-component (VC) models are widely used for linkage and association mapping of quantitative trait loci in general human pedigrees. Traditional VC methods assume that the trait values within a family follow a multivariate normal distribution and are fully observed. These assumptions are violated if the trait data contain censored observations. When the trait pertains to age at onset of disease, censoring is inevitable because of loss to follow-up and limited study duration. Censoring also arises when the trait assay cannot detect values below (or above) certain thresholds. The latent trait values tend to have a complex distribution. Applying traditional VC methods to censored trait data would inflate type I error and reduce power. We present valid and powerful methods for the linkage and association analyses of censored trait data. Our methods are based on a novel class of semiparametric VC models, which allows an arbitrary distribution for the latent trait values. We construct appropriate likelihood for the observed data, which may contain left or right censored observations. The maximum likelihood estimators are approximately unbiased, normally distributed, and statistically efficient. We develop stable and efficient numerical algorithms to implement the corresponding inference procedures. Extensive simulation studies demonstrate that the proposed methods outperform the existing ones in practical situations. We provide an application to the age at onset of alcohol dependence data from the Collaborative Study on the Genetics of Alcoholism. A computer program is freely available. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]

Optimal designs for estimating penetrance of rare mutations of a disease-susceptibility gene
GENETIC EPIDEMIOLOGY, Issue 3 2003. Gail Gong
Abstract: Many clinical decisions require accurate estimates of disease risks associated with mutations of known disease-susceptibility genes. Such risk estimation is difficult when the mutations are rare. We used computer simulations to compare the performance of estimates obtained from two types of designs based on family data. In the first (clinic-based designs), families are ascertained because they meet certain criteria concerning multiple disease occurrences among family members. In the second (population-based designs), families are sampled through a population-based registry of affected individuals called probands, with oversampling of probands whose families are more likely to segregate mutations. We generated family structures, genotypes, and phenotypes using models that reflect the frequencies and penetrances of mutations of the BRCA1/2 genes. We studied the effects of risk heterogeneity due to unmeasured, shared risk factors by including risk variation due to unmeasured genotypes of another gene. The simulations were chosen to mimic the ascertainment and selection processes commonly used in the two types of designs. We found that penetrance estimates from both designs are nearly unbiased in the absence of unmeasured shared risk factors, but are biased upward in the presence of such factors. The bias increases with increasing variation in risks across genotypes of the second gene. However, it is small compared to the standard error of the estimates. Standard errors from population-based designs are roughly twice those from clinic-based designs with the same number of families. Using the root-mean-square error as a measure of performance, we found that in all instances, the clinic-based designs gave more accurate estimates than did the population-based designs with the same numbers of families. Rough variance calculations suggest that clinic-based designs give more accurate estimates because they include more identified mutation carriers. Genet Epidemiol 24:173–180, 2003. © 2003 Wiley-Liss, Inc. [source]
Estimation of allele frequencies with data on sibships
GENETIC EPIDEMIOLOGY, Issue 3 2001. Karl W. Broman
Abstract: Allele frequencies are generally estimated with data on a set of unrelated individuals. In genetic studies of late-onset diseases, the founding individuals in pedigrees are often not available, and so one is confronted with the problem of estimating allele frequencies with data on related individuals. We focus on sibpairs and sibships, and compare the efficiency of four methods for estimating allele frequencies in this situation: (1) use the data for one individual from each sibship; (2) use the data for all individuals, ignoring their relationships; (3) use the data for all individuals, taking proper account of their relationships, considering a single marker at a time; and (4) use the data for all individuals, taking proper account of their relationships, considering a set of linked markers simultaneously. We derived the variance of estimator 2, and showed that the estimator is unbiased and provides substantial improvement over method 1. We used computer simulation to study the performance of methods 3 and 4, and showed that method 3 provides some improvement over method 2, while method 4 improves little on method 3. Genet. Epidemiol. 20:307–315, 2001. © 2001 Wiley-Liss, Inc. [source]

Localized spectral analysis on the sphere
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2005. Mark A. Wieczorek
SUMMARY: It is often advantageous to investigate the relationship between two geophysical data sets in the spectral domain by calculating admittance and coherence functions. While there exist powerful Cartesian windowing techniques to estimate spatially localized (cross-)spectral properties, the inherent sphericity of planetary bodies sometimes necessitates an approach based in spherical coordinates. Direct localized spectral estimates on the sphere can be obtained by tapering, or multiplying the data by a suitable windowing function, and expanding the resultant field in spherical harmonics. The localization of a window in space and its spectral bandlimitation jointly determine the quality of the spatiospectral estimation. Two kinds of axisymmetric windows are here constructed that are ideally suited to this purpose: bandlimited functions that maximize their spatial energy within a cap of angular radius θ0, and spacelimited functions that maximize their spectral power within a spherical harmonic bandwidth L. Both concentration criteria yield an eigenvalue problem that is solved by an orthogonal family of data tapers, and the properties of these windows depend almost entirely upon the space–bandwidth product N0 = (L + 1)θ0/π. The first N0 − 1 windows are near perfectly concentrated, and the best-concentrated window approaches a lower bound imposed by a spherical uncertainty principle. In order to make robust localized estimates of the admittance and coherence spectra between two fields on the sphere, we propose a method analogous to Cartesian multitaper spectral analysis that uses our optimally concentrated data tapers. We show that the expectation of localized (cross-)power spectra calculated using our data tapers is nearly unbiased for stochastic processes when the input spectrum is white and when averages are made over all possible realizations of the random variables. In physical situations, only one realization of such a process will be available, but in this case, a weighted average of the spectra obtained using multiple data tapers well approximates the expected spectrum. While developed primarily to solve problems in planetary science, our method has applications in all areas of science that investigate spatiospectral relationships between data fields defined on a sphere. [source]
Estimating lifetime or episode-of-illness costs under censoring
HEALTH ECONOMICS, Issue 9 2010. Anirban Basu
Abstract: Many analyses of healthcare costs involve use of data with varying periods of observation and right censoring of cases before death or at the end of the episode of illness. The prominence of observations with no expenditure for some short periods of observation and the extreme skewness typical of these data raise concerns about the robustness of estimators based on inverse probability weighting (IPW) with the survival from censoring probabilities. These estimators also cannot distinguish between the effects of covariates on survival and intensity of utilization, which jointly determine costs. In this paper, we propose a new estimator that extends the class of two-part models to deal with random right censoring and with continuous death and censoring times. Our model also addresses issues about the time to death in these analyses and separates the survival effects from the intensity effects. Using simulations, we compare our proposed estimator to the inverse probability estimator, which shows bias when censoring is large and covariates affect survival. We find our estimator to be unbiased and also more efficient for these designs. We apply our method and compare it with the IPW method using data from the Medicare–SEER files on prostate cancer. Copyright © 2010 John Wiley & Sons, Ltd. [source]
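For orientation, a sketch of the simple inverse-probability-weighted mean-cost estimator the paper compares against: each uncensored case's cost is weighted by the inverse Kaplan–Meier probability of remaining uncensored at its death time. The data are simulated, and the authors' proposed two-part extension is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 500
death = rng.exponential(5.0, n)          # latent death times
cens = rng.exponential(8.0, n)           # latent censoring times
t = np.minimum(death, cens)              # observed follow-up
delta = (death <= cens).astype(float)    # 1 = death observed
cost = 100.0 * death + rng.normal(0, 50, n)   # lifetime cost, known at death

def km_censoring_surv(t, delta, at):
    """Kaplan-Meier estimate of P(censoring time > at): censoring events
    (delta == 0) are the 'events'; deaths act as censored observations."""
    surv = 1.0
    for ti in np.sort(np.unique(t[delta == 0])):
        if ti > at:
            break
        at_risk = np.sum(t >= ti)
        events = np.sum((t == ti) & (delta == 0))
        surv *= 1.0 - events / at_risk
    return surv

# Weighted average of observed lifetime costs over uncensored cases only.
K = np.array([km_censoring_surv(t, delta, ti) for ti in t])
ipw_mean = np.sum(delta * cost / K) / n

print("IPW mean cost:", ipw_mean, " true mean:", np.mean(100.0 * death))
```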
Parameter estimation in semi-distributed hydrological catchment modelling using a multi-criteria objective function
HYDROLOGICAL PROCESSES, Issue 22 2007. Hamed Rouhani
Abstract: Output generated by hydrologic simulation models is traditionally calibrated and validated using split-samples of observed time series of total water flow, measured at the drainage outlet of the river basin. Although this approach might yield an optimal set of model parameters, capable of reproducing the total flow, it has been observed that the flow components making up the total flow are often poorly reproduced. Previous research suggests that, notwithstanding that the underlying physical processes are often poorly mimicked through calibration of a set of parameters, hydrologic models most of the time acceptably estimate the total flow. The objective of this study was to calibrate and validate a computer-based hydrologic model with respect to the total and slow flow. The quick flow component used in this study was taken as the difference between the total and slow flow. Model calibrations were pursued on the basis of comparing the simulated output with the observed total and slow flow using qualitative (graphical) assessments and quantitative (statistical) indicators. The study was conducted using the Soil and Water Assessment Tool (SWAT) model and a 10-year historical record (1986–1995) of the daily flow components of the Grote Nete River basin (Belgium). The data of the period 1986–1989 were used for model calibration and data of the period 1990–1995 for model validation. The predicted daily average total flow matched the observed values with a Nash–Sutcliffe coefficient of 0·67 during calibration and 0·66 during validation. The Nash–Sutcliffe coefficient for slow flow was 0·72 during calibration and 0·61 during validation. Analysis of high and low flows indicated that the model is unbiased. A sensitivity analysis revealed that for the modelling of the daily total flow, accurate estimation of all 10 calibration parameters in the SWAT model is justified, while for the slow flow processes only 4 out of the set of 10 parameters were identified as most sensitive. Copyright © 2007 John Wiley & Sons, Ltd. [source]
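The Nash–Sutcliffe coefficient used to judge these fits is straightforward to compute; a small sketch with made-up flow values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = [3.1, 4.5, 6.2, 5.0, 2.8]   # hypothetical daily flows
sim = [3.0, 4.9, 5.8, 4.6, 3.1]
print(nash_sutcliffe(obs, sim))
```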
Spurious correlations between recent warming and indices of local economic activity
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 14 2009. Gavin A. Schmidt
Abstract: A series of climate model simulations of the 20th Century are analysed to investigate a number of published correlations between indices of local economic activity and recent global warming. These correlations have been used to support a hypothesis that the observed surface warming record has been contaminated in some way and thus overestimates true global warming. However, the basis of the results are correlations over a very restricted set of locations (predominantly western Europe, Japan and the USA) which project strongly onto naturally occurring patterns of climate variability, or are with fields with significant amounts of spatial auto-correlation. Across model simulations, the correlations vary widely due to the chaotic weather component in any short-term record. The reported correlations do not fall outside the simulated distribution, and are probably spurious (i.e. are likely to have arisen from chance alone). Thus, though this study cannot prove that the global temperature record is unbiased, there is no compelling evidence from these correlations of any large-scale contamination. Copyright © 2009 Royal Meteorological Society [source]

Resolution errors associated with gridded precipitation fields
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 15 2005. C. J. Willmott
Abstract: Spatial-resolution errors are inherent in gridded precipitation (P) fields, such as those produced by climate models and from satellite observations, and they can be sizeable when P is averaged spatially onto a coarse grid. They can also vary dramatically over space and time. In this paper, we illustrate the importance of evaluating resolution errors associated with gridded P fields by investigating the relationships between grid resolution and resolution error for monthly P within the Amazon Basin. Spatial-resolution errors within gridded-monthly and average-monthly P fields over the Amazon Basin are evaluated for grid resolutions ranging from 0.1° to 5.0°. A resolution error occurs when P is estimated for a location of interest within a grid cell from the unbiased, grid-cell average P. Graphs of January, July and annual resolution errors versus resolution show that, at the higher resolutions (<3°), aggregation quickly increases resolution error. Resolution error then begins to level off as the grid becomes coarser. Within the Amazon Basin, the largest resolution errors occur during January (summer), but the largest percentage errors appear in July (winter). In January of 1980, for example, resolution errors of 29, 52 and 65 mm (or 11, 19 and 24% of the grid-cell means) were estimated at resolutions of 1.0°, 3.0° and 5.0°. In July of 1980, however, the percentage errors at these three resolutions were considerably larger, that is, 15%, 27% and 33% of the grid-cell means. Copyright © 2005 Royal Meteorological Society [source]

Estimating missing daily temperature extremes using an optimized regression approach
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 11 2001. Robert J. Allen
Abstract: A variation of a least squares regression approach to estimate missing daily maximum and minimum temperatures is developed and evaluated, specifically for temperature extremes. The method focuses on obtaining accurate estimates of annual exceedence counts (e.g. the number of days greater than or equal to the 90th percentile of daily maximum temperatures), as well as counts of consecutive exceedences, while limiting the estimation error associated with each individual value. The performance of this method is compared with that of two existing methods developed for the entire temperature distribution. In these existing methods, temperature estimates are based on data from neighbouring stations using either regression or temperature departure-based approaches. Evaluation of our approach using cold minimum and warm maximum temperatures shows that the median percentage of correctly identified exceedence counts is 97% and the median percentage of correctly identified consecutive exceedence counts is 98%. The other existing methods tend to underestimate both single and consecutive exceedence counts. Using these procedures, the estimated exceedence counts are generally less than 80% of those that actually occurred. Despite the fact that our method is tuned to estimate exceedence counts, the estimation accuracy of individual daily maximum or minimum temperatures is similar to that of the other estimation procedures. The median absolute error (MAE) using all temperatures greater than or equal to the 90th percentile (T90) − 1.1°C for ten climatically diverse stations is 1.28°C for our method, while the other methods give MAEs of 1.27 and 1.17°C. In terms of median error, however, the tendency for underprediction by the existing methods is pronounced, with −0.77 and −0.61°C biases. Our optimized method is relatively unbiased as the resulting mean error is −0.12°C. Copyright © 2001 Royal Meteorological Society [source]
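Counting single and consecutive exceedances of the 90th percentile, the quantities this method is tuned to reproduce, can be sketched as below on a synthetic temperature series:

```python
import numpy as np

rng = np.random.default_rng(6)
tmax = rng.normal(25.0, 5.0, 365)          # synthetic daily maxima
t90 = np.percentile(tmax, 90)              # station's 90th-percentile threshold

exceed = tmax >= t90
n_exceed = int(exceed.sum())               # annual exceedance count

# Collect lengths of runs of consecutive exceedance days.
runs = []
length = 0
for flag in np.append(exceed, False):      # sentinel closes a trailing run
    if flag:
        length += 1
    elif length:
        runs.append(length)
        length = 0

n_consecutive = sum(1 for r in runs if r >= 2)
print("exceedances:", n_exceed, " runs of >= 2 consecutive days:", n_consecutive)
```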
Prevalence and patterns of executive impairment in community dwelling Mexican Americans: results from the Hispanic EPESE Study
INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 10 2004. Donald R. Royall
Introduction: Little is known about the prevalence of impaired executive control function (ECF) in community dwelling elderly or minority populations. We have determined the prevalence of cognitive impairment and impaired ECF in a community dwelling Mexican American elderly population, and their associations with functional status.
Subjects: Subjects were 1165 Mexican Americans age 65 and over who were administered CLOX as part of the third wave of the Hispanic Established Population for Epidemiological Study (HEPESE), conducted from 1998 to 1999.
Methods: ECF was measured by an executive clock-drawing task (CDT) (i.e. CLOX1). Non-executive cognitive function was assessed by the Mini-Mental State Examination (MMSE) and a non-executive CDT (CLOX2). CLOX scores were combined to estimate the prevalence of global CLOX failure (i.e. 'Type 1' cognitive impairment) vs isolated CLOX1 failure (i.e. 'Type 2' cognitive impairment).
Results: 59.3% of subjects failed CLOX1. 31.1% failed both CLOX1 and CLOX2 (Type 1 cognitive impairment). 33.3% failed CLOX1 only (Type 2 cognitive impairment). 35.6% passed both measures [no cognitive impairment (NCI)]. Many subjects with CLOX1 impairment at Wave 3 had normal MMSE scores. This was more likely to occur in the context of Type 2 cognitive impairment. Both CLOX-defined cognitive impairment groups were associated with functional impairment.
Conclusions: A large percentage of community dwelling Mexican American elderly suffer cognitive impairment that can be demonstrated through a CDT. Isolated executive impairments appear to be most common. The ability of a CDT to demonstrate ECF impairments potentially offers a rapid, culturally unbiased and cost-effective means of assessing this domain. In contrast, the MMSE is relatively insensitive to ECF assessed by CLOX1. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Estimation Optimality of Corrected AIC and Modified Cp in Linear Regression
INTERNATIONAL STATISTICAL REVIEW, Issue 2 2006. Simon L. Davies
Summary: Model selection criteria often arise by constructing unbiased or approximately unbiased estimators of measures known as expected overall discrepancies (Linhart & Zucchini, 1986, p. 19). Such measures quantify the disparity between the true model (i.e., the model which generated the observed data) and a fitted candidate model. For linear regression with normally distributed error terms, the "corrected" Akaike information criterion and the "modified" conceptual predictive statistic have been proposed as exactly unbiased estimators of their respective target discrepancies. We expand on previous work to additionally show that these criteria achieve minimum variance within the class of unbiased estimators. [source]
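For reference, standard textbook forms of the corrected AIC (Hurvich and Tsai) and Mallows' Cp for a normal linear model can be computed as in this sketch; the paper's exact "modified Cp" variant is not reproduced, and the example data are made up.

```python
import numpy as np

def aicc_cp(y, X_candidate, X_full):
    """Hurvich-Tsai corrected AIC and Mallows' Cp for a normal linear model.
    k counts the regression coefficients of the candidate model."""
    y = np.asarray(y, dtype=float)
    n = y.size

    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r

    k = X_candidate.shape[1]
    sse_k = sse(X_candidate)
    aicc = n * np.log(sse_k / n) + n * (n + k) / (n - k - 2)

    p_full = X_full.shape[1]
    s2_full = sse(X_full) / (n - p_full)   # error variance from the full model
    cp = sse_k / s2_full - n + 2 * k
    return aicc, cp

# Tiny made-up example: the candidate drops the full model's quadratic term.
rng = np.random.default_rng(7)
x = np.linspace(0, 1, 30)
y = 1 + 2 * x + rng.normal(0, 0.1, 30)
X_full = np.column_stack([np.ones(30), x, x ** 2])
print(aicc_cp(y, X_full[:, :2], X_full))
```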
Caddisfly diapause aggregations facilitate benthic invertebrate colonization
JOURNAL OF ANIMAL ECOLOGY, Issue 6 2003. Declan J. McCabe
Summary:
1. We used natural and manipulative field experiments to examine the effects of caddisfly (Trichoptera) diapause aggregations on benthic macroinvertebrate communities in a Vermont river.
2. Natural substrates with aggregations of Neophylax and Brachycentrus (Trichoptera: Uenoidae and Brachycentridae) had higher species richness than did substrates lacking aggregations. Aggregations of caddisfly cases added to artificial substrates (bricks) also accumulated greater abundance, species density (number of species per unit area), and species richness (number of species per standard number of individuals) than did control bricks.
3. Low-density, uniformly spaced Brachycentrus cases accumulated higher species density and species richness than did an equivalent density of clumped cases. Similarly, empty Neophylax cases accumulated higher diversity than did cases still occupied by Neophylax pupae.
4. Although natural substrates had higher species richness than artificial substrates, substrate type did not qualitatively change the effect of caddisfly aggregations on species richness.
5. We subsampled individuals randomly from aggregations and control surfaces to provide an estimate of species richness unbiased by abundance. Expected species richness was higher in aggregations than on control surfaces. These results suggest that caddisfly aggregations increase species density by altering the shape of the species–abundance distribution as well as by accumulating individuals and species passively.
6. We conclude that caddisfly diapause aggregations increase habitat complexity and facilitate colonization of other benthic species. [source]
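The abundance-unbiased comparison in point 5 is individual-based rarefaction; the hypergeometric formula for the expected species count in a subsample of m individuals can be sketched as follows, with hypothetical abundance vectors:

```python
from math import comb

def rarefied_richness(abundances, m):
    """Expected species count in a random subsample of m individuals
    (individual-based rarefaction, hypergeometric form)."""
    N = sum(abundances)
    return sum(1 - comb(N - n_i, m) / comb(N, m) for n_i in abundances)

# Hypothetical counts per species on an aggregation vs a control surface.
aggregation = [30, 12, 9, 5, 3, 2, 1, 1, 1]
control = [55, 8, 3, 1, 1]
m = 20
print(rarefied_richness(aggregation, m), rarefied_richness(control, m))
```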
Estimates of maximum annual population growth rates (rm) of mammals and their application in wildlife management
JOURNAL OF APPLIED ECOLOGY, Issue 3 2010. Jim Hone
Summary:
1. The maximum annual population growth rate (rm) is a critical parameter in many models of wildlife dynamics and management. An important application of rm is the estimation of the maximum proportion of a population that can be removed to stop population growth (p).
2. When rm cannot be estimated in the field, one option is to estimate it from demographic data. We evaluate the use of the relationship between rm and female age at first reproduction (α), which is independent of phylogeny, to estimate rm. We first demonstrate that the relationship between field and demographic estimates of rm is unbiased. We then show that the relationship provides an unbiased and simple method to estimate rm using data for 64 mammal species. We also show that p declines exponentially as α increases.
3. We use the fitted relationship to estimate annual rm and p for 55 mammal species in Australia and New Zealand for which there are no field estimates of rm. The estimates differ by species but have low precision (wide 95% credible intervals, CIs). Our estimate of rm for the Tasmanian devil Sarcophilus harrisii is high (0·6, 95% CI: 0·05–2·39) and suggests devils would become extinct if >0·34 of the population is removed annually (e.g. by facial tumour disease). Our estimate of rm (0·77, 95% CI: 0·71–1·05) for brushtail possum Trichosurus vulpecula is much greater than published estimates and highlights the need for further field estimates of rm for the species in New Zealand.
4. Synthesis and applications. Since rm has not been estimated in the field for the majority of mammal species, our approach enables estimates with credible intervals for this important parameter to be obtained for any species for which female age at first reproduction is known. However, the estimates have wide 95% CIs. The estimated rm and associated uncertainty can then be used in population and management models, perhaps most importantly to estimate the proportion that if removed annually would drive the population to extinction. Our approach can be used for taxa other than mammals. [source]
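One simple way to connect rm to a maximum annual removal fraction, though not necessarily the calculation used in the paper, is to note that a population multiplying by λ = exp(rm) each year stops growing once the proportion p = 1 − exp(−rm) is removed. A sketch under that assumption, with illustrative rm values:

```python
import numpy as np

def max_removal_fraction(r_m):
    """Proportion that, removed annually, exactly offsets growth:
    lambda = exp(r_m); lambda * (1 - p) = 1  =>  p = 1 - exp(-r_m).
    This is a back-of-envelope relation, not the paper's estimator."""
    return 1.0 - np.exp(-np.asarray(r_m, dtype=float))

for r in (0.1, 0.5, 1.0, 2.0):   # illustrative r_m values
    print(f"r_m = {r:4.2f}  ->  p = {max_removal_fraction(r):.2f}")
```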
The Lost E-Mail Technique: Use of an Implicit Measure to Assess Discriminatory Attitudes Toward Two Minority Groups in Israel
JOURNAL OF APPLIED SOCIAL PSYCHOLOGY, Issue 1 2009. Orit E. Tykocinski
Abstract: The effectiveness of the "lost e-mail technique" (LET) as an unobtrusive attitude measure was successfully demonstrated in 2 studies. In Study 1, we found that Israeli students were more likely to reply to a similar other than to a minority group member (an Israeli-Arab or an immigrant from the former Soviet Union). In Study 2, LET was administered to professors and administrators, and its effectiveness was compared to a more traditional self-report measure. Although professors showed less discrimination on the self-report measure than did administrators, they were nevertheless discriminative in their responses to lost e-mails. These results suggest that professors are not necessarily less prejudiced, but probably are better able to detect attitude probes and more motivated to appear unbiased. [source]

Biogeography and molecular phylogeny of the genus Schizothorax (Teleostei: Cyprinidae) in China inferred from cytochrome b sequences
JOURNAL OF BIOGEOGRAPHY, Issue 8 2006. Dekui He
Aim: To test a vicariant speciation hypothesis derived from geological evidence of large-scale changes in drainage patterns in the late Miocene that affected the drainages in the south-eastern Tibetan Plateau.
Location: The Tibetan Plateau and adjacent areas.
Methods: The cytochrome b DNA sequences of 30 species of the genus Schizothorax from nine different river systems were analysed. These DNA sequences were analysed using parsimony, maximum likelihood and Bayesian methods. The approximately unbiased and Shimodaira–Hasegawa tests were applied to evaluate the statistical significance of the shortest trees relative to alternative hypotheses. Dates of divergences between lineages were estimated using the nonparametric rate smoothing method, and confidence intervals of dates were obtained by parametric bootstrapping.
Results: The phylogenetic relationships recovered from molecular data were inconsistent with traditional taxonomy, but apparently reflected geographical associations with rivers. Within the genus Schizothorax, we observed a divergence between the lineages from the Irrawaddy–Lhuit and Tsangpo–Parlung rivers, and tentatively dated this vicariant event back to the late Miocene (7.3–6.8 Ma). We also observed approximately simultaneous geographical splits within drainages of the south-eastern Tibetan Plateau, the Irrawaddy, the Yangtze and the Mekong–Salween rivers in the late Miocene (7.1–6.2 Ma).
Main conclusions: Our molecular evidence tentatively highlights the importance of palaeoriver connections and the uplift of the Tibetan Plateau in understanding the evolution of the genus Schizothorax. Molecular estimates of divergence times allowed us to date these vicariant scenarios back to the late Miocene, which agrees with geological suggestions for the separation of these drainages caused by tectonic uplift in south-eastern Tibet. Our results indicated the substantial role of vicariant-based speciation in shaping the current distribution pattern of the genus Schizothorax. [source]

Doppler ultrasound assessment of posterior tibial artery size in humans
JOURNAL OF CLINICAL ULTRASOUND, Issue 5 2006. Manning J. Sabatier
Purpose. The difference between structural remodeling and changes in tone of peripheral arteries in the lower extremities has not been evaluated. The purpose of this study was to (1) evaluate the day-to-day reproducibility and interobserver reliability (IOR) of posterior tibial artery (PTA) diameter measurements and (2) evaluate the effect of posture on PTA diameter at rest (Drest), during 10 minutes of proximal cuff occlusion (Dmin), and after the release of cuff occlusion (Dmax), as well as range (Dmax − Dmin) and constriction [(Dmax − Drest)/(Dmax − Dmin) × 100] in vivo. Methods. We used B-mode sonography to image the PTA during each condition. Results. Day-to-day reliability was good for Drest (intraclass correlation coefficient [ICC] 0.95; mean difference 4.2%), Dmin (ICC 0.93; mean difference 5.4%), and Dmax (ICC 0.99; mean difference 2.2%). The coefficient of repeatability for IOR was 70.5 μm, with a mean interobserver error of 4.7 μm. The seated position decreased Drest (2.6 ± 0.2 to 2.4 ± 0.3 mm; p = 0.002), increased Dmin (2.1 ± 0.2 to 2.4 ± 0.2 mm; p = 0.001), and decreased Dmax (3.1 ± 0.4 to 2.8 ± 0.3 mm; p < 0.001) compared with the supine position. The seated position also decreased arterial range (Dmax − Dmin) from 0.9 ± 0.2 to 0.5 ± 0.1 mm (p = 0.003) and increased basal arterial constriction from 57 ± 19% to 105 ± 27% (p = 0.007). Conclusions. The system employed for measuring PTA diameter yields unbiased and consistent estimates. Furthermore, lower extremity arterial constriction and range change with posture in a manner consistent with known changes in autonomic activity. © 2006 Wiley Periodicals, Inc. J Clin Ultrasound 34:223–230, 2006 [source]
Efficient calculation of configurational entropy from molecular simulations by combining the mutual-information expansion and nearest-neighbor methods
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 10 2008. Vladimir Hnizdo
Abstract: Changes in the configurational entropies of molecules make important contributions to the free energies of reaction for processes such as protein-folding, noncovalent association, and conformational change. However, obtaining entropy from molecular simulations represents a long-standing computational challenge. Here, two recently introduced approaches, the nearest-neighbor (NN) method and the mutual-information expansion (MIE), are combined to furnish an efficient and accurate method of extracting the configurational entropy from a molecular simulation to a given order of correlations among the internal degrees of freedom. The resulting method takes advantage of the strengths of each approach. The NN method is entirely nonparametric (i.e., it makes no assumptions about the underlying probability distribution), its estimates are asymptotically unbiased and consistent, and it makes optimum use of a limited number of available data samples. The MIE, a systematic expansion of entropy in mutual information terms of increasing order, provides a well-characterized approximation for lowering the dimensionality of the numerical problem of calculating the entropy of a high-dimensional system. The combination of these two methods enables obtaining well-converged estimations of the configurational entropy that capture many-body correlations of higher order than is possible with the simple histogramming that was used in the MIE method originally. The combined method is tested here on two simple systems: an idealized system represented by an analytical distribution of six circular variables, where the full joint entropy and all the MIE terms are exactly known, and the R,S stereoisomer of tartaric acid, a molecule with seven internal-rotation degrees of freedom for which the full entropy of internal rotation has already been estimated by the NN method. For these two systems, all the expansion terms of the full MIE of the entropy are estimated by the NN method and, for comparison, the MIE approximations up to third order are also estimated by simple histogramming. The results indicate that truncation of the MIE at the two-body level can be an accurate, computationally nondemanding approximation to the configurational entropy of anharmonic internal degrees of freedom. If needed, higher-order correlations can be estimated reliably by the NN method without excessive demands on the molecular-simulation sample size and computing time. © 2008 Wiley Periodicals, Inc. J Comput Chem, 2008 [source]
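A minimal sketch of the nearest-neighbor entropy estimator underlying the NN half of the method (the Kozachenko–Leonenko form for continuous variables, in nats); the MIE truncation and any circular-coordinate handling for torsion angles are omitted:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def kl_entropy(x):
    """Kozachenko-Leonenko nearest-neighbor entropy estimate (in nats)
    for an (n, d) sample from a continuous distribution."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    n, d = x.shape
    # Distance from each point to its nearest neighbor (k=2: self + NN).
    dist, _ = cKDTree(x).query(x, k=2)
    rho = dist[:, 1]
    v_d = np.pi ** (d / 2) / gamma(d / 2 + 1)   # unit-ball volume in d dims
    return d * np.mean(np.log(rho)) + np.log(v_d) + np.euler_gamma + np.log(n - 1)

# Check against the exact entropy of a standard normal: 0.5*ln(2*pi*e) ~ 1.419.
rng = np.random.default_rng(8)
print(kl_entropy(rng.normal(size=(5000, 1))))
```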
Characterizing the phylogenetic structure of communities by an additive partitioning of phylogenetic diversity
JOURNAL OF ECOLOGY, Issue 3 2007. OLIVIER J. HARDY
Summary:
1. Analysing the phylogenetic structure of natural communities may illuminate the processes governing the assembly and coexistence of species in ecological communities.
2. Unifying previous works, we present a statistical framework to quantify the phylogenetic structure of communities in terms of average divergence time between pairs of individuals or species, sampled from different sites. This framework allows an additive partitioning of the phylogenetic signal into alpha (within-site) and beta (among-site) components, and is closely linked to Simpson diversity. It unifies the treatment of intraspecific (genetic) and interspecific diversity, leading to the definition of differentiation coefficients among community samples (e.g. IST, PST) analogous to classical population genetics coefficients expressing differentiation among populations (e.g. FST, NST).
3. Two coefficients which express community differentiation among sites from species identity (IST) or species phylogeny (PST) require abundance data (number of individuals per species per site), and estimators that are unbiased with respect to sample size are given. Another coefficient (ΠST) expresses the gain of the mean phylogenetic distance between species found in different sites compared with species found within sites, and requires only incidence data (presence/absence of each species in each site).
4. We present tests based on phylogenetic tree randomizations to detect community phylogenetic clustering (PST > IST or ΠST > 0) or phylogenetic overdispersion (PST < IST or ΠST < 0). In addition, we propose a novel approach to detect phylogenetic clustering or overdispersion in different clades or at different evolutionary time depths using partial randomizations.
5. IST, PST or ΠST can also be used as distances between community samples and regressed on ecological or geographical distances, allowing us to investigate the factors responsible for the phylogenetic signal and the critical scales at which it appears.
6. We illustrate the approach on forest tree communities in Equatorial Guinea, where a phylogenetic clustering signal was probably due to phylogenetically conserved adaptations to the elevation gradient and was mostly contributed to by ancient clade subdivisions.
7. The approach presented should find applications for comparing quantitatively phylogenetic patterns of different communities, of similar communities in different regions or continents, or of populations (within species) vs. communities (among species). [source]

The common patterns of nature
JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 8 2009. S. A. FRANK
Abstract: We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. [source]

Welfare in wild-capture marine fisheries
JOURNAL OF FISH BIOLOGY, Issue 10 2009. J. D. Metcalfe
In contrast to terrestrial farming or aquaculture, little, if any, welfare regulation exists that constrains how fishes are handled or killed in wild-capture marine fisheries. Given that welfare in wild-capture fisheries is moving further up the public agenda, an unbiased, dispassionate account of what happens to fishes caught in wild-capture marine fisheries is needed so as to identify where the main animal welfare issues exist. [source]

Multiple horizons and information in USDA production forecasts
AGRIBUSINESS: AN INTERNATIONAL JOURNAL, Issue 1 2008. Dwight R. Sanders
United States Department of Agriculture (USDA) livestock production forecasts are evaluated for their information content across multiple forecast horizons using the direct test developed by Vuchelen and Gutierrez (2005). Forecasts are explicitly tested for rationality (unbiased and efficient) as well as for incremental information out to three quarters ahead. The results suggest that although the forecasts are often not rational, they typically do provide the forecast user with unique information at each horizon. Turkey and milk production forecasts are found to provide the most consistent performance, while beef production forecasts provide little information beyond the two-quarter horizon. [C53, Q13] © 2008 Wiley Periodicals, Inc. [source]
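Forecast rationality of the kind tested here is commonly checked with a Mincer–Zarnowitz regression: regress realizations on forecasts and jointly test intercept = 0 and slope = 1. The sketch below uses made-up numbers and a plain Wald test; the multi-horizon procedure of Vuchelen and Gutierrez (2005) is more involved.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Hypothetical quarterly production forecasts and realizations.
forecast = rng.normal(100, 10, 60)
actual = 2.0 + 0.97 * forecast + rng.normal(0, 3, 60)

X = np.column_stack([np.ones_like(forecast), forecast])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
resid = actual - X @ beta
n, k = X.shape
s2 = resid @ resid / (n - k)

# Wald test of H0: (intercept, slope) = (0, 1), i.e. unbiased forecasts.
r = beta - np.array([0.0, 1.0])
cov = s2 * np.linalg.inv(X.T @ X)
wald = r @ np.linalg.solve(cov, r)
p_value = 1 - stats.chi2.cdf(wald, df=2)
print("Wald statistic:", wald, " p-value:", p_value)
```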