Statistic


Kinds of Statistic

  • Cohen kappa statistic
  • kappa statistic
  • likelihood ratio statistic
  • new statistic
  • ratio statistic
  • score statistic
  • sufficient statistic
  • summary statistic
  • test statistic


  • Selected Abstracts


    An Approximation for the Rank Adjacency Statistic for Spatial Clustering with Sparse Data

    GEOGRAPHICAL ANALYSIS, Issue 1 2001
    John Paul Ekwaru
    The rank adjacency statistic D provides a simple method to assess regional clustering. It is defined as the weighted average absolute difference in ranks of the data, taken over all possible pairs of adjacent regions. In this paper the usual normal approximation to the D statistic is found to give inaccurate results if the data are sparse and some regions have tied ranks. Adjusted formulae for the moments of D that allow for the existence of ties are derived. An example of analyses of sparse mortality data (with many regions having no deaths, and hence tied ranks) showed satisfactory agreement between the adjusted formulae and the empirical distribution of the D statistic. We conclude that the D statistic, when used with adjusted moments, provides a valid approximate method to evaluate spatial clustering, even in sparse data situations. [source]
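
    Read literally from the definition above, D can be computed in a few lines. A minimal sketch (the regions, adjacency pairs, and rates below are illustrative; mid-ranks handle the tied ranks the abstract discusses):

```python
from scipy.stats import rankdata

def rank_adjacency_D(values, adjacency, weights=None):
    """values: per-region data; adjacency: iterable of (i, j) index pairs of
    adjacent regions; weights: optional dict pair -> weight (default binary)."""
    ranks = rankdata(values)                      # mid-ranks handle ties
    if weights is None:
        weights = {pair: 1.0 for pair in adjacency}
    num = sum(w * abs(ranks[i] - ranks[j]) for (i, j), w in weights.items())
    den = sum(weights.values())
    return num / den

# Toy example: five regions along a line, each adjacent to its neighbour.
rates = [2.1, 0.0, 0.0, 5.3, 4.8]                 # sparse data with tied zeros
adj = {(0, 1), (1, 2), (2, 3), (3, 4)}
print(rank_adjacency_D(rates, adj))               # 1.5
```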


    Power of the Rank Adjacency Statistic to Detect Spatial Clustering in a Small Number of Regions

    GEOGRAPHICAL ANALYSIS, Issue 1 2001
    John Paul Ekwaru
    The rank adjacency statistic D is a statistical method for assessing spatial autocorrelation or clustering of geographical data. It was originally proposed for summarizing the geographical patterns of cancer data in Scotland (IARC 1985). In this paper, we investigate the power of the rank adjacency statistic to detect spatial clustering when a small number of regions is involved. The investigations were carried out using Monte Carlo simulations, which involved generating patterned/clustered values and computing the power with which the D statistic would detect them. To investigate the effects of region shapes, structure of the regions, and definition of weights, simulations were carried out using two different region shapes, binary and nonbinary weights, and three different lattice structures. The results indicate that in the typical example of considering Canadian total mortality at the electoral district level, the D statistic had adequate power to detect general spatial autocorrelation in twenty-five or more regions. There was an inverse relationship between power and the level of connectedness of the regions, which depends on the weighting function, shape, and arrangement of the regions. The power of the D statistic was also found to compare favorably with that of Moran's I statistic. [source]
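
    A hedged sketch of the Monte Carlo power computation described above, reusing rank_adjacency_D and adj from the previous sketch: spatially patterned values are generated repeatedly, each replicate is tested against a permutation null for D (clustering pulls D below its null expectation), and the rejection rate estimates power. The effect size and the permutation test are illustrative stand-ins for the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_value_D(values, adjacency, n_perm=499):
    d_obs = rank_adjacency_D(values, adjacency)
    perm = [rank_adjacency_D(rng.permutation(values), adjacency)
            for _ in range(n_perm)]
    # one-sided: similar ranks in adjacent regions give a small D
    return (1 + sum(d <= d_obs for d in perm)) / (n_perm + 1)

def estimated_power(n_rep=200, alpha=0.05):
    rejections = 0
    for _ in range(n_rep):
        clustered = np.arange(5) + rng.normal(0, 0.5, 5)  # spatial trend
        rejections += p_value_D(clustered, adj) <= alpha
    return rejections / n_rep

print(estimated_power())
```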


    Monte Carlo Based Null Distribution for an Alternative Goodness-of-Fit Test Statistic in IRT Models

    JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2000
    Clement A. Stone
    Assessing the correspondence between model predictions and observed data is a recommended procedure for justifying the application of an IRT model. However, with shorter tests, current goodness-of-fit procedures, which assume precise point estimates of ability, are inappropriate. The present paper describes a goodness-of-fit statistic that considers the imprecision with which ability is estimated and involves constructing item fit tables based on each examinee's posterior distribution of ability, given the likelihood of their response pattern and an assumed marginal ability distribution. However, the posterior expectations that are computed are dependent and the distribution of the goodness-of-fit statistic is unknown. The present paper also describes a Monte Carlo resampling procedure that can be used to assess the significance of the fit statistic and compares this method with a previously used method. The results indicate that the method described herein is an effective and reasonably simple procedure for assessing the validity of applying IRT models when ability estimates are imprecise. [source]
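
    The resampling idea generalizes beyond IRT: simulate data sets from the fitted model, recompute the fit statistic on each, and use the empirical distribution as the null reference. A generic sketch, in which fit_model, simulate_from, and fit_statistic are hypothetical placeholders for an IRT workflow, not functions from the paper:

```python
import numpy as np

def monte_carlo_p(data, fit_model, simulate_from, fit_statistic,
                  n_sim=500, seed=0):
    """Parametric-bootstrap p-value for a fit statistic whose sampling
    distribution is unknown (as for the posterior-based statistic above)."""
    rng = np.random.default_rng(seed)
    model = fit_model(data)
    t_obs = fit_statistic(model, data)
    t_null = np.array([fit_statistic(model, simulate_from(model, rng))
                       for _ in range(n_sim)])
    return (1 + np.sum(t_null >= t_obs)) / (n_sim + 1)
```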


    A quantitative analysis of energy intake reported by young men

    NUTRITION & DIETETICS, Issue 4 2008
    Selma C. LIBERATO
    Abstract Aim: To quantitatively analyse energy intake reported by young men and the accuracy of the Goldberg cut-off method for identifying misreporters. Methods: This was a cross-sectional study in which: food intake was assessed by a four-day food record; resting metabolic rate was assessed by indirect calorimetry; percentage body fat was measured by dual-energy X-ray absorptiometry; and energy expenditure was assessed by a physical activity record completed simultaneously with food intake measurements. Energy intake was analysed by direct comparison of energy intake and energy expenditure and by the Goldberg cut-off. Subjects: 34 healthy men aged 18–25 years. Setting: Queensland University of Technology, Queensland, Australia. Main outcome measures: Percentage of misreporters in a group of young men using different methods. Statistical analyses: Data are presented as means and standard deviations. The analyses were conducted using Statistica for Windows 5.5 software. Results: Seven underreporters were identified by direct comparison of energy intake and energy expenditure. The Goldberg cut-off found six out of the seven underreporters identified by direct comparison of energy intake and energy expenditure, but wrongly identified two acceptable reporters as underreporters. The sensitivity and specificity of the Goldberg cut-off method were 0.86 and 0.93, respectively. Conclusions: Seven out of 34 participants underreported their energy intake. In the absence of physical activity measurements, the Goldberg cut-off method identified underreporters in this group of young men with assessed resting metabolic rate. [source]
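
    For readers unfamiliar with the cut-off, a hedged sketch of the commonly cited Goldberg formulation (Goldberg et al. 1991; Black 2000), under which a reporter is flagged when EI:BMR falls outside PAL × exp(±z·(S/100)/√n); the CV components below are literature defaults, not values from this study:

```python
import math

def goldberg_limits(pal=1.55, d=4, n=1, z=2.0,
                    cv_wei=23.0, cv_wb=8.5, cv_tp=15.0):
    """d: days of food records; n: group size (1 to evaluate an individual);
    cv_wei/cv_wb/cv_tp: within-subject CV of intake, CV of BMR measurement,
    and CV of physical activity level, in percent (literature defaults)."""
    s = math.sqrt(cv_wei**2 / d + cv_wb**2 + cv_tp**2)
    factor = math.exp(z * (s / 100) / math.sqrt(n))
    return pal / factor, pal * factor          # lower, upper EI:BMR cut-offs

lo, hi = goldberg_limits()
ei_bmr = 1.12                                  # hypothetical subject's ratio
print("under" if ei_bmr < lo else "over" if ei_bmr > hi else "acceptable")
```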


    Score Statistic to Test for Genetic Correlation for Proband-Family Design

    ANNALS OF HUMAN GENETICS, Issue 4 2005
    R. El Galta
    Summary In genetic epidemiological studies, informative families are often oversampled to increase the power of a study. For a proband-family design, where relatives of probands are sampled, we derive the score statistic to test for clustering of binary and quantitative traits within families due to genetic factors. The derived score statistic is robust to the ascertainment scheme. We considered correlation due to unspecified genetic effects and/or due to sharing alleles identical by descent (IBD) at observed marker locations in a candidate region. A simulation study was carried out to study the distribution of the statistic under the null hypothesis in small data sets. To illustrate the score statistic, data from 33 families with type 2 diabetes mellitus (DM2) were analyzed. In addition to the binary outcome DM2 we also analyzed the quantitative outcome, body mass index (BMI). For both traits familial aggregation was highly significant. For DM2, also including IBD sharing at marker D3S3681 as a cause of correlation gave an even more significant result, which suggests the presence of a trait gene linked to this marker. We conclude that for the proband-family design the score statistic is a powerful and robust tool for detecting clustering of outcomes. [source]


    A Hypothesis-Free Multiple Scan Statistic with Variable Window

    BIOMETRICAL JOURNAL, Issue 2 2008
    L. Cucala
    Abstract In this article we propose a new technique for identifying clusters in temporal point processes. This relies on the comparison between all the m-order spacings and is totally independent of any alternative hypothesis. A recursive procedure is introduced that allows multiple clusters to be identified independently. This new scan statistic seems to be more efficient than the classical scan statistic for detecting and recovering cluster alternatives. These results have applications in epidemiological studies of rare diseases. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    A Robust Genome-Wide Scan Statistic of the Wellcome Trust Case–Control Consortium

    BIOMETRICS, Issue 4 2009
    Jungnam Joo
    Summary In genome-wide association (GWA) studies, test statistics that are efficient and robust across various genetic models are preferable, particularly for studying multiple diseases in the Wellcome Trust Case–Control Consortium (WTCCC, 2007, Nature 447, 661–678). A new test statistic, the minimum of the p-values of the trend test and Pearson's test, was considered by the WTCCC. It is referred to here as MIN2. Because the minimum of two p-values is no longer a valid p-value itself, the WTCCC only used it to rank single nucleotide polymorphisms (SNPs) but did not report the p-values of the associated SNPs when MIN2 was used for ranking. Given its importance in practice, we derive the asymptotic null distribution of MIN2, study some of its analytical properties related to GWA studies, and compare it with existing methods (the trend test, Pearson's test, MAX3, and the constrained likelihood ratio test [CLRT]) by simulations across a wide range of possible genetic models: the recessive (REC), additive (ADD), multiplicative (MUL), dominant (DOM), and overdominant models. The results show that MAX3 and CLRT have greater efficiency robustness than other tests when the REC, ADD/MUL, and DOM models are possible, whereas Pearson's test and MIN2 have greater efficiency robustness if the possible genetic models also include the overdominant model. We conclude that robust tests (MAX3, MIN2, CLRT, and Pearson's test) are preferable to a single trend test for initial GWA studies. The four robust tests are applied to more than 100 SNPs associated with 11 common diseases identified by the two WTCCC GWA studies. [source]
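
    A minimal sketch of MIN2 itself: the smaller of the Cochran-Armitage trend p-value and Pearson's chi-square p-value on a 2×3 case-control genotype table. Since min(p1, p2) is not a valid p-value, significance here is assessed by a parametric bootstrap rather than the asymptotic null distribution the paper derives; the counts are illustrative.

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

def trend_p(table, scores=(0, 1, 2)):
    """Cochran-Armitage trend test for a 2xK table (row 0 = cases)."""
    t = np.asarray(table, float)
    n, r = t.sum(), t[0].sum()
    cols, s = t.sum(axis=0), np.asarray(scores, float)
    stat = (t[0] * s).sum() - r * (cols * s).sum() / n
    var = r * (n - r) / (n * (n - 1)) * ((cols * s**2).sum()
                                         - (cols * s).sum()**2 / n)
    return 2 * norm.sf(abs(stat / np.sqrt(var)))

def min2(table):
    return min(trend_p(table), chi2_contingency(table)[1])

cases, controls = np.array([60, 100, 40]), np.array([80, 90, 30])
m_obs = min2([cases, controls])

rng = np.random.default_rng(1)
p0 = (cases + controls) / (cases + controls).sum()   # pooled under H0
null = [min2([rng.multinomial(cases.sum(), p0),
              rng.multinomial(controls.sum(), p0)]) for _ in range(2000)]
print((1 + sum(m <= m_obs for m in null)) / (1 + len(null)))
```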


    A Spatial Scan Statistic for Survival Data

    BIOMETRICS, Issue 1 2007
    Lan Huang
    Summary Spatial scan statistics with Bernoulli and Poisson models are commonly used for geographical disease surveillance and cluster detection. These models, suitable for count data, were not designed for data with continuous outcomes. We propose a spatial scan statistic based on an exponential model to handle either uncensored or censored continuous survival data. The power and sensitivity of the developed model are investigated through intensive simulations. The method performs well for different survival distribution functions including the exponential, gamma, and log-normal distributions. We also present a method to adjust the analysis for covariates. The cluster detection method is illustrated using survival data for men diagnosed with prostate cancer in Connecticut from 1984 to 1995. [source]


    A Sample Size Formula for the Supremum Log-Rank Statistic

    BIOMETRICS, Issue 1 2005
    Kevin Hasegawa Eng
    Summary An advantage of the supremum log-rank over the standard log-rank statistic is an increased sensitivity to a wider variety of stochastic ordering alternatives. In this article, we develop a formula for sample size computation for studies utilizing the supremum log-rank statistic. The idea is to base power on the proportional hazards alternative, so that the supremum log rank will have the same power as the standard log rank in the setting where the standard log rank is optimal. This results in a slight increase in sample size over that required for the standard log rank. For example, a 5.733% increase occurs for a two-sided test having type I error 0.05 and power 0.80. This slight increase in sample size is offset by the significant gains in power the supremum log-rank test achieves for a wide range of nonproportional hazards alternatives. A small simulation study is used for illustration. These results should facilitate the wider use of the supremum log-rank statistic in clinical trials. [source]
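
    A hedged sketch of how the result can be used in practice: compute the required number of events for the standard log-rank test from Schoenfeld's formula (equal allocation assumed), then apply the supremum log-rank inflation. The 5.733% figure is the single design point quoted above (two-sided α = 0.05, power 0.80); other designs require the paper's formula.

```python
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80):
    """Required events for the standard log-rank test, 1:1 allocation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 4 * z**2 / math.log(hazard_ratio)**2

d_standard = schoenfeld_events(0.7)            # ~247 events
d_supremum = d_standard * 1.05733              # inflation quoted in abstract
print(round(d_standard), round(d_supremum))
```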


    A Tree-Based Scan Statistic for Database Disease Surveillance

    BIOMETRICS, Issue 2 2003
    Martin Kulldorff
    Summary Many databases exist with which it is possible to study the relationship between health events and various potential risk factors. Among these databases, some have variables that naturally form a hierarchical tree structure, such as pharmaceutical drugs and occupations. It is of great interest to use such databases for surveillance purposes in order to detect unsuspected relationships to disease risk. We propose a tree-based scan statistic, by which the surveillance can be conducted with a minimum of prior assumptions about the group of occupations/drugs that increase risk, and which adjusts for the multiple testing inherent in the many potential combinations. The method is illustrated using data from the National Center for Health Statistics Multiple Cause of Death Database, looking at the relationship between occupation and death from silicosis. [source]


    Premature cessation of breastfeeding in infants: development and evaluation of a predictive model in two Argentinian cohorts: the CLACYD study, 1993–1999

    ACTA PAEDIATRICA, Issue 5 2001
    S Berra
    The objective of this study was to develop a model to predict premature cessation of breastfeeding of newborns, in order to detect at-risk groups that would benefit from special assistance programmes. The model was constructed using 700 children with a birthweight of 2000 g or more, in 2 representative cohorts in 1993 and 1995 (CLACYD I sample) in Córdoba, Argentina. Data were analysed from 632 of the cases. Mothers were selected during hospital admittance for childbirth and interviewed in their homes at 1 mo and 6 mo. To evaluate the model, an additional sample with similar characteristics was drawn during 1998 (CLACYD II sample). A questionnaire was administered to 347 mothers during the first 24–48 h after birth and a follow-up was completed at 6 mo, with weaning information on 291 cases. Premature cessation of breastfeeding was considered when it occurred prior to 6 mo. A logistic regression model was fitted to predict premature end of breastfeeding, and was applied to the CLACYD II sample. The calibration (Hosmer-Lemeshow C statistic) and the discrimination [area under the receiver operating characteristics (ROC) curve] of the model were evaluated. The predictive factors of premature end of breastfeeding were: mother breastfed for less than 6 mo [odds ratio (OR) = 1.84, 95% confidence interval (CI) 1.26–2.70], breastfeeding of previous child for less than 6 mo (OR = 4.01, 95% CI 2.58–6.20), the condition of the firstborn child (OR = 2.75, 95% CI 1.79–4.21), the first mother-child contact occurring after 90 min of life (OR = 1.88, 95% CI 1.22–2.91) and having an unplanned pregnancy (OR = 1.50, 95% CI 1.05–2.15). The calibration of the model was acceptable in the CLACYD I sample (p = 0.54), as well as in the CLACYD II sample (p = 0.18). The areas under the ROC curve were 0.72 and 0.68, respectively. Conclusion: A model has been suggested that provides some insight onto background factors for the premature end of breastfeeding. Although some limitations prevent its general use at a population level, it may be a useful tool in the identification of women with a high probability of early weaning. [source]
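
    A minimal sketch of the two validation measures used above: the Hosmer-Lemeshow C statistic (calibration, comparing observed and predicted events across deciles of risk) and the area under the ROC curve (discrimination). The outcomes and predicted probabilities below are simulated, not the study's data.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow C statistic and p-value (deciles of predicted risk)."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    chisq = 0.0
    for gy, gp in zip(np.array_split(y, groups), np.array_split(p, groups)):
        obs, exp, n = gy.sum(), gp.sum(), len(gy)
        chisq += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return chisq, chi2.sf(chisq, groups - 2)

rng = np.random.default_rng(2)
p = rng.uniform(0.05, 0.9, 600)          # predicted risks of early weaning
y = rng.binomial(1, p)                   # outcomes consistent with the model
c, pval = hosmer_lemeshow(y, p)
print(f"HL C = {c:.2f}, p = {pval:.2f}, AUC = {roc_auc_score(y, p):.2f}")
```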


    Temporal Extrapolation of PVA Results in Relation to the IUCN Red List Criterion E

    CONSERVATION BIOLOGY, Issue 1 2003
    Oskar Kindvall
    The IUCN Red List threat categories (critically endangered, endangered, and vulnerable) are defined by a set of criteria (A–E). Criterion E is defined quantitatively by three specified extinction-risk thresholds (50%, 20%, and 10%), each associated with a particular time frame. For a population viability analysis (PVA) to be useful for assessing a species' threat category, the results must be expressed as the full extinction probability function over time, or at least for the three specified time frames. Often this is not the case, and extrapolations from different kinds of PVA results (e.g., mean time to extinction) are often necessary. By means of analytic models, we investigated the possibilities of extrapolation. Based on our results, we suggest that extrapolation is not advisable due to the huge errors that can occur. The extinction probability function is the best kind of summary statistic to use when applying criterion E, but even then the threat categorization may be ambiguous. If the extinction risk is low in the near future but increases rapidly later on, a species may be classified as vulnerable even though it is expected to become extinct within 100 years. To avoid this, we suggest that the guidelines to the IUCN Red List criteria include three reference lines that allow for interpretation of the PVA results in the context of the three threat categories within the entire period of 100 years. If the estimated extinction probability function overshoots one of these functions, the species should be classified accordingly. [source]
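
    A hedged sketch of the classification logic, using the three criterion E thresholds (50% within 10 years for critically endangered, 20% within 20 years for endangered, 10% within 100 years for vulnerable; the generation-time variants of the time frames are ignored for simplicity). The hypothetical PVA output reproduces the problem case above: risk is low early but rises steeply, so the species is classed only as vulnerable despite near-certain extinction within 100 years.

```python
import math

def iucn_category_E(p_ext):
    """p_ext: function mapping years ahead -> cumulative extinction prob."""
    if p_ext(10) >= 0.50:
        return "Critically Endangered"
    if p_ext(20) >= 0.20:
        return "Endangered"
    if p_ext(100) >= 0.10:
        return "Vulnerable"
    return "Not threatened under criterion E"

# Low early risk, steep rise later: P(100) ~ 0.98 yet only 'Vulnerable'.
print(iucn_category_E(lambda t: 1 - math.exp(-((t / 70) ** 4))))
```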


    Using Quality Management Tools to Enhance Feedback from Student Evaluations

    DECISION SCIENCES JOURNAL OF INNOVATIVE EDUCATION, Issue 1 2005
    John B. Jensen
    ABSTRACT Statistical tools found in the service quality assessment literature (the T2 statistic combined with factor analysis) can enhance the feedback instructors receive from student ratings. T2 examines variability across multiple sets of ratings to isolate individual respondents with aberrant response patterns (i.e., outliers). Analyzing student responses that are outside the "normal" range of responses can identify aspects of the course that cause pockets of students to be dissatisfied. This fresh insight into sources of student dissatisfaction is particularly valuable for instructors willing to make tactical classroom changes that accommodate individual students rather than the traditional approach of using student ratings to develop systemwide changes in course delivery. A case study is presented to demonstrate how the recommended procedure minimizes data overload, allows for valid schoolwide and longitudinal comparisons of correlated survey responses, and helps instructors identify priority areas for instructional improvement. [source]
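
    A minimal sketch of the outlier-screening step: a T2-style (squared Mahalanobis) distance for each respondent's vector of ratings, flagging those beyond a chi-square cut-off as aberrant. The ratings are simulated; in the article the items would first be reduced by factor analysis.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
ratings = rng.normal(4.0, 0.5, size=(80, 6))   # 80 students x 6 survey items
ratings[3] = [1, 1, 5, 1, 5, 1]                # one aberrant respondent

mu = ratings.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ratings, rowvar=False))
d = ratings - mu
t2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # squared Mahalanobis distance

cutoff = chi2.ppf(0.999, df=ratings.shape[1])  # approximate large-sample cut
print(np.where(t2 > cutoff)[0])                # indices of flagged outliers
```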


    Developmental profiles for multiple object tracking and spatial memory: typically developing preschoolers and people with Williams syndrome

    DEVELOPMENTAL SCIENCE, Issue 3 2010
    Kirsten O'Hearn
    The ability to track moving objects, a crucial skill for mature performance on everyday spatial tasks, has been hypothesized to require a specialized mechanism that may be available in infancy (i.e. indexes). Consistent with the idea of specialization, our previous work showed that object tracking was more impaired than a matched spatial memory task in individuals with Williams syndrome (WS), a genetic disorder characterized by severe visuo-spatial impairment. We now ask whether this unusual pattern of performance is a reflection of general immaturity or of true abnormality, possibly reflecting the atypical brain development in WS. To examine these two possibilities, we tested typically developing 3- and 4-year-olds and people with WS on multiple object tracking (MOT) and memory for static spatial location. The maximum number of objects that could be correctly tracked or remembered (estimated from the k-statistic) showed similar developmental profiles in typically developing 3- and 4-year-old children, but the WS profile differed from either age group. People with WS could track more objects than 3-year-olds, and the same number as 4-year-olds, but they could remember the locations of more static objects than both 3- and 4-year-olds. Combining these data with those from our previous studies, we found that typically developing children show increases in the number of objects they can track or remember between the ages of 3 and 6, and these increases grow in parallel across the two tasks. In contrast, object tracking in older children and adults with WS remains at the level of 4-year-olds, whereas the ability to remember multiple locations of static objects develops further. As a whole, the evidence suggests that MOT and memory for static location develop in tandem typically, but not in WS. Atypical development of the parietal lobe in people with WS could play a causal role in the abnormal, uneven pattern of performance in WS. This interpretation is consistent with the idea that multiple object tracking engages different mechanisms from those involved in memory for static object location, and that the former can be particularly disrupted by atypical development. [source]
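
    For readers outside this literature, one common formula for the k statistic in change-detection-style tasks is Cowan's k = set size × (hit rate − false-alarm rate); whether the authors used exactly this variant is an assumption here, and the numbers are illustrative.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Estimated number of objects held/tracked (Cowan's k; assumed variant)."""
    return set_size * (hit_rate - false_alarm_rate)

print(cowan_k(set_size=4, hit_rate=0.85, false_alarm_rate=0.10))  # 3.0
```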


    The Bech–Rafaelsen Melancholia Scale (MES) in clinical trials of therapies in depressive disorders: a 20-year review of its use as outcome measure

    ACTA PSYCHIATRICA SCANDINAVICA, Issue 4 2002
    P. Bech
    Bech P. The Bech–Rafaelsen Melancholia Scale (MES) in clinical trials of therapies in depressive disorders: a 20-year review of its use as outcome measure. Acta Psychiatr Scand 2002: 106: 252–264. © Blackwell Munksgaard 2002. Objective: To evaluate the psychometric properties of the Bech–Rafaelsen Melancholia Scale (MES) by reviewing clinical trials in which it has been used as outcome measure. Method: The psychometric analysis included internal validity (total scores being a sufficient statistic), interobserver reliability, and external validity (responsiveness in short-term trials and relapse prevention in long-term trials). Results: The results showed that the MES is a unidimensional scale, indicating that the total score is a sufficient statistic. The interobserver reliability of the MES has been found adequate both in unipolar and bipolar depression. External validity covering relapse, response and recurrence indicated that the MES has a high responsiveness and sensitivity. Conclusion: The MES has been found a valid and reliable scale for the measurement of changes in depressive states during short-term as well as long-term treatment. [source]


    Prevalence of the metabolic syndrome in the island of Gran Canaria: comparison of three major diagnostic proposals

    DIABETIC MEDICINE, Issue 12 2005
    M. Boronat
    Abstract Aims The present study was conducted to estimate the prevalence of the metabolic syndrome in a Canarian population, and to compare its frequency as defined by the most commonly used working definitions. Methods Cross-sectional population-based study. One thousand and thirty adult subjects were randomly selected from the local census of Telde, a city located on the island of Gran Canaria. Participants completed a survey questionnaire and underwent physical examination, fasting blood analyses, and a 75-g standardized oral glucose tolerance test. The prevalence of the metabolic syndrome was estimated according to the definitions proposed by the World Health Organization (WHO), the European Group for the Study of Insulin Resistance (EGIR) and the National Cholesterol Education Program (NCEP), the latter with the original (6.1 mmol/l) and the revised criterion (5.6 mmol/l) for abnormal fasting glucose. Results The adjusted prevalence of the metabolic syndrome was 28.0, 15.9, 23.0 and 28.2%, using the WHO, EGIR, NCEP and revised NCEP criteria, respectively. The measure of agreement (κ statistic) was 0.57 between the WHO and the original NCEP definitions, and 0.61 between the WHO and the revised NCEP definitions. After excluding diabetic subjects, the agreement between the EGIR and WHO proposals was fairly good (κ = 0.70), whereas concordance of the EGIR with the original and the revised NCEP definitions was moderate (κ = 0.47 and 0.46, respectively). Conclusions Whichever the considered diagnostic criteria, the prevalence of the metabolic syndrome in this area of the Canary Islands is greater than that observed in most other European populations. [source]
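
    A minimal sketch of the agreement measure used above: Cohen's κ for two binary classifications (metabolic syndrome yes/no under two definitions), computed from the 2×2 cross-classification. The counts are illustrative, not the study's data.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square cross-classification of two raters."""
    table = np.asarray(table, float)
    n = table.sum()
    po = np.trace(table) / n                          # observed agreement
    pe = (table.sum(0) * table.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# rows: definition A yes/no; columns: definition B yes/no (illustrative)
print(round(cohens_kappa([[210, 70], [60, 690]]), 2))
```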


    Cervical biopsy-based comparison of a new liquid-based thin-layer preparation with conventional Pap smears

    DIAGNOSTIC CYTOPATHOLOGY, Issue 4 2004
    Maria da Gloria Mattosinho de Castro Ferraz M.D.
    Abstract The objective of this study is to compare the diagnostic efficacy of universal collection medium (UCM) liquid-based cytology (LBC) (Digene Corp., MD) and the conventional Pap smear in a comparative study, using histologic results as the gold standard. This was a cross-sectional study. Conventional Pap smears and UCM LBC specimens, obtained from women in a low socioeconomic outpatient population referred to a tertiary center for gynecologic care, were compared. For the purpose of this study, when cervical specimens were collected for cytology, all women underwent colposcopy and biopsy was done if a cervical abnormality was observed. Cytologic evaluations of UCM LBC and conventional Pap smears were carried out separately, masked to the results of the other method. Agreement beyond chance between the two cytologic methods was ascertained by means of the unweighted κ statistic. Sensitivity, specificity, and predictive values with 95% confidence intervals were calculated for both methods. McNemar's test was used to determine the level of association between the two cytology procedures. A total of 800 women were evaluated. Assessment of the overall agreement between the two cytologic methods yielded a κ of 0.777 (P < 0.0001). After adjustment for histologic diagnosis, the computed κ in each stratum was as follows: normal = 0.733; CIN 1 = 0.631; CIN 2/3 = 0.735; cancer = 0.652. The sensitivity and specificity of UCM LBC for detection of cervical intraepithelial lesions and cancer were 75.3% and 86.4%, respectively, not statistically different from the 81.8% and 85.2% seen with the conventional method. This study demonstrates that the UCM LBC method is as accurate as conventional Pap smear cytology in detecting cervical intraepithelial lesions and cancer, even though the UCM samples were systematically prepared from a second sampling of the cervix. Diagn. Cytopathol. 2004;30:220–226. © 2004 Wiley-Liss, Inc. [source]


    Inter-observer agreement for multichannel intraluminal impedance–pH testing

    DISEASES OF THE ESOPHAGUS, Issue 7 2010
    K. Ravi
    SUMMARY Twenty-four-hour ambulatory multichannel intraluminal impedance–pH (MII–pH) detects both acid and nonacid reflux (NAR). A computer-based program (Autoscan™, Sandhill Scientific, Highlands Ranch, CO, USA) automates the detection of reflux episodes, increasing the ease of study interpretation. Inter-observer agreement between multiple reviewers and with Autoscan™ for the evaluation of significant NAR with MII–pH has not been studied in the adult population. Twenty MII–pH studies on patients taking a proton pump inhibitor twice daily were randomly selected. Autoscan™ analyzed all studies using the same pre-programmed parameters. Four reviewers interpreted the MII–pH studies, adding or deleting reflux episodes detected by Autoscan™. Positive studies for NAR and total reflux episodes were based on published criteria. Cohen's kappa statistic (κ) evaluated inter-observer agreement between reviewers and Autoscan™ analysis. The average κ for pathologic NAR between reviewers was 0.57 (0.47–0.70), and between reviewers and Autoscan™ was 0.56 (0.4–0.8). When using the total reflux episode number as a marker for pathologic reflux (acid and NAR), the κ score was 0.72 (0.61–0.89) between reviewers, and 0.74 (0.53–0.9) when evaluating total reflux episodes. Two reviewers agreed more often with each other and with Autoscan™ on the number of NAR episodes, while the other two reviewers agreed with each other, but did not agree with either Autoscan™ or the first two reviewers. Inter-observer agreement between reviewers and Autoscan™ for detecting pathologic NAR is moderate, with reviewers either excluding more of the Autoscan™-defined events or excluding fewer events and therefore agreeing with Autoscan™. [source]


    Global burden of disease from alcohol, illicit drugs and tobacco

    DRUG AND ALCOHOL REVIEW, Issue 6 2006
    JÜRGEN REHM PhD
    Abstract The use of alcohol, tobacco and illicit drugs entails considerable burden of disease: in 2000, about 4% of the global burden as measured in disability adjusted life years was attributable to each of alcohol and tobacco, and 0.8% to illicit drugs. The burden of alcohol in the above statistic was calculated as net burden, i.e. incorporating the protective health effects. Tobacco use was found to be the most important of 25 risk factors for developed countries in the comparative risk assessment underlying the data. It had the highest mortality risk of all the substance use categories, especially for the elderly. Alcohol use was also important in developed countries, but constituted the most important of all risk factors in emerging economies. Alcohol use affected younger people than tobacco, both in terms of disability and mortality. The burden of disease attributable to the use of legal substances clearly outweighed that of illegal drugs. A large part of the substance-attributable burden would be avoidable if known effective interventions were implemented. [source]


    Opposite shell-coiling morphs of the tropical land snail Amphidromus martensi show no spatial-scale effects

    ECOGRAPHY, Issue 4 2006
    Paul G. Craze
    Much can be learned about evolution from the identification of those factors maintaining polymorphisms in natural populations. One polymorphism that is only partially understood occurs in land snail species where individuals may coil clockwise or anti-clockwise. Theory shows that polymorphism in coiling direction should not persist yet species in several unrelated groups of land snails occur in stably polymorphic populations. A solution to this paradox may advance our understanding of evolution in general. Here, we examine two possible explanations: firstly, negative frequency-dependent selection due to predation; secondly, random fixation of alternative coiling morphs in tree-sized demes, giving the impression of wider polymorphism. We test these hypotheses by investigating morph-clustering of empty shells at two spatial scales in Amphidromus martensi populations in northern Borneo: the spatial structure of snail populations is relatively easy to estimate and this information may support one or other of the hypotheses under test. For the smaller scale we make novel use of a statistic previously used in botanical studies (the K-function statistic), which allows clustering of more than one morph to be simultaneously investigated at a range of scales and which we have corrected for anisotropy. We believe this method could be of more general use to ecologists. The results show that consistent clustering or separation of morphs cannot be clearly detected at any spatial scale and that predation is not frequency-dependent. Alternative explanations that do not require strong spatial structuring of the population may be needed, for instance ones involving a mechanism of selection actively maintaining the polymorphism. [source]


    Modelling species distributions in Britain: a hierarchical integration of climate and land-cover data

    ECOGRAPHY, Issue 3 2004
    Richard G. Pearson
    A modelling framework for studying the combined effects of climate and land-cover changes on the distribution of species is presented. The model integrates land-cover data into a correlative bioclimatic model in a scale-dependent hierarchical manner, whereby Artificial Neural Networks are used to characterise species' climatic requirements at the European scale and land-cover requirements at the British scale. The model has been tested against an alternative non-hierarchical approach and has been applied to four plant species in Britain: Rhynchospora alba, Erica tetralix, Salix herbacea and Geranium sylvaticum. Predictive performance has been evaluated using Cohen's Kappa statistic and the area under the Receiver Operating Characteristic curve, and a novel approach to identifying thresholds of occurrence which utilises three levels of confidence has been applied. Results demonstrate reasonable to good predictive performance for each species, with the main patterns of distribution simulated at both 10 km and 1 km resolutions. The incorporation of land-cover data was found to significantly improve purely climate-driven predictions for R. alba and E. tetralix, enabling regions with suitable climate but unsuitable land-cover to be identified. The study thus provides an insight into the roles of climate and land-cover as determinants of species' distributions and it is demonstrated that the modelling approach presented can provide a useful framework for making predictions of distributions under scenarios of changing climate and land-cover type. The paper confirms the potential utility of multi-scale approaches for understanding environmental limitations to species' distributions, and demonstrates that the search for environmental correlates with species' distributions must be addressed at an appropriate spatial scale. Our study contributes to the mounting evidence that hierarchical schemes are characteristic of ecological systems. [source]


    Duelling timescales of host movement and disease recovery determine invasion of disease in structured populations

    ECOLOGY LETTERS, Issue 6 2005
    Paul C. Cross
    Abstract The epidemic potential of a disease is traditionally assessed using the basic reproductive number, R0. However, in populations with social or spatial structure a chronic disease is more likely to invade than an acute disease with the same R0, because it persists longer within each group and allows for more host movement between groups. Acute diseases 'perceive' a more structured host population, and it is more important to consider host population structure in analyses of these diseases. The probability of a pandemic does not arise independently from characteristics of either the host or disease, but rather from the interaction of host movement and disease recovery timescales. The R* statistic, a group-level equivalent of R0, is a better indicator of disease invasion in structured populations than the individual-level R0. [source]


    Testing Parameters in GMM Without Assuming that They Are Identified

    ECONOMETRICA, Issue 4 2005
    Frank Kleibergen
    We propose a generalized method of moments (GMM) Lagrange multiplier statistic, i.e., the K statistic, that uses a Jacobian estimator based on the continuous updating estimator that is asymptotically uncorrelated with the sample average of the moments. Its asymptotic χ2 distribution therefore holds under a wider set of circumstances, like weak instruments, than the standard full rank case for the expected Jacobian under which the asymptotic χ2 distributions of the traditional statistics are valid. The behavior of the K statistic can be spurious around inflection points and maxima of the objective function. This inadequacy is overcome by combining the K statistic with a statistic that tests the validity of the moment equations and by an extension of Moreira's (2003) conditional likelihood ratio statistic toward GMM. We conduct a power comparison to test for the risk aversion parameter in a stochastic discount factor model and construct its confidence set for observed consumption growth and asset return series. [source]


    Empirical Likelihood-Based Inference in Conditional Moment Restriction Models

    ECONOMETRICA, Issue 6 2004
    Yuichi Kitamura
    This paper proposes an asymptotically efficient method for estimating models with conditional moment restrictions. Our estimator generalizes the maximum empirical likelihood estimator (MELE) of Qin and Lawless (1994). Using a kernel smoothing method, we efficiently incorporate the information implied by the conditional moment restrictions into our empirical likelihood-based procedure. This yields a one-step estimator which avoids estimating optimal instruments. Our likelihood ratio-type statistic for parametric restrictions does not require the estimation of variance, and achieves asymptotic pivotalness implicitly. The estimation and testing procedures we propose are normalization invariant. Simulation results suggest that our new estimator works remarkably well in finite samples. [source]


    A Conditional Likelihood Ratio Test for Structural Models

    ECONOMETRICA, Issue 4 2003
    Marcelo J. Moreira
    This paper develops a general method for constructing exactly similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. These tests are shown to be similar under weak-instrument asymptotics when the reduced-form covariance matrix is estimated and the errors are non-normal. The conditional test based on the likelihood ratio statistic is particularly simple and has good power properties. Like the score test, it is optimal under the usual local-to-null asymptotics, but it has better power when identification is weak. [source]


    Sample Splitting and Threshold Estimation

    ECONOMETRICA, Issue 3 2000
    Bruce E. Hansen
    Threshold models have a wide variety of applications in economics. Direct applications include models of separating and multiple equilibria. Other applications include empirical sample splitting when the sample split is based on a continuously-distributed variable such as firm size. In addition, threshold models may be used as a parsimonious strategy for nonparametric function estimation. For example, the threshold autoregressive model (TAR) is popular in the nonlinear time series literature. Threshold models also emerge as special cases of more complex statistical frameworks, such as mixture models, switching models, Markov switching models, and smooth transition threshold models. It may be important to understand the statistical properties of threshold models as a preliminary step in the development of statistical tools to handle these more complicated structures. Despite the large number of potential applications, the statistical theory of threshold estimation is undeveloped. It is known that threshold estimates are super-consistent, but a distribution theory useful for testing and inference has yet to be provided. This paper develops a statistical theory for threshold estimation in the regression context. We allow for either cross-section or time series observations. Least squares estimation of the regression parameters is considered. An asymptotic distribution theory for the regression estimates (the threshold and the regression slopes) is developed. It is found that the distribution of the threshold estimate is nonstandard. A method to construct asymptotic confidence intervals is developed by inverting the likelihood ratio statistic. It is shown that this yields asymptotically conservative confidence regions. Monte Carlo simulations are presented to assess the accuracy of the asymptotic approximations. The empirical relevance of the theory is illustrated through an application to the multiple equilibria growth model of Durlauf and Johnson (1995). [source]
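
    A minimal sketch of least-squares threshold estimation as described above: for each candidate threshold of the splitting variable, fit OLS separately to the two subsamples and keep the value minimizing the pooled sum of squared residuals. The data are simulated with a true threshold at 6.0; the trimmed quantile grid is a standard practical choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
q = rng.uniform(0, 10, 400)            # threshold variable (e.g., firm size)
x = rng.normal(size=400)
y = np.where(q < 6.0, 1.0 + 0.5 * x, 3.0 - 1.0 * x) + rng.normal(0, 0.5, 400)

def ssr(xs, ys):
    """Sum of squared residuals from an intercept-plus-slope OLS fit."""
    X = np.column_stack([np.ones_like(xs), xs])
    resid = ys - X @ np.linalg.lstsq(X, ys, rcond=None)[0]
    return resid @ resid

candidates = np.quantile(q, np.linspace(0.15, 0.85, 71))  # trimmed grid
best = min(candidates,
           key=lambda g: ssr(x[q < g], y[q < g]) + ssr(x[q >= g], y[q >= g]))
print(round(best, 2))                  # should land near the true 6.0
```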


    Measuring Social Mobility as Unpredictability

    ECONOMICA, Issue 269 2001
    Simon C. Parker
    By associating mobility with the unpredictability of social states, new measures of social mobility may be constructed. We propose a family of three state-by-state and aggregate (scalar) predictability measures. The first set of measures is based on the transition matrix. The second uses a sampling approach and permits statistical testing of the hypothesis of perfect mobility, providing a new justification for the use of the χ2 statistic. The third satisfies the demanding criterion of 'period consistency'. An empirical example demonstrates the usefulness of the new measures to complement existing ones in the literature. [source]
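
    A minimal sketch of the perfect-mobility test mentioned above: under perfect mobility the destination state is independent of the origin state, so Pearson's χ2 test of independence applied to the origin-by-destination count matrix tests that hypothesis. The transition counts are illustrative.

```python
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([[120, 60, 20],    # origin class 1 -> destination classes
                   [50, 130, 40],    # origin class 2
                   [15, 55, 110]])   # origin class 3
chisq, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chisq:.1f}, dof = {dof}, p = {p:.3g}")  # small p: reject H0
```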


    Variability in agreement between physicians and nurses when measuring the Glasgow Coma Scale in the emergency department limits its clinical usefulness

    EMERGENCY MEDICINE AUSTRALASIA, Issue 4 2006
    Anna Holdgate
    Abstract Objective: To assess the interrater reliability of the Glasgow Coma Scale (GCS) between nurses and senior doctors in the ED. Methods: This was a prospective observational study with a convenience sample of patients aged 18 or above who presented with a decreased level of consciousness to a tertiary hospital ED. A senior ED doctor (emergency physicians and trainees) and registered nurse each independently scored the patient's GCS in blinded fashion within 15 min of each other. The data were then analysed to determine interrater reliability using the weighted kappa statistic, and the size and directions of differences between paired scores were examined. Results: A total of 108 eligible patients were enrolled, with GCS scores ranging from 3 to 14. Interrater agreement was excellent (weighted kappa > 0.75) for verbal scores and total GCS scores, and intermediate (weighted kappa 0.4–0.75) for motor and eye scores. Total GCS scores differed by more than two points in 10 of the 108 patients. Interrater agreement did not vary substantially across the range of actual numeric GCS scores. Conclusions: Although the level of agreement for GCS scores was generally high, a significant proportion of patients had GCS scores which differed by two or more points. This degree of disagreement indicates that clinical decisions should not be based solely on single GCS scores. [source]
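
    A minimal sketch of the weighted kappa used above, with quadratic weights so that larger GCS disagreements are penalized more heavily (whether the study used linear or quadratic weights is not stated, so the weighting is an assumption). The paired doctor/nurse scores are illustrative.

```python
from sklearn.metrics import cohen_kappa_score

doctor = [14, 9, 7, 3, 12, 10, 6, 13, 8, 4]
nurse = [14, 10, 7, 5, 12, 9, 6, 14, 8, 4]
# weights="quadratic" penalizes a two-point gap four times a one-point gap
print(round(cohen_kappa_score(doctor, nurse, weights="quadratic"), 2))
```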


    Molecular analysis of mutations at the HPRT and TK loci of human lymphoblastoid cells after combined treatments with 3′-azido-3′-deoxythymidine and 2′,3′-dideoxyinosine

    ENVIRONMENTAL AND MOLECULAR MUTAGENESIS, Issue 4 2002
    Quanxin Meng
    Abstract Combinations of antiretroviral drugs that include nucleoside reverse transcriptase inhibitors (NRTIs) are superior to single-agent regimens in treating or preventing HIV infection, but the potential long-term health hazards of these treatments in humans are uncertain. In earlier studies, our group found that coexposure of TK6 human lymphoblastoid cells to 3′-azido-2′,3′-dideoxythymidine (AZT) and 2′,3′-dideoxyinosine (ddI), the first two NRTIs approved by the FDA as antiretroviral drugs, produced multiplicative synergistic enhancement of DNA incorporation of AZT and mutagenic responses in both the HPRT and TK reporter genes, as compared with single-drug exposures (Meng Q et al. [2000a]: Proc Natl Acad Sci USA 97:12667–12671). The purpose of the current study was to characterize the mutational specificity of equimolar mixtures of 100 µM or 300 µM AZT + ddI at the HPRT and TK loci of exposed cells vs. unexposed control cells, and to compare the resulting mutational spectra data to those previously found in cells exposed to AZT alone (Sussman H et al. [1999]: Mutat Res 429:249–259; Meng Q et al. [2000b]: Toxicol Sci 54:322–329). Molecular analyses of HPRT mutant clones were performed by reverse transcription–mediated production of cDNA, PCR amplification, and cDNA sequencing to define small DNA alterations, followed by multiplex PCR amplification of genomic DNA to define the fractions of deletion events. TK mutants with complete gene deletions were distinguished by Southern blot analysis. The observed HPRT mutational categories included point mutations, microinsertions/microdeletions, splicing-error mutations, and macrodeletions including partial and complete gene deletions. The only significant difference or shift in the mutational spectra for NRTI-treated cells vs. control cells was the increase in the frequency of complete TK gene deletions following exposures (for 3 days) to 300 µM AZT–ddI (P = 0.034, chi-square test of homogeneity); however, statistical analyses comparing the observed mutant fraction values (measured mutant frequency × percent of a class of mutation) between control and NRTI-treated cells for each class of mutation showed that the occurrences of complete gene deletions of both HPRT and TK were significantly elevated over background values (0.34 × 10⁻⁶ in HPRT and 6.0 × 10⁻⁶ in TK) at exposure levels of 100 µM AZT–ddI (i.e., 1.94 × 10⁻⁶ in HPRT and 18.6 × 10⁻⁶ in TK) and 300 µM AZT–ddI (i.e., 5.6 × 10⁻⁶ in HPRT and 34.6 × 10⁻⁶ in TK) (P < 0.05, Mann–Whitney U-statistic). These treatment-related increases in complete gene deletions were consistent with the spectra data for AZT alone (ibid.) and with the known mode of action of AZT and ddI as DNA chain terminators. In addition, cotreatments of ddI with AZT led to substantial absolute increases in the mutant fraction of other classes of mutations, unlike cells exposed solely to AZT [e.g., the frequency of point mutations among HPRT mutants was significantly increased by 130 and 323% over the background value (4.25 × 10⁻⁶) in cells exposed to 100 and 300 µM AZT–ddI, respectively]. These results indicate that, at the same time that AZT–ddI potentiates therapeutic or prophylactic efficacy, the use of a second NRTI with AZT may confer a greater cancer risk, characterized by a spectrum of mutations that deviates from that produced solely by AZT. Environ. Mol. Mutagen. 39:282–295, 2002. Published 2002 Wiley-Liss, Inc. [source]


    Anthropogenic disturbance affects the structure of bacterial communities

    ENVIRONMENTAL MICROBIOLOGY, Issue 3 2010
    Duane Ager
    Summary Patterns of taxa abundance distributions are the result of the combined effects of historical and biological processes and as such are central to ecology. It is accepted that a taxa abundance distribution for a given community of animals or plants following a perturbation will typically change in structure from one of high evenness to increasing dominance. Subsequently, such changes in evenness have been used as indicators of biological integrity and environmental assessment. Here, using replicated experimental treehole microcosms perturbed with different concentrations of the pollutant pentachlorophenol, we investigated whether changes in bacterial community structure would reflect the effects of anthropogenic stress in a similar manner to larger organisms. Community structure was visualized using rank–abundance plots fitted with linear regression models. The slopes of the regression models were used as a descriptive statistic of changes in evenness over time. Our findings showed that bacterial community structure reflected the impact and the recovery from an anthropogenic disturbance. In addition, the intensity of impact and the rate of recovery to pre-perturbation structure were dose-dependent. These properties of bacterial community structures may potentially provide a metric for environmental assessment and regulation. [source]
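
    A minimal sketch of the evenness descriptor used above: rank taxa by abundance, regress log abundance on rank, and read the regression slope as the summary statistic (a flatter slope indicates a more even community). The OTU counts are illustrative.

```python
import numpy as np

def rank_abundance_slope(abundances):
    """Slope of log10(abundance) vs. rank; nearer zero = more even."""
    a = np.sort(np.asarray(abundances, float))[::-1]
    ranks = np.arange(1, len(a) + 1)
    slope, _ = np.polyfit(ranks, np.log10(a), 1)
    return slope

even = [90, 80, 75, 70, 65, 60, 55, 50]
dominated = [500, 60, 20, 10, 5, 3, 2, 1]
print(round(rank_abundance_slope(even), 3),
      round(rank_abundance_slope(dominated), 3))
```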