Valid Test

Selected Abstracts


Use of the Bruininks-Oseretsky Test of Motor Proficiency for identifying children with motor impairment

DEVELOPMENTAL MEDICINE & CHILD NEUROLOGY, Issue 11 2007
Fotini Venetsanou MSc
This study compared the consistency of the Short Form (SF) and the Long Form (LF) of the Bruininks-Oseretsky Test of Motor Proficiency (BOTMP) in identifying preschool children with motor impairment (MI). One hundred and forty-four Greek preschool children participated (74 males, 70 females; mean age 5y 2mo [SD 5mo], range 4y 6mo-5y 6mo). Although total SF and LF scores were highly correlated (r=0.85), paired t-tests indicated significant differences (t=-27.466, p=0.001). SF total scores (mean 58.72 [SD 7.28]) were higher than LF total scores (mean 47.38 [SD 9.43]). The SF had low sensitivity (13.6%) and negative predictive value (72.5%) for identifying MI. The BOTMP-SF does not appear to be a valid test for the identification of MI in 5-year-old children. [source]
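As a quick illustration of the screening metrics reported above, sensitivity and negative predictive value can be computed from a 2×2 agreement table between the short form and the reference long form. The counts below are hypothetical, chosen only to show the arithmetic, and are not the study's data:

```python
# Hypothetical 2x2 screening table comparing the short form (SF) to the
# long form (LF) taken as the reference standard. Counts are illustrative.
true_pos = 3    # SF flags motor impairment; LF agrees
false_neg = 19  # SF misses impairment that the LF detects
true_neg = 58   # both forms negative
false_pos = 4   # SF flags impairment that the LF does not

sensitivity = true_pos / (true_pos + false_neg)   # TP / (TP + FN)
npv = true_neg / (true_neg + false_neg)           # TN / (TN + FN)
print(f"sensitivity = {sensitivity:.1%}, NPV = {npv:.1%}")
```

With these counts the sensitivity is 13.6% (3/22), mirroring the kind of low value the study reports for the short form.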


Adapting the logical basis of tests for Hardy-Weinberg Equilibrium to the real needs of association studies in human and medical genetics

GENETIC EPIDEMIOLOGY, Issue 7 2009
Katrina A. B. Goddard
Abstract The standard procedure to assess genetic equilibrium is a χ2 test of goodness-of-fit. As is the case with any statistical procedure of that type, the null hypothesis is that the distribution underlying the data is in agreement with the model. Thus, a significant result indicates incompatibility of the observed data with the model, which is clearly at variance with the aim in the majority of applications: to exclude the existence of gross violations of the equilibrium condition. In current practice, we try to avoid this basic logical difficulty by increasing the significance bound on the P-value (e.g. from 5 to 10%) and inferring compatibility of the data with Hardy-Weinberg equilibrium (HWE) from an insignificant result. Unfortunately, such direct inversion of a statistical testing procedure fails to produce a valid test of the hypothesis of interest, namely, that the data are in sufficiently good agreement with the model under which the P-value is calculated. We present a logically unflawed solution to the problem of establishing (approximate) compatibility of an observed genotype distribution with HWE. The test is available in one- and two-sided versions. For both versions, we provide tools for exact power calculation. We demonstrate the merits of the new approach through comparison with the traditional χ2 goodness-of-fit test in 2×60 genotype distributions from 43 published genetic studies of complex diseases where departure from HWE was noted in either the case or control sample. In addition, we show that the new test is useful for the analysis of genome-wide association studies. Genet. Epidemiol. 33:569-580, 2009. © 2009 Wiley-Liss, Inc. [source]
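For context, the traditional χ2 goodness-of-fit test that the authors criticize can be sketched in a few lines: a significant result indicates departure from HWE, not compatibility with it. The genotype counts below are illustrative, not taken from the paper:

```python
import math

def hwe_chisq(n_AA, n_Aa, n_aa):
    """Pearson chi-square goodness-of-fit test for Hardy-Weinberg
    equilibrium at one biallelic marker (1 degree of freedom)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # estimated frequency of allele A
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    chi2 = sum((o - e) ** 2 / e
               for o, e in zip((n_AA, n_Aa, n_aa), expected))
    # With 1 df, the chi-square survival function reduces to erfc:
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Illustrative genotype counts (not from the paper):
chi2, p_value = hwe_chisq(n_AA=50, n_Aa=30, n_aa=20)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```

Here a small P-value flags departure from HWE; the paper's point is that an *insignificant* result from this test does not establish compatibility with HWE.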


Properties of the transmission-disequilibrium test in the presence of inbreeding

GENETIC EPIDEMIOLOGY, Issue 2 2002
Emmanuelle Génin
Abstract Family-based association tests such as the transmission-disequilibrium test (TDT), which compare alleles transmitted and non-transmitted from parents to affected offspring, are widely used to detect the role of genetic risk factors in diseases. These methods have the advantage of being robust to population stratification and are thus believed to be valid whatever the population context. In different studies of the statistical properties of the TDT, parents of affected offspring are typically assumed to be neither inbred nor related. In many human populations, however, this assumption is false and parental alleles are then no longer independent. It is thus of interest to determine whether the TDT is a valid test of linkage and association in the presence of inbreeding. We present a method to derive the expected value of the TDT statistic under different disease models and for any relationship between the parents of affected offspring. Using this method, we show that in the presence of inbreeding, the TDT is still a valid test for linkage but not for association. The power of the test to detect linkage may, however, be increased in the presence of inbreeding under different modes of inheritance. Genet. Epidemiol. 22:116-127, 2002. © 2002 Wiley-Liss, Inc. [source]
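The TDT statistic itself has a simple McNemar-type form: among heterozygous parents, count transmissions of each allele to affected offspring and compare the imbalance to a chi-square distribution with one degree of freedom. A minimal sketch with made-up transmission counts:

```python
import math

def tdt(b, c):
    """McNemar-type TDT statistic. b and c count heterozygous parents
    transmitting allele 1 vs. allele 2 to affected offspring; under the
    null of no linkage the statistic is chi-square with 1 df."""
    stat = (b - c) ** 2 / (b + c)
    p_value = math.erfc(math.sqrt(stat / 2))  # 1-df chi-square tail
    return stat, p_value

# Made-up transmission counts for illustration:
stat, p = tdt(b=60, c=35)
print(f"TDT = {stat:.2f}, p = {p:.3f}")
```

The paper's result concerns what this statistic's expectation becomes when parental alleles are correlated through inbreeding, which the simple null calibration above does not account for.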


Reliability and validity of the Norwegian version of the Severe Impairment Battery (SIB)

INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 9 2008
Sverre Bergh
Abstract Objective The Severe Impairment Battery (SIB) was developed to test cognitive function in patients with dementia of moderate to severe degree. We conducted a study to assess the inter-rater reliability and the validity of the Norwegian version of the SIB. Methods The reliability study comprised 30 patients, and the validity study 59 patients in nursing homes. We assessed Cronbach's alpha coefficient of the scale and the inter-rater reliability of the total SIB score and its nine sub-scores between two testers by means of Spearman's correlation coefficients. In the validity study we compared the SIB scores with the scores on the Clinical Dementia Rating (CDR) Scale. Results The mean SIB score was 72.10 (SD 25.37). Cronbach's alpha was 0.97, and the inter-rater reliability was 0.85 (Spearman's rho) for the total SIB score, ranging from 0.46 to 0.76 for the nine sub-scores. The mean SIB score for patients with a CDR score < 2 was 84.2 (13.4), whereas total scores for patients with CDR 2 and 3 were 74 (18.9) and 48.4 (33.3), respectively. A cut-off point of 80.5 points gave the highest accuracy in discriminating between patients with CDR 2 and CDR 3, while a cut-off point of 87.5 best discriminated between CDR < 2 and CDR 3. Conclusion The study indicates that the Norwegian version of the SIB is a reliable and valid test with which to evaluate cognition in patients with dementia of moderate to severe degree. Copyright © 2008 John Wiley & Sons, Ltd. [source]
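Cronbach's alpha, the internal-consistency measure reported above, can be computed directly from item-level scores. A minimal sketch with fabricated scores, using three perfectly parallel items so that alpha reaches its ceiling of 1.0 (up to floating-point rounding):

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of item-score columns,
    each a list with one score per respondent (sample variances used)."""
    k = len(items)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    total_scores = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(var(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(total_scores))

# Fabricated scores: three items, four respondents. The items are
# identical up to a constant shift, so alpha comes out at 1.0.
alpha = cronbach_alpha([[2, 4, 6, 8], [1, 3, 5, 7], [3, 5, 7, 9]])
print(f"alpha = {alpha:.2f}")
```

Real item responses contain unshared noise, which is why observed values such as the 0.97 reported for the SIB fall below this ceiling.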


Bone marrow biopsy in patients with hepatitis C virus infection: Spectrum of findings and diagnostic utility

AMERICAN JOURNAL OF HEMATOLOGY, Issue 2 2010
Jeffery M. Klco
Patients with hepatitis C virus (HCV) infection develop a number of hematologic disorders, with benign and malignant B-cell proliferations being the most common. HCV-infected patients are also prone to developing peripheral cytopenias, the etiologies of which are multifactorial and include hypersplenism and/or antiviral medications. Some of these patients may undergo bone marrow biopsy, but no study has systematically recorded the bone marrow findings in this patient group. Here, we report on the range of bone marrow findings in 47 adult HCV-infected patients. These patients, who lacked concurrent human immunodeficiency virus (HIV) infection, most commonly presented for a bone marrow biopsy due to abnormal peripheral cell counts. The bone marrow biopsies displayed a range of findings. Dyserythropoiesis, present in 19% of the cases, was the most common finding. Patients with pancytopenia (n = 6), as defined by current World Health Organization standards, were the most likely to have bone marrow abnormalities; two pancytopenic patients had acute myeloid leukemia, and one patient had a primary myelodysplastic syndrome. There was no correlation between bone marrow findings and antiviral medications, MELD score, cirrhosis, or splenomegaly, suggesting that the degree of bone marrow dysfunction is independent of the stage of HCV infection. The results of this study suggest that bone marrow biopsy in HCV-infected patients, even those with features of hypersplenism and/or documented antiviral therapy, can be a valid test for hematologic evaluation, especially for patients with severe pancytopenia and/or sudden alterations in peripheral cell counts. Am. J. Hematol. 85:106-110, 2010. © 2009 Wiley-Liss, Inc. [source]


Investigating the incidence of Type I errors for chronic whole effluent toxicity testing using Ceriodaphnia dubia

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 1 2000
Timothy F. Moore
Abstract The risk of Type I error (false positives) is thought to be controlled directly by the selection of a critical p value for conducting statistical analyses. The critical value for whole effluent toxicity (WET) tests is routinely set to 0.05, thereby establishing a 95% confidence level about the statistical inferences. In order to estimate the incidence of Type I errors in chronic WET testing, a method blank-type study was performed. A number of municipal wastewater dischargers contracted 16 laboratories to conduct chronic WET tests using the standard test organism Ceriodaphnia dubia. Unbeknownst to the laboratories, the samples they received from the wastewater dischargers consisted only of moderately hard water prepared according to the U.S. Environmental Protection Agency's standard dilution water formula. Because there was functionally no difference between the sample water and the laboratory control/dilution water, the test results were expected to be less than or equal to 1 TUc (chronic toxic unit). Of the 16 tests completed by the biomonitoring laboratories, two did not meet control performance criteria. Six of the remaining 14 valid tests (43%) indicated toxicity (TUc > 1) in the sample (i.e., no-observed-effect concentration or IC25 < 100%). This incidence of false positives was six times higher than expected when the critical value was set to 0.05. No plausible causes for this discrepancy were found. Various alternatives for reducing the rate of Type I errors are recommended, including greater reliance on survival endpoints and use of additional test acceptance criteria. [source]
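Under a true per-test Type I error rate of 0.05, the chance of six or more false positives in 14 independent blank tests is minuscule, which is what makes the observed 43% rate so striking. A quick binomial tail check (an illustrative calculation, not the study's own analysis):

```python
from math import comb

def binom_tail(k, n, alpha):
    """P(X >= k) for X ~ Binomial(n, alpha): the probability of k or
    more false positives in n independent tests when each test has a
    true Type I error rate of alpha."""
    return sum(comb(n, i) * alpha ** i * (1 - alpha) ** (n - i)
               for i in range(k, n + 1))

# 6 apparent toxicity detections out of 14 blank tests at alpha = 0.05:
p_tail = binom_tail(6, 14, 0.05)
print(f"P(>= 6 false positives) = {p_tail:.2e}")
```

The tail probability is on the order of 10^-5, so the excess false positives almost certainly reflect something other than nominal sampling error.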


Simple estimates of haplotype relative risks in case-control data

GENETIC EPIDEMIOLOGY, Issue 6 2006
Benjamin French
Abstract Methods of varying complexity have been proposed to efficiently estimate haplotype relative risks in case-control data. Our goal was to compare methods that estimate associations between disease conditions and common haplotypes in large case-control studies such that haplotype imputation is done once as a simple data-processing step. We performed a simulation study based on haplotype frequencies for two renin-angiotensin system genes. The iterative and noniterative methods we compared involved fitting a weighted logistic regression, but differed in how the probability weights were specified. We also quantified the amount of ambiguity in the simulated genes. For one gene, there was essentially no uncertainty in the imputed diplotypes and every method performed well. For the other, ~60% of individuals had an unambiguous diplotype, and ~90% had a highest posterior probability greater than 0.75. For this gene, all methods performed well under no genetic effects, moderate effects, and strong effects tagged by a single nucleotide polymorphism (SNP). Noniterative methods produced biased estimates under strong effects not tagged by an SNP. For the most likely diplotype, median bias of the log-relative risks ranged between -0.49 and 0.22 over all haplotypes. For all possible diplotypes, median bias ranged between -0.73 and 0.08. Results were similar under interaction with a binary covariate. Noniterative weighted logistic regression provides valid tests for genetic associations and reliable estimates of modest effects of common haplotypes, and can be implemented in standard software. The potential for phase ambiguity does not necessarily imply uncertainty in imputed diplotypes, especially in large studies of common haplotypes. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]


Conflicting selection pressures on seed size: evolutionary ecology of fruit size in a bird-dispersed tree, Olea europaea

JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 6 2003
J. M. Alcántara
Abstract Recent evidence indicates that fruit size has evolved according to dispersers' size. This is hypothesized to result from a balance between factors favouring large seeds and dispersers setting the maximum fruit size. This hypothesis assumes that (1) the size of fruits that can be consumed by dispersers is limited, (2) fruit and seed size are positively correlated, and (3) the result of multiple selection pressures on seed size is positive. Our studies on the seed dispersal mutualism of Olea europaea have supported the first and second assumptions, but valid tests of the third assumption are still lacking. Here we confirm the third assumption. Using multiplicative fitness components, we show that conflicting selection pressures on seed size during and after dispersal reverse the negative pattern of selection exerted by dispersers. [source]
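The multiplicative-fitness logic can be sketched numerically: a phenotype's overall relative fitness is the product of its performance across sequential selection episodes, so an episode in which gape-limited dispersers favour small seeds can be reversed by later episodes. The component values below are invented purely for illustration:

```python
# Invented component fitnesses for two seed-size classes across two
# sequential selection episodes (values are illustrative only).
episodes = {
    "dispersal": {"small": 0.9, "large": 0.6},       # gape-limited birds
    "post-dispersal": {"small": 0.5, "large": 0.9},  # e.g. seedling establishment
}

def total_fitness(size):
    """Multiplicative fitness: product of performance across episodes."""
    w = 1.0
    for component in episodes.values():
        w *= component[size]
    return w

small, large = total_fitness("small"), total_fitness("large")
# Dispersal alone favours small seeds (0.9 > 0.6), yet the later
# episode reverses the net pattern of selection on seed size:
print(small, large)
```

With these numbers large seeds win overall (0.54 vs. 0.45) despite losing during dispersal, which mirrors the reversal the authors document.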


Tempo and mode in evolution: phylogenetic inertia, adaptation and comparative methods

JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 6 2002
S. P. Blomberg
Abstract Before the Evolutionary Synthesis, 'phylogenetic inertia' was associated with theories of orthogenesis, which claimed that organisms possessed an endogenous perfecting principle. The concept in the modern literature dates to Simpson (1944), who used 'evolutionary inertia' as a description of pattern in the fossil record. Wilson (1975) used 'phylogenetic inertia' to describe population-level or organismal properties that can affect the course of evolution in response to selection. Many current authors now view phylogenetic inertia as an alternative hypothesis to adaptation by natural selection when attempting to explain interspecific variation, covariation or lack thereof in phenotypic traits. Some phylogenetic comparative methods have been claimed to allow quantification and testing of phylogenetic inertia. Although some existing methods do allow valid tests of whether related species tend to resemble each other, which we term 'phylogenetic signal', this is simply pattern recognition and does not imply any underlying process. Moreover, comparative data sets generally do not include information that would allow rigorous inferences concerning causal processes underlying such patterns. The concept of phylogenetic inertia needs to be defined and studied with as much care as 'adaptation'. [source]


The validity of analyses testing the etiology of comorbidity between two disorders: a review of family studies

THE JOURNAL OF CHILD PSYCHOLOGY AND PSYCHIATRY AND ALLIED DISCIPLINES, Issue 4 2003
Soo Hyun Rhee
Background: Knowledge regarding the causes of comorbidity between two disorders has a significant impact on research regarding the classification, treatment, and etiology of the disorders. Two main analytic methods have been used to test alternative explanations for the causes of comorbidity in family studies: biometric model fitting and family prevalence analyses. Unfortunately, the conclusions of family studies using these two methods have been conflicting. In the present study, we examined the validity of family prevalence analyses in testing alternative comorbidity models. Method: We reviewed 42 family studies that used family prevalence analyses to test three comorbidity models: the alternate forms model, the correlated liabilities model, or the three independent disorders model. We conducted the analyses used in these studies on datasets simulated under the assumptions of 13 alternative comorbidity models, including the three models tested most often in the literature. Results: Results suggest that some analyses may be valid tests of the alternate forms model (i.e., two disorders are alternate manifestations of a single liability), but that none of the analyses are valid tests of the correlated liabilities model (i.e., a significant correlation between the risk factors for the two disorders) or the three independent disorders model (i.e., the comorbid disorder is a third, independent disorder). Conclusion: Family studies using family prevalence analyses may have made incorrect conclusions regarding the etiology of comorbidity between disorders. [source]