Incorrect Conclusions
Selected Abstracts

Measurement Equivalence Using Generalizability Theory: An Examination of Manufacturing Flexibility Dimensions
DECISION SCIENCES, Issue 4 2008
Manoj K. Malhotra

ABSTRACT As the field of decision sciences in general, and operations management in particular, has matured from theory building to theory testing over the past two decades, it has witnessed an explosion in empirical research. Much of this work is anchored in survey-based methodologies in which data are collected from the field in the form of scale items that are then analyzed to measure latent, unobservable constructs. It is important to assess the invariance of scales across groups in order to reach valid, scientifically sound conclusions. Because studies in the decision sciences have often been conducted with small sample sizes, the risk of reaching incorrect conclusions is further exacerbated. Generalizability theory (G-theory) can test for measurement equivalence in the presence of small sample sizes more effectively than the confirmatory factor analysis (CFA) tests that have conventionally been used to assess measurement equivalence across groups. Consequently, we introduce and explain G-theory in this article, use it to examine the measurement equivalence of 24 manufacturing flexibility dimension scales published in the prior literature, and compare and contrast G-theory with CFA. We show that all the manufacturing flexibility scales tested in this study were invariant across the three industry SIC groups from which data were collected. We strongly recommend that G-theory always be used for determining measurement equivalence in empirical survey-based studies. In addition, because G-theory alone does not always reveal the complete picture, CFA techniques for establishing measurement equivalence should also be invoked when sample sizes are large enough to do so.
Implications of G-theory for practice and its future use in operations management and decision sciences research are also presented. [source]

EVOLUTION OF MIGRATION UNDER KIN SELECTION AND LOCAL ADAPTATION
EVOLUTION, Issue 1 2005
Sylvain Billiard

Abstract We present a stochastic two-locus, two-habitat model for the evolution of migration with local adaptation and kin selection. One locus determines the migration rate while the other causes local adaptation. We show that the opposing forces of kin competition and local adaptation can lead to one or two convergence-stable migration rates, depending notably on the recombination rate between the two loci. Linkage between the migration and local adaptation loci has two antagonistic effects. When linkage is tight, the cost of local adaptation increases, leading to smaller equilibrium migration rates. When linkage is tighter still, however, population structure at the migration locus becomes very high because of indirect selection, and equilibrium migration rates therefore increase. This result, qualitatively different from those obtained with other models of migration evolution, indicates that ignoring drift or the details of the genetic architecture may lead to incorrect conclusions. [source]

MEASURING PROBABILISTIC REACTION NORMS FOR AGE AND SIZE AT MATURATION
EVOLUTION, Issue 4 2002
Mikko Heino

Abstract We present a new probabilistic concept of reaction norms for age and size at maturation that is applicable when observations are carried out at discrete time intervals. This approach can also be used to estimate reaction norms for age and size at metamorphosis or at other ontogenetic transitions. Such estimates are critical for understanding phenotypic plasticity and life-history changes in variable environments, for assessing genetic changes in the presence of phenotypic plasticity, and for calibrating size- and age-structured population models.
We show that previous approaches to this problem, based on regressing size against age at maturation, give results that are systematically biased when compared to the probabilistic reaction norms. The bias can be substantial and is likely to lead to qualitatively incorrect conclusions; it is caused by the failure to account for the probabilistic nature of the maturation process. We explain why robust estimation of maturation reaction norms should instead be based on logistic regression or on other statistical models that treat the probability of maturing as the dependent variable. We demonstrate the utility of our approach with two examples. First, the analysis of data generated for a known reaction norm highlights some crucial limitations of previous approaches. Second, application to Northeast Arctic cod (Gadus morhua) illustrates how our approach can shed new light on existing real-world data. [source]

The Influence of Gender, Ethnicity, and Individual Differences on Perceptions of Career Progression in Public Accounting
INTERNATIONAL JOURNAL OF AUDITING, Issue 1 2001
D. Jordan Lowe

Prior research examining gender and diversity issues has generally lacked supporting theory and experimental investigation. This study provides theory-based experimental evidence on the effects of gender, ethnicity, and other individual differences on the performance evaluations of audit seniors. We draw on organizational socialization theory to examine the accounting profession's view of diversity issues, and the process model of performance evaluation guided our selection of ratee, rater, and contextual characteristics as factors to analyze. An experiment was conducted with 95 audit seniors from one of the Big 5 public accounting firms. Results indicate that gender and ethnic heritage are important factors in the career prospects of audit seniors.
The demeanor of an auditor was also important as an interactive factor, influencing judgments differently depending on the gender or ethnic origin of the auditor being evaluated. These results suggest that diversity is a very complex issue: examining single factors without considering their interactions may lead to incorrect conclusions. [source]

Common Fluorescent Sunlamps are an Inappropriate Substitute for Sunlight
PHOTOCHEMISTRY & PHOTOBIOLOGY, Issue 3 2000
Douglas B. Brown

ABSTRACT Fluorescent sunlamps are commonly employed as convenient sources in photobiology experiments. The ability of Kodacel to filter out photobiologically irrelevant UVC wavelengths has been described, yet a major issue remains unaddressed: the over-representation of UVB in the output. The shortest terrestrial solar wavelengths reaching the surface are ~295 nm, with the 295–320 nm range comprising ~4% of the solar UV irradiance. In Kodacel-filtered sunlamps, 47% of the UV output falls in this range. Consequently, in studies designed to understand skin photobiology after solar exposure, the use of these unfiltered sunlamps may result in misleading, or even incorrect, conclusions. To demonstrate the importance of using an accurate representation of the UV portion of sunlight, the ability of different ultraviolet radiation (UVR) sources to induce the expression of a reporter gene was assayed. Unfiltered fluorescent sunlamps (FS lamps) induced optimal chloramphenicol acetyltransferase (CAT) activity at apparently low doses (10–20 mJ/cm2). Filtering the FS lamps with Kodacel raised the delivered dose for optimal CAT activity to 50–60 mJ/cm2. With the more solar-like UVA-340 lamps, somewhat lower levels of CAT activity were induced even though the apparent delivered doses were significantly greater than for either the FS or Kodacel-filtered sunlamps (KFS lamps).
When DNA from parallel-treated cells was analyzed for photoproduct formation by radioimmunoassay, the induction of CAT activity was shown to correlate with the level of induced photoproduct formation regardless of the source employed. [source]

LOPA misapplied: Common errors can lead to incorrect conclusions
PROCESS SAFETY PROGRESS, Issue 4 2009
Karen A. Study

Abstract Layer of Protection Analysis (LOPA) is a powerful tool for quantitative risk assessment. Applied correctly, it can provide quick and efficient guidance on what additional safeguards, if any, are needed to protect against a given scenario. Misapplied, an overly conservative calculation of risk may result in over-instrumentation, additional life-cycle costs, and spurious trips, while a nonconservative calculation of risk could result in an under-protected system and an unacceptable risk of an undesired consequence occurring. This article describes several categories of common errors, some overly conservative and some nonconservative, and illustrates them with case studies of actual plant scenarios. © 2009 American Institute of Chemical Engineers Process Saf Prog 2009 [source]

Policy Substance and Performance in American Lawmaking, 1877–1994
AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 2 2008
John S. Lapinski

This article reconsiders the importance of including policy issue content and legislative significance in the study of lawmaking. Specifically, it demonstrates theoretically why lawmaking might vary by policy substance, and it shows empirically how incorrect conclusions can be drawn when lawmaking is studied by pooling enactments rather than disaggregating laws by policy issue content. It accomplishes this by bringing new tools, including a policy classification system and a way to measure the significance of public laws, to overcome an array of measurement-related problems that have stymied our ability to understand lawmaking.
The policy coding schema introduced is applied, through careful individual human coding, to every public law enacted between 1877 and 1994 (n = 37,767). The policy issue and significance data are used to construct a number of new measures of legislative performance and are useful for testing hypotheses in studies of Congress and American Political Development. [source]

The validity of analyses testing the etiology of comorbidity between two disorders: a review of family studies
THE JOURNAL OF CHILD PSYCHOLOGY AND PSYCHIATRY AND ALLIED DISCIPLINES, Issue 4 2003
Soo Hyun Rhee

Background: Knowledge regarding the causes of comorbidity between two disorders has a significant impact on research into the classification, treatment, and etiology of those disorders. Two main analytic methods have been used to test alternative explanations for the causes of comorbidity in family studies: biometric model fitting and family prevalence analyses. Unfortunately, the conclusions of family studies using these two methods have conflicted. In the present study, we examined the validity of family prevalence analyses for testing alternative comorbidity models. Method: We reviewed 42 family studies that used family prevalence analyses to test three comorbidity models: the alternate forms model, the correlated liabilities model, or the three independent disorders model. We conducted the analyses used in these studies on datasets simulated under the assumptions of 13 alternative comorbidity models, including the three models tested most often in the literature. Results: The results suggest that some analyses may be valid tests of the alternate forms model (i.e., that two disorders are alternate manifestations of a single liability), but that none of the analyses are valid tests of the correlated liabilities model (i.e., a significant correlation between the risk factors for the two disorders) or the three independent disorders model (i.e., that the comorbid disorder is a third, independent disorder).
Conclusion: Family studies using family prevalence analyses may have reached incorrect conclusions regarding the etiology of comorbidity between disorders. [source]

Correction for QT/RR Hysteresis in the Assessment of Drug-Induced QTc Changes: Cardiac Safety of Gadobutrol
ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 3 2009
Marek Malik, M.D., Ph.D.

Background: The so-called thorough QT/QTc (TQT) studies required for every new pharmaceutical compound are negative if the upper one-sided 95% confidence interval (CI) of placebo- and baseline-corrected QTc prolongation is <10 ms. This tight requirement has many methodological implications. If the investigated drug acts quickly and ECGs cannot be obtained at stable heart rates, QT/RR hysteresis correction is needed. Methods: Such a correction was used in a TQT study of gadobutrol: a randomized, double-blind, five-period crossover study of three doses of gadobutrol (0.1, 0.3, and 0.5 mmol/kg) that was placebo-controlled and positive-controlled (moxifloxacin 400 mg). The study enrolled 50 healthy subjects with data from all periods. QT/RR hysteresis was assessed from prestudy exercise-test ECGs. Among other comparisons, population heart rate correction without hysteresis consideration was compared with combined population heart rate and hysteresis correction. Results: The highest heart rate increase (placebo- and baseline-controlled) of 13.1 beats per minute (90% CI 9.9–16.4) occurred 1 minute after administration of the highest dose of gadobutrol. Without hysteresis consideration, the highest ΔΔQTc was 9.91 ms (90% CI 8.01–11.81), while with hysteresis correction these values were 7.62 ms (90% CI 6.37–8.87), thus turning a marginally positive TQT study into a negative finding. Conclusion: Omitting hysteresis correction from episodes of fast heart rate changes may therefore lead to incorrect conclusions.
Despite substantial rate acceleration, accurate hysteresis correction confirms that gadobutrol does not have any effects on cardiac repolarization that would be of regulatory relevance. [source]

Using Regression Models to Analyze Randomized Trials: Asymptotically Valid Hypothesis Tests Despite Incorrectly Specified Models
BIOMETRICS, Issue 3 2009
Michael Rosenblum

Summary Regression models are often used to test for cause-effect relationships from data collected in randomized trials or experiments. This practice has deservedly come under heavy scrutiny, because commonly used models such as linear and logistic regression will often fail to capture the actual relationships between variables, and incorrectly specified models can lead to incorrect conclusions. In this article, we focus on hypothesis tests of whether the treatment given in a randomized trial has any effect on the mean of the primary outcome within strata of baseline variables such as age, sex, and health status. Our primary concern is ensuring that such hypothesis tests have correct type I error for large samples. Our main result is that for a surprisingly large class of commonly used regression models, standard regression-based hypothesis tests (but with robust variance estimators) are guaranteed to have correct type I error for large samples, even when the models are incorrectly specified. To the best of our knowledge, this robustness of model-based hypothesis tests to incorrectly specified models was previously unknown for Poisson regression models and for other commonly used models we consider. Our results have practical implications for understanding the reliability of commonly used, model-based tests for analyzing randomized trials. [source]

Techniques and predictive models to improve prostate cancer detection
CANCER, Issue S13 2009
Michael P. Herman, MD

Abstract The use of prostate-specific antigen (PSA) as a screening test remains controversial.
There have been several attempts to refine PSA measurements to improve their predictive value. These modifications, including PSA density, PSA kinetics, and the measurement of PSA isoforms, have met with limited success. Therefore, complex statistical and computational models have been created to assess an individual's risk of prostate cancer more accurately. In this review, the authors examine the methods used to modify PSA as well as various predictive models used in prostate cancer detection. They describe the mathematical underpinnings of these techniques along with their intrinsic strengths and weaknesses, and they assess the accuracy of these methods, which have been shown to be better than physicians' judgment at predicting a man's risk of cancer. Without an understanding of their design and limitations, these methods can be applied inappropriately, leading to incorrect conclusions. These models are important components in counseling patients on their risk of prostate cancer, and they also help in the design of clinical trials by stratifying patients into different risk categories. Thus, it is incumbent on both clinicians and researchers to become familiar with these tools. Cancer 2009;115(13 suppl):3085–99. © 2009 American Cancer Society. [source]
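As a concrete illustration of the kinds of PSA refinements and risk models surveyed in the final abstract above, the sketch below computes PSA density (serum PSA divided by prostate volume) and PSA velocity (the slope of PSA over time), and evaluates a toy logistic risk model. The function names and every coefficient in the logistic model are hypothetical illustrations, not a validated clinical nomogram and not any specific model from the reviewed literature.

```python
# Minimal sketch of two common PSA refinements plus a toy logistic
# risk model. All coefficients are illustrative only, NOT clinical.
import math


def psa_density(psa_ng_ml, prostate_volume_cc):
    """PSA density: serum PSA (ng/mL) divided by prostate volume (cc)."""
    return psa_ng_ml / prostate_volume_cc


def psa_velocity(readings):
    """PSA velocity: least-squares slope of PSA over time (ng/mL per year).

    `readings` is a list of (years, psa) pairs.
    """
    n = len(readings)
    mean_t = sum(t for t, _ in readings) / n
    mean_p = sum(p for _, p in readings) / n
    num = sum((t - mean_t) * (p - mean_p) for t, p in readings)
    den = sum((t - mean_t) ** 2 for t, _ in readings)
    return num / den


def toy_risk(psa, age, family_history):
    """Toy logistic model of cancer risk; coefficients are made up."""
    z = -4.0 + 0.15 * psa + 0.03 * age + 0.8 * family_history
    return 1.0 / (1.0 + math.exp(-z))


print(round(psa_density(4.2, 40.0), 3))                        # 0.105
print(round(psa_velocity([(0, 2.0), (1, 2.6), (2, 3.4)]), 2))  # 0.7
print(0.0 < toy_risk(6.5, 65, 1) < 1.0)                        # True
```

Real nomograms are fit to large patient cohorts and typically include additional predictors (digital rectal exam findings, free/total PSA ratio, prior biopsy history); the logistic form shown here is only the common mathematical skeleton such models share.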