Causal Inference (causal + inference)
Selected Abstracts

Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analysis
AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 2 2010
Kosuke Imai

Political scientists have long been concerned about the validity of survey measurements. Although many have studied classical measurement error in linear regression models where the error is assumed to arise completely at random, in a number of situations the error may be correlated with the outcome. We analyze the impact of differential measurement error on causal estimation. The proposed nonparametric identification analysis avoids arbitrary modeling decisions and formally characterizes the roles of different assumptions. We show the serious consequences of differential misclassification and offer a new sensitivity analysis that allows researchers to evaluate the robustness of their conclusions. Our methods are motivated by a field experiment on democratic deliberations, in which one set of estimates potentially suffers from differential misclassification. We show that an analysis ignoring differential measurement error may considerably overestimate the causal effects. This finding contrasts with the case of classical measurement error, which always yields attenuation bias.

Principal Stratification in Causal Inference
BIOMETRICS, Issue 1 2002
Constantine E. Frangakis

Summary. Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared.
Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.

God Does Not Play Dice: Causal Determinism and Preschoolers' Causal Inferences
CHILD DEVELOPMENT, Issue 2 2006
Laura E. Schulz

Three studies investigated children's belief in causal determinism. If children are determinists, they should infer unobserved causes whenever observed causes appear to act stochastically. In Experiment 1, 4-year-olds saw a stochastic generative cause and inferred the existence of an unobserved inhibitory cause. Children traded off inferences about the presence of unobserved inhibitory causes and the absence of unobserved generative causes. In Experiment 2, 4-year-olds used the pattern of indeterminacy to decide whether unobserved variables were generative or inhibitory. Experiment 3 suggested that children (4 years old) resist believing that direct causes can act stochastically, although they accept that events can be stochastically associated.
Children's deterministic assumptions seem to support inferences not obtainable from other cues.

Causal inference with generalized structural mean models
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2003
S. Vansteelandt

Summary. We estimate cause-effect relationships in empirical research where exposures are not completely controlled, as in observational studies or with patient non-compliance and self-selected treatment switches in randomized clinical trials. Additive and multiplicative structural mean models have proved useful for this but suffer from the classical limitations of linear and log-linear models when accommodating binary data. We propose the generalized structural mean model to overcome these limitations. This is a semiparametric two-stage model which extends the structural mean model to handle non-linear average exposure effects. The first-stage structural model describes the causal effect of received exposure by contrasting the means of observed and potential exposure-free outcomes in exposed subsets of the population. For identification of the structural parameters, a second-stage 'nuisance' model is introduced. This takes the form of a classical association model for expected outcomes given observed exposure. Under the model, we derive estimating equations which yield consistent, asymptotically normal and efficient estimators of the structural effects. We examine their robustness to model misspecification and construct robust estimators in the absence of any exposure effect. The double-logistic structural mean model is developed in more detail to estimate the effect of observed exposure on the success of treatment in a randomized controlled blood pressure reduction trial with self-selected non-compliance.

Causal inference: response to Gagliardi
ACTA PAEDIATRICA, Issue 10 2010
Olaf Dammann

No abstract is available for this article.
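Structural mean models of the kind described above identify the effect of received exposure by choosing the causal parameter that makes the reconstructed exposure-free outcome mean-independent of the randomized arm. A minimal numerical sketch of the additive case follows; all variable names and numbers are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0  # true additive causal effect of the received exposure

Z = rng.integers(0, 2, n)            # randomized arm (the identifying lever)
U = rng.normal(size=n)               # unmeasured confounder
# Received exposure: encouraged by Z but self-selected through U
X = (0.4 * Z + 0.5 * U + rng.normal(size=n) > 0).astype(float)
Y = beta * X + U + rng.normal(size=n)

# Naive association of Y with X is biased because U drives both X and Y
naive = np.cov(X, Y)[0, 1] / np.var(X)

# Additive structural mean model: pick beta so that the exposure-free
# residual Y - beta*X is uncorrelated with the randomized arm Z.
# The estimating equation E[(Y - beta*X)(Z - E[Z])] = 0 solves to a
# Wald-type ratio:
smm = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

print(f"naive: {naive:.2f}  SMM: {smm:.2f}")  # SMM close to 2.0, naive inflated
```

With a large sample, the estimating-equation solution recovers the true additive effect while the naive regression is inflated by the confounder; this is the simplest member of the model class the abstract generalizes.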
A methodology for inferring the causes of observed impairments in aquatic ecosystems
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 6 2002
Glenn W. Suter II

Abstract. Biological surveys have become a common technique for determining whether aquatic communities have been injured. However, their results are not useful for identifying management options until the causes of apparent injuries have been identified. Techniques for determining causation have been largely informal and ad hoc. This paper presents a logical system for causal inference. It begins by analyzing the available information to generate causal evidence; available information may include spatial or temporal associations of potential cause and effect, field or laboratory experimental results, and diagnostic evidence from the affected organisms. It then uses a series of three alternative methods to infer the cause: elimination of causes, diagnostic protocols, and analysis of the strength of evidence. If the cause cannot be identified with sufficient confidence, the reality of the effects is examined, and if the effects are determined to be real, more information is obtained to reiterate the process.

Increasing accuracy of causal inference in experimental analyses of biodiversity
FUNCTIONAL ECOLOGY, Issue 6 2004
L. Benedetti-Cecchi

Summary. 1. Manipulative experiments are often used to identify causal linkages between biodiversity and productivity in terrestrial and aquatic habitats. 2. Most studies have identified an effect of biodiversity, but their interpretation has stimulated considerable debate. The main difficulties lie in separating the effect of species richness from those due to changes in identity and relative density of species. 3. Various experimental designs have been adopted to circumvent problems in the analysis of biodiversity.
Here I show that these designs may not be able to maintain the probability of type I errors at the nominal level (α = 0·05) under a true null hypothesis of no effect of species richness, in the presence of effects of density and identity of species. 4. Alternative designs have been proposed to discriminate unambiguously the effects of identity and density of species from those due to number of species. Simulations show that the proposed experiments may have increased capacity to control for type I errors when effects of density and identity of species are also present. These designs have enough flexibility to be useful in the experimental analysis of biodiversity in various assemblages and under a wide range of environmental conditions.

Reviews: A review of hereditary and acquired coagulation disorders in the aetiology of ischaemic stroke
INTERNATIONAL JOURNAL OF STROKE, Issue 5 2010
Lonneke M. L. De Lau

The diagnostic workup in patients with ischaemic stroke often includes testing for prothrombotic conditions. However, the clinical relevance of coagulation abnormalities in ischaemic stroke is uncertain. Therefore, we reviewed what is presently known about the association between inherited and acquired coagulation disorders and ischaemic stroke, with a special emphasis on the methodological aspects. Good-quality data in this field are scarce, and most studies fall short on epidemiological criteria for causal inference. While inherited coagulation disorders are recognised risk factors for venous thrombosis, there is no substantial evidence for an association with arterial ischaemic stroke. Possible exceptions are the prothrombin G20210A mutation in adults and protein C deficiency in children. There is proof of an association between the antiphospholipid syndrome and ischaemic stroke, but the clinical significance of isolated mildly elevated antiphospholipid antibody titres is unclear.
Evidence also suggests significant associations of increased homocysteine and fibrinogen concentrations with ischaemic stroke, but whether these associations are causal is still debated. Data on other acquired coagulation abnormalities are insufficient to allow conclusions regarding causality. For most coagulation disorders, a causal relation with ischaemic stroke has not been definitely established. Hence, at present, there is no valid indication for testing all patients with ischaemic stroke for these conditions. Large prospective population-based studies allowing the evaluation of interactive and subgroup effects are required to appreciate the role of coagulation disorders in the pathophysiology of arterial ischaemic stroke and to guide the management of individual patients.

Families With Children and Adolescents: A Review, Critique, and Future Agenda
JOURNAL OF MARRIAGE AND FAMILY, Issue 3 2010
Robert Crosnoe

This decade's literature on families with children and adolescents can be broadly organized around the implications for youth of family statuses (e.g., family structure) and family processes (e.g., parenting). These overlapping bodies of research built on past work by emphasizing the dynamic nature of family life and the intersection of families with other ecological settings, exploring race/ethnic diversity, identifying mechanisms connecting family and child/adolescent factors, and taking steps to address the threats to causal inference that have long been a problem for family studies. Continuing these trends in the future will be valuable, as will increasing the number of international comparisons, exploring "new" kinds of family diversity, and capturing the convergence of multiple statuses and processes over time.

Workers are people too: Societal aspects of occupational health disparities - an ecosocial perspective
AMERICAN JOURNAL OF INDUSTRIAL MEDICINE, Issue 2 2010
Nancy Krieger, PhD

Abstract. Workers are people too. What else is new?
This seemingly self-evident proposition, however, takes on new meaning when considering the challenging and deeply important issue of occupational health disparities - the topic that is the focus of 12 articles in this special issue of the American Journal of Industrial Medicine. In this commentary, I highlight some of the myriad ways that societal determinants of health intertwine with each and every aspect of occupation-related health inequities, as analyzed from an ecosocial perspective. The engagement extends from basic surveillance to etiologic research, from conceptualization and measurement of variables to analysis and interpretation of data, from causal inference to preventive action, and from the political economy of work to the political economy of health. A basic point is that who is employed (or not) in what kinds of jobs, with what kinds of exposures, what kinds of treatment, and what kinds of job stability, benefits, and pay - as well as what evidence exists about these conditions and what action is taken to address them - depends on societal context. At issue are diverse aspects of people's social location within their societies, in relation to their jointly experienced, and embodied, realities of socioeconomic position, race/ethnicity, nationality, nativity, immigration and citizen status, age, gender, and sexuality, among others. Reviewing the papers' findings, I discuss the scientific and real-world action challenges they pose. Recommendations include better conceptualization and measurement of socioeconomic position and race/ethnicity and also use of the health and human rights framework to further the public health mission of ensuring the conditions that enable people, including workers, to live healthy and dignified lives. Am. J. Ind. Med. 53:104-115, 2010. © 2009 Wiley-Liss, Inc.

Untangling the Causal Effects of Sex on Judging
AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 2 2010
Christina L. Boyd

We explore the role of sex in judging by addressing two questions of long-standing interest to political scientists: whether and in what ways male and female judges decide cases distinctly ("individual effects"), and whether and in what ways serving with a female judge causes males to behave differently ("panel effects"). While we attend to the dominant theoretical accounts of why we might expect to observe either or both effects, we do not use the predominant statistical tools to assess them. Instead, we deploy a more appropriate methodology: semiparametric matching, which follows from a formal framework for causal inference. Applying matching methods to 13 areas of law, we observe consistent gender effects in only one: sex discrimination. For these disputes, the probability of a judge deciding in favor of the party alleging discrimination decreases by about 10 percentage points when the judge is a male. Likewise, when a woman serves on a panel with men, the men are significantly more likely to rule in favor of the rights litigant. These results are consistent with an informational account of gendered judging and are inconsistent with several others.

CAUSAL REFUTATIONS OF IDEALISM
THE PHILOSOPHICAL QUARTERLY, Issue 240 2010
Andrew Chignell

In the 'Refutation of Idealism' chapter of the first Critique, Kant argues that the conditions required for having certain kinds of mental episodes are sufficient to guarantee that there are 'objects in space' outside us. A perennially influential way of reading this compressed argument is as a kind of causal inference: in order for us to make justified judgements about the order of our inner states, those states must be caused by the successive states of objects in space outside us. Here I consider the best recent versions of this reading, and argue that each suffers from apparently fatal flaws.
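The semiparametric matching strategy described in the sex-and-judging abstract above replaces a global regression with unit-by-unit comparisons of similar observations. A toy sketch of one-nearest-neighbour matching on a single covariate follows; the data-generating process and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(size=n)                          # pretreatment covariate
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))   # "treatment" more likely at high x
y = 1.0 * t + 2.0 * x + rng.normal(size=n)      # true treatment effect is 1.0

# Naive difference in means is confounded by x
naive = y[t == 1].mean() - y[t == 0].mean()

# One-nearest-neighbour matching on x, with replacement: pair every treated
# unit with the most similar control, then average the within-pair contrasts
treated = np.where(t == 1)[0]
controls = np.where(t == 0)[0]
nearest = controls[np.abs(x[controls][None, :] - x[treated][:, None]).argmin(axis=1)]
att = (y[treated] - y[nearest]).mean()

print(f"naive: {naive:.2f}  matched ATT: {att:.2f}")  # ATT near 1.0, naive far above
```

The matched estimate targets the effect on the treated while making no assumption about the functional form of the outcome-covariate relationship, which is the appeal of matching over the "predominant statistical tools" the abstract mentions.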
Multiple Imputation Methods for Treatment Noncompliance and Nonresponse in Randomized Clinical Trials
BIOMETRICS, Issue 1 2009
L. Taylor

Summary. Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are oftentimes problems of noncompliance which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE), which estimates the average causal treatment effect among a subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data, we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally, we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.

A Comparison of Eight Methods for the Dual-Endpoint Evaluation of Efficacy in a Proof-of-Concept HIV Vaccine Trial
BIOMETRICS, Issue 3 2006
Devan V. Mehrotra

Summary. To support the design of the world's first proof-of-concept (POC) efficacy trial of a cell-mediated immunity-based HIV vaccine, we evaluate eight methods for testing the composite null hypothesis of no vaccine effect on either the incidence of HIV infection or the viral load set point among those infected, relative to placebo.
The first two methods use a single test applied to the actual values or ranks of a burden-of-illness (BOI) outcome that combines the infection and viral load endpoints. The other six methods combine separate tests for the two endpoints using unweighted or weighted versions of the two-part z, Simes', and Fisher's methods. Based on extensive simulations that were used to design the landmark POC trial, the BOI methods are shown to have generally low power for rejecting the composite null hypothesis (and hence advancing the vaccine to a subsequent large-scale efficacy trial). The unweighted Simes' and Fisher's combination methods perform best overall. Importantly, this conclusion holds even after the test for the viral load component is adjusted for bias that can be introduced by conditioning on a postrandomization event (HIV infection). The adjustment is derived using a selection bias model based on the principal stratification framework of causal inference.

Are Statistical Contributions to Medicine Undervalued?
BIOMETRICS, Issue 1 2003
Norman E. Breslow

Summary. Econometricians Daniel McFadden and James Heckman won the 2000 Nobel Prize in economics for their work on discrete choice models and selection bias. Statisticians and epidemiologists have made similar contributions to medicine with their work on case-control studies, analysis of incomplete data, and causal inference. In spite of repeated nominations of such eminent figures as Bradford Hill and Richard Doll, however, the Nobel Prize in physiology and medicine has never been awarded for work in biostatistics or epidemiology. (The "exception who proves the rule" is Ronald Ross, who, in 1902, won the second medical Nobel for his discovery that the mosquito was the vector for malaria. Ross then went on to develop the mathematics of epidemic theory - which he considered his most important scientific contribution - and applied his insights to malaria control programs.)
The low esteem accorded epidemiology and biostatistics in some medical circles, and increasingly among the public, correlates highly with the contradictory results from observational studies that are displayed so prominently in the lay press. In spite of its demonstrated efficacy in saving lives, the "black box" approach of risk factor epidemiology is not well respected. To correct these unfortunate perceptions, statisticians would do well to follow more closely their own teachings: conduct larger, fewer studies designed to test specific hypotheses, follow strict protocols for study design and analysis, better integrate statistical findings with those from the laboratory, and exercise greater caution in promoting apparently positive results.

Survival Analysis in Clinical Trials: Past Developments and Future Directions
BIOMETRICS, Issue 4 2000
Thomas R. Fleming

Summary. The field of survival analysis emerged in the 20th century and experienced tremendous growth during the latter half of the century. The developments in this field that have had the most profound impact on clinical trials are the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457-481) method for estimating the survival function, the log-rank statistic (Mantel, 1966, Cancer Chemotherapy Reports 50, 163-170) for comparing two survival distributions, and the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-220) proportional hazards model for quantifying the effects of covariates on the survival time. The counting-process martingale theory pioneered by Aalen (1975, Statistical inference for a family of counting processes, Ph.D. dissertation, University of California, Berkeley) provides a unified framework for studying the small- and large-sample properties of survival analysis statistics.
Significant progress has been achieved and further developments are expected in many other areas, including the accelerated failure time model, multivariate failure time data, interval-censored data, dependent censoring, dynamic treatment regimes and causal inference, joint modeling of failure time and longitudinal data, and Bayesian methods.

Prediction and causal inference
ACTA PAEDIATRICA, Issue 12 2009
Luigi Gagliardi

No abstract is available for this article.

Community-based programmes to prevent falls in children: A systematic review
JOURNAL OF PAEDIATRICS AND CHILD HEALTH, Issue 9-10 2005
Rod McClure

Objective: We systematically reviewed the literature to examine the evidence for the effectiveness of community-based interventions to reduce fall-related injury in children aged 0-16 years. Methods: We performed a comprehensive search of the literature using the following study selection criteria: community-based intervention study; target population was children aged 0-16 years; outcome measure was fall-related injury rates; and either a community control or historical control was used in the study design. Quality assessment and data abstraction were guided by a standardized procedure and performed independently by two authors. Results: Only six studies fitting the inclusion criteria were identified in our search, and only two of these used a trial design with a contemporary community control. Neither of the high-quality evaluation studies showed an effect from the intervention, and while authors of the remaining studies reported effective falls prevention programmes, the pre- and post-intervention design, uncontrolled for background secular trends, makes causal inferences from these studies difficult. Conclusion: There is a paucity of research studies from which evidence regarding the effectiveness of community-based intervention programmes for the prevention of fall-related injury in children could be based.
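The Kaplan-Meier product-limit estimator mentioned in the survival-analysis review above is short enough to compute by hand. A sketch on an invented right-censored sample, resolving ties with the usual events-before-censorings convention:

```python
import numpy as np

# Toy right-censored sample: follow-up time and event flag (1 = event, 0 = censored)
time = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 7.0, 9.0, 10.0, 12.0, 15.0])
event = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])

# Sort by time, placing events before censorings at tied times
order = np.lexsort((1 - event, time))
time, event = time[order], event[order]

# Product-limit estimator: S(t) = prod over event times t_i <= t of (1 - d_i / n_i),
# where n_i is the number of subjects still at risk just before t_i
n_at_risk = len(time) - np.arange(len(time))
surv = np.cumprod(1.0 - event / n_at_risk)

for t_i, s_i in zip(time[event == 1], surv[event == 1]):
    print(f"S({t_i:g}) = {s_i:.3f}")
# The curve drops to 0 at t = 15 because the largest observation is an event;
# a censored largest observation would leave the curve hanging above 0
```

Censored subjects shrink the risk set without contributing a drop, which is exactly how the estimator uses partial information from incomplete follow-up.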
Kontrolliert und repräsentativ: Beispiele zur Komplementarität von Labor- und Felddaten [Controlled and representative: examples of the complementarity of laboratory and field data]
PERSPEKTIVEN DER WIRTSCHAFTSPOLITIK, Issue 2009
Armin Falk

Experiments offer highly controlled environments that allow precise testing and causal inferences. Survey and field data, on the other hand, provide information on large and representative samples of people interacting in their natural environment. We discuss several concrete examples of how to combine lab and field data and how to exploit potential complementarities. One example describes an experiment which is run with a representative sample to guarantee both control and representativeness. The second example is based on the idea of experimentally validating survey instruments to ensure the behavioral validity of instruments that can be used in existing panel data sets. The third example describes the possibility of using the lab to identify causal effects, which are then tested in large data sets. Topics discussed in this article comprise the relation of cognitive skills (IQ) to risk and time preferences; the determinants, prevalence, and economic consequences of risk attitudes; selection into incentive schemes; and the impact of unfair pay on stress.

Counterfactuals and the Study of the American Presidency
PRESIDENTIAL STUDIES QUARTERLY, Issue 2 2002
Jeffrey M. Chwieroth

King (1993) argues that with the small number of presidents available for study, presidency scholars cannot construct reliable causal inferences by employing the presidency as the unit of observation. He goes on to suggest that presidency scholars should abandon such research designs and search for ways to increase their number of observations. This article evaluates King's methodological prescription by examining the utility of counterfactuals for presidency scholars.
Relying on recent scholarship on counterfactuals, the author argues that presidency scholars may be able to produce reliable causal inferences while continuing to rely on the presidency as the unit of observation. He goes on to specify the conditions under which counterfactuals may be useful and illustrate the application of this method by examining Woodrow Wilson's failed effort to secure ratification of the Versailles Treaty. The author concludes that if presidency scholars' research dictates relying on the president as the unit of observation, then they are left with little choice but to rely on this method to test their theories with an adequate degree of certainty.

Estimating the Causal Effects of Media Coverage on Policy-Specific Knowledge
AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 1 2009
Jason Barabas

Policy facts are among the most relevant forms of knowledge in a democracy. Although the mass media seem like an obvious source of policy-specific information, past research in this area has been plagued by design and methodological problems that have hindered causal inferences. Moreover, few studies include measures of media content, preventing researchers from being able to say what it is about media coverage that influences learning. We advance the literature by employing a simple but underutilized approach for estimating the causal effects of news coverage. Drawing upon a unique collection of cross-sectional survey data, we make within-survey/within-subjects comparisons under conditions of high and low media coverage. We show how the volume, breadth, and prominence of news media coverage increase policy-specific knowledge above and beyond common demographic factors.

A path-analytic strategy to analyze psychoanalytic treatment effects
THE INTERNATIONAL JOURNAL OF PSYCHOANALYSIS, Issue 5 2003
James Crouse

This paper introduces a path-analytic strategy to analyze psychoanalytic treatment effects.
A simple causal model is used to analyze a well-known case study by Charles Brenner. Application of even this simple model to the case study sharpens the causal inferences that may be validly made, highlights important aspects of the psychoanalytic process, and builds a foundation for further model development.

Statistical issues on the analysis of change in follow-up studies in dental research
COMMUNITY DENTISTRY AND ORAL EPIDEMIOLOGY, Issue 6 2007
Andrew Blance

Abstract. Objective: To provide an overview of the problems in study design and associated analyses of follow-up studies in dental research, particularly addressing three issues: treatment-baseline interactions; statistical power; and nonrandomization. Background: Our previous work has shown that many studies purport an interaction between change (from baseline) and baseline values, which is often based on inappropriate statistical analyses. A priori power calculations are essential for randomized controlled trials (RCTs), but in the pre-test/post-test RCT design it is not well known to dental researchers that the choice of statistical method affects power, and that power is affected by treatment-baseline interactions. A common (good) practice in the analysis of RCT data is to adjust for baseline outcome values using ANCOVA, thereby increasing statistical power. However, an important requirement for ANCOVA is that there be no interaction between the groups and baseline outcome (i.e. effective randomization); the patient-selection process should not cause differences in mean baseline values across groups. This assumption is often violated for nonrandomized (observational) studies, and the use of ANCOVA is thus problematic, potentially giving biased estimates, invoking Lord's paradox and leading to difficulties in the interpretation of results.
Methods: Baseline interaction issues can be overcome by use of statistical methods not widely practiced in dental research: Oldham's method and multilevel modelling; the latter is preferred for its greater flexibility to deal with more than one follow-up occasion as well as additional covariates. To illustrate these three key issues, hypothetical examples are considered from the fields of periodontology, orthodontics, and oral implantology. Conclusion: Caution needs to be exercised when considering the design and analysis of follow-up studies. ANCOVA is generally inappropriate for nonrandomized studies, and causal inferences from observational data should be avoided.

Theory, Stylized Heuristic or Self-Fulfilling Prophecy? The Status of Rational Choice Theory in Public Administration
PUBLIC ADMINISTRATION, Issue 1 2004

Rational choice is intimately associated with positivism and naturalism, its appeal to scholars of public administration lying in its ability to offer a predictive science of politics that is parsimonious in its analytical assumptions, rigorous in its deductive reasoning and overarching in its apparent applicability. In this paper I re-examine the ontology and epistemology which underpin this distinctive approach to public administration, challenging the necessity of the generally unquestioned association between rational choice and both positivism and naturalism. Rational choice, I contend, can only defend its claim to offer a predictive science of politics on the basis of an ingenious, paradoxical, and seldom acknowledged structuralism and a series of analytical assumptions incapable of capturing the complexity and contingency of political systems. I argue that analytical parsimony, though itself a condition of naturalism, is in fact incompatible with the deduction of genuinely explanatory/causal inferences. This suggests that the status of rational choice as an explanatory/predictive theory needs to be reassessed.
Yet this is no reason to reject rational choice out of hand. For, deployed not as a theory in its own right but as a heuristic analytical strategy for exploring hypothetical scenarios, it is a potent and powerful resource in post-positivist public administration.
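The ANCOVA requirement discussed in the dental-research abstract above, and the resulting Lord's paradox, can be reproduced in a few lines. In this invented nonrandomized example the true treatment effect is exactly zero, yet the two standard analyses of the same data disagree; the numbers and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# No treatment effect at all, but group membership is self-selected
# on the baseline score (a nonrandomized comparison)
baseline = rng.normal(size=n)
group = (baseline + rng.normal(size=n) > 0).astype(float)
followup = 0.5 * baseline + rng.normal(size=n)   # regression to the mean, no group effect

# Analysis 1 - change scores: group difference in mean (follow-up minus baseline)
change = followup - baseline
change_est = change[group == 1].mean() - change[group == 0].mean()

# Analysis 2 - ANCOVA: regress follow-up on group, adjusting for baseline
design = np.column_stack([np.ones(n), group, baseline])
ancova_est = np.linalg.lstsq(design, followup, rcond=None)[0][1]

print(f"change-score: {change_est:.2f}  ANCOVA: {ancova_est:.2f}")
# change-score comes out clearly negative, ANCOVA near 0; the truth is 0
```

Under this particular generative model the ANCOVA answer happens to be the unbiased one, because follow-up scores really do regress to the mean; a different model reverses the verdict. The disagreement itself is the paradox, and it is why baseline-adjusted causal conclusions from nonrandomized data rest on untestable assumptions.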