Binary Outcome
Selected Abstracts

Re-Formulating Non-inferiority Trials as Superiority Trials: The Case of Binary Outcomes
BIOMETRICAL JOURNAL, Issue 1 2009, Valerie L. Durkalski
Abstract: Non-inferiority trials are conducted for a variety of reasons, including to show that a new treatment has a negligible reduction in efficacy or safety when compared to the current standard treatment, or, in a more complex setting, to show that a new treatment has a negligible reduction in efficacy compared to the current standard yet is superior in terms of other treatment characteristics. The latter reason for conducting a non-inferiority trial presents the challenge of deciding on a balance between a suitable reduction in efficacy, known as the non-inferiority margin, in return for a gain in other important treatment characteristics/findings. It would be ideal to alleviate the dilemma over the choice of margin in this setting by reverting to a traditional superiority trial design in which a single p-value for superiority of both the most important endpoint (efficacy) and the most important finding (treatment characteristic) is provided. We discuss how this can be done using the information-preserving composite endpoint (IPCE) approach and consider binary outcome cases in which the combination of efficacy and treatment characteristics, but neither one by itself, paints a clear picture that the novel treatment is superior to the active control. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Evaluating uses of data mining techniques in propensity score estimation: a simulation study
PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 6 2008, Soko Setoguchi MD, DrPH
Abstract: Background: In propensity score modeling, it is standard practice to optimize the prediction of exposure status based on the covariate information.
In a simulation study, we examined in which situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. Methods: We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates, with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Results: Data mining techniques yielded higher C than LR (mean: NN, 0.86; RP1, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = −0.3, p = 0.1) but was correlated with increased SE (COR = 0.7, p < 0.001). Conclusions: Effect estimates from EPS models fitted by simple LR were generally robust. NN models generally provided the least biased estimates. C was not associated with the magnitude of bias but was associated with increased SE. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Modeling missing binary outcome data in a successful web-based smokeless tobacco cessation program
ADDICTION, Issue 6 2010, Keith Smolkowski
Abstract: Aim: To examine various methods to impute missing binary outcomes from a web-based tobacco cessation intervention. Design: The ChewFree randomized controlled trial used a two-arm design to compare tobacco abstinence at both the 3- and 6-month follow-up for participants randomized to either an enhanced web-based intervention condition or a basic information-only control condition.
Setting: Internet in the United States and Canada. Participants: Secondary analyses focused on 2523 participants in the ChewFree trial. Measurements: Point-prevalence tobacco abstinence measured at 3- and 6-month follow-up. Findings: The results of this study confirmed the findings of the original ChewFree trial and highlighted the use of different missing-data approaches to achieve intent-to-treat analyses when confronted with substantial attrition. The different imputation methods yielded results that differed in both the size of the estimated treatment effect and the standard errors. Conclusions: The choice of imputation model used to analyze missing binary outcome data can substantially affect the size and statistical significance of the treatment effect. Without additional information about the missing cases, imputation methods can overestimate the effect of treatment. Multiple imputation methods are recommended, especially those that permit a sensitivity analysis of their impact. [source]

Evaluating bias due to population stratification in case-control association studies of admixed populations
GENETIC EPIDEMIOLOGY, Issue 1 2004, Yiting Wang
Abstract: The potential for bias from population stratification (PS) has raised concerns about case-control studies involving admixed ethnicities. We evaluated the potential bias due to PS in relating a binary outcome to a candidate gene under simulated settings in which study populations consist of multiple ethnicities. Disease risks were assigned within the range of prostate cancer rates of African Americans reported in SEER registries, assuming k = 2, 5, or 10 admixed ethnicities. Genotype frequencies were considered in the range of 5–95%. Under a model assuming no genotype effect on disease (odds ratio (OR) = 1), the range of observed OR estimates ignoring ethnicity was 0.64–1.55 for k = 2, 0.72–1.33 for k = 5, and 0.81–1.22 for k = 10.
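The confounding mechanism studied above can be illustrated with a small deterministic sketch (not the authors' code; the stratum sizes, genotype frequencies, and baseline risks below are hypothetical). With a true within-stratum odds ratio of 1, the crude odds ratio that ignores ethnicity is biased away from 1, while a stratified Mantel–Haenszel estimate recovers the null:

```python
# Two strata with different baseline disease risks and genotype frequencies;
# the genotype has no effect within either stratum (true OR = 1), yet the
# pooled 2x2 table that ignores stratum shows a spurious association.
# Expected cell counts are used, so the illustration is deterministic.

def crude_and_mh_or(strata):
    """strata: list of (n, genotype_freq, baseline_risk) per stratum."""
    a = b = c = d = 0.0            # pooled 2x2 expected counts
    mh_num = mh_den = 0.0          # Mantel-Haenszel components
    for n, g, r in strata:
        e_a = n * g * r            # genotype carriers, diseased
        e_b = n * g * (1 - r)      # genotype carriers, healthy
        e_c = n * (1 - g) * r      # non-carriers, diseased
        e_d = n * (1 - g) * (1 - r)
        a, b, c, d = a + e_a, b + e_b, c + e_c, d + e_d
        mh_num += e_a * e_d / n
        mh_den += e_b * e_c / n
    return (a * d) / (b * c), mh_num / mh_den

crude, mh = crude_and_mh_or([(1000, 0.8, 0.20),   # stratum 1: common genotype, high risk
                             (1000, 0.2, 0.05)])  # stratum 2: rare genotype, low risk
# crude ≈ 2.36 despite no true effect; mh = 1.0 exactly
```

The crude estimate is biased precisely because genotype frequency and baseline risk vary together across strata, which is the scenario the simulation study varies.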
When the genotype effect on disease was modeled as OR = 2, the ranges of observed OR estimates were 1.28–3.09, 1.43–2.65, and 1.62–2.42 for k = 2, 5, and 10 ethnicities, respectively. Our results indicate that the magnitude of bias is small unless extreme differences exist in genotype frequency. Bias due to PS decreases as the number of admixed ethnicities increases. The biases are bounded by the minimum and maximum of all pairwise baseline disease odds ratios across ethnicities. Therefore, bias due to PS alone may be small when baseline risk differences are small within major categories of admixed ethnicity, such as African Americans. © 2004 Wiley-Liss, Inc. [source]

A Randomized, Double-Blind, Placebo-Controlled Pilot Study of Naltrexone in Outpatients With Bipolar Disorder and Alcohol Dependence
ALCOHOLISM, Issue 11 2009, E. Sherwood Brown
Background: Alcohol dependence is extremely common in patients with bipolar disorder and is associated with unfavorable outcomes including treatment nonadherence, violence, increased hospitalization, and decreased quality of life. While naltrexone is a standard treatment for alcohol dependence, no controlled trials have examined its use in patients with co-morbid bipolar disorder and alcohol dependence. In this pilot study, the efficacy of naltrexone in reducing alcohol use and its effect on mood symptoms were assessed in bipolar disorder and alcohol dependence. Methods: Fifty adult outpatients with bipolar I or II disorders and current alcohol dependence with active alcohol use were randomized to 12 weeks of naltrexone (50 mg/d) add-on therapy or placebo. Both groups received manual-driven cognitive behavioral therapy designed for patients with bipolar disorder and substance-use disorders. Drinking days and heavy drinking days, alcohol craving, liver enzymes, and manic and depressed mood symptoms were assessed. Results: The two groups were similar in baseline and demographic characteristics.
Naltrexone showed trends (p < 0.10) toward a greater decrease in drinking days (binary outcome), alcohol craving, and some liver enzyme levels than placebo. Side effects were similar in the two groups. Response to naltrexone was significantly related to medication adherence. Conclusions: Results suggest the potential value and acceptable tolerability of naltrexone for alcohol dependence in bipolar disorder patients. A larger trial is needed to establish efficacy. [source]

Reliability of the assessment of preventable adverse drug events in daily clinical practice
PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 7 2008, Jasperien E. van Doormaal PharmD
Abstract: Purpose: To determine the reliability of the assessment of preventable adverse drug events (ADEs) in daily practice and to explore the impact of the assessors' professional background and the case characteristics on reliability. Methods: We used a combination of the simplified Yale algorithm and the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) scheme to assess, on the one hand, the causal relationship between medication errors (MEs) and adverse events in hospitalised patients and, on the other hand, the severity of the clinical consequences of MEs. Five pharmacists and five physicians applied this algorithm to 30 potential MEs. After individual assessment, the pharmacists reached consensus, as did the physicians. Outcomes were both the MEs' severity (ordinal scale, NCC MERP categories A–I) and the occurrence of preventable harm (binary outcome, NCC MERP categories A–D vs. E–I). Kappa statistics were used to assess agreement. Results: The overall agreement on MEs' severity was fair for the pharmacists (κ = 0.34) as well as for the physicians (κ = 0.25). Overall agreement for the 10 raters was fair (κ = 0.25), as was the agreement between both consensus outcomes (κ = 0.30).
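The kappa statistics reported in the reliability study above measure agreement beyond what chance alone would produce. For two raters and a binary judgement, Cohen's kappa can be computed as in this minimal sketch (the ratings below are invented for illustration, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[lab] * c2[lab] for lab in set(rater1) | set(rater2)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Ten hypothetical cases judged "preventable harm" (1) or not (0):
pharmacist = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
physician  = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
kappa = cohens_kappa(pharmacist, physician)   # ≈ 0.58
```

Here the raters agree on 8 of 10 cases, but because chance agreement is already 0.52, kappa is only about 0.58, which is why raw percent agreement overstates reliability.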
Agreement on the occurrence of preventable harm was higher, ranging from κ = 0.36 for the physicians to κ = 0.49 for the pharmacists. Overall agreement for the 10 raters was fair (κ = 0.36). The agreement between both consensus outcomes was moderate (κ = 0.47). None of the included case characteristics had a significant impact on agreement. Conclusions: Individual assessment of preventable ADEs in real patients is difficult, possibly because of the difficult assessment of contextual information. The best approach seems to be a consensus method including both pharmacists and physicians. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Meta-analysis of a binary outcome using individual participant data and aggregate data
RESEARCH SYNTHESIS METHODS, Issue 1 2010, Richard D. Riley
Abstract: In this paper, we develop meta-analysis models that synthesize a binary outcome from health-care studies while accounting for participant-level covariates. In particular, we show how to synthesize the observed event risk across studies while accounting for the within-study association between participant-level covariates and individual event probability. The models are adapted for situations where studies provide individual participant data (IPD) or a mixture of IPD and aggregate data. We show that the availability of IPD is crucial in at least some studies; this allows one to model potentially complex within-study associations and separate them from across-study associations, so as to account for potential ecological bias and study-level confounding. The models can produce pertinent population-level and individual-level results, such as the pooled event risk and the covariate-specific event probability for an individual. Application is made to 14 studies of traumatic brain injury, where IPD are available for four studies and the six-month mortality risk is synthesized in relation to individual age.
The results show that as individual age increases, the probability of six-month mortality also increases; further, the models reveal clear evidence of ecological bias, with the mean age in each study additionally influencing an individual's mortality probability. Copyright © 2010 John Wiley & Sons, Ltd. [source]

CUMULATIVE SUM TECHNIQUES FOR SURGEONS: A BRIEF REVIEW
ANZ JOURNAL OF SURGERY, Issue 7 2007, Cheng-Hon Yap
There has been increasing awareness of the need to monitor the quality of health care, particularly in the area of surgery. Cumulative summation (Cusum) techniques have emerged as a popular tool for performance monitoring in surgery. They allow one to judge whether a given variation in performance is probably due to chance or greater than could be expected from random variation, and thus a cause for concern. Cusum techniques are simple to carry out and can be applied to any surgical process with a binary outcome. Four parameters need to be set in advance: the acceptable outcome rate, the unacceptable outcome rate, and the Type I and Type II error rates. In this article, we review the history, statistical methods and potential applications of Cusum techniques in the field of surgery, and illustrate the two common forms of charting (cumulative failure and Cusum charting) using unadjusted outcome data from the Geelong Hospital and St Vincent's Hospital cardiac surgery databases. [source]

A COMPARISON OF THE IMPRECISE BETA CLASS, THE RANDOMIZED PLAY-THE-WINNER RULE AND THE TRIANGULAR TEST FOR CLINICAL TRIALS WITH BINARY RESPONSES
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2007, Lyle C. Gurrin
Summary: This paper develops clinical trial designs that compare two treatments with a binary outcome. The imprecise beta class (IBC), a class of beta probability distributions, is used in a robust Bayesian framework to calculate posterior upper and lower expectations for treatment success rates using accumulating data.
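The posterior upper and lower expectations just mentioned can be sketched with the standard imprecise beta model, in which the prior is the entire class Beta(st, s(1−t)) for t in (0, 1) with a fixed prior weight s. The choice s = 2 and the interim counts below are assumptions for illustration, not the paper's setting:

```python
def ibc_posterior_bounds(successes, n, s=2.0):
    """Posterior lower/upper expectations of a success rate under an
    imprecise beta class: all priors Beta(s*t, s*(1-t)), t in (0, 1).
    With x successes in n trials, the posterior mean (x + s*t) / (n + s)
    is minimized at t -> 0 and maximized at t -> 1."""
    lower = successes / (n + s)
    upper = (successes + s) / (n + s)
    return lower, upper

# Compare two arms at an interim look; allocation to the weaker arm could
# cease when even the most pessimistic reading of one arm beats the most
# optimistic reading of the other (hypothetical counts).
lo_a, hi_a = ibc_posterior_bounds(18, 25)   # arm A: 18/25 successes
lo_b, hi_b = ibc_posterior_bounds(8, 25)    # arm B: 8/25 successes
superior = lo_a > hi_b                      # lower bound of A exceeds upper bound of B
```

The gap between the bounds shrinks as n grows, which is why accumulating data eventually yields a decisive comparison.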
The posterior expectation for the difference in success rates can be used to decide when there is sufficient evidence for randomized treatment allocation to cease. This design is formally related to the randomized play-the-winner (RPW) design, an adaptive allocation scheme in which randomization probabilities are updated sequentially to favour the treatment with the higher observed success rate. A connection is also made between the IBC and the sequential clinical trial design based on the triangular test. Theoretical and simulation results are presented to show that the expected sample sizes on the truly inferior arm are lower using the IBC compared with either the triangular test or the RPW design, and that the IBC performs well against established criteria involving error rates and the expected number of treatment failures. [source]

Blinded Sample Size Reestimation in Non-Inferiority Trials with Binary Endpoints
BIOMETRICAL JOURNAL, Issue 6 2007, Tim Friede
Abstract: Sample size calculations in the planning of clinical trials depend on good estimates of the model parameters involved. When the estimates of these parameters have a high degree of uncertainty attached to them, it is advantageous to reestimate the sample size after an internal pilot study. For non-inferiority trials with a binary outcome, we compare the Type I error rate and power of fixed-size designs and designs with sample size reestimation. The latter design shows itself to be effective in correcting the sample size and power when misspecification of nuisance parameters occurs with the former design. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Association Models for Clustered Data with Binary and Continuous Responses
BIOMETRICS, Issue 1 2010, Lanjia Lin
Summary: We consider the analysis of clustered data with mixed bivariate responses, i.e., where each member of the cluster has a binary and a continuous outcome.
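The sensitivity of a fixed-size design to nuisance-parameter misspecification, which motivates the reestimation abstract above, can be seen from the usual normal-approximation sample size formula for a non-inferiority comparison of two proportions. This is a textbook-style sketch, not the paper's reestimation procedure; the rates and margin are hypothetical:

```python
import math
from statistics import NormalDist

def ni_sample_size(p_t, p_c, margin, alpha=0.025, power=0.9):
    """Per-group n for a non-inferiority test of two success proportions
    (unpooled normal approximation; H0: p_t <= p_c - margin, one-sided alpha).
    Power is evaluated at the assumed true rates p_t and p_c."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    return math.ceil((z_a + z_b) ** 2 * var / (p_t - p_c + margin) ** 2)

n_planned = ni_sample_size(0.85, 0.85, 0.10)   # planning assumption: 85% success
n_revised = ni_sample_size(0.80, 0.80, 0.10)   # internal pilot suggests 80% instead
```

A modest misjudgement of the overall event rate (a nuisance parameter that a blinded internal pilot can estimate) changes the required n noticeably, which is exactly what reestimation is designed to correct.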
We propose a new bivariate random effects model that induces associations among the binary outcomes within a cluster, among the continuous outcomes within a cluster, and between a binary outcome and a continuous outcome from different subjects within a cluster, as well as the direct association between the binary and continuous outcomes within the same subject. For ease of interpretation of the regression effects, the marginal model of the binary response probability integrated over the random effects preserves the logistic form, and the marginal expectation of the continuous response preserves the linear form. We implement maximum likelihood estimation of our model parameters using standard software such as PROC NLMIXED of SAS. Our simulation study demonstrates the robustness of our method with respect to misspecification of the regression model as well as the random effects model. We illustrate our methodology by analyzing a developmental toxicity study of ethylene glycol in mice. [source]

Conditioning in 2 × 2 Tables
BIOMETRICS, Issue 1 2009, Michael A. Proschan
Summary: Two-by-two tables arise in a number of diverse settings in biomedical research, including the analysis of data from a clinical trial with a binary outcome and gating methods in flow cytometry to separate antigen-specific immune responses from general immune responses. These applications offer interesting challenges concerning what we should really be conditioning on: the total number of events, the number of events in the control condition, etc. We give several biostatistics examples to illustrate the complexities of analyzing what appear to be simple data. [source]

Determining a Maximum-Tolerated Schedule of a Cytotoxic Agent
BIOMETRICS, Issue 2 2005, Thomas M. Braun
Summary: Most phase I clinical trials are designed to determine a maximum-tolerated dose (MTD) for one initial administration or treatment course of a cytotoxic experimental agent.
Toxicity usually is defined as the indicator of whether one or more particular adverse events occur within a short time period from the start of therapy. However, physicians often administer an agent to the patient repeatedly and monitor long-term toxicity due to cumulative effects. We propose a new method for such settings. It is based on the time to toxicity rather than a binary outcome, and the goal is to determine a maximum-tolerated schedule (MTS) rather than a conventional MTD. The model and method account for a patient's entire sequence of administrations, with the overall hazard of toxicity modeled as the sum of a sequence of hazards, each associated with one administration. Data monitoring and decision making are done continuously throughout the trial. We illustrate the method with an allogeneic bone marrow transplantation (BMT) trial to determine how long a recombinant human growth factor can be administered as prophylaxis for acute graft-versus-host disease (aGVHD), and we present a simulation study in the context of this trial. [source]

Interpreting Parameters in the Logistic Regression Model with Random Effects
BIOMETRICS, Issue 3 2000, Klaus Larsen
Summary: Logistic regression with random effects is used to study the relationship between explanatory variables and a binary outcome in cases with nonindependent outcomes. In this paper, we examine in detail the interpretation of both fixed effects and random effects parameters. As heterogeneity measures, the random effects parameters included in the model are not easily interpreted. We discuss different alternative measures of heterogeneity and suggest using a median odds ratio measure that is a function of the original random effects parameters. The measure allows a simple interpretation, in terms of well-known odds ratios, that greatly facilitates communication between the data analyst and the subject-matter researcher.
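The median odds ratio just described has a closed form for a random-intercept logistic model: MOR = exp(√(2σ²) · Φ⁻¹(0.75)), where σ² is the random-intercept variance. A one-line sketch (the variance value below is illustrative, not from the paper):

```python
import math
from statistics import NormalDist

def median_odds_ratio(var_re):
    """Median odds ratio from the random-intercept variance of a logistic
    model: the median of the odds ratio between two randomly chosen
    clusters, comparing the higher-risk cluster to the lower-risk one."""
    return math.exp(math.sqrt(2 * var_re) * NormalDist().inv_cdf(0.75))

mor = median_odds_ratio(0.5)   # cluster variance of 0.5 gives MOR ≈ 1.96
```

A variance of 0 gives MOR = 1 (no cluster heterogeneity), and larger variances translate directly into larger "typical" between-cluster odds ratios, which is what makes the measure easy to communicate.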
Three examples from different subject areas, mainly taken from our own experience, serve to motivate and illustrate different aspects of parameter interpretation in these models. [source]

Is the placebo powerless? Update of a systematic review with 52 new randomized trials comparing placebo with no treatment
JOURNAL OF INTERNAL MEDICINE, Issue 2 2004
Abstract: Background: It is widely believed that placebo interventions induce powerful effects. We could not confirm this in a systematic review of 114 randomized trials that compared placebo-treated with untreated patients. Aim: To study whether a new sample of trials would reproduce our earlier findings, and to update the review. Methods: Systematic review of trials published since our last search (or not previously identified), and of all available trials. Results: Data were available in 42 of 52 new trials (3212 patients). The results were similar to our previous findings. The updated review summarizes data from 156 trials (11 737 patients). We found no statistically significant pooled effect in 38 trials with binary outcomes: relative risk 0.95 (95% confidence interval 0.89–1.01). The effect on continuous outcomes decreased with increasing sample size, and there was considerable variation in effect also between large trials; the effect estimates should therefore be interpreted cautiously. If this bias is disregarded, the pooled standardized mean difference in 118 trials with continuous outcomes was −0.24 (−0.31 to −0.17). For trials with patient-reported outcomes the effect was −0.30 (−0.38 to −0.21), but only −0.10 (−0.20 to 0.01) for trials with observer-reported outcomes. Of 10 clinical conditions investigated in three trials or more, placebo had a statistically significant pooled effect only on pain or phobia on continuous scales. Conclusion: We found no evidence of a generally large effect of placebo interventions.
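The pooled relative risk quoted for the binary-outcome trials comes from a meta-analysis across trials. A generic fixed-effect, inverse-variance pooling of log relative risks, a sketch of the standard technique rather than the review's exact model, with invented trial counts, looks like this:

```python
import math

def pool_log_rr(trials):
    """Fixed-effect inverse-variance pooling of relative risks.
    trials: list of (events_trt, n_trt, events_ctl, n_ctl) tuples."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # approximate var of log RR
        weight = 1 / var                         # precision weighting
        num += weight * log_rr
        den += weight
    est = num / den
    se = math.sqrt(1 / den)
    return math.exp(est), (math.exp(est - 1.96 * se), math.exp(est + 1.96 * se))

# Two hypothetical trials of placebo vs. no treatment (event = no improvement):
rr, ci = pool_log_rr([(30, 100, 33, 100), (45, 150, 47, 150)])
```

Pooling on the log scale keeps the weighting approximately normal; a confidence interval that straddles 1, as in the review's 0.89–1.01, indicates no statistically significant pooled effect.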
A possible small effect on patient-reported continuous outcomes, especially pain, could not be clearly distinguished from bias. [source]

Joint generalized estimating equations for multivariate longitudinal binary outcomes with missing data: an application to acquired immune deficiency syndrome data
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2009, Stuart R. Lipsitz
Summary: In a large, prospective longitudinal study designed to monitor cardiac abnormalities in children born to women who are infected with the human immunodeficiency virus, instead of a single outcome variable there are multiple binary outcomes (e.g. abnormal heart rate, abnormal blood pressure and abnormal heart wall thickness) considered as joint measures of heart function over time. In the presence of missing responses at some time points, longitudinal marginal models for these multiple outcomes can be estimated by using generalized estimating equations (GEEs), and consistent estimates can be obtained under the assumption of a missingness-completely-at-random mechanism. When the missing-data mechanism is missingness at random, i.e. the probability of missing a particular outcome at a time point depends on observed values of that outcome and the remaining outcomes at other time points, we propose joint estimation of the marginal models by using a single modified GEE based on an EM-type algorithm. The method proposed is motivated by the longitudinal study of cardiac abnormalities in children who were born to women infected with the human immunodeficiency virus, and analyses of these data are presented to illustrate the application of the method.
Further, in an asymptotic study of bias, we show that, under a missingness-at-random mechanism in which missingness depends on all observed outcome variables, our joint estimation via the modified GEE produces almost unbiased estimates, provided that the correlation model has been correctly specified, whereas estimates from standard GEEs can lead to substantial bias. [source]

Estimating the effect of treatment in a proportional hazards model in the presence of non-compliance and contamination
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2007, Jack Cuzick
Summary: Methods for adjusting for non-compliance and contamination, which respect the randomization, are extended from binary outcomes to time-to-event analyses by using a proportional hazards model. A simple non-iterative method is developed for the case with no covariates, which is a generalization of the Mantel–Haenszel estimator. More generally, a 'partial likelihood' is developed which accommodates covariates under the assumption that they are independent of compliance. A key feature is that the proportion of contaminators and non-compliers in the risk set is updated at each failure time. When covariates are not independent of compliance, a full likelihood is developed and explored, but this leads to a complex estimator. Estimating equations and information matrices are derived for these estimators, and they are evaluated by simulation studies. [source]

Maximum likelihood estimation of bivariate logistic models for incomplete responses with indicators of ignorable and non-ignorable missingness
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2002, Nicholas J. Horton
Summary: Missing observations are a common problem that complicates the analysis of clustered data. In the Connecticut child surveys of childhood psychopathology, it was possible to identify reasons why outcomes were not observed.
Of note, some of these causes of missingness may be assumed to be ignorable, whereas others may be non-ignorable. We consider logistic regression models for incomplete bivariate binary outcomes and propose mixture models that permit estimation under the assumption that there are two distinct types of missingness mechanism: one ignorable, the other non-ignorable. A feature of the mixture modelling approach is that additional analyses to assess sensitivity to the assumptions about missingness are relatively straightforward to incorporate. The methods were developed for analysing data from the Connecticut child surveys, where there are missing informant reports of child psychopathology and different reasons for missingness can be distinguished. [source]

Joint Regression Analysis of Correlated Data Using Gaussian Copulas
BIOMETRICS, Issue 1 2009, Peter X.-K.
Summary: This article concerns a new joint modeling approach for correlated data analysis. Utilizing Gaussian copulas, we present a unified and flexible machinery to integrate separate one-dimensional generalized linear models (GLMs) into a joint regression analysis of continuous, discrete, and mixed correlated outcomes. This essentially leads to a multivariate analogue of the univariate GLM theory and hence an efficiency gain in the estimation of regression coefficients. The availability of joint probability models enables us to develop full maximum likelihood inference. Numerical illustrations focus on regression models for discrete correlated data, including multidimensional logistic regression models and a joint model for mixed normal and binary outcomes. In the simulation studies, the proposed copula-based joint model is compared to the popular generalized estimating equations approach, a moment-based estimating equation method for joining univariate GLMs. Two real-world data examples are used in the illustration. [source]

Testing for Spatial Correlation in Nonstationary Binary Data, with Application to Aberrant Crypt Foci in Colon Carcinogenesis
BIOMETRICS, Issue 4 2003, Tatiyana V. Apanasovich
Summary: In an experiment to understand colon carcinogenesis, all animals were exposed to a carcinogen, with half the animals also being exposed to radiation. Spatially, we measured the existence of what are referred to as aberrant crypt foci (ACF), namely morphologically changed colonic crypts that are known to be precursors of colon cancer development. The biological question of interest is whether the locations of these ACFs are spatially correlated: if so, this indicates that damage to the colon due to carcinogens and radiation is localized.
Statistically, the data take the form of binary outcomes (corresponding to the existence of an ACF) on a regular grid. We develop score-type methods based upon the Matérn and conditionally autoregressive (CAR) correlation models to test for spatial correlation in such data, while allowing for nonstationarity. Because of a technical peculiarity of the score-type test, we also develop robust versions of the method. The methods are compared to a generalization of Moran's test for continuous outcomes, and are shown via simulation to have the potential for increased power. When applied to our data, the methods indicate the existence of spatial correlation, and hence indicate localization of damage. [source]
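The classical statistic that the score-type tests above are compared against can be sketched for a binary grid as follows. This is a generic Moran's I with rook adjacency and binary spatial weights, not the paper's nonstationarity-adjusted method; the grids are toy examples:

```python
def morans_i(grid):
    """Moran's I on a regular grid of 0/1 outcomes, rook (4-neighbour)
    adjacency with binary weights. Positive values indicate clustering
    of like values; negative values indicate an alternating pattern."""
    dev = {(i, j): grid[i][j] - sum(map(sum, grid)) / (len(grid) * len(grid[0]))
           for i in range(len(grid)) for j in range(len(grid[0]))}
    n = len(dev)
    num = w_sum = 0.0
    for (i, j), d in dev.items():
        for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            if (i + di, j + dj) in dev:          # neighbour inside the grid
                num += d * dev[(i + di, j + dj)]
                w_sum += 1                        # each directed edge once
    den = sum(d * d for d in dev.values())
    return (n / w_sum) * (num / den)

checker = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]   # alternating ACF pattern
clumped = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]   # clustered ACF pattern
```

The clumped grid yields a positive I (neighbouring cells tend to match, i.e. localized damage) while the checkerboard yields a negative I, which is the qualitative contrast the formal tests quantify.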