Treatment Allocation (treatment + allocation)


Selected Abstracts


Tacrolimus as secondary intervention vs. cyclosporine continuation in patients at risk for chronic renal allograft failure

CLINICAL TRANSPLANTATION, Issue 5 2005
Thomas Waid
Abstract: Background: Chronic renal allograft failure (CRAF) is the leading cause of graft loss post-renal transplantation. This study evaluated the efficacy and safety of tacrolimus as secondary intervention in cyclosporine-treated kidney transplantation patients with impaired allograft function as indicated by elevated serum creatinine (SCr) levels. Methods: Patients receiving cyclosporine-based immunosuppression who had an elevated SCr at least 3 months post-renal transplantation were enrolled. Treatment allocation was 2:1 to switch to tacrolimus or continue cyclosporine. This analysis was performed after 2 yr; patients will be followed for an additional 3 yr. Results: There were 186 enrolled and evaluable patients. On baseline biopsy, 90% of patients had chronic allograft nephropathy. Baseline median SCr was 2.5 mg/dL in both treatment groups. For patients with graft function at month 24, SCr had decreased to 2.3 mg/dL in the tacrolimus-treated patients and increased to 2.6 mg/dL in the cyclosporine-treated patients (p = 0.01). Acute rejection occurred in 4.8% of tacrolimus-treated patients and 5.0% of cyclosporine-treated patients during follow-up. Two-year allograft survival was comparable between groups (tacrolimus 69%, cyclosporine 67%; p = 0.70). Tacrolimus-treated patients had significantly lower cholesterol and low-density lipoprotein levels and also had fewer new-onset infections. Cardiac conditions developed in significantly fewer tacrolimus-treated patients (5.6%) than cyclosporine-treated patients (24.3%; p = 0.004). Glucose levels and the incidences of new-onset diabetes and new-onset hyperglycemia did not differ between treatment groups. Conclusions: Conversion from cyclosporine to tacrolimus results in improved renal function and lipid profiles, and significantly fewer cardiovascular events with no differences in the incidence of acute rejection or new-onset hyperglycemia. [source]


Effect of insulin infusion on electrocardiographic findings following acute myocardial infarction: importance of glycaemic control

DIABETIC MEDICINE, Issue 2 2009
R. M. Gan
Abstract Aims: To determine the effects of insulin infusion and blood glucose levels during acute myocardial infarction (AMI) on electrocardiographic (ECG) features of myocardial electrical activity. Methods: ECGs at admission and 24 h were examined in a randomized study of insulin infusion vs. routine care for AMI patients with diabetes or hyperglycaemia. Results were analysed according to treatment allocation and also according to average blood glucose level. Results: ECG characteristics were similar at admission in both groups. Patients allocated to conventional treatment had prolongation of the QT interval (QTc) after 24 h but those receiving infused insulin did not. In patients with a mean blood glucose in the first 24 h > 8.0 mmol/l, new ECG conduction abnormalities were significantly more common than in patients with mean blood glucose ≤ 8.0 mmol/l (15.0% vs. 6.0%, P < 0.05). Conclusions: Prevention of QTc prolongation by administration of insulin may reflect a protective effect on metabolic and electrical activity in threatened myocardial tissue. Abnormalities of cardiac electrical conduction may also be influenced by blood glucose. [source]


Protective Effect of HOE642, a Selective Blocker of Na+ -H+ Exchange, Against the Development of Rigor Contracture in Rat Ventricular Myocytes

EXPERIMENTAL PHYSIOLOGY, Issue 1 2000
Marisol Ruiz-Meana
The objective of this study was to investigate the effect of Na+-H+ exchange (NHE) and HCO3−-Na+ symport inhibition on the development of rigor contracture. Freshly isolated adult rat cardiomyocytes were subjected to 60 min metabolic inhibition (MI) and 5 min re-energization (Rx). The effects of perfusion of HCO3−-containing or HCO3−-free buffer with or without the NHE inhibitor HOE642 (7 μM) were investigated during MI and Rx. In HCO3−-free conditions, HOE642 reduced the percentage of cells developing rigor during MI from 79 ± 1% to 40 ± 4% (P < 0.001) without modifying the time at which rigor appeared. This resulted in a 30% reduction of hypercontracture during Rx (P < 0.01). The presence of HCO3− abolished the protective effect of HOE642 against rigor. Cells that had developed rigor underwent hypercontracture during Rx independently of treatment allocation. Ratio fluorescence measurement demonstrated that the rise in cytosolic Ca2+ (fura-2) occurred only after the onset of rigor, and was not influenced by HOE642. NHE inhibition did not modify the Na+ rise (SBFI) during MI, but exaggerated the initial fall of intracellular pH (BCECF). In conclusion, HOE642 has a protective effect against rigor during energy deprivation, but only when HCO3−-dependent transporters are inhibited. This effect is independent of changes in cytosolic Na+ or Ca2+ concentrations. [source]


Regular or "Super-Aspirins"?

JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 4 2001
A Review of Thienopyridines or Aspirin to Prevent Stroke
PURPOSE: To review the evidence for the effectiveness and safety of the thienopyridines (ticlopidine and clopidogrel) compared with aspirin for the prevention of vascular events among patients at high risk of vascular disease. BACKGROUND: Atherosclerosis and resultant cardiovascular disease are important causes of morbidity and mortality in older people. In particular, atherosclerosis of the cerebral arteries can lead to transient ischemic attacks (TIAs) and stroke. Stroke ranks as the third-leading cause of death in the United States and in 1997 was responsible for over 150,000 fatalities.1 In addition to the mortality associated with this disease, stroke is also a leading source of long-term disability in survivors. Nearly 4.5 million stroke survivors are alive today,1 highlighting the fact that primary, but also secondary, prevention is extremely important for minimizing the complications of this illness. DATA SOURCES: Specialized trial registers of the Cochrane Stroke Group and the Antithrombotic Trialists' Collaboration, MEDLINE, and Embase were searched. Additional unpublished information and data were sought from Sanofi, the pharmaceutical company that developed and manufactures ticlopidine and clopidogrel, as well as from the principal investigators of the Clopidogrel versus Aspirin in Patients at Risk of Ischemic Events (CAPRIE) trial,7 the largest of the trials identified. STUDY SELECTION CRITERIA: All unconfounded randomized trials comparing either ticlopidine or clopidogrel with aspirin among patients at high risk of vascular disease (those with symptoms of ischemia of the cerebral, coronary, or peripheral circulations) who were followed for at least 1 month for the recurrence of vascular events were included. DATA EXTRACTION: Data were extracted from four randomized trials completed in the past 20 years, which included 22,656 patients.7-10 Two authors independently extracted the data from these trials for the following information: the types of patients enrolled; the entry and exclusion criteria; the randomization method; the number of patients originally allocated to the treatment and control groups; the method and duration of follow-up; the number of patients in each group lost to follow-up; information on compliance with the treatment allocated; the definitions of outcome events; the number of outcome events in each treatment group; and any method used for blinding patients, treating clinicians, and outcome assessors to treatment allocation. MAIN RESULTS: Four completed trials involving a total of 22,656 patients were identified. Aspirin was compared with ticlopidine in three trials (3,471 patients)8-10 and with clopidogrel in one trial (19,185 patients).7 A recent TIA or ischemic stroke was the qualifying event in 9,840 patients, a recent myocardial infarction in 6,302 patients, and symptomatic peripheral arterial disease in 6,514 patients. The average age of the patients was approximately 63 years, with approximately two-thirds of the patients being male and white. The duration of follow-up ranged from 12 to 40 months. CONCLUSIONS: This systematic review demonstrates that, compared with aspirin, thienopyridines are only modestly more effective in preventing serious vascular events in high-risk patients. For patients who are intolerant of, or allergic to, aspirin, the available safety and efficacy data suggest that clopidogrel is an appropriate, but more expensive, alternative antiplatelet drug. It appears safer than ticlopidine and as safe as aspirin, but it should not replace aspirin as the first-choice antiplatelet agent for all patients. Further studies are necessary to determine which, if any, particular types of patients would benefit most and least from clopidogrel instead of aspirin. [source]


Balancing treatment allocations by clinician or center in randomized trials allows unacceptable levels of treatment prediction

JOURNAL OF EVIDENCE BASED MEDICINE, Issue 3 2009
Robert K Hills
Abstract Objective Randomized controlled trials are the standard method for comparing treatments because they avoid the selection bias that might arise if clinicians were free to choose which treatment a patient would receive. In practice, allocation of treatments in randomized controlled trials is often not wholly random, with various 'pseudo-randomization' methods, such as minimization or balanced blocks, used to ensure good balance between treatments within potentially important prognostic or predictive subgroups. These methods avoid selection bias so long as full concealment of the next treatment allocation is maintained. There is concern, however, that pseudo-random methods may allow clinicians to predict future treatment allocations from previous allocation history, particularly if allocations are balanced by clinician or center. We investigate here to what extent treatment prediction is possible. Methods Using computer simulations of minimization and balanced block randomizations, the success rates of various prediction strategies were investigated for varying numbers of stratification variables, including the patient's clinician. Results Prediction rates for minimization and balanced block randomization typically exceed 60% when clinician is included as a stratification variable and, under certain circumstances, can exceed 80%. Increasing the number of clinicians and other stratification variables did not greatly reduce the prediction rates. Without clinician as a stratification variable, prediction rates are poor unless few clinicians participate. Conclusion Prediction rates are unacceptably high when allocations are balanced by clinician or by center. This could easily lead to selection bias that might suggest spurious, or mask real, treatment effects. Unless treatment is blinded, randomization should not be balanced by clinician (or by center), and clinician/center effects should be allowed for instead by retrospectively stratified analyses. [source]
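
The prediction risk described in this abstract can be illustrated with a small simulation. The sketch below is not the authors' code; the two-arm Pocock-Simon-style minimization, the "guess from your own patients only" strategy, and all function names are illustrative assumptions.

```python
import random

ARMS = ("A", "B")

def minimization_allocation(history, strata, rng=random):
    """Deterministic two-arm minimization (Pocock-Simon style): choose the arm
    that minimizes the summed range of arm counts within the new patient's
    levels of each stratification factor; ties are broken at random."""
    scores = {}
    for arm in ARMS:
        imbalance = 0
        for factor, level in strata.items():
            counts = {a: 0 for a in ARMS}
            for past_strata, past_arm in history:
                if past_strata.get(factor) == level:
                    counts[past_arm] += 1
            counts[arm] += 1  # hypothetically assign the candidate arm
            imbalance += max(counts.values()) - min(counts.values())
        scores[arm] = imbalance
    best = min(scores.values())
    return rng.choice([a for a in ARMS if scores[a] == best])

def prediction_rate(n_patients=400, n_clinicians=5, seed=1):
    """A clinician guesses the next allocation for their own patient using only
    the allocations they have already seen; the trial office then allocates
    using the full history. High agreement indicates poor concealment."""
    rng = random.Random(seed)
    history, correct = [], 0
    for _ in range(n_patients):
        strata = {"clinician": rng.randrange(n_clinicians),
                  "risk": rng.choice(["low", "high"])}
        own = [(s, a) for s, a in history if s["clinician"] == strata["clinician"]]
        guess = minimization_allocation(own, strata, rng)
        actual = minimization_allocation(history, strata, rng)
        correct += (guess == actual)
        history.append((strata, actual))
    return correct / n_patients

if __name__ == "__main__":
    print(f"allocations predicted correctly: {prediction_rate():.0%}")
```

Because the clinician's own allocation history dominates the imbalance score for the "clinician" factor, guesses based only on locally visible assignments agree with the true allocations well above chance, mirroring the prediction rates reported above.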


Cisapride treatment for gastro-oesophageal reflux in children: A systematic review of randomized controlled trials

JOURNAL OF PAEDIATRICS AND CHILD HEALTH, Issue 6 2000
R E Gilbert
Abstract: The aim of the systematic review was to determine the effect of cisapride compared with placebo or other non-surgical therapies for the treatment of symptoms of gastro-oesophageal reflux in children. We searched MEDLINE, EMBASE, the Cochrane Controlled Trials Register, Science Citation Index and reference lists for randomized controlled trials which compared cisapride with placebo or other non-surgical therapy in children. We included only trials which reported reflux-related symptoms as an outcome, provided that cisapride was administered orally for at least 1 week. Seven trials (286 children in total) compared cisapride with placebo. Two trials reported good concealment of treatment allocation. The pooled odds ratio for the 'same or worse' symptoms was 0.34 (95% CI 0.10, 1.19). There was substantial heterogeneity between studies (P < 0.00001) and the funnel plot was asymmetrical. Adverse effects (mainly diarrhoea) were not significantly increased with cisapride (pooled odds ratio (OR) 1.80: 0.87, 3.70). The reflux index was significantly reduced in children treated with cisapride (weighted mean difference −6.49: −10.13, −2.85). One study (50 children) compared cisapride with Gaviscon plus Carobel: the OR for the 'same or worse' symptoms was 3.26 (0.93, 11.38). There was no clear evidence that cisapride reduced symptoms of gastro-oesophageal reflux. As smaller, poorer quality studies were biased in favour of a positive treatment effect, the pooled OR overestimated the potential benefits of cisapride. There was some evidence to suggest that Gaviscon plus Carobel may be a more effective option than cisapride. [source]


Post-thrombotic syndrome, recurrence, and death 10 years after the first episode of venous thromboembolism treated with warfarin for 6 weeks or 6 months

JOURNAL OF THROMBOSIS AND HAEMOSTASIS, Issue 4 2006
S. SCHULMAN
Summary. Background: The influence of the duration of anticoagulant therapy after venous thromboembolism (VTE) on the long-term morbidity and mortality is unclear. Aim: To investigate the long-term sequelae of VTE in patients randomized to different durations of secondary prophylaxis. Methods: In a multicenter trial comparing secondary prophylaxis with vitamin K antagonists for 6 weeks or 6 months, we extended the originally planned 2-year follow-up to 10 years. The patients had annual visits, and at the last visit clinical assessment of the post-thrombotic syndrome (PTS) was performed. Recurrent thromboembolism was adjudicated by a radiologist, blinded to treatment allocation. Causes of death were obtained from the Swedish Death Registry. Results: Of the 897 patients randomized, 545 could be evaluated at the 10-year follow-up. The probability of developing severe PTS was 6%, and any sign of PTS was seen in 56.3% of the evaluated patients. In multivariate analysis, old age and signs of impaired circulation at discharge from the hospital were independent risk factors at baseline for development of PTS after 10 years. Recurrent thromboembolism occurred in 29.1% of the patients, with a higher rate among males, older patients, those with a permanent triggering risk factor (especially venous insufficiency at baseline), signs of impaired venous circulation at discharge, proximal deep vein thrombosis, or pulmonary embolism. Death occurred in 28.5%, which was a higher mortality than expected, with a standardized incidence ratio (SIR) of 1.43 (95% CI 1.28-1.58), mainly because of a higher mortality than expected from cancer (SIR 1.83; 95% CI 1.44-2.23) or from myocardial infarction or stroke (SIR 1.28; 95% CI 1.00-1.56). The duration of anticoagulation did not have a statistically significant effect on any of the long-term outcomes. Conclusion: The morbidity and mortality during the 10 years after the first episode of VTE is high and is not reduced by extension of secondary prophylaxis from 6 weeks to 6 months. A strategy to reduce recurrence of VTE as well as mortality from arterial disease is needed. [source]


Use of simulation to compare the performance of minimization with stratified blocked randomization

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 4 2009
Robert Toorawa
Abstract Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before the balance advantage is lost. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations. Copyright © 2008 John Wiley & Sons, Ltd. [source]
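
As a companion to that recommendation, here is a minimal simulation sketch (not taken from the paper) showing how the assignment probability of a biased-coin minimization trades randomness against end-of-trial balance; the single-factor setup and the names below are simplifying assumptions rather than the authors' multi-strata design.

```python
import random

def biased_coin_arm(counts, p, rng):
    """Two-arm biased-coin minimization: favour the under-represented arm with
    probability p (p = 1.0 is deterministic minimization, p = 0.5 is simple
    randomization)."""
    if counts["A"] == counts["B"]:
        return rng.choice(["A", "B"])
    favoured = "A" if counts["A"] < counts["B"] else "B"
    other = "B" if favoured == "A" else "A"
    return favoured if rng.random() < p else other

def mean_final_imbalance(p, n_patients=100, n_trials=2000, seed=0):
    """Average absolute difference in arm sizes at the end of the trial."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        counts = {"A": 0, "B": 0}
        for _ in range(n_patients):
            counts[biased_coin_arm(counts, p, rng)] += 1
        total += abs(counts["A"] - counts["B"])
    return total / n_trials

for p in (0.5, 0.7, 0.8, 0.9, 1.0):
    print(f"assignment probability {p}: mean imbalance {mean_final_imbalance(p):.2f}")
```

Plotting the mean imbalance (and, analogously, the observed test size of the planned analysis) against the assignment probability over many simulated trials is the kind of evidence the authors suggest using to choose that probability.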


Major reduction in spinal inflammation in patients with ankylosing spondylitis after treatment with infliximab: Results of a multicenter, randomized, double-blind, placebo-controlled magnetic resonance imaging study

ARTHRITIS & RHEUMATISM, Issue 5 2006
Jürgen Braun
Objective To determine whether the effects of anti-tumor necrosis factor α (anti-TNFα) in reducing the signs and symptoms of ankylosing spondylitis (AS) coincide with a reduction in spinal inflammation as detected by magnetic resonance imaging (MRI). Methods Pre- and postgadolinium T1 and STIR MR images of the spine were acquired at baseline and at week 24 in patients with AS who participated in a multicenter, randomized, double-blind, placebo-controlled study. Patients were randomly assigned at an 8:3 ratio to receive infusions of infliximab (5 mg/kg) or placebo at weeks 0, 2, and 6 and then every 6 weeks thereafter. MR images were obtained and evaluated independently by 2 readers who were blinded to the treatment allocation and time sequence of the images. Results A total of 194 patients in the infliximab group and 72 patients in the placebo group had evaluable images at baseline and week 24. About 80% of the patients had at least 1 active spinal lesion at baseline, as assessed by MRI. The improvement in the MRI Activity Score after 6 months was significantly greater in the patients who received infliximab (mean 5.02, median 2.72) than in those who received placebo (mean 0.60, median 0.0) (P < 0.001). Almost complete resolution of spinal inflammation was seen in most patients who received infliximab, irrespective of baseline activity. Conclusion Patients with AS who received infliximab therapy showed a decrease in spinal inflammation as detected by MRI, whereas those who received placebo showed persistent inflammatory spondylitis. [source]


COBRA combination therapy in patients with early rheumatoid arthritis: Long-term structural benefits of a brief intervention

ARTHRITIS & RHEUMATISM, Issue 2 2002
Robert B. M. Landewé
Objective The Combinatietherapie Bij Reumatoide Artritis (COBRA) trial demonstrated that step-down combination therapy with prednisolone, methotrexate, and sulfasalazine (SSZ) was superior to SSZ monotherapy for suppressing disease activity and radiologic progression of rheumatoid arthritis (RA). The current study was conducted to investigate whether the benefits of COBRA therapy were sustained over time, and to determine which baseline factors could predict outcome. Methods All patients had participated in the 56-week COBRA trial. During followup, they were seen by their own rheumatologists and were also assessed regularly by study nurses; no treatment protocol was specified. Disease activity, radiologic damage, and functional ability were the primary outcome domains. Two independent assessors scored radiographs in sequence according to the Sharp/van der Heijde method. Outcomes were analyzed by generalized estimating equations on the basis of intent-to-treat, starting with data obtained at the last visit of the COBRA trial (56 weeks after baseline). Results At the beginning of followup, patients in the COBRA group had a significantly lower mean time-averaged 28-joint disease activity score (DAS28) and a significantly lower median radiologic damage (Sharp) score compared with those in the SSZ monotherapy group. The functional ability score (Health Assessment Questionnaire [HAQ]) was similar in both groups. During the 4-5-year followup period, the time-averaged DAS28 decreased 0.17 points per year in the SSZ group and 0.07 in the COBRA group. The Sharp progression rate was 8.6 points per year in the SSZ group and 5.6 in the COBRA group. After adjustment for differences in treatment and disease activity during followup, the between-group difference in the rate of radiologic progression was 3.7 points per year. The HAQ score did not change significantly over time. Independent baseline predictors of radiologic progression over time (apart from treatment allocation) were rheumatoid factor positivity, Sharp score, and DAS28. Conclusion An initial 6-month cycle of intensive combination treatment that includes high-dose corticosteroids results in sustained suppression of the rate of radiologic progression in patients with early RA, independent of subsequent antirheumatic therapy. [source]


A COMPARISON OF THE IMPRECISE BETA CLASS, THE RANDOMIZED PLAY-THE-WINNER RULE AND THE TRIANGULAR TEST FOR CLINICAL TRIALS WITH BINARY RESPONSES

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2007
Lyle C. Gurrin
Summary This paper develops clinical trial designs that compare two treatments with a binary outcome. The imprecise beta class (IBC), a class of beta probability distributions, is used in a robust Bayesian framework to calculate posterior upper and lower expectations for treatment success rates using accumulating data. The posterior expectation for the difference in success rates can be used to decide when there is sufficient evidence for randomized treatment allocation to cease. This design is formally related to the randomized play-the-winner (RPW) design, an adaptive allocation scheme where randomization probabilities are updated sequentially to favour the treatment with the higher observed success rate. A connection is also made between the IBC and the sequential clinical trial design based on the triangular test. Theoretical and simulation results are presented to show that the expected sample sizes on the truly inferior arm are lower using the IBC compared with either the triangular test or the RPW design, and that the IBC performs well against established criteria involving error rates and the expected number of treatment failures. [source]
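
For readers unfamiliar with the RPW scheme referred to above, the following is a minimal urn-model sketch (an illustrative assumption, not the authors' code or the IBC itself): a success on an arm adds balls for that arm, a failure adds balls for the other arm, so allocation drifts toward the arm with the higher observed success rate.

```python
import random

def rpw_trial(p_a, p_b, n_patients=100, u=1, beta=1, seed=0):
    """Randomized play-the-winner RPW(u, beta): start with u balls per arm;
    each success adds beta balls for the treated arm, each failure adds beta
    balls for the opposite arm. Returns patients assigned and successes per arm."""
    rng = random.Random(seed)
    urn = {"A": u, "B": u}
    assigned = {"A": 0, "B": 0}
    successes = {"A": 0, "B": 0}
    for _ in range(n_patients):
        arm = "A" if rng.random() < urn["A"] / (urn["A"] + urn["B"]) else "B"
        assigned[arm] += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        successes[arm] += success
        rewarded = arm if success else ("B" if arm == "A" else "A")
        urn[rewarded] += beta
    return assigned, successes

# True success rates 0.7 vs 0.4: allocation should drift toward arm A.
print(rpw_trial(p_a=0.7, p_b=0.4))
```

The paper's comparison concerns how many patients such adaptive schemes place on the truly inferior arm relative to the robust Bayesian IBC stopping rule and the triangular test.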


Response Adaptive Designs with a Variance-penalized Criterion

BIOMETRICAL JOURNAL, Issue 5 2009
Yanqing Yi
Abstract We consider a response adaptive design of clinical trials with a variance-penalized criterion. It is shown that this criterion evaluates the performance of a response adaptive design based on both the number of patients assigned to the better treatment and the power of the statistical test. A new proportion of treatment allocation is proposed and the doubly biased coin procedure is used to target the proposed proportion. Under reasonable assumptions, the proposed design is demonstrated to generate an asymptotic variance of allocation proportions, which is smaller than that of the drop-the-loser design. Simulation comparisons of the proposed design with some existing designs are presented. [source]
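
As a rough illustration of how a biased-coin procedure can be steered toward a chosen allocation proportion, here is a minimal sketch using a Hu-Zhang-style allocation function with a fixed target; in the design described above the target would instead be re-estimated from accumulating responses, and the function names and the value of gamma are assumptions.

```python
import random

def dbcd_probability(frac_a, target_a, gamma=2.0):
    """Allocation probability for arm A: pushes the running allocation
    proportion toward the target; larger gamma corrects deviations harder."""
    if frac_a <= 0.0:
        return 1.0
    if frac_a >= 1.0:
        return 0.0
    num = target_a * (target_a / frac_a) ** gamma
    den = num + (1 - target_a) * ((1 - target_a) / (1 - frac_a)) ** gamma
    return num / den

def simulate(target_a=0.6, n_patients=200, seed=0):
    """Assign patients one at a time and report the final proportion on arm A."""
    rng = random.Random(seed)
    n_a = 0
    for i in range(n_patients):
        frac_a = n_a / i if i else 0.5
        if rng.random() < dbcd_probability(frac_a, target_a):
            n_a += 1
    return n_a / n_patients

print(f"final proportion on arm A: {simulate():.2f}")  # settles near the 0.6 target
```

The variance of the realized allocation proportion around its target, over many such simulated trials, is the quantity the abstract compares against the drop-the-loser design.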


Oral cimetidine gives effective symptom relief in painful bladder disease: a prospective, randomized, double-blind placebo-controlled trial

BJU INTERNATIONAL, Issue 3 2001
R. Thilagarajah
Objective To evaluate the efficacy of oral cimetidine as a treatment for painful bladder disease (PBD, variously described as a 'symptom complex' of suprapubic pain, frequency, dysuria and nocturia in the absence of overt urine infection) by assessing symptom relief and histological changes in the bladder wall tissue components, compared with placebo. Patients and methods The study comprised 36 patients with PBD enrolled into a double-blind clinical study with two treatment arms, i.e. oral cimetidine or placebo, for a 3-month trial. Patients were asked to complete a symptom questionnaire (maximum score 35), and underwent cystoscopy and bladder biopsy before treatment allocation. On completing treatment the patients were re-evaluated by the questionnaire and biopsy. The symptom scores and bladder mucosal histology were compared before and after treatment, and the results analysed statistically to assess the efficacy of cimetidine. Results Of the 36 patients recruited, 34 (94%) completed the study. Those receiving cimetidine had a significant improvement in symptoms, with median symptom scores decreasing from 19 to 11 (P < 0.001). Suprapubic pain and nocturia decreased markedly (P = 0.009 and 0.006, respectively). However, histologically the bladder mucosa showed no qualitative change in the glycosaminoglycan layer or basement membrane, or in muscle collagen deposition, in either group. The T cell infiltrate was marginally decreased in the cimetidine group (median 203 before and 193 after) and increased in the placebo group (median 243 and 250, P > 0.3 and > 0.2, respectively). Angiogenesis remained relatively unchanged. The incidence of mast cells and B cells was sporadic in both groups. Conclusions Oral cimetidine is very effective in relieving symptoms in patients with PBD but there is no apparent histological change in the bladder mucosa after treatment; the mechanism of symptom relief remains to be elucidated. [source]


Quantifying the Magnitude of Baseline Covariate Imbalances Resulting from Selection Bias in Randomized Clinical Trials

BIOMETRICAL JOURNAL, Issue 2 2005
Vance W. Berger
Abstract Selection bias is most common in observational studies, when patients select their own treatments or treatments are assigned based on patient characteristics, such as disease severity. This first-order selection bias, as we call it, is eliminated by randomization, but residual selection bias may occur even in randomized trials when, subconsciously or otherwise, an investigator uses advance knowledge of upcoming treatment allocations as the basis for deciding whom to enroll. For example, patients more likely to respond may be preferentially enrolled when the active treatment is due to be allocated, and patients less likely to respond may be enrolled when the control group is due to be allocated. If the upcoming allocations can be observed in their entirety, then we will call the resulting selection bias second-order selection bias. Allocation concealment minimizes the ability to observe upcoming allocations, yet upcoming allocations may still be predicted (imperfectly), or even determined with certainty, if at least some of the previous allocations are known, and if restrictions (such as randomized blocks) were placed on the randomization. This mechanism, based on prediction but not observation of upcoming allocations, is the third-order selection bias that is controlled by perfectly successful masking, but without perfect masking is not controlled even by the combination of advance randomization and allocation concealment. Our purpose is to quantify the magnitude of baseline imbalance that can result from third-order selection bias when the randomized block procedure is used. The smaller the block sizes, the more accurately one can predict future treatment assignments in the same block as known previous assignments, so this magnitude will depend on the block size, as well as on the level of certainty about upcoming allocations required to bias the patient selection. We find that a binary covariate can, on average, be up to 50% unbalanced by third-order selection bias. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
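
The effect of third-order selection bias under permuted blocks can be made concrete with a small simulation. This sketch is illustrative only (the "certainty" enrolment strategy, the 50% baseline responder rate, and the block size are assumptions, not the paper's model): whenever the tail of the current block is forced, the investigator enrols a likely responder for treatment and a likely non-responder for control.

```python
import random

def permuted_block(block_size, rng):
    """One randomly permuted block with equal numbers of T and C."""
    block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
    rng.shuffle(block)
    return block

def third_order_bias(n_blocks=500, block_size=4, seed=0):
    """Return the proportion of 'responders' enrolled on each arm when the
    investigator knows all previous allocations and enrols selectively only
    when the remaining allocations in the block are certain."""
    rng = random.Random(seed)
    responders = {"T": 0, "C": 0}
    totals = {"T": 0, "C": 0}
    for _ in range(n_blocks):
        block = permuted_block(block_size, rng)
        for i, arm in enumerate(block):
            if len(set(block[i:])) == 1:           # rest of block is forced
                is_responder = (arm == "T")        # biased patient selection
            else:
                is_responder = rng.random() < 0.5  # no useful prediction
            totals[arm] += 1
            responders[arm] += is_responder
    return {arm: responders[arm] / totals[arm] for arm in totals}

print(third_order_bias())  # responder rate on T exceeds that on C
```

Smaller blocks force the tail of the block more often, so the covariate imbalance produced by this strategy grows as the block size shrinks, in line with the dependence on block size described in the abstract.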