Stratification Variable
Selected Abstracts

A randomized phase 2 trial comparing 3-hour versus 96-hour infusion schedules of paclitaxel for the treatment of metastatic breast cancer
CANCER, Issue 4 2010. Stacy L. Moulder MD, MSCI
Abstract BACKGROUND: This study was performed to compare the efficacy and toxicity profiles of paclitaxel given on 3-hour versus 96-hour infusion schedules. METHODS: Patients with metastatic breast cancer (MBC) were randomly assigned to receive paclitaxel starting at a dose of 250 mg/m2 intravenously (iv) over 3 hours every 21 days or paclitaxel starting at a dose of 140 mg/m2 iv over 96 hours every 21 days. Stratification variables included the number of prior chemotherapy regimens and previous response to anthracyclines. Response was assessed every 2 cycles using bidimensional measurements. Patients were allowed to cross over at disease progression or therapy intolerance. RESULTS: A total of 214 patients received therapy (107 patients per arm). Response rates were similar: 23.4% in the 3-hour arm and 29.9% in the 96-hour arm (P = .28). The median duration of response (8.9 months vs 5.7 months; P = .75) and progression-free survival (5.0 months vs 3.8 months; P = .17) slightly favored the 96-hour arm. Overall survival was slightly longer in the 3-hour arm (14.2 months vs 12.7 months; P = .57). One patient who crossed over to the 96-hour arm (N = 18) developed a partial response; no response was noted with crossover to the 3-hour arm (N = 10). Myalgia/arthralgia and neuropathy were more frequent in the 3-hour arm, whereas mucositis, neutropenic fever/infection, and diarrhea were more common in the 96-hour arm. CONCLUSIONS: Paclitaxel given by 3-hour or 96-hour infusion was active in MBC. The 96-hour paclitaxel regimen did not significantly improve response or time to disease progression, was more cumbersome to administer, and was associated with greater myelosuppression (but less neuropathy and myalgia) compared with the 3-hour schedule. Cancer 2010. © 2010 American Cancer Society. [source]

A General Algorithm for Univariate Stratification
INTERNATIONAL STATISTICAL REVIEW, Issue 3 2009. Sophie Baillargeon
Summary This paper presents a general algorithm for constructing strata in a population using X, a univariate stratification variable known for all the units in the population. Stratum h consists of all the units with an X value in the interval [b_{h-1}, b_h). The stratum boundaries {b_h} are obtained by minimizing the anticipated sample size for estimating the population total of a survey variable Y with a given level of precision. The stratification criterion allows the presence of a take-none and of a take-all stratum. The sample is allocated to the strata using a general rule that features proportional allocation, Neyman allocation, and power allocation as special cases. The optimization can take into account a stratum-specific anticipated non-response and a model for the relationship between the stratification variable X and the survey variable Y. A loglinear model with stratum-specific mortality for Y given X is presented in detail. Two numerical algorithms for determining the optimal stratum boundaries, attributable to Sethi and Kozak, are compared in a numerical study. Several examples illustrate the stratified designs that can be constructed with the proposed methodology. All the calculations presented in this paper were carried out with stratification, an R package that will be available on CRAN (Comprehensive R Archive Network). [source]
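To make the boundary-search idea concrete, the following is a minimal Python sketch of a Kozak-style random search paired with Neyman allocation for a target coefficient of variation. It is an illustration only: it treats the stratification variable X as a direct proxy for the survey variable Y and omits the take-none/take-all strata, non-response adjustment, and loglinear model handled by the stratification R package; all function names and settings here are assumptions, not that package's API.

```python
import numpy as np

def neyman_sample_size(x, boundaries, target_cv):
    """Anticipated sample size under Neyman allocation for a target CV,
    using the stratification variable x as a stand-in for the survey variable."""
    edges = np.concatenate(([-np.inf], np.sort(boundaries), [np.inf]))
    N = len(x)
    weighted_sd = 0.0   # sum of N_h * S_h
    fpc_term = 0.0      # sum of N_h * S_h^2 (finite-population correction)
    for lo, hi in zip(edges[:-1], edges[1:]):
        stratum = x[(x >= lo) & (x < hi)]
        if len(stratum) < 2:
            return np.inf                      # reject degenerate strata
        Nh, Sh = len(stratum), stratum.std(ddof=1)
        weighted_sd += Nh * Sh
        fpc_term += Nh * Sh**2
    V = (target_cv * x.mean())**2              # target variance of the estimated mean
    return weighted_sd**2 / (N**2 * V + fpc_term)

def kozak_search(x, n_strata, target_cv, iters=5000, seed=0):
    """Kozak-style random search: move one boundary at a time to a random
    data value and keep the move if the anticipated sample size decreases."""
    rng = np.random.default_rng(seed)
    best = np.quantile(x, np.linspace(0, 1, n_strata + 1)[1:-1])  # quantile start
    best_n = neyman_sample_size(x, best, target_cv)
    values = np.unique(x)
    for _ in range(iters):
        cand = best.copy()
        cand[rng.integers(len(cand))] = rng.choice(values)
        n = neyman_sample_size(x, cand, target_cv)
        if n < best_n:
            best, best_n = np.sort(cand), n
    return best, best_n

# Example: a skewed population, 4 strata, 5% target CV on the estimated mean
x = np.random.default_rng(1).lognormal(mean=2.0, sigma=1.0, size=10_000)
boundaries, n = kozak_search(x, n_strata=4, target_cv=0.05)
print("boundaries:", np.round(boundaries, 1), " anticipated n:", round(n))
```

The random-search acceptance rule is what distinguishes Kozak's approach from Sethi's iterative scheme; both aim at the same criterion, the smallest anticipated sample size meeting the precision target.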
Balancing treatment allocations by clinician or center in randomized trials allows unacceptable levels of treatment prediction
JOURNAL OF EVIDENCE BASED MEDICINE, Issue 3 2009. Robert K Hills
Abstract Objective Randomized controlled trials are the standard method for comparing treatments because they avoid the selection bias that might arise if clinicians were free to choose which treatment a patient would receive. In practice, allocation of treatments in randomized controlled trials is often not wholly random, with various 'pseudo-randomization' methods, such as minimization or balanced blocks, used to ensure good balance between treatments within potentially important prognostic or predictive subgroups. These methods avoid selection bias so long as full concealment of the next treatment allocation is maintained. There is concern, however, that pseudo-random methods may allow clinicians to predict future treatment allocations from previous allocation history, particularly if allocations are balanced by clinician or center. We investigate here to what extent treatment prediction is possible. Methods Using computer simulations of minimization and balanced block randomizations, the success rates of various prediction strategies were investigated for varying numbers of stratification variables, including the patient's clinician. Results Prediction rates for minimization and balanced block randomization typically exceed 60% when clinician is included as a stratification variable and, under certain circumstances, can exceed 80%. Increasing the number of clinicians and other stratification variables did not greatly reduce the prediction rates. Without clinician as a stratification variable, prediction rates are poor unless few clinicians participate. Conclusion Prediction rates are unacceptably high when allocations are balanced by clinician or by center. This could easily lead to selection bias that might suggest spurious, or mask real, treatment effects. Unless treatment is blinded, randomization should not be balanced by clinician (or by center), and clinician/center effects should be allowed for instead by retrospectively stratified analyses. [source]
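The prediction mechanism described above is easy to reproduce. Below is a small, self-contained Python sketch (a hedged toy model, not the authors' simulation code): minimization balances the arms within each clinician, so a clinician who always guesses the currently under-represented arm for their own patients is correct whenever the tallies are unbalanced, pushing the prediction rate well above the 50% expected under pure randomization.

```python
import random

def favoured_arm(tallies, rng):
    """Arm that minimization would assign for this clinician: the one that is
    currently under-represented, with ties broken at random."""
    a, b = tallies
    if a != b:
        return 0 if a < b else 1
    return rng.randrange(2)

def simulate(n_patients=1000, n_clinicians=5, seed=0):
    rng = random.Random(seed)
    tallies = [[0, 0] for _ in range(n_clinicians)]   # per-clinician arm counts
    hits = 0
    for _ in range(n_patients):
        c = rng.randrange(n_clinicians)               # this patient's clinician
        guess = favoured_arm(tallies[c], rng)         # clinician's prediction
        arm = favoured_arm(tallies[c], rng)           # actual minimization allocation
        tallies[c][arm] += 1
        hits += guess == arm
    return hits / n_patients

print(f"correctly predicted allocations: {simulate():.0%}")   # roughly 75% here
```

Adding more stratification factors or a random element to the minimization rule lowers this rate, but, as the abstract reports, not by as much as one might hope when clinician or center is among the balancing variables.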
Duloxetine vs placebo in the treatment of stress urinary incontinence: a four-continent randomized clinical trial
BJU INTERNATIONAL, Issue 3 2004. R.J. Millard
OBJECTIVES To further assess, in a phase 3 study, treatment with duloxetine for women with stress urinary incontinence (SUI) in other geographical regions, including Argentina, Australia, Brazil, Finland, Poland, South Africa and Spain, as previous trials in North America and Europe provided evidence for the safety and efficacy of duloxetine as a pharmacological treatment for SUI in women. PATIENTS AND METHODS The study included 458 women aged 27-79 years enrolled in a double-blind, placebo-controlled trial. The patients with predominantly SUI were identified using a validated clinical algorithm. They were randomly assigned to receive placebo (231) or duloxetine 40 mg twice daily (227) for 12 weeks. The primary outcome variables included the incontinence episode frequency (IEF) and the Incontinence Quality of Life (I-QOL) questionnaire. Van Elteren's test was used to analyse the percentage changes in IEF, where the stratification variable was weekly baseline IEF (IEF <14 and ≥14). Analysis of covariance was used to analyse I-QOL scores. RESULTS The mean baseline IEF was 18.4/week; 55% of patients had a baseline IEF of ≥14. There was a significantly greater median decrease in IEF with duloxetine than with placebo (54% vs 40%, P = 0.05), with comparable significant improvements in quality of life (I-QOL score increases of 10.3 vs 6.4, P = 0.007). The improvements with duloxetine were associated with significantly greater increases in voiding intervals than with placebo (20.4 vs 8.5 min, P < 0.001). The placebo response was 10.7% and 12.5% higher than those reported in two European and North American phase 3 trials. This may have been related to more patients being naïve to incontinence management in the current trial. Discontinuation rates for adverse events were 1.7% for placebo and 17.2% for duloxetine (P < 0.001), with nausea being the most common reason for discontinuation (3.1%). Nausea was the most common adverse event with duloxetine, but was mild or moderate in most cases (81%), did not worsen in any patient, and resolved within 7 days in 60% and within 1 month in 86% of continuing patients; 88% of women who experienced nausea while taking duloxetine completed the trial. CONCLUSIONS These results show improvements in incontinence and quality of life with duloxetine 40 mg twice daily for 12 weeks that are in keeping with those reported in two other recently completed phase 3 trials in Europe and North America. [source]
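Van Elteren's test used above is a stratified Wilcoxon rank-sum test in which each stratum's rank sum is weighted by 1/(n_h + 1). The Python sketch below shows that calculation with simulated data standing in for the trial's percentage changes in IEF; the variance formula omits the tie correction, and the data, group sizes, and effect sizes are entirely hypothetical.

```python
import numpy as np
from scipy.stats import rankdata, norm

def van_elteren(y, treated, stratum):
    """Stratified Wilcoxon rank-sum (Van Elteren) test.
    y: outcome (e.g. % change in IEF); treated: 0/1 group; stratum: labels."""
    y, treated, stratum = map(np.asarray, (y, treated, stratum))
    T = E = V = 0.0
    for s in np.unique(stratum):
        mask = stratum == s
        ys, gs = y[mask], treated[mask]
        n, m = len(ys), int(gs.sum())           # stratum size, treated count
        ranks = rankdata(ys)                    # mid-ranks for ties
        w = 1.0 / (n + 1)                       # Van Elteren stratum weight
        T += w * ranks[gs == 1].sum()
        E += w * m * (n + 1) / 2                # null expectation of the rank sum
        V += w**2 * m * (n - m) * (n + 1) / 12  # null variance (no tie correction)
    z = (T - E) / np.sqrt(V)
    return z, 2 * norm.sf(abs(z))               # two-sided p-value

# Hypothetical data stratified by baseline IEF (<14 vs >=14 episodes/week)
rng = np.random.default_rng(0)
stratum = np.repeat(["<14", ">=14"], 100)
treated = np.tile([0, 1], 100)
pct_change = rng.normal(-40 - 14 * treated, 25)   # made-up % reductions in IEF
print(van_elteren(pct_change, treated, stratum))
```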
Assessing risk indicators for dental caries in the primary dentition
COMMUNITY DENTISTRY AND ORAL EPIDEMIOLOGY, Issue 6 2001. Jackie Vanobbergen
Abstract The aim of the present study was to assess indicators shown to be associated with the prevalence of caries in the primary dentition of 7-year-old Flemish schoolchildren. Cross-sectional first-year data of the longitudinal Signal-Tandmobiel® survey were analysed (n=4468). Gender, age, oral hygiene habits, use of fluorides, dietary habits, geographical factors and parental modelling were the predictors considered. From the multiple logistic regression analysis, including schools as a random effect, and after adjusting for the confounding variables, namely educational system and province (the stratification variables), gender and age, it became clear that the following risk indicators remained significant (at the 5% level) for the presence of caries: frequency of toothbrushing (P=0.05), with an OR of 1.24 for brushing less than once a day; age at start of brushing (P<0.001), with an OR of 1.22 for a delay of 1 year; regular use of fluoride supplements (P<0.001), with an OR of 1.54 for no use; daily use of sugar-containing drinks between meals (P<0.001), with an OR of 1.38; and number of between-meal snacks (P=0.012), with an OR of 1.22 for using more than 2 between-meal snacks. There was a significant difference (P<0.05) in caries experience determined by the geographical spread, with a clear trend of caries declining from east to west. In a model with an ordinal response outcome, the daily use of sugar-containing drinks between meals had a more pronounced effect when caries levels were high. From this study it became obvious that, in Flemish children, an early start of brushing and a brushing frequency of at least once a day need to be encouraged, while the use of sugar-containing drinks and snacks between meals needs to be restricted to a maximum of 2 per day. Geographical differences need to be investigated in more detail. [source]
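The reported odds ratios come directly from the coefficients of the logistic model: an OR of 1.22 per one-year delay in starting to brush corresponds to a log-odds coefficient of about 0.2. The minimal Python sketch below illustrates that relationship with simulated data; it deliberately ignores the school random effect and the other covariates in the actual analysis, and every number in it is hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: age (years) at which brushing started and caries presence (0/1)
rng = np.random.default_rng(0)
age_start = rng.uniform(0.5, 4.0, 5000)
log_odds = -1.0 + 0.2 * age_start              # true per-year coefficient of 0.2
caries = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Simple (fixed-effects only) logistic fit; exponentiating the slope gives the OR
fit = sm.Logit(caries, sm.add_constant(age_start)).fit(disp=False)
print("OR per 1-year delay:", np.exp(fit.params[1]))   # close to exp(0.2) ≈ 1.22
```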