Sample Size Formulas
Selected Abstracts

A Sample Size Formula for the Supremum Log-Rank Statistic
BIOMETRICS, Issue 1 2005. Kevin Hasegawa Eng
Summary: An advantage of the supremum log-rank statistic over the standard log-rank statistic is its increased sensitivity to a wider variety of stochastic ordering alternatives. In this article, we develop a formula for sample size computation for studies utilizing the supremum log-rank statistic. The idea is to base power on the proportional hazards alternative, so that the supremum log-rank test has the same power as the standard log-rank test in the setting where the standard log-rank test is optimal. This results in a slight increase in sample size over that required for the standard log-rank test; for example, a 5.733% increase occurs for a two-sided test with type I error 0.05 and power 0.80. This slight increase in sample size is offset by the significant gains in power the supremum log-rank test achieves for a wide range of nonproportional hazards alternatives. A small simulation study is used for illustration. These results should facilitate wider use of the supremum log-rank statistic in clinical trials.

Calculating Sample Size for Studies with Expected All-or-None Nonadherence and Selection Bias
BIOMETRICS, Issue 2 2009. Michelle D. Shardell
Summary: We develop sample size formulas for studies aiming to test mean differences between a treatment and a control group when all-or-none nonadherence (noncompliance) and selection bias are expected. Recent work by Fay, Halloran, and Follmann (2007, Biometrics 63, 465-474) addressed the increased variances within groups defined by treatment assignment when nonadherence occurs, compared with the scenario of full adherence, under the assumption of no selection bias. In this article, we extend the authors' approach to allow selection bias in the form of systematic differences in means and variances among latent adherence subgroups. We illustrate the approach by performing sample size calculations to plan clinical trials with and without pilot adherence data. Sample size formulas and tests for normally distributed outcomes are also developed in a Web Appendix that account for uncertainty of estimates from external or internal pilot data.

Effects of Correlation and Missing Data on Sample Size Estimation in Longitudinal Clinical Trials
PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 1 2010. Song Zhang
Abstract: In longitudinal clinical trials, a common objective is to compare the rates of change in an outcome variable between two treatment groups. Generalized estimating equations (GEE) have been widely used to examine whether the rates of change differ significantly between treatment groups, owing to their robustness to misspecification of the true correlation structure and to randomly missing data. The usual sample size formula for repeated outcomes assumes data are missing completely at random and relies on a large-sample approximation. A simulation study is conducted to investigate the performance of the GEE sample size formula with small sample sizes, a damped exponential family of correlation structures, and non-ignorable missing data.
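As a rough companion to the longitudinal-trial abstract above, the sketch below implements the textbook large-sample formula for the number of subjects per group needed to detect a difference in slopes, assuming an exchangeable (compound-symmetry) correlation structure, a common visit schedule, and no missing data. It does not reflect the damped exponential correlation or non-ignorable missingness studied in the paper, and the planning values in the example are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def gee_slope_sample_size(delta, sigma, rho, times, alpha=0.05, power=0.80):
    """Subjects per group needed to detect a slope difference `delta`.

    Textbook large-sample formula for comparing rates of change between two
    groups under an exchangeable correlation structure with complete data
    (two-sided test at level `alpha`).
    """
    m = len(times)                                 # measurements per subject
    tbar = sum(times) / m
    sx2 = sum((t - tbar) ** 2 for t in times) / m  # spread of the visit times
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n = 2 * z ** 2 * sigma ** 2 * (1 - rho) / (m * sx2 * delta ** 2)
    return ceil(n)

# Hypothetical planning values: 5 annual visits, residual SD 1.0,
# within-subject correlation 0.5, slope difference 0.25 per year
print(gee_slope_sample_size(delta=0.25, sigma=1.0, rho=0.5, times=[0, 1, 2, 3, 4]))
```

With these inputs the formula gives roughly 13 subjects per group; note that a stronger within-subject correlation shrinks the requirement, because slope contrasts remove shared subject-level variation.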
Sample Size for Post-marketing Safety Studies Based on Historical Controls
PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 8 2010. Yu-te Wu
Purpose: As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of its increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study that incorporates historical external data.
Methods: An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is the outcome of interest. The performance of the exact method is compared with that of its approximate, large-sample counterpart.
Results: The proposed hybrid design requires a smaller sample size than the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared with the approximate method for the study scenarios examined.
Conclusions: The proposed hybrid design retains the advantages and rationale of the two-group design while generally requiring smaller sample sizes.

Comparison of Sample Size Formulae for 2 × 2 Cross-over Designs Applied to Bioequivalence Studies
PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 4 2005. Arminda Lucia Siqueira
Abstract: We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration-time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should lie within the interval (−0.2231, 0.2231). We compare the gold-standard method for calculating the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared with the gold-standard method. However, in some situations the approximate methods produce sample sizes very similar to those of the gold-standard method.
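To make the comparison in the abstract above concrete, here is a minimal sketch (not the paper's code) contrasting a normal-approximation total sample size with one driven by approximate TOST power from the non-central t distribution, for average bioequivalence in a 2 × 2 cross-over with lognormal data. The within-subject CV, true ratio, and target power are hypothetical planning values, and the non-central-t power expression is the common approximation rather than an exact Owen's Q calculation.

```python
from math import ceil, log, sqrt
from scipy.stats import norm, nct, t

def n_normal_approx(cv, theta=0.95, alpha=0.05, power=0.80, lo=0.80, hi=1.25):
    """Total sample size from the normal approximation (2x2 cross-over TOST)."""
    sw2 = log(1 + cv ** 2)                      # within-subject variance, log scale
    margin = min(log(hi) - log(theta), log(theta) - log(lo))
    z = norm.ppf(1 - alpha) + norm.ppf(power)   # assumes theta != 1
    n = 2 * z ** 2 * sw2 / margin ** 2
    return 2 * ceil(n / 2)                      # round up to an even total

def power_nct(n, cv, theta=0.95, alpha=0.05, lo=0.80, hi=1.25):
    """Approximate TOST power using the non-central t distribution."""
    sw = sqrt(log(1 + cv ** 2))
    se = sw * sqrt(2 / n)                       # SE of the log-scale difference
    df = n - 2
    tcrit = t.ppf(1 - alpha, df)
    ncp1 = (log(theta) - log(lo)) / se
    ncp2 = (log(theta) - log(hi)) / se
    return max(0.0, nct.cdf(-tcrit, df, ncp2) - nct.cdf(tcrit, df, ncp1))

cv, theta, target = 0.30, 0.95, 0.80            # hypothetical planning values
n_approx = n_normal_approx(cv, theta, power=target)
n = n_approx
while power_nct(n, cv, theta) < target:         # step up until the nct criterion is met
    n += 2
print(n_approx, n)
```

In this example the normal approximation suggests 38 subjects in total while the non-central-t criterion pushes the requirement to 40, illustrating the abstract's warning that cruder approximations can underestimate the sample size.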
Comparing Accuracy in an Unpaired Post-market Device Study with Incomplete Disease Assessment
BIOMETRICAL JOURNAL, Issue 3 2009. Todd A. Alonzo
Abstract: The sensitivity and specificity of a new medical device are often compared with those of an existing device by calculating ratios of sensitivities and specificities. Although it would be ideal for all study subjects to receive the gold standard, so that true disease status was known for every subject, it is often not feasible or ethical to obtain disease status for everyone. This paper proposes two unpaired designs in which each subject is administered only one of the devices and the device results dictate which subjects receive disease verification. Estimators of the ratios of accuracy and corresponding confidence intervals are proposed for these designs, along with sample size formulae. Simulation studies are performed to investigate the small-sample bias of the estimators and the performance of the variance estimators and sample size formulae. The sample size formulae are applied to the design of a cervical cancer study comparing the accuracy of a new device with the conventional Pap smear.

Sample Size Determination for Establishing Equivalence/Noninferiority via Ratio of Two Proportions in Matched-Pair Design
BIOMETRICS, Issue 4 2002. Man-Lai Tang
Summary: In this article, we propose approximate sample size formulas for establishing equivalence or noninferiority of two treatments in a matched-pairs design. Using the ratio of two proportions as the equivalence measure, we derive sample size formulas based on a score statistic for two types of analyses: hypothesis testing and confidence interval estimation. Depending on the purpose of a study, these formulas can be used to provide a sample size estimate that guarantees a prespecified power of a hypothesis test at a certain significance level or controls the width of a confidence interval with a certain confidence level. Our empirical results confirm that these score methods are reliable in terms of true size, coverage probability, and skewness. A liver scan detection study is used to illustrate the proposed methods.
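For the matched-pair setting of the last abstract, the sketch below is only a rough planning aid: it uses a simpler Wald-type (delta-method) formula on the log-ratio scale rather than the score-statistic approach the paper develops, and the cell probabilities in the example are hypothetical. The score-based formulas in the paper would generally be preferred in practice.

```python
from math import ceil, log
from scipy.stats import norm

def n_pairs_ratio_noninf(p1, p2, p11, delta0, alpha=0.05, power=0.80):
    """Number of matched pairs for a noninferiority test of the ratio p1/p2.

    Wald-type (delta-method) sketch on the log-ratio scale, not the score
    statistic of the paper. p1 and p2 are the marginal response probabilities,
    p11 the probability that both members of a pair respond, and delta0 the
    noninferiority margin for the ratio (one-sided H0: p1/p2 <= delta0).
    """
    p10, p01 = p1 - p11, p2 - p11              # discordant-cell probabilities
    var1 = (p10 + p01) / (p1 * p2)             # n * Var(log(p1_hat / p2_hat))
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    effect = log(p1 / p2) - log(delta0)        # distance from the margin, log scale
    return ceil(z ** 2 * var1 / effect ** 2)

# Hypothetical planning values: both tests positive in 60% of pairs,
# marginal response probabilities 0.80 each, ratio margin 0.85
print(n_pairs_ratio_noninf(p1=0.80, p2=0.80, p11=0.60, delta0=0.85))
```

The key input is the discordant-cell mass p10 + p01: the more the paired responses agree, the smaller the variance of the log ratio and the fewer pairs this approximation requires.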