Size Requirements (size + requirement)

Kinds of Size Requirements

  • sample size requirement


  • Selected Abstracts


    New pathways for evaluating potential acute stroke therapies

    INTERNATIONAL JOURNAL OF STROKE, Issue 2 2006
    Marc Fisher
    Many neuroprotective drugs and a few other thrombolytics were evaluated in clinical trials, but none demonstrated unequivocal success and was approved by regulatory agencies. The development paradigm for such therapies needs to provide convincing evidence of efficacy and safety to obtain approval by the Food and Drug Administration (FDA). The FDA Modernization Act of 1997 stated that such evidence could be derived from one large phase III trial with a clinical endpoint and supportive evidence. Drugs being developed for acute ischemic stroke can potentially be approved under this act by coupling a major phase III trial with supportive evidence provided by a phase IIB trial demonstrating an effect on a relevant biomarker such as magnetic resonance imaging or computed tomography assessment of ischemic lesion growth. Statistical approaches have been developed to optimize the design of such an imaging-based phase IIB study, for example approaches that modify randomization probabilities to assign larger proportions of patients to the 'winning' strategy (i.e. 'pick the winner' strategies) with an interim assessment to reduce the sample size requirement. Demonstrating a treatment effect on a relevant imaging-based biomarker should provide supportive evidence for a new drug application, if a subsequent phase III trial with a clinical outcome demonstrates a significant treatment effect. [source]
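
    The response-adaptive idea sketched in this abstract can be made concrete with a short simulation. The following is a minimal, hypothetical sketch (not Fisher's actual design): the arm labels, lesion-growth means, stage sizes and the 2:1 reweighting rule are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical mean lesion growth per arm (smaller is better) -- illustrative only.
    TRUE_MEAN = {"control": 30.0, "dose_A": 24.0, "dose_B": 29.0}
    SD = 15.0

    def run_stage(arm_probs, n_total):
        """Randomize n_total patients with the given arm probabilities and
        return simulated lesion-growth outcomes for each arm."""
        arms = list(arm_probs)
        p = np.array([arm_probs[a] for a in arms])
        assigned = rng.choice(arms, size=n_total, p=p)
        return {a: rng.normal(TRUE_MEAN[a], SD, size=(assigned == a).sum())
                for a in arms}

    # Stage 1: equal randomization across control and two active arms.
    stage1 = run_stage({"control": 1/3, "dose_A": 1/3, "dose_B": 1/3}, n_total=90)

    # Interim 'pick the winner': keep the active arm with the larger observed
    # reduction in lesion growth relative to control.
    effect = {a: stage1["control"].mean() - stage1[a].mean()
              for a in ("dose_A", "dose_B")}
    winner = max(effect, key=effect.get)

    # Stage 2: assign a larger proportion of patients to the winning arm,
    # which is what allows the overall sample size requirement to shrink.
    stage2 = run_stage({"control": 1/3, winner: 2/3}, n_total=60)
    print("interim winner:", winner)
    ```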


    DIET OF HARBOR PORPOISES IN THE KATTEGAT AND SKAGERRAK SEAS: ACCOUNTING FOR INDIVIDUAL VARIATION AND SAMPLE SIZE

    MARINE MAMMAL SCIENCE, Issue 1 2003
    Patrik Börjesson
    Abstract Stomach contents of 112 bycaught harbor porpoises (Phocoena phocoena) collected between 1989 and 1996 in the Kattegat and Skagerrak seas were analyzed to describe diet composition and estimate prey size, to examine sample size requirements, and to compare juvenile and adult diets. Although porpoises preyed on a variety of species, only a few contributed substantially to the diet. Atlantic herring (Clupea harengus) was the dominating prey species for both juveniles and adults. Our results, in combination with those from previous studies, suggest that where herring is a dominant food source, porpoises prey primarily on size classes containing mature or maturing individuals. Further, we also show that Atlantic hagfish (Myxine glutinosa) may be an important food resource, at least for adult porpoises. Examination of sample size requirements showed that, depending on the taxonomic level used to describe the diet, a minimum of 35–71 stomachs are needed to be confident that all common prey species will be found. [source]
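
    The sample size question in this abstract (how many stomachs before all common prey taxa have been seen) can be approached by resampling. Below is a rough sketch of that idea, assuming the data are arranged as a presence/absence matrix; the matrix, the 10% "common" cut-off and the 95% confidence target are invented placeholders, not the authors' values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # presence[i, j] = True if prey taxon j occurred in stomach i.
    # Synthetic stand-in for the 112-stomach data set.
    n_stomachs, n_taxa = 112, 12
    presence = rng.random((n_stomachs, n_taxa)) < np.linspace(0.6, 0.05, n_taxa)

    # "Common" taxa: present in at least 10% of all stomachs (arbitrary cut-off).
    common = presence.mean(axis=0) >= 0.10

    def prob_all_common_found(n, n_boot=2000):
        """Bootstrap probability that a random sample of n stomachs contains
        every common prey taxon at least once."""
        hits = 0
        for _ in range(n_boot):
            idx = rng.choice(n_stomachs, size=n, replace=True)
            hits += presence[idx][:, common].any(axis=0).all()
        return hits / n_boot

    # Smallest sample size giving ~95% confidence of detecting all common taxa.
    for n in range(10, n_stomachs + 1, 5):
        if prob_all_common_found(n) >= 0.95:
            print("approximate minimum number of stomachs:", n)
            break
    ```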


    An optimal adaptive design to address local regulations in global clinical trials

    PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 3 2010
    Xiaolong Luo
    Abstract After multi-regional clinical trials (MRCTs) have demonstrated overall significant effects, evaluation for a region-specific effect is often important. Recent guidance (see, e.g. 1) from regulatory authorities regarding evaluation for possible country-specific effects has led to research on statistical designs that incorporate such evaluations in MRCTs. These statistical designs are intended to use the MRCTs to address the requirements for global registration of a medicinal product. Adding a regional requirement could change the probability of declaring a positive effect for the region both when there is indeed no treatment difference and when there is in fact a true difference within the region. In this paper, we first quantify those probability structures based on the guidance issued by the Ministry of Health, Labour and Welfare (MHLW) of Japan. An adaptive design is proposed to consider those probabilities and to optimize the efficiency for regional objectives. This two-stage approach incorporates comprehensive global objectives into an integrated study design and may mitigate the need for a separate local bridging study. A procedure is used to optimize region-specific enrollment based on an objective function. The overall sample size requirement is assessed. We use simulation analyses to illustrate the performance of the proposed study design. Copyright © 2010 John Wiley & Sons, Ltd. [source]
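
    The kind of probability the authors quantify can be mimicked with a small Monte Carlo simulation. The sketch below assumes one commonly cited reading of the MHLW consistency criterion (the regional estimate should retain at least half of the overall estimated effect); the effect sizes, regional fraction and sample sizes are placeholders rather than values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def prob_regional_consistency(n_total, region_fraction, delta_other,
                                  delta_region, sd=1.0, pi=0.5, n_sim=20000):
        """Monte Carlo probability that the estimated regional treatment effect
        retains at least a fraction pi of the overall estimated effect."""
        n_region = int(n_total * region_fraction)
        n_other = n_total - n_region
        # Standard error of a two-arm mean difference with n/2 patients per arm.
        se_region = sd * np.sqrt(4.0 / n_region)
        se_other = sd * np.sqrt(4.0 / n_other)
        hits = 0
        for _ in range(n_sim):
            d_region = rng.normal(delta_region, se_region)
            d_other = rng.normal(delta_other, se_other)
            d_overall = (n_region * d_region + n_other * d_other) / n_total
            hits += (d_region >= pi * d_overall)
        return hits / n_sim

    # Chance of "passing" the regional criterion when the region truly shares
    # the overall effect (0.3 SD) versus when it has no effect at all.
    print(prob_regional_consistency(1000, 0.15, 0.3, delta_region=0.3))
    print(prob_regional_consistency(1000, 0.15, 0.3, delta_region=0.0))
    ```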


    Comparison of the A–Cc curve fitting methods in determining maximum ribulose 1,5-bisphosphate carboxylase/oxygenase carboxylation rate, potential light saturated electron transport rate and leaf dark respiration

    PLANT CELL & ENVIRONMENT, Issue 2 2009
    ZEWEI MIAO
    ABSTRACT A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2–chloroplastic CO2 concentration (A–Cc) curves, resulting in considerable differences in estimating the A–Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimations of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to fit Vcmax, Jmax, TPU, Rd and gm directly and simultaneously with the whole A–Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with Vcmax, Rd and/or gm held as constants. [source]
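
    For readers who want to see what a simultaneous, whole-curve fit looks like in practice, here is a compact sketch using the standard FvCB limitation equations and scipy. The kinetic constants are typical 25 °C literature-style values, the starting values, bounds and synthetic data are arbitrary, and mesophyll conductance is left out because Cc is treated as observed; none of this reproduces the authors' actual procedure.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Typical Rubisco kinetic constants at 25 C (placeholder values;
    # Kc and Gamma* in umol mol-1, Ko and O2 in mmol mol-1).
    KC, KO, GAMMA_STAR, O2 = 404.9, 278.4, 42.75, 210.0

    def fvcb(params, cc):
        """Net assimilation as the minimum of the Rubisco-, RuBP- and
        TPU-limited rates, minus day respiration Rd."""
        vcmax, jmax, tpu, rd = params
        ac = vcmax * (cc - GAMMA_STAR) / (cc + KC * (1 + O2 / KO))
        aj = jmax * (cc - GAMMA_STAR) / (4 * cc + 8 * GAMMA_STAR)
        ap = np.full_like(cc, 3 * tpu)
        return np.minimum.reduce([ac, aj, ap]) - rd

    def fit_acc_curve(cc, a_obs):
        """Fit Vcmax, Jmax, TPU and Rd simultaneously to a whole A-Cc curve."""
        res = least_squares(lambda p: fvcb(p, cc) - a_obs,
                            x0=[60.0, 120.0, 10.0, 1.5],          # rough guesses
                            bounds=([1, 1, 0.1, 0], [400, 600, 50, 10]))
        return dict(zip(["Vcmax", "Jmax", "TPU", "Rd"], res.x))

    # Illustrative use with synthetic data.
    cc = np.array([50, 100, 150, 200, 300, 450, 600, 800, 1000], dtype=float)
    a_obs = fvcb([70.0, 130.0, 9.0, 1.2], cc)
    a_obs = a_obs + np.random.default_rng(3).normal(0.0, 0.3, cc.size)
    print(fit_acc_curve(cc, a_obs))
    ```

    Wrapping fit_acc_curve in a grid search over starting values to avoid local minima is essentially the hybrid strategy the abstract recommends.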


    A review of potential methods of determining critical effect size for designing environmental monitoring programs

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 7 2009
    Kelly R. Munkittrick
    Abstract The effective design of field studies requires that sample size requirements be estimated for important endpoints before conducting assessments. This a priori calculation of sample size requires initial estimates of the variability of the endpoints of interest, decisions regarding significance levels and the power desired, and identification of an effect size to be detected. Although many programs have called for use of critical effect sizes (CES) in the design of monitoring programs, few attempts have been made to define them. This paper reviews approaches that have been or could be used to set specific CES. The ideal method for setting a CES would be to define the level of protection that prevents ecologically relevant impacts and to set a warning level of change more sensitive than that CES to provide a margin of safety; however, there are few examples of this approach being applied. Program-specific CES could be developed through the use of numbers based on regulatory or detection limits, a number defined through stakeholder negotiation, estimates of the ranges of reference data, or calculation from the distribution of data using frequency plots or multivariate techniques. The CES that have been defined are often consistent with a CES of approximately 25%, or two standard deviations, for many biological or ecological monitoring endpoints, and this value appears to be reasonable for use in a wide variety of monitoring programs and with a wide variety of endpoints. [source]
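
    The a priori calculation the abstract refers to is a standard power computation. As a hedged illustration only (the 30% coefficient of variation, the alpha level and the power are assumed values, not taken from the review), a CES framed as a relative change maps to a per-group sample size like this:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_group(ces, cv, alpha=0.05, power=0.9):
        """Approximate per-group sample size to detect a relative change `ces`
        (e.g. 0.25 = 25%) in an endpoint whose coefficient of variation is `cv`,
        using the usual two-sample normal approximation."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return ceil(2 * (z * cv / ces) ** 2)

    # A 25% CES with an endpoint CV of 30% (illustrative numbers only):
    print(n_per_group(ces=0.25, cv=0.30))   # roughly 31 per group
    ```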


    Case-only genome-wide interaction study of disease risk, prognosis and treatment

    GENETIC EPIDEMIOLOGY, Issue 1 2010
    Brandon L. Pierce
    Abstract Case-control genome-wide association (GWA) studies have facilitated the identification of susceptibility loci for many complex diseases; however, these studies are often not adequately powered to detect gene-environment (G×E) and gene-gene (G×G) interactions. Case-only studies are more efficient than case-control studies for detecting interactions and require no data on control subjects. In this article, we discuss the concept and utility of the case-only genome-wide interaction (COGWI) study, in which common genetic variants, measured genome-wide, are screened for association with environmental exposures or genetic variants of interest. An observed G-E (or G-G) association, as measured by the case-only odds ratio (OR), suggests interaction, but only if the interacting factors are unassociated in the population from which the cases were drawn. The case-only OR is equivalent to the interaction risk ratio. In addition to risk-related interactions, we discuss how the COGWI design can be used to efficiently detect G×G, G×E and pharmacogenetic interactions related to disease outcomes in the context of observational clinical studies or randomized clinical trials. Such studies can be conducted using only data on individuals experiencing an outcome of interest or individuals not experiencing the outcome of interest. Sharing data among GWA and COGWI studies of disease risk and outcome can further enhance efficiency. Sample size requirements for COGWI studies, as compared to case-control GWA studies, are provided. In the current era of genome-wide analyses, the COGWI design is an efficient and straightforward method for detecting G×G, G×E and pharmacogenetic interactions related to disease risk, prognosis and treatment response. Genet. Epidemiol. 34:7–15, 2010. © 2009 Wiley-Liss, Inc. [source]
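
    The case-only odds ratio itself is simple to compute once cases are cross-classified by genotype and exposure. The sketch below uses invented counts and a Wald-type confidence interval; as the abstract stresses, it is only interpretable as an interaction estimate under gene-environment independence in the source population.

    ```python
    import numpy as np
    from scipy.stats import norm

    def case_only_or(a, b, c, d, alpha=0.05):
        """Case-only odds ratio from a 2x2 table of cases only:
            a = carriers exposed      b = carriers unexposed
            c = non-carriers exposed  d = non-carriers unexposed
        With G-E independence in the source population, this estimates the
        multiplicative interaction risk ratio."""
        or_ = (a * d) / (b * c)
        se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        z = norm.ppf(1 - alpha / 2)
        lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
        return or_, (lo, hi)

    # Invented counts for illustration only.
    print(case_only_or(120, 380, 150, 850))
    ```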


    Standardizing Emergency Department–based Migraine Research: An Analysis of Commonly Used Clinical Trial Outcome Measures

    ACADEMIC EMERGENCY MEDICINE, Issue 1 2010
    Benjamin W. Friedman MD
    Abstract Objectives: Although many high-quality migraine clinical trials have been performed in the emergency department (ED) setting, almost as many different primary outcome measures have been used, making data aggregation and meta-analysis difficult. The authors assessed commonly used migraine trial outcomes in two ways. First, the authors examined the association of each commonly used outcome versus the following patient-centered variable: the research subject's wish, when asked 24 hours after investigational medication administration, to receive the same medication the next time they presented to an ED with migraine ("would take again"). This variable was chosen as the criterion standard because it provides a simple, dichotomous, clinically sensible outcome, which allows migraineurs to factor important intangibles of efficacy and adverse effects of treatment into an overall assessment of care. The second part of the analysis assessed how sensitive to true efficacy each outcome measure was by calculating sample size requirements based on results observed in previously conducted clinical trials. Methods: This was a secondary analysis of data previously collected in four ED-based migraine randomized trials performed between 2003 and 2007. In each of these trials, subjects were asked 24 hours after administration of an investigational medication whether or not they would want to receive the same medication the next time they came to the ED with a migraine. Odds ratios (ORs) with 95% confidence intervals (CIs), adjusted for sex and medication received, were calculated as measures of association between the most commonly used outcome measures and "would take again." The sensitivity of each outcome measure to treatment efficacy was determined by calculating the sample size that would be required to detect a statistically significant result using estimates of that outcome obtained in two clinical trials. Results: Data from 378 subjects were used for this analysis. Adjusted ORs for association of "would take again" and other commonly used primary headache outcomes are as follows: achieving a pain-free state by 2 hours, OR = 3.1 (95% CI = 1.8 to 5.4); sustained pain-free status, OR = 4.5 (95% CI = 1.9 to 11.0); and no need for rescue medication, OR = 3.7 (95% CI = 2.1 to 6.6). An improvement on a standardized 11-point pain scale of ≥33% had an adjusted OR = 5.2 (95% CI = 2.2 to 12.4). The best-performing alternate outcome, ≥33% improvement, correctly classified 288 subjects and misclassified 77 subjects when compared to "would take again." At least 33% improvement and pain-free by 2 hours required the smallest sample sizes, while sustained pain-free and "would take again" required many more subjects. Conclusions: "Would take again" was associated with all migraine outcome measures we examined. No individual outcome was more closely associated with "would take again" than any other. Even the best-performing alternate outcome misclassified more than 20% of subjects. However, sample sizes based on "would take again" tended to be larger than those for other outcome measures. On the basis of these findings and this outcome measure's inherent patient-centered focus, the authors propose that "would take again" be included as a secondary outcome in all ED migraine trials. ACADEMIC EMERGENCY MEDICINE 2010; 17:72–79 © 2010 by the Society for Academic Emergency Medicine [source]
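
    The "sensitivity to efficacy" comparison in this abstract boils down to how a dichotomous outcome's event rates translate into required sample size. A generic sketch of that calculation is shown below; the response rates are placeholders, not the rates observed in the four trials.

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.8):
        """Per-arm sample size to detect p1 versus p2 for a dichotomous outcome
        (two-sided test of two proportions, normal approximation)."""
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        p_bar = (p1 + p2) / 2
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(num / (p1 - p2) ** 2)

    # Placeholder rates: an outcome with a large treatment difference needs far
    # fewer subjects than one with a small difference.
    print(n_per_arm(0.60, 0.35))   # more sensitive outcome -> smaller trial
    print(n_per_arm(0.60, 0.50))   # less sensitive outcome -> larger trial
    ```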


    Accelerator mass spectrometry offers new opportunities for microdosing of peptide and protein pharmaceuticals

    RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 10 2010
    Mehran Salehpour
    Accelerator Mass Spectrometry (AMS) is an ultra-sensitive analytical method which has been instrumental in developing microdosing as a strategic tool in early drug development. Considerable data are available for AMS microdosing using typical pharmaceutical drugs with a molecular weight of a few hundred Daltons. The so-called biopharmaceuticals such as proteins offer interesting possibilities as drug candidates; however, experimental data for protein microdosing and AMS are scarce. The analysis of proteins in conjunction with early drug development and microdosing is overviewed, and three case studies are presented on the topic. In the first case study, AMS experimental data are presented for the measured concentration of orally administered recombinant insulin in the bloodstream of laboratory rabbits. Case study 2 concerns minimum sample size requirements. AMS samples normally require about 1 mg of carbon (10 µL of blood), which makes AMS analysis unsuitable in some applications due to the limited availability of samples such as human biopsies or DNA from specific cells. Experimental results are presented where the sample size requirements have been reduced by about two orders of magnitude. The third case study concerns low-concentration studies. It is generally accepted that protein pharmaceuticals may be potentially more hazardous than smaller molecules because of immunological reactions. Therefore, future first-in-man microdosing studies might require even lower exposure concentrations than is feasible today, in order to increase the safety margin. This issue is discussed based on the currently available analytical capabilities. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Automated QT Analysis That Learns from Cardiologist Annotations

    ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 2009
    Iain Guy David Strachan Ph.D.
    Background: Reliable, automated QT analysis would allow the use of all the ECG data recorded during continuous Holter monitoring, rather than just intermittent 10-second ECGs. Methods: BioQT is an automated ECG analysis system based on a Hidden Markov Model, which is trained to segment ECG signals using a database of thousands of annotated waveforms. Each sample of the ECG signal is encoded by its wavelet transform coefficients. BioQT also produces a confidence measure which can be used to identify unreliable segmentations. The automatic generation of templates based on shape descriptors allows an entire 24 hours of QT data to be rapidly reviewed by a human expert, after which the template annotations can automatically be applied to all beats in the recording. Results: The BioQT software has been used to show that drug-related perturbation of the T wave is greater in subjects receiving sotalol than in those receiving moxifloxacin. Chronological dissociation of T-wave morphology changes from the QT prolonging effect of the drug was observed with sotalol. In a definitive QT study, the percentage increase of standard deviation of QTc for the standard manual method with respect to that obtained with BioQT analysis was shown to be 44% and 30% for the placebo and moxifloxacin treatments, respectively. Conclusions: BioQT provides fully automated analysis, with confidence values for self-checking, on very large data sets such as Holter recordings. Automatic templating and expert reannotation of a small number of templates lead to a reduction in the sample size requirements for definitive QT studies. [source]