Rigorous Standards (rigorous + standards)

Selected Abstracts


The effect of fixed-count subsampling on macroinvertebrate biomonitoring in small streams

FRESHWATER BIOLOGY, Issue 2 2000
Craig P. Doberstein
Summary 1. When rigorous standards of collecting and analysing data are maintained, biological monitoring adds valuable information to water resource assessments. Decisions, from study design and field methods to laboratory procedures and data analysis, affect assessment quality. Subsampling - a laboratory procedure in which researchers count and identify a random subset of field samples - is widespread yet controversial. What are the consequences of subsampling? 2. To explore this question, random subsamples were computer generated for subsample sizes ranging from 100 to 1000 individuals and compared with the results of counting whole samples. The study was done on benthic invertebrate samples collected from five Puget Sound lowland streams near Seattle, WA, USA. For each replicate subsample, values for 10 biological attributes (e.g. total number of taxa) and for the 10-metric benthic index of biological integrity (B-IBI) were computed. 3. Variance of each metric and of B-IBI for each subsample size was compared with the variance associated with fully counted samples, generated using the bootstrap algorithm. From the measures of variance, we computed the maximum number of distinguishable classes of stream condition as a function of sample size for each metric and for B-IBI. 4. Subsampling significantly decreased the maximum number of distinguishable stream classes for B-IBI, from 8.2 for fully counted samples to 2.8 classes for 100-organism subsamples. For subsamples containing 100–300 individuals, discriminatory power was low enough to mislead water resource decision makers. [source]
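To make the subsampling question concrete, the sketch below contrasts the variance of a simple metric (taxon richness) computed from fixed-count subsamples against a bootstrap estimate of variance for the fully counted sample. It is a minimal illustration under assumed conditions, not the study's actual procedure: the community data, the richness metric and the replicate counts are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def richness(sample):
    """Illustrative metric: number of distinct taxa present in a sample."""
    return len(set(sample))

def subsample_variance(full_sample, n_individuals, n_reps=1000):
    """Variance of the metric across random fixed-count subsamples."""
    vals = [richness(rng.choice(full_sample, size=n_individuals, replace=False))
            for _ in range(n_reps)]
    return np.var(vals, ddof=1)

def bootstrap_variance(full_sample, n_reps=1000):
    """Variance of the metric for 'fully counted' samples via bootstrap resampling."""
    n = len(full_sample)
    vals = [richness(rng.choice(full_sample, size=n, replace=True))
            for _ in range(n_reps)]
    return np.var(vals, ddof=1)

# Hypothetical whole sample: 1500 individuals drawn from 40 taxa with uneven abundances.
taxa = np.arange(40)
abundances = rng.dirichlet(np.ones(40) * 0.3)
full = rng.choice(taxa, size=1500, p=abundances)

for k in (100, 300, 1000):
    print(k, subsample_variance(full, k), bootstrap_variance(full))
```

Smaller fixed counts typically inflate metric variance, which is the mechanism behind the loss of distinguishable stream classes reported in the abstract.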


Functional connexin "hemichannels": A critical appraisal

GLIA, Issue 7 2006
David C. Spray
Abstract "Hemichannels" are defined as the halves of gap junction channels (also termed connexons) that are contributed by one cell; "hemichannels" are considered to be functional if they are open in nonjunctional membranes in the absence of pairing with partners from adjacent cells. Several recent reviews have summarized the blossoming literature regarding functional "hemichannels", in some cases encyclopedically. However, most of these previous reviews have been written with the assumption that all data reporting "hemichannel" involvement really have studied phenomena in which connexons actually form the permeability or conductance pathway. In this review, we have taken a slightly different approach. We review the concept of "hemichannels", summarize properties that might be expected of half gap junctions and evaluate the extent to which the properties of presumptive "hemichannels" match expectations. Then we consider functions attributed to hemichannels, provide an overview of other channel types that might fulfill similar roles and provide sets of criteria that might be applied to verify involvement of connexin hemichannels in cell and tissue function. One firm conclusion is reached. The study of hemichannels is technically challenging and fraught with opportunities for misinterpretation, so that future studies must apply rigorous standards for detection of hemichannel expression and function. At the same time there are reasons to expect surprises, including the possibility that some time honored techniques for studying gap junctions may prove unsuitable for detecting hemichannels. We advise hemichannel researchers to proceed with caution and an open mind. © 2006 Wiley-Liss, Inc. [source]


Cutting through the statistical fog: understanding and evaluating non-inferiority trials

INTERNATIONAL JOURNAL OF CLINICAL PRACTICE, Issue 10 2010
W. S. Weintraub
Summary Every year, results from many important randomised, controlled trials are published. Knowing the elements of trial design and having the skills to critically read and incorporate results are important to medical practitioners. The goal of this article is to help physicians determine the validity of trial conclusions to improve patient care through more informed medical decision making. This article includes a review of 162 randomised, controlled non-inferiority (n = 116) and equivalence (n = 46) hypothesis studies as well as the larger Stroke Prevention using Oral Thrombin Inhibitor in atrial Fibrillation V study and the Ongoing Telmisartan Alone and in Combination with Ramipril Global Endpoint Trial. Evaluation of data from small and large trials uncovers significant flaws in design and models employed and uncertainty about calculations of statistical measures. As one example of questionable study design, discussion includes a large (n = 3922), double-blind, randomised, multicentre trial comparing the efficacy of ximelagatran with warfarin for prevention of stroke and systemic embolism in patients with non-valvular atrial fibrillation and additional stroke risk factors. Investigators concluded that ximelagatran was effective compared with well-controlled warfarin for prevention of thromboembolism. However, deficiencies in design, as well as concerns about liver toxicity, resulted in the rejection of the drug by the US Food and Drug Administration. Many trials fail to follow good design principles, resulting in conclusions of questionable validity. Well-designed non-inferiority trials can provide valuable data and demonstrate efficacy for beneficial new therapies. Objectives and primary end-points must be clearly stated and rigorous standards met for sample size, establishing the margin, patient characteristics and adherence to protocol. [source]
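For orientation on why the margin and sample size matter so much in non-inferiority designs, one common normal-approximation calculation for comparing two proportions is sketched below. The event rates and margin in the example are illustrative placeholders, not figures from the trials reviewed in the article.

```python
from math import ceil
from scipy.stats import norm

def ni_sample_size(p_control, p_test, margin, alpha=0.025, power=0.9):
    """Per-arm sample size for a non-inferiority comparison of two proportions.

    One common formulation: H0 is that the test arm is worse than control by at
    least `margin` (lower event rates are better), tested one-sided at level alpha,
    assuming true event rates p_control and p_test under the alternative.
    """
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    variance = p_test * (1 - p_test) + p_control * (1 - p_control)
    effect = margin - (p_test - p_control)  # distance from the non-inferiority boundary
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Illustrative numbers only: 4% control event rate, equal true rates, 2% absolute margin.
print(ni_sample_size(p_control=0.04, p_test=0.04, margin=0.02))
```

The structure of the formula also shows why the choice of margin drives feasibility: because the required sample size scales with the inverse square of the distance from the boundary, halving the margin roughly quadruples the number of patients per arm.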


Impact of baseline ECG collection on the planning, analysis and interpretation of 'thorough' QT trials

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 2 2009
Venkat Sethuraman
Abstract The current guidelines, ICH E14, for the evaluation of non-antiarrhythmic compounds require a 'thorough' QT study (TQT) conducted during clinical development (ICH Guidance for Industry E14, 2005). Owing to the regulatory choice of margin (10 ms), TQT studies must be conducted to rigorous standards to ensure that variability is minimized. Some of the key sources of variation can be controlled by use of randomization, crossover design, standardization of electrocardiogram (ECG) recording conditions and collection of replicate ECGs at each time point. However, one of the key factors in these studies is the baseline measurement, which, if not controlled and consistent across studies, could lead to significant misinterpretation. In this article, we examine three types of baseline methods widely used in TQT studies to derive a change from baseline in QTc (time-matched, time-averaged and pre-dose-averaged baseline). We discuss the impact of the baseline values on the guidance-recommended 'largest time-matched' analyses. Using simulation, we show the impact of these baseline approaches on the type I error and power for both crossover and parallel group designs. We also show that the power of the study decreases as the number of time points tested in a TQT study increases. A time-matched baseline method is recommended by several authors (Drug Saf. 2005; 28(2):115–125; Health Canada guidance document: guide for the analysis and review of QT/QTc interval data, 2006) due to the existence of the circadian rhythm in QT. However, the impact of the time-matched baseline method on statistical inference and sample size should be considered carefully during the design of a TQT study. The time-averaged baseline had the highest power in comparison with the other baseline approaches. Copyright © 2008 John Wiley & Sons, Ltd. [source]
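As a concrete reference point, the pandas sketch below derives change-from-baseline QTc for one subject under the three baseline conventions discussed in the abstract. The column names (day, time_point, qtc), the example schedule and the pre-dose convention (time_point <= 0) are assumptions made for illustration, not part of the ICH E14 guidance or the article's data.

```python
import pandas as pd

def change_from_baseline(df, method="time_matched"):
    """Change from baseline in QTc for one subject and treatment period.

    Expects columns: day ('baseline' or 'dosing'), time_point (hours), qtc (ms).
    Baseline conventions sketched here:
      - time_matched:    baseline-day QTc at the same nominal time point
      - time_averaged:   mean of all baseline-day QTc values
      - predose_averaged: mean of dosing-day QTc values taken pre-dose (time_point <= 0)
    """
    dosing = df[df.day == "dosing"].set_index("time_point").qtc
    baseline_day = df[df.day == "baseline"]
    if method == "time_matched":
        base = baseline_day.set_index("time_point").qtc.reindex(dosing.index)
    elif method == "time_averaged":
        base = baseline_day.qtc.mean()
    elif method == "predose_averaged":
        base = dosing[dosing.index <= 0].mean()
    else:
        raise ValueError(f"unknown baseline method: {method}")
    return dosing - base

# Hypothetical ECG schedule: pre-dose (0 h) plus 1, 2, 4 and 8 h post-dose on both days.
data = pd.DataFrame({
    "day": ["baseline"] * 5 + ["dosing"] * 5,
    "time_point": [0, 1, 2, 4, 8] * 2,
    "qtc": [402, 405, 399, 401, 400, 404, 412, 410, 406, 403],
})
print(change_from_baseline(data, method="time_matched"))
```

Which baseline is subtracted changes both the point estimate and its variance at each time point, which is why the article's comparison of type I error and power across the three methods matters at the design stage.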