Royal Statistical Society



Selected Abstracts


Statistical issues in first-in-man studies

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 3 2007
Professor Stephen Senn
Preface. In March 2006 a first-in-man trial took place using healthy volunteers involving the use of monoclonal antibodies. Within hours the subjects had suffered such adverse effects that they were admitted to intensive care at Northwick Park Hospital. In April 2006 the Secretary of State for Health announced the appointment of Professor (now Sir) Gordon Duff, who chairs the UK's Commission on Human Medicines, to chair a scientific expert group on phase 1 clinical trials. The group reported on December 7th, 2006 (Expert Scientific Group on Clinical Trials, 2006a). Clinical trials have a well-established regulatory basis both in the UK and worldwide. Trials have to be approved by the regulatory authority and are subject to a detailed protocol concerning, among other things, the study design and statistical analyses that will form the basis of the evaluation. In fact, a cornerstone of the regulatory framework is the statistical theory and methods that underpin clinical trials. As a result, the Royal Statistical Society established an expert group of its own to look in detail at the statistical issues that might be relevant to first-in-man studies. The group mainly comprised senior Fellows of the Society who had expert knowledge of the theory and application of statistics in clinical trials. However, the group also included an expert immunologist and clinicians to ensure that the interface between statistics and clinical disciplines was not overlooked. In addition, expert representation was sought from Statisticians in the Pharmaceutical Industry (PSI), an organization with which the Royal Statistical Society has very close links. The output from the Society's expert group is contained in this report. It makes a number of recommendations directed towards the statistical aspects of clinical trials. As such it complements the report by Professor Duff's group and will, I trust, contribute to a safer framework for first-in-man trials in the future.
Tim Holt (President, Royal Statistical Society)


Transactions of the Statistical Society of London (1837)

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2002
Sidney Rosenbaum
Summary. The Transactions of the Statistical Society of London (1837) appeared before the Journal of the Royal Statistical Society began publication and represents the substantial statistical work that had been undertaken in the early years of the Society's existence. The contents of this publication are summarized here against the historical background of the time.


Statistical review by research ethics committees

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2000
P. Williamson
This paper discusses some of the issues surrounding statistical review by research ethics committees (RECs). A survey of local RECs in 1997 revealed that only 27/184 (15%) included a statistician member at that time, although 70/175 (40%) recognized the need for one. The role of the statistician member is considered, and the paper includes a summary of a meeting of the Royal Statistical Society to discuss statistical issues that frequently arise in the review of REC applications. A list of minimum qualifications which RECs should expect from anyone claiming to be a statistician would be useful, together with a list of statisticians who are well qualified and willing to serve on RECs, and a list of training courses for REC members covering the appropriate statistical issues.


Discovering the false discovery rate

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2010
Yoav Benjamini
Summary. I describe the background for the paper 'Controlling the false discovery rate: a practical and powerful approach to multiple testing' by Benjamini and Hochberg, which was published in the Journal of the Royal Statistical Society, Series B, in 1995. I review the progress made since on the false discovery rate, as well as the major conceptual developments that followed.


Screening for Partial Conjunction Hypotheses

BIOMETRICS, Issue 4 2008
Yoav Benjamini
Summary. We consider the problem of testing a partial conjunction of hypotheses, which states that at least u out of n tested hypotheses are false. It offers an in-between approach to the testing of the conjunction of null hypotheses against the alternative that at least one is false, and the testing of the disjunction of null hypotheses against the alternative that all hypotheses are false. We suggest powerful test statistics for testing such a partial conjunction hypothesis that are valid under dependence between the test statistics as well as under independence. We then address the problem of testing many partial conjunction hypotheses simultaneously using the false discovery rate (FDR) approach. We prove that if the FDR controlling procedure in Benjamini and Hochberg (1995, Journal of the Royal Statistical Society, Series B 57, 289–300) is used for this purpose, the FDR is controlled under various dependency structures. Moreover, we can screen at all levels simultaneously in order to display the findings on a superimposed map and still control an appropriate FDR measure. We apply the method to examples from microarray analysis and functional magnetic resonance imaging (fMRI), two application areas where the need for partial conjunction analysis has been identified.
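The Benjamini and Hochberg (1995) FDR controlling procedure applied here is the familiar step-up rule on the ordered p-values. A minimal sketch of that base procedure (not the paper's partial-conjunction extension):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean rejection mask controlling the FDR at level q
    (valid under independence and positive dependence)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # find the largest k with p_(k) <= (k/m) * q
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True  # reject all hypotheses up to rank k
    return reject
```

For example, with p-values (0.01, 0.02, 0.03, 0.5) and q = 0.05, the first three hypotheses are rejected: even though 0.03 exceeds 0.05 * 3/4 only narrowly, the step-up rule rejects everything at or below the largest qualifying rank.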


Stepwise Confidence Intervals for Monotone Dose–Response Studies

BIOMETRICS, Issue 3 2008
Jianan Peng
Summary. In dose–response studies, one of the most important issues is the identification of the minimum effective dose (MED), where the MED is defined as the lowest dose such that the mean response is better than the mean response of a zero-dose control by a clinically significant difference. Dose–response curves are sometimes monotonic in nature. To find the MED, various authors have proposed step-down test procedures based on contrasts among the sample means. In this article, we improve upon the method of Marcus and Peritz (1976, Journal of the Royal Statistical Society, Series B 38, 157–165) and implement the dose–response method of Hsu and Berger (1999, Journal of the American Statistical Association 94, 468–482) to construct the lower confidence bound for the difference between the mean response of any nonzero-dose level and that of the control under the monotonicity assumption to identify the MED. The proposed method is illustrated by numerical examples, and simulation studies on power comparisons are presented.
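As a rough illustration of the step-down idea (a hypothetical sketch, not the authors' exact procedure), assume known variance and a normal approximation: starting from the highest dose, compute a one-sided lower confidence bound for the mean difference versus control, and stop at the first dose whose bound fails to exceed the clinical margin delta; monotonicity then rules out all lower doses.

```python
import math
from statistics import NormalDist

def stepdown_med(means, ns, sigma, control_mean, n0, delta, alpha=0.05):
    """Hypothetical step-down search for the minimum effective dose (MED)
    under monotonicity (known sigma, one-sided level alpha).

    means, ns: per-dose sample means and sizes, lowest dose first.
    Returns the index of the MED, or None if no dose qualifies."""
    z = NormalDist().inv_cdf(1 - alpha)
    med = None
    # step down from the highest dose; stop at the first ineffective one
    for i in range(len(means) - 1, -1, -1):
        se = sigma * math.sqrt(1 / ns[i] + 1 / n0)
        lower = (means[i] - control_mean) - z * se
        if lower > delta:
            med = i      # dose i still shows a clinically relevant effect
        else:
            break        # monotonicity: lower doses cannot be effective
    return med
```

With dose means (0.1, 1.0, 2.0), 100 subjects per arm, sigma = 1, and margin delta = 0.2, the search stops at the lowest dose, so the MED is the middle dose (index 1).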


Generalized Hierarchical Multivariate CAR Models for Areal Data

BIOMETRICS, Issue 4 2005
Xiaoping Jin
Summary. In the fields of medicine and public health, a common application of areal data models is the study of geographical patterns of disease. When we have several measurements recorded at each spatial location (for example, information on p ≥ 2 diseases from the same population groups or regions), we need to consider multivariate areal data models in order to handle the dependence among the multivariate components as well as the spatial dependence between sites. In this article, we propose a flexible new class of generalized multivariate conditionally autoregressive (GMCAR) models for areal data, and show how it enriches the MCAR class. Our approach differs from earlier ones in that it directly specifies the joint distribution for a multivariate Markov random field (MRF) through the specification of simpler conditional and marginal models. This in turn leads to a significant reduction in the computational burden in hierarchical spatial random effect modeling, where posterior summaries are computed using Markov chain Monte Carlo (MCMC). We compare our approach with existing MCAR models in the literature via simulation, using average mean square error (AMSE) and a convenient hierarchical model selection criterion, the deviance information criterion (DIC; Spiegelhalter et al., 2002, Journal of the Royal Statistical Society, Series B 64, 583–639). Finally, we offer a real-data application of our proposed GMCAR approach that models lung and esophagus cancer death rates during 1991–1998 in Minnesota counties.


Models for Estimating Bayes Factors with Applications to Phylogeny and Tests of Monophyly

BIOMETRICS, Issue 3 2005
Marc A. Suchard
Summary. Bayes factors comparing two or more competing hypotheses are often estimated by constructing a Markov chain Monte Carlo (MCMC) sampler to explore the joint space of the hypotheses. To obtain efficient Bayes factor estimates, Carlin and Chib (1995, Journal of the Royal Statistical Society, Series B 57, 473–484) suggest adjusting the prior odds of the competing hypotheses so that the posterior odds are approximately one, then estimating the Bayes factor by simple division. A byproduct is that one often produces several independent MCMC chains, only one of which is actually used for estimation. We extend this approach to incorporate output from multiple chains by proposing three statistical models. The first assumes independent sampler draws and models the hypothesis indicator function using logistic regression for various choices of the prior odds. The two more complex models relax the independence assumption by allowing for higher-lag dependence within the MCMC output. These models allow us to estimate the uncertainty in our Bayes factor calculation and to fully use several different MCMC chains even when the prior odds of the hypotheses vary from chain to chain. We apply these methods to calculate Bayes factors for tests of monophyly in two phylogenetic examples. The first example explores the relationship of an unknown pathogen to a set of known pathogens. Identification of the unknown's monophyletic relationship may affect antibiotic choice in a clinical setting. The second example focuses on HIV recombination detection. For potential clinical application, these types of analyses must be completed as efficiently as possible.
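The identity behind the Carlin and Chib "simple division" step (the Bayes factor equals the posterior odds divided by the prior odds) can be sketched directly from a chain of model indicators; this is a minimal illustration, not the paper's logistic regression or higher-lag models:

```python
import numpy as np

def bayes_factor_from_indicators(indicators, prior_odds):
    """Estimate BF_10 from an MCMC chain of model indicators
    (1 = hypothesis H1 visited, 0 = H0 visited), using
    BF = posterior odds / prior odds."""
    p1 = np.mean(indicators)        # posterior probability of H1
    posterior_odds = p1 / (1 - p1)
    return posterior_odds / prior_odds
```

If the prior odds were tuned so that the sampler visits both hypotheses about equally often (posterior odds near one), the estimate is simply the reciprocal of the prior odds, which is why that tuning makes the division numerically stable.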


Sensitivity Analysis for Nonrandom Dropout: A Local Influence Approach

BIOMETRICS, Issue 1 2001
Geert Verbeke
Summary. Diggle and Kenward (1994, Applied Statistics 43, 49–93) proposed a selection model for continuous longitudinal data subject to nonrandom dropout. It has provoked a large debate about the role for such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions on which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. This paper presents a formal and flexible approach to such a sensitivity assessment based on local influence (Cook, 1986, Journal of the Royal Statistical Society, Series B 48, 133–169). The influence of perturbing a missing-at-random dropout model in the direction of nonrandom dropout is explored. The method is applied to data from a randomized experiment on the inhibition of testosterone production in rats.


Survival Analysis in Clinical Trials: Past Developments and Future Directions

BIOMETRICS, Issue 4 2000
Thomas R. Fleming
Summary. The field of survival analysis emerged in the 20th century and experienced tremendous growth during the latter half of the century. The developments in this field that have had the most profound impact on clinical trials are the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457–481) method for estimating the survival function, the log-rank statistic (Mantel, 1966, Cancer Chemotherapy Reports 50, 163–170) for comparing two survival distributions, and the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187–220) proportional hazards model for quantifying the effects of covariates on the survival time. The counting-process martingale theory pioneered by Aalen (1975, Statistical inference for a family of counting processes, Ph.D. dissertation, University of California, Berkeley) provides a unified framework for studying the small- and large-sample properties of survival analysis statistics. Significant progress has been achieved and further developments are expected in many other areas, including the accelerated failure time model, multivariate failure time data, interval-censored data, dependent censoring, dynamic treatment regimes and causal inference, joint modeling of failure time and longitudinal data, and Bayesian methods.
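The Kaplan-Meier product-limit estimator mentioned above admits a compact sketch: at each distinct event time, multiply the running survival estimate by one minus the fraction of at-risk subjects who fail there, with censored subjects leaving the risk set without contributing an event.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function.

    times: observed times; events: 1 = event observed, 0 = censored.
    Returns (distinct event times, S(t) just after each)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_event = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in t_event:
        at_risk = np.sum(times >= t)                  # n_i: still under observation
        d = np.sum((times == t) & (events == 1))      # d_i: events at time t
        s *= 1.0 - d / at_risk                        # product-limit update
        surv.append(s)
    return t_event, np.array(surv)
```

For times (1, 2, 3, 4) with the third observation censored, the estimate steps down to 0.75, 0.5, and finally 0 at the last event; the censored subject at time 3 shrinks the risk set at time 4 without triggering a step.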