Statisticians

Selected Abstracts


Decision Makers as Statisticians: Diversity, Ambiguity, and Learning

ECONOMETRICA, Issue 5 2009
Nabil I. Al-Najjar
I study individuals who use frequentist models to draw uniform inferences from independent and identically distributed data. The main contribution of this paper is to show that distinct models may be consistent with empirical evidence, even in the limit as the amount of data increases without bound. Decision makers may then hold different beliefs and interpret their environment differently even though they know each other's model and base their inferences on the same evidence. The behavior modeled here is that of rational individuals confronting an environment in which learning is hard, rather than that of individuals beset by cognitive limitations or behavioral biases. [source]


A Matrix Handbook for Statisticians by George A. F. Seber

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2008
Erkki P. Liski
No abstract is available for this article. [source]


Statistical issues in first-in-man studies

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 3 2007
Professor Stephen Senn
Preface. In March 2006 a first-in-man trial of monoclonal antibodies took place in healthy volunteers. Within hours the subjects had suffered such adverse effects that they were admitted to intensive care at Northwick Park Hospital. In April 2006 the Secretary of State for Health announced the appointment of Professor (now Sir) Gordon Duff, who chairs the UK's Commission on Human Medicines, to chair a scientific expert group on phase 1 clinical trials. The group reported on December 7th, 2006 (Expert Scientific Group on Clinical Trials, 2006a). Clinical trials have a well-established regulatory basis both in the UK and worldwide. Trials have to be approved by the regulatory authority and are subject to a detailed protocol concerning, among other things, the study design and statistical analyses that will form the basis of the evaluation. In fact, a cornerstone of the regulatory framework is the statistical theory and methods that underpin clinical trials. As a result, the Royal Statistical Society established an expert group of its own to look in detail at the statistical issues that might be relevant to first-in-man studies. The group mainly comprised senior Fellows of the Society who had expert knowledge of the theory and application of statistics in clinical trials. However, the group also included an expert immunologist and clinicians to ensure that the interface between statistics and clinical disciplines was not overlooked. In addition, expert representation was sought from Statisticians in the Pharmaceutical Industry (PSI), an organization with which the Royal Statistical Society has very close links. The output from the Society's expert group is contained in this report. It makes a number of recommendations directed towards the statistical aspects of clinical trials. As such it complements the report by Professor Duff's group and will, I trust, contribute to a safer framework for first-in-man trials in the future. Tim Holt (President, Royal Statistical Society) [source]


Stereology for Statisticians by A. Baddeley and E. B. Vedel Jensen

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2007
Ian Dryden
No abstract is available for this article. [source]


Biomarker as a classifier in pharmacogenomics clinical trials: a tribute to the 30th anniversary of PSI

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 4 2007
Sue-Jane Wang
Abstract Pharmacogenetics is one of many evolving sciences that have come to the fore since the formation of Statisticians in the Pharmaceutical Industry (PSI) 30 years ago. Following the completion of the human genome project and the HapMap in the early 21st century, pharmacogenetics has gradually focused on whole-genome single-nucleotide-polymorphism screening studies associating disease pathophysiology with potential therapeutic interventions. Around this time, transcription profiling aiming at similar objectives, known as pharmacogenomics, has also been actively pursued. It has become increasingly apparent that treatment effects between different genomic patient subsets can be dissimilar, and the value of and need for genomic biomarkers to help predict effects, particularly in cancer clinical studies, have become issues of paramount importance. Pharmacogenomics/pharmacogenetics has thus become intensely focused on the search for genomic biomarkers for use as classifiers to select patients in randomized controlled trials. We highlight that the predictive utility of a genomic classifier has tremendous clinical appeal and that there will be growing examples in which use of a companion diagnostic will need to be considered and may become an integral part of the utilization of drugs in medical practice. The credible mechanism to test the clinical utility of a genomic classifier is to employ the study results from a prospective trial that recruits all patients. Such investigations, if well designed, will allow analysis of all relevant performance factors in the drug and diagnostic combination, including the sensitivity, specificity, positive and negative predictive values of the diagnostic test and the efficacy of the drug. Published in 2007 by John Wiley & Sons, Ltd. [source]


Leading Business Improvement: a New Role for Statisticians and Quality Professionals

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 3 2005
Ronald D. Snee
Abstract Following the successes of Motorola, Allied-Signal, General Electric and others, many companies are implementing the Six Sigma approach to business improvement. Millions of dollars are being saved in the process. Active leadership by management and others involved is integral to the method and critical to its success. This development provides a unique opportunity for statisticians and quality professionals to be leaders in their organizations. These leadership roles are discussed, and it is shown how statisticians and quality professionals can assume them throughout the deployment process. As a result, statisticians and quality professionals can expand their roles as internal trainers and consultants to include being leaders of business improvement. In the process their focus moves from the facilitation of technical applications to the implementation of Six Sigma, skill deployment and the delivery of bottom-line business results. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Post-financial meltdown: What do the services industries need from us now?

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 5 2009
Roger W. Hoerl
Abstract In 2008 the global economy was rocked by a crisis that began on Wall Street, but quickly spread to Main Street U.S.A., and then to side streets around the world. Statisticians working in the service sector are not immune, with many concerned about losing their jobs. Given this dramatic course of events, how should statisticians respond? What, if anything, can we do to help our struggling organizations survive this recession, in order to prosper in the future? This expository article describes some approaches that we feel can help service industries deal with the aftereffects of the financial meltdown. Based on an understanding of the current needs of the service industries, we emphasize three approaches in particular: a greater emphasis on statistical engineering relative to statistical science, 'embedding' statistical methods and principles into key business processes, and the reinvigoration of Lean Six Sigma to drive immediate, tangible business results. Copyright © 2009 John Wiley & Sons, Ltd. [source]


'Post-financial meltdown: What do the services industries need from us now?' by Roger W. Hoerl and Ronald D. Snee: Discussion 2

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 5 2009
Ron S. Kenett
Hoerl and Snee have done three important things in their excellent paper. First of all, they address the current work environment conditions head-on, describing the facts about the 2008 economic meltdown. Secondly, they provide a retrospective on the role of Statistics and Statisticians in service industries, thus providing a context for their third contribution, which is to lay out a clear road map with specific recommendations. This discussion paper expands on their comments by addressing the question they posed, namely 'what do the service industries need from us now?' It will discuss some aspects of the causes of the economic meltdown and present some methodological implications. It will then take a wide-angle view of the role of Statistics and Statisticians in business and industry and, finally, revisit the recommendations of Hoerl and Snee with some add-ons and emphasis. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Challenges in Implementing Adaptive Designs: Comments on the Viewpoints Expressed by Regulatory Statisticians

BIOMETRICAL JOURNAL, Issue 4 2006
Paul Gallo
Abstract This is a discussion of the following three papers appearing in this special issue on adaptive designs: 'FDA's critical path initiative: A perspective on contributions of biostatistics' by Robert T. O'Neill; 'A regulatory view on adaptive/flexible clinical trial design' by H. M. James Hung, Robert T. O'Neill, Sue-Jane Wang and John Lawrence; and 'Confirmatory clinical trials with an adaptive design' by Armin Koch. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Are Statistical Contributions to Medicine Undervalued?

BIOMETRICS, Issue 1 2003
Norman E. Breslow
Summary. Econometricians Daniel McFadden and James Heckman won the 2000 Nobel Prize in economics for their work on discrete choice models and selection bias. Statisticians and epidemiologists have made similar contributions to medicine with their work on case-control studies, analysis of incomplete data, and causal inference. In spite of repeated nominations of such eminent figures as Bradford Hill and Richard Doll, however, the Nobel Prize in physiology and medicine has never been awarded for work in biostatistics or epidemiology. (The "exception who proves the rule" is Ronald Ross, who, in 1902, won the second medical Nobel for his discovery that the mosquito was the vector for malaria. Ross then went on to develop the mathematics of epidemic theory, which he considered his most important scientific contribution, and applied his insights to malaria control programs.) The low esteem accorded epidemiology and biostatistics in some medical circles, and increasingly among the public, correlates highly with the contradictory results from observational studies that are displayed so prominently in the lay press. In spite of its demonstrated efficacy in saving lives, the "black box" approach of risk factor epidemiology is not well respected. To correct these unfortunate perceptions, statisticians would do well to follow more closely their own teachings: conduct larger, fewer studies designed to test specific hypotheses, follow strict protocols for study design and analysis, better integrate statistical findings with those from the laboratory, and exercise greater caution in promoting apparently positive results. [source]


Assessment of abstracts submitted to the annual scientific meeting of the Royal Australian and New Zealand College of Radiologists

JOURNAL OF MEDICAL IMAGING AND RADIATION ONCOLOGY, Issue 4 2006
S Bydder
Summary The process for selecting abstracts submitted for presentation at annual scientific meetings should ensure both the quality of these meetings and fairness to prospective presenters. The aim of the present study was to review the assessment of radiation oncology abstracts submitted for oral presentation to the 2004 Royal Australian and New Zealand College of Radiologists annual scientific meeting. Selection criteria were developed that focused primarily on the subjective aspects of abstract quality. All research abstracts were reviewed blindly by five individual reviewers (four radiation oncologists and a statistician), each of whom scored every abstract in five categories. The scores of three reviewers were used to select the top 30 general and top eight trainee entries. For comparison, cluster analysis using the scores of all five reviewers was used to group papers into two ranks. There was a strong correlation in total scores for each paper between all reviewers. Similarly, the study design subscale was strongly correlated between all reviewers. Abstracts belonging to the first-rank cluster were generally selected. Most trainee entries would have been accepted into the general programme. The selection process described appears feasible and fair and may improve the quality of meetings. [source]
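
The cluster analysis step above groups submissions into two ranks from the five reviewers' scores. The paper does not say which clustering algorithm was used; the sketch below is a hypothetical illustration of the idea using k-means on invented scores.

```python
# Illustrative sketch only: k-means is an assumed stand-in for the paper's
# unspecified cluster-analysis method, and all scores are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = np.clip(rng.normal(15, 4, size=(50, 5)), 0, 25)  # 50 abstracts x 5 reviewers

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
# Call the cluster with the higher mean score the "first rank".
top = int(np.argmax([scores[labels == k].mean() for k in (0, 1)]))
print(f"{(labels == top).sum()} of 50 abstracts fall in the first-rank cluster")
```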


Recollections of Irving H. Sher 1924–1996: Polymath/information scientist extraordinaire

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 14 2001
Eugene Garfield
Over a 35-year period, Irving H. Sher played a critical role in the development and implementation of the Science Citation Index® and other ISI® products. Trained as a biochemist, statistician, and linguist, Sher brought a unique combination of talents to ISI as Director of Quality Control and Director of Research and Development. His talents as a teacher and mentor evoked loyalty. He was a particularly inventive but self-taught programmer. In addition to the SCI®, Social Sciences Citation Index®, and Arts and Humanities Citation Index®, Sher was involved with the development of the first commercial SDI system, the Automatic Subject Citation Alert, now called Research Alert®, and Request-A-Print Cards. Together we developed the journal impact factor and the Journal Citation Reports®. Sher was also the inventor of the SYSTABAR System of coding references and Sherhand. He was involved in key reports on citation-based historiography and on forecasting Nobel prizes, and served as a referee for JASIS over a 20-year period. [source]


Statistical review by research ethics committees

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2000
P. Williamson
This paper discusses some of the issues surrounding statistical review by research ethics committees (RECs). A survey of local RECs in 1997 revealed that only 27/184 (15%) included a statistician member at that time, although 70/175 (40%) recognized the need for such. The role of the statistician member is considered and the paper includes a summary of a meeting of the Royal Statistical Society to discuss statistical issues that frequently arise in the review of REC applications. A list of minimum qualifications which RECs should expect from anyone claiming to be a statistician would be useful, together with a list of statisticians who are well qualified and willing to serve on RECs, and a list of training courses for REC members covering the appropriate statistical issues. [source]


Is there a discrepancy between patient and physician quality of life assessment?

NEUROUROLOGY AND URODYNAMICS, Issue 3 2009
Sushma Srikrishna
Abstract. Aims: Quality of Life (QoL) assessment remains integral in the investigation of women with lower urinary tract dysfunction. Previous work suggests that physicians tend to underestimate patients' symptoms and the bother that they cause. The aim of this study was to assess the relationship between physician- and patient-assessed QoL using the King's Health Questionnaire (KHQ). Methods: Patients complaining of troublesome lower urinary tract symptoms (LUTS) were recruited from a tertiary referral urodynamic clinic. Prior to their clinic appointment they were sent a KHQ, which was completed before attending. After taking a detailed urogynecological history, a second KHQ was filled in by the physician, blinded to the patient responses, on the basis of their impression of the symptoms elicited during the interview. These data were analyzed by an independent statistician. Concordance between patient and physician assessment for individual questions was assessed using weighted kappa analysis. QoL scores were compared using Wilcoxon's signed rank test. Results: Seventy-five patients were recruited over a period of 5 months. Overall, the weighted kappa showed relatively poor concordance between the patient and physician responses; mean kappa: 0.33 (range 0.18–0.57). The physician underestimated the QoL score in 4/9 domains by a mean of 5.5% and overestimated the QoL score in 5/9 domains by a mean of 6.9%. In particular, physicians underestimated the impact of LUTS on social limitations and emotions (P < 0.05). Conclusion: This study confirms that physicians often differ from patients in the assessment of QoL. This is most likely due to a difference in patient and physician perception of "significant" LUTS and clearly demonstrates the importance of patient-evaluated QoL in routine clinical assessment. Neurourol. Urodynam. 28:179–182, 2009. © 2008 Wiley-Liss, Inc. [source]
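
As a sketch of the two analyses named above (weighted kappa for per-question concordance, Wilcoxon's signed rank test for paired QoL scores), the following uses invented data; the item and domain scoring ranges are assumptions for illustration, not the KHQ's actual scoring.

```python
# Minimal sketch with simulated data, assuming 1-4 item ratings and
# 0-100 domain scores; not the study's data or exact analysis code.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Item-level agreement: 75 patients answer one item; the physician's
# blinded rating tends to sit within one point of the patient's.
patient_item = rng.integers(1, 5, size=75)
physician_item = np.clip(patient_item + rng.integers(-1, 2, size=75), 1, 4)
kappa = cohen_kappa_score(patient_item, physician_item, weights="linear")

# Domain-level comparison: paired 0-100 QoL scores for one domain.
patient_dom = rng.uniform(0, 100, size=75)
physician_dom = np.clip(patient_dom + rng.normal(-5, 15, size=75), 0, 100)
stat, p = wilcoxon(patient_dom, physician_dom)

print(f"weighted kappa = {kappa:.2f}; Wilcoxon signed-rank p = {p:.3f}")
```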


Design and statistical analysis of oral medicine studies: common pitfalls

ORAL DISEASES, Issue 3 2010
L Baccaglini
Oral Diseases (2010) 16, 233–241 A growing number of articles are emerging in the medical and statistics literature that describe epidemiologic and statistical flaws of research studies. Many examples of these deficiencies are encountered in the oral, craniofacial, and dental literature. However, only a handful of methodologic articles have been published in the oral literature warning investigators of potential errors that may arise early in the study and that can irreparably bias the final results. In this study, we briefly review some of the most common pitfalls that our team of epidemiologists and statisticians has identified during the review of submitted or published manuscripts and research grant applications. We use practical examples from the oral medicine and dental literature to illustrate potential shortcomings in the design and analysis of research studies, and how these deficiencies may affect the results and their interpretation. A good study design is essential, because errors in the analysis can be corrected if the design was sound, but flaws in study design can lead to data that are not salvageable. We recommend consultation with an epidemiologist or a statistician during the planning phase of a research study to optimize study efficiency, minimize potential sources of bias, and document the analytic plan. [source]


Using statistical techniques to detect fraud: a test case

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 4 2004
Michael O'Kelly
Abstract In an experiment to test the effectiveness of statistical measures in detecting fraud, three physicians fabricated scores on the Montgomery–Åsberg Depression Rating Scale (MADRS) for a number of subjects in three sites. The fabricated data were then planted among MADRS data from 18 genuine sites. A statistician blinded as to the identity and quantity of the fabricated data attempted to detect the 'fraudulent' data by searching for unusual means and correlations. One of the three fabricated sites was correctly identified, and one genuine site was incorrectly identified as a potential fabrication. In addition, inlying and/or outlying means and correlations found in the genuine data suggested the possibility of using statistical checks for unusual data early in a study so that sites with unusual patterns could be prioritized for monitoring, training and, if necessary, auditing. Copyright © 2004 John Wiley & Sons Ltd. [source]
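
A minimal sketch of the kind of screening described, flagging sites whose means or inter-item correlations are unusual relative to the other sites; this is not the paper's actual procedure, and the data, threshold and site structure are invented.

```python
# Screening sketch with simulated data: fabricated data often show
# inlying/outlying site means or unusually strong inter-item correlations.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Hypothetical layout: 21 sites, 30 subjects each, 10 MADRS items scored 0-6.
sites = {f"site{s:02d}": rng.integers(0, 7, size=(30, 10)) for s in range(21)}

rows = []
for name, x in sites.items():
    corr = np.corrcoef(x, rowvar=False)              # 10x10 inter-item correlations
    off_diag = corr[np.triu_indices_from(corr, k=1)]
    rows.append({"site": name, "site_mean": x.mean(), "mean_corr": off_diag.mean()})
df = pd.DataFrame(rows)

# Flag sites whose summary statistics lie unusually far from the rest;
# |z| > 2 is an arbitrary screening threshold, not a formal test.
for col in ("site_mean", "mean_corr"):
    z = (df[col] - df[col].mean()) / df[col].std(ddof=0)
    df[col + "_flag"] = z.abs() > 2
print(df[df["site_mean_flag"] | df["mean_corr_flag"]])
```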


Validation of multi-detector computed tomography as a non-invasive method for measuring ovarian volume in macaques (Macaca fascicularis)

AMERICAN JOURNAL OF PRIMATOLOGY, Issue 6 2010
Jeryl C. Jones
Abstract The purpose of this study was to validate low radiation dose, contrast-enhanced, multi-detector computed tomography (MDCT) as a non-invasive method for measuring ovarian volume in macaques. Computed tomography scans of four known-volume phantoms and nine mature female cynomolgus macaques were acquired using a previously described, low radiation dose scanning protocol, intravenous contrast enhancement, and a 32-slice MDCT scanner. Immediately following MDCT, ovaries were surgically removed and the ovarian weights were measured. The ovarian volumes were determined using water displacement. A veterinary radiologist who was unaware of actual volumes measured ovarian CT volumes three times, using a laptop computer, pen display tablet, hand-traced regions of interest, and free image analysis software. A statistician selected and performed all tests comparing the actual and CT data. Ovaries were successfully located in all MDCT scans. The iliac arteries and veins, uterus, fallopian tubes, cervix, ureters, urinary bladder, rectum, and colon were also consistently visualized. Large antral follicles were detected in six ovaries. Phantom mean CT volume was 0.702 ± SD 0.504 cc and the mean actual volume was 0.743 ± SD 0.526 cc. Ovary mean CT volume was 0.258 ± SD 0.159 cc and mean water displacement volume was 0.257 ± SD 0.145 cc. For phantoms, the mean coefficient of variation for CT volumes was 2.5%. For ovaries, the least squares mean coefficient of variation for CT volumes was 5.4%. The ovarian CT volume was significantly associated with actual ovarian volume (ICC coefficient 0.79, regression coefficient 0.5, P = 0.0006) and the actual ovarian weight (ICC coefficient 0.62, regression coefficient 0.6, P = 0.015). There was no association between the CT volume accuracy and mean ovarian CT density (degree of intravenous contrast enhancement), and there was no proportional or fixed bias in the CT volume measurements. Findings from this study indicate that MDCT is a valid non-invasive technique for measuring the ovarian volume in macaques. Am. J. Primatol. 72:530–538, 2010. © 2010 Wiley-Liss, Inc. [source]
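
Two of the summaries reported above, repeatability of the repeated CT readings as a coefficient of variation and agreement of CT with water-displacement volume via regression, can be computed as in the sketch below; the numbers are simulated, not the study's data, and the error model is an assumption.

```python
# Illustrative sketch with invented volumes: 18 ovaries (nine animals),
# three blinded CT readings each, compared against a reference volume.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
actual = rng.uniform(0.1, 0.6, size=18)                    # reference volume, cc
ct_reads = actual[:, None] + rng.normal(0, 0.02, (18, 3))  # three repeated readings

# Mean within-ovary coefficient of variation of the repeated readings.
cv = (ct_reads.std(axis=1, ddof=1) / ct_reads.mean(axis=1)).mean() * 100
# Agreement of the mean CT volume with the reference volume.
fit = linregress(actual, ct_reads.mean(axis=1))
print(f"mean CV = {cv:.1f}%, slope = {fit.slope:.2f}, p = {fit.pvalue:.4f}")
```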


A web-based tool for teaching neural network concepts

COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2010
Aybars Ugur
Abstract Although neural networks (NN) are especially important for engineers, scientists, mathematicians and statisticians, they can also be hard to understand. In this article, application areas of NN are discussed, basic NN components are described and it is explained how an NN works. A web-based simulation and visualization tool (EasyLearnNN) is developed using Java and Java 2D for teaching NN concepts. Perceptron, ADALINE, Multilayer Perceptron, LVQ and SOM models and related training algorithms are implemented. As a result, a comparison with other teaching methods of NN concepts is presented and discussed. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 449–457, 2010; View this article online at wileyonlinelibrary.com; DOI 10.1002/cae.20184 [source]
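
EasyLearnNN itself is a Java tool; as a language-neutral illustration of the first model it teaches, here is a minimal perceptron training loop. This is an assumed example for exposition, not code from the article.

```python
# A minimal perceptron learning the linearly separable AND function;
# illustrative only, not part of EasyLearnNN.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])           # target: logical AND

w, b, lr = np.zeros(2), 0.0, 0.1     # weights, bias, learning rate
for epoch in range(20):
    for xi, ti in zip(X, y):
        out = int(w @ xi + b > 0)    # threshold activation
        w += lr * (ti - out) * xi    # perceptron learning rule
        b += lr * (ti - out)

print(w, b, [int(w @ xi + b > 0) for xi in X])  # predictions: [0, 0, 0, 1]
```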


Seeing and Doing: the Concept of Causation

INTERNATIONAL STATISTICAL REVIEW, Issue 2 2002
Dennis V. Lindley
Summary This note is an extended review of the book by Judea Pearl (2000) on causality, in which the basic concepts are explained in a form that statisticians will, it is hoped, appreciate, including some comments on their relevance to inference and decision-making. [source]


The Role of Statistics in the Data Revolution?

INTERNATIONAL STATISTICAL REVIEW, Issue 1 2001
Jerome H. Friedman
Summary The nature of data is rapidly changing. Data sets are becoming increasingly large and complex. Modern methodologies for analyzing these new types of data are emerging from the fields of Database Management, Artificial Intelligence, Machine Learning, Pattern Recognition, and Data Visualization. So far, Statistics as a field has played only a minor role. This paper explores some of the reasons for this, and why statisticians should have an interest in participating in the development of new methods for large and complex data sets. [source]


Future Directions for the Teaching and Learning of Statistics at the Tertiary Level

INTERNATIONAL STATISTICAL REVIEW, Issue 1 2001
Des F. Nicholl
Summary Significant advances in, and the resultant impact of, Information Technology (IT) during the last fifteen years have resulted in a much more data-based society, a trend that can be expected to continue into the foreseeable future. This phenomenon has had a real impact on the Statistics discipline and will continue to result in changes in both content and course delivery. Major research directions have also evolved during the last ten years directly as a result of advances in IT. The impact of these advances has started to flow into course content, at least for advanced courses. One question which arises is what impact this will have on the future training of statisticians, both with respect to course content and mode of delivery. At the tertiary level the last 40 years have seen significant advances in theoretical aspects of the Statistics discipline. Universities have been outstanding at producing scholars with a strong theoretical background, but questions have been asked as to whether this has, to some degree, been at the expense of appropriate training of the users of statistics (the 'tradespersons'). Future directions in the teaching and learning of Statistics must take into account the impact of IT together with the competing need to produce scholars as well as competent users of statistics to meet the future needs of the marketplace. For Statistics to survive as a recognizable discipline, the ability to train statisticians who can communicate is also seen as an area of crucial importance. Satisfying the needs of society as well as meeting the needs of the profession are the basic determinants which will drive the future teaching and training of statisticians at the tertiary level, and they form the basis of this presentation. [source]


How you count counts: the importance of methods research in applied ecology

JOURNAL OF APPLIED ECOLOGY, Issue 5 2008
Chris S. Elphick
Summary

1. Methods papers play a crucial role in advancing applied ecology. Counting organisms, in particular, has a rich history of methods development with many key advances both in field sampling and the treatment of resulting data.

2. Most counts, however, have associated errors due to portions of the population of interest being unavailable for detection (e.g. target population not fully sampled; individuals present but not detectable), detection mistakes (e.g. detectable individuals missed; non-existent individuals recorded), or erroneous counts (e.g. large groups miscounted; individuals misidentified).

3. Developments in field methods focus on reducing biases in the actual counts. Simultaneously, statisticians have developed many methods for improving inference by quantifying and correcting for biases retrospectively. Prominent examples of methods used to account for detection errors include distance sampling and multiple-observer methods.

4. Simulations, in which population characteristics are set by the investigator, provide an efficient means of testing methods (a sketch follows this abstract). With good estimates of sampling biases, computer simulations can be used to evaluate how much a given counting problem affects estimates of parameters such as population size and decline, thereby allowing applied ecologists to test the efficacy of sampling designs. Combined with cost estimates for each field method, such models would allow the cost-effectiveness of alternative protocols to be assessed.

5. Synthesis and applications. Major advances are likely to come from research that looks for systematic patterns, across studies, in the effects of different types of bias and assumption violation on the ecological conclusions drawn. Specifically, determining how often, and under what circumstances, errors contribute to poor management and policy would greatly enhance future application of ecological knowledge. [source]
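
A minimal version of the simulation idea in point 4: fix a true population size, impose a known detection probability, and compare the naive count with a detection-corrected estimate. All parameter values are invented for illustration.

```python
# Detection-bias simulation sketch: each individual is detected
# independently with probability p_detect, so raw counts are biased low.
import numpy as np

rng = np.random.default_rng(4)
N_true, p_detect, n_surveys = 500, 0.6, 1000

counts = rng.binomial(N_true, p_detect, size=n_surveys)  # detected per survey
naive = counts.mean()                  # biased low by the detection process
corrected = counts.mean() / p_detect   # unbiased if p_detect is well estimated
print(f"true N = {N_true}, naive = {naive:.0f}, corrected = {corrected:.0f}")
```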


Making better biogeographical predictions of species' distributions

JOURNAL OF APPLIED ECOLOGY, Issue 3 2006
ANTOINE GUISAN
Summary

1. Biogeographical models of species' distributions are essential tools for assessing impacts of changing environmental conditions on natural communities and ecosystems. Practitioners need more reliable predictions to integrate into conservation planning (e.g. reserve design and management).

2. Most models still largely ignore or inappropriately take into account important features of species' distributions, such as spatial autocorrelation, dispersal and migration, and biotic and environmental interactions. Whether distributions of natural communities or ecosystems are better modelled by assembling individual species' predictions in a bottom-up approach or modelled as collective entities is another important issue. An international workshop was organized to address these issues.

3. We discuss more specifically six issues in a methodological framework for generalized regression: (i) links with ecological theory; (ii) optimal use of existing data and artificially generated data; (iii) incorporating spatial context; (iv) integrating ecological and environmental interactions; (v) assessing prediction errors and uncertainties; and (vi) predicting distributions of communities or collective properties of biodiversity.

4. Synthesis and applications. Better predictions of the effects of impacts on biological communities and ecosystems can emerge only from more robust species' distribution models and better documentation of the uncertainty associated with these models. An improved understanding of causes of species' distributions, especially at their range limits, as well as of ecological assembly rules and ecosystem functioning, is necessary if further progress is to be made. A better collaborative effort between theoretical and functional ecologists, ecological modellers and statisticians is required to reach these goals. [source]


Mapping the evolutionary twilight zone: molecular markers, populations and geography

JOURNAL OF BIOGEOGRAPHY, Issue 5 2008
José Alexandre Felizola Diniz-Filho
Abstract Since evolutionary processes, such as dispersal, adaptation and drift, occur in a geographical context, at multiple hierarchical levels, biogeography provides a central and important unifying framework for understanding the patterns of distribution of life on Earth. However, the advent of molecular markers has allowed a clearer evaluation of the relationships between microevolutionary processes and patterns of genetic divergence among populations in geographical space, triggering the rapid development of many research programmes. Here we provide an overview of the interpretation of patterns of genetic diversity in geographical and ecological space, using both implicit and explicit spatial approaches. We discuss the actual or potential interaction of phylogeography, molecular ecology, ecological genetics, geographical genetics, landscape genetics and conservation genetics with biogeography, identifying their respective roles and their ability to deal with ecological and evolutionary processes at different levels of the biological hierarchy. We also discuss how each of these research programmes can improve strategies for biodiversity conservation. A unification of these research programmes is needed to better achieve their goals, and to do this it is important to develop cross-disciplinary communication and collaborations among geneticists, ecologists, biogeographers and spatial statisticians. [source]


Confirmatory factor analysis and the factor structure of Expagg in context: A reply to Forrest et al., 2002

AGGRESSIVE BEHAVIOR, Issue 2 2004
Steven Muncer
It has been suggested that confirmatory factor analysis (CFA) can be used to investigate the construct validity of psychometric scales, and Forrest et al. [2000] specifically query the factor structure of Expagg using this technique. In this paper we report unsuccessful attempts to confirm the factor structure of three widely used scales using CFA criteria. In the fourth study, a two-factor model of Expagg, which has been derived from previous studies, is tested for fit on new data. The results suggest that from a CFA point of view, Expagg is best considered as two scales measuring expressivity and instrumentality, with five items on each scale. This model satisfies four of the five fit criteria (CFI = .90, GFI = .94, RMSEA = .08, ECVI = .44), failing only on the chi-square test, a benchmark that has attracted criticism from statisticians. Other concerns are raised about the meaning of CFA results and their importance. Aggr. Behav. 30:146–157, 2004. © 2004 Wiley-Liss, Inc. [source]
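
For background on the fit indices quoted above (a standard textbook form, not material taken from the paper): RMSEA, which the model meets at the conventional .08 cutoff, is computed from the model chi-square as

```latex
\mathrm{RMSEA} \;=\; \sqrt{\frac{\max\!\left(\chi^{2} - df,\; 0\right)}{df\,(N-1)}}
```

where N is the sample size and df the model degrees of freedom. Because the chi-square statistic itself grows with sample size while RMSEA discounts it by N − 1, a model can fail the chi-square test yet show acceptable RMSEA, which is the pattern reported for the two-factor Expagg model.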


Evidence-based algorithms for diagnosing and treating ventilator-associated pneumonia

JOURNAL OF HOSPITAL MEDICINE, Issue 5 2008
Richard J. Wall MD
Abstract BACKGROUND: Ventilator-associated pneumonia (VAP) is widely recognized as a serious and common complication associated with high morbidity and high costs. Given the complexity of caring for heterogeneous populations in the intensive care unit (ICU), however, there is still uncertainty regarding how to diagnose and manage VAP. OBJECTIVE: We recently conducted a national collaborative aimed at reducing health care-associated infections in ICUs of hospitals operated by the Hospital Corporation of America (HCA). As part of this collaborative, we developed algorithms for diagnosing and treating VAP in mechanically ventilated patients. In the current article, we (1) review the current evidence for diagnosing VAP, (2) describe our approach for developing these algorithms, and (3) illustrate the utility of the diagnostic algorithms using clinical teaching cases. DESIGN: This was a descriptive study, using data from a national collaborative focused on reducing VAP and catheter-related bloodstream infections. SETTING: The setting of the study was 110 ICUs at 61 HCA hospitals. INTERVENTION: None. MEASUREMENTS AND RESULTS: We assembled an interdisciplinary team that included infectious disease specialists, intensivists, hospitalists, statisticians, critical care nurses, and pharmacists. After reviewing published studies and the Centers for Disease Control and Prevention VAP guidelines, the team iteratively discussed the evidence, achieved consensus, and ultimately developed these practical algorithms. The diagnostic algorithms address infant, pediatric, immunocompromised, and adult ICU patients. CONCLUSIONS: We present practical algorithms for diagnosing and managing VAP in mechanically ventilated patients. These algorithms may provide evidence-based real-time guidance to clinicians seeking a standardized approach to diagnosing and managing this challenging problem. Journal of Hospital Medicine 2008;3:409–422. © 2008 Society of Hospital Medicine. [source]


Brief Motivational Interviewing for DWI Recidivists Who Abuse Alcohol and Are Not Participating in DWI Intervention: A Randomized Controlled Trial

ALCOHOLISM, Issue 2 2010
Thomas G. Brown
Background: Driving while impaired (DWI) recidivists with unresolved alcohol use problems pose an ongoing risk for traffic safety. Following conviction, many do not participate in mandated alcohol evaluation and intervention programs, or continue to drink problematically after being relicensed. This study investigated whether, in DWI recidivists with alcohol problems who were not currently involved in DWI intervention, Brief Motivational Interviewing (BMI) produced greater reductions in risky drinking at 6- and 12-month follow-up compared to an information-advice control condition. Additional analyses explored whether BMI was associated with greater readiness to change, subsequent substance abuse treatment service utilization, and satisfaction compared to the control condition. Methods: Male and female recidivists with drinking problems and not currently engaged in DWI intervention were recruited, evaluated, and then randomly assigned to receive 1 of 2 manualized interventions: a 30-minute BMI session or information-advice. Participants, interviewers, researchers, and statisticians were blind to assignment. Outcomes were changes in: percent of risky drinking days (i.e., ≥3 standard drinks/day for males; ≥2 for females) in the previous 6 months derived from the Timeline Followback, biomarkers of alcohol abuse (GGT, AST, ALT, MCV) by blood assay, and alcohol abuse-related behaviors using the MMPI-Mac scale. Data from the Readiness to Change Questionnaire, a substance abuse service utilization questionnaire, and the Client Satisfaction Scale were also collected. Results: Analyses revealed significant declines in risky drinking with both interventions. BMI (n = 92) resulted in a 25% reduction in risky drinking days at 12-month follow-up, which compared to the control intervention (n = 92) represented a significant decline from 6-month levels. Exposure to BMI also produced significantly greater improvement at 6-month follow-up in a biomarker of alcohol abuse and a behavioral measure related to recidivism risk. Exploration of readiness to change, substance abuse service utilization, and satisfaction with intervention indicated a perception of BMI being more useful in coping with problems. Conclusions: Brief MI approaches warrant further implementation and effectiveness research as an opportunistic DWI intervention strategy to reduce risks associated with alcohol use outside of clinical and DWI relicensing settings. [source]


Cultural imagery and statistical models of the force of mortality: Addison, Gompertz and Pearson

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 3 2010
Elizabeth L. Turner
Summary. We describe selected artistic and statistical depictions of the force of mortality (hazard or mortality rate), which is a concept that has long preoccupied actuaries, demographers and statisticians. We provide a more graphic form for the force-of-mortality function that makes the relationship between its constituents more explicit. The 'Bridge of human life' in Addison's allegorical essay of 1711 provides a particularly vivid image, with the forces depicted as external. The model that was used by Gompertz in 1825 appears to treat the forces as internal. In his 1897 essay Pearson mathematically modernized 'the medieval conception of the relation between Death and Chance' by decomposing the full mortality curve into five distributions along the age axis, the results of five 'marksmen' aiming at the human mass crossing this bridge. We describe Addison's imagery, comment briefly on Gompertz's law and the origin of the term 'force of mortality', describe the background for Pearson's essay, as well as his imagery and statistical model, and give the bridge of life a modern form, illustrating it via statistical animation. [source]
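
As standard mathematical background for the imagery discussed (these definitions are textbook material, not reproduced from the paper): the force of mortality is the hazard function of the age-at-death distribution, and Gompertz's 1825 law takes it to grow geometrically with age:

```latex
\mu(x) \;=\; -\frac{\mathrm{d}}{\mathrm{d}x}\,\log S(x),
\qquad
\mu(x) \;=\; B\,c^{x}
\;\;\Longrightarrow\;\;
S(x) \;=\; \exp\!\left(-\,\frac{B\left(c^{x}-1\right)}{\log c}\right)
```

where S(x) is the probability of surviving to age x, and B > 0 and c > 1 are constants.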


Report of the Council for the session 2006,2007

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 4 2007
Council Report
President's foreword. This year's annual report shows another very successful year for the Society. The range of the Society's new initiatives bears testament to our vigour and to the energy and enthusiasm of Fellows and staff. It is difficult to summarize all of these but I offer a brief overview of some of the highlights.

This year we have awarded the first annual prize for 'Statistical excellence in journalism'. It is too easy to bemoan the general quality of coverage of statistical issues in the press and other media. But simply moaning does not improve the situation. As a positive step, on the instigation of Sheila Bird and Andrew Garratt, the Society decided to initiate an award for the best journalistic coverage of a statistical issue. This year first prize was awarded to Ben Goldacre of The Guardian. I hope that these annual awards will offer a positive focus on good coverage and help us to promote best practice.

This year, also, we have set up the Professional Development Centre to act as a focus for statistical training both for statisticians and for others who use statistical methods as part of their work. It thus reflects our support for continuing professional development for our Fellows and at the same time provides outreach to members of the statistical user community who want to improve their statistical skills. We welcome Nicola Bright as the Director of the Centre and wish her every success.

I am pleased to say that it is not just the Society centrally that has taken new activities this year. The Manchester Local Group have initiated a prize for final year undergraduates from any higher education institute in the north-west. At a time when there are concerns about the number of well-qualified graduates coming into the statistics profession this seems an excellent way to attract the attention of final year undergraduates. I wish this initiative every success.

Another development to which the Society has contributed is the Higher Education Funding Council for England project 'more maths grads', which is designed to promote participation in undergraduate degrees in the mathematical sciences. A good supply of mathematically trained graduates is essential to the UK economy in general and to the health of the statistics discipline in particular. It is good that the Society is involved in practical developments that are aimed at increasing participation.

The final new initiative that I shall draw attention to is the 'first-in-man' report, which is concerned with the statistical design of drug trials aimed at testing novel treatment types. The working party was set up as a result of the adverse reactions suffered by healthy volunteers in a first-in-man trial of monoclonal antibodies, who were subsequently admitted to Northwick Park hospital. The report makes a series of recommendations about the design of such trials and will, I hope, contribute to the safety of future trials. I would like to thank Stephen Senn and the members of the working party for their considerable efforts.

As well as these new initiatives there were, of course, many other continuing activities that are noteworthy. The annual conference in Belfast was a great success with many lively sessions and a good number of participants. In particular it was good to see a high number of young statisticians participating in the conference, reflecting the continuing impact of the Young Statisticians Forum on which I commented in the previous annual report.
Another continuing activity for the Society is the statistical legislation going through Parliament as I write. The Society has long campaigned for legislation for official statistics. The issue now is to try to get good legislation which will have the required effect and will help the Government Statistical Service and other statistical producers to produce high quality, authoritative statistics in an environment that commands public confidence. As first published, the Bill disappointed the Society, but we have worked to build support for amendments that, in our view, are essential. Time alone will tell how effective the final legislation will be in meeting our aims.

I would like to draw attention to the success of the Membership Services team. We, along with other statistical Societies, have experienced a decline in membership in recent years, but the team have turned this round. They are helping to recruit new Fellows and to retain the commitment of existing Fellows. This is a fine achievement and I would like to thank Nicola Emmerson, Ed Swires-Hennessy and the whole team.

Finally we have, at last, reached a conclusion in our dealings with the Privy Council and will implement the second phase of constitutional changes. In future our business year, financial year and year for elected appointments will all coincide on a calendar year basis. There will be transitional arrangements but in due course all our administrative arrangements will coincide and will improve efficiency and co-ordination. This has been a long journey, steered effectively by our Director General, Ivor Goddard, and I congratulate him on a successful outcome on your behalf.

As you read this report, I hope that you will share my impression of a Society that is lively and spawning many new programmes. We have a dual commitment: to the well-being of statistics as a discipline and to the promotion of statistical understanding and practice to the benefit of Society at large. In both respects I feel that the Society is in good health. This is due to the unstinting efforts of a large number of individual volunteers, including in particular our Honorary Officers and also, of course, the staff at Errol Street. On behalf of all Fellows, I wish to express my thanks to everyone involved. Tim Holt [source]


Commissioned analysis of surgical performance using routine data: lessons from the Bristol inquiry

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 2 2002
David J. Spiegelhalter
The public inquiry into paediatric cardiac surgery at the Bristol Royal Infirmary commissioned the authors to design and conduct analyses of routine data sources to compare surgical outcomes between centres. Such analyses are necessarily complex in this context but were further hampered by the inherent inconsistencies and mediocre quality of the various sources of data. Three levels of analysis of increasing sophistication were carried out. The reasonable consistency of the results arising from different sources of data, together with a number of sensitivity analyses, led us to conclude that there had been excess mortality in Bristol in open heart operations on children under 1 year of age. We consider criticisms of our analysis and discuss the role of statisticians in this inquiry and their contribution to the final report of the inquiry. The potential statistical role in future programmes for monitoring clinical performance is highlighted. [source]