Relative Risk (relative + risk)


Kinds of Relative Risk

  • adjusted relative risk
  • genotype relative risk
  • haplotype relative risk
  • increased relative risk
  • pooled relative risk

Terms modified by Relative Risk

  • relative risk aversion
  • relative risk estimate
  • relative risk ratio
  • relative risk reduction

Selected Abstracts


    [Commentary] LOW-RISK DRINKING LIMITS: ABSOLUTE VERSUS RELATIVE RISK

    ADDICTION, Issue 8 2009
    DEBORAH A. DAWSON
    No abstract is available for this article. [source]


    The clinical significance of subjective memory complaints in the diagnosis of mild cognitive impairment and dementia: a meta-analysis

    INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 11 2008
    Alex J. Mitchell
    Abstract Background: Subjective memory complaints (SMC) are frequently reported by individuals with objective evidence of cognitive decline, although the exact rate of complaints and their diagnostic value is uncertain. Method: A meta-analysis was conducted for all studies examining SMC and either concurrent dementia or mild cognitive impairment (MCI). Results: Eight studies reported the rate of SMC in dementia, seven studies reported the rate of SMC in MCI and, of these, four compared the rate of SMC in dementia and MCI head-to-head. SMC were present in 42.8% of those with dementia and 38.2% of those with MCI. Across all levels of cognitive impairment, 39.8% of people had SMC compared with 17.4% in healthy elderly controls (Relative Risk 2.3). In head-to-head studies there was a significantly higher rate of SMC in dementia vs MCI (48.4% vs 35.1%). Examining the diagnostic value of SMC in dementia, the meta-analytic pooled sensitivity was 43.0% and specificity was 85.8%. For MCI, meta-analytic pooled sensitivity was 37.4% and specificity was 86.9%. In community studies with a low prevalence, the positive and negative predictive values were 18.5% and 93.7% for dementia and 31.4% and 86.9% for MCI. The clinical utility index, which calculates the value of a diagnostic method, suggested 'poor' value for ruling in a diagnosis of dementia but 'good' value for ruling out a diagnosis. Conclusions: When assessed by simple questions, SMC appear to be present in the minority of those with mild cognitive impairment and dementia. In cross-sectional community settings, even when people agree that they have SMC, there is only a 20% or 30% chance that dementia or MCI is present, respectively. Despite this, the absence of SMC may be a reasonable method of excluding dementia and MCI and could be incorporated into short screening programs for dementia and MCI, but replication is required in clinical settings. Copyright © 2008 John Wiley & Sons, Ltd. [source]
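    The relative risk and the predictive values quoted above follow from elementary probability. The sketch below shows the arithmetic; the community prevalence is an assumption for illustration, since the abstract does not state one.

    ```python
    def ppv_npv(sensitivity, specificity, prevalence):
        """Positive and negative predictive values via Bayes' rule."""
        tp = sensitivity * prevalence              # true positives
        fp = (1 - specificity) * (1 - prevalence)  # false positives
        tn = specificity * (1 - prevalence)        # true negatives
        fn = (1 - sensitivity) * prevalence        # false negatives
        return tp / (tp + fp), tn / (tn + fn)

    # Relative risk of SMC given cognitive impairment: 39.8% vs 17.4%
    rr = 0.398 / 0.174  # rounds to the reported 2.3

    # Pooled sensitivity/specificity for SMC in dementia, with an
    # illustrative (assumed) community prevalence of 7%
    ppv, npv = ppv_npv(0.430, 0.858, 0.07)
    print(round(rr, 1), round(ppv, 3), round(npv, 3))
    ```

    With a 7% prevalence the PPV lands near the reported 18.5%; the exact figures depend on the prevalence in each pooled study, which is why the abstract stresses the low-prevalence community setting.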


    Increased risk of citrate reactions in patients with multiple myeloma during peripheral blood stem cell leukapheresis

    JOURNAL OF CLINICAL APHERESIS, Issue 4 2010
    Jill Adamski
    Abstract The citrate-based anticoagulant ACD is commonly used in apheresis procedures. Due to its ability to decrease ionized calcium, citrate may cause unpleasant symptoms, such as paresthesias and muscle cramps, in patients undergoing therapeutic and donor apheresis. We noticed that patients with multiple myeloma (MM) undergoing autologous stem cell leukapheresis appeared to have more citrate reactions when compared to other patients undergoing the same procedure. A retrospective chart review was performed to evaluate 139 (of 151) consecutive patients with MM, amyloidosis, hematological and solid malignancies who had autologous peripheral blood stem cell collection between January 2007 and February 2008. Citrate reactions, ranging from mild (e.g., perioral tingling and paresthesias) to severe (e.g., nausea/vomiting and muscle cramps) were noted for 35 patients. Twenty-three of 63 patients with MM had documented citrate reactions, a rate significantly higher than in those with other hematological and solid malignancies (37% vs. 20%; P < 0.05, Relative Risk (RR) = 1.9). The severities of citrate reactions were the same in both groups; approximately 50% of patients in each group received i.v. calcium gluconate for treatment of hypocalcemia. No correlation between bisphosphonate therapy and citrate reactions was noted in our study group. Examination of available laboratory values related to calcium homeostasis, liver, and renal function failed to reveal a mechanism for the increase in citrate reactions observed. In summary, this single institution retrospective study indicates that patients with MM are more sensitive to citrate-induced hypocalcemia during leukapheresis when compared to patients with other hematological and solid malignancies.
Strategies for decreasing citrate reactions (e.g., supplemental calcium and slowing return rates) should be considered for patient safety and comfort, especially in the MM population, on a prophylactic rather than reactive basis. J. Clin. Apheresis 25:188–194, 2010. © 2010 Wiley-Liss, Inc. [source]
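    The RR of 1.9 above comes from a standard 2×2 comparison of event rates. A minimal sketch: the MM counts (23 of 63) are given in the abstract, but the comparison-group counts are not, so 15 of 76 is assumed here purely for illustration; the log-RR Wald interval is the usual textbook construction, not necessarily the method the authors used.

    ```python
    import math

    def relative_risk(a, n1, b, n2, z=1.96):
        """RR of a/n1 vs b/n2 with a Wald 95% CI on the log scale."""
        rr = (a / n1) / (b / n2)
        se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
        lo = math.exp(math.log(rr) - z * se)
        hi = math.exp(math.log(rr) + z * se)
        return rr, lo, hi

    # 23/63 MM patients reacted; comparison-group counts assumed (15/76)
    rr, lo, hi = relative_risk(23, 63, 15, 76)
    print(round(rr, 2), round(lo, 2), round(hi, 2))
    ```

    With these illustrative counts the lower confidence limit stays above 1, in line with the reported P < 0.05.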


    Current practice compared with the international guidelines: endoscopic surveillance of Barrett's esophagus

    JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 5 2007
    Nassira Amamra MPH
    Abstract Rationale, aims and objectives: To describe the current practice for the surveillance of patients with Barrett's esophagus, to compare this practice with the national guidelines published by the French Society of Digestive Endoscopy in 1998 and to identify the factors associated with compliance with the guidelines. Method: To determine the attitudes of French hepatogastroenterologists to screening for Barrett's esophagus, an anonymous postal questionnaire survey was undertaken. It was sent to 246 hepatogastroenterologists in the Rhône-Alpes area. We defined eight criteria to assess the conformity of practices with the guidelines, grouped into three topics: 'Biopsies', 'Surveillance' and the diagnosis of high-grade dysplasia. We studied the factors which could be associated with compliance with the guidelines. Results: The response rate was 81.3%. For 58.0% of the gastroenterologists, endoscopic biopsy sampling was performed according to the French guidelines (four-quadrant biopsies at 2 cm intervals). Agreement was 78.0% regarding the interval of surveillance for no dysplasia (every 2 or 3 years) and 78.5% regarding low-grade dysplasia (every 6 or 12 months). For the management of high-grade dysplasia, 28.6% actually confirmed the diagnosis with a second anatomopathologist and 42.0% treated with a proton pump inhibitor for 2 months. Concerning the biopsies, young gastroenterologists and gastroenterologists practising in university hospitals had better adherence to the guidelines (Relative Risk: 2.22, 95% CI 1.25–3.95 and 3.74, 95% CI 1.04–13.47, respectively). The other risk factors were not statistically significant. Conclusions: The endoscopic follow-up is mostly performed in accordance with the national guidelines. However, there is wide variability in individual current practice. [source]
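    A reported point estimate with a 95% CI, such as the RR of 2.22 (1.25–3.95) above, is enough to recover the standard error and a z-statistic, which is handy when checking or meta-analysing published results. A sketch under the usual assumption that the interval is symmetric on the log scale:

    ```python
    import math

    def z_from_ci(point, lo, hi, z_crit=1.96):
        """Recover the z-statistic from a ratio estimate and its 95% CI,
        assuming the CI was built as exp(log(point) ± 1.96·SE)."""
        se = (math.log(hi) - math.log(lo)) / (2 * z_crit)
        return math.log(point) / se

    z = z_from_ci(2.22, 1.25, 3.95)
    # sanity check: the geometric mean of the CI should sit near the estimate
    gm = math.sqrt(1.25 * 3.95)
    print(round(z, 2), round(gm, 2))
    ```

    Here z exceeds 1.96, consistent with the association being reported as significant, and the geometric mean reproduces the point estimate, confirming the log-symmetry assumption holds for this interval.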


    Similar Outcomes with Different Rates of Delayed Graft Function May Reflect Center Practice, Not Center Performance

    AMERICAN JOURNAL OF TRANSPLANTATION, Issue 6 2009
    S. K. Akkina
    To better understand the implications of considering delayed graft function (DGF) as a performance measure, we compared outcomes associated with a 2- to 3-fold difference in the incidence of DGF at two transplant centers. We analyzed 5072 kidney transplantations between 1984 and 2006 at the University of Minnesota Medical Center (UMMC) and Hennepin County Medical Center (HCMC). In logistic regression the adjusted odds ratio for DGF at HCMC versus UMMC was 3.11 (95% Confidence Interval [CI] = 2.49–3.89) for deceased donors and 2.24 (CI = 1.45–3.47) for living donors. In Cox analysis of 4957 transplantations, slow graft function (SGF; creatinine ≥3.0 mg/dL [230 µmol/L] on day 5 without dialysis) was associated with graft failure at UMMC (Relative Risk [RR] = 1.43, CI = 1.25–1.64), but not HCMC (RR = 0.99, CI = 0.77–1.28). RRs of DGF were similar at both centers. Thus, the lower incidence of DGF at UMMC likely resulted in a higher incidence and higher risk of SGF compared to HCMC. Indeed, graft survival for recipients with DGF at HCMC was similar (p = 0.3741) to that of recipients with SGF at UMMC. We conclude that dialysis per se is likely not a cause of worse graft outcomes. A better definition is needed to measure early graft dysfunction and its effects across transplant programs. [source]
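    The abstract reports odds ratios for DGF but relative risks for graft failure. For a common outcome such as DGF the two diverge: the odds ratio always sits further from 1 than the relative risk. A small illustration with made-up counts (not the study's data):

    ```python
    def or_and_rr(a, n1, b, n2):
        """Odds ratio and relative risk from event counts a/n1 vs b/n2."""
        p1, p2 = a / n1, b / n2
        odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))
        rel_risk = p1 / p2
        return odds_ratio, rel_risk

    # Hypothetical: 30/100 DGF at one center vs 15/100 at another
    o, r = or_and_rr(30, 100, 15, 100)
    print(round(o, 2), round(r, 2))  # OR exceeds RR when the outcome is common
    ```

    This is why an adjusted odds ratio of 3.11 for DGF should not be read as a 3-fold difference in risk.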


    Survival Advantage of Pediatric Recipients of a First Kidney Transplant Among Children Awaiting Kidney Transplantation

    AMERICAN JOURNAL OF TRANSPLANTATION, Issue 12 2008
    D. L. Gillen
    The mortality rate in children with ESRD is substantially lower than the rate experienced by adults. However, the risk of death while awaiting kidney transplantation and the impact of transplantation on long-term survival have not been well characterized in the pediatric population. We performed a longitudinal study of 5961 patients under age 19 who were placed on the kidney transplant waiting list in the United States. Of these, 5270 received their first kidney transplant between 1990 and 2003. Survival was assessed via a time-varying nonproportional hazards model adjusted for potential confounders. Transplanted children had a lower mortality rate (13.1 deaths/1000 patient-years) compared to patients on the waiting list (17.6 deaths/1000 patient-years). Within the first 6 months after transplant, there was no significant excess in mortality compared to patients remaining on the waiting list (adjusted Relative Risk (aRR) = 1.01; p = 0.93). After 6 months, the risk of death was significantly lower: at 6–12 months (aRR = 0.37; p < 0.001) and at 30 months (aRR = 0.26; p < 0.001). Compared to children who remain on the kidney transplant waiting list, those who receive a transplant have a long-term survival advantage. With the potential for unmeasured bias in this observational data, the results of the analysis should be interpreted conservatively. [source]
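    The mortality comparison above is on the person-time scale. A crude rate-ratio sketch, using hypothetical death counts chosen only to reproduce the rates quoted in the abstract (the actual counts are not given):

    ```python
    def rate_per_1000(deaths, person_years):
        """Incidence rate per 1000 person-years of follow-up."""
        return deaths / person_years * 1000

    # Hypothetical counts consistent with the quoted rates
    transplant = rate_per_1000(131, 10000)  # 13.1 deaths/1000 patient-years
    waitlist = rate_per_1000(176, 10000)    # 17.6 deaths/1000 patient-years
    ratio = transplant / waitlist
    print(round(ratio, 2))
    ```

    Note that this crude ratio differs from the adjusted, time-varying RRs in the abstract, which condition on time since transplant and adjust for confounders.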


    European Mathematical Genetics Meeting, Heidelberg, Germany, 12th–13th April 2007

    ANNALS OF HUMAN GENETICS, Issue 4 2007
    Article first published online: 28 MAY 200
    Saurabh Ghosh 11 Indian Statistical Institute, Kolkata, India High correlations between two quantitative traits may be either due to common genetic factors or common environmental factors or a combination of both. In this study, we develop statistical methods to extract the contribution of a common QTL to the total correlation between the components of a bivariate phenotype. Using data on bivariate phenotypes and marker genotypes for sib-pairs, we propose a test for linkage between a common QTL and a marker locus based on the conditional cross-sib trait correlations (trait 1 of sib 1 – trait 2 of sib 2, and conversely) given the identity-by-descent sharing at the marker locus. The null hypothesis cannot be rejected unless there exists a common QTL. We use Monte-Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study On The Genetics Of Alcoholism project.

    Rémi Kazma 1 , Catherine Bonaïti-Pellié 1 , Emmanuelle Génin 12 INSERM UMR-S535 and Université Paris Sud, Villejuif, 94817, France Keywords: Gene-environment interaction, sibling recurrence risk, exposure correlation Gene-environment interactions may play important roles in complex disease susceptibility but their detection is often difficult. Here we show how gene-environment interactions can be detected by investigating the degree of familial aggregation according to the exposure of the probands. In case of gene-environment interaction, the distribution of genotypes of affected individuals, and consequently the risk in relatives, depends on their exposure. We developed a test comparing the risks in sibs according to the proband exposure.
    To evaluate the properties of this new test, we derived the formulas for calculating the expected risks in sibs according to the exposure of probands for various values of exposure frequency, relative risk due to exposure alone, frequencies of latent susceptibility genotypes, genetic relative risks and interaction coefficients. We find that the ratio of risks when the proband is exposed and not exposed is a good indicator of the interaction effect. We evaluate the power of the test for various sample sizes of affected individuals. We conclude that this test is valuable for diseases with moderate familial aggregation, only when the role of the exposure has been clearly evidenced. Since a correlation for exposure among sibs might lead to a difference in risks among sibs in the different proband exposure strata, we also add an exposure correlation coefficient in the model. Interestingly, we find that when this correlation is correctly accounted for, the power of the test is not decreased and might even be significantly increased.

    Andrea Callegaro 1 , Hans J.C. Van Houwelingen 1 , Jeanine Houwing-Duistermaat 13 Dept. of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands Keywords: Survival analysis, age at onset, score test, linkage analysis Non parametric linkage (NPL) analysis compares the identical by descent (IBD) sharing in sibling pairs to the expected IBD sharing under the hypothesis of no linkage. Often information is available on the marginal cumulative hazards (for example breast cancer incidence curves). Our aim is to extend the NPL methods by taking into account the age at onset of selected sibling pairs using these known marginal hazards. Li and Zhong (2002) proposed a (retrospective) likelihood ratio test based on an additive frailty model for genetic linkage analysis. From their model we derive a score statistic for selected samples which turns out to be a weighted NPL method.
    The weights depend on the marginal cumulative hazards and on the frailty parameter. A second approach is based on a simple gamma shared frailty model. Here, we simply test whether the score function of the frailty parameter depends on the excess IBD. We compare the performance of these methods using simulated data.

    Céline Bellenguez 1 , Carole Ober 2 , Catherine Bourgain 14 INSERM U535 and University Paris Sud, Villejuif, France 5 Department of Human Genetics, The University of Chicago, USA Keywords: Linkage analysis, linkage disequilibrium, high density SNP data Compared with microsatellite markers, high density SNP maps should be more informative for linkage analyses. However, because they are much closer, SNPs present important linkage disequilibrium (LD), which biases classical nonparametric multipoint analyses. This problem is even stronger in population isolates where LD extends over larger regions with a more stochastic pattern. We investigate the issue of linkage analysis with a 500K SNP map in a large and inbred 1840-member Hutterite pedigree, phenotyped for asthma. Using an efficient pedigree breaking strategy, we first identified linked regions with a 5cM microsatellite map, on which we focused to evaluate the SNP map. The only method that models LD in the NPL analysis is limited in both the pedigree size and the number of markers (Abecasis and Wigginton, 2005) and therefore could not be used. Instead, we studied methods that identify sets of SNPs with maximum linkage information content in our pedigree and no LD-driven bias. Both algorithms that directly remove pairs of SNPs in high LD and clustering methods were evaluated. Null simulations were performed to control that Zlr calculated with the SNP sets were not falsely inflated.
    Preliminary results suggest that although LD is strong in such populations, linkage information content slightly better than that of microsatellite maps can be extracted from dense SNP maps, provided that a careful marker selection is conducted. In particular, we show that the specific LD pattern requires considering LD between a wide range of marker pairs rather than only in predefined blocks.

    Peter Van Loo 1,2,3 , Stein Aerts 1,2 , Diether Lambrechts 4,5 , Bernard Thienpont 2 , Sunit Maity 4,5 , Bert Coessens 3 , Frederik De Smet 4,5 , Leon-Charles Tranchevent 3 , Bart De Moor 2 , Koen Devriendt 3 , Peter Marynen 1,2 , Bassem Hassan 1,2 , Peter Carmeliet 4,5 , Yves Moreau 36 Department of Molecular and Developmental Genetics, VIB, Belgium 7 Department of Human Genetics, University of Leuven, Belgium 8 Bioinformatics group, Department of Electrical Engineering, University of Leuven, Belgium 9 Department of Transgene Technology and Gene Therapy, VIB, Belgium 10 Center for Transgene Technology and Gene Therapy, University of Leuven, Belgium Keywords: Bioinformatics, gene prioritization, data fusion The identification of genes involved in health and disease remains a formidable challenge. Here, we describe a novel bioinformatics method to prioritize candidate genes underlying pathways or diseases, based on their similarity to genes known to be involved in these processes. It is freely accessible as an interactive software tool, ENDEAVOUR, at http://www.esat.kuleuven.be/endeavour. Unlike previous methods, ENDEAVOUR generates distinct prioritizations from multiple heterogeneous data sources, which are then integrated, or fused, into one global ranking using order statistics. ENDEAVOUR prioritizes candidate genes in a three-step process. First, information about a disease or pathway is gathered from a set of known "training" genes by consulting multiple data sources.
    Next, the candidate genes are ranked based on similarity with the training properties obtained in the first step, resulting in one prioritized list for each data source. Finally, ENDEAVOUR fuses each of these rankings into a single global ranking, providing an overall prioritization of the candidate genes. Validation of ENDEAVOUR revealed it was able to efficiently prioritize 627 genes in disease data sets and 76 genes in biological pathway sets, identify candidates of 16 mono- or polygenic diseases, and discover regulatory genes of myeloid differentiation. Furthermore, the approach identified YPEL1 as a novel gene involved in craniofacial development from a 2-Mb chromosomal region, deleted in some patients with DiGeorge-like birth defects. Finally, we are currently evaluating a pipeline combining array-CGH, ENDEAVOUR and in vivo validation in zebrafish to identify novel genes involved in congenital heart defects.

    Mark Broom 1 , Graeme Ruxton 2 , Rebecca Kilner 311 Mathematics Dept., University of Sussex, UK 12 Division of Environmental and Evolutionary Biology, University of Glasgow, UK 13 Department of Zoology, University of Cambridge, UK Keywords: Evolutionarily stable strategy, parasitism, asymmetric game Brood parasite chicks vary in the harm that they do to their companions in the nest. In this presentation we use game-theoretic methods to model this variation. Our model considers hosts which potentially abandon single nestlings and instead choose to re-allocate their reproductive effort to future breeding, irrespective of whether the abandoned chick is the host's young or a brood parasite's. The parasite chick must decide whether or not to kill host young by balancing the benefits from reduced competition in the nest against the risk of desertion by host parents. The model predicts that three different types of evolutionarily stable strategies can exist.
    (1) Hosts routinely rear depleted broods, the brood parasite always kills host young and the host never then abandons the nest. (2) When adult survival after deserting single offspring is very high, hosts always abandon broods of a single nestling and the parasite never kills host offspring, effectively holding them as hostages to prevent nest desertion. (3) Intermediate strategies, in which parasites sometimes kill their nest-mates and host parents sometimes desert nests that contain only a single chick, can also be evolutionarily stable. We provide quantitative descriptions of how the values given to ecological and behavioral parameters of the host-parasite system influence the likelihood of each strategy and compare our results with real host-brood parasite associations in nature.

    Martin Harrison 114 Mathematics Dept, University of Sussex, UK Keywords: Brood parasitism, games, host, parasite The interaction between hosts and parasites in bird populations has been studied extensively. Game theoretical methods have been used to model this interaction previously, but this has not been studied extensively taking into account the sequential nature of this game. We consider a model allowing the host and parasite to make a number of decisions, which depend on a number of natural factors. The host lays an egg, a parasite bird will arrive at the nest with a certain probability and then chooses to destroy a number of the host eggs and lay one of its own. With some destruction occurring, either natural or through the actions of the parasite, the host chooses to continue, eject an egg (hoping to eject the parasite) or abandon the nest. Once the eggs have hatched the game then falls to the parasite chick versus the host. The chick chooses to destroy or eject a number of eggs. The final decision is made by the host, choosing whether to raise or abandon the chicks that are in the nest. We consider various natural parameters and probabilities which influence these decisions.
    We then use this model to look at real-world situations of the interactions of the Reed Warbler and two different parasites, the Common Cuckoo and the Brown-Headed Cowbird. These two parasites have different methods in the way that they parasitize the nests of their hosts. The hosts in turn have a different reaction to these parasites.

    Arne Jochens 1 , Amke Caliebe 2 , Uwe Roesler 1 , Michael Krawczak 215 Mathematical Seminar, University of Kiel, Germany 16 Institute of Medical Informatics and Statistics, University of Kiel, Germany Keywords: Stepwise mutation model, microsatellite, recursion equation, temporal behaviour We consider the stepwise mutation model which occurs, e.g., in microsatellite loci. Let X(t,i) denote the allelic state of individual i at time t. We compute expectation, variance and covariance of X(t,i), i=1,…,N, and provide a recursion equation for P(X(t,i)=z). Because the variance of X(t,i) goes to infinity as t grows, for the description of the temporal behaviour, we regard the scaled process X(t,i)-X(t,1). The results furnish a better understanding of the behaviour of the stepwise mutation model and may in future be used to derive tests for neutrality under this model.

    Paul O'Reilly 1 , Ewan Birney 2 , David Balding 117 Statistical Genetics, Department of Epidemiology and Public Health, Imperial College London, UK 18 European Bioinformatics Institute, EMBL, Cambridge, UK Keywords: Positive selection, Recombination rate, LD, Genome-wide, Natural Selection In recent years efforts to develop population genetics methods that estimate rates of recombination and levels of natural selection in the human genome have intensified. However, since the two processes have an intimately related impact on genetic variation their inference is vulnerable to confounding.
    Genomic regions subject to recent selection are likely to have a relatively recent common ancestor and consequently less opportunity for historical recombinations that are detectable in contemporary populations. Here we show that selection can reduce the population-based recombination rate estimate substantially. In genome-wide studies for detecting selection we observe a tendency to highlight loci that are subject to low levels of recombination. We find that the outlier approach commonly adopted in such studies may have low power unless variable recombination is accounted for. We introduce a new genome-wide method for detecting selection that exploits the sensitivity to recent selection of methods for estimating recombination rates, while accounting for variable recombination using pedigree data. Through simulations we demonstrate the high power of the Ped/Pop approach to discriminate between neutral and adaptive evolution, particularly in the context of choosing outliers from a genome-wide distribution. Although methods have been developed showing good power to detect selection 'in action', the corresponding window of opportunity is small. In contrast, the power of the Ped/Pop method is maintained for many generations after the fixation of an advantageous variant.

    Sarah Griffiths 1 , Frank Dudbridge 120 MRC Biostatistics Unit, Cambridge, UK Keywords: Genetic association, multimarker tag, haplotype, likelihood analysis In association studies it is generally too expensive to genotype all variants in all subjects. We can exploit linkage disequilibrium between SNPs to select a subset that captures the variation in a training data set obtained either through direct resequencing or a public resource such as the HapMap. These 'tag SNPs' are then genotyped in the whole sample. Multimarker tagging is a more aggressive adaptation of pairwise tagging that allows for combinations of two or more tag SNPs to predict an untyped SNP.
    Here we describe a new method for directly testing the association of an untyped SNP using a multimarker tag. Previously, other investigators have suggested testing a specific tag haplotype, or performing a weighted analysis using weights derived from the training data. However these approaches do not properly account for the imperfect correlation between the tag haplotype and the untyped SNP. Here we describe a straightforward approach to testing untyped SNPs using a missing-data likelihood analysis, including the tag markers as nuisance parameters. The training data is stacked on top of the main body of genotype data so there is information on how the tag markers predict the genotype of the untyped SNP. The uncertainty in this prediction is automatically taken into account in the likelihood analysis. This approach yields more power and also a more accurate prediction of the odds ratio of the untyped SNP.

    Anke Schulz 1 , Christine Fischer 2 , Jenny Chang-Claude 1 , Lars Beckmann 121 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany 22 Institute of Human Genetics, University of Heidelberg, Germany Keywords: Haplotype, haplotype sharing, entropy, Mantel statistics, marker selection We previously introduced a new method to map genes involved in complex diseases, using haplotype sharing-based Mantel statistics to correlate genetic and phenotypic similarity. Although the Mantel statistic is powerful in narrowing down candidate regions, the precise localization of a gene is hampered in genomic regions where linkage disequilibrium is so high that neighboring markers are found to be significant at similar magnitude and we are not able to discriminate between them. Here, we present a new approach to localize susceptibility genes by combining haplotype sharing-based Mantel statistics with an iterative entropy-based marker selection algorithm.
    For each marker at which the Mantel statistic is evaluated, the algorithm selects a subset of surrounding markers. The subset is chosen to maximize multilocus linkage disequilibrium, which is measured by the normalized entropy difference introduced by Nothnagel et al. (2002). We evaluated the algorithm with respect to type I error and power. Its ability to localize the disease variant was compared to the localization (i) without marker selection and (ii) considering haplotype block structure. Case-control samples were simulated from a set of 18 haplotypes, consisting of 15 SNPs in two haplotype blocks. The new algorithm gave correct type I error and yielded similar power to detect the disease locus compared to the alternative approaches. The neighboring markers were clearly less often significant than the causal locus, and also less often significant compared to the alternative approaches. Thus the new algorithm improved the precision of the localization of susceptibility genes.

    Mark M. Iles 123 Section of Epidemiology and Biostatistics, LIMM, University of Leeds, UK Keywords: tSNP, tagging, association, HapMap Tagging SNPs (tSNPs) are commonly used to capture genetic diversity cost-effectively. However, it is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be insufficient. If the pilot sample from which tSNPs are chosen is too small or the initial marker map too sparse, tSNP efficacy may be overestimated. An existing estimation method based on bootstrapping goes some way to correct for insufficient sample size and overfitting, but does not completely solve the problem. We describe a novel method, based on exclusion of haplotypes, that improves on the bootstrap approach. Using simulated data, the extent of the sample size problem is investigated and the performance of the bootstrap and the novel method are compared. We incorporate an existing method adjusting for marker density by 'SNP-dropping'.
    We find that insufficient sample size can cause large overestimates in tSNP efficacy, even with as many as 100 individuals, and the problem worsens as the region studied increases in size. Both the bootstrap and novel method correct much of this overestimate, with our novel method consistently outperforming the bootstrap method. We conclude that a combination of insufficient sample size and overfitting may lead to overestimation of tSNP efficacy and underpowering of studies based on tSNPs. Our novel approach corrects for much of this bias and is superior to the previous method. Sample sizes larger than previously suggested may still be required for accurate estimation of tSNP efficacy. This has obvious ramifications for the selection of tSNPs from HapMap data.

    Claudio Verzilli 1 , Juliet Chapman 1 , Aroon Hingorani 2 , Juan Pablo-Casas 1 , Tina Shah 2 , Liam Smeeth 1 , John Whittaker 124 Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK 25 Division of Medicine, University College London, UK Keywords: Meta-analysis, Genetic association studies We present a Bayesian hierarchical model for the meta-analysis of candidate gene studies with a continuous outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping markers (typically SNPs) in the same genetic region. Meta analyses of the results at each marker in isolation are seldom appropriate as they ignore the correlation that may exist between markers due to linkage disequilibrium (LD) and cannot assess the relative importance of variants at each marker. Also such marker-wise meta analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power.
    A better strategy is one which incorporates information about the LD between markers so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. Here we develop a Bayesian hierarchical linear regression that models the observed genotype group means and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach is applied to the meta analysis of 24 studies assessing the effect of 7 variants in the C-reactive protein (CRP) gene region on plasma CRP levels, an inflammatory biomarker shown in observational studies to be positively associated with cardiovascular disease.

    Cathryn M. Lewis 1 , Christopher G. Mathew 1 , Theresa M. Marteau 226 Dept. of Medical and Molecular Genetics, King's College London, UK 27 Department of Psychology, King's College London, UK Keywords: Risk, genetics, CARD15, smoking, model Recently progress has been made in identifying mutations that confer susceptibility to complex diseases, with the potential to use these mutations in determining disease risk. We developed methods to estimate disease risk based on genotype relative risks (for a gene G), exposure to an environmental factor (E), and family history (with recurrence risk λR for a relative of type R). λR must be partitioned into the risk due to G (which is modelled independently) and the residual risk. The risk model was then applied to Crohn's disease (CD), a severe gastrointestinal disease for which smoking increases disease risk approximately 2-fold, and mutations in CARD15 confer increased risks of 2.25 (for carriers of a single mutation) and 9.3 (for carriers of two mutations). CARD15 accounts for only a small proportion of the genetic component of CD, with a gene-specific λS,CARD15 of 1.16, from a total sibling relative risk of λS = 27. CD risks were estimated for high-risk individuals who are siblings of a CD case, and who also smoke.
The CD risk to such individuals who carry two CARD15 mutations is approximately 0.34, and for those carrying a single CARD15 mutation the risk is 0.08, compared to a population prevalence of approximately 0.001. These results imply that complex disease genes may be valuable for estimating, with greater precision than has hitherto been possible, disease risks in specific, easily identified subgroups of the population, with a view to prevention.

Yurii Aulchenko 1
1 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands
Keywords: Compression, information, bzip2, genome-wide SNP data, statistical genetics

With advances in molecular technology, studies accessing millions of genetic polymorphisms in thousands of study subjects will soon become common. Such studies generate large amounts of data, whose effective storage and management is a challenge to modern statistical genetics. Standard file compression utilities, such as Zip, Gzip and Bzip2, may be helpful to minimise the storage requirements. Less obvious is the fact that data compression techniques may also be used in the analysis of genetic data. It is known that the efficiency of a particular compression algorithm depends on the probability structure of the data. In this work, we compared different standard and customised tools using data from the human HapMap project. We also investigate the potential uses of data compression techniques for the analysis of linkage, association and linkage disequilibrium.

Suzanne Leal 1, Bingshan Li 1
1 Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, USA
Keywords: Consanguineous pedigrees, missing genotype data

Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD).
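Aulchenko's point that compression efficiency tracks the probability structure of the data can be demonstrated with the standard library alone; a small sketch on hypothetical genotype strings:

```python
import bz2
import random
import zlib

random.seed(1)

# Two hypothetical genotype strings of equal length: one built from a
# recurring "haplotype block" (strong LD, highly redundant) and one with
# independent, equifrequent alleles (no structure to exploit).
block = "0122101"
structured = block * 2000
independent = "".join(random.choice("012") for _ in range(len(structured)))

for label, s in [("structured", structured), ("independent", independent)]:
    raw = s.encode()
    print(label, len(raw), len(zlib.compress(raw, 9)), len(bz2.compress(raw, 9)))
```

The LD-structured string compresses to a small fraction of its size while the unstructured one barely shrinks; this contrast is what makes compressibility informative about the underlying genetic structure.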
Huang et al. (2005) previously demonstrated that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data are available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. The false-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage is highly dependent on which family members are genotyped. When parental genotype data are available, the false-positive evidence for linkage is usually not as strong as when parental genotype data are unavailable. Which family members will aid in the reduction of false-positive evidence of linkage depends strongly on which other family members are genotyped. For a pedigree with an affected proband whose first-cousin parents have been genotyped, further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or genotype data from the proband's sibling-grandparents. When parental genotypes are not available, false-positive evidence for linkage can be reduced by including in the analysis genotype data from either unaffected siblings of the proband or the proband's married-in grandparents.

Najaf Amin 1, Yurii Aulchenko 1
1 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands
Keywords: Genomic Control, pedigree structure, quantitative traits

The Genomic Control (GC) method was originally developed to control for population stratification and cryptic relatedness in association studies.
This method assumes that the effect of population substructure on the test statistics is essentially constant across the genome, so that unassociated markers can be used to estimate the effect of confounding on the test statistic. The properties of the GC method have been extensively investigated for different stratification scenarios and compared to alternative methods, such as the transmission-disequilibrium test. However, the potential of this method to correct not only for occasional cryptic relatedness but for regular pedigree structure has not previously been investigated. In this work we investigate the potential of the GC method for pedigree-based association analysis of quantitative traits. The power and type I error of the method were compared to standard methods, such as the measured genotype (MG) approach and the quantitative trait transmission-disequilibrium test (TDT). In human pedigrees with trait heritability varying from 30 to 80%, the power of the MG and GC approaches was always higher than that of the TDT. GC had correct type I error, and its power was close to that of MG under moderate heritability (30%) but decreased with higher heritability.

William Astle 1, Chris Holmes 2, David Balding 1
1 Department of Epidemiology and Public Health, Imperial College London, UK
2 Department of Statistics, University of Oxford, UK
Keywords: Population structure, association studies, genetic epidemiology, statistical genetics

In the analysis of population association studies, Genomic Control (GC; Devlin & Roeder, 1999) adjusts the Armitage test statistic to correct the type I error for the effects of population substructure, but its power is often sub-optimal. Turbo Genomic Control (TGC) generalises GC to incorporate co-variation of relatedness and phenotype, retaining control over type I error while improving power. TGC is similar to the method of Yu et al.
(2006), but we extend it to binary (case-control) in addition to quantitative phenotypes, we implement improved estimation of relatedness coefficients, and we derive an explicit statistic that generalizes the Armitage test statistic and is fast to compute. TGC also has similarities to EIGENSTRAT (Price et al., 2006), a recent method based on principal components analysis. The problems of population structure (Clayton et al., 2005) and cryptic relatedness (Voight & Pritchard, 2005) are essentially the same: if patterns of shared ancestry differ between cases and controls, whether distant (coancestry) or recent (cryptic relatedness), false positives can arise and power can be diminished. With large numbers of widely-spaced genetic markers, coancestry can now be measured accurately for each pair of individuals via patterns of allele-sharing. Instead of modelling subpopulations, we work with a coancestry coefficient for each pair of individuals in the study. We explain the relationships between TGC, GC and EIGENSTRAT. We present simulation studies and real data analyses to illustrate the power advantage of TGC in a range of scenarios incorporating both substructure and cryptic relatedness.

References
Clayton, D. G. et al. (2005) Population structure, differential bias and genomic control in a large-scale case-control association study. Nature Genetics 37(11), November 2005.
Devlin, B. & Roeder, K. (1999) Genomic control for association studies. Biometrics 55(4), December 1999.
Price, A. L. et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8), August 2006.
Voight, B. J. & Pritchard, J. K. (2005) Confounding from cryptic relatedness in case-control association studies. PLoS Genetics 1(3), September 2005.
Yu, J. et al. (2006) A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), February 2006.
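The basic Genomic Control correction that both of the abstracts above build on can be sketched in a few lines; an illustrative simulation (not TGC itself):

```python
import random
import statistics

random.seed(7)

CHI2_1_MEDIAN = 0.4549  # median of the 1-df chi-squared distribution

def gc_correct(stats):
    """Divide 1-df association statistics by the estimated inflation factor
    lambda = median(stats) / median(chi^2_1), floored at 1 (Devlin & Roeder, 1999)."""
    lam = max(1.0, statistics.median(stats) / CHI2_1_MEDIAN)
    return [t / lam for t in stats], lam

# Hypothetical null genome scan whose statistics are uniformly inflated by 1.5
inflated = [1.5 * random.gauss(0.0, 1.0) ** 2 for _ in range(20000)]
corrected, lam = gc_correct(inflated)
print(round(lam, 2))  # close to the simulated inflation factor of 1.5
```

The assumption of a constant inflation factor across the genome is exactly the premise that the pedigree-based and TGC extensions above probe and generalise.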
Hervé Perdry 1, Marie-Claude Babron 1, Françoise Clerget-Darpoux 1
1 INSERM U535 and Univ. Paris Sud, UMR-S 535, Villejuif, France
Keywords: Modifier genes, case-parents trios, ordered transmission disequilibrium test

A modifying locus is a polymorphic locus, distinct from the disease locus, which leads to differences in the disease phenotype, either by modifying the penetrance of the disease allele or by modifying the expression of the disease. The effect of such a locus is a clinical heterogeneity that can be reflected by the values of an appropriate covariate, such as the age of onset or the severity of the disease. We designed the Ordered Transmission Disequilibrium Test (OTDT) to test for a relation between the clinical heterogeneity, expressed by the covariate, and marker genotypes of a candidate gene. The method applies to trio families with one affected child and his parents. Each family member is genotyped at a bi-allelic marker M of a candidate gene. To each of the families is associated a covariate value, and the families are ordered on the values of this covariate. Like the TDT (Spielman et al. 1993), the OTDT is based on the observation of the transmission rate T of a given allele at M. The OTDT aims to find a critical value of the covariate which separates the sample of families into two subsamples in which the transmission rates are significantly different. We investigate the power of the method by simulations under various genetic models and covariate distributions.
Acknowledgments: H Perdry is funded by ARSEP.

Pascal Croiseau 1, Heather Cordell 2, Emmanuelle Génin 1
1 INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France
2 Institute of Human Genetics, Newcastle University, UK
Keywords: Association, missing data, conditional logistic regression

Missing data is an important problem in association studies. Several methods used to test for association require that individuals be genotyped at the full set of markers.
Individuals with missing data need to be excluded from the analysis. This can mean a substantial decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL. One may then falsely conclude that the marker is more likely to be the DSL. We recently developed a multiple imputation method to infer missing data on case-parent trios. Starting from the observed data, a small number of complete data sets are generated by a Markov chain Monte Carlo approach. These complete datasets are analysed using standard statistical packages, and the results are combined as described in Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually detects the DSL site correctly even if the percentage of missing data is high. This is not the case for the naïve approach of discarding trios with missing data. In conclusion, multiple imputation has the advantage of being flexible and easy to use, and is therefore a promising tool in the search for DSLs involved in complex diseases.

Salma Kotti 1, Heike Bickeböller 2, Françoise Clerget-Darpoux 1
1 University Paris Sud, UMR-S535, Villejuif, France
2 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany
Keywords: Genotype relative risk, internal controls, family-based analyses

Family-based analyses using internal controls are very popular both for detecting the effect of a genetic factor and for estimating the relative disease risk on the corresponding genotypes. Two different procedures are often applied to reconstitute internal controls.
The first considers one pseudocontrol genotype formed by the parental non-transmitted alleles, also called 1:1 matching of alleles, while the second corresponds to three pseudocontrols formed from all genotypes that can be built from the parental alleles except that of the case (1:3 matching). Many studies have compared the two procedures in terms of power and have concluded that the difference depends on the underlying genetic model and the allele frequencies. However, the estimation of the Genotype Relative Risk (GRR) under the two procedures has not been studied. Because under 1:1 matching the control group is composed of the alleles untransmitted to the affected child, whereas under 1:3 matching the control group includes alleles already transmitted to the affected child, we expect a difference in the GRR estimation. In fact, we suspect that the second procedure leads to biased estimation of the GRRs. We will analytically derive the GRR estimators for the 1:1 and 1:3 matching and will present the results at the meeting.

Luigi Palla 1, David Siegmund 2
1 Department of Mathematics, Free University Amsterdam, The Netherlands
2 Department of Statistics, Stanford University, California, USA
Keywords: TDT, assortative mating, inbreeding, statistical power

A substantial amount of Assortative Mating (AM) is often recorded for physical and psychological traits, dichotomous as well as quantitative, that are supposed to have a multifactorial genetic component. In particular, AM has the effect of increasing the genetic variance, even more than inbreeding, because when the trait has a multifactorial origin it acts across loci as well as within loci. Under the assumption of a polygenic model for AM, dating back to Wright (1921) and refined by Crow and Felsenstein (1968, 1982), the effect of assortative mating on the power to detect genetic association in the Transmission Disequilibrium Test (TDT) is explored as parameters such as the effective number of genes and the allelic frequency vary. The power is reflected by the non-centrality parameter of the TDT and is expressed as a function of the number of trios, the relative risk of the heterozygous genotype and the allele frequency (Siegmund and Yakir, 2007). The non-centrality parameter of the relevant score statistic is updated considering the effect of AM, which is expressed in terms of an 'effective' inbreeding coefficient.
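The two matching schemes in Kotti et al.'s abstract can be made concrete; a small sketch with hypothetical helper functions, alleles coded as integers and genotypes as sorted pairs:

```python
from itertools import product

def pseudocontrol_1_1(father, mother, transmitted):
    """1:1 matching: the single pseudocontrol genotype formed by the two
    parental alleles NOT transmitted to the affected child. `transmitted`
    gives the (paternal, maternal) alleles the case received."""
    f = list(father)
    f.remove(transmitted[0])
    m = list(mother)
    m.remove(transmitted[1])
    return tuple(sorted((f[0], m[0])))

def pseudocontrols_1_3(father, mother, transmitted):
    """1:3 matching: all genotypes formed from one paternal and one maternal
    allele, minus one copy of the case genotype (multiplicity is kept)."""
    case = tuple(sorted(transmitted))
    genos = [tuple(sorted((a, b))) for a, b in product(father, mother)]
    genos.remove(case)
    return genos

# Example: both parents heterozygous 1/2, case received allele 1 from each
print(pseudocontrol_1_1((1, 2), (1, 2), (1, 1)))   # (2, 2)
print(pseudocontrols_1_3((1, 2), (1, 2), (1, 1)))  # [(1, 2), (1, 2), (2, 2)]
```

Note how the 1:3 control set reuses alleles that were transmitted to the case, which is exactly why the abstract anticipates a bias in the GRR estimated under that scheme.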
In particular, for dichotomous traits it is apparent that the higher the number of genes involved in the trait, the lower the loss in power due to AM. Finally, an attempt is made to extend this relation to the Q-TDT (Rabinowitz, 1997), which involves considering the effect of AM also on the phenotypic variance of the trait of interest, under the assumption that AM affects only its additive genetic component.

References
Crow & Felsenstein (1968) The effect of assortative mating on the genetic composition of a population. Eugen. Quart. 15, 87-97.
Rabinowitz (1997) A Transmission Disequilibrium Test for Quantitative Trait Loci. Human Heredity 47, 342-350.
Siegmund & Yakir (2007) Statistics of Gene Mapping. Springer.
Wright (1921) Systems of mating. III. Assortative mating based on somatic resemblance. Genetics 6, 144-161.

Jérémie Nsengimana 1, Ben D Brown 2, Alistair S Hall 2, Jenny H Barrett 1
1 Leeds Institute of Molecular Medicine, University of Leeds, UK
2 Leeds Institute for Genetics, Health and Therapeutics, University of Leeds, UK
Keywords: Inflammatory genes, haplotype, coronary artery disease

Genetic Risk of Acute Coronary Events (GRACE) is an initiative to collect cases of coronary artery disease (CAD) and their unaffected siblings in the UK and to use them to map genetic variants increasing disease risk. The aim of the present study was to test the association between CAD and 51 single nucleotide polymorphisms (SNPs) and their haplotypes from 35 inflammatory genes. Genotype data were available for 1154 persons affected before age 66 (including 48% before age 50) and their 1545 unaffected siblings (891 discordant families). Each SNP was tested for association with CAD, and haplotypes within genes or gene clusters were tested using FBAT (Rabinowitz & Laird, 2000). For the most significant results, genetic effect size was estimated using conditional logistic regression (CLR) within STATA, adjusting for other risk factors.
Haplotypes were assigned using HAPLORE (Zhang et al., 2005), which considers all parental mating types consistent with offspring genotypes and assigns them a probability of occurrence. This probability was used in CLR to weight the haplotypes. In the single-SNP analysis, several SNPs showed some evidence for association, including one SNP in the interleukin-1A gene. Analysing haplotypes in the interleukin-1 gene cluster, a common 3-SNP haplotype was found to increase the risk of CAD (P = 0.009). In an additive genetic model adjusting for covariates, the odds ratio (OR) for this haplotype is 1.56 (95% CI: 1.16-2.10, p = 0.004) for early-onset CAD (before age 50). This study illustrates the utility of haplotype analysis in family-based association studies to investigate candidate genes.

References
Rabinowitz, D. & Laird, N. M. (2000) Hum Hered 50, 211-223.
Zhang, K., Sun, F. & Zhao, H. (2005) Bioinformatics 21, 90-103.

Andrea Foulkes 1, Recai Yucel 1, Xiaohong Li 1
1 Division of Biostatistics, University of Massachusetts, USA
Keywords: Haplotype, high-dimensional, mixed modeling

The explosion of molecular-level information coupled with large epidemiological studies presents an exciting opportunity to uncover the genetic underpinnings of complex diseases; however, several analytical challenges remain to be addressed. Characterizing the components of complex diseases inevitably requires consideration of synergies across multiple genetic loci and environmental and demographic factors. In addition, it is critical to capture information on allelic phase, that is, whether alleles within a gene are in cis (on the same chromosome) or in trans (on different chromosomes). In association studies of unrelated individuals, this alignment of alleles within a chromosomal copy is generally not observed. We address the potential ambiguity in allelic phase in this high-dimensional data setting using mixed effects models.
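The reported haplotype effect can be sanity-checked by reconciling the odds ratio, confidence interval and p-value on the log scale; a standard Wald-approximation sketch, not part of the original analysis:

```python
import math

def p_from_or_ci(or_hat, lo, hi):
    """Approximate two-sided Wald p-value implied by an odds ratio and its
    95% confidence interval (normal approximation on the log scale)."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    z = abs(math.log(or_hat)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Reported interleukin-1 cluster haplotype: OR 1.56 (95% CI 1.16-2.10)
print(round(p_from_or_ci(1.56, 1.16, 2.10), 3))  # ~0.003, consistent with the reported p = 0.004
```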
Both a semi-parametric and a fully likelihood-based approach to estimation are considered to account for missingness in cluster identifiers. In the first case, we apply a multiple imputation procedure coupled with a first-stage expectation-maximization algorithm for parameter estimation. A bootstrap approach is employed to assess sensitivity to variability induced by parameter estimation. Secondly, a fully likelihood-based approach using an expectation conditional maximization algorithm is described. Notably, these models allow for characterizing high-order gene-gene interactions while providing a flexible statistical framework to account for the confounding or mediating role of person-specific covariates. The proposed method is applied to data arising from a cohort of human immunodeficiency virus type-1 (HIV-1) infected individuals at risk for therapy-associated dyslipidemia. Simulation studies demonstrate reasonable power and control of family-wise type I error rates.

Vivien Marquard 1, Lars Beckmann 1, Jenny Chang-Claude 1
1 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany
Keywords: Genotyping errors, type I error, haplotype-based association methods

It has been shown in several simulation studies that genotyping errors may have a great impact on the type I error of statistical methods used in genetic association analysis of complex diseases. Our aim was to investigate type I error rates in a case-control study when differential and non-differential genotyping errors were introduced in realistic scenarios. We simulated case-control data sets where individual genotypes were drawn from a haplotype distribution of 18 haplotypes with 15 markers in the APM1 gene. Genotyping errors were introduced following the unrestricted and the symmetric with 0 edges error models described by Heid et al. (2006).
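The simplest member of this family of error models, a symmetric allele flip applied independently at each marker, can be sketched as follows (an illustrative model only, simpler than the Heid et al. parameterisations):

```python
import random

def add_genotype_errors(haplotype, rate, rng=random):
    """Flip each allele of a 0/1-coded haplotype string to the other allele
    with probability `rate` (symmetric, non-differential errors)."""
    flip = {"0": "1", "1": "0"}
    return "".join(flip[a] if rng.random() < rate else a for a in haplotype)

random.seed(3)
h = "010011010001011"  # a hypothetical 15-marker haplotype, as in the APM1 setting
print(add_genotype_errors(h, 0.10))  # a few markers flipped
```

Differential error, the more damaging scenario in the abstract, corresponds to applying different rates to cases and controls.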
In six scenarios, errors resulted from changes of one allele to another with predefined probabilities of 1%, 2.5% or 10%. Multiple errors per haplotype were possible, varying between 0 and 15, the number of markers investigated. We examined three association methods: Mantel statistics using haplotype sharing; a haplotype-specific score test; and the Armitage trend test for single markers. The type I error rates were not influenced for any of the three methods at a genotyping error rate below 1%. For higher error rates and differential errors, the type I error of the Mantel statistic was only slightly increased and that of the Armitage trend test moderately increased, while the type I error rates of the score test were highly increased. The type I error rates were correct for all three methods for non-differential errors. Further investigations will be carried out with different frequencies of differential error rates and will focus on power.

Arne Neumann 1, Dörthe Malzahn 1, Martina Müller 2, Heike Bickeböller 1
1 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany
2 GSF-National Research Center for Environment and Health, Neuherberg & IBE-Institute of Epidemiology, Ludwig-Maximilians University München, Germany
Keywords: Interaction, longitudinal, nonparametric

Longitudinal data show the time-dependent course of phenotypic traits. In this contribution, we consider longitudinal cohort studies and investigate the association between two candidate genes and a dependent quantitative longitudinal phenotype. The set-up defines a factorial design which allows us to test simultaneously for the overall gene effect of the loci as well as for possible gene-gene and gene-time interactions. The latter would induce genetically based time-profile differences in the longitudinal phenotype. We adapt a non-parametric statistical test to genetic epidemiological cohort studies and investigate its performance by simulation studies.
The statistical test was originally developed for longitudinal clinical studies (Brunner, Munzel & Puri, 1999, J Multivariate Anal 70:286-317). It is non-parametric in the sense that no assumptions are made about the underlying distribution of the quantitative phenotype. Longitudinal observations belonging to the same individual can be arbitrarily dependent on one another across time points, whereas trait observations of different individuals are independent. The two loci are assumed to be statistically independent. Our simulations show that the non-parametric test is comparable with ANOVA in terms of power to detect gene-gene and gene-time interaction in an ANOVA-favourable setting.

Rebecca Hein 1, Lars Beckmann 1, Jenny Chang-Claude 1
1 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany
Keywords: Indirect association studies, interaction effects, linkage disequilibrium, marker allele frequency

Association studies accounting for gene-environment interactions (GxE) may be useful for detecting genetic effects and identifying important environmental effect modifiers. Current technology facilitates very dense marker spacing in genetic association studies; however, the true disease variant(s) may not be genotyped. In this situation, an association between a gene and a phenotype may still be detectable using genetic markers associated with the true disease variant(s) (indirect association). Zondervan and Cardon [2004] showed that the odds ratios (OR) of markers associated with the disease variant depend strongly on the linkage disequilibrium (LD) between the variant and the markers and on whether their allele frequencies match, thereby influencing the sample size needed to detect genetic association. We examined the influence of LD and allele frequencies on the sample size needed to detect GxE in indirect association studies, and provide tables for sample size estimation.
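A rough sense of why discordant allele frequencies inflate sample size comes from the classical result that the sample size at a marker scales as 1/r² relative to typing the disease variant directly; a sketch under that assumption (illustrative formulas, not the paper's exact power calculation):

```python
def r_squared(d_prime, p_a, p_b):
    """LD measure r^2 between a disease allele (frequency p_a) and a marker
    allele (frequency p_b) for a given positive D'."""
    d = d_prime * min(p_a * (1 - p_b), (1 - p_a) * p_b)
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

def indirect_sample_size(n_direct, d_prime, p_a, p_b):
    """Approximate n needed at the marker under the n_direct / r^2 rule."""
    return n_direct / r_squared(d_prime, p_a, p_b)

# Matched frequencies, complete LD: no inflation
print(round(indirect_sample_size(1000, 1.0, 0.3, 0.3), 1))  # 1000.0
# Discordant frequencies (0.05 vs 0.3), D' = 0.9: roughly ten-fold inflation
print(round(indirect_sample_size(1000, 0.9, 0.05, 0.3)))
```

The second case shows the pattern reported above: a frequency mismatch degrades r² far faster than a comparable drop in D'.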
For discordant allele frequencies and incomplete LD, sample sizes can be infeasibly large. The influence of both factors is stronger for disease loci with small rather than moderate to high disease allele frequencies. A decline in D' of, e.g., 5% has less impact on sample size than increasing the difference in allele frequencies by the same percentage. Assuming 80% power, large interaction effects can be detected using smaller sample sizes than those needed for the detection of main effects. The detection of interaction effects involving rare alleles may not be possible. Focusing only on marker density can be a limited strategy in indirect association studies of GxE.

Cyril Dalmasso 1, Emmanuelle Génin 2, Catherine Bourgain 2, Philippe Broët 1
1 JE 2492, Univ. Paris-Sud, France
2 INSERM UMR-S 535 and University Paris Sud, Villejuif, France
Keywords: Linkage analysis, multiple testing, False Discovery Rate, mixture model

In the context of genome-wide linkage analyses, where a large number of statistical tests are performed simultaneously, the False Discovery Rate (FDR), defined as the expected proportion of false discoveries among all discoveries, is now widely used to take the multiple testing problem into account. Related criteria have also been considered, such as the local False Discovery Rate (lFDR), a variant of the FDR that gives each test its own measure of significance. The lFDR is defined as the posterior probability that a null hypothesis is true. Most of the proposed methods for estimating the lFDR or the FDR rely on distributional assumptions under the null hypothesis. However, in observational studies, the empirical null distribution may be very different from the theoretical one. In this work, we propose a mixture-model-based approach that provides estimates of the lFDR and the FDR in the context of large-scale variance component linkage analyses.
In particular, this approach allows estimating the empirical null distribution, the latter being a key quantity for any simultaneous inference procedure. The proposed method is applied to a real dataset.

Arief Gusnanto 1, Frank Dudbridge 1
1 MRC Biostatistics Unit, Cambridge, UK
Keywords: Significance, genome-wide, association, permutation, multiplicity

Genome-wide association scans have introduced statistical challenges, mainly in the multiplicity of thousands of tests. The question of what constitutes a significant finding remains somewhat unresolved. Permutation testing is very time-consuming, whereas Bayesian arguments struggle to distinguish direct from indirect association. It seems attractive to summarise the multiplicity in a simple form that allows users to avoid time-consuming permutations. A standard significance level would facilitate reporting of results and reduce the need for permutation tests. This is potentially important because current scans do not have full coverage of the whole genome, and yet the implicit multiplicity is genome-wide. We discuss some proposed summaries, with reference to the empirical null distribution of the multiple tests, approximated through a large number of random permutations. Using genome-wide data from the Wellcome Trust Case-Control Consortium, we use a sub-sampling approach with increasing density to estimate the nominal p-value required to obtain family-wise significance of 5%. The results indicate that the significance level converges to about 1e-7 as the marker spacing becomes infinitely dense. We consider the concept of an effective number of independent tests, and show that when used in a Bonferroni correction, the number varies with the overall significance level but is roughly constant in the region of interest.
We compared several estimators of the effective number of tests, and showed that in the region of significance of interest, Patterson's eigenvalue-based estimator gives approximately the right family-wise error rate.

Michael Nothnagel 1, Amke Caliebe 1, Michael Krawczak 1
1 Institute of Medical Informatics and Statistics, University Clinic Schleswig-Holstein, University of Kiel, Germany
Keywords: Association scans, Bayesian framework, posterior odds, genetic risk, multiplicative model

Whole-genome association scans have been suggested as a cost-efficient way to survey genetic variation and to map genetic disease factors. We used a Bayesian framework to investigate the posterior odds of a genuine association under multiplicative disease models. We demonstrate that the p-value alone is not a sufficient means to evaluate the findings in association studies. We suggest that likelihood ratios should accompany p-values in association reports. We argue that, given the reported results of whole-genome scans, more associations should have been successfully replicated if the consistently made assumptions about considerable genetic risks were correct. We conclude that it is very likely that the vast majority of relative genetic risks are only of the order of 1.2 or lower.

Clive Hoggart 1, Maria De Iorio 1, John Whittaker 2, David Balding 1
1 Department of Epidemiology and Public Health, Imperial College London, UK
2 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK
Keywords: Genome-wide association analyses, shrinkage priors, Lasso

Testing one SNP at a time does not fully realise the potential of genome-wide association studies to identify multiple causal variants of small effect, which is a plausible scenario for many complex diseases. Moreover, many simulation studies assume a single causal variant, and so more complex realities are ignored.
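The posterior-odds argument in Nothnagel et al.'s abstract can be sketched numerically. Here the likelihood ratio of a "significant" result is approximated by power/alpha, a common heuristic that is an assumption of this sketch, not necessarily the authors' exact calculation:

```python
def posterior_prob_association(prior_odds, power, alpha):
    """Posterior probability that a significant finding is genuine,
    approximating the likelihood ratio by power / alpha."""
    post_odds = prior_odds * power / alpha
    return post_odds / (1.0 + post_odds)

# Hypothetical scan: 1 genuine locus per 10,000 markers tested,
# 50% power at the chosen threshold alpha = 5e-4
print(round(posterior_prob_association(1 / 10000, 0.5, 5e-4), 3))  # ~0.091
```

Even a nominally significant hit is then probably false, which illustrates the abstract's point that the p-value alone cannot evaluate a finding.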
Analysing large numbers of variants simultaneously is now becoming feasible, thanks to developments in Bayesian stochastic search methods. We pose the problem of SNP selection as variable selection in a regression model. In contrast to single-SNP tests, this approach simultaneously models the effect of all SNPs. SNPs are selected by a Bayesian interpretation of the lasso (Tibshirani, 1996): the maximum a posteriori (MAP) estimate of the regression coefficients, which have been given independent double exponential prior distributions. The double exponential distribution is an example of a shrinkage prior; MAP estimates under shrinkage priors can be exactly zero, so all SNPs with non-zero regression coefficients are selected. In addition to the commonly used double exponential (Laplace) prior, we also implement the normal exponential gamma prior distribution. We show that use of the Laplace prior improves SNP selection in comparison with single-SNP tests, and that the normal exponential gamma prior leads to a further improvement. Our method is fast and can handle very large numbers of SNPs: we demonstrate its performance using both simulated and real genome-wide data sets with 500K SNPs, which can be analysed in 2 hours on a desktop workstation.

Mickael Guedj 1,2, Jerome Wojcik 2, Gregory Nuel 1
1 Laboratoire Statistique et Génome, Université d'Evry, Evry, France
2 Serono Pharmaceutical Research Institute, Plan-les-Ouates, Switzerland
Keywords: Local Replication, Local Score, Association

In gene-mapping, replication of initial findings has been put forward as the approach of choice for filtering false positives from true signals for underlying loci. In practice, however, such replications are rarely observed. Besides statistical and technical factors (lack of power, multiple testing, stratification, quality control, etc.), inconsistent conclusions obtained from independent populations might result from real biological differences.
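A minimal version of lasso-style SNP selection can be sketched with plain coordinate descent. This implements only the Laplace-prior (MAP) case on simulated data; the normal exponential gamma prior and the scale of real scans are not attempted here:

```python
import random

random.seed(11)

def soft_threshold(z, lam):
    """Lasso soft-thresholding operator."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=20):
    """Coordinate-descent lasso (Tibshirani, 1996) on centred data."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j))
                for i in range(n)
            )
            denom = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / denom
    return beta

# Simulated genotypes: 200 individuals, 10 SNPs coded 0/1/2; SNPs 0 and 3 causal
n, p = 200, 10
G = [[random.randint(0, 2) for _ in range(p)] for _ in range(n)]
y = [1.5 * g[0] - 1.5 * g[3] + random.gauss(0.0, 0.5) for g in G]

# Centre genotype columns and phenotype (no intercept in the model)
col_mean = [sum(G[i][j] for i in range(n)) / n for j in range(p)]
X = [[G[i][j] - col_mean[j] for j in range(p)] for i in range(n)]
y_mean = sum(y) / n
yc = [v - y_mean for v in y]

beta = lasso_cd(X, yc, lam=25.0)
print([round(b, 2) for b in beta])  # only positions 0 and 3 should be non-zero
```

The exact zeros produced by soft-thresholding are the MAP-with-Laplace-prior selection property the abstract exploits.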
In particular, the high degree of variation in the strength of LD among populations of different origins is a major challenge to the discovery of genes. Seeking Local Replications (defined as the presence of a signal of association in the same genomic region among populations) instead of strict replications (same locus, same risk allele) may lead to more reliable results. Recently, a multi-marker approach based on the Local Score statistic has been proposed as a simple and efficient way to select candidate genomic regions at the first stage of genome-wide association studies. Here we propose an extension of this approach adapted to replicated association studies. Based on simulations, this method appears promising. In particular, it outperforms classical single-marker strategies in detecting modest-effect genes. Additionally, it constitutes, to our knowledge, the first framework dedicated to the detection of such Local Replications.

Juliet Chapman 1, Claudio Verzilli 1, John Whittaker 1
1 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK
Keywords: FDR, Association studies, Bayesian model selection
As genome-wide association studies become commonplace, there is debate as to how such studies might be analysed and what we might hope to gain from the data. It is clear that standard single-locus approaches are limited in that they do not adjust for the effects of other loci, and problematic since it is not obvious how to adjust for multiple comparisons. False discovery rates have been suggested, but it is unclear how well these will cope with highly correlated genetic data. We consider the validity of standard false discovery rates in large-scale association studies. We also show that a Bayesian procedure has advantages in detecting causal loci amongst a large number of dependent SNPs, and investigate properties of a Bayesian FDR.
Peter Kraft 1
1 Harvard School of Public Health, Boston, USA
Keywords: Gene-environment interaction, genome-wide association scans
Appropriately analyzed two-stage designs, where a subset of available subjects is genotyped on a genome-wide panel of markers at the first stage and a much smaller subset of the most promising markers is then genotyped on the remaining subjects, can have nearly as much power as a single-stage study in which all subjects are genotyped on the genome-wide panel, yet can be much less expensive. Typically, the "most promising" markers are selected based on evidence for a marginal association between genotypes and disease. Subsequently, the few markers found to be associated with disease at the end of the second stage are interrogated for evidence of gene-environment interaction, mainly to understand their disease etiology and public health impact. However, this approach may miss variants which have a sizeable effect restricted to one exposure stratum and therefore only a modest marginal effect. We have proposed using information on the joint effects of genes and a discrete list of environmental exposures at the initial screening stage to select promising markers for the second stage [Kraft et al., Hum Hered 2007]. This approach optimizes power to detect both variants that have a sizeable marginal effect and variants that have a small marginal effect but a sizeable effect in a stratum defined by an environmental exposure. As an example, I discuss a proposed genome-wide association scan for Type II diabetes susceptibility variants based in several large nested case-control studies.
Beate Glaser 1, Peter Holmans 1
1 Biostatistics and Bioinformatics Unit, Cardiff University School of Medicine, Heath Park, Cardiff, UK
Keywords: Combined case-control and trios analysis, Power, False-positive rate, Simulation, Association studies
The statistical power of genetic association studies can be enhanced by combining the analysis of case-control samples with that of parent-offspring trio samples. Various combined analysis techniques have recently been developed; as yet, there have been no comparisons of their power. This work was performed with the aim of identifying the most powerful method among the available combined techniques, including test statistics developed by Kazeem and Farrall (2005), Nagelkerke and colleagues (2004) and Dudbridge (2006), as well as a simple combination of χ²-statistics from single samples. Simulation studies were performed to investigate their power under different additive, multiplicative, dominant and recessive disease models. False-positive rates were determined by studying the type I error rates under null models, including models with unequal allele frequencies between the single case-control and trio samples. We identified three techniques with equivalent power and false-positive rates, which included modifications of the three main approaches: 1) the unmodified combined odds ratio estimate by Kazeem & Farrall (2005), 2) a modified approach to the combined risk ratio estimate by Nagelkerke & colleagues (2004), and 3) a modified technique for a combined risk ratio estimate by Dudbridge (2006). Our work highlights the importance of studies investigating the test performance criteria of novel methods, as they will help users select the optimal approach within a range of available analysis techniques.

David Almorza 1, M.V. Kandus 2, Juan Carlos Salerno 2, Rafael Boggio 3
1 Facultad de Ciencias del Trabajo, University of Cádiz, Spain
2 Instituto de Genética IGEAF, Buenos Aires, Argentina
3 Universidad Nacional de La Plata, Buenos Aires, Argentina
Keywords: Principal component analysis, maize, ear weight, inbred lines
The objective of this work was to evaluate the relationships among different traits of the ear of maize inbred lines and to group genotypes according to their performance. Ten inbred lines developed at IGEAF (INTA Castelar) and five public inbred lines used as checks were studied. A field trial was carried out in Castelar, Buenos Aires (34° 36' S, 58° 39' W), using a completely randomized design with three replications. At harvest, individual weight (P.E.), diameter (D.E.), row number (N.H.) and length (L.E.) of the ear were assessed. A principal component analysis, PCA (Infostat 2005), was used, and the variability of the data was depicted with a biplot. Principal components 1 and 2 (CP1 and CP2) explained 90% of the data variability. CP1 was correlated with P.E., L.E. and D.E., while CP2 was correlated with N.H. We found that individual weight (P.E.) was more strongly correlated with the diameter of the ear (D.E.) than with its length (L.E.). Five groups of inbred lines were distinguished: high P.E. and mean N.H. (04-70, 04-73, 04-101 and MO17); high P.E. but lower N.H. (04-61 and B14); mean P.E. and N.H. (B73, 04-123 and 04-96); high N.H. but lower P.E. (LP109, 04-8, 04-91 and 04-76); and low P.E. and low N.H. (LP521 and 04-104). The use of PCA showed which variables had most influence on ear weight and how they are correlated. Moreover, the different groups found with this analysis allow the evaluation of inbred lines by several traits simultaneously.
Sven Knüppel 1, Anja Bauerfeind 1, Klaus Rohde 1
1 Department of Bioinformatics, MDC Berlin, Germany
Keywords: Haplotypes, association studies, case-control, nuclear families
The era of gene chip technology provides a plethora of phase-unknown SNP genotypes with which to find significant associations with a genetic trait. To circumvent the possibly low information content of a single SNP, one groups successive SNPs and estimates haplotypes. Haplotype estimation, however, may reveal ambiguous haplotype pairs and bias the application of statistical methods. Zaykin et al. (Hum Hered, 53:79-91, 2002) proposed the construction of a design matrix to take this ambiguity into account. Here we present a set of functions written for the statistical package R which carry out haplotype estimation on the basis of the EM algorithm for individuals (case-control) or nuclear families. The construction of a design matrix on the basis of estimated haplotypes or haplotype pairs allows the application of standard methods for association studies (linear and logistic regression), as well as statistical methods such as haplotype-sharing statistics and the TDT test. Applications of these methods to genome-wide association screens will be demonstrated.

Manuela Zucknick 1, Chris Holmes 2, Sylvia Richardson 1
1 Department of Epidemiology and Public Health, Imperial College London, UK
2 Department of Statistics, Oxford Center for Gene Function, University of Oxford, UK
Keywords: Bayesian, variable selection, MCMC, large p, small n, structured dependence
In large-scale genomic applications, vast numbers of markers or genes are scanned to find a few candidates which are linked to a particular phenotype. Statistically, this is a variable selection problem in the "large p, small n" situation, where many more variables than samples are available.
An additional feature is the complex dependence structure which is often observed among the markers/genes, due to linkage disequilibrium or their joint involvement in biological processes. Bayesian variable selection methods using indicator variables are well suited to the problem. Binary phenotypes like disease status are common, and both Bayesian probit and logistic regression can be applied in this context. We argue that logistic regression models are both easier to tune and easier to interpret than probit models, and implement the approach of Holmes & Held (2006). Because the model space is vast, MCMC methods are used as stochastic search algorithms with the aim of quickly finding regions of high posterior probability. In a trade-off between fast-updating but slow-moving single-gene Metropolis-Hastings samplers and computationally expensive full Gibbs sampling, we propose to exploit the dependence structure among the genes/markers to help decide which variables to update together. Parallel tempering methods are also used to aid bold moves and help avoid getting trapped in local optima. Mixing and convergence of the resulting Markov chains are evaluated and compared to standard samplers in both a simulation study and an application to a gene expression data set. Reference: Holmes, C. C. & Held, L. (2006) Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis 1, 145–168.

Dawn Teare 1
1 MMGE, University of Sheffield, UK
Keywords: CNP, family-based analysis, MCMC
Evidence is accumulating that segmental copy number polymorphisms (CNPs) may represent a significant portion of human genetic variation. These highly polymorphic systems require handling as phenotypes rather than as co-dominant markers, placing new demands on family-based analyses.
We present an integrated approach to meet these challenges in the form of a graphical model, where the underlying discrete CNP phenotype is inferred from the (single or replicate) quantitative measure within the analysis, whilst assuming an allele-based system segregating through the pedigree. [source]
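The Hoggart et al. abstract above rests on a key property of the Laplace (double exponential) prior: the MAP estimate of a regression coefficient can be exactly zero, which is what turns estimation into SNP selection. A minimal sketch of that mechanism in the simplest (one-dimensional, orthonormal-design) case, with hypothetical effect sizes and penalty:

```python
def soft_threshold(z, lam):
    """MAP estimate of a coefficient under a Laplace prior in the
    orthonormal-design case, where z is the least-squares estimate and
    lam the penalty: estimates are shrunk toward zero and set exactly
    to zero whenever |z| <= lam (the SNP is then not selected)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Hypothetical per-SNP least-squares effects; small ones are zeroed
# out, larger ones survive but are shrunk by lam.
betas = [soft_threshold(z, 0.5) for z in [0.3, -0.2, 1.4, -2.0]]
```

Heavier-tailed priors such as the normal exponential gamma mentioned in the abstract shrink large effects less aggressively while still zeroing small ones.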


    The human leucocyte antigen-G 14-basepair polymorphism correlates with graft-versus-host disease in unrelated bone marrow transplantation for thalassaemia

    BRITISH JOURNAL OF HAEMATOLOGY, Issue 2 2007
    Giorgio La Nasa
Summary The presence of the 14-bp insertion polymorphism of the human leucocyte antigen (HLA)-G gene (HLA-G) promotes immune tolerance through increased synthesis of HLA-G molecules. We investigated this polymorphism in a large cohort of 53 thalassaemia patients transplanted from an unrelated donor. Sixteen patients (30·2%) homozygous for the 14-bp deletion had a higher risk of developing acute graft-versus-host disease (aGvHD) than patients homozygous for the 14-bp insertion (−14-bp/−14-bp vs +14-bp/+14-bp: Relative Risk = 15·0; 95% confidence interval 1·59–141·24; P = 0·008). Therefore, the 14-bp polymorphism could be an important predictive factor for aGvHD following bone marrow transplantation. [source]
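Relative risks with Wald-type confidence intervals like the one above recur throughout these abstracts. A minimal sketch of the standard computation from a 2×2 table; the counts below are hypothetical, chosen only to reproduce a point estimate of 15, and the published interval came from the study's own data (possibly via a different method):

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Relative risk of an event in group 1 versus group 2, with a
    Wald (log-scale) confidence interval.
    a = events among the n1 subjects in group 1;
    b = events among the n2 subjects in group 2."""
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for cohort-style counts
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 8 of 16 deletion homozygotes developed aGvHD
# versus 1 of 30 insertion homozygotes.
rr, lo, hi = relative_risk(8, 16, 1, 30)
```

With sparse cells like b = 1, the Wald interval is very wide, which is why published intervals such as 1·59–141·24 span two orders of magnitude.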


    Population-Specific Susceptibility to Crohn's Disease and Ulcerative Colitis; Dominant and Recessive Relative Risks in the Japanese Population

    ANNALS OF HUMAN GENETICS, Issue 2 2010
    Shigeki Nakagome
    Summary Crohn's disease (CD), a type of chronic inflammatory bowel disease (IBD), is commonly found in European and East Asian countries. The calculated heritability of CD appears to be higher than that of ulcerative colitis (UC), another type of IBD. Recent genome-wide association studies (GWAS) have identified more than thirty CD-associated genes/regions in the European population. In the East Asian population, however, a clear association between CD and an associated gene has only been detected with TNFSF15. In order to determine if CD susceptibility differs geographically, nine SNPs from seven of the European CD-associated genomic regions were selected for analysis. The genotype frequencies for these SNPs were compared among the 380 collected Japanese samples, which consisted of 212 IBD cases and 168 controls. We detected a significant association of both CD and UC with only the TNFSF15 gene. Analysis by the modified genotype relative risk test (mGRR) indicated that the risk allele of TNFSF15 is dominant for CD, but is recessive for UC. These results suggest that CD and UC susceptibility differs between the Japanese and European populations. Furthermore, it is also likely that CD and UC share a causative factor which exhibits a different dominant/recessive relative risk in the Japanese population. [source]
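The dominant/recessive distinction drawn in this abstract can be stated in terms of genotype relative risks: with penetrances f0, f1, f2 for carrying 0, 1 or 2 copies of the risk allele, the risks are γ1 = f1/f0 and γ2 = f2/f0. A short sketch with hypothetical penetrances (not values from the study):

```python
def genotype_relative_risks(f0, f1, f2):
    """Genotype relative risks relative to the non-risk homozygote:
    gamma1 = f1/f0 (one risk allele) and gamma2 = f2/f0 (two copies).
    A dominant risk allele shows gamma1 == gamma2 > 1 (one copy is
    enough); a recessive one shows gamma1 == 1 with gamma2 > 1
    (two copies are required)."""
    return f1 / f0, f2 / f0

# Hypothetical penetrances illustrating the two patterns
dom = genotype_relative_risks(0.01, 0.02, 0.02)  # dominant-style risks
rec = genotype_relative_risks(0.01, 0.01, 0.03)  # recessive-style risks
```

In the abstract's terms, TNFSF15 behaves like the first pattern for CD and like the second for UC.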


    Using Case-parent Triads to Estimate Relative Risks Associated with a Candidate Haplotype

    ANNALS OF HUMAN GENETICS, Issue 3 2009
    Min Shi
    Summary Estimating haplotype relative risks in a family-based study is complicated by phase ambiguity and the many parameters needed to quantify relative risks for all possible diplotypes. This problem becomes manageable if a particular haplotype has been implicated previously as relevant to risk. We fit log-linear models to estimate the risks associated with a candidate haplotype relative to the aggregate of other haplotypes. Our approach uses existing haplotype-reconstruction algorithms but requires assumptions about the distribution of haplotypes among triads in the source population. We consider three levels of stringency for those assumptions: Hardy-Weinberg Equilibrium (HWE), random mating, and no assumptions at all. We assessed our method's performance through simulations encompassing a range of risk haplotype frequencies, missing data patterns, and relative risks for either offspring or maternal genetic effects. The unconstrained model provides robustness to bias from population structure but requires excessively large sample sizes unless there are few haplotypes. Assuming HWE accommodates many more haplotypes but sacrifices robustness. The model assuming random mating is intermediate, both in the number of haplotypes it can handle and in robustness. To illustrate, we reanalyze data from a study of orofacial clefts to investigate a 9-SNP candidate haplotype of the IRF6 gene. [source]


    Confidence Intervals for Relative Risks in Disease Mapping

    BIOMETRICAL JOURNAL, Issue 4 2003
    M.D. Ugarte
Abstract Several analyses of the geographic variation of mortality rates have been proposed in the literature. Poisson models allowing the incorporation of random effects to model extra-variability are widely used. The typical modelling approach uses normal random effects to accommodate local spatial autocorrelation. When spatial autocorrelation is absent but overdispersion persists, a discrete mixture model is an alternative approach. However, a technique for identifying regions which have significantly high or low risk in any given area has not yet been developed for the discrete mixture model. Given the importance of this information for epidemiologists formulating hypotheses about the potential risk factors affecting the population, different procedures for obtaining confidence intervals for relative risks are derived in this paper. These methods are the standard information-based method and four others based on bootstrap techniques, namely the asymptotic bootstrap, the percentile bootstrap, the BC bootstrap and the modified information-based method. All of them are compared empirically by their application to mortality data due to cardiovascular diseases in women from Navarra, Spain, during the period 1988–1994. In the small-area example considered here, we find that the information-based method is sensible for estimating standard errors of the component means in the discrete mixture model, but it is not appropriate for providing standard errors of the estimated relative risks and hence for constructing confidence intervals for the relative risk associated with each region. Therefore, the bootstrap-based methods are recommended for this matter. More specifically, the BC method seems to provide better coverage probabilities in the case studied, according to a small-scale simulation study carried out using a scenario as encountered in the analysis of the real data. [source]
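The percentile bootstrap compared in this paper is simple to state: resample with replacement, recompute the statistic each time, and read the interval off the empirical quantiles. A minimal sketch using hypothetical region-level relative risks and the mean as the statistic; the paper itself bootstraps the fitted discrete mixture model rather than raw values:

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for `stat`: resample
    the data with replacement n_boot times, recompute the statistic,
    and take the alpha/2 and 1-alpha/2 empirical quantiles."""
    rng = random.Random(seed)
    boots = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical region-level relative risks
rrs = [0.8, 1.1, 1.3, 0.9, 1.6, 1.2, 1.0, 1.4, 0.7, 1.5]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = percentile_bootstrap_ci(rrs, mean)
```

The BC (bias-corrected) variant favoured by the paper additionally adjusts these quantiles for the estimated bias of the bootstrap distribution.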


Male drugs-related deaths in the fortnight after release from prison: Scotland, 1996–99

    ADDICTION, Issue 2 2003
    Sheila M. Bird
ABSTRACT Aims: To assess whether 15–35-year-old males released after 14+ days' imprisonment in Scotland, 1996–99, had a higher drugs-related death rate in the 2 weeks after release than during the subsequent 10 weeks; a higher than expected death rate from other causes; and whether drugs-related deaths in the first fortnight were three times as many as prison suicides. Design: Confidential linkage of ex-prisoner database against deaths. Setting: Scotland's male prisons and young offenders' institutions during July to December 1996–99; 19 486 index releases after 14+ days' incarceration. Measurements: Relative risk of drugs-related death in the first 2 weeks after release (34 deaths) versus the subsequent 10 weeks (23). Other causes of death (21) relative to expectation. Drugs-related deaths in the first 2 weeks after release relative to suicides in prison (12). Findings: Drugs-related mortality in 1996–99 was seven times higher (95% CI: 3.3–16.3) in the 2 weeks after release than at other times at liberty, and 2.8 times higher than prison suicides (95% CI: 1.5–3.5) by males aged 15–35 years who had been incarcerated for 14+ days. We estimated one drugs-related death in the 2 weeks after release per 200 adult male injectors released from 14+ days' incarceration. Non-drugs-related deaths in the 12 weeks after release were 4.9 times (95% CI: 2.8–7.0) the 4.3 deaths expected. Conclusion: Investment in, and evaluation of, prison-based interventions is needed to substantially reduce drugs-related deaths among recently released prisoners. [source]
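The sevenfold figure above is a ratio of two incidence rates over person-time at liberty. A minimal sketch of that computation: the death counts are taken from the abstract, but the person-time denominators below are simplified to exactly 2 and 10 weeks per index release, ignoring re-incarceration and deaths truncating follow-up, so the resulting interval is illustrative only:

```python
import math

def rate_ratio(d1, t1, d2, t2, z=1.96):
    """Ratio of two incidence rates (events per unit person-time),
    with a log-scale Wald confidence interval using
    se(log RR) = sqrt(1/d1 + 1/d2)."""
    rr = (d1 / t1) / (d2 / t2)
    se = math.sqrt(1 / d1 + 1 / d2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# 34 deaths in the first 2 weeks vs 23 in the next 10 weeks, over
# 19 486 index releases; person-time in release-weeks (simplified).
rr, lo, hi = rate_ratio(34, 2 * 19486, 23, 10 * 19486)
```

Even under these simplifying assumptions the point estimate lands close to the paper's "seven times higher", because the ratio depends only on the death counts and the 2:10 week split.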


    Significance of white blood cell count and its subtypes in patients with acute coronary syndrome

    EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 5 2009
    G. Huang
Abstract Background: Inflammation plays a role in the pathogenesis of coronary atherosclerosis. Materials and methods: Six hundred twenty-three patients with acute coronary syndrome (ACS) referred for coronary angiography for the first time in our hospital were enrolled in this study. White blood cell count and its subtypes were measured on admission. The study population was divided into three groups based on total white blood cell count and followed up. Clinical end points were major adverse cardiac events (MACEs), including cardiogenic death, stroke, heart failure, non-fatal myocardial infarction and rehospitalization for angina pectoris. Results: The median age was 68 years (range 31–92) and 64·2% of the patients were men. The median white blood cell count was 6·48 × 109 L,1 (range 2·34–27·10 × 109 L,1). The median follow-up duration was 21 months (range 1–116) and MACEs occurred in 167 patients. The multivariable Cox proportional hazards regression model revealed that neutrophil count (Relative risk = 1·098, 95% Confidence interval (CI): 1·010–1·193, P = 0·029) was a risk factor for MACEs. The logistic regression model revealed that lymphocyte count (Odds ratio (OR) = 1·075, 95% CI: 1·012–1·142, P = 0·018) and monocyte count (OR = 8·578, 95% CI: 2·687–27·381, P < 0·001) were predictive of stenosis ≥ 75%; neutrophil proportion (OR = 1·060, 95% CI: 1·007–1·115, P = 0·026) and monocyte count (OR = 12·370, 95% CI: 1·298–118·761, P = 0·029) were predictive of the presence of multivessel disease. Kaplan–Meier analysis of short-term and long-term cumulative survival showed no statistically significant differences among the three groups. Conclusions: Neutrophil count adds prognostic information for MACEs in ACS. Monocyte count and lymphocyte count are predictive of the severity of coronary atherosclerosis. [source]
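The odds ratios quoted above come from exponentiating logistic-regression coefficients. A minimal sketch of that conversion; the coefficient and standard error below are hypothetical, back-calculated to land near the monocyte OR of 8·578 reported in the abstract:

```python
import math

def odds_ratio_from_coef(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a Wald confidence interval:
    OR = exp(beta), CI = exp(beta -/+ z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient for monocyte count (per 10^9 cells/L)
or_, lo, hi = odds_ratio_from_coef(beta=2.149, se=0.59)
```

The asymmetry of the published interval (2·687–27·381 around 8·578) is characteristic of this construction: the interval is symmetric on the log scale, not the OR scale.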


    Signs and symptoms at diagnosis of amyotrophic lateral sclerosis: a population-based study in southern Italy

    EUROPEAN JOURNAL OF NEUROLOGY, Issue 7 2006
    S. Zoccolella
Amyotrophic lateral sclerosis (ALS) diagnostic criteria are used to select patients for clinical trials based on different levels of diagnostic certainty, according to the spread of upper (UMN) and lower motoneuron (LMN) signs in different anatomical regions. However, the clinical presentation of ALS patients is extremely variable, and this can delay the time to diagnosis and decrease the likelihood of trial entry. The aims of the study were to describe the signs and symptoms at diagnosis in a population-based incident cohort of ALS cases, using the El Escorial criteria (EEC) and the revised Airlie House criteria (AHC). The source of the study was a prospective population-based registry established in Puglia, southern Italy, in 1997. The diagnosis and the classification of the cases were based on the EEC and AHC. All incident ALS cases during the period 1998–1999 were enrolled and followed up. During the surveillance period, we identified 130 ALS incident cases, and bulbar-onset ALS represented 20% of our cohort. The highest risk for bulbar onset was among subjects aged >75 years [RR: 20.1, 95% confidence interval (CI) 3.4–118.0] compared with subjects aged <55 years, and among females compared with males (relative risk (RR): 2.75, 95% CI: 1–7.3). The vast majority of patients (72%) reported progressive muscle weakness in the limbs as the presenting symptom. Eighty percent of cases presented with concomitant bulbar or spinal involvement; UMN signs in the bulbar region were present in 24% of cases, and any motoneuronal sign in the thoracic region in only 15% of cases. In this population-based series, progressive muscle weakness was the most common presenting sign; bulbar onset was associated with advanced age and female sex. UMN signs in the bulbar region and any motoneuronal sign in the thoracic region were observed in 20% of our case series.
This may represent the main limitation in demonstrating the spread of signs during diagnostic assessment for inclusion in epidemiological studies and clinical trials. [source]


    MNS16A minisatellite genotypes in relation to risk of glioma and meningioma and to glioblastoma outcome

    INTERNATIONAL JOURNAL OF CANCER, Issue 4 2009
    Ulrika Andersson
Abstract The human telomerase reverse transcriptase (hTERT) gene is upregulated in a majority of malignant tumours. A variable tandem repeat, MNS16A, has been reported to be of functional significance for hTERT expression. Published data on the clinical relevance of MNS16A variants in brain tumours have been contradictory. The present population-based study in the Nordic countries and the United Kingdom evaluated brain-tumour risk and survival in relation to MNS16A minisatellite variants in 648 glioma cases, 473 meningioma cases and 1,359 age, sex and geographically matched controls. By PCR-based genotyping, all study subjects with fragments of 240 or 271 bp were judged as having short (S) alleles and subjects with 299 or 331 bp fragments as having long (L) alleles. Relative risk of glioma or meningioma was estimated with logistic regression adjusting for age, sex and country. Overall survival was analysed using Kaplan–Meier estimates, and equality of survival distributions using the log-rank test and Cox proportional hazard ratios. The MNS16A genotype was not associated with risk of occurrence of glioma, glioblastoma (GBM) or meningioma. For GBM there were median survivals of 15.3, 11.0 and 10.7 months for the LL, LS and SS genotypes, respectively; the hazard ratio for the LS genotype compared with LL was significantly increased (HR 2.44, 95% CI 1.56–3.82) and that for the SS genotype versus LL was nonsignificantly increased (HR 1.46, 95% CI 0.81–2.61). When comparing LL with carrying one of the potentially functional variants (LS or SS), the HR was 2.10 (1.41–3.1). However, functionality was not supported, as there was no trend towards increasing HR with the number of S alleles. Collected data from our and previous studies regarding both risk and survival for the MNS16A genotypes are contradictory and warrant further investigation. © 2009 UICC [source]
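The survival comparisons in this abstract rest on the Kaplan–Meier estimator: at each event time the survival curve is multiplied by 1 − d/n, where d deaths occur among the n subjects still at risk, and censored subjects leave the risk set without forcing a step. A minimal sketch with hypothetical follow-up times (not study data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. `times` are follow-up times and
    `events` flags (1 = death observed, 0 = censored). Returns the
    stepwise survival probability at each distinct event time."""
    surv, s = [], 1.0
    for t in sorted(set(t for t, e in zip(times, events) if e)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        n = sum(1 for ti in times if ti >= t)  # still at risk at t
        s *= 1 - d / n
        surv.append((t, s))
    return surv

# Hypothetical GBM follow-up (months): three deaths, one censored case
curve = kaplan_meier([5, 11, 11, 20], [1, 1, 0, 1])
```

Median survivals like the 15.3 vs 11.0 months quoted above are read off such curves as the first time the estimate drops to 0.5 or below.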


    Abortions and breast cancer: Record-based case-control study

    INTERNATIONAL JOURNAL OF CANCER, Issue 5 2003
    Gunnar Erlandsson
Abstract It has been suggested that abortions leave the breast epithelium in a proliferative state with an increased susceptibility to carcinogenesis. Results from previous studies of induced or spontaneous abortions and risk of subsequent breast cancer are contradictory, probably due to methodological considerations. We investigated the relationship between abortions and subsequent breast cancer risk in a case-control study using prospectively recorded exposure information. The study population comprised women recorded in the population-based Swedish Medical Birth Register between 1973 and 1991. Cases were defined by linkage of the birth register to the Swedish Cancer Register, and controls were randomly selected from the birth register. From the subjects' antenatal care records we abstracted prospectively collected information on induced and spontaneous abortions, as well as a number of potential confounding factors. Relative risk of breast cancer was estimated by odds ratios (OR) with 95% confidence intervals (95% CI). A reduced risk of breast cancer was observed for women with a history of at least 1 abortion compared to none (adjusted OR = 0.84, 95% CI = 0.72–0.99). The adjusted OR decreased stepwise with the number of abortions, to 0.59 (95% CI = 0.34–1.03) for 3 or more compared to none. The patterns are similar for induced and spontaneous abortions. In conclusion, neither a history of induced nor spontaneous abortions is associated with an increased risk of breast cancer. Our data suggest a protective effect of pregnancies regardless of outcome. © 2002 Wiley-Liss, Inc. [source]


    Mothers' and fathers' birth characteristics and perinatal mortality in their offspring: a population-based cohort study

    PAEDIATRIC & PERINATAL EPIDEMIOLOGY, Issue 3 2010
    Tone I. Nordtveit
Summary Nordtveit TI, Melve KK, Skjaerven R. Mothers' and fathers' birth characteristics and perinatal mortality in their offspring: a population-based cohort study. Paediatric and Perinatal Epidemiology 2010; 24: 282–292. There is increasing interest in the associations of parental birthweight and gestational age with perinatal outcomes. We investigated perinatal mortality risk in offspring in relation to maternal and paternal gestational age and birthweight. We used population-based generational data from the Medical Birth Registry of Norway, 1967–2006. Singletons in both generations were included, forming 520 794 mother–offspring and 376 924 father–offspring units. Perinatal mortality in offspring was not significantly associated with paternal gestational age or birthweight, whereas it was inversely associated with maternal gestational age. A threefold increased risk of perinatal mortality was found among offspring of mothers born at 28–30 weeks of gestation relative to offspring of mothers born at term (37–43 weeks) (relative risk: 2.9, 95% CI 1.9, 4.6). There was also an overall association between maternal birthweight and offspring perinatal mortality. The relative risk for mothers whose birthweight was <2000 g was 1.5 (95% CI 1.1, 1.9), relative to mothers whose birthweight was 3500–3999 g. However, confined to mothers born at ≥34 weeks of gestation, the birthweight association was not significant. Weight-specific perinatal mortality in offspring was dependent on the birthweight of the mother and the father; that is, offspring who were small relative to their mother's or father's birthweight had increased perinatal mortality. In conclusion, a mother's gestational age, and not her birthweight, was significantly associated with perinatal mortality in the offspring, while there was no such association for the father. [source]


    Relative risk of abnormal karyotype in fetuses found to have an atrioventricular septal defect (AVSD) on fetal echocardiography

    PRENATAL DIAGNOSIS, Issue 2 2005
    Kate Langford
Abstract One hundred and twenty-five fetuses were identified as having an AVSD with normal venous connections, normal arterial connections and normal cardiac situs on fetal echocardiography. Fetal karyotype was known in 111 of these cases. The relative risk of fetal trisomy 21 at mid-trimester was 107 (95% CI 87–127) times the expected number of cases compared with the risk from maternal age alone, and that for trisomy 21, 18 or 13 was 95 (95% CI 79–109). These data may be useful in counselling pregnant women about the risk of fetal karyotypic abnormality after a diagnosis of fetal AVSD. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Smoking and acute urinary retention: The Olmsted County study of urinary symptoms and health status among men

    THE PROSTATE, Issue 7 2009
    Aruna V. Sarma PhD
Abstract BACKGROUND Previous reports have suggested an inverse relationship between smoking and surgery for benign prostatic hyperplasia (BPH). We hypothesized that acute urinary retention (AUR), an adverse outcome of this disease and indication for surgical treatment, may be related to smoking. METHODS Study subjects were randomly selected from Olmsted County men aged 40–79 identified through the Rochester Epidemiology Project. Of the 3,854 eligible men, 2,089 (54%) completed a questionnaire that included the American Urological Association Symptom Score and assessed smoking status. Community medical records were examined for occurrence of AUR with documented catheterization in the subsequent 10 years and occurrence of BPH surgery. Proportional hazards models were used to assess the relationship between baseline smoking status and subsequent retention. RESULTS In the 18,307 person-years of follow-up, 114 men had AUR. When compared to 727 never-smokers, there was a trend among the 336 current smokers to be at lower risk (Relative risk (RR) = 0.62, 95% Confidence Interval (CI) = 0.33, 1.18), whereas the 1,026 former smokers were at similar risk to non-smokers (RR = 1.0, 95% CI = 0.67, 1.46). Among men with moderate-severe symptoms at baseline, current smokers were at lower risk of retention compared to non-smokers (RR = 0.65, 95% CI = 0.22, 1.91), but the association approached the null among those with none-mild symptoms (RR = 0.91, 95% CI = 0.40, 2.06). CONCLUSIONS Community-dwelling men who currently smoke may be at a modestly reduced risk of AUR. The magnitude of this association is sufficiently small that it seems unlikely to explain a sizable proportion of the inverse association between smoking and surgically treated BPH. Prostate 69: 699–705, 2009. © 2009 Wiley-Liss, Inc. [source]


    A prospective study of the effect of nursing home residency on mortality following hip fracture

    ANZ JOURNAL OF SURGERY, Issue 6 2010
    Ian A. Harris
Abstract Background: The strength of nursing home residence as a prognostic indicator of outcome following hip fracture has not previously been examined in Australia. The aim of the study was to examine the influence of nursing home residency on mortality after sustaining an acute hip fracture. Methods: A prospective study of all adults aged 65 years and over presenting to a single tertiary referral hospital for management of a proximal femoral fracture between July 2003 and September 2006. Residential status was obtained at admission. Patients were followed up to September 2007 (minimum 12 months). Relative risk values for mortality were calculated comparing nursing home residents with non-nursing home residents. Survival analysis was performed. Results: Relative risk of death was higher in nursing home patients compared with non-nursing home patients. The difference was greater in the immediate period (30 days) post-injury (relative risk 1.9, 95% confidence interval 1.0–3.6, P = 0.04) than after 12 months (relative risk 1.5, 95% confidence interval 1.2–1.8, P = 0.001). Survival analysis showed that 25% of patients in the nursing home group died by 96 days post-injury, compared with 435 days in the non-nursing home group. Conclusions: Nursing home residence confers an increased risk of death following hip fracture; this difference is greater in the immediate post-injury period. The relative risk of death decreases over time to equal previously reported comparative mortality rates between nursing home residents and community dwellers without hip fracture. [source]


    The Association between Emergency Department Crowding and Analgesia Administration in Acute Abdominal Pain Patients

    ACADEMIC EMERGENCY MEDICINE, Issue 7 2009
    Angela M. Mills MD
    Abstract Objectives: The authors assessed the effect of emergency department (ED) crowding on the nontreatment and delay in treatment for analgesia in patients who had acute abdominal pain. Methods: This was a secondary analysis of prospectively enrolled nonpregnant adult patients presenting to an urban teaching ED with abdominal pain during a 9-month period. Each patient had four validated crowding measures assigned at triage. Main outcomes were the administration of and delays in time to analgesia. A delay was defined as waiting more than 1 hour for analgesia. Relative risk (RR) regression was used to test the effects of crowding on outcomes. Results: A total of 976 abdominal pain patients (mean [±standard deviation] age = 41 [±16.6] years; 65% female, 62% black) were enrolled, of whom 649 (67%) received any analgesia. Of those treated, 457 (70%) experienced a delay in analgesia from triage, and 320 (49%) experienced a delay in analgesia after room placement. After adjusting for possible confounders of the ED administration of analgesia (age, sex, race, triage class, severe pain, final diagnosis of either abdominal pain not otherwise specified or gastroenteritis), increasing delays in time to analgesia from triage were independently associated with all four crowding measures, comparing the lowest to the highest quartile of crowding (total patient-care hours RR = 1.54, 95% confidence interval [CI] = 1.32 to 1.80; occupancy rate RR = 1.64, 95% CI = 1.42 to 1.91; inpatient number RR = 1.57, 95% CI = 1.36 to 1.81; and waiting room number RR = 1.53, 95% CI = 1.31 to 1.77). Crowding measures were not associated with the failure to treat with analgesia. Conclusions: Emergency department crowding is associated with delays in analgesic treatment from the time of triage in patients presenting with acute abdominal pain. [source]


    Diclofenac and acute myocardial infarction in patients with no major risk factors

    BRITISH JOURNAL OF CLINICAL PHARMACOLOGY, Issue 5 2007
    Susan S. Jick
    What is already known about this subject
    • We recently published the results of a study on the risk of acute myocardial infarction (AMI) in users of five nonsteroidal anti-inflammatory drugs during the years 2001 to 2005.
    • The results demonstrated, as has been reported in randomized trials, that rofecoxib and celecoxib increase the risk of AMI when taken for at least 10 months.
    • As expected, ibuprofen and naproxen did not materially increase the risk.
    • However, long-term users of diclofenac were at an increased risk of AMI similar to that of users of rofecoxib and celecoxib.
    What this study adds
    • Extensive use of diclofenac, similarly to rofecoxib and celecoxib, substantially increases the risk of AMI.
    • There is little suggestion of such an effect in users of ibuprofen and naproxen.
    Aims To explore further a recent finding that long-term users of diclofenac are at increased risk of acute myocardial infarction (AMI) similar to users of rofecoxib and celecoxib. Methods Using the General Practice Research Database, we conducted three separate nested case–control studies of three nonsteroidal anti-inflammatory drugs (NSAIDs) whose use started after 1 January 1993: diclofenac, ibuprofen and naproxen. Cases of AMI were identified between 1 January 1993 and 31 December 2000. Relative risk (RR) estimates for AMI in patients with no major clinical risk factors were determined for each NSAID according to the number of prescriptions received, compared with one prescription. Results were adjusted for variables possibly related to risk of AMI. Results There was no material elevation of AMI risk according to the number of prescriptions for ibuprofen [RRs and 95% confidence intervals (CIs) 1.0 (0.6, 1.6) and 1.7 (0.9, 3.1) for use of 10–19 and 20+ prescriptions, respectively, compared with one prescription] or naproxen [RRs 1.0 (0.5, 2.2) and 2.0 (0.9, 4.5) for use of 10–19 and 20+ prescriptions, respectively]. However, a substantially increased risk, similar to that obtained in our prior study, was found in patients who received ≥10 prescriptions for diclofenac [RRs 1.9 (1.3, 2.7) and 2.0 (1.3, 3.0) for use of 10–19 and 20+ prescriptions, respectively]. Conclusions Extensive use of diclofenac substantially increases the risk of AMI. There is little suggestion of such an effect in users of ibuprofen and naproxen. [source]


    The impact of ageing on stroke subtypes, length of stay and mortality: study in the province of Teruel, Spain

    ACTA NEUROLOGICA SCANDINAVICA, Issue 6 2003
    P. J. Modrego
    Background and purpose – During the last three decades, there have been important advances in the diagnosis and treatment of stroke, leading to a decline in mortality rates in western countries. However, the longer life expectancy and the higher proportion of elderly people in the structure of the population may partially counteract this positive trend in stroke-related mortality. The purpose of this study was to analyse the impact of a high ageing index of the population on stroke-related variables such as stroke subtypes, length of hospital stay and mortality from stroke. Methods – We analysed the data of 1850 consecutive patients with first-ever stroke retrieved from a prospective registry over a period of 8 years (1994–2001) in the province of Teruel, Spain, with two public hospitals in the catchment area. The mean age was 75.5 years (SD: 9.4) and 62% of patients were male. The variables included in the study were vascular risk factors, stroke subtypes, fatality rate, length of stay and mortality. Mortality was assessed from 1990 to 2000. Results – Arterial hypertension and atrial fibrillation were the most frequent risk factors, with an observed high frequency of cardioembolic stroke. The mean 28-day case fatality rate was 16.6%, ranging from 11.9% in 1994 to 23.4% in 1999. We found complications in 38% of patients, especially in the elderly. Fatality occurred in 20.3% of elderly subjects (65 or over) in comparison with 7.25% of those younger (relative risk: 2.8; 95% CI: 1.47–5.3). Crude mortality rates were higher than for the general population in Spain and ranged from 169/100,000 in 1991 to 139/100,000 in 2000, with higher rates for women. However, the mortality rate age-adjusted to the standard European population was 56.6/100,000 (95% CI: 46–64) in 1999, which was similar to that found in Spain (61/100,000). Conclusions – The impact of ageing on case fatality and mortality by stroke was substantial. Whereas mortality by stroke stabilized after decreasing in our province and in Spain in the last decade, fatality rates have significantly increased in our province because of the high proportion of elderly people and the high rate of post-stroke complications. [source]


    Clopidogrel versus low-dose aspirin as risk factors for epistaxis

    CLINICAL OTOLARYNGOLOGY, Issue 3 2009
    J.W. Rainsbury
    Objectives: To quantify the relative risk of epistaxis for patients taking low-dose aspirin or clopidogrel compared to patients taking neither drug. Design: Case-control study. Setting: Primary care. Participants: 10,241 patients from three GP practices in the West Midlands. Main outcome measures: Epistaxis resulting in presentation to the GP, attendance at Accident & Emergency, or referral to ENT outpatients. Results: There was a significant difference in the proportion of patients with epistaxis across the three groups (χ2 = 84.1; 2 degrees of freedom; P < 0.000001). Relative risk of epistaxis was increased in both the aspirin (RR = 9.04; 95% CI = 5.13–15.96) and clopidogrel (RR = 6.40; 95% CI = 2.33–17.56) groups compared to the no-drug group. There was no increased risk of epistaxis with aspirin compared to clopidogrel (RR = 1.4; 95% CI = 0.6–3.4). Conclusion: There is an increased risk of troublesome epistaxis in patients taking aspirin or clopidogrel. There is no significant difference in risk of epistaxis between the two drug groups. [source]
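    The relative risks with 95% confidence intervals quoted throughout these abstracts all derive from the same basic computation: the ratio of event proportions in two groups, with a Wald interval on the log scale. A minimal sketch follows; the counts are made up for illustration and are not the data of any study above.

    ```python
    import math

    def relative_risk(exposed_events, exposed_total, control_events, control_total):
        """Relative risk of an event in an exposed vs. a control group,
        with a 95% Wald confidence interval computed on the log(RR) scale."""
        p_exposed = exposed_events / exposed_total
        p_control = control_events / control_total
        rr = p_exposed / p_control
        # Standard error of log(RR) for two independent binomial samples
        se = math.sqrt(1 / exposed_events - 1 / exposed_total
                       + 1 / control_events - 1 / control_total)
        lower = math.exp(math.log(rr) - 1.96 * se)
        upper = math.exp(math.log(rr) + 1.96 * se)
        return rr, lower, upper

    # Hypothetical counts: 30 epistaxis cases among 600 exposed patients
    # versus 10 among 1800 unexposed patients.
    rr, lo, hi = relative_risk(30, 600, 10, 1800)
    print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # prints: RR = 9.00 (95% CI 4.43-18.30)
    ```

    Note the asymmetry of the interval around the point estimate: because the interval is symmetric on the log scale, it widens toward the upper bound once exponentiated, which is why published CIs such as 9.04 (5.13–15.96) are skewed high.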


    A prospective single-blind randomized-controlled trial comparing two surgical techniques for the treatment of snoring: laser palatoplasty versus uvulectomy with punctate palatal diathermy

    CLINICAL OTOLARYNGOLOGY, Issue 3 2004
    S. Uppal
    The aim of this study was to compare laser palatoplasty against uvulectomy with punctate palatal diathermy as treatment modalities for snoring. The study design was a prospective, single-blind, randomized-controlled trial. Eighty-three patients entered the trial. After a mean follow-up period of more than 18 months there was no statistically significant difference between the two groups regarding the patient perception of benefit from surgery or the subjective improvement in snoring. However, there was a statistically significant difference in the degree of pain in the immediate postoperative period (mean difference = 22.14, 95% CI = 7.98–36.31, P = 0.003), with the pain being worse in the laser palatoplasty group. Relative risk of complications for laser palatoplasty was 1.42 (95% CI = 0.93–2.17). The snoring scores and Glasgow Benefit Inventory scores decreased with time in both groups, but there was no statistically significant difference between the two groups. [source]


    Multiple adverse outcomes over 30 years following adolescent substance misuse treatment

    ACTA PSYCHIATRICA SCANDINAVICA, Issue 6 2009
    S. Hodgins
    Objective: To compare outcomes over 30 years experienced by individuals who as adolescents entered substance misuse treatment and a general population sample. Method: All 1992 individuals seen at the only clinic for substance misusing adolescents in Stockholm from 1968 to 1971 were compared to 1992 individuals randomly selected from the Swedish population, matched for sex, age and birthplace. Death, hospitalization for physical illness related to substance misuse, hospitalization for mental illness, substance misuse, criminal convictions and poverty were documented from national registers. Results: Relative risks of death, physical illness, mental illness, substance misuse, criminal convictions and poverty were significantly elevated in the clinic compared to the general population sample. After adjustment for substance misuse in adulthood, the risks of death, physical and mental illness, criminality and poverty remained elevated. Conclusion: Adolescents who consult for substance misuse problems are at high risk for multiple adverse outcomes over the subsequent 30 years. [source]


    Multiple primary cancer: an increasing health problem.

    EUROPEAN JOURNAL OF CANCER CARE, Issue 6 2009
    Strategies for prevention in cancer survivors
    LÓPEZ M.L., LANA A., DÍAZ S., FOLGUERAS M.V., SANCHEZ L., COMENDADOR M.A., BELYAKOVA E., RODRÍGUEZ J.M. & CUETO A. (2009) European Journal of Cancer Care Multiple primary cancer: an increasing health problem. Strategies for prevention in cancer survivors This study set out to look for associations between the sites of the first and subsequent tumours in patients with multiple primary cancer (MPC) diagnosed from 1975 to 2002 in the reference hospital of a northern Spanish region, and to propose prevention strategies. Patient and tumour variables were measured. Crude and standardized incidence rates per 100 000 inhabitants were obtained, and the association between MPC incidence and time was analysed by means of linear regression. Relative risks were calculated to analyse associations between tumour sites. A total of 2737 MPC cases were registered (male/female ratio = 2). The percentage of MPC with respect to total cancer increased from 1.78% in the 1975–1979 period to 7.08% in the 2000–2002 period (R2 = 0.92; P = 0.003). A great increase of incidence over time was found (R2 = 0.90; P = 0.004). Breast, prostate and bladder cancers increase the risk of a second tumour in the female genital organs [RR 4.78 (3.84–5.93)], urinary system [RR 3.69 (2.89–4.69)] and male genital organs [RR 3.76 (2.84–4.69)], respectively. The MPC incidence is increasing. Interventions for MPC prevention, according to the European Code against Cancer, should be implemented early after the first cancer, particularly in patients with breast, bladder, prostate, larynx and colon cancers. [source]
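    The MPC abstract reports R2 values from a linear regression of incidence against time. A minimal sketch of how such a fit and its R2 are obtained by ordinary least squares; the year/percentage values below are illustrative placeholders, not the study's data.

    ```python
    def linear_fit_r2(xs, ys):
        """Ordinary least-squares line y = a + b*x and its coefficient
        of determination R2 (share of variance explained by the trend)."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        sxx = sum((x - mean_x) ** 2 for x in xs)
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        b = sxy / sxx                      # slope
        a = mean_y - b * mean_x            # intercept
        ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
        ss_tot = sum((y - mean_y) ** 2 for y in ys)
        return a, b, 1 - ss_res / ss_tot

    # Hypothetical period midpoints and MPC percentages (illustrative only)
    years = [1977, 1982, 1987, 1992, 1997, 2001]
    mpc_pct = [1.8, 2.6, 3.5, 4.6, 5.9, 7.1]
    a, b, r2 = linear_fit_r2(years, mpc_pct)
    print(f"slope = {b:.3f} per year, R2 = {r2:.2f}")  # prints: slope = 0.220 per year, R2 = 0.99
    ```

    An R2 near 1, as in the abstract's 0.90–0.92 range, indicates that a straight-line trend accounts for almost all of the year-to-year variation in incidence.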


    Comparison of carbamazepine and lithium in treatment of bipolar disorder: A systematic review of randomized controlled trials,

    HUMAN PSYCHOPHARMACOLOGY: CLINICAL AND EXPERIMENTAL, Issue 1 2009
    Daniela Ceron-Litvoc
    Abstract Objectives To review data from randomized controlled trials (RCTs) assessing the comparative efficacy of carbamazepine and lithium in treatment of acute manic and maintenance phase of bipolar disorder (BD). Design RCTs were identified through a search strategy that included: electronic databases, reference cross-checking, hand search of non-indexed publications, and book chapters on the treatment of BD comparing carbamazepine with lithium. Outcomes investigated were antimanic effect, trial withdrawal, relapse, hospitalization, need for rescue medication, and presence of adverse effects. Selection of studies and data analysis were performed independently by authors. Whenever possible, data from trials were combined through meta-analyses. Relative risks (RR) were estimated for dichotomous data. Results In acute mania, carbamazepine was similar to lithium on the following outcomes: trial withdrawal due to adverse effects, number of participants with at least one adverse effect, improvement in the Clinical Global Impression (CGI). In acute mania, carbamazepine was associated with fewer trial withdrawals. In maintenance treatment, carbamazepine was similar to lithium in relapses and hospitalization, but there were fewer trial withdrawals due to adverse effects on lithium. Conclusion This review suggests that carbamazepine might be comparable to lithium in terms of efficacy and safety, and therefore a valuable option in the treatment of both manic and maintenance phases. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Nonsteroidal antiinflammatory drug use and risk of bladder cancer in the health professionals follow-up study

    INTERNATIONAL JOURNAL OF CANCER, Issue 10 2007
    Jeanine M. Genkinger
    Abstract Nonsteroidal antiinflammatory drug (NSAID) use, particularly aspirin, may lower the risk of several cancers, including bladder cancer. NSAIDs may reduce development of bladder tumors by decreasing inflammation, inhibiting cyclooxygenase-2, inhibiting proliferation and inducing apoptosis of cancer cells. However, acetaminophen, a major metabolite of phenacetin, may be positively associated with bladder cancer risk. Results from case-control studies on NSAID and acetaminophen use and bladder cancer risk are inconsistent. We investigated the association between NSAID and acetaminophen use and bladder cancer risk in a large cohort of US males. Among 49,448 men in the Health Professionals Follow-Up Study, 607 bladder cancer cases were confirmed during 18 years of follow-up. Relative risks (RR) and 95% confidence intervals (CI) were calculated by Cox proportional hazards models. Multivariate RRs were adjusted for age, current smoking status, pack-years, geographic region and fluid intake. No significant associations with bladder cancer risk, compared with nonuse, were observed for regular aspirin use (≥2 tablets per week) (RR = 0.99, 95% CI 0.83–1.18), ibuprofen (RR = 1.11, 95% CI 0.81–1.54), acetaminophen (RR = 0.96, 95% CI 0.67–1.39) or total NSAID use (not including acetaminophen; RR = 1.01, 95% CI 0.85–1.20). Consistent use (over 6 years) of aspirin, ibuprofen, acetaminophen and total NSAIDs, compared to nonuse, was not associated with bladder cancer risk. No association was observed between aspirin frequency or dose and bladder cancer risk. We observed no effect modification by smoking, age or fluid intake. Our results suggest that regular NSAID or acetaminophen use has no substantial impact on bladder cancer risk among men. © 2007 Wiley-Liss, Inc. [source]


    Physical activity and lung cancer risk in the European Prospective Investigation into Cancer and Nutrition Cohort

    INTERNATIONAL JOURNAL OF CANCER, Issue 10 2006
    Karen Steindorf
    Abstract Research conducted predominantly in male populations on physical activity and lung cancer has yielded inconsistent results. We examined this relationship among 416,277 men and women from the European Prospective Investigation into Cancer and Nutrition (EPIC). Detailed information on recent recreational, household and occupational physical activity, smoking habits and diet was assessed at baseline between 1992 and 2000. Relative risks (RR) were estimated using Cox regression. During 6.3 years of follow-up we identified 607 men and 476 women with incident lung cancer. We did not observe an inverse association between recent occupational, recreational or household physical activity and lung cancer risk in either males or females. However, we found some reduction in lung cancer risk associated with sports in males (adjusted RR = 0.71; 95% confidence interval 0.50–0.98; highest tertile vs. inactive group), with cycling in females (RR = 0.73; 0.54–0.99), and with non-occupational vigorous physical activity. For occupational physical activity, lung cancer risk was increased for unemployed men (adjusted RR = 1.57; 1.20–2.05) and men with standing occupations (RR = 1.35; 1.02–1.79) compared with sitting professions. There was no evidence of heterogeneity of physical activity associations across countries, or across any of the considered cofactors. For some histologic subtypes, suggestive sex-specific reductions, limited by subgroup sizes, were observed, especially with vigorous physical activity. In total, our study shows no consistent protective associations of physical activity with lung cancer risk. It can be assumed that the elevated risks found for occupational physical activity are not produced mechanistically by physical activity itself but rather reflect exposure to occupation-related lung cancer risk factors. © 2006 Wiley-Liss, Inc. [source]