Significant Findings
Selected Abstracts

Clinical and microbial evaluation of a histatin-containing mouthrinse in humans with experimental gingivitis: a phase-2 multi-center study
JOURNAL OF CLINICAL PERIODONTOLOGY, Issue 2 2002
Thomas Van Dyke

Abstract Objective: P-113, a 12-amino-acid histatin-based peptide, was evaluated in a mouthrinse formulation for safety and efficacy in a phase-2 multi-center clinical study. Method: 294 healthy subjects abstained from oral hygiene procedures and self-administered either 0.01% P-113, 0.03% P-113 or placebo mouthrinse formulations twice daily over a 4-week treatment period. During this time, the safety, anti-gingivitis, and anti-plaque effects of P-113 were evaluated. Results: There was a significant reduction in the change from baseline to Day 22 in bleeding on probing in the 0.01% P-113 treatment group of the intent-to-treat population (p = 0.049). Non-significant trends toward reduction of the other parameters were observed in this population (p ≥ 0.159). A sub-group of subjects which developed significant levels of disease within the four-week timeframe of the study was identified based on baseline gingival index scores ≥ 0.75. Significant findings were observed for bleeding on probing, gingival index and plaque index within this population (p < 0.05). There were no treatment-related adverse events, and there were no adverse shifts in supragingival microflora during the study. Significant amounts of the peptide were retained in the oral cavity following rinsing. Conclusion: These data suggest that P-113 mouthrinse is safe and reduces the development of gingival bleeding, gingivitis and plaque in the human experimental gingivitis model. [source]

National study of information seeking behavior of academic researchers in the United States
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 5 2010
Xi Niu

As new technologies and information delivery systems emerge, the way in which individuals search for information to support research, teaching, and creative activities is changing. To understand different aspects of researchers' information-seeking behavior, this article surveyed 2,063 academic researchers in natural science, engineering, and medical science from five research universities in the United States. A Web-based, in-depth questionnaire was designed to quantify researchers' information searching, information use, and information storage behaviors. Descriptive statistics are reported. Additionally, analysis of results is broken out by institution to compare differences among universities.
Significant findings are reported, with the biggest changes attributable to increased utilization of electronic methods for searching, sharing, and storing scholarly content, as well as for utilizing library services. Generally speaking, researchers in the five universities had similar information-seeking behavior, with small differences attributable to varying academic unit structures and the myriad library services provided at the individual institutions. [source]

Polymorphisms in interleukin-1 receptor-associated kinase 4 are associated with total serum IgE
ALLERGY, Issue 5 2009
M. A. Tewfik

Background: Serum immunoglobulin E (IgE) level is recognized to be under strong genetic control, but the causal and susceptibility genes remain to be identified. We sought to investigate the association between single nucleotide polymorphisms (SNPs) in the Toll-like receptor (TLR) signaling pathway and total serum IgE level. Methods: A population of 206 patients with severe chronic rhinosinusitis (CRS) was used. Precise phenotyping of patients was accomplished by means of a questionnaire and clinical examination. Blood was drawn for measurement of total serum IgE, as well as for DNA extraction. A maximally informative set of SNPs in the TLR1, 2, 3, 4, 6, 9, 10, CD14, MD2, MyD88, IRAK4, and TRAF6 genes was selected and genotyped. Significant findings were replicated in a second independent population of 956 subjects from 227 families with asthma. Results: A total of 97 out of 104 SNPs were successfully genotyped. Three SNPs in IRAK4 (rs1461567, rs4251513, and rs4251559) were associated with total serum IgE levels (P < 0.004). In the replication sample, the same SNPs, with the same orientation of the risk allele, were associated with IgE levels (P < 0.031). Conclusions: These results demonstrate a clear association between polymorphisms in the IRAK4 gene and serum IgE levels in patients with CRS and asthma.
IRAK4 may be important in the regulation of IgE levels in patients with inflammatory diseases of the airways. [source]

Genetic variation in Myzus persicae populations associated with host-plant and life cycle category
ENTOMOLOGIA EXPERIMENTALIS ET APPLICATA, Issue 3 2001
Kiriaki Zitoudi

Abstract Random amplified polymorphic DNA (RAPD) analysis was applied to 96 clones of Myzus persicae (Sulzer) (Homoptera: Aphididae) representing seven populations collected from different host-plants and regions of Greece. Ten decamer random primers were used to evaluate genetic variation among the examined samples. Despite the variability found between clones, no specific RAPD marker was detected that could discriminate the different populations. A significant finding was that aphids from peach and pepper, which were collected far away from tobacco-growing regions, especially those from peach, showed genetic divergence from the tobacco-feeding clones. Moreover, data analysis revealed a significant genetic divergence between holocyclic and anholocyclic populations from tobacco. Lastly, holocyclic clones showed a higher level of estimated heterozygosity than the nonholocyclic (anholocyclic, androcyclic and intermediate) ones. [source]

Novel regions of acquired uniparental disomy discovered in acute myeloid leukemia
GENES, CHROMOSOMES AND CANCER, Issue 9 2008
Manu Gupta

The acquisition of uniparental disomy (aUPD) in acute myeloid leukemia (AML) results in homozygosity for known gene mutations. Uncovering novel regions of aUPD has the potential to identify previously unknown mutational targets. We therefore aimed to develop a map of the regions of aUPD in AML. Here, we have analyzed a large set of diagnostic AML samples (n = 454) from young adults (age: 15-55 years) using genotype arrays. Acquired UPD was found in 17% of the samples, with a nonrandom distribution particularly affecting chromosome arms 13q, 11p, and 11q. Novel recurrent regions of aUPD were uncovered at 2p, 17p, 2q, 17q, 1p, and Xq.
Overall, aUPDs were observed across all cytogenetic risk groups, although samples with aUPD13q (5.4% of samples) belonged exclusively to the intermediate-risk group as defined by cytogenetics. All cases with a high FLT3-ITD level, measured previously, had aUPD13q covering the FLT3 gene. Significantly, none of the samples with FLT3-ITD-/FLT3-TKD+ mutation exhibited aUPD13q. Of the 119 aUPDs observed, the majority (87%) were due to mitotic recombination, while only 13% were due to nondisjunction. This study demonstrates that aUPD is a frequent and significant finding in AML and pinpoints regions that may contain novel mutational targets. © 2008 Wiley-Liss, Inc. [source]

Magnetic-Field-Induced Locomotion of Glass Fibers on Water Surfaces: Towards the Understanding of How Much Force One Magnetic Nanoparticle Can Deliver
ADVANCED MATERIALS, Issue 19 2009
Feng Shi

The amount of force one magnetic nanoparticle (MNP) can deliver is calculated using Fe3O4 MNP building blocks to modify glass fibers. Our results demonstrate that one weight unit of Fe3O4 MNPs can eventually drag ~10,000 times its own weight on a water surface, a significant finding for the development of new magnetic delivery systems and micromanipulators. [source]

A randomized, two-year study of the efficacy of cognitive intervention on elderly people: the Donostia Longitudinal Study
INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 1 2008
Cristina Buiza

Abstract Background: Research on non-pharmacological therapies (cognitive rehabilitation) in old age has been very limited, and most has not considered the effect of interventions of this type over extended periods of time. Objective: To investigate a new cognitive therapy in a randomized study with elderly people who did not suffer cognitive impairment. Methods: The efficacy of this therapy was evaluated by means of post-hoc analysis of 238 people using biomedical, cognitive, behavioural, quality of life (QoL), subjective memory, and affective assessments.
Results: Scores for learning potential and different types of memory (working memory, immediate memory, logic memory) for the treatment group improved significantly relative to the untreated controls. Conclusions: The most significant finding in this study was that learning potential continued at enhanced levels in trained subjects over an intervention period lasting two years, thereby increasing rehabilitation potential and contributing to successful ageing. Copyright © 2007 John Wiley & Sons, Ltd. [source]

The workplace and nurses with a mental illness
INTERNATIONAL JOURNAL OF MENTAL HEALTH NURSING, Issue 6 2009
Terry Joyce

ABSTRACT A qualitative approach was used to explore the workplace experiences of nurses who have a mental illness. Interview transcripts from 29 nurses in New South Wales, Australia were subjected to discourse analysis. One significant finding was a theme depicting the need for support and trust. This superordinate theme encompassed four subelements: declaring mental illnesses, collegial support, managerial support, and enhancing support. Most of the participants portrayed their workplace as an unsupportive and negative environment. A number of colleagues were depicted as having little regard for the codes for professional nursing practice. This paper shows how nurses in the study dealt with the workplace support associated with mental illness. [source]

Nurses with mental illness: Their workplace experiences
INTERNATIONAL JOURNAL OF MENTAL HEALTH NURSING, Issue 6 2007
Terry Joyce

ABSTRACT: This qualitative study explored the workplace experiences of nurses who have a mental illness. The ultimate goal of the study was to gain insights that would lead to the development of more supportive environments for these nurses. Interviews were conducted with 29 nurses in New South Wales, Australia. The interview transcripts were subjected to discourse analysis. One significant finding was the theme 'Crossing the boundary: from nurse to patient'.
This encompassed three sub-themes: 'Developing a mental illness', 'Hospital admission', and 'Being managed'. For most of the participants, being a nurse with a mental illness was largely a negative experience. Often, nurses without a mental illness actively sought to reform the participants' behaviour to enforce what was seen as appropriate conduct for a professional nurse. This paper shows how nurses in this study dealt with the early concerns associated with mental illness. [source]

Fit for purpose: the relevance of Masters preparation for the professional practice of nursing. A 10-year follow-up study of postgraduate nursing courses in the University of Edinburgh
JOURNAL OF ADVANCED NURSING, Issue 5 2000

Continuing education is now recognized as essential if nursing is to develop as a profession. United Kingdom Central Council for Nursing, Midwifery and Health Visiting (UKCC) consultations are currently seeking to establish appropriate preparation for a 'higher level of practice' in the United Kingdom. The relevance of Masters-level education to developing professional roles merits examination. To this end, the results of a 10-year follow-up study of graduates from the Masters programme at the University of Edinburgh are reported. The sample was the entire cohorts of nurses who graduated with a Masters degree in the academic sessions from 1986 to 1996. A postal questionnaire was designed consisting of mainly closed questions to facilitate coding and analysis, but also including some open questions to allow more qualitative data to be elicited. The findings indicated clearly that the possession of an MSc degree opened up job opportunities and, where promotion was not identified, the process of study at a higher level was still perceived as relevant to the work environment.
This applied as much to the context of clinical practice as to that of management, education or research. The perceived enhancement of clinical practice from a generic Masters programme was considered a significant finding. Also emerging from the data was an associated sense of personal satisfaction and achievement that related to the acquisition of academic skills and the ultimate reward of Masters status. The concept of personal growth, however, emerged as a distinct entity from that of satisfaction and achievement, relating specifically to the concept of intellectual sharing, the broadening of perspectives and the development of advanced powers of reasoning. [source]

Experimental addition of greenery reduces flea loads in nests of a non-greenery using species, the tree swallow Tachycineta bicolor
JOURNAL OF AVIAN BIOLOGY, Issue 1 2007
Dave Shutler

Several bird species, including cavity-nesters such as European starlings Sturnus vulgaris, add to their nests green sprigs of plants such as yarrow Achillea millefolium that are rich in volatile compounds. In this field study on another cavity-nester, the tree swallow Tachycineta bicolor, we tested whether yarrow reduced ectoparasite loads (the nest protection hypothesis), stimulated nestling immune systems (the drug hypothesis), or had other consequences for nestling growth or parental reproductive success (predicted by both preceding hypotheses). Tree swallows do not naturally add greenery to their nests, and thus offer several advantages in testing for effects of greenery independent of other potentially confounding explanations for the behaviour. We placed fresh yarrow in 23 swallow nests on the day the first egg was laid, replenishing every two days until clutch completion (i.e., three times), and at 44 control nests, nesting material was simply touched. At 12 days of age, we measured nestling body size and mass, and took blood smears to do differential white blood cell counts.
We subsequently determined the number and proportion of young fledging from nests and the number of fleas remaining after fledging. Higher humidity was associated with higher flea numbers, whereas the number of feathers in the nest was not. Our most significant finding was that an average of 773 fleas (Ceratophyllus idius) was found in control nests, versus 419 in yarrow nests. Possibly, parents compensate for blood that nestlings lose to ectoparasites by increasing food delivery, because we detected no differences between treatments in nestling mass, nestling leukocyte profiles, or proportion of young fledging, or relative to flea numbers. Our results provide no support for the drug hypothesis and strong support for the nest protection hypothesis. [source]

Managing the self: living with an indwelling urinary catheter
JOURNAL OF CLINICAL NURSING, Issue 7b 2007
Debbie Kralik MN

Aims: This paper reports the findings of a study that aimed to understand the perspectives of community-dwelling adults who lived with a permanently indwelling urinary catheter. The objectives of the research were to: reveal the participants' perspective of living in the community with a permanent indwelling urinary catheter, raise awareness of the experiences of catheterized men and women, and inform community nursing practice. Background: Catheter care is a common nursing intervention. Clinical Nurse Consultants (CNCs) with a focus on continence drove this inquiry because it was believed that community nurses may underestimate the impact that a permanently indwelling catheter may have on people's lives. Design: Structured interviews were undertaken with twelve men and nine women (n = 21), aged between 24 and 82 years, who had had a permanently indwelling catheter (either urethral or suprapubic) for longer than six months. Analysis of the interview transcripts was a collaboration between the researchers and clinicians.
Results: The most significant finding was that participants wanted to learn urinary catheter self-care, as this allowed them to take control and gave relevance to their daily life. Data revealed a learning pattern consisting of seven interrelated themes as people learned to self-manage: (i) resisting the intrusion of a catheter, (ii) reckoning with the need for a catheter, (iii) being vigilant for signs of problems, (iv) reconciling between the needs of self and others, (v) reclaiming life, (vi) managing self-care, and (vii) taking control. Conclusions: We do not suggest that people undergo a straightforward path toward catheter self-care; rather, the seven interactive themes we have identified may be useful for observation in nursing practice whilst sensitizing nurses to clients' experiences of living with a catheter. Relevance to clinical practice: Promoting self-care of a catheter is not simply about educating clients about their condition or giving them relevant information. It is intrinsically a learning process: observing responses to everyday events, such as the identification of the different sounds and sensations that may alert the individual to a full catheter bag, urine that has stopped flowing or signs of impending infection. [source]

Therapists' Prototypical Assessment of Domestic Violence Situations
JOURNAL OF MARITAL AND FAMILY THERAPY, Issue 2 2007
Kelly A. Blasko

Prototypical perceptions by therapists have the potential to influence the therapeutic process of assessment. The purpose of this study is to begin to develop an understanding of how prototypes might affect marriage and family therapists' assessments of domestic violence situations. Participants evaluated one of three domestic violence scenarios that were identical in dynamics but different in terms of the sexual orientation of the couple (i.e., heterosexual, gay, or lesbian).
The most significant finding was that initial assessments of victim and perpetrator identification and power attribution differed depending on the sexual orientation of the couple. The "man as perpetrator, woman as victim" prototypical paradigm for heterosexual domestic violence emerged. In the same-sex scenarios, "both" partners were often perceived to be indicated as both victim and perpetrator. [source]

Sinusoidal heart rate pattern: Reappraisal of its definition and clinical significance
JOURNAL OF OBSTETRICS AND GYNAECOLOGY RESEARCH (ELECTRONIC), Issue 3 2004
Houchang D. Modanlou

Abstract Objectives: To address the clinical significance of sinusoidal heart rate (SHR) pattern and review its occurrence, define its characteristics, and explain its physiopathology. Background: In 1972, Manseau et al. and Kubli et al. described an undulating waveform alternating with a flat or smooth baseline fetal heart rate (FHR) in severely affected, Rh-sensitized and dying fetuses. This FHR pattern was called 'sinusoidal' because of its sine waveform. Subsequently, Modanlou et al. described SHR pattern associated with fetal-to-maternal hemorrhage causing severe fetal anemia and hydrops fetalis. Both Manseau et al. and Kubli et al. stated that this particular FHR pattern, whatever its pathogenesis, was an extremely significant finding that implied severe fetal jeopardy and impending fetal death. Undulating FHR pattern: An undulating FHR pattern may be due to the following: (1) true SHR pattern; (2) drugs; (3) pre-mortem FHR pattern; (4) pseudo-SHR pattern; and (5) equivocal FHR patterns. Fetal conditions associated with SHR pattern: SHR pattern has been reported with the following fetal conditions: (1) severe fetal anemia of several etiologies; (2) effects of drugs, particularly narcotics; (3) fetal asphyxia/hypoxia; (4) fetal infection; (5) fetal cardiac anomalies; (6) fetal sleep cycles; and (7) sucking and rhythmic movements of the fetal mouth.
Definition of true SHR pattern: Modanlou and Freeman proposed the following definition for the interpretation of true SHR pattern: (a) stable baseline FHR of 120-160 bpm; (b) amplitude of 5-15 bpm, rarely greater; (c) frequency of 2-5 cycles per minute; (d) fixed or flat short-term variability; (e) oscillation of the sinusoidal wave above and below the baseline; and (f) no areas of normal FHR variability or reactivity. Physiopathology: Since its early recognition, the physiopathology of SHR has been a matter of debate. Murata et al. noted a rise of arginine vasopressin levels in the blood of the posthemorrhagic/anemic fetal lamb. Further work by the same authors revealed that, with chemical or surgical vagotomy, arginine vasopressin infusion produced SHR pattern, thus establishing the role of autonomic nervous system dysfunction combined with the increase in arginine vasopressin as the etiology. Conclusion: SHR is a rare occurrence. A true SHR is an ominous sign of fetal jeopardy needing immediate intervention. The correct diagnosis of true SHR pattern should also include the fetal biophysical profile and the absence of drugs such as narcotics. [source]

Job satisfaction or production? How staff and leadership understand operating room efficiency: a qualitative study
ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 10 2008

Background: How to increase efficiency in operating departments has been widely studied. However, there is no overall definition of efficiency. Supervisors urging staff to work efficiently may meet strong reactions due to staff believing that demands for efficiency mean just stress at work. Differences in how efficiency is understood may constitute an obstacle to supervisors' efforts to promote it. This study aimed to explore how staff and leadership understand operating room efficiency. Methods: Twenty-one members of staff and supervisors in an operating department in a Swedish county hospital were interviewed.
The analysis was performed with a phenomenographic approach, which aims to discover the variations in how a phenomenon is understood by a group of people. Results: Six categories were found in the understanding of operating room efficiency: (A) having the right qualifications; (B) enjoying work; (C) planning and having good control and overview; (D) each professional performing the correct tasks; (E) completing a work assignment; and (F) producing as much as possible per time unit. The most significant finding was that most of the nurses and assistant nurses understood efficiency as individual knowledge and experience emphasizing the importance of the work process, whereas the supervisors and physicians understood efficiency in terms of production per time unit or completing an assignment. Conclusions: The concept 'operating room efficiency' is understood in different ways by leadership and staff members. Supervisors who are aware of this variation will have better prerequisites for defining the concept and for creating a common platform towards becoming efficient. [source]

Reciprocal hybrid formation of Spartina in San Francisco Bay
MOLECULAR ECOLOGY, Issue 6 2000
C. K. Anttila

Abstract Diversity in the tRNALEU1 intron of the chloroplast genome of Spartina was used to study hybridization of the native California cordgrass, Spartina foliosa, with S. alterniflora, introduced to San Francisco Bay ~25 years ago. We sequenced 544 bases of the tRNALEU1 intron and found three polymorphic sites: a pyrimidine transition at site 126 and transversions at sites 382 and 430. Spartina from outside of San Francisco Bay, where hybridization between these species is impossible, gave cpDNA genotypes of the parental species. S. foliosa had a single chloroplast haplotype, CCT, and this was unique to California cordgrass. S. alterniflora from the native range along the Atlantic coast of North America had three chloroplast haplotypes: CAT, TAA, and TAT.
Hybrids were discriminated by random amplified polymorphic DNA (RAPD) phenotypes developed in a previous study. We found one hybrid that contained a cpDNA haplotype unknown in either parental species (TCT). The most significant finding was that hybridization proceeds in both directions, assuming maternal inheritance of cpDNA; 26 of the 36 hybrid Spartina plants from San Francisco Bay contained the S. foliosa haplotype, nine contained haplotypes of the invading S. alterniflora, and one had the cpDNA of unknown origin. Furthermore, cpDNA of both parental species was distributed throughout the broad range of RAPD phenotypes, suggesting ongoing contributions to the hybrid swarm from both. The preponderance of S. foliosa cpDNA has entered the hybrid swarm indirectly, we propose, from F1s that backcross to S. foliosa. Flowering of the native precedes that of the invading species by several weeks, with little overlap between the two. Thus, F1 hybrids would be rare and sired by the last S. foliosa pollen upon the first S. alterniflora stigmas. The native species produces little pollen, and this has low viability. An intermediate flowering time of hybrids, as well as pollen that is more vigorous and abundant than that of the native species, would predispose F1s to high fitness in a vast sea of native ovules. Thus, spread of hybrids to other S. foliosa marshes could be an even greater threat to the native species than introductions of alien S. alterniflora. [source]

Oscillations in the basal ganglia under normal conditions and in movement disorders
MOVEMENT DISORDERS, Issue 10 2006
Plamen Gatev MD

Abstract A substantial body of work within the last decade has demonstrated that there is a variety of oscillatory phenomena that occur in the basal ganglia and in associated regions of the thalamus and cortex. Most of the earlier studies focused on recordings in rodents and primates.
More recently, significant advances have been made in this field of research through the analysis of basal ganglia field potentials recorded from implanted deep brain stimulation electrodes in the basal ganglia of human patients with Parkinson's disease and other disorders. It now appears that oscillatory activity may play a significant role in the pathogenesis of these diseases. The most significant finding is that in Parkinson's disease synchronized oscillatory activity in the 10- to 35-Hz band (often termed the "β-band") is prevalent in the basal ganglia-thalamocortical circuits, and that such activity can be reduced by dopaminergic treatments. The entrainment of large portions of these circuits may disrupt information processing in them and may lead to parkinsonian akinesia (and perhaps tremor). Although less firmly established than the role of oscillations in movement disorders, oscillatory activities at higher frequencies may also be a component of normal basal ganglia physiology. © 2006 Movement Disorder Society [source]

Dural haemorrhage in non-traumatic infant deaths: does it explain the bleeding in 'shaken baby syndrome'?
NEUROPATHOLOGY & APPLIED NEUROBIOLOGY, Issue 1 2003
J. F. Geddes

J. F. Geddes, R. C. Tasker, A. K. Hackshaw, C. D. Nickols, G. G. W. Adams, H. L. Whitwell and I. Scheimberg (2003) Neuropathology and Applied Neurobiology 29, 14-22.

A histological review of dura mater taken from a post-mortem series of 50 paediatric cases aged up to 5 months revealed fresh bleeding in the dura in 36/50, the bleeding ranging from small perivascular haemorrhages to extensive haemorrhage which had ruptured onto the surface of the dura. Severe hypoxia had been documented clinically in 27 of the 36 cases (75%).
In a similar review of three infants presenting with classical 'shaken baby syndrome', intradural haemorrhage was also found, in addition to subdural bleeding, and we believe that our findings may have relevance to the pathogenesis of some infantile subdural haemorrhage. Recent work has shown that, in a proportion of infants with fatal head injury, there is little traumatic brain damage and that the significant finding is craniocervical injury, which causes respiratory abnormalities, severe global hypoxia and brain swelling, with raised intracranial pressure. We propose that, in such infants, a combination of severe hypoxia, brain swelling and raised central venous pressure causes blood to leak from intracranial veins into the subdural space, and that the cause of the subdural bleeding in some cases of infant head injury is therefore not traumatic rupture of bridging veins, but a phenomenon of immaturity. Hypoxia with brain swelling would also account for retinal haemorrhages, and so provide a unified hypothesis for the clinical and neuropathological findings in cases of infant head injury, without impact or considerable force being necessary. [source]

Shoulder Disability After Different Selective Neck Dissections (Levels II-IV Versus Levels II-V): A Comparative Study
THE LARYNGOSCOPE, Issue 2 2005
Johnny Cappiello MD

Abstract Objectives/Hypothesis: The objective was to compare the results of clinical and electrophysiological investigations of shoulder function in patients affected by head and neck carcinoma treated with concomitant surgery on the primary and the neck with different selective neck dissections. Study Design: Retrospective study of 40 patients managed at the Department of Otolaryngology, University of Brescia (Brescia, Italy), between January 1999 and December 2001.
Methods: Two groups of 20 patients each matched for gender and age were selected according to the type of neck dissection received: patients in group A had selective neck dissection involving clearance of levels II–IV, and patients in group B had clearance of levels II–V. The inclusion criteria were as follows: no preoperative signs of myopathy or neuropathy, no postoperative radiotherapy, and absence of locoregional recurrence. At least 1 year after surgery, patients underwent evaluation of shoulder function by means of a questionnaire, clinical inspection, strength and motion tests, electromyography of the upper trapezius and sternocleidomastoid muscles, and electroneurography of the spinal accessory nerve. Statistical comparisons of the clinical data were performed using contingency tables with Fisher's Exact test. Electrophysiological data were analyzed by means of Fisher's Exact test, and electromyography results by Kruskal-Wallis test. Results: A slight strength impairment of the upper limb, slight motor deficit of the shoulder, and shoulder pain were observed in 0%, 5%, and 15% of patients in group A and in 20%, 15%, and 15% of patients in group B, respectively. On inspection, in group B, shoulder droop, shoulder protraction, and scapular flaring were present in 30%, 15%, and 5% of patients, respectively. One patient (5%) in group A showed shoulder droop as the only significant finding. In group B, muscle strength and arm movement impairment were found in 25% of patients, 25% showed limited shoulder flexion, and 50% had abnormalities of shoulder abduction with contralateral head rotation. In contrast, only one patient (5%) in group A presented slight arm abduction impairment. Electromyographic abnormalities were less frequently found in group A than in group B (40% vs. 
85% [P = .003]), and the distribution of abnormalities recorded in the upper trapezius muscle and sternocleidomastoid muscle was quite different: 20% and 40% in group A versus 85% and 45% in group B, respectively. Only one case of total upper trapezius muscle denervation was observed in group B. In both groups, electroneurographic data from the side of the neck treated showed a statistically significant increase in latency (P = .001) and decrease in amplitude (P = .008) compared with the contralateral side. There was no significant difference in electroneurographic data from the side with and the side without dissection in either group. Even though a high number of abnormalities was found on electrophysiological testing, only a limited number of patients, mostly in group B, displayed shoulder function disability affecting daily activities. Conclusion: The study data confirm that clearance of the posterior triangle of the neck increases shoulder morbidity. However, subclinical nerve impairment can be observed even after selective neck dissection (levels II–IV) if the submuscular recess is routinely dissected. [source] Albedo, atmospheric solar absorption and heating rate measurements with stacked UAVs THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 629 2007 M. V. Ramana Abstract This paper reports unique measurements of albedo, atmospheric solar absorption, and heating rates in the visible (0.4 to 0.7 µm) and broadband (0.3 to 2.8 µm) spectral regions using vertically stacked multiple lightweight autonomous unmanned aerial vehicles (UAVs). The most significant finding of this study is that when absorbing aerosols and water vapour concentrations are measured accurately and accounted for in models, and when heating rates are measured directly with stacked aircraft, the simulated clear sky heating rates are consistent with the observed broadband heating rates within experimental errors (about 15%). 
We conclude that there is no need to invoke anomalous or excess absorption or unknown physics in clear skies. Aerosol–radiation–cloud measurements were made over the tropical Indian Ocean within the lowest 3 km of the atmosphere during the Maldives Autonomous UAV Campaign (MAC). The UAVs and ground-based remote sensing instruments determined most of the parameters required for calculating the albedo and vertical distribution of solar fluxes. The paper provides a refined analytical procedure to reduce errors and biases due to the offset errors arising from mounting of the radiometers on the aircraft and due to the aircraft attitude. Measured fluxes have been compared with those derived from a Monte-Carlo radiative transfer algorithm which can incorporate both gaseous and aerosol components. Under cloud-free conditions the calculated and measured incoming fluxes agree within 2–10 W m⁻² (<1%) depending upon the altitudes. Similarly, the measured and calculated reflected fluxes agreed within 2–5 W m⁻² (<5%). The analysis focuses on a cloud-free day when the air was polluted due to long-range transport from India, and the mean aerosol optical depth (AOD) was 0.31 and mean single scattering albedo was 0.92. The UAV-measured absorption AOD was 0.019 which agreed within 20% of the value of 0.024 reported by a ground-based instrument. The observed and simulated solar absorption agreed within 5% above 1.0 km and aerosol absorption accounted for 30% to 50% of the absorption depending upon the altitude and solar zenith angle. Thus there was no need to invoke spurious or anomalous absorption, provided we accounted for aerosol black carbon. The diurnal mean absorption values for altitudes between 0.5 and 3.0 km above mean sea level were observed to be 41 ± 3 W m⁻² (1.5 K/day) in the broadband region and 8 ± 2 W m⁻² (0.3 K/day) in the visible region. 
The contribution of absorbing aerosols to the heating rate was an order of magnitude larger than the contribution of CO2 and one-third that of the water vapour. In the lowest 3 km of the tropical atmosphere, aerosols accounted for more than 80% of the atmospheric absorption in the visible region. Copyright © 2007 Royal Meteorological Society [source] European Mathematical Genetics Meeting, Heidelberg, Germany, 12th–13th April 2007 ANNALS OF HUMAN GENETICS, Issue 4 2007 Article first published online: 28 MAY 200 Saurabh Ghosh 11 Indian Statistical Institute, Kolkata, India High correlations between two quantitative traits may be either due to common genetic factors or common environmental factors or a combination of both. In this study, we develop statistical methods to extract the contribution of a common QTL to the total correlation between the components of a bivariate phenotype. Using data on bivariate phenotypes and marker genotypes for sib-pairs, we propose a test for linkage between a common QTL and a marker locus based on the conditional cross-sib trait correlations (trait 1 of sib 1 – trait 2 of sib 2, and conversely) given the identity-by-descent sharing at the marker locus. The null hypothesis cannot be rejected unless there exists a common QTL. We use Monte-Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study on the Genetics of Alcoholism project. Rémi Kazma 1 , Catherine Bonaïti-Pellié 1 , Emmanuelle Génin 12 INSERM UMR-S535 and Université Paris Sud, Villejuif, 94817, France Keywords: Gene-environment interaction, sibling recurrence risk, exposure correlation Gene-environment interactions may play important roles in complex disease susceptibility but their detection is often difficult. 
Here we show how gene-environment interactions can be detected by investigating the degree of familial aggregation according to the exposure of the probands. In the case of gene-environment interaction, the distribution of genotypes of affected individuals, and consequently the risk in relatives, depends on their exposure. We developed a test comparing the risks in sibs according to the proband exposure. To evaluate the properties of this new test, we derived the formulas for calculating the expected risks in sibs according to the exposure of probands for various values of exposure frequency, relative risk due to exposure alone, frequencies of latent susceptibility genotypes, genetic relative risks and interaction coefficients. We find that the ratio of risks when the proband is exposed and not exposed is a good indicator of the interaction effect. We evaluate the power of the test for various sample sizes of affected individuals. We conclude that this test is valuable for diseases with moderate familial aggregation, only when the role of the exposure has been clearly evidenced. Since a correlation for exposure among sibs might lead to a difference in risks among sibs in the different proband exposure strata, we also add an exposure correlation coefficient in the model. Interestingly, we find that when this correlation is correctly accounted for, the power of the test is not decreased and might even be significantly increased. Andrea Callegaro 1 , Hans J.C. Van Houwelingen 1 , Jeanine Houwing-Duistermaat 13 Dept. of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands Keywords: Survival analysis, age at onset, score test, linkage analysis Non-parametric linkage (NPL) analysis compares the identical by descent (IBD) sharing in sibling pairs to the expected IBD sharing under the hypothesis of no linkage. Often information is available on the marginal cumulative hazards (for example breast cancer incidence curves). 
Our aim is to extend the NPL methods by taking into account the age at onset of selected sibling pairs using these known marginal hazards. Li and Zhong (2002) proposed a (retrospective) likelihood ratio test based on an additive frailty model for genetic linkage analysis. From their model we derive a score statistic for selected samples which turns out to be a weighted NPL method. The weights depend on the marginal cumulative hazards and on the frailty parameter. A second approach is based on a simple gamma shared frailty model. Here, we simply test whether the score function of the frailty parameter depends on the excess IBD. We compare the performance of these methods using simulated data. Céline Bellenguez 1 , Carole Ober 2 , Catherine Bourgain 14 INSERM U535 and University Paris Sud, Villejuif, France 5 Department of Human Genetics, The University of Chicago, USA Keywords: Linkage analysis, linkage disequilibrium, high density SNP data Compared with microsatellite markers, high density SNP maps should be more informative for linkage analyses. However, because they are much closer, SNPs present important linkage disequilibrium (LD), which biases classical nonparametric multipoint analyses. This problem is even stronger in population isolates where LD extends over larger regions with a more stochastic pattern. We investigate the issue of linkage analysis with a 500K SNP map in a large and inbred 1840-member Hutterite pedigree, phenotyped for asthma. Using an efficient pedigree breaking strategy, we first identified linked regions with a 5 cM microsatellite map, on which we focused to evaluate the SNP map. The only method that models LD in the NPL analysis is limited in both the pedigree size and the number of markers (Abecasis and Wigginton, 2005) and therefore could not be used. Instead, we studied methods that identify sets of SNPs with maximum linkage information content in our pedigree and no LD-driven bias. 
Both algorithms that directly remove pairs of SNPs in high LD and clustering methods were evaluated. Null simulations were performed to check that the Zlr values calculated with the SNP sets were not falsely inflated. Preliminary results suggest that although LD is strong in such populations, linkage information content slightly better than that of microsatellite maps can be extracted from dense SNP maps, provided that a careful marker selection is conducted. In particular, we show that the specific LD pattern requires considering LD between a wide range of marker pairs rather than only in predefined blocks. Peter Van Loo 1,2,3 , Stein Aerts 1,2 , Diether Lambrechts 4,5 , Bernard Thienpont 2 , Sunit Maity 4,5 , Bert Coessens 3 , Frederik De Smet 4,5 , Leon-Charles Tranchevent 3 , Bart De Moor 2 , Koen Devriendt 3 , Peter Marynen 1,2 , Bassem Hassan 1,2 , Peter Carmeliet 4,5 , Yves Moreau 36 Department of Molecular and Developmental Genetics, VIB, Belgium 7 Department of Human Genetics, University of Leuven, Belgium 8 Bioinformatics group, Department of Electrical Engineering, University of Leuven, Belgium 9 Department of Transgene Technology and Gene Therapy, VIB, Belgium 10 Center for Transgene Technology and Gene Therapy, University of Leuven, Belgium Keywords: Bioinformatics, gene prioritization, data fusion The identification of genes involved in health and disease remains a formidable challenge. Here, we describe a novel bioinformatics method to prioritize candidate genes underlying pathways or diseases, based on their similarity to genes known to be involved in these processes. It is freely accessible as an interactive software tool, ENDEAVOUR, at http://www.esat.kuleuven.be/endeavour. Unlike previous methods, ENDEAVOUR generates distinct prioritizations from multiple heterogeneous data sources, which are then integrated, or fused, into one global ranking using order statistics. ENDEAVOUR prioritizes candidate genes in a three-step process. 
First, information about a disease or pathway is gathered from a set of known "training" genes by consulting multiple data sources. Next, the candidate genes are ranked based on similarity with the training properties obtained in the first step, resulting in one prioritized list for each data source. Finally, ENDEAVOUR fuses each of these rankings into a single global ranking, providing an overall prioritization of the candidate genes. Validation of ENDEAVOUR revealed it was able to efficiently prioritize 627 genes in disease data sets and 76 genes in biological pathway sets, identify candidates of 16 mono- or polygenic diseases, and discover regulatory genes of myeloid differentiation. Furthermore, the approach identified YPEL1 as a novel gene involved in craniofacial development from a 2-Mb chromosomal region, deleted in some patients with DiGeorge-like birth defects. Finally, we are currently evaluating a pipeline combining array-CGH, ENDEAVOUR and in vivo validation in zebrafish to identify novel genes involved in congenital heart defects. Mark Broom 1 , Graeme Ruxton 2 , Rebecca Kilner 311 Mathematics Dept., University of Sussex, UK 12 Division of Environmental and Evolutionary Biology, University of Glasgow, UK 13 Department of Zoology, University of Cambridge, UK Keywords: Evolutionarily stable strategy, parasitism, asymmetric game Brood parasite chicks vary in the harm that they do to their companions in the nest. In this presentation we use game-theoretic methods to model this variation. Our model considers hosts which potentially abandon single nestlings and instead choose to re-allocate their reproductive effort to future breeding, irrespective of whether the abandoned chick is the host's young or a brood parasite's. The parasite chick must decide whether or not to kill host young by balancing the benefits from reduced competition in the nest against the risk of desertion by host parents. 
The model predicts that three different types of evolutionarily stable strategies can exist. (1) Hosts routinely rear depleted broods, the brood parasite always kills host young and the host never then abandons the nest. (2) When adult survival after deserting single offspring is very high, hosts always abandon broods of a single nestling and the parasite never kills host offspring, effectively holding them as hostages to prevent nest desertion. (3) Intermediate strategies, in which parasites sometimes kill their nest-mates and host parents sometimes desert nests that contain only a single chick, can also be evolutionarily stable. We provide quantitative descriptions of how the values given to ecological and behavioral parameters of the host-parasite system influence the likelihood of each strategy and compare our results with real host-brood parasite associations in nature. Martin Harrison 114 Mathematics Dept, University of Sussex, UK Keywords: Brood parasitism, games, host, parasite The interaction between hosts and parasites in bird populations has been studied extensively. Game theoretical methods have been used to model this interaction previously, but rarely with the sequential nature of the game taken into account. We consider a model allowing the host and parasite to make a number of decisions, which depend on a number of natural factors. The host lays an egg, a parasite bird will arrive at the nest with a certain probability and then choose to destroy a number of the host eggs and lay one of its own. With some destruction occurring, either natural or through the actions of the parasite, the host chooses to continue, eject an egg (hoping to eject the parasite) or abandon the nest. Once the eggs have hatched the game then falls to the parasite chick versus the host. The chick chooses to destroy or eject a number of eggs. 
The final decision is made by the host, choosing whether to raise or abandon the chicks that are in the nest. We consider various natural parameters and probabilities which influence these decisions. We then use this model to look at real-world situations of the interactions of the Reed Warbler and two different parasites, the Common Cuckoo and the Brown-Headed Cowbird. These two parasites have different methods in the way that they parasitize the nests of their hosts. The hosts in turn have a different reaction to these parasites. Arne Jochens 1 , Amke Caliebe 2 , Uwe Roesler 1 , Michael Krawczak 215 Mathematical Seminar, University of Kiel, Germany 16 Institute of Medical Informatics and Statistics, University of Kiel, Germany Keywords: Stepwise mutation model, microsatellite, recursion equation, temporal behaviour We consider the stepwise mutation model which occurs, e.g., in microsatellite loci. Let X(t,i) denote the allelic state of individual i at time t. We compute expectation, variance and covariance of X(t,i), i = 1, …, N, and provide a recursion equation for P(X(t,i)=z). Because the variance of X(t,i) goes to infinity as t grows, for the description of the temporal behaviour, we regard the scaled process X(t,i)-X(t,1). The results furnish a better understanding of the behaviour of the stepwise mutation model and may in future be used to derive tests for neutrality under this model. Paul O'Reilly 1 , Ewan Birney 2 , David Balding 117 Statistical Genetics, Department of Epidemiology and Public Health, Imperial, College London, UK 18 European Bioinformatics Institute, EMBL, Cambridge, UK Keywords: Positive selection, Recombination rate, LD, Genome-wide, Natural Selection In recent years efforts to develop population genetics methods that estimate rates of recombination and levels of natural selection in the human genome have intensified. 
However, since the two processes have an intimately related impact on genetic variation their inference is vulnerable to confounding. Genomic regions subject to recent selection are likely to have a relatively recent common ancestor and consequently less opportunity for historical recombinations that are detectable in contemporary populations. Here we show that selection can reduce the population-based recombination rate estimate substantially. In genome-wide studies for detecting selection we observe a tendency to highlight loci that are subject to low levels of recombination. We find that the outlier approach commonly adopted in such studies may have low power unless variable recombination is accounted for. We introduce a new genome-wide method for detecting selection that exploits the sensitivity to recent selection of methods for estimating recombination rates, while accounting for variable recombination using pedigree data. Through simulations we demonstrate the high power of the Ped/Pop approach to discriminate between neutral and adaptive evolution, particularly in the context of choosing outliers from a genome-wide distribution. Although methods have been developed showing good power to detect selection 'in action', the corresponding window of opportunity is small. In contrast, the power of the Ped/Pop method is maintained for many generations after the fixation of an advantageous variant. Sarah Griffiths 1 , Frank Dudbridge 120 MRC Biostatistics Unit, Cambridge, UK Keywords: Genetic association, multimarker tag, haplotype, likelihood analysis In association studies it is generally too expensive to genotype all variants in all subjects. We can exploit linkage disequilibrium between SNPs to select a subset that captures the variation in a training data set obtained either through direct resequencing or a public resource such as the HapMap. These 'tag SNPs' are then genotyped in the whole sample. 
Multimarker tagging is a more aggressive adaptation of pairwise tagging that allows for combinations of two or more tag SNPs to predict an untyped SNP. Here we describe a new method for directly testing the association of an untyped SNP using a multimarker tag. Previously, other investigators have suggested testing a specific tag haplotype, or performing a weighted analysis using weights derived from the training data. However these approaches do not properly account for the imperfect correlation between the tag haplotype and the untyped SNP. Here we describe a straightforward approach to testing untyped SNPs using a missing-data likelihood analysis, including the tag markers as nuisance parameters. The training data is stacked on top of the main body of genotype data so there is information on how the tag markers predict the genotype of the untyped SNP. The uncertainty in this prediction is automatically taken into account in the likelihood analysis. This approach yields more power and also a more accurate prediction of the odds ratio of the untyped SNP. Anke Schulz 1 , Christine Fischer 2 , Jenny Chang-Claude 1 , Lars Beckmann 121 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany 22 Institute of Human Genetics, University of Heidelberg, Germany Keywords: Haplotype, haplotype sharing, entropy, Mantel statistics, marker selection We previously introduced a new method to map genes involved in complex diseases, using haplotype sharing-based Mantel statistics to correlate genetic and phenotypic similarity. Although the Mantel statistic is powerful in narrowing down candidate regions, the precise localization of a gene is hampered in genomic regions where linkage disequilibrium is so high that neighboring markers are found to be significant at similar magnitude and we are not able to discriminate between them. 
Here, we present a new approach to localize susceptibility genes by combining haplotype sharing-based Mantel statistics with an iterative entropy-based marker selection algorithm. For each marker at which the Mantel statistic is evaluated, the algorithm selects a subset of surrounding markers. The subset is chosen to maximize multilocus linkage disequilibrium, which is measured by the normalized entropy difference introduced by Nothnagel et al. (2002). We evaluated the algorithm with respect to type I error and power. Its ability to localize the disease variant was compared to the localization (i) without marker selection and (ii) considering haplotype block structure. Case-control samples were simulated from a set of 18 haplotypes, consisting of 15 SNPs in two haplotype blocks. The new algorithm gave correct type I error and yielded similar power to detect the disease locus compared to the alternative approaches. The neighboring markers were clearly less often significant than the causal locus, and also less often significant compared to the alternative approaches. Thus the new algorithm improved the precision of the localization of susceptibility genes. Mark M. Iles 123 Section of Epidemiology and Biostatistics, LIMM, University of Leeds, UK Keywords: tSNP, tagging, association, HapMap Tagging SNPs (tSNPs) are commonly used to capture genetic diversity cost-effectively. However, it is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be insufficient. If the pilot sample from which tSNPs are chosen is too small or the initial marker map too sparse, tSNP efficacy may be overestimated. An existing estimation method based on bootstrapping goes some way to correct for insufficient sample size and overfitting, but does not completely solve the problem. We describe a novel method, based on exclusion of haplotypes, that improves on the bootstrap approach. 
Using simulated data, the extent of the sample size problem is investigated and the performance of the bootstrap and the novel method are compared. We incorporate an existing method adjusting for marker density by 'SNP-dropping'. We find that insufficient sample size can cause large overestimates in tSNP efficacy, even with as many as 100 individuals, and the problem worsens as the region studied increases in size. Both the bootstrap and novel method correct much of this overestimate, with our novel method consistently outperforming the bootstrap method. We conclude that a combination of insufficient sample size and overfitting may lead to overestimation of tSNP efficacy and underpowering of studies based on tSNPs. Our novel approach corrects for much of this bias and is superior to the previous method. Sample sizes larger than previously suggested may still be required for accurate estimation of tSNP efficacy. This has obvious ramifications for the selection of tSNPs from HapMap data. Claudio Verzilli 1 , Juliet Chapman 1 , Aroon Hingorani 2 , Juan Pablo-Casas 1 , Tina Shah 2 , Liam Smeeth 1 , John Whittaker 124 Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK 25 Division of Medicine, University College London, UK Keywords: Meta-analysis, Genetic association studies We present a Bayesian hierarchical model for the meta-analysis of candidate gene studies with a continuous outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping markers (typically SNPs) in the same genetic region. Meta-analyses of the results at each marker in isolation are seldom appropriate as they ignore the correlation that may exist between markers due to linkage disequilibrium (LD) and cannot assess the relative importance of variants at each marker. 
Also such marker-wise meta-analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power. A better strategy is one which incorporates information about the LD between markers so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. Here we develop a Bayesian hierarchical linear regression that models the observed genotype group means and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach is applied to the meta-analysis of 24 studies assessing the effect of 7 variants in the C-reactive protein (CRP) gene region on plasma CRP levels, an inflammatory biomarker shown in observational studies to be positively associated with cardiovascular disease. Cathryn M. Lewis 1 , Christopher G. Mathew 1 , Theresa M. Marteau 226 Dept. of Medical and Molecular Genetics, King's College London, UK 27 Department of Psychology, King's College London, UK Keywords: Risk, genetics, CARD15, smoking, model Recently progress has been made in identifying mutations that confer susceptibility to complex diseases, with the potential to use these mutations in determining disease risk. We developed methods to estimate disease risk based on genotype relative risks (for a gene G), exposure to an environmental factor (E), and family history (with recurrence risk λR for a relative of type R). λR must be partitioned into the risk due to G (which is modelled independently) and the residual risk. The risk model was then applied to Crohn's disease (CD), a severe gastrointestinal disease for which smoking increases disease risk approximately 2-fold, and mutations in CARD15 confer increased risks of 2.25 (for carriers of a single mutation) and 9.3 (for carriers of two mutations). 
CARD15 accounts for only a small proportion of the genetic component of CD, with a gene-specific λS,CARD15 of 1.16, from a total sibling relative risk of λS = 27. CD risks were estimated for high-risk individuals who are siblings of a CD case, and who also smoke. The CD risk to such individuals who carry two CARD15 mutations is approximately 0.34, and for those carrying a single CARD15 mutation the risk is 0.08, compared to a population prevalence of approximately 0.001. These results imply that complex disease genes may be valuable in estimating disease risks, with greater precision than has hitherto been possible, in specific, easily identified subgroups of the population with a view to prevention. Yurii Aulchenko 128 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Compression, information, bzip2, genome-wide SNP data, statistical genetics With advances in molecular technology, studies accessing millions of genetic polymorphisms in thousands of study subjects will soon become common. Such studies generate large amounts of data, whose effective storage and management is a challenge for modern statistical genetics. Standard file compression utilities, such as Zip, Gzip and Bzip2, may be helpful to minimise the storage requirements. Less obvious is the fact that data compression techniques may also be used in the analysis of genetic data. It is known that the efficiency of a particular compression algorithm depends on the probability structure of the data. In this work, we compared different standard and customised tools using data from the human HapMap project. 
Secondly, we investigate the potential uses of data compression techniques for the analysis of linkage, association and linkage disequilibrium. Suzanne Leal 1 , Bingshan Li 129 Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, USA Keywords: Consanguineous pedigrees, missing genotype data Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD). Previously it was demonstrated by Huang et al. (2005) that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data is available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. The false-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage is highly dependent on which family members are genotyped. When parental genotype data is available, the false-positive evidence for linkage is usually not as strong as when parental genotype data is unavailable. Which family members will aid in the reduction of false-positive evidence of linkage is highly dependent on which other family members are genotyped. For a pedigree with an affected proband whose first-cousin parents have been genotyped, further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or genotype data from the proband's sibling-grandparents. 
When parental genotypes are not available, false-positive evidence for linkage can be reduced by including in the analysis genotype data from either unaffected siblings of the proband or the proband's married-in-grandparents. Najaf Amin 1 , Yurii Aulchenko 130 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Genomic Control, pedigree structure, quantitative traits The Genomic Control (GC) method was originally developed to control for population stratification and cryptic relatedness in association studies. This method assumes that the effect of population substructure on the test statistics is essentially constant across the genome, so that unassociated markers can be used to estimate the effect of confounding on the test statistic. The properties of the GC method have been extensively investigated for different stratification scenarios and compared to alternative methods, such as the transmission-disequilibrium test. The potential of this method to correct not for occasional cryptic relations but for regular pedigree structure, however, had not been investigated before. In this work we investigate the potential of the GC method for pedigree-based association analysis of quantitative traits. The power and type I error of the method were compared to those of standard methods, such as the measured genotype (MG) approach and the quantitative trait transmission-disequilibrium test (TDT). In human pedigrees with trait heritability varying from 30 to 80%, the power of the MG and GC approaches was always higher than that of the TDT. GC had the correct type I error, and its power was close to that of MG under moderate heritability (30%) but decreased with higher heritability.
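The GC correction itself is simple enough to sketch. A minimal version, assuming 1-df chi-square association statistics with uniform inflation; the median-based estimator and the convention of not deflating below 1 are standard practice rather than details taken from this abstract:

```python
import random
import statistics

random.seed(2)

# Simulated 1-df chi-square association statistics under uniform inflation,
# as the GC model assumes: T_observed = lambda * T_null.
true_lambda = 1.3
stats = [true_lambda * random.gauss(0, 1) ** 2 for _ in range(20000)]

# Estimate lambda from the median statistic (robust to a few true signals),
# then deflate every statistic before computing p-values. By convention
# lambda is not allowed to fall below 1.
CHI2_1DF_MEDIAN = 0.4549  # median of the chi-square(1) distribution
lam = max(1.0, statistics.median(stats) / CHI2_1DF_MEDIAN)
corrected = [t / lam for t in stats]

print(round(lam, 2))  # close to the simulated inflation of 1.3
```

After deflation, the median of the corrected statistics matches the chi-square(1) median, which is exactly the "constant confounding effect" assumption the abstract describes.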
William Astle 1 , Chris Holmes 2 , David Balding 131 Department of Epidemiology and Public Health, Imperial College London, UK 32 Department of Statistics, University of Oxford, UK Keywords: Population structure, association studies, genetic epidemiology, statistical genetics In the analysis of population association studies, Genomic Control (Devlin & Roeder, 1999) (GC) adjusts the Armitage test statistic to correct the type I error for the effects of population substructure, but its power is often sub-optimal. Turbo Genomic Control (TGC) generalises GC to incorporate co-variation of relatedness and phenotype, retaining control over type I error while improving power. TGC is similar to the method of Yu et al. (2006), but we extend it to binary (case-control) in addition to quantitative phenotypes, we implement improved estimation of relatedness coefficients, and we derive an explicit statistic that generalises the Armitage test statistic and is fast to compute. TGC also has similarities to EIGENSTRAT (Price et al., 2006), a new method based on principal components analysis. The problems of population structure (Clayton et al., 2005) and cryptic relatedness (Voight & Pritchard, 2005) are essentially the same: if patterns of shared ancestry differ between cases and controls, whether distant (coancestry) or recent (cryptic relatedness), false positives can arise and power can be diminished. With large numbers of widely spaced genetic markers, coancestry can now be measured accurately for each pair of individuals via patterns of allele sharing. Instead of modelling subpopulations, we work with a coancestry coefficient for each pair of individuals in the study. We explain the relationships between TGC, GC and EIGENSTRAT. We present simulation studies and real data analyses to illustrate the power advantage of TGC in a range of scenarios incorporating both substructure and cryptic relatedness. References Clayton, D. G. et al.
(2005) Population structure, differential bias and genomic control in a large-scale case-control association study. Nature Genetics 37(11), November 2005. Devlin, B. & Roeder, K. (1999) Genomic control for association studies. Biometrics 55(4), December 1999. Price, A. L. et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8), August 2006. Voight, B. J. & Pritchard, J. K. (2005) Confounding from cryptic relatedness in case-control association studies. Public Library of Science Genetics 1(3), September 2005. Yu, J. et al. (2006) A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), February 2006. Hervé Perdry 1 , Marie-Claude Babron 1 , Françoise Clerget-Darpoux 133 INSERM U535 and Univ. Paris Sud, UMR-S 535, Villejuif, France Keywords: Modifier genes, case-parents trios, ordered transmission disequilibrium test A modifier locus is a polymorphic locus, distinct from the disease locus, which leads to differences in the disease phenotype, either by modifying the penetrance of the disease allele or by modifying the expression of the disease. The effect of such a locus is clinical heterogeneity that can be reflected in the values of an appropriate covariate, such as the age of onset or the severity of the disease. We designed the Ordered Transmission Disequilibrium Test (OTDT) to test for a relation between the clinical heterogeneity, expressed by the covariate, and marker genotypes of a candidate gene. The method applies to trio families with one affected child and his parents. Each family member is genotyped at a bi-allelic marker M of a candidate gene. To each family is associated a covariate value; the families are ordered on the values of this covariate. Like the TDT (Spielman et al. 1993), the OTDT is based on the observation of the transmission rate T of a given allele at M.
The OTDT aims to find a critical value of the covariate which separates the sample of families into two subsamples in which the transmission rates are significantly different. We investigate the power of the method by simulations under various genetic models and covariate distributions. Acknowledgments: H Perdry is funded by ARSEP. Pascal Croiseau 1 , Heather Cordell 2 , Emmanuelle Génin 134 INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France 35 Institute of Human Genetics, Newcastle University, UK Keywords: Association, missing data, conditional logistic regression Missing data are an important problem in association studies. Several methods used to test for association require that individuals be genotyped at the full set of markers, and individuals with missing data must be excluded from the analysis. This can lead to a substantial decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL. One may then falsely conclude that the marker is more likely to be the DSL. We recently developed a Multiple Imputation method to infer missing data on case-parent trios. Starting from the observed data, a small number of complete data sets are generated by a Markov chain Monte Carlo approach. These complete datasets are analysed using standard statistical packages, and the results are combined as described in Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually detects the DSL site correctly even if the percentage of missing data is high. This is not the case for the naïve approach of discarding trios with missing data.
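The combination step attributed to Little & Rubin (2002) is commonly implemented as Rubin's rules; a minimal sketch, with purely illustrative estimates and variances standing in for the per-dataset analysis results:

```python
import statistics

def rubin_combine(estimates, variances):
    """Combine point estimates and their variances from m imputed
    datasets using Rubin's rules: the pooled estimate is the mean, and
    the total variance adds a between-imputation component inflated by
    (1 + 1/m)."""
    m = len(estimates)
    q_bar = statistics.mean(estimates)      # pooled point estimate
    w_bar = statistics.mean(variances)      # average within-imputation variance
    b = statistics.variance(estimates)      # between-imputation variance
    total_var = w_bar + (1 + 1 / m) * b
    return q_bar, total_var

# Illustrative log-odds-ratio estimates from 5 completed datasets.
est = [0.42, 0.38, 0.45, 0.40, 0.44]
var = [0.010, 0.011, 0.009, 0.010, 0.012]
q, v = rubin_combine(est, var)
print(round(q, 3), round(v, 4))  # 0.418 0.0114
```

The between-imputation term is what propagates the uncertainty due to the missing genotypes into the final standard error.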
In conclusion, multiple imputation has the advantage of being easy to use and flexible, and is therefore a promising tool in the search for DSLs involved in complex diseases. Salma Kotti 1 , Heike Bickeböller 2 , Françoise Clerget-Darpoux 136 University Paris Sud, UMR-S535, Villejuif, France 37 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany Keywords: Genotype relative risk, internal controls, Family based analyses Family-based analyses using internal controls are very popular both for detecting the effect of a genetic factor and for estimating the relative disease risk of the corresponding genotypes. Two different procedures are often applied to construct internal controls. The first considers one pseudocontrol genotype formed from the parental non-transmitted alleles (also called 1:1 matching of alleles), while the second uses three pseudocontrols corresponding to all genotypes formed from the parental alleles except that of the case (1:3 matching). Many studies have compared the two procedures in terms of power and have concluded that the difference depends on the underlying genetic model and the allele frequencies. However, the estimation of the Genotype Relative Risk (GRR) under the two procedures has not been studied. Given that under 1:1 matching the control group is composed of the alleles untransmitted to the affected child, whereas under 1:3 matching the control group includes alleles already transmitted to the affected child, we expect a difference in the GRR estimation. In fact, we suspect that the second procedure leads to biased estimation of the GRRs. We will analytically derive the GRR estimators for the 1:1 and 1:3 matching and will present the results at the meeting.
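The two matching schemes can be illustrated by constructing pseudocontrols from parental alleles; the allele labels and the single bi-allelic marker below are simplifying assumptions for illustration:

```python
from itertools import product

def pseudocontrols(father, mother, case):
    """Given parental genotypes and the case genotype at a bi-allelic
    marker (genotypes as sorted allele pairs, e.g. ('A', 'a')), return
    the 1:1 pseudocontrol and the 1:3 pseudocontrol list."""
    case = tuple(sorted(case))
    # Work out which allele each parent transmitted (first consistent split).
    for fa, ma in (case, case[::-1]):
        if fa in father and ma in mother:
            f_rest = list(father); f_rest.remove(fa)
            m_rest = list(mother); m_rest.remove(ma)
            # 1:1 matching: the genotype built from the two untransmitted alleles.
            one_to_one = tuple(sorted((f_rest[0], m_rest[0])))
            break
    # 1:3 matching: the three other transmissions the parents could have made.
    transmissions = [tuple(sorted(t)) for t in product(father, mother)]
    transmissions.remove(case)  # drop one copy for the observed case
    return one_to_one, transmissions

ctrl11, ctrl13 = pseudocontrols(('A', 'a'), ('A', 'a'), ('A', 'A'))
print(ctrl11)  # ('a', 'a'): the untransmitted alleles
print(ctrl13)  # [('A', 'a'), ('A', 'a'), ('a', 'a')]
```

Note how the 1:3 set re-uses alleles that were transmitted to the case, which is precisely the feature the abstract suspects of biasing the GRR estimate.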
Luigi Palla 1 , David Siegmund 239 Department of Mathematics, Free University Amsterdam, The Netherlands 40 Department of Statistics, Stanford University, California, USA Keywords: TDT, assortative mating, inbreeding, statistical power A substantial amount of Assortative Mating (AM) is often recorded for physical and psychological traits, dichotomous as well as quantitative, that are supposed to have a multifactorial genetic component. In particular, AM has the effect of increasing the genetic variance, even more than inbreeding, because when the trait has a multifactorial origin AM acts across loci as well as within loci.
Under the assumption of a polygenic model for AM dating back to Wright (1921) and refined by Crow and Felsenstein (1968, 1982), the effect of assortative mating on the power to detect genetic association with the Transmission Disequilibrium Test (TDT) is explored as parameters such as the effective number of genes and the allele frequency vary. The power is reflected by the non-centrality parameter of the TDT and is expressed as a function of the number of trios, the relative risk of the heterozygous genotype and the allele frequency (Siegmund and Yakir, 2007). The non-centrality parameter of the relevant score statistic is updated to account for the effect of AM, which is expressed in terms of an 'effective' inbreeding coefficient. In particular, for dichotomous traits it is apparent that the higher the number of genes involved in the trait, the lower the loss in power due to AM. Finally, an attempt is made to extend this relation to the Q-TDT (Rabinowitz, 1997), which involves considering the effect of AM also on the phenotypic variance of the trait of interest, under the assumption that AM affects only its additive genetic component. References Crow & Felsenstein (1968). The effect of assortative mating on the genetic composition of a population. Eugen. Quart. 15, 87–97. Rabinowitz (1997). A Transmission Disequilibrium Test for Quantitative Trait Loci. Human Heredity 47, 342–350. Siegmund & Yakir (2007). The Statistics of Gene Mapping. Springer. Wright (1921). Systems of mating. III. Assortative mating based on somatic resemblance. Genetics 6, 144–161.
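The TDT of Spielman et al. (1993), whose non-centrality parameter is being updated above, reduces to a McNemar-type statistic on transmission counts from heterozygous parents; the counts below are invented for illustration:

```python
import math

def tdt(transmitted, untransmitted):
    """McNemar-type TDT statistic at a bi-allelic marker: among
    heterozygous parents, b transmitted the candidate allele and c did
    not. Under no linkage/association, the statistic is approximately
    chi-square with 1 df; z is the signed square root."""
    b, c = transmitted, untransmitted
    chi2 = (b - c) ** 2 / (b + c)
    z = (b - c) / math.sqrt(b + c)
    return chi2, z

# Illustrative counts: 78 transmissions vs 46 non-transmissions.
chi2, z = tdt(78, 46)
print(round(chi2, 2), round(z, 2))  # 8.26 2.87
```

Assortative mating enters the power calculation by shifting the expected value of this statistic under the alternative, which the abstract captures through the 'effective' inbreeding coefficient.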
Jérémie Nsengimana 1 , Ben D Brown 2 , Alistair S Hall 2 , Jenny H Barrett 141 Leeds Institute of Molecular Medicine, University of Leeds, UK 42 Leeds Institute for Genetics, Health and Therapeutics, University of Leeds, UK Keywords: Inflammatory genes, haplotype, coronary artery disease Genetic Risk of Acute Coronary Events (GRACE) is an initiative to collect cases of coronary artery disease (CAD) and their unaffected siblings in the UK and to use them to map genetic variants increasing disease risk. The aim of the present study was to test the association between CAD and 51 single nucleotide polymorphisms (SNPs), and their haplotypes, from 35 inflammatory genes. Genotype data were available for 1154 persons affected before age 66 (including 48% before age 50) and their 1545 unaffected siblings (891 discordant families). Each SNP was tested for association with CAD, and haplotypes within genes or gene clusters were tested using FBAT (Rabinowitz & Laird, 2000). For the most significant results, genetic effect size was estimated using conditional logistic regression (CLR) within STATA, adjusting for other risk factors. Haplotypes were assigned using HAPLORE (Zhang et al., 2005), which considers all parental mating types consistent with offspring genotypes and assigns them a probability of occurrence. This probability was used in CLR to weight the haplotypes. In the single-SNP analysis, several SNPs showed some evidence for association, including one SNP in the interleukin-1A gene. Analysing haplotypes in the interleukin-1 gene cluster, a common 3-SNP haplotype was found to increase the risk of CAD (P = 0.009). In an additive genetic model adjusting for covariates, the odds ratio (OR) for this haplotype is 1.56 (95% CI: 1.16–2.10, p = 0.004) for early-onset CAD (before age 50). This study illustrates the utility of haplotype analysis in family-based association studies to investigate candidate genes. References Rabinowitz, D. & Laird, N. M. (2000) Hum Hered 50, 211–223.
Zhang, K., Sun, F. & Zhao, H. (2005) Bioinformatics 21, 90–103. Andrea Foulkes 1 , Recai Yucel 1 , Xiaohong Li 143 Division of Biostatistics, University of Massachusetts, USA Keywords: Haplotype, high-dimensional, mixed modeling The explosion of molecular-level information coupled with large epidemiological studies presents an exciting opportunity to uncover the genetic underpinnings of complex diseases; however, several analytical challenges remain to be addressed. Characterizing the components of complex diseases inevitably requires consideration of synergies across multiple genetic loci and environmental and demographic factors. In addition, it is critical to capture information on allelic phase, that is, whether alleles within a gene are in cis (on the same chromosome) or in trans (on different chromosomes). In association studies of unrelated individuals, this alignment of alleles within a chromosomal copy is generally not observed. We address the potential ambiguity in allelic phase in this high-dimensional data setting using mixed effects models. Both a semi-parametric and a fully likelihood-based approach to estimation are considered to account for missingness in cluster identifiers. In the first case, we apply a multiple imputation procedure coupled with a first-stage expectation-maximization algorithm for parameter estimation. A bootstrap approach is employed to assess sensitivity to variability induced by parameter estimation. Secondly, a fully likelihood-based approach using an expectation conditional maximization algorithm is described. Notably, these models allow for characterizing high-order gene-gene interactions while providing a flexible statistical framework to account for the confounding or mediating role of person-specific covariates. The proposed method is applied to data arising from a cohort of human immunodeficiency virus type-1 (HIV-1) infected individuals at risk for therapy-associated dyslipidemia.
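The phase ambiguity described arises, in the simplest two-SNP case, only for double heterozygotes; a minimal EM sketch for haplotype frequencies follows (a standard approach for illustration, not necessarily the authors' estimation scheme, and the sample counts are invented):

```python
from itertools import product

# Haplotypes over two bi-allelic SNPs, coded by the allele (0/1) carried
# at each SNP. A genotype is the pair of per-SNP allele counts (0, 1, 2).
HAPS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def resolutions(g1, g2):
    """All ordered haplotype pairs consistent with a two-SNP genotype.
    Only the double heterozygote (1, 1) has more than one unordered
    resolution: cis {00, 11} versus trans {01, 10}."""
    return [(h1, h2) for h1, h2 in product(HAPS, HAPS)
            if h1[0] + h2[0] == g1 and h1[1] + h2[1] == g2]

def em_hap_freqs(counts, n_iter=50):
    """EM estimates of haplotype frequencies from genotype counts
    (a dict mapping (g1, g2) -> number of individuals)."""
    freq = {h: 0.25 for h in HAPS}
    for _ in range(n_iter):
        expected = {h: 0.0 for h in HAPS}
        for (g1, g2), n in counts.items():
            res = resolutions(g1, g2)
            weights = [freq[h1] * freq[h2] for h1, h2 in res]
            total = sum(weights)
            # E-step: split each genotype count over its resolutions.
            for (h1, h2), w in zip(res, weights):
                expected[h1] += n * w / total
                expected[h2] += n * w / total
        n_hap = sum(expected.values())
        # M-step: renormalise expected haplotype counts.
        freq = {h: e / n_hap for h, e in expected.items()}
    return freq

# Illustrative sample dominated by coupling-phase (cis) haplotypes.
counts = {(0, 0): 40, (1, 1): 40, (2, 2): 40, (1, 0): 5, (0, 1): 5}
freq = em_hap_freqs(counts)
print({h: round(f, 3) for h, f in freq.items()})
```

With these counts, the EM resolves nearly all double heterozygotes to the cis configuration, so the (0,0) and (1,1) haplotypes dominate.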
Simulation studies demonstrate reasonable power and control of family-wise type 1 error rates. Vivien Marquard 1 , Lars Beckmann 1 , Jenny Chang-Claude 144 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany Keywords: Genotyping errors, type I error, haplotype-based association methods It has been shown in several simulation studies that genotyping errors may have a great impact on the type I error of statistical methods used in genetic association analysis of complex diseases. Our aim was to investigate type I error rates in a case-control study when differential and non-differential genotyping errors were introduced in realistic scenarios. We simulated case-control data sets in which individual genotypes were drawn from a haplotype distribution of 18 haplotypes with 15 markers in the APM1 gene. Genotyping errors were introduced following the unrestricted and the symmetric-with-0-edges error models described by Heid et al. (2006). In six scenarios, errors resulted from changes of one allele to another with predefined probabilities of 1%, 2.5% or 10%, respectively. Multiple errors per haplotype were possible, varying between 0 and 15, the number of markers investigated. We examined three association methods: Mantel statistics using haplotype sharing, a haplotype-specific score test, and the Armitage trend test for single markers. The type I error rates of all three methods were unaffected at genotyping error rates below 1%. For higher error rates and differential errors, the type I error of the Mantel statistic was only slightly increased, that of the Armitage trend test moderately increased, and that of the score test highly increased. The type I error rates were correct for all three methods for non-differential errors. Further investigations will be carried out with different frequencies of differential error rates and will focus on power.
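A symmetric allele-level error process of the kind simulated can be sketched as follows; the error rate, genotype coding and sample size are illustrative, and this is not a reimplementation of the Heid et al. (2006) models:

```python
import random

def flip_alleles(genotype, error_rate, rng):
    """Flip each of the two alleles of a bi-allelic genotype (coded as
    the count of allele '1': 0, 1 or 2) independently with probability
    error_rate -- a simple symmetric allele-error model."""
    alleles = [1] * genotype + [0] * (2 - genotype)
    noisy = [a ^ 1 if rng.random() < error_rate else a for a in alleles]
    return sum(noisy)

rng = random.Random(7)
true = [rng.choice([0, 1, 2]) for _ in range(50000)]
observed = [flip_alleles(g, 0.025, rng) for g in true]
mismatch = sum(t != o for t, o in zip(true, observed)) / len(true)
print(round(mismatch, 3))  # roughly 2 x 0.025 at the genotype level
```

Making the error rate depend on case-control status turns this into the differential-error scenario that inflated the score test's type I error in the study.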
Arne Neumann 1 , Dörthe Malzahn 1 , Martina Müller 2 , Heike Bickeböller 145 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany 46 GSF-National Research Center for Environment and Health, Neuherberg & IBE-Institute of Epidemiology, Ludwig-Maximilians University München, Germany Keywords: Interaction, longitudinal, nonparametric Longitudinal data show the time-dependent course of phenotypic traits. In this contribution, we consider longitudinal cohort studies and investigate the association between two candidate genes and a dependent quantitative longitudinal phenotype. The set-up defines a factorial design which allows us to test simultaneously for the overall gene effects of the loci as well as for possible gene-gene and gene-time interactions. The latter would induce genetically based time-profile differences in the longitudinal phenotype. We adapt a non-parametric statistical test to genetic epidemiological cohort studies and investigate its performance by simulation studies. The statistical test was originally developed for longitudinal clinical studies (Brunner, Munzel, Puri, 1999 J Multivariate Anal 70:286-317). It is non-parametric in the sense that no assumptions are made about the underlying distribution of the quantitative phenotype. Longitudinal observations belonging to the same individual can be arbitrarily dependent on one another across time points, whereas trait observations of different individuals are independent. The two loci are assumed to be statistically independent. Our simulations show that the nonparametric test is comparable with ANOVA in terms of power to detect gene-gene and gene-time interaction in an ANOVA-favourable setting.
Rebecca Hein 1 , Lars Beckmann 1 , Jenny Chang-Claude 147 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany Keywords: Indirect association studies, interaction effects, linkage disequilibrium, marker allele frequency Association studies accounting for gene-environment interactions (GxE) may be useful for detecting genetic effects and identifying important environmental effect modifiers. Current technology facilitates very dense marker spacing in genetic association studies; however, the true disease variant(s) may not be genotyped. In this situation, an association between a gene and a phenotype may still be detectable using genetic markers associated with the true disease variant(s) (indirect association). Zondervan and Cardon [2004] showed that the odds ratios (OR) of markers associated with the disease variant depend strongly on the linkage disequilibrium (LD) between the variant and the markers and on whether the allele frequencies match, and thereby influence the sample size needed to detect genetic association. We examined the influence of LD and allele frequencies on the sample size needed to detect GxE in indirect association studies, and provide tables for sample size estimation. For discordant allele frequencies and incomplete LD, sample sizes can become unfeasibly large. The influence of both factors is stronger for disease loci with small rather than moderate to high disease allele frequencies. A decline in D' of, e.g., 5% has less impact on sample size than increasing the difference in allele frequencies by the same percentage. Assuming 80% power, large interaction effects can be detected using smaller sample sizes than those needed for the detection of main effects. The detection of interaction effects involving rare alleles may not be possible. Focusing only on marker density can be a limited strategy in indirect association studies of GxE.
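The dependence of sample size on LD and allele-frequency matching can be made concrete through the standard approximation that the required sample size at a marker inflates by 1/r² relative to typing the causal variant directly (an assumption introduced here for illustration, not a formula quoted from the abstract):

```python
def r_squared(d_prime, p, q):
    """LD measure r^2 between a causal variant (allele frequency p) and
    a marker (allele frequency q), given D'. D_max uses the usual
    definition for positive D."""
    d_max = min(p * (1 - q), q * (1 - p))
    d = d_prime * d_max
    return d * d / (p * (1 - p) * q * (1 - q))

def inflate_n(n_direct, d_prime, p, q):
    """Approximate sample size needed at the marker: the direct sample
    size inflated by 1 / r^2."""
    return n_direct / r_squared(d_prime, p, q)

# Matched frequencies with complete LD: no inflation.
print(round(inflate_n(1000, 1.0, 0.3, 0.3)))   # 1000
# Discordant frequencies shrink r^2 even at D' = 1 and blow up n.
print(round(inflate_n(1000, 1.0, 0.05, 0.3)))  # several-fold larger
```

This reproduces the qualitative message of the abstract: even with D' = 1, mismatched allele frequencies alone can make the required sample size unfeasibly large.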
Cyril Dalmasso 1 , Emmanuelle Génin 2 , Catherine Bourgain 2 , Philippe Broët 148 JE 2492 , Univ. Paris-Sud, France 49 INSERM UMR-S 535 and University Paris Sud, Villejuif, France Keywords: Linkage analysis, Multiple testing, False Discovery Rate, Mixture model In the context of genome-wide linkage analyses, where a large number of statistical tests are performed simultaneously, the False Discovery Rate (FDR), defined as the expected proportion of false discoveries among all discoveries, is nowadays widely used to account for the multiple testing problem. Other related criteria have been considered, such as the local False Discovery Rate (lFDR), a variant of the FDR that gives each test its own measure of significance. The lFDR is defined as the posterior probability that a null hypothesis is true. Most of the proposed methods for estimating the lFDR or the FDR rely on distributional assumptions under the null hypothesis. However, in observational studies, the empirical null distribution may be very different from the theoretical one. In this work, we propose a mixture-model-based approach that provides estimates of the lFDR and the FDR in the context of large-scale variance component linkage analyses. In particular, this approach allows estimation of the empirical null distribution, the latter being a key quantity for any simultaneous inference procedure. The proposed method is applied to a real dataset. Arief Gusnanto 1 , Frank Dudbridge 150 MRC Biostatistics Unit, Cambridge UK Keywords: Significance, genome-wide, association, permutation, multiplicity Genome-wide association scans have introduced statistical challenges, mainly the multiplicity of thousands of tests. The question of what constitutes a significant finding remains somewhat unresolved. Permutation testing is very time-consuming, whereas Bayesian arguments struggle to distinguish direct from indirect association.
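The FDR criterion defined above is most often controlled in practice with the Benjamini-Hochberg step-up procedure; a minimal sketch (a standard method, distinct from the mixture-model approach proposed in the abstract), with illustrative p-values:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: find the largest rank k
    with p_(k) <= k * alpha / m and reject the k smallest p-values.
    Returns the indices of the rejected hypotheses."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, alpha=0.05))  # [0, 1]
```

Note that 0.039 is not rejected even though it is below 0.05: the step-up threshold at rank 3 is 3 x 0.05 / 10 = 0.015. Procedures like this assume a theoretical null, which is exactly the assumption the mixture-model approach relaxes by estimating the empirical null.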
It seems attractive to summarise the multiplicity in a simple form that allows users to avoid time-consuming permutations. A standard significance level would facilitate reporting of results and reduce the need for permutation tests. This is potentially important because current scans do not have full coverage of the whole genome, and yet the implicit multiplicity is genome-wide. We discuss some proposed summaries, with reference to the empirical null distribution of the multiple tests, approximated through a large number of random permutations. Using genome-wide data from the Wellcome Trust Case-Control Consortium, we used a sub-sampling approach with increasing density to estimate the nominal p-value needed to obtain family-wise significance of 5%. The results indicate that the significance level converges to about 1e-7 as the marker spacing becomes infinitely dense. We considered the concept of an effective number of independent tests, and showed that when used in a Bonferroni correction, this number varies with the overall significance level but is roughly constant in the region of interest. We compared several estimators of the effective number of tests and showed that, in the region of significance of interest, Patterson's eigenvalue-based estimator gives approximately the right family-wise error rate. Michael Nothnagel 1 , Amke Caliebe 1 , Michael Krawczak 151 Institute of Medical Informatics and Statistics, University Clinic Schleswig-Holstein, University of Kiel, Germany Keywords: Association scans, Bayesian framework, posterior odds, genetic risk, multiplicative model Whole-genome association scans have been suggested as a cost-efficient way to survey genetic variation and to map genetic disease factors. We used a Bayesian framework to investigate the posterior odds of a genuine association under multiplicative disease models. We demonstrate that the p value alone is not a sufficient means to evaluate the findings in association studies.
We suggest that likelihood ratios should accompany p values in association reports. We argue that, given the reported results of whole-genome scans, more associations should have been successfully replicated if the consistently made assumptions about considerable genetic risks were correct. We conclude that it is very likely that the vast majority of relative genetic risks are only of the order of 1.2 or lower. Clive Hoggart 1 , Maria De Iorio 1 , John Whittaker 2 , David Balding 152 Department of Epidemiology and Public Health, Imperial College London, UK 53 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK Keywords: Genome-wide association analyses, shrinkage priors, Lasso Testing one SNP at a time does not fully realise the potential of genome-wide association studies to identify multiple causal variants of small effect, a plausible scenario for many complex diseases. Moreover, many simulation studies assume a single causal variant, so more complex realities are ignored. Analysing large numbers of variants simultaneously is now becoming feasible, thanks to developments in Bayesian stochastic search methods. We pose the problem of SNP selection as variable selection in a regression model. In contrast to single-SNP tests, this approach simultaneously models the effect of all SNPs. SNPs are selected by a Bayesian interpretation of the lasso (Tibshirani, 1996): the maximum a posteriori (MAP) estimate of the regression coefficients, which are given independent double-exponential prior distributions. The double-exponential distribution is an example of a shrinkage prior; MAP estimates with shrinkage priors can be exactly zero, so all SNPs with non-zero regression coefficients are selected. In addition to the commonly used double-exponential (Laplace) prior, we also implement the normal-exponential-gamma prior distribution.
We show that use of the Laplace prior improves SNP selection in comparison with single-SNP tests, and that the normal-exponential-gamma prior leads to a further improvement. Our method is fast and can handle very large numbers of SNPs: we demonstrate its performance using both simulated and real genome-wide data sets with 500 K SNPs, which can be analysed in 2 hours on a desktop workstation. Mickael Guedj 1,2 , Jerome Wojcik 2 , Gregory Nuel 154 Laboratoire Statistique et Génome, Université d'Evry, Evry France 55 Serono Pharmaceutical Research Institute, Plan-les-Ouates, Switzerland Keywords: Local Replication, Local Score, Association In gene mapping, replication of initial findings has been put forward as the approach of choice for filtering false positives from true signals for underlying loci. In practice, however, such replications are too rarely observed. Besides statistical and technical factors (lack of power, multiple testing, stratification, quality control), inconsistent conclusions obtained from independent populations might result from real biological differences. In particular, the high degree of variation in the strength of LD among populations of different origins is a major challenge to the discovery of genes. Seeking Local Replications (defined as the presence of a signal of association in the same genomic region across populations) instead of strict replications (same locus, same risk allele) may lead to more reliable results. Recently, a multi-marker approach based on the Local Score statistic has been proposed as a simple and efficient way to select candidate genomic regions at the first stage of genome-wide association studies. Here we propose an extension of this approach adapted to replicated association studies. Based on simulations, this method appears promising. In particular, it outperforms classical single-marker strategies in detecting modest-effect genes.
Additionally, it constitutes, to our knowledge, the first framework dedicated to the detection of such Local Replications. Juliet Chapman 1 , Claudio Verzilli 1 , John Whittaker 156 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK Keywords: FDR, Association studies, Bayesian model selection As genome-wide association studies become commonplace, there is debate as to how such studies should be analysed and what we might hope to gain from the data. It is clear that standard single-locus approaches are limited in that they do not adjust for the effects of other loci, and problematic since it is not obvious how to adjust for multiple comparisons. False discovery rates have been suggested, but it is unclear how well these will cope with highly correlated genetic data. We consider the validity of standard false discovery rates in large-scale association studies. We also show that a Bayesian procedure has advantages in detecting causal loci amongst a large number of dependent SNPs, and investigate properties of a Bayesian FDR. Peter Kraft 157 Harvard School of Public Health, Boston USA Keywords: Gene-environment interaction, genome-wide association scans Appropriately analyzed two-stage designs, where a subset of available subjects are genotyped on a genome-wide panel of markers at the first stage and then a much smaller subset of the most promising markers are genotyped on the remaining subjects, can have nearly as much power as a single-stage study in which all subjects are genotyped on the genome-wide panel, yet can be much less expensive. Typically, the "most promising" markers are selected based on evidence for a marginal association between genotypes and disease. Subsequently, the few markers found to be associated with disease at the end of the second stage are interrogated for evidence of gene-environment interaction, mainly to understand their impact on disease etiology and public health impact.
However, this approach may miss variants which have a sizeable effect restricted to one exposure stratum and hence only a modest marginal effect. We have proposed using information on the joint effects of genes and a discrete list of environmental exposures at the initial screening stage to select promising markers for the second stage [Kraft et al., Hum Hered 2007]. This approach optimizes power to detect both variants that have a sizeable marginal effect and variants that have a small marginal effect but a sizeable effect in a stratum defined by an environmental exposure. As an example, I discuss a proposed genome-wide association scan for Type II diabetes susceptibility variants based on several large nested case-control studies. Beate Glaser 1, Peter Holmans 1. 1 Biostatistics and Bioinformatics Unit, Cardiff University, School of Medicine, Heath Park, Cardiff, UK. Keywords: Combined case-control and trios analysis, Power, False-positive rate, Simulation, Association studies The statistical power of genetic association studies can be enhanced by combining the analysis of case-control samples with that of parent-offspring trio samples. Various combined analysis techniques have recently been developed; as yet, there have been no comparisons of their power. This work was performed with the aim of identifying the most powerful method among the available combined techniques, including the test statistics developed by Kazeem and Farrall (2005), Nagelkerke and colleagues (2004) and Dudbridge (2006), as well as a simple combination of χ2-statistics from the single samples. Simulation studies were performed to investigate their power under different additive, multiplicative, dominant and recessive disease models. False-positive rates were determined by studying type I error rates under null models, including models with unequal allele frequencies between the single case-control and trio samples.
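The "simple combination of χ2-statistics from single samples" mentioned above can be illustrated with a short sketch. This is not the Kazeem & Farrall, Nagelkerke or Dudbridge estimator; the allelic and TDT statistics below are generic textbook forms, and the counts are invented for illustration. The combination simply uses the fact that the sum of two independent 1-df chi-square statistics is chi-square distributed with 2 df.

```python
import math

def allelic_chi2(a_case, b_case, a_ctrl, b_ctrl):
    """1-df allelic association chi-square from a 2x2 table of allele counts."""
    n = a_case + b_case + a_ctrl + b_ctrl
    table = [[a_case, b_case], [a_ctrl, b_ctrl]]
    row = [a_case + b_case, a_ctrl + b_ctrl]
    col = [a_case + a_ctrl, b_case + b_ctrl]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

def tdt_chi2(transmitted, untransmitted):
    """McNemar-type TDT chi-square from heterozygous-parent transmissions."""
    b, c = transmitted, untransmitted
    return (b - c) ** 2 / (b + c)

def combined_chi2(stat_cc, stat_trio):
    """Sum two independent 1-df chi-squares; the sum is chi-square with 2 df,
    whose survival function is P(X > x) = exp(-x/2)."""
    total = stat_cc + stat_trio
    return total, math.exp(-total / 2.0)
```

For instance, a trio sample with 60 transmissions versus 40 non-transmissions gives a TDT statistic of 4.0, which can then be added to the case-control statistic before computing a single combined p-value.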
We identified three techniques with equivalent power and false-positive rates, which included modifications of the three main approaches: 1) the unmodified combined odds ratio estimate by Kazeem & Farrall (2005), 2) a modified version of the combined risk ratio estimate by Nagelkerke & colleagues (2004), and 3) a modified technique for a combined risk ratio estimate by Dudbridge (2006). Our work highlights the importance of studies investigating the test performance criteria of novel methods, as they will help users select the optimal approach within a range of available analysis techniques. David Almorza 1, M.V. Kandus 2, Juan Carlos Salerno 2, Rafael Boggio 3. 1 Facultad de Ciencias del Trabajo, University of Cádiz, Spain; 2 Instituto de Genética IGEAF, Buenos Aires, Argentina; 3 Universidad Nacional de La Plata, Buenos Aires, Argentina. Keywords: Principal component analysis, maize, ear weight, inbred lines The objective of this work was to evaluate the relationships among different traits of the ear of maize inbred lines and to group genotypes according to their performance. Ten inbred lines developed at IGEAF (INTA Castelar) and five public inbred lines as checks were used. A field trial was carried out in Castelar, Buenos Aires (34° 36' S, 58° 39' W), using a completely randomized design with three replications. At harvest, individual weight (P.E.), diameter (D.E.), row number (N.H.) and length (L.E.) of the ear were assessed. A principal component analysis, PCA (Infostat 2005), was used, and the variability of the data was depicted with a biplot. Principal components 1 and 2 (CP1 and CP2) explained 90% of the data variability. CP1 was correlated with P.E., L.E. and D.E., while CP2 was correlated with N.H. We found that individual weight (P.E.) was more strongly correlated with ear diameter (D.E.) than with length (L.E.). Five groups of inbred lines were distinguished: with high P.E. and mean N.H. (04-70, 04-73, 04-101 and MO17), with high P.E. but lower N.H.
(04-61 and B14), with mean P.E. and N.H. (B73, 04-123 and 04-96), with high N.H. but lower P.E. (LP109, 04-8, 04-91 and 04-76), and with low P.E. and low N.H. (LP521 and 04-104). The PCA showed which variables had the greatest influence on ear weight and how those variables are correlated with one another. Moreover, the different groups found with this analysis allow the evaluation of inbred lines by several traits simultaneously. Sven Knüppel 1, Anja Bauerfeind 1, Klaus Rohde 1. 1 Department of Bioinformatics, MDC Berlin, Germany. Keywords: Haplotypes, association studies, case-control, nuclear families The era of gene-chip technology provides a plethora of phase-unknown SNP genotypes with which to find significant associations with a genetic trait. To circumvent the possibly low information content of a single SNP, one groups successive SNPs and estimates haplotypes. Haplotype estimation, however, may reveal ambiguous haplotype pairs and bias the application of statistical methods. Zaykin et al. (Hum Hered, 53:79-91, 2002) proposed the construction of a design matrix to take this ambiguity into account. Here we present a set of functions written for the statistical package R, which carry out haplotype estimation on the basis of the EM algorithm for individuals (case-control) or nuclear families. The construction of a design matrix on the basis of estimated haplotypes or haplotype pairs allows the application of standard methods for association studies (linear and logistic regression), as well as methods such as haplotype-sharing statistics and the TDT. Applications of these methods to genome-wide association screens will be demonstrated.
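The design-matrix idea described above can be sketched in a few lines. The authors' functions are written for R; this is an illustrative Python analogue for the simplest case of two biallelic SNPs, with all coding conventions invented here: haplotype frequencies are estimated by EM from phase-unknown genotypes, and each individual's design-matrix row holds the expected haplotype dosages, averaged over the phase ambiguity.

```python
from itertools import product

# Haplotypes over two biallelic SNPs, each coded by its pair of alleles (0/1).
HAPS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def compatible_pairs(g1, g2):
    """All ordered haplotype pairs consistent with genotype (g1, g2),
    where g1 and g2 are minor-allele counts (0, 1 or 2) at each SNP."""
    return [(h1, h2) for h1, h2 in product(HAPS, repeat=2)
            if h1[0] + h2[0] == g1 and h1[1] + h2[1] == g2]

def em_haplotype_freqs(genotypes, n_iter=50):
    """EM estimate of haplotype frequencies from unphased genotypes."""
    freqs = {h: 1.0 / len(HAPS) for h in HAPS}
    for _ in range(n_iter):
        counts = {h: 0.0 for h in HAPS}
        for g in genotypes:
            pairs = compatible_pairs(*g)
            weights = [freqs[h1] * freqs[h2] for h1, h2 in pairs]
            total = sum(weights)
            for (h1, h2), w in zip(pairs, weights):
                counts[h1] += w / total  # expected count from this person
                counts[h2] += w / total
        n_haps = 2 * len(genotypes)
        freqs = {h: c / n_haps for h, c in counts.items()}
    return freqs

def design_row(g, freqs):
    """Expected haplotype dosages for one individual: the Zaykin-style
    design-matrix row, averaging over the ambiguous phases."""
    pairs = compatible_pairs(*g)
    weights = [freqs[h1] * freqs[h2] for h1, h2 in pairs]
    total = sum(weights)
    row = {h: 0.0 for h in HAPS}
    for (h1, h2), w in zip(pairs, weights):
        row[h1] += w / total
        row[h2] += w / total
    return [row[h] for h in HAPS]
```

An unambiguous genotype such as (0, 0) yields the deterministic row [2, 0, 0, 0], while a double heterozygote (1, 1) splits its dosage between the two possible phases in proportion to the estimated haplotype frequencies; stacking these rows gives a design matrix usable in ordinary linear or logistic regression.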
Manuela Zucknick 1, Chris Holmes 2, Sylvia Richardson 1. 1 Department of Epidemiology and Public Health, Imperial College London, UK; 2 Department of Statistics, Oxford Center for Gene Function, University of Oxford, UK. Keywords: Bayesian, variable selection, MCMC, large p, small n, structured dependence In large-scale genomic applications, vast numbers of markers or genes are scanned to find a few candidates which are linked to a particular phenotype. Statistically, this is a variable selection problem in the "large p, small n" situation, where many more variables than samples are available. An additional feature is the complex dependence structure which is often observed among the markers/genes due to linkage disequilibrium or their joint involvement in biological processes. Bayesian variable selection methods using indicator variables are well suited to the problem. Binary phenotypes like disease status are common, and both Bayesian probit and logistic regression can be applied in this context. We argue that logistic regression models are both easier to tune and easier to interpret than probit models, and we implement the approach of Holmes & Held (2006). Because the model space is vast, MCMC methods are used as stochastic search algorithms with the aim of quickly finding regions of high posterior probability. In a trade-off between fast-updating but slow-moving single-gene Metropolis-Hastings samplers and computationally expensive full Gibbs sampling, we propose to employ the dependence structure among the genes/markers to help decide which variables to update together. Also, parallel tempering methods are used to aid bold moves and help avoid getting trapped in local optima. Mixing and convergence of the resulting Markov chains are evaluated and compared to standard samplers in both a simulation study and an application to a gene expression data set. Reference: Holmes, C. C. & Held, L. (2006) Bayesian auxiliary variable models for binary and multinomial regression.
Bayesian Analysis 1, 145-168. Dawn Teare 1. 1 MMGE, University of Sheffield, UK. Keywords: CNP, family-based analysis, MCMC Evidence is accumulating that segmental copy number polymorphisms (CNPs) may represent a significant portion of human genetic variation. These highly polymorphic systems require handling as phenotypes rather than as co-dominant markers, placing new demands on family-based analyses. We present an integrated approach to meet these challenges in the form of a graphical model, in which the underlying discrete CNP phenotype is inferred from the (single or replicate) quantitative measure within the analysis, whilst assuming an allele-based system segregating through the pedigree. [source] PSYCHOSOCIAL INVESTIGATION OF INDIVIDUAL AND COMMUNITY RESPONSES TO THE EXPERIENCE OF OVINE JOHNE'S DISEASE IN RURAL VICTORIA AUSTRALIAN JOURNAL OF RURAL HEALTH, Issue 2 2004 Bernadette Hood Objective: This paper explores the psychosocial outcomes for individuals and communities in rural Victoria who experienced the outbreak of Ovine Johne's Disease (OJD). Design: The study uses a qualitative methodology to analyse the minutes of evidence provided by the inquiry into the control of OJD, in order to identify the psychosocial events, experiences and outcomes associated with the control of this outbreak. The inquiry was undertaken by the Environment and Natural Resources Committee of the Victorian State Government. Setting: Public hearings were undertaken by the committee across several rural Victorian communities and the state capital, Melbourne. Subjects: The transcripts detail 136 submissions from 98 individuals and 23 organisations. Outcome measures: The analysis aimed to provide insight into the impact of the disease on individuals and communities, and also to explore the factors individuals perceived as associated with these outcomes.
Results: While the paper identifies that aspects of the stock loss associated with the outbreak caused substantial emotional and economic distress, for farmers the most significant finding was the impact of the government control program on individuals, families and rural communities. The control program was perceived as having very limited scientific credibility, and its implementation was described as heartless, inflexible and authoritarian. Involvement with the program led farmers to report emotions such as trauma, shame, guilt and stigma. Families became discordant, and the sense of community within rural townships fragmented. Psychological outcomes of grief, depression and anxiety emerged as prevalent themes within families and communities. Conclusions: These data highlight the need for significant attention to the management of rural disasters such as the OJD program. [source] Multilayer Substrate-Mediated Tuning Resonance of Plasmon and SERS EF of Nanostructured Silver CHEMPHYSCHEM, Issue 12 2010 Lian C. T. Shoute Abstract A thin film of dielectric on a reflecting surface, constituting a multilayer substrate, modulates light intensity through the interference effect. A nanostructure consisting of randomly oriented silver particles of different shapes, sizes and interparticle spacings supports multiple plasmon resonances and is observed to have a broad extinction spectrum that spans the entire visible region. Combining the two systems by fabricating the nanostructure on the thin dielectric film of the multilayer substrate yields a new composite structure which is observed to modulate both the extinction spectrum and the SERS EF (surface-enhanced Raman scattering enhancement factor) of the nanostructure as the thickness of the thin-film dielectric is varied. The frequency and intensity of the visible extinction spectrum vary dramatically with the dielectric thickness, and in the intermediate thickness range the spectrum has no visible band.
The SERS EF determined for the composite structure as a function of the thin-film dielectric thickness varies by several orders of magnitude. A strong correlation between the magnitude of the SERS EF and the extinction intensity is observed over the entire dielectric thickness range, indicating that the extinction spectrum corresponds to the excitation of the plasmon resonances of the nanostructure. A significant finding with potential applications is that the composite structure has a synergistic effect, boosting the SERS EF of the nanostructure by an order of magnitude or more compared with the same nanostructure on an unlayered substrate. [source] Influence of religious and spiritual values on the willingness of Chinese-Americans to donate organs for transplantation CLINICAL TRANSPLANTATION, Issue 5 2000 Wilbur Aaron Lam The rate of organ donation among minority groups in the United States, including Chinese-Americans, is very low. There is currently very little data in the biomedical literature that builds on qualitative research to quantify the attitudes of Chinese-Americans toward organ donation. The present study quantitatively assesses the religious and cultural reasons that Chinese-Americans appear to be less willing than other populations to donate their organs. It also seeks to determine whether Confucian, Buddhist, or Daoist ideals are a significant factor in the overall reluctance to donate organs among respondents in this sample. A questionnaire distributed to Chinese-American adults asked about general feelings toward organ donation and about Buddhist, Confucian, Christian, Daoist, and other spiritual objections. The results suggest that Chinese-Americans are indeed influenced by Confucian values, and to a lesser extent by Buddhist, Daoist, and other spiritual beliefs, that associate an intact body with respect for ancestors or nature.
Another significant finding is that the subjects were most willing to donate their organs, after their deaths, to close relatives, and then, in descending order, to distant relatives, people from their home country, and strangers. This 'negotiable' willingness has enormous implications for clinicians, who may be able to increase organ donation rates among Chinese-Americans by, first, recognizing their diverse spiritual beliefs and, second, offering a variety of possibilities for organ procurement and allocation. [source] A pedagogical Web service-based interactive learning environment for a digital filter design course: An evolutionary approach COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2010 Wen-Hsiung Wu Abstract The course of digital filter design in electronic/electrical engineering involves complicated mathematical equations and dynamic waveform variations. It is a consensus among educators that using simulation tools assists in improving students' learning experiences. Previous studies on system simulation seemed to lack an appropriate approach to designing such a course, and few emphasized the design of an interactive learning environment using an evolutionary approach. This study integrated the design concept of an evolutionary approach and Web service-based technology into a simulation system entitled the Pedagogical Web Service-Based Interactive Learning Environment (PEWSILE). The PEWSILE system contained two interactive learning environments, a simple system and an advanced system, and offered a total of six pedagogical Web services. The simple interactive learning environment included text/color-based services and text/color/diagram-based services. The advanced interactive learning environment included batch-based, interval change-based, comparison-based, and scroll bar-based services.
The study also assessed students' performance across the six pedagogical Web services, covering interaction and overall use, usefulness, and intention to use, through a questionnaire survey and subsequent interviews. Three significant findings were reported. For example, in the advanced interactive learning environment, the designs of the interval change-based and comparison-based services make it easier to observe differences in the outcome of parameter changes, while the batch-based services lack the element of waveform comparison. In sum, the findings of this study provide helpful implications for designing engineering educational software. © 2010 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 423-433, 2010; View this article online at wileyonlinelibrary.com; DOI 10.1002/cae.20163 [source] Generalized social phobia and avoidant personality disorder: meaningful distinction or useless duplication? DEPRESSION AND ANXIETY, Issue 1 2008 Dianne L. Chambless Ph.D. Abstract Participants with generalized social phobia (GSP) with (n=36) and without (n=19) avoidant personality disorder (AVPD) were compared, via contrasts of group means and classification analysis, on purported core features of AVPD. GSP-AVPD participants proved to be more severely impaired or distressed on some group contrasts. Cluster analysis identified two groups in the sample, with group membership significantly correlated with AVPD diagnosis. However, almost all significant findings were nullified when severity of social phobia was statistically controlled. Thus, at least where participants with social phobia are concerned, it seems most parsimonious to consider AVPD a severe form of GSP rather than a separate diagnostic category. Depression and Anxiety 0:1-12, 2006. Published 2006 Wiley-Liss, Inc. [source] Noble gas and boron isotopic signatures of the Bacon-Manito geothermal fluid, Philippines GEOFLUIDS (ELECTRONIC), Issue 4 2008 F. E. B.
BAYON Abstract Noble gas isotopic compositions and abundances were determined on dry gas sampled from geothermal wells in the Bacon-Manito (BGPF) geothermal field in the Philippines. The most significant findings come from the 3He/4He ratio; a mantle He source is evidenced by ratios close to 7 Ra. Peripheral fluid from the west and south of the geothermal system is relatively enriched in 4He (R/Ra slightly > 2), most probably sourced from U and Th decay in old igneous or crustal rocks. The two end-members mix, producing the range of R/Ra ratios observed in the other wells included in this study. Preliminary data on the δ11B signature of the Bacon-Manito fluid separated from vapour range from 7‰ to 9‰. These values suggest that the local magmatic rocks could represent the main boron source, in agreement with the boron isotopic signature of Pacific arc lavas. [source] Videofiberoptic examination of the pharyngoesophageal segment and esophagus in patients after total laryngectomy HEAD & NECK: JOURNAL FOR THE SCIENCES & SPECIALTIES OF THE HEAD AND NECK, Issue 10 2003 Pen-Yuan Chu MD Abstract Background. Posttreatment follow-up in patients with squamous cell carcinoma of the head and neck is critical because of the high risk of recurrence or a new primary tumor. However, in patients who have undergone total laryngectomy, evaluation of the pharyngoesophageal segment (PES) and esophagus is difficult. Methods. Sixty patients who had undergone total laryngectomy received a videofiberoptic examination of the PES and esophagus at the OPD office during follow-up. Results. A satisfactory examination was achieved in 56 (93%) of the patients, and each procedure was completed within 15 minutes. Although only 11 (18%) of the patients were symptomatic at follow-up, 19 patients (34%) had significant findings, including one local recurrence and two secondary esophageal cancers. Patients were asymptomatic in all three cases. Conclusions.
Videofiberoptic examination is a simple, effective, and relatively noninvasive method that can be performed in the OPD office to evaluate the PES and esophagus in patients after total laryngectomy. © 2003 Wiley Periodicals, Inc. Head Neck 25: 858-863, 2003 [source] Type 2 diabetes and hepatocellular carcinoma: A cohort study in a high-prevalence area of hepatitis virus infection HEPATOLOGY, Issue 6 2006 Mei-Shu Lai This study aimed to elucidate the relationship between type 2 diabetes, other known risk factors, and primary hepatocellular carcinoma (HCC) in countries with a high prevalence of hepatitis infection. We followed a prospective cohort of 54,979 subjects who participated in the Keelung Community-Based Integrated Screening program between 1999 and 2002. A total of 5,732 subjects with type 2 diabetes were identified at enrollment on the basis of fasting blood glucose level, and a total of 138 confirmed HCC cases were identified either through two-stage liver cancer screening or through linkage with the National Cancer Registry. The independent effect of type 2 diabetes on the incidence of HCC, and the interaction between type 2 diabetes and hepatitis infection or lipid profile, were assessed using the Cox proportional hazards regression model. After controlling for age, sex, hepatitis B virus (HBV), hepatitis C virus (HCV), smoking, and alcohol consumption, the association between type 2 diabetes and the incidence of HCC (excluding 33 prevalent cases identified at enrollment) was modified by HCV status and cholesterol level. The associations were statistically significant only for subjects who were HCV negative (adjusted hazard ratio [HR] = 2.08 [1.03-4.18]) and for those with hypercholesterolemia (adjusted HR = 2.81 [1.20-6.55]). These statistically significant findings remained even after excluding cases of diabetes newly diagnosed at enrollment.
In conclusion, in an area with a high prevalence of hepatitis virus infection, type 2 diabetes increases the risk of developing HCC in those who are HCV negative or who have a high level of total cholesterol. (HEPATOLOGY 2006;43:1295-1302.) [source] Patient subjective experience and satisfaction during the perioperative period in the day surgery setting: A systematic review INTERNATIONAL JOURNAL OF NURSING PRACTICE, Issue 4 2006 Lenore Rhodes RN, BN (Hons) This systematic review used the Joanna Briggs Institute Qualitative Assessment and Review Instrument to manage, appraise, analyse and synthesize textual data in order to present the best available information on how patients experience nursing interventions and care during the perioperative period in the day surgery setting. Significant findings that emerged from the systematic review include the importance of pre-admission contact, the provision of relevant, specific education and information, improved communication skills, and the maintenance of patient privacy throughout the continuum of care. [source]