Biological Differences (biological + difference)


Selected Abstracts

Flaws in the U.S. Food and Drug Administration's Rationale for Supporting the Development and Approval of BiDil as a Treatment for Heart Failure Only in Black Patients

George T. H. Ellison
The U.S. Food and Drug Administration's (FDA) rationale for supporting the development and approval of BiDil (a combination of hydralazine hydrochloride and isosorbide dinitrate; H-I) for heart failure specifically in black patients was based on under-powered, post hoc subgroup analyses of two relatively old trials (V-HeFT I and II), which were further complicated by substantial covariate imbalances between racial groups. Indeed, the only statistically significant difference observed between black and white patients was found without any adjustment for potential confounders in samples that were unlikely to have been adequately randomized. Meanwhile, because the accepted baseline therapy for heart failure has substantially improved since these trials took place, their results cannot be combined with data from the more recent trial (A-HeFT) amongst black patients alone. There is therefore little scientific evidence to support the approval of BiDil only for use in black patients, and the FDA's rationale fails to consider the ethical consequences of recognizing racial categories as valid markers of innate biological difference, and permitting the development of group-specific therapies that are subject to commercial incentives rather than scientific evidence or therapeutic imperatives. This paper reviews the limitations in the scientific evidence used to support the approval of BiDil only for use in black patients; calls for further analysis of the V-HeFT I and II data which might clarify whether responses to H-I vary by race; and evaluates the consequences of commercial incentives to develop racialized medicines. We recommend that the FDA revise the procedures they use to examine applications for race-based therapies to ensure that these are based on robust scientific claims and do not undermine the aims of the 1992 Revitalization Act. [source]

Detection Methods for Irradiated Foods

Sulaxana Kumari Chauhan
ABSTRACT: Proper control of irradiation processing of food is critical to facilitate international trade of irradiated foods and to enhance consumer confidence, consumer choice, and safety. Analytical detection of radiation processing is essential for implementing quality control at all levels. An ideal detection method should measure a specific radiation effect that is proportional to the dose and is not affected by processing parameters, storage conditions, or the length of time between irradiation processing and analysis. The detection of irradiated foods is mainly based on radiolysis of lipids, modification of amino acids, modification of DNA, modification of carbohydrates, formation of free radicals, release of hydrogen gas, alterations in microbial load, measurement of biological difference, and other physical methods. [source]

P28 Interleukin-8 from keratinocytes can be used to test for contact allergy

Bolli Bjarnason
Objective: To investigate whether secretion of interleukin-8 (IL-8) proteins by keratinocytes following in vitro exposure to a contact allergen can be used to detect contact allergy. Methods: Suction blisters were made on the skin of subjects allergic and anergic to urushiol, the contact allergen of poison ivy. Keratinocyte cultures were prepared and exposed to the allergen in vitro; the allergen solvent served as the control. Variable allergen concentrations, allergen exposure times and cell culture times were used. At the end of each culture time, IL-8 RNA and protein in the culture supernatants were analyzed by PCR and ELISA. Results: The concentration of IL-8 in the supernatants proved to be a successful way to distinguish between subjects who patch tested positive with a non-toxic concentration of urushiol and subjects who tested negative. In the allergic subjects, a correlation was established between the dose of the allergen and the IL-8 protein concentration in the supernatants. Conclusions: In vitro testing of contact allergies in patients makes possible an objective assessment of their allergic status without causing a booster effect or risking active sensitization. The results indicate that the method may serve as an alternative to animal models for testing consumer products before marketing, thus avoiding ethical problems and problems of test interpretation caused by biological differences between animals and humans. [source]

Assessing human germ-cell mutagenesis in the Postgenome Era: A celebration of the legacy of William Lawson (Bill) Russell,

Andrew J. Wyrobek
Abstract Birth defects, de novo genetic diseases, and chromosomal abnormality syndromes occur in ~5% of all live births, and affected children suffer from a broad range of lifelong health consequences. Despite the social and medical impact of these defects, and the 8 decades of research in animal systems that have identified numerous germ-cell mutagens, no human germ-cell mutagen has been confirmed to date. There is now a growing consensus that the inability to detect human germ-cell mutagens is due to technological limitations in the detection of random mutations rather than biological differences between animal and human susceptibility. A multidisciplinary workshop responding to this challenge convened at The Jackson Laboratory in Bar Harbor, Maine. The purpose of the workshop was to assess the applicability of an emerging repertoire of genomic technologies to studies of human germ-cell mutagenesis. Workshop participants recommended large-scale human germ-cell mutation studies be conducted using samples from donors with high-dose exposures, such as cancer survivors. Within this high-risk cohort, parents and children could be evaluated for heritable changes in (a) DNA sequence and chromosomal structure, (b) repeat sequences and minisatellites, and (c) global gene expression profiles and pathways. Participants also advocated the establishment of a bio-bank of human tissue samples from donors with well-characterized exposure, including medical and reproductive histories. This mutational resource could support large-scale, multiple-endpoint studies. Additional studies could involve the examination of transgenerational effects associated with changes in imprinting and methylation patterns, nucleotide repeats, and mitochondrial DNA mutations. The further development of animal models and the integration of these with human studies are necessary to provide molecular insights into the mechanisms of germ-cell mutations and to identify prevention strategies. 
Furthermore, scientific specialty groups should be convened to review and prioritize the evidence for germ-cell mutagenicity from common environmental, occupational, medical, and lifestyle exposures. Workshop attendees agreed on the need for a full-scale assault to address key fundamental questions in human germ-cell environmental mutagenesis. These include, but are not limited to, the following: Do human germ-cell mutagens exist? What are the risks to future generations? Are some parents at higher risk than others for acquiring and transmitting germ-cell mutations? Obtaining answers to these, and other critical questions, will require strong support from relevant funding agencies, in addition to the engagement of scientists outside the fields of genomics and germ-cell mutagenesis. Environ. Mol. Mutagen., 2007. Published 2007 Wiley-Liss, Inc. [source]

Analysis of clinical outcomes and prognostic factors of neoadjuvant chemoradiotherapy combined with surgery: intraperitoneal versus extraperitoneal rectal cancer

Neoadjuvant chemoradiotherapy (CRT) is a widely proposed and performed treatment for rectal cancer. Downstaging effects possibly enhance the rate of curative surgery and may enable sphincter preservation in low-lying tumours. The current study examines the clinical outcomes of patients enrolled in a neoadjuvant CRT-surgery protocol for rectal cancer, distinguishing between intraperitoneal and extraperitoneal cancer. From 1994 to 2003, 58 patients with a primary diagnosis of rectal cancer were enrolled in a single-centre, nonrandomized study based on 5-week sessions of radiotherapy associated with a 30-day protracted venous 5-FU infusion followed by surgical resection. The study population was divided into two groups according to the localization of the tumour: 18 intraperitoneal and 40 extraperitoneal (EPt). Fifty-eight patients were treated with neoadjuvant CRT and surgery. The overall mortality rate was 25.9%; no deaths were recorded during hospitalization, and 10 patients (all EPt) died of recurrence. Significant differences in disease-free survival and overall survival rates were found between intraperitoneal and extraperitoneal tumours (P = 0.006), both for intraperitoneal vs. extraperitoneal N0 tumours (P = 0.04 and P < 0.05) and for intraperitoneal vs. extraperitoneal N+ tumours (P < 0.05). All local recurrences and liver metastases were diagnosed in extraperitoneal tumours (t = 0.02 and t = 0.04), and only one case of lung metastasis arose from an intraperitoneal cancer. Extraperitoneal tumours may be more aggressive than intraperitoneal ones, spreading earlier, and/or less responsive to neoadjuvant CRT because of their localization rather than biological differences. Aside from lymph node status, the location of the tumour with respect to the peritoneal border is also a prognostic factor for survival in rectal cancer treated by neoadjuvant CRT and surgery. [source]

Geographical range size heritability: what do neutral models with different modes of speciation predict?

GLOBAL ECOLOGY, Issue 3 2007
David Mouillot
ABSTRACT Aim: Phylogenetic conservatism or heritability of the geographical range sizes of species (i.e. the tendency for closely related species to share similar range sizes) has been predicted to occur because of the strong phylogenetic conservatism of niche traits. However, the extent of such heritability in range size is disputed and the role of biology in shaping this attribute remains unclear. Here, we investigate the level of heritability of geographical range sizes that is generated from neutral models assuming no biological differences between species. Methods: We used three different neutral models, which differ in their speciation mode, to simulate the life history of 250,000 individuals in a square lattice of 50 × 50 cells. These individuals can speciate, reproduce, migrate and die in the metacommunity according to stochastic events. We ran each model for 3000 steps and recorded the range size of each species at each step. The heritability of geographical range size was assessed using an asymmetry coefficient between the range sizes of sister species and using the coefficient of correlation between the range sizes of ancestors and their descendants. Results: Our results demonstrated the ability of neutral models to mimic some important observed patterns in the heritability of geographical range size. Consistently, sister species exhibited higher asymmetry in range sizes than expected by chance, and correlations between the range sizes of ancestor–descendant species pairs, although often weak, were almost invariably positive. Main conclusions: Our findings suggest that, even without any biological trait differences, statistically significant heritability in the geographical range sizes of species can be found. This heritability is weaker than that observed in some empirical studies, but suggests that even here a substantial component of heritability may not necessarily be associated with niche conservatism. 
We also conclude that both present-day and fossil data sets may provide similar information on the heritability of the geographical range sizes of species, while the omission of rare species will tend to overestimate this heritability. [source]
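The kind of model the abstract describes can be sketched in miniature. The following is a toy Hubbell-style zero-sum neutral model with point-mutation speciation on a lattice far smaller than the paper's 50 × 50 grid of 250,000 individuals; all function names, parameter values and the single speciation mode are illustrative assumptions, not the authors' code.

```python
import random

def simulate_neutral(width=20, height=20, nu=0.02, steps=60_000, seed=1):
    """Zero-sum neutral metacommunity on a lattice with point-mutation
    speciation; returns per-species range sizes and ancestor-descendant
    range-size pairs for species whose parent species is still extant."""
    rng = random.Random(seed)
    n = width * height
    grid = [0] * n                 # one individual per cell, all one species
    parent = {0: None}             # species id -> ancestor species id
    next_id = 1
    for _ in range(steps):
        i = rng.randrange(n)       # a random individual dies
        if rng.random() < nu:      # point-mutation speciation
            parent[next_id] = grid[i]
            grid[i] = next_id
            next_id += 1
        else:                      # replaced by offspring of a random neighbour
            x, y = i % width, i // width
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            j = ((x + dx) % width) + ((y + dy) % height) * width
            grid[i] = grid[j]
    ranges = {}                    # range size = number of occupied cells
    for sp in grid:
        ranges[sp] = ranges.get(sp, 0) + 1
    pairs = [(ranges[parent[sp]], ranges[sp]) for sp in ranges
             if parent[sp] in ranges]
    return ranges, pairs

def pearson(pairs):
    """Pearson correlation of (ancestor, descendant) range sizes."""
    n = len(pairs)
    if n < 2:
        return None
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    sxy = sum((a - mx) * (b - my) for a, b in pairs)
    sxx = sum((a - mx) ** 2 for a, _ in pairs)
    syy = sum((b - my) ** 2 for _, b in pairs)
    if sxx == 0 or syy == 0:
        return None
    return sxy / (sxx * syy) ** 0.5
```

Range sizes always sum to the lattice size, and the ancestor–descendant correlation can then be inspected across seeds; on a lattice this small it is noisy, which is itself a reminder of why the authors ran much larger simulations.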

Ethnic skin types: are there differences in skin structure and function?

A. V. Rawlings
Synopsis People with skin of colour comprise the majority of the world's population, and Asian subjects make up more than half of the total population of the earth. Even so, the literature on the characteristics of subjects with skin of colour is limited. Several groups over the past decades have attempted to decipher the underlying differences in skin structure and function in different ethnic skin types. However, most of these studies have been small in scale, and in some studies interindividual differences in skin quality overwhelm any racial differences. There has been a recent call for more studies to address genetic together with phenotypic differences among different racial groups, and in this respect several large-scale studies have been conducted recently. The most obvious ethnic skin difference relates to skin colour, which is dominated by the presence of melanin. The photoprotection derived from this polymer influences the rate of skin aging in the different racial groups; however, all racial groups are eventually subject to the photoaging process. Generally, Caucasians have an earlier onset of, and more pronounced, skin wrinkling and sagging than other skin types, and in general increased pigmentary problems are seen in skin of colour, although one large study reported that East Asians living in the U.S.A. had the fewest pigment spots. Induction of a hyperpigmentary response is thought to occur through signaling by protease-activated receptor-2, which, together with its activating protease, is increased in the epidermis of subjects with skin of colour. Changes in skin biophysical properties with age demonstrate that more darkly pigmented subjects retain younger skin properties compared with more lightly pigmented groups. However, despite having a more compact stratum corneum (SC), there are conflicting reports on barrier function in these subjects. 
Nevertheless, upon a chemical or mechanical challenge, the SC barrier function is reported to be stronger in subjects with darker skin, despite their reportedly lowest ceramide levels. One has to remember that barrier function relates to the total architecture of the SC and not just its lipid levels. Asian skin is reported to possess a basal transepidermal water loss (TEWL) similar to that of Caucasian skin and similar ceramide levels, but upon mechanical challenge it has the weakest barrier function; differences in intercellular cohesion are obviously apparent. In contrast, Asian skin has been reported to have reduced SC natural moisturizing factor levels compared with Caucasian and African American skin. These differences will contribute to differences in desquamation, but few data are available. One recent study has shown reduced epidermal cathepsin L2 levels in darker skin types, which, if it also occurs in the SC, could contribute to the skin ashing problems these subjects are known to experience. In very general terms, as the desquamatory enzymes are extruded with the lamellar granules, subjects with lowered SC lipid levels are expected to have lowered desquamatory enzyme levels. Increased pore size, sebum secretion and skin surface microflora occur in Negroid subjects, as does increased mast cell granule size. The frequency of skin sensitivity is quite similar across different racial groups, but the stimuli for its induction show subtle differences. Nevertheless, several studies indicate that Asian skin may be more sensitive to exogenous chemicals, probably due to a thinner SC and higher eccrine gland density. In conclusion, we know more of the biophysical and somatosensory characteristics of ethnic skin types, but clearly there is still more to learn, especially about the inherent underlying biological differences between ethnic skin types. 

Should amenorrhea be a diagnostic criterion for anorexia nervosa?

Evelyn Attia MD
Abstract Objective: The removal of the amenorrhea criterion for anorexia nervosa (AN) is being considered for the fifth edition of The Diagnostic and Statistical Manual (DSM-V). This article presents and discusses the arguments for maintaining as well as those for removing the criterion. Method: The psychological and biological literatures on the utility of amenorrhea as a distinguishing diagnostic criterion for AN and as an indicator of illness severity are reviewed. Results: The findings suggest that the majority of differences among patients with AN who do and do not meet the amenorrhea criterion appear largely to reflect nutritional status. Overall, the two groups have few psychological differences. There are mixed findings regarding biological differences between those with AN who do and do not menstruate and the relationship between amenorrhea and bone health among patients with AN. Discussion: Based on these findings, one option is to describe amenorrhea in DSM-V as a frequent occurrence among individuals with AN that may provide important information about clinical severity, but should not be maintained as a core diagnostic feature. The possibilities of retaining the criterion or eliminating it altogether are discussed. © 2009 American Psychiatric Association. Int J Eat Disord 2009 [source]

Is late onset depression a prodrome to dementia?

Isaac Schweitzer
Abstract Background: Recent research suggests there are clinical and biological differences between late-onset depression (LOD) and early-onset depression (EOD). Objectives: In this paper we review clinical, epidemiological, structural neuroimaging and genetic investigations of late-life depression performed over the past two decades and offer evidence that LOD is often a prodromal disorder for dementia. Results: LOD patients are more likely to have cognitive impairment and to have more deep white matter lesions (DWMLs). Evidence concerning cortical and temporal lobe atrophy is conflicting, while the ApoE ε4 allele is not associated with LOD. Conclusions: It is likely that LOD is not a prodrome for a particular type of dementia, but the majority of patients who do develop dementia will acquire Alzheimer's disease (AD) or a vascular dementia, as these are by far the most common causes of dementia. This issue requires further clarification with long-term follow-up of patients. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Nonreplication in Genetic Studies of Complex Diseases: Lessons Learned From Studies of Osteoporosis and Tentative Remedies

Hui Shen
Abstract Inconsistent results have accumulated in genetic studies of complex diseases/traits over the past decade. Using osteoporosis as an example, we address major potential factors for the nonreplication results and propose some potential remedies. Over the past decade, numerous linkage and association studies have been performed to search for genes predisposing to complex human diseases. However, relatively little success has been achieved, and inconsistent results have accumulated. We argue that those nonreplication results are not unexpected, given the complicated nature of complex diseases and a number of confounding factors. In this article, based on our experience in genetic studies of osteoporosis, we discuss major potential factors for the inconsistent results and propose some potential remedies. We believe that one of the main reasons for this lack of reproducibility is overinterpretation of nominally significant results from studies with insufficient statistical power. We indicate that the power of a study is not only influenced by the sample size, but also by genetic heterogeneity, the extent and degree of linkage disequilibrium (LD) between the markers tested and the causal variants, and the allele frequency differences between them. We also discuss the effects of other confounding factors, including population stratification, phenotype difference, genotype and phenotype quality control, multiple testing, and genuine biological differences. In addition, we note that with low statistical power, even a "replicated" finding is still likely to be a false positive. We believe that with rigorous control of study design and interpretation of different outcomes, inconsistency will be largely reduced, and the chances of successfully revealing genetic components of complex diseases will be greatly improved. [source]
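The abstract's central warning — that with low statistical power even a nominally significant, or even "replicated", finding is likely to be a false positive — can be made concrete with a short calculation. The sketch below (all parameter values are invented for illustration, not taken from the article) computes the power of a two-sided two-sample z-test and the resulting positive predictive value of a significant result:

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()

def power_two_sample_z(delta, sd, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample z-test (normal approximation)
    to detect a true mean difference `delta` between two groups."""
    se = sd * sqrt(2.0 / n_per_group)
    z = _N.inv_cdf(1.0 - alpha / 2.0)
    return _N.cdf(delta / se - z) + _N.cdf(-delta / se - z)

def ppv(power, alpha, prior):
    """P(association is genuine | test is nominally significant)."""
    return power * prior / (power * prior + alpha * (1.0 - prior))

# A small genetic effect (0.2 SD), 50 subjects per group, and a prior of
# 1 genuinely associated marker per 100 tested:
p = power_two_sample_z(delta=0.2, sd=1.0, n_per_group=50)
v = ppv(p, alpha=0.05, prior=0.01)
```

With roughly 17% power and that prior, only a few percent of "significant" hits are genuine; raising power (larger samples, better-matched markers) improves the predictive value far more than tightening the significance threshold alone.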

Involvement of the INK4a/Arf gene locus in senescence

AGING CELL, Issue 3 2003
Carol J. Collins
Summary The INK4a/ARF locus encodes two proteins whose expression limits cellular proliferation. Whilst the biochemical activities of the two proteins appear very different, they both converge on regulating the retinoblastoma and p53 tumour suppressor pathways. Neither protein is required for normal development, but lack of either predisposes to the development of malignancy. Both proteins have also been implicated in the establishment of senescence states in response to a variety of stresses, signalling imbalances and telomere shortening. The INK4a/Arf regulatory circuits appear to be partially redundant and show evidence of rapid evolution. Especially intriguing are the large number of biological differences documented between mice and man. We review here the brief history of INK4a/Arf and explore possible links with organismal aging and the evolution of longevity. [source]

Population dynamics of fisheries stock enhancement

K. Lorenzen
The population dynamics of fisheries stock enhancement, and its potential for generating benefits over and above those obtainable from optimal exploitation of wild stocks alone, are poorly understood and highly controversial. I extend the dynamic pool theory of fishing to stock enhancement by unpacking recruitment, incorporating regulation in the recruited stock, and accounting for biological differences between wild and hatchery fish. I then analyse the dynamics of stock enhancement and its potential role in fisheries management, using the candidate stock of North Sea sole as an example. Enhancement through release of recruits or advanced juveniles is predicted to increase total yield and stock abundance, but reduce abundance of the naturally recruited stock component through compensatory responses or overfishing. Release of genetically maladapted fish reduces the effectiveness of enhancement, and is most detrimental overall if fitness of hatchery fish is only moderately compromised. As a temporary measure for rebuilding of depleted stocks, enhancement cannot substitute for effort limitation, and is advantageous as an auxiliary measure only if the population has been reduced to a very low proportion of its unexploited biomass. Quantitative analysis of population dynamics is central to the responsible use of stock enhancement in fisheries management, and the necessary tools are available. [source]
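The qualitative prediction above — higher total yield but a smaller naturally recruited component — can be reproduced with a deliberately minimal sketch, not the author's model: all parameter values are invented and not calibrated to North Sea sole. Density-dependent juvenile survival acting on the combined wild-plus-hatchery pool supplies the compensatory response, and `fitness` crudely scales the effective number of hatchery juveniles.

```python
def equilibrium(stocked=0.0, fitness=1.0, years=300):
    """Iterate a toy enhanced-stock model to equilibrium: Beverton-Holt
    wild recruitment plus hatchery releases, with density-dependent
    juvenile survival acting on the combined juvenile pool."""
    a, b = 5.0, 0.01      # Beverton-Holt recruitment parameters
    s0, c = 0.5, 0.01     # density-dependent juvenile survival
    u = 0.3               # harvest rate on adults
    adults, wild_adults = 100.0, 100.0
    for _ in range(years):
        escapement = (1.0 - u) * adults
        wild_recruits = a * escapement / (1.0 + b * escapement)
        juveniles = wild_recruits + fitness * stocked
        survival = s0 / (1.0 + c * juveniles)   # compensation
        wild_adults = survival * wild_recruits  # wild-origin component
        adults = survival * juveniles
    return {"yield": u * adults, "wild_adults": wild_adults}

base = equilibrium(stocked=0.0)
enhanced = equilibrium(stocked=500.0, fitness=0.8)
```

Under these illustrative parameters, stocking raises equilibrium yield while the wild-origin adult stock is depressed — the compensatory displacement the abstract describes.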


ABSTRACT Maturing Atlantic mackerel with and without artificial feeding, kept in sea pens (September to May), showed differences in digestive efficiency (protease activity ratio of trypsin to chymotrypsin), muscle growth (concentrations of RNA, protein, RNA/protein ratio and free amino acids [FAA]) and oocyte quality (trypsin-like specific activity, and concentrations of RNA, RNA/protein ratio and FAA). The artificially fed mackerel had higher body weights (1.7 times) but a lower white muscle protein concentration (0.5 times) compared with the control group. Both groups showed a higher capacity for protein synthesis in the oocytes than in the white muscle, but the difference was about twofold in the artificially fed fish and about fourfold in the control group. This indicated that, during maturation, development of oocytes and of muscle for growth occurred concurrently in faster-growing mackerel, while development of oocytes dominated in slower-growing fish. A higher trypsin-like specific activity with higher FAA levels in the oocytes of females fed an artificial diet, compared with the control group, suggested differences in development and quality between the gametes of fish on different feeding regimes. PRACTICAL APPLICATIONS The work illustrates differences in digestive efficiency and in the quality of growth performance (growth and protein metabolism in muscle and oocytes) in fish on different feeding regimes. The use of various methods for evaluating digestive efficiency and the quality of fish growth performance can provide reliable information on important biological differences between fish groups, especially when the number of samples is low. Applying several methods simultaneously is more advantageous than using growth parameters alone for a precise evaluation of the quality of fish growth performance. 
The methods are practical for studying food utilization and growth quality of fish under different environmental conditions and with different behaviours, in aquaculture as well as in natural ecosystems where food consumption rate and feeding regime cannot be controlled. [source]

Remodeling of fracture callus in mice is consistent with mechanical loading and bone remodeling theory

Hanna Isaksson
Abstract During the remodeling phase of fracture healing in mice, the callus gradually transforms into a double cortex, which thereafter merges into one cortex. In large animals, a double cortex normally does not form. We investigated whether these patterns of remodeling of the fracture callus in mice can be explained by mechanical loading. Morphologies of fractures after 21, 28, and 42 days of healing were determined from an in vivo mid-diaphyseal femoral osteotomy healing experiment in mice. Bone density distributions from microCT at 21 days were converted into adaptive finite element models. To assess the effect of loading mode on bone remodeling, a well-established remodeling algorithm was used to examine the effect of axial force or bending moment on bone structure. All simulations predicted that under axial loading, the callus remodeled to form a single cortex. When a bending moment was applied, dual concentric cortices developed in all simulations, corresponding well to the progression of remodeling observed experimentally and resulting in quantitatively comparable callus areas of woven and lamellar bone. Effects of biological differences between species or other reasons cannot be excluded, but this study demonstrates how a difference in loading mode could explain the differences between the remodeling phase in small rodents and larger mammals. © 2008 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 27: 664–672, 2009 [source]
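The dependence of the predicted structure on loading mode can be illustrated with a heavily stripped-down, strain-energy-based remodeling rule. Unlike the adaptive finite element models in the study, this sketch prescribes the strain field directly across a 1-D cross-section (uniform strain for axial load; strain proportional to distance from the neutral axis for bending); the rule, constants and grid are illustrative assumptions only.

```python
def remodel(load="bending", n=41, steps=400):
    """Adaptive density remodeling with stimulus U/rho (strain energy per
    unit mass) on a 1-D cross-section, y in [-1, 1]; a prescribed strain
    field stands in for a full finite element solve."""
    rho_min, rho_max, B = 0.01, 2.0, 50.0
    c, k = 100.0, 0.005                  # E = c*rho^2 ; k = target stimulus
    ys = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    rho = [1.0] * n
    for _ in range(steps):
        for i, y in enumerate(ys):
            eps = 0.015 if load == "axial" else 0.02 * y   # strain at y
            stimulus = 0.5 * c * rho[i] * eps * eps        # U/rho with E=c*rho^2
            rho[i] = min(rho_max, max(rho_min, rho[i] + B * (stimulus - k)))
    return ys, rho
```

Under the bending strain field the density converges to two dense bands flanking a resorbed core near the neutral axis, a 1-D analogue of the dual concentric cortices, while the uniform axial field yields a uniform section. Real models couple density to FE-recomputed strains, which this toy omits.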

Isolates of Microdochium nivale and M. majus Differentiated by Pathogenicity on Perennial Ryegrass (Lolium perenne L.) and in vitro Growth at Low Temperature

I. S. Hofgaard
Abstract Pink snow mould is a serious disease on grasses and winter cereals in cold and temperate zones during winter. To better understand the basis for the variation in pathogenicity between different isolates of Microdochium nivale and M. majus and to simplify selection of highly pathogenic isolates to use when screening for resistance to pink snow mould in perennial ryegrass, we sought traits correlated with pathogenicity. Isolates of M. nivale were more pathogenic on perennial ryegrass than isolates of M. majus, as measured by survival and regrowth of perennial ryegrass after infection and incubation under simulated snow cover. Pathogenicity as measured by relative regrowth was highly correlated with fungal growth rate on potato dextrose agar (PDA) at 2°C. Measuring fungal growth on PDA therefore seems to be a relatively simple method of screening for potentially highly pathogenic isolates. In a study of a limited number of isolates, highly pathogenic isolates showed an earlier increase and a higher total specific activity of β-glucosidase, a cell wall-degrading enzyme, compared with less pathogenic isolates. None of the M. majus isolates was highly pathogenic on perennial ryegrass. Our results indicate biological differences between M. nivale and M. majus and thus strengthen the recently published sequence-based evidence for the elevation of these former varieties to species status. [source]

Pretreatment assessment and predictors of hepatitis C virus treatment in US veterans coinfected with HIV and hepatitis C virus

L. I. Backus
Summary. The US Department of Veterans Affairs (VA) cares for many human immunodeficiency virus/hepatitis C virus (HIV/HCV)-coinfected patients. VA treatment recommendations indicate that all HIV/HCV-coinfected patients undergo evaluation for HCV treatment and list pretreatment assessment tests. We compared clinical practice with these recommendations. We identified 377 HIV/HCV-coinfected veterans who began HCV therapy with pegylated interferon and ribavirin and 4135 HIV/HCV-coinfected veterans who did not but were in VA care at the same facilities during the same period. We compared laboratory and clinical characteristics of the two groups and estimated multivariate logistic regression models of receipt of HCV treatment. Overall, patients had high rates of receipt of tests necessary for HCV pretreatment assessment. Patients starting HCV treatment had higher alanine aminotransferase (ALT), lower creatinine, higher CD4 counts and lower HIV viral loads than patients not starting HCV treatment. In the multivariate model, positive predictors of starting HCV treatment included non-Hispanic white race, higher ALT, lower creatinine, higher HCV viral load, higher CD4 count, undetectable HIV viral load and receipt of HIV antiretrovirals. A history of chronic mental illness and a history of hard drug use were negative predictors. Most HIV/HCV-coinfected patients received the necessary HCV pretreatment assessments, although rates of screening for hepatitis A and B immunity can be improved. Having well-controlled HIV disease is by far the most important modifiable factor affecting the receipt of HCV treatment. More research is needed to determine if the observed racial differences in starting HCV treatment reflect biological differences, provider behaviour or patient preference. [source]
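As a concrete illustration of the kind of association reported above, the sketch below computes an unadjusted odds ratio with a Woolf 95% confidence interval from a 2×2 table. The counts are hypothetical, invented for illustration; they are not taken from the study, and the study's actual estimates were adjusted in a multivariate model.

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio with a Woolf 95% CI from a 2x2 table.

    a -- predictor present, treated    b -- predictor present, untreated
    c -- predictor absent,  treated    d -- predictor absent,  untreated
    """
    est = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
    lo = math.exp(math.log(est) - 1.96 * se)
    hi = math.exp(math.log(est) + 1.96 * se)
    return est, lo, hi

# Hypothetical counts: undetectable HIV viral load vs. starting HCV therapy
est, lo, hi = odds_ratio(250, 1500, 127, 2635)
```

An interval excluding 1.0 would mark the predictor as significantly associated with starting treatment; the multivariate model in the study additionally adjusts each such estimate for the other covariates.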

Advancing the diagnosis and treatment of hepatocellular carcinoma

J. Wallis Marsh MD
We analyzed global gene expression patterns of 91 human hepatocellular carcinomas (HCCs) to define the molecular characteristics of the tumors and to test the prognostic value of the expression profiles. Unsupervised classification methods revealed two distinctive subclasses of HCC that are highly associated with patient survival. This association was validated via 5 independent supervised learning methods. We also identified the genes most strongly associated with survival by using the Cox proportional hazards survival analysis. This approach identified a limited number of genes that accurately predicted the length of survival and provided new molecular insight into the pathogenesis of HCC. Tumors from the low survival subclass have strong cell proliferation and antiapoptosis gene expression signatures. In addition, the low survival subclass displayed higher expression of genes involved in ubiquitination and histone modification, suggesting an etiological involvement of these processes in accelerating the progression of HCC. In conclusion, the biological differences identified in the HCC subclasses should provide an attractive source for the development of therapeutic targets (e.g., HIF1a) for selective treatment of HCC patients. Supplementary material for this article can be found on the HEPATOLOGY Web site. Copyright 2004 American Association for the Study of Liver Diseases. Hepatology. 2004 Sep;40(3):667–76. [source]

Field and laboratory studies in a Neotropical population of the spinose ear tick, Otobius megnini

Abstract One ear of each of five cows on a property close to Dean Funes, province of Córdoba, Argentina, was inspected monthly from December 2004 to November 2006 to determine the presence of Otobius megnini (Dugès) and to ascertain its seasonality. Ticks were collected to study the biological parameters of larvae, nymphs and adult ticks. Groups of nymphs were also maintained at three different photoperiods at 25 °C. The abundance of immature stages was greatest during January–April and August–October in the first and second years of the study, respectively. No larvae successfully moulted. Nymphs weighing < 17 mg also failed to moult, but 89% of heavier nymphs moulted into adults. Nymphs moulting to males weighed less (49.5 ± 16.09 mg) than nymphs moulting to females (98.1 ± 34.08 mg). The pre-moult period was similar for nymphs moulting to either sex and significantly longer (P < 0.01) for female nymphs maintained at 25 °C compared with nymphs kept at 27 °C. No effect of photoperiod on the pre-moult periods of nymphs was detected. Female ticks produced a mean of 7.0 ± 1.94 egg batches after a preoviposition period of 16.4 ± 8.41 days for the first batch. The mean oviposition period was 61 ± 20.8 days and the duration of oviposition for each batch varied from 1 to 6 days. The mean number of eggs per batch was 93.1 ± 87.53. The minimum incubation period for the first egg batch was 13.6 ± 2.77 days. The total number of eggs laid by each female was 651.6 ± 288.90. Parthenogenesis was not observed. The reproductive efficiency index (REI) (number of eggs laid/weight of female in mg) was 5.5 ± 1.26. Pearson's correlations showed a significant direct relationship between the weight of the female and number of eggs laid (P < 0.01) and REI (P < 0.05). Several of the biological values presented above for the tick population from the Neotropical zoogeographic region showed marked differences to equivalent values for O. megnini populations from the U.S.A. (Nearctic) and India (Oriental). Nevertheless, the only two sequences of 16S rDNA deposited in GenBank from ticks originating in Argentina and allegedly in the U.S.A. indicate that they are conspecific (99.8% agreement). We tentatively consider the biological differences among populations of this tick species to represent adaptations for survival under different conditions. [source]
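The reproductive efficiency index and the weight correlations reported above are straightforward to reproduce. The sketch below computes REI (eggs laid per mg of female weight) and a Pearson correlation on hypothetical female tick data; the numbers are invented for illustration, not the study's measurements.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical females: engorged weight (mg) and total eggs laid
weights = [80.0, 95.0, 110.0, 130.0, 150.0]
eggs = [420, 510, 640, 700, 830]

# Reproductive efficiency index: eggs laid per mg of female weight
rei = [e / w for e, w in zip(eggs, weights)]
r = pearson_r(weights, eggs)
```

A direct weight-fecundity relationship, as the abstract reports, shows up as a correlation coefficient close to 1 on data like this.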

From cells to tissues: Fluorescence confocal microscopy in the study of histological samples

Pietro Transidico
Abstract Our knowledge of the genetic mechanisms controlling cell proliferation and differentiation usually originates from in vitro cultured cell line models. However, the definition of the molecular switches involved in control of homeostasis and the understanding of the changes occurring in neoplastic transformation require looking at single cells as the components of a complex tissue network. Histological examination of tissue samples can gain a substantial amount of information from high-resolution fluorescence analysis. In particular, confocal microscopy can help in the definition of functional pathways using multiparameter analysis. In this report, we present acquisition and analysis procedures to obtain high-resolution data from tissue sections. Confocal microscopy coupled to computational restoration, statistical evaluation of spatial correlations, and morphological analysis over large tissue areas were applied to colorectal samples providing a molecular fingerprint of the biological differences inferred from classical histological examination. Microsc. Res. Tech. 64:89–95, 2004. © 2004 Wiley-Liss, Inc. [source]

Contractile properties of human motor units in health, aging, and disease

MUSCLE AND NERVE, Issue 9 2001
K. Ming Chan MD, FRCPC
Abstract The primary function of skeletal muscle is to produce force for postural control and movement. Although the contractile properties of the whole muscle are useful functional indicators, they do not accurately reflect the heterogeneity of the constituent motor units (MUs) and their changes in health and disease. However, data on the contractile properties of human MUs, in comparison to other animal species, are relatively sparse. This, in part, is due to greater methodological challenges of in vivo studies of MUs in the human. The purpose of this review is to critically appraise the methods used in humans; to describe the normative data from different muscle groups; to discuss differences between data from healthy humans and other animal species; and, last, to characterize changes of the MU contractile properties in aging, disease, and in response to intervention. Because the spike-triggered averaging technique can only be used to study the twitch properties, other methods were subsequently developed to measure a wider range of contractile properties. Although there is general agreement between human data and those from other animal species, major differences do exist. Potential reasons for these discrepancies include true biological differences, but differences in the techniques used may also be responsible. Although limited, measurement of MU contractile properties in humans has provided insight into the changes associated with aging and motoneuronal diseases and provides a means of gauging their adaptive capacity for training and immobilization. © 2001 John Wiley & Sons, Inc. Muscle Nerve 24: 1113–1133 [source]
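The spike-triggered averaging technique mentioned above recovers the twitch of a single motor unit by averaging the whole-muscle force over a fixed window after each firing of that unit; uncorrelated force from other units averages toward zero. A minimal sketch on synthetic data follows, with all signal parameters (twitch shape, firing pattern, noise level) chosen hypothetically for illustration.

```python
import math
import random

def spike_triggered_average(force, spikes, window):
    """Average the force trace over a fixed window after each firing."""
    avg = [0.0] * window
    for t in spikes:
        for k in range(window):
            avg[k] += force[t + k]
    return [v / len(spikes) for v in avg]

# Synthetic whole-muscle force: a small 40-sample twitch from one unit,
# buried in Gaussian noise representing all other active units.
random.seed(0)
twitch = [0.2 * math.sin(math.pi * k / 40) for k in range(40)]
n = 20000
spikes = list(range(100, n - 100, 500))  # the unit's firing times
force = [random.gauss(0.0, 0.05) for _ in range(n)]
for t in spikes:
    for k, v in enumerate(twitch):
        force[t + k] += v

sta = spike_triggered_average(force, spikes, window=40)
peak = max(sta)  # recovered twitch amplitude, near the true value of 0.2
```

The averaging suppresses the background noise by roughly the square root of the number of spikes, which is why the recovered peak amplitude sits close to the embedded twitch despite the noisy raw trace.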

Genomics of cellulose biosynthesis in poplars

Chandrashekhar P. Joshi
Summary Genetic improvement of cellulose production in commercially important trees is one of the formidable goals of current forest biotechnology research. To achieve this goal, we must first decipher the enigmatic and complex process of cellulose biosynthesis in trees. The recent availability of rich genomic resources in poplars makes Populus the first tree genus for which genetic augmentation of cellulose may soon become possible. Fortunately, because of the structural conservation of key cellulose biosynthesis genes between Arabidopsis and poplar genomes, the lessons learned from exploring the functions of Arabidopsis genes may be applied directly to poplars. However, regulation of these genes will most likely be distinct in these two model systems because of their inherent biological differences. This research review covers the current state of knowledge about the three major cellulose biosynthesis-related gene families from poplar genomes: cellulose synthases, sucrose synthases and korrigan cellulases. Furthermore, we also suggest some future research directions that may have significant economical impacts on global forest product industries. [source]

Molluscan and vertebrate immune responses to bird schistosomes

SUMMARY There is a growing understanding of risks posed by human contact with the cercariae of bird schistosomes. In general, there are no fundamental biological differences between human and bird schistosomes in terms of their interactions with snail and vertebrate hosts. The penetration of host surfaces is accompanied by the release of penetration gland products and the shedding of highly antigenic surface components (miracidial ciliated plates and cercarial glycocalyx) which trigger host immune reactions. New surface structures are formed during transformation: the tegument of mother sporocysts and the tegumental double membrane of schistosomula. These surfaces apparently serve as protection against the host immune response. Certain parasite excretory–secretory products may contribute to immunosuppression or, on the other hand, stimulation of host immune reactions. Discovery of new species and their life cycles, the characterization of host–parasite interactions (including at the molecular level), the determination of parasite pathogenicity towards the host, the development of tools for differential diagnosis and the application of protective measures are all topical research streams of the future. Regularly updated information on bird schistosomes and cercarial dermatitis can be found on the web pages of the Schistosome Group Prague. [source]

Pediatric diffuse large B-cell lymphoma demonstrates a high proliferation index, frequent c-Myc protein expression, and a high incidence of germinal center subtype: Report of the French–American–British (FAB) international study group

Rodney R. Miles MD
Abstract Background Diffuse large B-cell lymphoma (DLBCL) makes up 10–20% of pediatric non-Hodgkin lymphoma, and these patients have a significantly better prognosis than adults with DLBCL. The difference in prognosis may be related to clinical, phenotypic, and/or biological differences between adult and pediatric DLBCL. In adult DLBCL, the germinal center (GC) phenotype is associated with a better prognosis than the activated B-cell (ABC) phenotype. However, a high proliferative index and expression of Bcl2 and c-Myc protein have all been associated with worse outcomes. While multiple studies have addressed the phenotype and expression patterns of adult DLBCL, relatively little is known about these biological variables in pediatric DLBCL. The goal of this study was to investigate the proliferative index, the relative frequencies of the GC and non-GC subtypes, and the expression of Bcl2 and c-Myc protein in a cohort of children with DLBCL treated in a uniform manner. Procedure We performed immunohistochemistry (IHC) for MIB1, CD10, Bcl6, MUM1, Bcl2, and c-Myc on DLBCL tissue from children treated uniformly in the FAB LMB96 trial (SFOP LMB96/CCG5961/UKCCSG/NHL 9600). Results Compared to published adult DLBCL studies, pediatric DLBCL demonstrated moderate to high proliferation rates (83%), increased c-Myc protein expression (84%), decreased Bcl2 protein expression (28%), and an increased frequency of the GC phenotype (75%). Conclusions These findings suggest that there are significant biologic differences between pediatric and adult forms of DLBCL, which may contribute to the superior prognosis seen in the pediatric population relative to adult disease. Pediatr Blood Cancer 2008;51:369–374. © 2008 Wiley-Liss, Inc. [source]

European Mathematical Genetics Meeting, Heidelberg, Germany, 12th–13th April 2007

Article first published online: 28 MAY 200
Saurabh Ghosh 11 Indian Statistical Institute, Kolkata, India High correlations between two quantitative traits may be either due to common genetic factors or common environmental factors or a combination of both. In this study, we develop statistical methods to extract the contribution of a common QTL to the total correlation between the components of a bivariate phenotype. Using data on bivariate phenotypes and marker genotypes for sib-pairs, we propose a test for linkage between a common QTL and a marker locus based on the conditional cross-sib trait correlations (trait 1 of sib 1 , trait 2 of sib 2 and conversely) given the identity-by-descent sharing at the marker locus. The null hypothesis cannot be rejected unless there exists a common QTL. We use Monte-Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study On The Genetics Of Alcoholism project. Rémi Kazma 1 , Catherine Bonaïti-Pellié 1 , Emmanuelle Génin 12 INSERM UMR-S535 and Université Paris Sud, Villejuif, 94817, France Keywords: Gene-environment interaction, sibling recurrence risk, exposure correlation Gene-environment interactions may play important roles in complex disease susceptibility but their detection is often difficult. Here we show how gene-environment interactions can be detected by investigating the degree of familial aggregation according to the exposure of the probands. In case of gene-environment interaction, the distribution of genotypes of affected individuals, and consequently the risk in relatives, depends on their exposure. We developed a test comparing the risks in sibs according to the proband exposure. 
To evaluate the properties of this new test, we derived the formulas for calculating the expected risks in sibs according to the exposure of probands for various values of exposure frequency, relative risk due to exposure alone, frequencies of latent susceptibility genotypes, genetic relative risks and interaction coefficients. We find that the ratio of risks when the proband is exposed and not exposed is a good indicator of the interaction effect. We evaluate the power of the test for various sample sizes of affected individuals. We conclude that this test is valuable for diseases with moderate familial aggregation, only when the role of the exposure has been clearly evidenced. Since a correlation for exposure among sibs might lead to a difference in risks among sibs in the different proband exposure strata, we also add an exposure correlation coefficient in the model. Interestingly, we find that when this correlation is correctly accounted for, the power of the test is not decreased and might even be significantly increased. Andrea Callegaro 1 , Hans J.C. Van Houwelingen 1 , Jeanine Houwing-Duistermaat 13 Dept. of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands Keywords: Survival analysis, age at onset, score test, linkage analysis Non parametric linkage (NPL) analysis compares the identical by descent (IBD) sharing in sibling pairs to the expected IBD sharing under the hypothesis of no linkage. Often information is available on the marginal cumulative hazards (for example breast cancer incidence curves). Our aim is to extend the NPL methods by taking into account the age at onset of selected sibling pairs using these known marginal hazards. Li and Zhong (2002) proposed a (retrospective) likelihood ratio test based on an additive frailty model for genetic linkage analysis. From their model we derive a score statistic for selected samples which turns out to be a weighed NPL method. 
The weights depend on the marginal cumulative hazards and on the frailty parameter. A second approach is based on a simple gamma shared frailty model. Here, we simply test whether the score function of the frailty parameter depends on the excess IBD. We compare the performance of these methods using simulated data. Céline Bellenguez 1 , Carole Ober 2 , Catherine Bourgain 14 INSERM U535 and University Paris Sud, Villejuif, France 5 Department of Human Genetics, The University of Chicago, USA Keywords: Linkage analysis, linkage disequilibrium, high density SNP data Compared with microsatellite markers, high density SNP maps should be more informative for linkage analyses. However, because they are much closer, SNPs present important linkage disequilibrium (LD), which biases classical nonparametric multipoint analyses. This problem is even stronger in population isolates where LD extends over larger regions with a more stochastic pattern. We investigate the issue of linkage analysis with a 500K SNP map in a large and inbred 1840-member Hutterite pedigree, phenotyped for asthma. Using an efficient pedigree breaking strategy, we first identified linked regions with a 5cM microsatellite map, on which we focused to evaluate the SNP map. The only method that models LD in the NPL analysis is limited in both the pedigree size and the number of markers (Abecasis and Wigginton, 2005) and therefore could not be used. Instead, we studied methods that identify sets of SNPs with maximum linkage information content in our pedigree and no LD-driven bias. Both algorithms that directly remove pairs of SNPs in high LD and clustering methods were evaluated. Null simulations were performed to control that Zlr calculated with the SNP sets were not falsely inflated. 
Preliminary results suggest that although LD is strong in such populations, linkage information content slightly better than that of microsatellite maps can be extracted from dense SNP maps, provided that a careful marker selection is conducted. In particular, we show that the specific LD pattern requires considering LD between a wide range of marker pairs rather than only in predefined blocks. Peter Van Loo 1,2,3 , Stein Aerts 1,2 , Diether Lambrechts 4,5 , Bernard Thienpont 2 , Sunit Maity 4,5 , Bert Coessens 3 , Frederik De Smet 4,5 , Leon-Charles Tranchevent 3 , Bart De Moor 2 , Koen Devriendt 3 , Peter Marynen 1,2 , Bassem Hassan 1,2 , Peter Carmeliet 4,5 , Yves Moreau 36 Department of Molecular and Developmental Genetics, VIB, Belgium 7 Department of Human Genetics, University of Leuven, Belgium 8 Bioinformatics group, Department of Electrical Engineering, University of Leuven, Belgium 9 Department of Transgene Technology and Gene Therapy, VIB, Belgium 10 Center for Transgene Technology and Gene Therapy, University of Leuven, Belgium Keywords: Bioinformatics, gene prioritization, data fusion The identification of genes involved in health and disease remains a formidable challenge. Here, we describe a novel bioinformatics method to prioritize candidate genes underlying pathways or diseases, based on their similarity to genes known to be involved in these processes. It is freely accessible as an interactive software tool, ENDEAVOUR, at Unlike previous methods, ENDEAVOUR generates distinct prioritizations from multiple heterogeneous data sources, which are then integrated, or fused, into one global ranking using order statistics. ENDEAVOUR prioritizes candidate genes in a three-step process. First, information about a disease or pathway is gathered from a set of known "training" genes by consulting multiple data sources. 
Next, the candidate genes are ranked based on similarity with the training properties obtained in the first step, resulting in one prioritized list for each data source. Finally, ENDEAVOUR fuses each of these rankings into a single global ranking, providing an overall prioritization of the candidate genes. Validation of ENDEAVOUR revealed it was able to efficiently prioritize 627 genes in disease data sets and 76 genes in biological pathway sets, identify candidates of 16 mono- or polygenic diseases, and discover regulatory genes of myeloid differentiation. Furthermore, the approach identified YPEL1 as a novel gene involved in craniofacial development from a 2-Mb chromosomal region, deleted in some patients with DiGeorge-like birth defects. Finally, we are currently evaluating a pipeline combining array-CGH, ENDEAVOUR and in vivo validation in zebrafish to identify novel genes involved in congenital heart defects. Mark Broom 1 , Graeme Ruxton 2 , Rebecca Kilner 311 Mathematics Dept., University of Sussex, UK 12 Division of Environmental and Evolutionary Biology, University of Glasgow, UK 13 Department of Zoology, University of Cambridge, UK Keywords: Evolutionarily stable strategy, parasitism, asymmetric game Brood parasites chicks vary in the harm that they do to their companions in the nest. In this presentation we use game-theoretic methods to model this variation. Our model considers hosts which potentially abandon single nestlings and instead choose to re-allocate their reproductive effort to future breeding, irrespective of whether the abandoned chick is the host's young or a brood parasite's. The parasite chick must decide whether or not to kill host young by balancing the benefits from reduced competition in the nest against the risk of desertion by host parents. The model predicts that three different types of evolutionarily stable strategies can exist. 
(1) Hosts routinely rear depleted broods, the brood parasite always kills host young and the host never then abandons the nest. (2) When adult survival after deserting single offspring is very high, hosts always abandon broods of a single nestling and the parasite never kills host offspring, effectively holding them as hostages to prevent nest desertion. (3) Intermediate strategies, in which parasites sometimes kill their nest-mates and host parents sometimes desert nests that contain only a single chick, can also be evolutionarily stable. We provide quantitative descriptions of how the values given to ecological and behavioral parameters of the host-parasite system influence the likelihood of each strategy and compare our results with real host-brood parasite associations in nature. Martin Harrison 114 Mathematics Dept, University of Sussex, UK Keywords: Brood parasitism, games, host, parasite The interaction between hosts and parasites in bird populations has been studied extensively. Game theoretical methods have been used to model this interaction previously, but this has not been studied extensively taking into account the sequential nature of this game. We consider a model allowing the host and parasite to make a number of decisions, which depend on a number of natural factors. The host lays an egg, a parasite bird will arrive at the nest with a certain probability and then chooses to destroy a number of the host eggs and lay one of it's own. With some destruction occurring, either natural or through the actions of the parasite, the host chooses to continue, eject an egg (hoping to eject the parasite) or abandon the nest. Once the eggs have hatched the game then falls to the parasite chick versus the host. The chick chooses to destroy or eject a number of eggs. The final decision is made by the host, choosing whether to raise or abandon the chicks that are in the nest. We consider various natural parameters and probabilities which influence these decisions. 
We then use this model to look at real-world situations of the interactions of the Reed Warbler and two different parasites, the Common Cuckoo and the Brown-Headed Cowbird. These two parasites have different methods in the way that they parasitize the nests of their hosts. The hosts in turn have a different reaction to these parasites. Arne Jochens 1 , Amke Caliebe 2 , Uwe Roesler 1 , Michael Krawczak 215 Mathematical Seminar, University of Kiel, Germany 16 Institute of Medical Informatics and Statistics, University of Kiel, Germany Keywords: Stepwise mutation model, microsatellite, recursion equation, temporal behaviour We consider the stepwise mutation model which occurs, e.g., in microsatellite loci. Let X(t,i) denote the allelic state of individual i at time t. We compute expectation, variance and covariance of X(t,i), i=1,,,N, and provide a recursion equation for P(X(t,i)=z). Because the variance of X(t,i) goes to infinity as t grows, for the description of the temporal behaviour, we regard the scaled process X(t,i)-X(t,1). The results furnish a better understanding of the behaviour of the stepwise mutation model and may in future be used to derive tests for neutrality under this model. Paul O'Reilly 1 , Ewan Birney 2 , David Balding 117 Statistical Genetics, Department of Epidemiology and Public Health, Imperial, College London, UK 18 European Bioinformatics Institute, EMBL, Cambridge, UK Keywords: Positive selection, Recombination rate, LD, Genome-wide, Natural Selection In recent years efforts to develop population genetics methods that estimate rates of recombination and levels of natural selection in the human genome have intensified. However, since the two processes have an intimately related impact on genetic variation their inference is vulnerable to confounding. 
Genomic regions subject to recent selection are likely to have a relatively recent common ancestor and consequently less opportunity for historical recombinations that are detectable in contemporary populations. Here we show that selection can reduce the population-based recombination rate estimate substantially. In genome-wide studies for detecting selection we observe a tendency to highlight loci that are subject to low levels of recombination. We find that the outlier approach commonly adopted in such studies may have low power unless variable recombination is accounted for. We introduce a new genome-wide method for detecting selection that exploits the sensitivity to recent selection of methods for estimating recombination rates, while accounting for variable recombination using pedigree data. Through simulations we demonstrate the high power of the Ped/Pop approach to discriminate between neutral and adaptive evolution, particularly in the context of choosing outliers from a genome-wide distribution. Although methods have been developed showing good power to detect selection ,in action', the corresponding window of opportunity is small. In contrast, the power of the Ped/Pop method is maintained for many generations after the fixation of an advantageous variant Sarah Griffiths 1 , Frank Dudbridge 120 MRC Biostatistics Unit, Cambridge, UK Keywords: Genetic association, multimarker tag, haplotype, likelihood analysis In association studies it is generally too expensive to genotype all variants in all subjects. We can exploit linkage disequilibrium between SNPs to select a subset that captures the variation in a training data set obtained either through direct resequencing or a public resource such as the HapMap. These ,tag SNPs' are then genotyped in the whole sample. Multimarker tagging is a more aggressive adaptation of pairwise tagging that allows for combinations of two or more tag SNPs to predict an untyped SNP. 
Here we describe a new method for directly testing the association of an untyped SNP using a multimarker tag. Previously, other investigators have suggested testing a specific tag haplotype, or performing a weighted analysis using weights derived from the training data. However these approaches do not properly account for the imperfect correlation between the tag haplotype and the untyped SNP. Here we describe a straightforward approach to testing untyped SNPs using a missing-data likelihood analysis, including the tag markers as nuisance parameters. The training data is stacked on top of the main body of genotype data so there is information on how the tag markers predict the genotype of the untyped SNP. The uncertainty in this prediction is automatically taken into account in the likelihood analysis. This approach yields more power and also a more accurate prediction of the odds ratio of the untyped SNP. Anke Schulz 1 , Christine Fischer 2 , Jenny Chang-Claude 1 , Lars Beckmann 121 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany 22 Institute of Human Genetics, University of Heidelberg, Germany Keywords: Haplotype, haplotype sharing, entropy, Mantel statistics, marker selection We previously introduced a new method to map genes involved in complex diseases, using haplotype sharing-based Mantel statistics to correlate genetic and phenotypic similarity. Although the Mantel statistic is powerful in narrowing down candidate regions, the precise localization of a gene is hampered in genomic regions where linkage disequilibrium is so high that neighboring markers are found to be significant at similar magnitude and we are not able to discriminate between them. Here, we present a new approach to localize susceptibility genes by combining haplotype sharing-based Mantel statistics with an iterative entropy-based marker selection algorithm. 
For each marker at which the Mantel statistic is evaluated, the algorithm selects a subset of surrounding markers. The subset is chosen to maximize multilocus linkage disequilibrium, which is measured by the normalized entropy difference introduced by Nothnagel et al. (2002). We evaluated the algorithm with respect to type I error and power. Its ability to localize the disease variant was compared to the localization (i) without marker selection and (ii) considering haplotype block structure. Case-control samples were simulated from a set of 18 haplotypes, consisting of 15 SNPs in two haplotype blocks. The new algorithm gave correct type I error and yielded similar power to detect the disease locus compared to the alternative approaches. The neighboring markers were clearly less often significant than the causal locus, and also less often significant compared to the alternative approaches. Thus the new algorithm improved the precision of the localization of susceptibility genes. Mark M. Iles 123 Section of Epidemiology and Biostatistics, LIMM, University of Leeds, UK Keywords: tSNP, tagging, association, HapMap Tagging SNPs (tSNPs) are commonly used to capture genetic diversity cost-effectively. However, it is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be insufficient. If the pilot sample from which tSNPs are chosen is too small or the initial marker map too sparse, tSNP efficacy may be overestimated. An existing estimation method based on bootstrapping goes some way to correct for insufficient sample size and overfitting, but does not completely solve the problem. We describe a novel method, based on exclusion of haplotypes, that improves on the bootstrap approach. Using simulated data, the extent of the sample size problem is investigated and the performance of the bootstrap and the novel method are compared. We incorporate an existing method adjusting for marker density by ,SNP-dropping'. 
We find that insufficient sample size can cause large overestimates of tSNP efficacy, even with as many as 100 individuals, and the problem worsens as the region studied increases in size. Both the bootstrap and the novel method correct much of this overestimate, with our novel method consistently outperforming the bootstrap method. We conclude that a combination of insufficient sample size and overfitting may lead to overestimation of tSNP efficacy and underpowering of studies based on tSNPs. Our novel approach corrects for much of this bias and is superior to the previous method. Sample sizes larger than previously suggested may still be required for accurate estimation of tSNP efficacy. This has obvious ramifications for the selection of tSNPs from HapMap data. Claudio Verzilli 1 , Juliet Chapman 1 , Aroon Hingorani 2 , Juan Pablo-Casas 1 , Tina Shah 2 , Liam Smeeth 1 , John Whittaker 1 ; 24 Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK; 25 Division of Medicine, University College London, UK Keywords: Meta-analysis, genetic association studies We present a Bayesian hierarchical model for the meta-analysis of candidate gene studies with a continuous outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping markers (typically SNPs) in the same genetic region. Meta-analyses of the results at each marker in isolation are seldom appropriate, as they ignore the correlation that may exist between markers due to linkage disequilibrium (LD) and cannot assess the relative importance of variants at each marker. Moreover, such marker-wise meta-analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power.
A better strategy is one which incorporates information about the LD between markers, so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. Here we develop a Bayesian hierarchical linear regression that models the observed genotype group means and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach is applied to the meta-analysis of 24 studies assessing the effect of 7 variants in the C-reactive protein (CRP) gene region on plasma CRP levels, an inflammatory biomarker shown in observational studies to be positively associated with cardiovascular disease. Cathryn M. Lewis 1 , Christopher G. Mathew 1 , Theresa M. Marteau 2 ; 26 Dept. of Medical and Molecular Genetics, King's College London, UK; 27 Department of Psychology, King's College London, UK Keywords: Risk, genetics, CARD15, smoking, model Recently, progress has been made in identifying mutations that confer susceptibility to complex diseases, with the potential to use these mutations in determining disease risk. We developed methods to estimate disease risk based on genotype relative risks (for a gene G), exposure to an environmental factor (E), and family history (with recurrence risk λR for a relative of type R). λR must be partitioned into the risk due to G (which is modelled independently) and the residual risk. The risk model was then applied to Crohn's disease (CD), a severe gastrointestinal disease for which smoking increases disease risk approximately 2-fold, and mutations in CARD15 confer increased risks of 2.25 (for carriers of a single mutation) and 9.3 (for carriers of two mutations). CARD15 accounts for only a small proportion of the genetic component of CD, with a gene-specific λS,CARD15 of 1.16 out of a total sibling relative risk of λS = 27. CD risks were estimated for high-risk individuals who are siblings of a CD case and who also smoke.
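A minimal sketch of this kind of risk calculation, assuming — as an illustration only, not the authors' exact model, which partitions λR more carefully — that genotype, smoking and residual familial risks combine multiplicatively on top of the population prevalence:

```python
def disease_risk(prevalence, rr_genotype, rr_environment, lambda_residual):
    """Illustrative multiplicative risk model: population prevalence scaled by
    the genotype relative risk, the environmental relative risk, and the
    residual familial relative risk left after removing the measured gene's
    contribution (e.g. lambda_S / lambda_S_gene)."""
    return prevalence * rr_genotype * rr_environment * lambda_residual

# Hypothetical numbers loosely echoing the abstract (not its exact estimates):
# prevalence 0.001, two CARD15 mutations (RR 9.3), smoking (RR 2),
# residual sibling relative risk 27 / 1.16.
risk = disease_risk(0.001, 9.3, 2.0, 27 / 1.16)
```

The point of the abstract is precisely that such a combination concentrates risk dramatically in an easily identified subgroup.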
The CD risk to such individuals who carry two CARD15 mutations is approximately 0.34, and for those carrying a single CARD15 mutation the risk is 0.08, compared to a population prevalence of approximately 0.001. These results imply that complex disease genes may allow disease risks in specific, easily identified subgroups of the population to be estimated with greater precision than has hitherto been possible, with a view to prevention. Yurii Aulchenko 1 ; 28 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Compression, information, bzip2, genome-wide SNP data, statistical genetics With advances in molecular technology, studies accessing millions of genetic polymorphisms in thousands of study subjects will soon become common. Such studies generate large amounts of data, whose effective storage and management is a challenge for modern statistical genetics. Standard file compression utilities, such as Zip, Gzip and Bzip2, may help to minimise the storage requirements. Less obvious is the fact that data compression techniques may also be used in the analysis of genetic data: it is known that the efficiency of a particular compression algorithm depends on the probability structure of the data. In this work, we compared different standard and customised tools using data from the human HapMap project. Secondly, we investigated the potential uses of data compression techniques for the analysis of linkage, association and linkage disequilibrium. Suzanne Leal 1 , Bingshan Li 1 ; 29 Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, USA Keywords: Consanguineous pedigrees, missing genotype data Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD).
Previously it was demonstrated by Huang et al. (2005) that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data are available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. The false-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage is highly dependent on which family members are genotyped. When parental genotype data are available, the false-positive evidence for linkage is usually not as strong as when parental genotype data are unavailable. Which family members will aid in the reduction of false-positive evidence of linkage depends strongly on which other family members are genotyped. For a pedigree with an affected proband whose first-cousin parents have been genotyped, a further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or from the proband's sibling-grandparents. When parental genotypes are not available, false-positive evidence for linkage can be reduced by including in the analysis genotype data from either unaffected siblings of the proband or the proband's married-in-grandparents. Najaf Amin 1 , Yurii Aulchenko 1 ; 30 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Genomic Control, pedigree structure, quantitative traits The Genomic Control (GC) method was originally developed to control for population stratification and cryptic relatedness in association studies.
This method assumes that the effect of population substructure on the test statistics is essentially constant across the genome, so that unassociated markers can be used to estimate the effect of confounding on the test statistic. The properties of the GC method have been extensively investigated for different stratification scenarios and compared to alternative methods, such as the transmission-disequilibrium test (TDT). The potential of this method to correct not for occasional cryptic relatedness but for regular pedigree structure, however, has not been investigated before. In this work we investigate the potential of the GC method for pedigree-based association analysis of quantitative traits. The power and type I error of the method were compared to standard methods, such as the measured genotype (MG) approach and the quantitative trait transmission-disequilibrium test. In human pedigrees, with trait heritability varying from 30 to 80%, the power of the MG and GC approaches was always higher than that of the TDT. GC had correct type I error and its power was close to that of MG under moderate heritability (30%), but decreased with higher heritability. William Astle 1 , Chris Holmes 2 , David Balding 1 ; 31 Department of Epidemiology and Public Health, Imperial College London, UK; 32 Department of Statistics, University of Oxford, UK Keywords: Population structure, association studies, genetic epidemiology, statistical genetics In the analysis of population association studies, Genomic Control (GC; Devlin & Roeder, 1999) adjusts the Armitage test statistic to correct the type I error for the effects of population substructure, but its power is often sub-optimal. Turbo Genomic Control (TGC) generalises GC to incorporate co-variation of relatedness and phenotype, retaining control over type I error while improving power. TGC is similar to the method of Yu et al.
(2006), but we extend it to binary (case-control) in addition to quantitative phenotypes, we implement improved estimation of relatedness coefficients, and we derive an explicit statistic that generalizes the Armitage test statistic and is fast to compute. TGC also has similarities to EIGENSTRAT (Price et al., 2006), a new method based on principal components analysis. The problems of population structure (Clayton et al., 2005) and cryptic relatedness (Voight & Pritchard, 2005) are essentially the same: if patterns of shared ancestry differ between cases and controls, whether distant (coancestry) or recent (cryptic relatedness), false positives can arise and power can be diminished. With large numbers of widely-spaced genetic markers, coancestry can now be measured accurately for each pair of individuals via patterns of allele-sharing. Instead of modelling subpopulations, we work with a coancestry coefficient for each pair of individuals in the study. We explain the relationships between TGC, GC and EIGENSTRAT. We present simulation studies and real data analyses to illustrate the power advantage of TGC in a range of scenarios incorporating both substructure and cryptic relatedness. References Clayton, D. et al. (2005) Population structure, differential bias and genomic control in a large-scale case-control association study. Nature Genetics 37(11), November 2005. Devlin, B. & Roeder, K. (1999) Genomic control for association studies. Biometrics 55(4), December 1999. Price, A. et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8), August 2006. Voight, B. J. & Pritchard, J. K. (2005) Confounding from cryptic relatedness in case-control association studies. PLoS Genetics 1(3), September 2005. Yu, J. et al. (2006) A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), February 2006.
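The genomic-control correction that both of the preceding abstracts build on can be sketched as follows. This is a minimal sketch of the Devlin & Roeder idea for 1-df chi-squared association statistics; the function name and the convention of never deflating when the estimated inflation factor falls below 1 are our own choices.

```python
import numpy as np

CHI2_1DF_MEDIAN = 0.4549  # median of the chi-squared distribution with 1 df

def genomic_control(chisq):
    """Deflate 1-df association chi-squared statistics by the inflation
    factor lambda, estimated from the genome-wide median of the statistics
    (Devlin & Roeder, 1999).  Returns corrected statistics and lambda."""
    lam = max(1.0, np.median(chisq) / CHI2_1DF_MEDIAN)  # do not inflate
    return chisq / lam, lam
```

Under stratification (or, as the Amin & Aulchenko abstract argues, regular pedigree structure), lambda exceeds 1 and all statistics are shrunk accordingly.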
Hervé Perdry 1 , Marie-Claude Babron 1 , Françoise Clerget-Darpoux 1 ; 33 INSERM U535 and Univ. Paris Sud, UMR-S 535, Villejuif, France Keywords: Modifier genes, case-parents trios, ordered transmission disequilibrium test A modifying locus is a polymorphic locus, distinct from the disease locus, which leads to differences in the disease phenotype, either by modifying the penetrance of the disease allele or by modifying the expression of the disease. The effect of such a locus is a clinical heterogeneity that can be reflected by the values of an appropriate covariate, such as the age of onset or the severity of the disease. We designed the Ordered Transmission Disequilibrium Test (OTDT) to test for a relation between the clinical heterogeneity, expressed by the covariate, and marker genotypes of a candidate gene. The method applies to trio families with one affected child and his parents. Each family member is genotyped at a bi-allelic marker M of a candidate gene. To each of the families is associated a covariate value, and the families are ordered on the values of this covariate. Like the TDT (Spielman et al. 1993), the OTDT is based on the observation of the transmission rate T of a given allele at M. The OTDT aims to find a critical value of the covariate which separates the sample of families into two subsamples in which the transmission rates are significantly different. We investigate the power of the method by simulations under various genetic models and covariate distributions. Acknowledgments: H. Perdry is funded by ARSEP. Pascal Croiseau 1 , Heather Cordell 2 , Emmanuelle Génin 1 ; 34 INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France; 35 Institute of Human Genetics, Newcastle University, UK Keywords: Association, missing data, conditional logistic regression Missing data is an important problem in association studies. Several methods used to test for association require individuals to be genotyped at the full set of markers.
Individuals with missing data need to be excluded from the analysis. This can entail a substantial decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL. One may then falsely conclude that the marker is more likely to be the DSL. We recently developed a Multiple Imputation method to infer missing data on case-parent trios. Starting from the observed data, a small number of complete data sets are generated by a Markov chain Monte Carlo approach. These complete data sets are analysed using standard statistical packages and the results are combined as described in Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually detects the DSL correctly even if the percentage of missing data is high. This is not the case for the naïve approach that consists of discarding trios with missing data. In conclusion, Multiple Imputation has the advantage of being easy to use and flexible, and is therefore a promising tool in the search for DSLs involved in complex diseases. Salma Kotti 1 , Heike Bickeböller 2 , Françoise Clerget-Darpoux 1 ; 36 University Paris Sud, UMR-S535, Villejuif, France; 37 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany Keywords: Genotype relative risk, internal controls, family-based analyses Family-based analyses using internal controls are very popular both for detecting the effect of a genetic factor and for estimating the relative disease risk of the corresponding genotypes. Two different procedures are often applied to reconstitute internal controls.
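The combination step of the multiple-imputation approach described above follows the standard rules of Little & Rubin (2002); a minimal sketch for a scalar parameter (the function name is ours):

```python
import numpy as np

def combine_rubin(estimates, variances):
    """Combine per-imputation point estimates and variances (Little & Rubin,
    2002).  Returns the pooled estimate and its total variance
    T = W + (1 + 1/m) * B, where W is the within- and B the
    between-imputation variance over the m completed data sets."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    m = len(est)
    qbar = est.mean()            # pooled point estimate
    W = var.mean()               # within-imputation variance
    B = est.var(ddof=1)          # between-imputation variance
    return qbar, W + (1 + 1 / m) * B
```

The between-imputation term is what propagates the uncertainty about the missing genotypes into the final association test.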
The first procedure considers one pseudocontrol genotype formed by the parental non-transmitted alleles, also called 1:1 matching of alleles, while the second corresponds to three pseudocontrols, corresponding to all genotypes that can be formed by the parental alleles except that of the case (1:3 matching). Many studies have compared the two procedures in terms of power and have concluded that the difference depends on the underlying genetic model and the allele frequencies. However, the estimation of the Genotype Relative Risk (GRR) under the two procedures has not been studied. Because under 1:1 matching the control group is composed of the alleles not transmitted to the affected child, whereas under 1:3 matching the control group partly comprises alleles already transmitted to the affected child, we expect a difference in the GRR estimation. In fact, we suspect that the second procedure leads to biased estimation of the GRRs. We will analytically derive the GRR estimators for the 1:1 and 1:3 matching and will present the results at the meeting. Luigi Palla 1 , David Siegmund 2 ; 39 Department of Mathematics, Free University Amsterdam, The Netherlands; 40 Department of Statistics, Stanford University, California, USA Keywords: TDT, assortative mating, inbreeding, statistical power Substantial assortative mating (AM) is often recorded for physical and psychological traits, dichotomous as well as quantitative, that are presumed to have a multifactorial genetic component. In particular, when the trait has a multifactorial origin, AM increases the genetic variance even more than inbreeding does, because it acts across loci as well as within loci. Under the assumption of a polygenic model for AM dating back to Wright (1921) and refined by Crow and Felsenstein (1968, 1982), the effect of assortative mating on the power to detect genetic association in the Transmission Disequilibrium Test (TDT) is explored as parameters such as the effective number of genes and the allele frequency vary. The power is reflected by the non-centrality parameter of the TDT and is expressed as a function of the number of trios, the relative risk of the heterozygous genotype and the allele frequency (Siegmund and Yakir, 2007). The non-centrality parameter of the relevant score statistic is updated to take account of the effect of AM, which is expressed in terms of an 'effective' inbreeding coefficient.
In particular, for dichotomous traits it is apparent that the higher the number of genes involved in the trait, the lower the loss in power due to AM. Finally, an attempt is made to extend this relation to the Q-TDT (Rabinowitz, 1997), which involves considering the effect of AM also on the phenotypic variance of the trait of interest, under the assumption that AM affects only its additive genetic component. References Crow, J. F. & Felsenstein, J. (1968) The effect of assortative mating on the genetic composition of a population. Eugen. Quart. 15, 87–97. Rabinowitz, D. (1997) A transmission disequilibrium test for quantitative trait loci. Human Heredity 47, 342–350. Siegmund, D. & Yakir, B. (2007) The Statistics of Gene Mapping. Springer. Wright, S. (1921) Systems of mating. III. Assortative mating based on somatic resemblance. Genetics 6, 144–161. Jérémie Nsengimana 1 , Ben D Brown 2 , Alistair S Hall 2 , Jenny H Barrett 1 ; 41 Leeds Institute of Molecular Medicine, University of Leeds, UK; 42 Leeds Institute for Genetics, Health and Therapeutics, University of Leeds, UK Keywords: Inflammatory genes, haplotype, coronary artery disease Genetic Risk of Acute Coronary Events (GRACE) is an initiative to collect cases of coronary artery disease (CAD) and their unaffected siblings in the UK and to use them to map genetic variants increasing disease risk. The aim of the present study was to test the association between CAD and 51 single nucleotide polymorphisms (SNPs), and their haplotypes, from 35 inflammatory genes. Genotype data were available for 1154 persons affected before age 66 (including 48% before age 50) and their 1545 unaffected siblings (891 discordant families). Each SNP was tested for association with CAD, and haplotypes within genes or gene clusters were tested using FBAT (Rabinowitz & Laird, 2000). For the most significant results, genetic effect size was estimated using conditional logistic regression (CLR) within STATA, adjusting for other risk factors.
Haplotypes were assigned using HAPLORE (Zhang et al., 2005), which considers all parental mating types consistent with offspring genotypes and assigns them a probability of occurrence. This probability was used in the CLR to weight the haplotypes. In the single-SNP analysis, several SNPs showed some evidence for association, including one SNP in the interleukin-1A gene. Analysing haplotypes in the interleukin-1 gene cluster, a common 3-SNP haplotype was found to increase the risk of CAD (P = 0.009). In an additive genetic model adjusting for covariates, the odds ratio (OR) for this haplotype is 1.56 (95% CI: 1.16–2.10, P = 0.004) for early-onset CAD (before age 50). This study illustrates the utility of haplotype analysis in family-based association studies to investigate candidate genes. References Rabinowitz, D. & Laird, N. M. (2000) Hum Hered 50, 211–223. Zhang, K., Sun, F. & Zhao, H. (2005) Bioinformatics 21, 90–103. Andrea Foulkes 1 , Recai Yucel 1 , Xiaohong Li 1 ; 43 Division of Biostatistics, University of Massachusetts, USA Keywords: Haplotype, high-dimensional, mixed modeling The explosion of molecular-level information coupled with large epidemiological studies presents an exciting opportunity to uncover the genetic underpinnings of complex diseases; however, several analytical challenges remain to be addressed. Characterizing the components of complex diseases inevitably requires consideration of synergies across multiple genetic loci and environmental and demographic factors. In addition, it is critical to capture information on allelic phase, that is, whether alleles within a gene are in cis (on the same chromosome) or in trans (on different chromosomes). In association studies of unrelated individuals, this alignment of alleles within a chromosomal copy is generally not observed. We address the potential ambiguity in allelic phase in this high-dimensional data setting using mixed effects models.
Both a semi-parametric and a fully likelihood-based approach to estimation are considered to account for missingness in cluster identifiers. In the first case, we apply a multiple imputation procedure coupled with a first-stage expectation-maximization algorithm for parameter estimation. A bootstrap approach is employed to assess sensitivity to variability induced by parameter estimation. Secondly, a fully likelihood-based approach using an expectation conditional maximization algorithm is described. Notably, these models allow for characterizing high-order gene-gene interactions while providing a flexible statistical framework to account for the confounding or mediating role of person-specific covariates. The proposed method is applied to data arising from a cohort of human immunodeficiency virus type-1 (HIV-1) infected individuals at risk for therapy-associated dyslipidemia. Simulation studies demonstrate reasonable power and control of family-wise type I error rates. Vivien Marquard 1 , Lars Beckmann 1 , Jenny Chang-Claude 1 ; 44 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany Keywords: Genotyping errors, type I error, haplotype-based association methods It has been shown in several simulation studies that genotyping errors may have a great impact on the type I error of statistical methods used in genetic association analysis of complex diseases. Our aim was to investigate type I error rates in a case-control study when differential and non-differential genotyping errors were introduced in realistic scenarios. We simulated case-control data sets in which individual genotypes were drawn from a haplotype distribution of 18 haplotypes with 15 markers in the APM1 gene. Genotyping errors were introduced following the 'unrestricted' and 'symmetric with 0 edges' error models described by Heid et al. (2006).
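A much-simplified, non-differential version of such error injection — each allele flipped independently with a fixed probability — can be sketched as below; Heid et al.'s actual error models are richer, and the function name is ours.

```python
import numpy as np

def flip_alleles(haplotypes, p, rng=None):
    """Introduce non-differential genotyping errors: each allele (0/1) on
    each haplotype is flipped independently with probability p."""
    rng = np.random.default_rng(rng)
    h = np.asarray(haplotypes)
    flips = rng.random(h.shape) < p   # which positions get an error
    return np.where(flips, 1 - h, h)
```

Differential errors — the scenario that inflated the score test's type I error — would apply different values of p to cases and controls.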
In six scenarios, errors resulted from changes of one allele to another with predefined probabilities of 1%, 2.5% or 10%. Multiple errors per haplotype were possible, varying between 0 and 15, the number of markers investigated. We examined three association methods: Mantel statistics using haplotype sharing; a haplotype-specific score test; and the Armitage trend test for single markers. The type I error rates were not affected for any of the three methods at genotyping error rates of less than 1%. For higher error rates and differential errors, the type I error of the Mantel statistic was only slightly increased, and that of the Armitage trend test moderately increased, whereas the type I error rates of the score test were greatly increased. The type I error rates were correct for all three methods for non-differential errors. Further investigations will be carried out with different frequencies of differential error rates and will focus on power. Arne Neumann 1 , Dörthe Malzahn 1 , Martina Müller 2 , Heike Bickeböller 1 ; 45 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany; 46 GSF-National Research Center for Environment and Health, Neuherberg & IBE-Institute of Epidemiology, Ludwig-Maximilians University München, Germany Keywords: Interaction, longitudinal, nonparametric Longitudinal data show the time-dependent course of phenotypic traits. In this contribution, we consider longitudinal cohort studies and investigate the association between two candidate genes and a dependent quantitative longitudinal phenotype. The set-up defines a factorial design which allows us to test simultaneously for the overall gene effect of the loci as well as for possible gene-gene and gene-time interaction. The latter would induce genetically based time-profile differences in the longitudinal phenotype. We adapt a non-parametric statistical test to genetic epidemiological cohort studies and investigate its performance by simulation studies.
The statistical test was originally developed for longitudinal clinical studies (Brunner, Munzel & Puri, 1999, J Multivariate Anal 70:286–317). It is non-parametric in the sense that no assumptions are made about the underlying distribution of the quantitative phenotype. Longitudinal observations belonging to the same individual can be arbitrarily dependent on one another across time points, whereas trait observations of different individuals are independent. The two loci are assumed to be statistically independent. Our simulations show that the nonparametric test is comparable with ANOVA in terms of power to detect gene-gene and gene-time interaction in an ANOVA-favourable setting. Rebecca Hein 1 , Lars Beckmann 1 , Jenny Chang-Claude 1 ; 47 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany Keywords: Indirect association studies, interaction effects, linkage disequilibrium, marker allele frequency Association studies accounting for gene-environment interactions (GxE) may be useful for detecting genetic effects and identifying important environmental effect modifiers. Current technology facilitates very dense marker spacing in genetic association studies; however, the true disease variant(s) may not be genotyped. In this situation, an association between a gene and a phenotype may still be detectable using genetic markers associated with the true disease variant(s) (indirect association). Zondervan and Cardon (2004) showed that the odds ratios (OR) of markers associated with the disease variant depend strongly on the linkage disequilibrium (LD) between the variant and the markers and on whether the allele frequencies match, and thereby influence the sample size needed to detect genetic association. We examined the influence of LD and allele frequencies on the sample size needed to detect GxE in indirect association studies, and provide tables for sample size estimation.
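The way LD and allele-frequency mismatch drive the required sample size can be sketched with the classical main-effects approximation N_indirect ≈ N_direct / r² (e.g. Pritchard & Przeworski, 2001); this is used here purely for illustration and is not the authors' exact GxE calculation.

```python
def r_squared(pA, pB, dprime):
    """Squared correlation r2 between a disease allele (frequency pA) and a
    marker allele (frequency pB) given D', via standard LD algebra."""
    if dprime >= 0:
        dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        dmax = min(pA * pB, (1 - pA) * (1 - pB))
    d = dprime * dmax
    return d * d / (pA * (1 - pA) * pB * (1 - pB))

def indirect_sample_size(n_direct, pA, pB, dprime):
    """Classical approximation: sample size for indirect association scales
    as N_direct / r2 (illustrative, not the authors' GxE formula)."""
    return n_direct / r_squared(pA, pB, dprime)
```

Note that even with D' = 1, a frequency mismatch (pA = 0.5 vs pB = 0.1) gives r² = 1/9 and so a ninefold larger sample, matching the abstract's emphasis on discordant allele frequencies.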
For discordant allele frequencies and incomplete LD, sample sizes can be infeasibly large. The influence of both factors is stronger for disease loci with small rather than moderate to high disease allele frequencies. A decline in D' of, e.g., 5% has less impact on sample size than increasing the difference in allele frequencies by the same percentage. Assuming 80% power, large interaction effects can be detected using smaller sample sizes than those needed for the detection of main effects. The detection of interaction effects involving rare alleles may not be possible. Focussing only on marker density can therefore be a limited strategy in indirect association studies of GxE. Cyril Dalmasso 1 , Emmanuelle Génin 2 , Catherine Bourgain 2 , Philippe Broët 1 ; 48 JE 2492, Univ. Paris-Sud, France; 49 INSERM UMR-S 535 and University Paris Sud, Villejuif, France Keywords: Linkage analysis, multiple testing, False Discovery Rate, mixture model In the context of genome-wide linkage analyses, where a large number of statistical tests are performed simultaneously, the False Discovery Rate (FDR), defined as the expected proportion of false discoveries among all discoveries, is nowadays widely used to take the multiple testing problem into account. Other related criteria have been considered, such as the local False Discovery Rate (lFDR), a variant of the FDR that gives each test its own measure of significance. The lFDR is defined as the posterior probability that a null hypothesis is true. Most of the proposed methods for estimating the lFDR or the FDR rely on distributional assumptions under the null hypothesis. However, in observational studies, the empirical null distribution may be very different from the theoretical one. In this work, we propose a mixture model-based approach that provides estimates of the lFDR and the FDR in the context of large-scale variance component linkage analyses.
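The lFDR defined above can be sketched for a two-component Gaussian mixture. Here the null and alternative densities are taken as known, whereas estimating them — in particular the empirical null — is precisely the hard part the abstract addresses; the function names and the Gaussian components are illustrative assumptions.

```python
import math

def npdf(z, mu=0.0, sd=1.0):
    """Normal density."""
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def local_fdr(z, pi0, mu1, sd1):
    """Posterior probability that the null holds given z, under the mixture
    f(z) = pi0 * f0(z) + (1 - pi0) * f1(z), with f0 = N(0,1) as the null
    and f1 = N(mu1, sd1^2) as the alternative component."""
    f0 = npdf(z)
    f1 = npdf(z, mu1, sd1)
    return pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)
```

A test statistic near the null mode gets an lFDR near 1, while one deep in the alternative component gets an lFDR near 0, which is what makes the lFDR a per-test measure of significance.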
In particular, this approach allows the empirical null distribution to be estimated, the latter being a key quantity for any simultaneous inference procedure. The proposed method is applied to a real dataset. Arief Gusnanto 1 , Frank Dudbridge 1 ; 50 MRC Biostatistics Unit, Cambridge, UK Keywords: Significance, genome-wide, association, permutation, multiplicity Genome-wide association scans have introduced statistical challenges, mainly in the multiplicity of thousands of tests. The question of what constitutes a significant finding remains somewhat unresolved. Permutation testing is very time-consuming, whereas Bayesian arguments struggle to distinguish direct from indirect association. It seems attractive to summarise the multiplicity in a simple form that allows users to avoid time-consuming permutations. A standard significance level would facilitate reporting of results and reduce the need for permutation tests. This is potentially important because current scans do not have full coverage of the whole genome, and yet the implicit multiplicity is genome-wide. We discuss some proposed summaries, with reference to the empirical null distribution of the multiple tests, approximated through a large number of random permutations. Using genome-wide data from the Wellcome Trust Case Control Consortium, we use a sub-sampling approach with increasing density to estimate the nominal p-value that gives a family-wise significance of 5%. The results indicate that the significance level converges to about 1e-7 as the marker spacing becomes infinitely dense. We considered the concept of an effective number of independent tests, and showed that, when used in a Bonferroni correction, this number varies with the overall significance level but is roughly constant in the region of interest.
We compared several estimators of the effective number of tests, and showed that in the region of significance of interest, Patterson's eigenvalue-based estimator gives approximately the right family-wise error rate. Michael Nothnagel 1, Amke Caliebe 1, Michael Krawczak 1 1 Institute of Medical Informatics and Statistics, University Clinic Schleswig-Holstein, University of Kiel, Germany Keywords: Association scans, Bayesian framework, posterior odds, genetic risk, multiplicative model Whole-genome association scans have been suggested as a cost-efficient way to survey genetic variation and to map genetic disease factors. We used a Bayesian framework to investigate the posterior odds of a genuine association under multiplicative disease models. We demonstrate that the p value alone is not a sufficient means to evaluate the findings in association studies. We suggest that likelihood ratios should accompany p values in association reports. We argue that, given the reported results of whole-genome scans, more associations should have been successfully replicated if the consistently made assumptions about considerable genetic risks were correct. We conclude that it is very likely that the vast majority of relative genetic risks are only of the order of 1.2 or lower. Clive Hoggart 1, Maria De Iorio 1, John Whittaker 2, David Balding 1 1 Department of Epidemiology and Public Health, Imperial College London, UK 2 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK Keywords: Genome-wide association analyses, shrinkage priors, Lasso Testing one SNP at a time does not fully realise the potential of genome-wide association studies to identify multiple causal variants of small effect, which is a plausible scenario for many complex diseases. Moreover, many simulation studies assume a single causal variant, and so more complex realities are ignored.
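The posterior-odds argument in the Nothnagel et al. abstract above can be made concrete with a back-of-the-envelope calculation: approximating the likelihood ratio of a fixed-threshold test by power/α, the posterior odds that a significant hit is genuine are that ratio times the prior odds. The prior and power figures below are illustrative assumptions, not values from the abstract.

```python
def posterior_prob_true(alpha, power, prior):
    """P(association is genuine | p < alpha), treating the thresholded test
    as a likelihood ratio of roughly power/alpha."""
    post_odds = (power / alpha) * (prior / (1.0 - prior))
    return post_odds / (1.0 + post_odds)

# With 1 in 10,000 SNPs truly associated and 50% power at alpha = 1e-4,
# a "significant" hit is genuine with probability only about one third.
prob = posterior_prob_true(1e-4, 0.5, 1e-4)
```

This is the sense in which a p value alone is insufficient: the same p value carries very different evidential weight depending on power and prior odds, which is why the authors suggest reporting likelihood ratios alongside p values.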
Analysing large numbers of variants simultaneously is now becoming feasible, thanks to developments in Bayesian stochastic search methods. We pose the problem of SNP selection as variable selection in a regression model. In contrast to single-SNP tests, this approach simultaneously models the effect of all SNPs. SNPs are selected by a Bayesian interpretation of the lasso (Tibshirani, 1996): the maximum a posteriori (MAP) estimate of the regression coefficients, which are given independent double exponential prior distributions. The double exponential distribution is an example of a shrinkage prior; MAP estimates under shrinkage priors can be exactly zero, so all SNPs with non-zero regression coefficients are selected. In addition to the commonly used double exponential (Laplace) prior, we also implement the normal exponential gamma prior distribution. We show that use of the Laplace prior improves SNP selection in comparison with single-SNP tests, and that the normal exponential gamma prior leads to a further improvement. Our method is fast and can handle very large numbers of SNPs: we demonstrate its performance using both simulated and real genome-wide data sets with 500K SNPs, which can be analysed in 2 hours on a desktop workstation. Mickael Guedj 1,2, Jerome Wojcik 2, Gregory Nuel 1 1 Laboratoire Statistique et Génome, Université d'Evry, Evry, France 2 Serono Pharmaceutical Research Institute, Plan-les-Ouates, Switzerland Keywords: Local Replication, Local Score, Association In gene-mapping, replication of initial findings has been put forward as the approach of choice for filtering false positives from true signals for underlying loci. In practice, however, such replications are rarely observed. Besides statistical and technical factors (lack of power, multiple testing, stratification, quality control, etc.), inconsistent conclusions obtained from independent populations might result from real biological differences.
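The MAP estimate under independent Laplace priors described in the Hoggart et al. abstract above is, for a fixed penalty, the ordinary lasso solution, so a minimal coordinate-descent sketch conveys the idea. This is not the authors' implementation (which also uses normal exponential gamma priors and scales to 500K SNPs); the toy genotypes, penalty and helper names are invented.

```python
import random

def soft_threshold(x, t):
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def lasso(X, y, lam, iters=100):
    """Coordinate descent for argmin_b 0.5*||y - Xb||^2 + lam*||b||_1,
    i.e. the MAP estimate under independent double exponential priors on b."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(iters):
        for j in range(p):
            # correlation of column j with the partial residual leaving j out
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j))
                for i in range(n)
            )
            b[j] = soft_threshold(rho, lam) / col_sq[j] if col_sq[j] else 0.0
    return b

# Toy data: three "SNPs" coded 0/1/2, only the first affects the trait.
random.seed(1)
X = [[random.choice([0, 1, 2]) for _ in range(3)] for _ in range(200)]
y = [row[0] + random.gauss(0.0, 0.5) for row in X]
# centre columns and response so no intercept is needed
n = len(X)
for j in range(3):
    m = sum(row[j] for row in X) / n
    for row in X:
        row[j] -= m
ybar = sum(y) / n
y = [v - ybar for v in y]
coef = lasso(X, y, lam=50.0)
```

The shrinkage behaviour the abstract relies on is visible directly: the causal SNP keeps a clearly non-zero coefficient while the two null SNPs are shrunk to (or very near) zero, so "selection" falls out of the point estimate itself.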
In particular, the high degree of variation in the strength of LD among populations of different origins is a major challenge to the discovery of genes. Seeking Local Replications (defined as the presence of a signal of association in the same genomic region across populations) instead of strict replications (same locus, same risk allele) may lead to more reliable results. Recently, a multi-marker approach based on the Local Score statistic has been proposed as a simple and efficient way to select candidate genomic regions at the first stage of genome-wide association studies. Here we propose an extension of this approach adapted to replicated association studies. Based on simulations, this method appears promising. In particular, it outperforms classical single-marker strategies in detecting modest-effect genes. Additionally, it constitutes, to our knowledge, the first framework dedicated to the detection of such Local Replications. Juliet Chapman 1, Claudio Verzilli 1, John Whittaker 1 1 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK Keywords: FDR, Association studies, Bayesian model selection As genome-wide association studies become commonplace, there is debate as to how such studies might be analysed and what we might hope to gain from the data. It is clear that standard single-locus approaches are limited in that they do not adjust for the effects of other loci, and problematic since it is not obvious how to adjust for multiple comparisons. False discovery rates have been suggested, but it is unclear how well these will cope with highly correlated genetic data. We consider the validity of standard false discovery rates in large-scale association studies. We also show that a Bayesian procedure has advantages in detecting causal loci amongst a large number of dependent SNPs, and investigate properties of a Bayesian FDR.
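The Local Score statistic used in the Guedj et al. abstract above is, in its simplest form, the maximal segment sum of per-marker scores, computable in one pass with Kadane's algorithm. The scoring scheme (minus log10 p minus a fixed penalty) and the toy p-values below are illustrative assumptions, not the authors' exact formulation.

```python
import math

def local_score(pvals, penalty=1.0):
    """Best-scoring contiguous run of markers, with per-marker score
    -log10(p) - penalty; returns (score, (first_index, last_index))."""
    best = cur = 0.0
    best_seg = (0, -1)
    start = 0
    for i, p in enumerate(pvals):
        s = -math.log10(p) - penalty
        if cur + s > 0.0:
            cur += s
        else:
            cur, start = 0.0, i + 1    # restart the candidate segment
        if cur > best:
            best, best_seg = cur, (start, i)
    return best, best_seg

# markers 3-5 carry a cluster of small p-values
pv = [0.5, 0.8, 0.3, 1e-4, 1e-3, 1e-5, 0.6, 0.9]
score, segment = local_score(pv)
```

A Local Replication in the abstract's sense would then amount to the high-scoring segments of two populations overlapping in position, even if the peak marker differs between them.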
Peter Kraft 1 1 Harvard School of Public Health, Boston, USA Keywords: Gene-environment interaction, genome-wide association scans Appropriately analyzed two-stage designs, in which a subset of available subjects is genotyped on a genome-wide panel of markers at the first stage and a much smaller subset of the most promising markers is then genotyped on the remaining subjects, can have nearly as much power as a single-stage study in which all subjects are genotyped on the genome-wide panel, yet can be much less expensive. Typically, the "most promising" markers are selected based on evidence for a marginal association between genotypes and disease. Subsequently, the few markers found to be associated with disease at the end of the second stage are interrogated for evidence of gene-environment interaction, mainly to understand their impact on disease etiology and public health. However, this approach may miss variants which have a sizeable effect restricted to one exposure stratum and therefore only a modest marginal effect. We have proposed to use information on the joint effects of genes and a discrete list of environmental exposures at the initial screening stage to select promising markers for the second stage [Kraft et al., Hum Hered 2007]. This approach optimizes power to detect both variants that have a sizeable marginal effect and variants that have a small marginal effect but a sizeable effect in a stratum defined by an environmental exposure. As an example, I discuss a proposed genome-wide association scan for Type II diabetes susceptibility variants based in several large nested case-control studies.
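The joint screening step in the Kraft abstract above can be caricatured as ranking first-stage markers on a 2-degree-of-freedom statistic combining marginal and gene-environment components, rather than on the marginal component alone. The statistic, marker names and z-values below are invented for illustration and are not the design from Kraft et al.

```python
def select_markers(stats, keep):
    """Rank markers by the joint 2-df statistic z_marginal^2 + z_gxe^2 and
    keep the most promising for second-stage genotyping.
    stats: list of (marker, z_marginal, z_interaction)."""
    ranked = sorted(stats, key=lambda m: m[1] ** 2 + m[2] ** 2, reverse=True)
    return [name for name, _, _ in ranked[:keep]]

# "rs2" has almost no marginal effect but a strong interaction signal;
# joint screening keeps it, whereas marginal screening would have dropped it.
stats = [("rs1", 3.0, 0.1), ("rs2", 0.2, 4.5), ("rs3", 1.0, 1.0), ("rs4", 0.5, 0.2)]
chosen = select_markers(stats, keep=2)
```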
Beate Glaser 1, Peter Holmans 1 1 Biostatistics and Bioinformatics Unit, Cardiff University, School of Medicine, Heath Park, Cardiff, UK Keywords: Combined case-control and trios analysis, Power, False-positive rate, Simulation, Association studies The statistical power of genetic association studies can be enhanced by combining the analysis of case-control samples with that of parent-offspring trio samples. Various combined analysis techniques have been developed recently; as yet, there have been no comparisons of their power. This work was performed with the aim of identifying the most powerful method among available combined techniques, including test statistics developed by Kazeem and Farrall (2005), Nagelkerke and colleagues (2004) and Dudbridge (2006), as well as a simple combination of χ2-statistics from the single samples. Simulation studies were performed to investigate their power under additive, multiplicative, dominant and recessive disease models. False-positive rates were determined by studying the type I error rates under null models, including models with unequal allele frequencies between the case-control and trio samples. We identified three techniques with equivalent power and false-positive rates, which included modifications of the three main approaches: 1) the unmodified combined odds ratio estimate of Kazeem & Farrall (2005), 2) a modified version of the combined risk ratio estimate of Nagelkerke & colleagues (2004), and 3) a modified version of the combined risk ratio estimate of Dudbridge (2006). Our work highlights the importance of studies investigating the test performance criteria of novel methods, as they will help users to select the optimal approach within a range of available analysis techniques. David Almorza 1, M.V.
Kandus 2, Juan Carlos Salerno 2, Rafael Boggio 3 1 Facultad de Ciencias del Trabajo, University of Cádiz, Spain 2 Instituto de Genética IGEAF, Buenos Aires, Argentina 3 Universidad Nacional de La Plata, Buenos Aires, Argentina Keywords: Principal component analysis, maize, ear weight, inbred lines The objective of this work was to evaluate the relationships among different traits of the ear of maize inbred lines and to group genotypes according to their performance. Ten inbred lines developed at IGEAF (INTA Castelar) and five public inbred lines used as checks were studied. A field trial was carried out in Castelar, Buenos Aires (34° 36' S, 58° 39' W) using a completely randomized design with three replications. At harvest, individual weight (P.E.), diameter (D.E.), row number (N.H.) and length (L.E.) of the ear were assessed. A principal component analysis, PCA (Infostat 2005), was used, and the variability of the data was depicted with a biplot. Principal components 1 and 2 (CP1 and CP2) explained 90% of the data variability. CP1 was correlated with P.E., L.E. and D.E., while CP2 was correlated with N.H. We found that individual weight (P.E.) was more strongly correlated with ear diameter (D.E.) than with length (L.E.). Five groups of inbred lines were distinguished: high P.E. and mean N.H. (04-70, 04-73, 04-101 and MO17), high P.E. but lower N.H. (04-61 and B14), mean P.E. and N.H. (B73, 04-123 and 04-96), high N.H. but lower P.E. (LP109, 04-8, 04-91 and 04-76), and low P.E. and low N.H. (LP521 and 04-104). The PCA showed which variables had the greatest influence on ear weight and how they are correlated with each other. Moreover, the groups found with this analysis allow inbred lines to be evaluated on several traits simultaneously.
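The PCA underlying the maize analysis above can be sketched with a power-iteration extraction of the leading component (Infostat computes the full decomposition and the biplot; the trait table below is invented and is not the ear-trait data).

```python
def pc1(rows, iters=200):
    """Leading principal component via power iteration on the sample covariance."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    X = [[r[j] - means[j] for j in range(p)] for r in rows]
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]          # renormalise each iteration
    return v

# Invented trait table: the first two "traits" move together while the third
# barely varies, so the first component loads almost equally on traits 1 and 2.
rows = [[i, i, 0.2 * (i % 2)] for i in range(8)]
loadings = pc1(rows)
```

The loadings play the role of the CP1 correlations reported in the abstract: traits that co-vary strongly dominate the first component, which is why P.E., L.E. and D.E. cluster on CP1 in the biplot.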
Sven Knüppel 1, Anja Bauerfeind 1, Klaus Rohde 1 1 Department of Bioinformatics, MDC Berlin, Germany Keywords: Haplotypes, association studies, case-control, nuclear families The era of gene chip technology provides a plethora of phase-unknown SNP genotypes with which to find significant associations with genetic traits. To circumvent the possibly low information content of a single SNP, one groups successive SNPs and estimates haplotypes. Haplotype estimation, however, may reveal ambiguous haplotype pairs and bias the application of statistical methods. Zaykin et al. (Hum Hered, 53:79-91, 2002) proposed the construction of a design matrix to take this ambiguity into account. Here we present a set of functions written for the statistical package R which carry out haplotype estimation on the basis of the EM algorithm for individuals (case-control) or nuclear families. The construction of a design matrix on the basis of estimated haplotypes or haplotype pairs allows the application of standard methods for association studies (linear and logistic regression), as well as statistical methods such as haplotype-sharing statistics and the TDT test. Applications of these methods to genome-wide association screens will be demonstrated. Manuela Zucknick 1, Chris Holmes 2, Sylvia Richardson 1 1 Department of Epidemiology and Public Health, Imperial College London, UK 2 Department of Statistics, Oxford Center for Gene Function, University of Oxford, UK Keywords: Bayesian, variable selection, MCMC, large p, small n, structured dependence In large-scale genomic applications vast numbers of markers or genes are scanned to find a few candidates which are linked to a particular phenotype. Statistically, this is a variable selection problem in the "large p, small n" situation where many more variables than samples are available.
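The EM step behind the haplotype estimation in the Knüppel et al. abstract above can be sketched for the smallest interesting case, two biallelic SNPs, where only the double heterozygote is phase-ambiguous and the E-step splits it between the two compatible haplotype pairs in proportion to current frequencies. The R functions described handle general SNP windows and nuclear families, so this is only a toy under those simplifying assumptions.

```python
def em_haplotypes(genotypes, iters=50):
    """EM haplotype-frequency estimation for two biallelic SNPs.
    genotypes: list of (g1, g2), each g the minor-allele count in {0, 1, 2}.
    Only the double heterozygote (1, 1) has unknown phase."""
    haps = ("00", "01", "10", "11")
    freq = {h: 0.25 for h in haps}
    n_chrom = 2 * len(genotypes)
    for _ in range(iters):
        count = {h: 0.0 for h in haps}
        for g1, g2 in genotypes:
            if g1 == 1 and g2 == 1:
                w_cis = freq["00"] * freq["11"]      # phase 00 / 11
                w_trans = freq["01"] * freq["10"]    # phase 01 / 10
                total = (w_cis + w_trans) or 1.0
                for h in ("00", "11"):
                    count[h] += w_cis / total
                for h in ("01", "10"):
                    count[h] += w_trans / total
            else:
                # phase is fully determined by the genotypes
                a = (0, 1) if g1 == 1 else (g1 // 2, g1 // 2)
                b = (0, 1) if g2 == 1 else (g2 // 2, g2 // 2)
                count[f"{a[0]}{b[0]}"] += 1
                count[f"{a[1]}{b[1]}"] += 1
        freq = {h: count[h] / n_chrom for h in haps}   # M-step
    return freq

# Strong LD: two unambiguous anchors plus eight ambiguous double heterozygotes;
# EM pushes essentially all mass onto haplotypes 00 and 11.
freq = em_haplotypes([(0, 0), (2, 2)] + [(1, 1)] * 8)
```

The posterior phase weights computed in the E-step are exactly what a Zaykin-style design matrix records, so downstream regressions can use the expected haplotype counts instead of a single hard phase call.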
An additional feature is the complex dependence structure often observed among the markers/genes, due to linkage disequilibrium or their joint involvement in biological processes. Bayesian variable selection methods using indicator variables are well suited to the problem. Binary phenotypes like disease status are common, and both Bayesian probit and logistic regression can be applied in this context. We argue that logistic regression models are both easier to tune and to interpret than probit models, and implement the approach of Holmes & Held (2006). Because the model space is vast, MCMC methods are used as stochastic search algorithms with the aim of quickly finding regions of high posterior probability. In a trade-off between fast-updating but slow-moving single-gene Metropolis-Hastings samplers and computationally expensive full Gibbs sampling, we propose to exploit the dependence structure among the genes/markers to help decide which variables to update together. Also, parallel tempering methods are used to aid bold moves and help avoid getting trapped in local optima. Mixing and convergence of the resulting Markov chains are evaluated and compared to standard samplers in both a simulation study and an application to a gene expression data set. Reference: Holmes, C. C. & Held, L. (2006) Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis 1, 145-168. Dawn Teare 1 1 MMGE, University of Sheffield, UK Keywords: CNP, family-based analysis, MCMC Evidence is accumulating that segmental copy number polymorphisms (CNPs) may represent a significant portion of human genetic variation. These highly polymorphic systems require handling as phenotypes rather than co-dominant markers, placing new demands on family-based analyses.
We present an integrated approach to meet these challenges in the form of a graphical model, where the underlying discrete CNP phenotype is inferred from the (single or replicate) quantitative measure within the analysis, whilst assuming an allele based system segregating through the pedigree. [source]

Feeding dynamics in fish experiencing cycles of feed deprivation: a comparison of four species

Lei Wu
Abstract The temporal dynamics of daily food consumption were examined in individually housed fish that experienced four cycles of 1 week of feed deprivation followed by 2 weeks of feeding to satiation. Four species were compared: the European minnow Phoxinus phoxinus (Cyprinidae), the three-spined stickleback Gasterosteus aculeatus (Gasterosteidae), the gibel carp Carassius auratus gibelio (Cyprinidae) and the longsnout catfish Leiocassis longirostris (Bagridae). The stickleback, carp and catfish showed significant compensatory increases in food intake following deprivation, with the response becoming clearer in successive cycles. The temporal pattern of consumption during the refeeding periods differed between the four species. In sticklebacks, daily intake over a refeeding period initially decreased, but then recovered. In minnows, intake tended to decline over a refeeding period. Gibel carp showed an increase in daily intake on refeeding, but this may have reflected an adverse response to weighing. Over a refeeding period, catfish had a weak tendency to show an initial decline, followed by an increase. These differences are discussed in relation to differences in experimental protocols and biological differences between the species. [source]

Environmental constraints on life histories in Antarctic ecosystems: tempos, timings and predictability

Lloyd S. Peck
ABSTRACT Knowledge of Antarctic biotas and environments has increased dramatically in recent years. There has also been a rapid increase in the use of novel technologies. Despite this, some fundamental aspects of environmental control that structure physiological, ecological and life-history traits in Antarctic organisms have received little attention. Possibly the most important of these is the timing and availability of resources, and the way in which this dictates the tempo or pace of life. The clearest view of this effect comes from comparisons of species living in different habitats. Here, we (i) show that the timing and extent of resource availability, from nutrients to colonisable space, differ across Antarctic marine, intertidal and terrestrial habitats, and (ii) illustrate that these differences affect the rate at which organisms function. Consequently, there are many dramatic biological differences between organisms that live as little as 10 m apart, but have gaping voids between them ecologically. Identifying the effects of environmental timing and predictability requires detailed analysis in a wide context, where Antarctic terrestrial and marine ecosystems are at one extreme of the continuum of available environments for many characteristics including temperature, ice cover and seasonality. Anthropocentrically, Antarctica is harsh and as might be expected terrestrial animal and plant diversity and biomass are restricted. By contrast, Antarctic marine biotas are rich and diverse, and several phyla are represented at levels greater than global averages. There has been much debate on the relative importance of various physical factors that structure the characteristics of Antarctic biotas. This is especially so for temperature and seasonality, and their effects on physiology, life history and biodiversity. More recently, habitat age and persistence through previous ice maxima have been identified as key factors dictating biodiversity and endemism. 
Modern molecular methods have also recently been incorporated into many traditional areas of polar biology. Environmental predictability dictates many of the biological characters seen in all of these areas of Antarctic research. [source]

'O sibling, where art thou?' - a review of avian sibling recognition with respect to the mammalian literature

Shinichi Nakagawa
ABSTRACT Avian literature on sibling recognition is rare compared to that developed by mammalian researchers. We compare avian and mammalian research on sibling recognition to identify why avian work is rare, how approaches differ and what avian and mammalian researchers can learn from each other. Three factors: (1) biological differences between birds and mammals, (2) conceptual biases and (3) practical constraints, appear to influence our current understanding. Avian research focuses on colonial species because sibling recognition is considered adaptive where 'mixing potential' of dependent young is high; research on a wide range of species, breeding systems and ecological conditions is now needed. Studies of acoustic recognition cues dominate avian literature; other types of cues (e.g. visual, olfactory) deserve further attention. The effect of gender on avian sibling recognition has yet to be investigated; mammalian work shows that gender can have important influences. Most importantly, many researchers assume that birds recognise siblings through 'direct familiarisation' (commonly known as associative learning or familiarity); future experiments should also incorporate tests for 'indirect familiarisation' (commonly known as phenotype matching). If direct familiarisation proves crucial, avian research should investigate how periods of separation influence sibling discrimination. Mammalian researchers typically interpret sibling recognition in broad functional terms (nepotism, optimal outbreeding); some avian researchers more successfully identify specific and testable adaptive explanations, with greater relevance to natural contexts. We end by reporting exciting discoveries from recent studies of avian sibling recognition that inspire further interest in this topic. [source]

Molecular chemical structure of barley proteins revealed by ultra-spatially resolved synchrotron light sourced FTIR microspectroscopy: Comparison of barley varieties

BIOPOLYMERS, Issue 4 2007
Peiqiang Yu
Abstract Barley protein structure affects barley quality, fermentation, and degradation behavior in both humans and animals, among other factors such as the protein matrix. Publications show various biological differences among barley varieties such as Valier and Harrington, which have significantly different degradation behaviors. The objectives of this study were to reveal the molecular structure of barley protein, comparing various varieties (Dolly, Valier, Harrington, LP955, AC Metcalfe, and Sisler), and to quantify protein structure profiles using Gaussian and Lorentzian methods of multi-component peak modeling with ultra-spatially resolved synchrotron light sourced Fourier transform infrared microspectroscopy (SFTIRM). The protein structural features revealed included α-helices, β-sheets, and others such as β-turns and random coils. The experiment was performed at the National Synchrotron Light Source at Brookhaven National Laboratory (BNL, US Department of Energy, NY). The results showed that with the SFTIRM, the molecular structure of barley protein could be revealed. Barley protein structures exhibited significant differences among the varieties in terms of the proportions and ratios of model-fitted α-helices, β-sheets, and others. Using multi-component peak modeling of the protein amide I region at 1710-1576 cm-1, the results show that barley protein consisted of approximately 18-34% α-helices, 14-25% β-sheets, and 44-69% others. AC Metcalfe, Sisler, and LP955 consisted of higher (P < 0.05) proportions of α-helices (30-34%) than Dolly and Valier (α-helices 18-23%); Harrington was intermediate at 25%. For protein β-sheets, AC Metcalfe and LP955 consisted of higher proportions (22-25%) than Dolly and Valier (13-17%). Different barley varieties contained different α-helix to β-sheet ratios, ranging from 1.4 to 2.0, although the differences were not significant (P > 0.05).
The ratio of α-helices to others (0.3 to 1.0, P < 0.05) and that of β-sheets to others (0.2 to 0.8, P < 0.05) differed among the barley varieties. It needs to be pointed out that multi-peak modeling of protein structure provides relative estimates rather than exact determinations, and is intended only for comparisons between varieties. Principal component analysis showed that the protein amide I Fourier self-deconvolution spectra differed among the barley varieties, indicating that the internal molecular structure of the protein differed. The above results demonstrate the potential of the SFTIRM to localize relatively pure protein areas in barley tissues and reveal protein molecular structure. The results indicated relative differences in protein structures among the barley varieties, which may partly explain the biological differences among them. Further study is needed to understand the relationship between barley molecular chemical structure and biological features in terms of nutrient availability and digestive behavior. © 2006 Wiley Periodicals, Inc. Biopolymers 85:308-317, 2007. This article was originally published online as an accepted preprint. The "Published Online" date corresponds to the preprint version. You can request a copy of the preprint by emailing the Biopolymers editorial office at [source]
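The secondary-structure percentages quoted above come from the relative areas of fitted amide I component peaks. A minimal sketch of turning fitted Gaussian parameters into structure fractions follows; the study fitted both Gaussian and Lorentzian line shapes to measured spectra, and every parameter value below is invented for illustration.

```python
import math

def structure_fractions(peaks):
    """peaks: {label: (height, sigma)} for Gaussian amide I components.
    A Gaussian's area is height * sigma * sqrt(2*pi); fractions are
    each component's share of the total fitted area."""
    areas = {k: h * s * math.sqrt(2.0 * math.pi) for k, (h, s) in peaks.items()}
    total = sum(areas.values())
    return {k: a / total for k, a in areas.items()}

# Invented fitted parameters for one spectrum (not values from the study):
fracs = structure_fractions({
    "alpha_helix": (1.0, 10.0),
    "beta_sheet": (0.5, 10.0),
    "other": (2.0, 12.0),
})
```

Because only area ratios enter the result, the sqrt(2*pi) factor cancels, which is also why such percentages are meaningful for comparing varieties but not as absolute structure determinations, as the abstract cautions.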

Characterization of the epithelial cell adhesion molecule (EpCAM)+ cell population in hepatocellular carcinoma cell lines

CANCER SCIENCE, Issue 10 2010
Osamu Kimura
Accumulating evidence suggests that cancer stem cells (CSC) play an important role in tumorigenicity. Epithelial cell adhesion molecule (EpCAM) is one of the markers that identifies tumor cells with high tumorigenicity. The expression of EpCAM in liver progenitor cells prompted us to investigate whether CSC could be identified in hepatocellular carcinoma (HCC) cell lines. The sorted EpCAM+ subpopulation from HCC cell lines showed a greater colony formation rate than the sorted EpCAM− subpopulation from the same cell lines, although cell proliferation was comparable between the two subpopulations. The in vivo evaluation of tumorigenicity, using supra-immunodeficient NOD/scid/γc null (NOG) mice, revealed that a smaller number of EpCAM+ cells (minimum 100) than EpCAM− cells was necessary for tumor formation. The bifurcated differentiation of EpCAM+ cell clones into both EpCAM+ and EpCAM− cells was obvious both in vitro and in vivo, but EpCAM− clones sustained their phenotype. These clonal analyses suggested that EpCAM+ cells may contain a multipotent cell population. Interestingly, the introduction of exogenous EpCAM into EpCAM+ clones, but not into EpCAM− clones, markedly enhanced their tumor-forming ability, even though both transfectants expressed a similar level of EpCAM. Therefore, the difference in the tumor-forming ability between EpCAM+ and EpCAM− cells is probably due to the intrinsic biological differences between them. Collectively, our results suggest that the EpCAM+ population is biologically quite different from the EpCAM− population in HCC cell lines, and preferentially contains a highly tumorigenic cell population with the characteristics of CSC. (Cancer Sci 2010) [source]