Principal Components (principal + component)


Kinds of Principal Components

  • first principal component

  • Terms modified by Principal Components

  • principal component analysis
  • principal component factor analysis
  • principal component regression

  • Selected Abstracts


    QSAR of Progestogens: Use of a Priori and Computed Molecular Descriptors and Molecular Graphics

    MOLECULAR INFORMATICS, Issue 4 2003
    Rudolf Kiralj
    Abstract A Quantitative Structure-Activity Relationship (QSAR) study of two sets of oral progestogens was carried out using Principal Component Analysis (PCA), Hierarchical Cluster Analysis (HCA) and Partial Least Squares (PLS). A priori, computed (at the DFT 6-31G** level), and molecular graphics and modeling descriptors were employed. Molecular graphics and modeling studies of the crystal structures of the progesterone receptor (PR)-progesterone, Fab,-progesterone and PR-metribolone complexes were performed. The QSAR of progestogens is a three-dimensional phenomenon (over 96% of the information is explained by the first three Principal Components) which, although it exhibits significant non-linearity, can be treated well with linear methods such as PLS. Progestogen activity depends primarily on double-bond content and resonance effects, which define the skeletal conformation, and also on substituent characteristics (size, conformational and electronic properties). Steric relationships between a substituent at C6(sp2) or C6(sp3) and the sulfur atom of the Met801 residue of PR are important for progesterone binding to the protein and can be quantified. Essentially the same was observed for substituents at ,-C10 with respect to residue Met759. [source]
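The dimensionality claim above (over 96% of the information in the first three PCs) is the kind of figure a cumulative explained-variance check produces. Below is a hedged, NumPy-only sketch on synthetic descriptor data, not the authors' data or code; the matrix sizes, seed and noise level are all invented:

```python
# Illustrative sketch: fraction of variance explained by leading PCs.
# The descriptor matrix is synthetic, built from 3 latent factors.
import numpy as np

def explained_variance_ratio(X):
    """Fraction of total variance captured by each principal component."""
    Xc = X - X.mean(axis=0)                  # column-centre the descriptors
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values
    return s**2 / (s**2).sum()

rng = np.random.default_rng(0)
latent = rng.normal(size=(20, 3))            # 3 latent factors, 20 "molecules"
loadings = rng.normal(size=(3, 10))          # mixed into 10 descriptors
X = latent @ loadings + 0.05 * rng.normal(size=(20, 10))

ratios = explained_variance_ratio(X)
cum3 = float(ratios[:3].sum())               # variance explained by first 3 PCs
```

With this three-factor construction, the first three ratios sum to nearly one, mirroring the kind of variance concentration reported in the abstract.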


    Tests of Association for Quantitative Traits in Nuclear Families Using Principal Components to Correct for Population Stratification

    ANNALS OF HUMAN GENETICS, Issue 6 2009
    Lei Zhang
    SUMMARY Traditional transmission disequilibrium test (TDT) based methods for genetic association analyses are robust to population stratification at the cost of a substantial loss of power. We here describe a novel method for family-based association studies that corrects for population stratification with the use of an extension of principal component analysis (PCA). Specifically, we adopt PCA on unrelated parents in each family. We then infer principal components for children from those for their parents through a TDT-like strategy. Two test statistics within the variance-components model are proposed for association tests. Simulation results show that the proposed tests have correct type I error rates regardless of population stratification, and have greatly improved power over two popular TDT-based methods: QTDT and FBAT. The application to the Genetic Analysis Workshop 16 (GAW16) data sets attests to the feasibility of the proposed method. [source]
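As a toy illustration of the family-based idea above, the sketch below runs PCA on a synthetic matrix of unrelated parents' genotypes and then assigns each child the mid-parent average of its parents' PC scores. The mid-parent rule is a deliberate simplification standing in for the paper's TDT-like inference step, and the sample sizes and allele frequencies are invented:

```python
# Hedged sketch: PCA on parents only, then child ancestry coordinates
# derived from parental scores (mid-parent average as a simplification).
import numpy as np

def pca_on_parents(G, n_pc=2):
    """PC scores from a (parents x markers) genotype matrix."""
    Gc = G - G.mean(axis=0)
    _, _, Vt = np.linalg.svd(Gc, full_matrices=False)
    return Gc @ Vt[:n_pc].T

rng = np.random.default_rng(1)
n_fam, n_snp = 50, 200
fathers = rng.binomial(2, 0.3, size=(n_fam, n_snp)).astype(float)
mothers = rng.binomial(2, 0.3, size=(n_fam, n_snp)).astype(float)

scores = pca_on_parents(np.vstack([fathers, mothers]), n_pc=2)
father_scores, mother_scores = scores[:n_fam], scores[n_fam:]
# Mid-parent average as each child's coordinates: an assumption here,
# not the authors' exact algorithm.
child_scores = 0.5 * (father_scores + mother_scores)
```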


    Organohalogen contaminants and reproductive hormones in incubating glaucous gulls (Larus hyperboreus) from the Norwegian Arctic

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 11 2006
    Jonathan Verreault
    Abstract Organohalogen contaminants detected globally in avian wildlife, including populations from the Arctic, have been related to various reproductive hormone potencies and to altered hormonal balance and function. Besides legacy organochlorine (OC) substances, that is, polychlorinated biphenyls (PCBs) and OC pesticides and by-products, endocrine-disruptive properties have been identified for chemicals of new and emerging environmental concern, such as polybrominated diphenyl ethers (PBDEs) and metabolically derived products like methylsulfonyl (MeSO2)- and hydroxyl (OH)-PCBs. We investigated the relationships between plasma concentrations of selected legacy OCs, PBDEs, and MeSO2- and OH-PCB metabolites and the circulating reproductive hormones testosterone (T), 17β-estradiol (E2), and progesterone (P4) in incubating male and female glaucous gulls (Larus hyperboreus) from the Norwegian Arctic. Principal component and regression analyses demonstrated that P4 levels in male glaucous gulls were positively associated with variations in sum (Σ) PCB, dichlorodiphenyltrichloroethane (ΣDDT), chlordane (ΣCHL), and ΣPBDE concentrations, which were the most recalcitrant organohalogens determined in glaucous gulls. No such relationship was found for female glaucous gulls, nor between concentrations of any of the selected organohalogens and levels of T in either sex. E2 was not detected in any plasma sample. The present results strongly suggest that exposure to high organohalogen concentrations in glaucous gulls, particularly the most persistent compound classes, may have the potential to interfere with steroidogenesis and impinge on circulating P4 homeostasis. Because significant effects were found exclusively in males, it cannot be completely ruled out that male glaucous gulls are more sensitive than females to organohalogen-mediated alteration of P4 synthesis and breakdown. [source]
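The study couples principal component and regression analyses; one generic way to combine the two is principal component regression (PCR), i.e. regressing the response on the leading PC scores and mapping the coefficients back to the original variables. This is an illustrative NumPy sketch on synthetic data; the variable meanings, sizes and seed are assumptions, not the authors' analysis:

```python
# Hedged PCR sketch: correlated predictors sharing one latent component,
# and a response driven by that component (all values synthetic).
import numpy as np

def pcr_fit(X, y, n_pc):
    """Principal component regression: regress y on the first n_pc PC
    scores of X, then express the fit in the original variables."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc = X - xm
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_pc].T
    T = Xc @ V                               # PC scores
    b = np.linalg.lstsq(T, y - ym, rcond=None)[0]
    beta = V @ b                             # back-transformed coefficients
    return beta, ym - xm @ beta              # slopes and intercept

rng = np.random.default_rng(7)
base = rng.normal(size=(40, 1))              # shared latent component
X = base + 0.1 * rng.normal(size=(40, 6))    # six correlated predictors
y = 3.0 * base[:, 0] + 0.1 * rng.normal(size=40)

beta, b0 = pcr_fit(X, y, n_pc=2)
pred = X @ beta + b0
r2 = float(1 - ((y - pred)**2).sum() / ((y - y.mean())**2).sum())
```

Because the predictors are dominated by one latent component, two PCs suffice for a near-perfect fit here.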


    The spatial and temporal behaviour of the lower stratospheric temperature over the Southern Hemisphere: the MSU view. Part I: data, methodology, temporal behaviour

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 4 2001
    Abstract The lower stratosphere monthly temperature anomalies over the Southern Hemisphere derived from soundings made by the Microwave Sounding Unit (MSU) between 1979 and 1997 are analysed; specifically, MSU channel 4 temperature retrievals are considered. Principal component (PC) analysis with the S-mode approach is used to isolate grid points that covary in a similar manner and to determine the main features of their temporal behaviour. The first six PCs explain 81.3% of the variance and represent the different time-variability patterns observed over the Southern Hemisphere for the ten area clusters determined by the method. The most important feature is common to all the PC score time series and corresponds to a negative linear trend present over almost all the Southern Hemisphere except New Zealand and surrounding areas; the negative trend is largest over Antarctica. The remaining features of the temporal variability differ for each PC score and therefore for each cluster region over the Southern Hemisphere. The first PC score pattern shows the impact of the El Chichón and Mt Pinatubo eruptions, each of which produced a 2-year warming over the tropical and sub-tropical lower stratosphere. This variability is orthogonal to the behaviour present over Antarctica. The second PC shows contrasting anomalies between 1987 (El Niño) and 1988 (La Niña) but no evidence whatsoever of the volcanic eruptions. The semi-annual wave is present in the anomaly recurrence at mid to high latitudes. Over very low latitudes, close to the Equator, the Quasi-Biennial Oscillation (QBO) frequency band is also present. Copyright © 2001 Royal Meteorological Society [source]
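An S-mode decomposition like the one described treats grid points as variables and months as observations. The hedged sketch below builds a toy anomaly field with a shared negative linear trend and checks that PC1 picks it up; all magnitudes, grid sizes and the seed are invented, not MSU data:

```python
# Hedged S-mode PCA sketch on a synthetic time x gridpoint anomaly field.
import numpy as np

def s_mode_pca(field, n_pc):
    """S-mode PCA: rows are months, columns are grid points. Returns the
    PC score time series and the fraction of variance per PC."""
    A = field - field.mean(axis=0)          # anomaly at each grid point
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    frac = s**2 / (s**2).sum()
    return U[:, :n_pc] * s[:n_pc], frac[:n_pc]

rng = np.random.default_rng(2)
n_months, n_grid = 228, 40                  # 19 years x 12 months, toy grid
t = np.arange(n_months)
# Shared negative linear trend plus grid-point noise, loosely mimicking
# the hemispheric cooling signal described above (magnitudes invented).
field = (-0.005 * t)[:, None] + rng.normal(0.0, 0.3, size=(n_months, n_grid))

pcs, frac = s_mode_pca(field, n_pc=6)
trend_corr = float(np.corrcoef(pcs[:, 0], t)[0, 1])
```

The sign of a PC score series is arbitrary, so only the magnitude of its correlation with time is meaningful.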


    Simultaneous determination of iridoids, phenolic acids, flavonoids, and saponins in Flos Lonicerae and Flos Lonicerae Japonicae by HPLC-DAD-ELSD coupled with principal component analysis

    JOURNAL OF SEPARATION SCIENCE, Issue 18 2007
    Chun-Yun Chen
    Abstract A new method, HPLC coupled with diode-array and evaporative light scattering detectors (HPLC-DAD-ELSD), was developed to evaluate the quality of Flos Lonicerae (FL) and Flos Lonicerae Japonicae (FLJ) through the simultaneous determination of multiple types of bioactive components. With DAD, the detection wavelengths were set at 240 nm for the determination of iridoids, 330 nm for phenolic acids, and 360 nm for flavonoids; ELSD, connected in series after the DAD, was applied to the determination of saponins. The assay was fully validated with respect to precision, repeatability, and accuracy. Moreover, principal component analysis (PCA) was used for the similarity evaluation of different samples, and it proved straightforward and reliable for differentiating FL and FLJ samples from different origins. Two principal components were extracted: principal component 1 (PC1) drives the separation between different sample sets, capturing 54.598% of the variance, while principal component 2 (PC2) affects differentiation within sample sets, capturing 12.579% of the variance. In conclusion, simultaneous quantification of bioactive components by HPLC-DAD-ELSD coupled with PCA is a well-suited strategy for differentiating the sources of, and comprehensively controlling the quality of, the medicinal plants FL and FLJ. [source]
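When the between-set difference dominates the data, PC1 carries the separation between sample sets, as reported above for FL versus FLJ. Below is a minimal NumPy sketch on synthetic data; the group means, sizes and seed are assumptions for illustration only:

```python
# Hedged sketch: PC1 separating two synthetic sample sets.
import numpy as np

def pca_scores(X, n_pc=2):
    """Scores on the first n_pc principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_pc].T

rng = np.random.default_rng(3)
# Two hypothetical sample sets (stand-ins for FL and FLJ) measured on
# 8 components; the between-set difference dominates the variance.
fl = rng.normal(loc=0.0, scale=0.5, size=(15, 8))
flj = rng.normal(loc=2.0, scale=0.5, size=(15, 8))

scores = pca_scores(np.vstack([fl, flj]), n_pc=2)
pc1_gap = float(abs(scores[:15, 0].mean() - scores[15:, 0].mean()))
```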


    Upland Controls on the Hydrological Functioning of Riparian Zones in Glacial Till Valleys of the Midwest

    JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 6 2007
    P. Vidon
    Abstract: Identifying relationships between landscape hydrogeological setting, riparian hydrological functioning, and riparian zone sensitivity to climate and water quality changes is critical to the future use of riparian zones as best management practices. In this study, we investigate water table dynamics, water flow paths, and the relative importance of precipitation, deep ground water (DG), and seep water as sources of water to a riparian zone in a deeply incised glacial till valley of the Midwest. Data indicate that water table fluctuations are strongly influenced by soil texture and, to a lesser extent, by the upland sediment stratigraphy that produces seeps near the slope bottom. The occurrence of till in the upland and at 1.7-2 m depth in the riparian zone contributes to maintaining flow parallel to the ground surface at this site. Lateral ground-water fluxes at this site, with its steep upland topography (16%) and loam soil near the slope bottom, are small (<10 l/d per m of stream length) and intermittent. A shift in flow path from a lateral direction to a down-valley direction is observed in the summer despite the steep concave topography and the occurrence of seeps at the slope bottom. Principal component and discriminant analyses indicate that riparian water is most similar to seep water throughout the year and that DG originating from embedded sand and gravel layers in the lower till unit is not a major source of water to riparian zones in this setting. Water quality data and the riparian zone's dependence on seep water for recharge suggest that sites in this setting may be highly sensitive to future changes in precipitation and upland water quality. A conceptual framework describing the hydrological functioning of riparian zones in this setting is presented to generalize the findings of this study. [source]


    Pattern of geographical variation in petal shape in wild populations of Primula sieboldii E. Morren

    PLANT SPECIES BIOLOGY, Issue 2 2007
    YOSUKE YOSHIOKA
    Abstract The petal shape of Primula sieboldii E. Morren (Primulaceae) is diverse in wild populations. In this study, we investigated population differentiation in the petal shape of P. sieboldii using image analysis. Flowers were sampled from 160 genets from eight wild populations in the western to north-eastern parts of the Japanese archipelago. Principal component (PC) analysis of 40 coefficients of elliptic Fourier descriptors (EFDs) detected three major characteristics of petal shape variation: the ratio of length to width (PC1), the depth of the head notch (PC2) and the position of the center of gravity (PC3). To test the association between divergence in petal shape and geographical and genetic distances, we calculated two types of pairwise population distances for petal shape: Mahalanobis distances based on the 40 EFD coefficients and on the first three PCs. An association between neutral genetic markers and petal shape was revealed by the Mahalanobis distances based on the 40 EFD coefficients, suggesting that evolutionary forces such as founder effects and isolation by distance are probably the main causes of differentiation in petal shape. In contrast, we found no association between the Mahalanobis distances based on the first three PCs and the geographical and genetic distances. The discrepancy between the two petal shape distances indicates that the population differentiation promoted by founder effects and isolation by distance appears mainly as subtle changes in petal shape rather than in the major characteristics of petal shape variation. [source]
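The pairwise population distances used here are Mahalanobis distances. Below is a hedged sketch of one common formulation, the distance between two sample means under the pooled within-group covariance, applied to synthetic PC scores (the data and dimensions are invented):

```python
# Hedged sketch: Mahalanobis distance between two population means.
import numpy as np

def mahalanobis_between(X, Y):
    """Mahalanobis distance between two sample means, using the pooled
    within-group covariance (one common population-distance choice)."""
    d = X.mean(axis=0) - Y.mean(axis=0)
    nx, ny = len(X), len(Y)
    S = ((nx - 1) * np.cov(X, rowvar=False)
         + (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)
    return float(np.sqrt(d @ np.linalg.solve(S, d)))

rng = np.random.default_rng(5)
# Two hypothetical populations scored on the first three shape PCs.
pop_a = rng.normal(size=(30, 3))
pop_b = rng.normal(size=(30, 3))
pop_b[:, 0] += 2.0                           # shift along PC1 only

d_ab = mahalanobis_between(pop_a, pop_b)
d_aa = mahalanobis_between(pop_a, pop_a)     # a population vs itself is 0
```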


    Metabolomic approaches reveal that phosphatidic and phosphatidyl glycerol phospholipids are major discriminatory non-polar metabolites in responses by Brachypodium distachyon to challenge by Magnaporthe grisea

    THE PLANT JOURNAL, Issue 3 2006
    J. William Allwood
    Summary Metabolomic approaches were used to elucidate some key metabolite changes occurring during interactions of Magnaporthe grisea, the cause of rice blast disease, with an alternate host, Brachypodium distachyon. Fourier-transform infrared (FT-IR) spectroscopy provided a high-throughput metabolic fingerprint of M. grisea interacting with the B. distachyon accessions ABR1 (susceptible) and ABR5 (resistant). Principal component-discriminant function analysis (PC-DFA) allowed the differentiation between developing disease symptoms and host resistance. Alignment of projected 'test-set' onto 'training-set' data indicated that our experimental approach produced highly reproducible data. Examination of PC-DFA loading plots indicated that fatty acids were one chemical group that discriminated between the responses of ABR1 and ABR5 to M. grisea. To identify these, non-polar extracts of M. grisea-challenged B. distachyon were directly infused into an electrospray ionization mass spectrometer (ESI-MS). PC-DFA indicated that M. grisea-challenged ABR1 and ABR5 were differentially clustered away from healthy material. Subtraction spectra and PC-DFA loadings plots revealed discriminatory analytes (m/z) between each interaction, and seven metabolites were subsequently identified as phospholipids (PLs) by ESI-MS-MS. Phosphatidyl glycerol (PG) PLs were suppressed during both resistant and susceptible responses. By contrast, different phosphatidic acid PLs either increased or were reduced during resistance or during disease development. This suggests considerable and differential processing of membrane PLs during each interaction, which may be associated with the elaboration/suppression of defence mechanisms or with developing disease symptoms. [source]
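PC-DFA combines PCA for dimension reduction with discriminant function analysis on the resulting scores. The sketch below applies a two-class Fisher discriminant to the leading PC scores of synthetic 'fingerprint' data; it is a generic illustration of the technique, not the authors' pipeline, and every size and seed is an assumption:

```python
# Hedged PC-DFA sketch: PCA for dimension reduction, then a Fisher
# discriminant axis in PC-score space (two classes only).
import numpy as np

def pca_scores(X, n_pc):
    """Scores of the first n_pc principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_pc].T

def lda_axis(S1, S2):
    """Fisher discriminant axis separating two clouds of PC scores."""
    Sw = (np.cov(S1, rowvar=False) * (len(S1) - 1)
          + np.cov(S2, rowvar=False) * (len(S2) - 1))  # within-class scatter
    w = np.linalg.solve(Sw, S1.mean(axis=0) - S2.mean(axis=0))
    return w / np.linalg.norm(w)

rng = np.random.default_rng(6)
# Synthetic "fingerprints": 20 susceptible-type and 20 resistant-type
# profiles over 50 variables; the first 5 variables carry the difference.
suscept = rng.normal(size=(20, 50))
suscept[:, :5] += 3.0
resist = rng.normal(size=(20, 50))

S = pca_scores(np.vstack([suscept, resist]), n_pc=5)
w = lda_axis(S[:20], S[20:])
proj_s, proj_r = S[:20] @ w, S[20:] @ w
gap = float(abs(proj_s.mean() - proj_r.mean()))
```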


    Craniological differentiation amongst wild-living cats in Britain and southern Africa: natural variation or the effects of hybridisation?

    ANIMAL CONSERVATION, Issue 4 2004
    Nobuyuki Yamaguchi
    The natural morphological variation in the wildcat, Felis silvestris, and morphological changes possibly caused by introgressive hybridisation with the domestic cat, F. catus, were examined based on up to 39 variables concerning cranial morphology. The samples of wild-living cats originated from Scotland and southern Africa and consisted of both classical wildcat and other pelage types. Principal component and cluster analyses suggested that introgressive hybridisation occurred in both areas, with the consequence that the characteristics of local wildcat populations had been altered in terms of the frequencies of occurrence of certain characters, especially those concerning cranial capacity. In both regions the clustering patterns of wild-living cats can be interpreted as containing four main groups. One of these consisted mainly of 'non-wildcats', and the groups furthest from the 'non-wild' cluster contained the highest proportion of 'wildcats' (c. 80%). We propose that where a population is heavily introgressed, the only feasible way to define a wildcat is on the basis of inter-correlated features, and that conservationists must take a population-based approach to assessing the extent of introgression. This approach may provide an operational standard for assessing the impact of hybridisation between wildcats and domestic cats throughout the species' range; it suggests that the Scottish wildcats may be critically endangered. [source]


    Behavioural and psychological syndromes in Alzheimer's disease

    INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 11 2004
    A. Mirakhur
    Abstract Objectives The origins of behavioural and psychological symptoms of dementia (BPSD) are still poorly understood. By focusing on piecemeal behaviours, as opposed to more robust syndromal change, valid biological correlates may be overlooked. We aimed to advance our understanding of BPSD via the identification of neuropsychiatric syndromes. Methods We recruited 435 subjects from old age psychiatry and elderly care memory outpatient clinics fulfilling the criteria for a diagnosis of probable Alzheimer's disease. Behavioural and psychological symptoms were assessed using the Neuropsychiatric Inventory. Principal components factor analysis was carried out on the composite scores of the 12 symptom domains to identify behavioural syndromes (factors). Results were confirmed by performing three different rotations: Varimax, Equamax and Quartimax. Results Four factors were identified, which accounted for 57% of the variance: an 'affect' factor (depression/dysphoria, anxiety, irritability/lability and agitation/aggression); a 'physical behaviour' factor (apathy, aberrant motor behaviour, sleep disturbance and appetite/eating disturbance); a 'psychosis' factor (delusions and hallucinations); and a 'hypomania' factor (disinhibition and elation/euphoria). These groupings were unchanged when different methods of rotation were used. Conclusions We report the novel observations that agitation/aggression/irritability cluster within a depressive symptom factor and that apathy falls within a physical behaviour factor. Copyright © 2004 John Wiley & Sons, Ltd. [source]
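The varimax rotation used to check these factor groupings can be implemented directly. Below is a hedged NumPy sketch of the standard varimax algorithm applied to loadings from synthetic data (not the NPI scores); note that an orthogonal rotation leaves each variable's communality unchanged, which is part of why factor groupings can remain stable across rotation methods:

```python
# Hedged sketch: standard varimax rotation of a factor loading matrix.
import numpy as np

def varimax(L, tol=1e-8, max_iter=200):
    """Varimax rotation of a (variables x factors) loading matrix.
    Returns the rotated loadings and the orthogonal rotation matrix."""
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        U, s, Vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p))
        R = U @ Vt                       # orthogonal by construction
        crit = s.sum()
        if crit - crit_old < tol:        # varimax criterion has converged
            break
        crit_old = crit
    return L @ R, R

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 12))           # stand-in for 12 symptom domains
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
L = (Vt[:4].T * s[:4]) / np.sqrt(len(X) - 1)   # unrotated 4-factor loadings
L_rot, R = varimax(L)
```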


    Phylogeography and environmental correlates of a cap on reproduction: teat number in a small marsupial, Antechinus agilis

    MOLECULAR ECOLOGY, Issue 5 2007
    J. BECKMAN
    Abstract Natural selection should optimize litter size in response to the distribution and abundance of resources during breeding. In semelparous, litter-bearing antechinuses, teat number limits litter size; consequently, adaptation has been invoked to explain intraspecific, geographic variability in teat number for several Antechinus spp. The phylogeography of teat number variation and the associated genetic divergence were assessed in A. agilis using nine microsatellites and mitochondrial cytochrome b sequence data. Six-teat Otway Range animals were divergent in microsatellite allele identity and frequencies: samples from three Otway six-teat sites were genetically significantly more similar to six-teat animals approximately 250 km to the west than to nearby Otway 10-teat samples or to the six-teat animals at Wilsons Promontory. Gene flow between Otway phenotypes appears to have been limited for sufficient time to enable different microsatellite alleles to evolve. Nonetheless, nuclear genetic evidence suggested only incomplete reproductive isolation, and mitochondrial DNA (mtDNA) haplotypes showed no association with teat number. Other populations across the range were no more genetically differentiated from one another than expected from their geographic separation. Principal components and distance-based redundancy analyses found an association between environmental variables and the geographic distribution of A. agilis teat number: six-teat animals inhabit more temperate forests, whilst those with more teats experience greater seasonality. The apparent restricted breeding between phenotypically distinct animals, together with phylogenetically separate groups of six-teat animals in different locations with similar environments, is consistent with the hypothesis that adaptation to different habitats drives teat number variation in A. agilis. [source]
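Associations between distance matrices (for example, environmental versus geographic distances) are commonly tested with a Mantel permutation test. Below is a minimal NumPy sketch on synthetic site data; it is a generic illustration of that idea, not the distance-based redundancy analysis used in the paper:

```python
# Hedged sketch: Mantel permutation test between two distance matrices.
import numpy as np

def mantel(D1, D2, n_perm=999, seed=0):
    """Permutation test of association between two symmetric distance
    matrices; returns the observed correlation and a two-sided p-value."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)       # upper-triangle entries only
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(len(D1))         # relabel sites in D1
        r = np.corrcoef(D1[np.ix_(p, p)][iu], D2[iu])[0, 1]
        if abs(r) >= abs(r_obs):
            hits += 1
    return float(r_obs), (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(8)
sites = rng.normal(size=(12, 2))             # hypothetical site coordinates
D_geo = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
noise = rng.normal(scale=0.05, size=D_geo.shape)
D_env = D_geo + (noise + noise.T) / 2        # environment tracks geography
np.fill_diagonal(D_env, 0.0)

r_obs, p_val = mantel(D_geo, D_env)
```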


    A geometric morphometric approach to the quantification of population variation in sub-Saharan African crania

    AMERICAN JOURNAL OF HUMAN BIOLOGY, Issue 1 2010
    Daniel Franklin
    We report here on new data examining cranial variation in 18 modern human sub-Saharan African populations. Previously, we investigated variation within southern Africa; we now extend our analyses to include a series of Central, East, and West African crania, to further our knowledge of the relationships between those populations and of the variation and regional morphological patterning within them. The sample comprises 377 male individuals; the three-dimensional coordinates of 96 landmarks are analyzed using Procrustes-based methods. Interpopulation variation is examined by calculating shape distances between groups, which are compared using resampling statistics and parametric tests. Phenotypic variance, as a proxy for genetic variance, is measured and compared across populations. Principal components and cluster analyses are employed to explore relationships between the populations. Shape differences are visualized using three-dimensional rendered models. The observed disparity patterns imply a mix of differences and similarities across populations, with no apparent support for genetic bottlenecks; this is likely a consequence of migrations that have influenced differences in cranial form, and supporting data are found in recent molecular studies. The Pygmy sample had the most distinctive cranial morphology, characteristically small in size with marked prognathism. These features also characterized, although less strongly, the neighboring Bateke, and are possibly related to similar selective pressures in conjunction with interbreeding. Small cranial size is also involved in the considerable distinctiveness of the San and Khoikhoi. The statistical procedures applied in this study afford a powerful and robust means of quantifying and visualizing the magnitude and pattern of cranial variation between sub-Saharan African populations. Am. J. Hum. Biol., 2010. © 2009 Wiley-Liss, Inc. [source]
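The Procrustes-based methods mentioned above superimpose landmark configurations by removing translation, scale and rotation before comparing shapes. Below is a hedged NumPy sketch of ordinary Procrustes superimposition, verified on a configuration that is an exact similarity transform of the reference (the landmark data are synthetic):

```python
# Hedged sketch: ordinary Procrustes superimposition of two landmark sets.
import numpy as np

def procrustes_align(ref, mob):
    """Translate, scale and rotate `mob` onto `ref` (both are
    landmarks x dims arrays); returns aligned copy and residual."""
    A = ref - ref.mean(axis=0)              # remove translation
    B = mob - mob.mean(axis=0)
    A = A / np.linalg.norm(A)               # remove centroid size
    B = B / np.linalg.norm(B)
    U, s, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt                              # optimal rotation/reflection
    aligned = s.sum() * (B @ R)             # optimal scale times rotated B
    dist = float(np.linalg.norm(A - aligned))   # Procrustes distance
    return aligned, dist

rng = np.random.default_rng(9)
shape = rng.normal(size=(10, 3))            # 10 landmarks in 3D
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = 1.8 * shape @ Rz + np.array([5.0, -2.0, 3.0])  # rotate, scale, shift

aligned, dist = procrustes_align(shape, moved)
```

Because `moved` differs from `shape` only by a similarity transform, the residual Procrustes distance is numerically zero.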


    European Mathematical Genetics Meeting, Heidelberg, Germany, 12th-13th April 2007

    ANNALS OF HUMAN GENETICS, Issue 4 2007
    Article first published online: 28 MAY 200
    Saurabh Ghosh (Indian Statistical Institute, Kolkata, India) High correlations between two quantitative traits may be due either to common genetic factors, common environmental factors, or a combination of both. In this study, we develop statistical methods to extract the contribution of a common QTL to the total correlation between the components of a bivariate phenotype. Using data on bivariate phenotypes and marker genotypes for sib-pairs, we propose a test for linkage between a common QTL and a marker locus based on the conditional cross-sib trait correlations (trait 1 of sib 1 with trait 2 of sib 2, and conversely) given the identity-by-descent sharing at the marker locus. The null hypothesis cannot be rejected unless there exists a common QTL. We use Monte Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study on the Genetics of Alcoholism project.

Rémi Kazma, Catherine Bonaïti-Pellié, Emmanuelle Génin (INSERM UMR-S535 and Université Paris Sud, Villejuif, 94817, France) Keywords: Gene-environment interaction, sibling recurrence risk, exposure correlation. Gene-environment interactions may play important roles in complex disease susceptibility, but their detection is often difficult. Here we show how gene-environment interactions can be detected by investigating the degree of familial aggregation according to the exposure of the probands. In the case of gene-environment interaction, the distribution of genotypes of affected individuals, and consequently the risk in relatives, depends on their exposure. We developed a test comparing the risks in sibs according to the proband exposure.
To evaluate the properties of this new test, we derived formulas for calculating the expected risks in sibs according to the exposure of probands for various values of exposure frequency, relative risk due to exposure alone, frequencies of latent susceptibility genotypes, genetic relative risks and interaction coefficients. We find that the ratio of risks when the proband is exposed versus not exposed is a good indicator of the interaction effect. We evaluate the power of the test for various sample sizes of affected individuals. We conclude that this test is valuable for diseases with moderate familial aggregation, but only when the role of the exposure has been clearly evidenced. Since a correlation for exposure among sibs might lead to a difference in risks among sibs in the different proband exposure strata, we also add an exposure correlation coefficient to the model. Interestingly, we find that when this correlation is correctly accounted for, the power of the test is not decreased and might even be significantly increased.

Andrea Callegaro, Hans J.C. Van Houwelingen, Jeanine Houwing-Duistermaat (Dept. of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands) Keywords: Survival analysis, age at onset, score test, linkage analysis. Non-parametric linkage (NPL) analysis compares the identical-by-descent (IBD) sharing in sibling pairs to the expected IBD sharing under the hypothesis of no linkage. Often information is available on the marginal cumulative hazards (for example, breast cancer incidence curves). Our aim is to extend the NPL methods by taking into account the age at onset of selected sibling pairs using these known marginal hazards. Li and Zhong (2002) proposed a (retrospective) likelihood ratio test based on an additive frailty model for genetic linkage analysis. From their model we derive a score statistic for selected samples, which turns out to be a weighted NPL method.
The weights depend on the marginal cumulative hazards and on the frailty parameter. A second approach is based on a simple gamma shared frailty model: here, we simply test whether the score function of the frailty parameter depends on the excess IBD. We compare the performance of these methods using simulated data.

Céline Bellenguez, Carole Ober, Catherine Bourgain (INSERM U535 and University Paris Sud, Villejuif, France; Department of Human Genetics, The University of Chicago, USA) Keywords: Linkage analysis, linkage disequilibrium, high density SNP data. Compared with microsatellite markers, high-density SNP maps should be more informative for linkage analyses. However, because they are much closer together, SNPs present important linkage disequilibrium (LD), which biases classical nonparametric multipoint analyses. This problem is even stronger in population isolates, where LD extends over larger regions with a more stochastic pattern. We investigate the issue of linkage analysis with a 500K SNP map in a large and inbred 1840-member Hutterite pedigree phenotyped for asthma. Using an efficient pedigree-breaking strategy, we first identified linked regions with a 5cM microsatellite map, on which we focused to evaluate the SNP map. The only method that models LD in the NPL analysis is limited in both the pedigree size and the number of markers (Abecasis and Wigginton, 2005) and therefore could not be used. Instead, we studied methods that identify sets of SNPs with maximum linkage information content in our pedigree and no LD-driven bias. Both algorithms that directly remove pairs of SNPs in high LD and clustering methods were evaluated. Null simulations were performed to check that the Zlr statistics calculated with the SNP sets were not falsely inflated.
Preliminary results suggest that although LD is strong in such populations, linkage information content slightly better than that of microsatellite maps can be extracted from dense SNP maps, provided that a careful marker selection is conducted. In particular, we show that the specific LD pattern requires considering LD between a wide range of marker pairs rather than only within predefined blocks.

Peter Van Loo, Stein Aerts, Diether Lambrechts, Bernard Thienpont, Sunit Maity, Bert Coessens, Frederik De Smet, Leon-Charles Tranchevent, Bart De Moor, Koen Devriendt, Peter Marynen, Bassem Hassan, Peter Carmeliet, Yves Moreau (Department of Molecular and Developmental Genetics, VIB; Department of Human Genetics, University of Leuven; Bioinformatics group, Department of Electrical Engineering, University of Leuven; Department of Transgene Technology and Gene Therapy, VIB; Center for Transgene Technology and Gene Therapy, University of Leuven; Belgium) Keywords: Bioinformatics, gene prioritization, data fusion. The identification of genes involved in health and disease remains a formidable challenge. Here, we describe a novel bioinformatics method to prioritize candidate genes underlying pathways or diseases, based on their similarity to genes known to be involved in these processes. It is freely accessible as an interactive software tool, ENDEAVOUR, at http://www.esat.kuleuven.be/endeavour. Unlike previous methods, ENDEAVOUR generates distinct prioritizations from multiple heterogeneous data sources, which are then integrated, or fused, into one global ranking using order statistics. ENDEAVOUR prioritizes candidate genes in a three-step process. First, information about a disease or pathway is gathered from a set of known "training" genes by consulting multiple data sources.
Next, the candidate genes are ranked based on similarity with the training properties obtained in the first step, resulting in one prioritized list for each data source. Finally, ENDEAVOUR fuses each of these rankings into a single global ranking, providing an overall prioritization of the candidate genes. Validation of ENDEAVOUR revealed that it was able to efficiently prioritize 627 genes in disease data sets and 76 genes in biological pathway sets, identify candidates for 16 mono- or polygenic diseases, and discover regulatory genes of myeloid differentiation. Furthermore, the approach identified YPEL1 as a novel gene involved in craniofacial development from a 2-Mb chromosomal region deleted in some patients with DiGeorge-like birth defects. Finally, we are currently evaluating a pipeline combining array-CGH, ENDEAVOUR and in vivo validation in zebrafish to identify novel genes involved in congenital heart defects.

Mark Broom (Mathematics Dept., University of Sussex, UK), Graeme Ruxton (Division of Environmental and Evolutionary Biology, University of Glasgow, UK), Rebecca Kilner (Department of Zoology, University of Cambridge, UK) Keywords: Evolutionarily stable strategy, parasitism, asymmetric game. Brood parasite chicks vary in the harm that they do to their companions in the nest. In this presentation we use game-theoretic methods to model this variation. Our model considers hosts which potentially abandon single nestlings and instead choose to re-allocate their reproductive effort to future breeding, irrespective of whether the abandoned chick is the host's young or a brood parasite's. The parasite chick must decide whether or not to kill host young by balancing the benefits from reduced competition in the nest against the risk of desertion by the host parents. The model predicts that three different types of evolutionarily stable strategies can exist.
(1) Hosts routinely rear depleted broods, the brood parasite always kills host young and the host never then abandons the nest. (2) When adult survival after deserting single offspring is very high, hosts always abandon broods of a single nestling and the parasite never kills host offspring, effectively holding them as hostages to prevent nest desertion. (3) Intermediate strategies, in which parasites sometimes kill their nest-mates and host parents sometimes desert nests that contain only a single chick, can also be evolutionarily stable. We provide quantitative descriptions of how the values given to ecological and behavioral parameters of the host-parasite system influence the likelihood of each strategy and compare our results with real host-brood parasite associations in nature.

Martin Harrison (Mathematics Dept, University of Sussex, UK) Keywords: Brood parasitism, games, host, parasite. The interaction between hosts and parasites in bird populations has been studied extensively. Game-theoretic methods have been used to model this interaction previously, but not extensively taking into account the sequential nature of the game. We consider a model allowing the host and parasite to make a number of decisions, which depend on a number of natural factors. The host lays an egg; a parasite bird will arrive at the nest with a certain probability and then choose to destroy a number of the host eggs and lay one of its own. With some destruction occurring, either naturally or through the actions of the parasite, the host chooses to continue, eject an egg (hoping to eject the parasite) or abandon the nest. Once the eggs have hatched, the game falls to the parasite chick versus the host: the chick chooses to destroy or eject a number of eggs. The final decision is made by the host, choosing whether to raise or abandon the chicks that are in the nest. We consider various natural parameters and probabilities which influence these decisions.
We then use this model to look at real-world situations of the interactions of the Reed Warbler and two different parasites, the Common Cuckoo and the Brown-headed Cowbird. These two parasites differ in the way that they parasitize the nests of their hosts, and the hosts in turn react differently to each of them. Arne Jochens 1, Amke Caliebe 2, Uwe Roesler 1, Michael Krawczak 2. 1 Mathematical Seminar, University of Kiel, Germany; 2 Institute of Medical Informatics and Statistics, University of Kiel, Germany. Keywords: Stepwise mutation model, microsatellite, recursion equation, temporal behaviour. We consider the stepwise mutation model, which arises, e.g., at microsatellite loci. Let X(t,i) denote the allelic state of individual i at time t. We compute the expectation, variance and covariance of X(t,i), i = 1, …, N, and provide a recursion equation for P(X(t,i) = z). Because the variance of X(t,i) goes to infinity as t grows, we describe the temporal behaviour through the process X(t,i) − X(t,1). The results furnish a better understanding of the behaviour of the stepwise mutation model and may in future be used to derive tests for neutrality under this model. Paul O'Reilly 1, Ewan Birney 2, David Balding 1. 1 Statistical Genetics, Department of Epidemiology and Public Health, Imperial College London, UK; 2 European Bioinformatics Institute, EMBL, Cambridge, UK. Keywords: Positive selection, recombination rate, LD, genome-wide, natural selection. In recent years, efforts to develop population genetics methods that estimate rates of recombination and levels of natural selection in the human genome have intensified. However, since the two processes have an intimately related impact on genetic variation, their inference is vulnerable to confounding.
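The stepwise mutation model in the Jochens et al. abstract above is straightforward to simulate forward in time. The sketch below (with an invented mutation rate, and ignoring the genealogical structure that the abstract's recursion for P(X(t,i)=z) handles) illustrates why E[X(t,i)] stays constant while the variance of X(t,i) grows with t:

```python
import random

def simulate_smm(n_individuals, n_generations, mu=0.01, seed=0):
    """Stepwise mutation model sketch: each generation, every allelic state
    X(t,i) independently steps up or down by one repeat unit with
    probability mu. Mutation is symmetric, so the expectation stays at
    X(0,i) while the variance accumulates over generations."""
    rng = random.Random(seed)
    alleles = [0] * n_individuals              # X(0, i) = 0 for all i
    for _ in range(n_generations):
        alleles = [a + rng.choice((-1, 1)) if rng.random() < mu else a
                   for a in alleles]
    return alleles

alleles = simulate_smm(n_individuals=2000, n_generations=100)
mean = sum(alleles) / len(alleles)
assert abs(mean) < 0.25              # expectation stays near X(0,i) = 0
assert any(a != 0 for a in alleles)  # but the allelic states spread out
```

The unbounded growth of the variance is exactly why the abstract studies the difference process X(t,i) − X(t,1) instead of X(t,i) itself.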
Genomic regions subject to recent selection are likely to have a relatively recent common ancestor and consequently less opportunity for historical recombinations that are detectable in contemporary populations. Here we show that selection can substantially reduce the population-based recombination rate estimate. In genome-wide studies for detecting selection, we observe a tendency to highlight loci that are subject to low levels of recombination. We find that the outlier approach commonly adopted in such studies may have low power unless variable recombination is accounted for. We introduce a new genome-wide method for detecting selection that exploits the sensitivity to recent selection of methods for estimating recombination rates, while accounting for variable recombination using pedigree data. Through simulations we demonstrate the high power of the Ped/Pop approach to discriminate between neutral and adaptive evolution, particularly in the context of choosing outliers from a genome-wide distribution. Although methods have been developed showing good power to detect selection 'in action', the corresponding window of opportunity is small. In contrast, the power of the Ped/Pop method is maintained for many generations after the fixation of an advantageous variant. Sarah Griffiths 1, Frank Dudbridge 1. 1 MRC Biostatistics Unit, Cambridge, UK. Keywords: Genetic association, multimarker tag, haplotype, likelihood analysis. In association studies it is generally too expensive to genotype all variants in all subjects. We can exploit linkage disequilibrium between SNPs to select a subset that captures the variation in a training data set obtained either through direct resequencing or from a public resource such as the HapMap. These 'tag SNPs' are then genotyped in the whole sample. Multimarker tagging is a more aggressive adaptation of pairwise tagging that allows combinations of two or more tag SNPs to predict an untyped SNP.
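Griffiths & Dudbridge handle the tag-to-untyped prediction inside a missing-data likelihood (described below). As a much cruder illustration of the multimarker idea itself, one can tabulate in training data which allele at the untyped SNP each combination of tag alleles most often carries; all data below are toy values:

```python
from collections import Counter, defaultdict

def train_multimarker_tag(tag_haplotypes, untyped_alleles):
    """Learn, for each combination of tag-SNP alleles seen in training data,
    the most common allele at the untyped SNP (majority-vote predictor)."""
    counts = defaultdict(Counter)
    for tags, allele in zip(tag_haplotypes, untyped_alleles):
        counts[tuple(tags)][allele] += 1
    return {tags: c.most_common(1)[0][0] for tags, c in counts.items()}

# Toy training haplotypes: two tag SNPs predicting one untyped SNP (0/1).
tags    = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 1)]
untyped = [0, 0, 0, 1, 1, 0]
predictor = train_multimarker_tag(tags, untyped)
assert predictor[(0, 0)] == 0
assert predictor[(1, 1)] == 1   # majority vote: 2 of 3 training haplotypes
```

The imperfect vote at (1, 1) is precisely the prediction uncertainty that a hard-call predictor ignores and that the likelihood approach described next propagates into the test.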
Here we describe a new method for directly testing the association of an untyped SNP using a multimarker tag. Previously, other investigators have suggested testing a specific tag haplotype, or performing a weighted analysis using weights derived from the training data. However, these approaches do not properly account for the imperfect correlation between the tag haplotype and the untyped SNP. Here we describe a straightforward approach to testing untyped SNPs using a missing-data likelihood analysis, including the tag markers as nuisance parameters. The training data are stacked on top of the main body of genotype data, so that there is information on how the tag markers predict the genotype of the untyped SNP. The uncertainty in this prediction is automatically taken into account in the likelihood analysis. This approach yields more power and also a more accurate estimate of the odds ratio of the untyped SNP. Anke Schulz 1, Christine Fischer 2, Jenny Chang-Claude 1, Lars Beckmann 1. 1 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ), Heidelberg, Germany; 2 Institute of Human Genetics, University of Heidelberg, Germany. Keywords: Haplotype, haplotype sharing, entropy, Mantel statistics, marker selection. We previously introduced a new method to map genes involved in complex diseases, using haplotype-sharing-based Mantel statistics to correlate genetic and phenotypic similarity. Although the Mantel statistic is powerful in narrowing down candidate regions, the precise localization of a gene is hampered in genomic regions where linkage disequilibrium is so high that neighboring markers are significant at similar magnitudes and we cannot discriminate between them. Here, we present a new approach to localize susceptibility genes by combining haplotype-sharing-based Mantel statistics with an iterative entropy-based marker selection algorithm.
For each marker at which the Mantel statistic is evaluated, the algorithm selects a subset of surrounding markers. The subset is chosen to maximize multilocus linkage disequilibrium, which is measured by the normalized entropy difference introduced by Nothnagel et al. (2002). We evaluated the algorithm with respect to type I error and power. Its ability to localize the disease variant was compared to the localization (i) without marker selection and (ii) considering haplotype block structure. Case-control samples were simulated from a set of 18 haplotypes, consisting of 15 SNPs in two haplotype blocks. The new algorithm gave correct type I error and yielded power to detect the disease locus similar to that of the alternative approaches. The neighboring markers were clearly less often significant than the causal locus, and also less often significant than under the alternative approaches. Thus the new algorithm improved the precision of the localization of susceptibility genes. Mark M. Iles 1. 1 Section of Epidemiology and Biostatistics, LIMM, University of Leeds, UK. Keywords: tSNP, tagging, association, HapMap. Tagging SNPs (tSNPs) are commonly used to capture genetic diversity cost-effectively. However, it is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be insufficient. If the pilot sample from which tSNPs are chosen is too small or the initial marker map too sparse, tSNP efficacy may be overestimated. An existing estimation method based on bootstrapping goes some way towards correcting for insufficient sample size and overfitting, but does not completely solve the problem. We describe a novel method, based on exclusion of haplotypes, that improves on the bootstrap approach. Using simulated data, the extent of the sample size problem is investigated and the performance of the bootstrap and the novel method are compared. We also incorporate an existing method that adjusts for marker density by 'SNP-dropping'.
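The marker-selection criterion in the Schulz et al. abstract above maximizes multilocus LD measured by a normalized entropy difference. A hedged sketch of such a measure (the exact normalisation in Nothnagel et al. (2002) may differ) compares the observed haplotype entropy with the entropy expected under linkage equilibrium:

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy (bits) of the empirical distribution of seq."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

def normalized_entropy_difference(haplotypes):
    """LD measure in the spirit of Nothnagel et al. (2002): how far the
    observed multilocus haplotype entropy falls below the entropy expected
    under linkage equilibrium (the sum of single-marker entropies).
    0 = equilibrium, larger values = stronger multilocus LD."""
    h_obs = entropy([tuple(h) for h in haplotypes])
    n_markers = len(haplotypes[0])
    h_eq = sum(entropy([h[i] for h in haplotypes]) for i in range(n_markers))
    return (h_eq - h_obs) / h_eq if h_eq > 0 else 0.0

# Perfect LD: the two markers always co-occur -> large entropy reduction.
perfect = [(0, 0), (0, 0), (1, 1), (1, 1)]
# Equilibrium: all four haplotypes equally frequent -> no reduction.
equil = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert normalized_entropy_difference(perfect) == 0.5
assert normalized_entropy_difference(equil) == 0.0
```

Maximizing such a quantity over candidate subsets of surrounding markers is the flavour of the iterative selection step described above.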
We find that insufficient sample size can cause large overestimates of tSNP efficacy, even with as many as 100 individuals, and the problem worsens as the region studied increases in size. Both the bootstrap and the novel method correct much of this overestimate, with our novel method consistently outperforming the bootstrap method. We conclude that a combination of insufficient sample size and overfitting may lead to overestimation of tSNP efficacy and underpowering of studies based on tSNPs. Our novel approach corrects for much of this bias and is superior to the previous method. Sample sizes larger than previously suggested may still be required for accurate estimation of tSNP efficacy. This has obvious ramifications for the selection of tSNPs from HapMap data. Claudio Verzilli 1, Juliet Chapman 1, Aroon Hingorani 2, Juan Pablo-Casas 1, Tina Shah 2, Liam Smeeth 1, John Whittaker 1. 1 Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK; 2 Division of Medicine, University College London, UK. Keywords: Meta-analysis, genetic association studies. We present a Bayesian hierarchical model for the meta-analysis of candidate gene studies with a continuous outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping, markers (typically SNPs) in the same genetic region. Meta-analyses of the results at each marker in isolation are seldom appropriate, as they ignore the correlation that may exist between markers due to linkage disequilibrium (LD) and cannot assess the relative importance of variants at each marker. Moreover, such marker-wise meta-analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power.
A better strategy is one which incorporates information about the LD between markers, so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. Here we develop a Bayesian hierarchical linear regression that models the observed genotype group means and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach is applied to the meta-analysis of 24 studies assessing the effect of 7 variants in the C-reactive protein (CRP) gene region on plasma CRP levels, an inflammatory biomarker shown in observational studies to be positively associated with cardiovascular disease. Cathryn M. Lewis 1, Christopher G. Mathew 1, Theresa M. Marteau 2. 1 Dept. of Medical and Molecular Genetics, King's College London, UK; 2 Department of Psychology, King's College London, UK. Keywords: Risk, genetics, CARD15, smoking, model. Recently, progress has been made in identifying mutations that confer susceptibility to complex diseases, with the potential to use these mutations in determining disease risk. We developed methods to estimate disease risk based on genotype relative risks (for a gene G), exposure to an environmental factor (E), and family history (with recurrence risk λR for a relative of type R). λR must be partitioned into the risk due to G (which is modelled independently) and the residual risk. The risk model was then applied to Crohn's disease (CD), a severe gastrointestinal disease for which smoking increases disease risk approximately 2-fold, and mutations in CARD15 confer increased risks of 2.25 (for carriers of a single mutation) and 9.3 (for carriers of two mutations). CARD15 accounts for only a small proportion of the genetic component of CD, with a gene-specific λS,CARD15 of 1.16 out of a total sibling relative risk of λS = 27. CD risks were estimated for high-risk individuals who are siblings of a CD case and who also smoke.
The CD risk to such individuals who carry two CARD15 mutations is approximately 0.34, and for those carrying a single CARD15 mutation the risk is 0.08, compared to a population prevalence of approximately 0.001. These results imply that complex disease genes may be valuable in estimating, with greater precision than has hitherto been possible, disease risks in specific, easily identified subgroups of the population, with a view to prevention. Yurii Aulchenko 1. 1 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands. Keywords: Compression, information, bzip2, genome-wide SNP data, statistical genetics. With advances in molecular technology, studies assessing millions of genetic polymorphisms in thousands of study subjects will soon become common. Such studies generate large amounts of data, whose effective storage and management is a challenge for modern statistical genetics. Standard file compression utilities, such as Zip, Gzip and Bzip2, may help to minimise the storage requirements. Less obvious is the fact that data compression techniques may also be used in the analysis of genetic data. It is known that the efficiency of a particular compression algorithm depends on the probability structure of the data. In this work, we first compared different standard and customised tools using data from the human HapMap project. Second, we investigated the potential uses of data compression techniques for the analysis of linkage, association and linkage disequilibrium. Suzanne Leal 1, Bingshan Li 1. 1 Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, USA. Keywords: Consanguineous pedigrees, missing genotype data. Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD).
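The observation in the Aulchenko abstract above, that compression efficiency tracks the probability structure of the data, is easy to demonstrate with Python's standard bz2 module; the single-character genotype encoding below is an arbitrary choice for illustration:

```python
import bz2
import random

def compressed_size(genotypes):
    """Bytes occupied by a genotype string ('0'/'1'/'2' per SNP) after Bzip2."""
    return len(bz2.compress(genotypes.encode("ascii")))

# Strongly structured data (few distinct patterns, mimicking high LD and a
# handful of common haplotypes) compress far better than the same number of
# genotypes with no structure at all.
structured = "012" * 10000                                    # 30,000 genotypes
rng = random.Random(0)
unstructured = "".join(rng.choice("012") for _ in range(30000))
assert compressed_size(structured) < compressed_size(unstructured)
```

This is the property the abstract proposes to exploit in reverse: the compressibility of genotype data carries information about its LD structure.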
Huang et al. (2005) previously demonstrated that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data are available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. The false-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage is highly dependent on which family members are genotyped. When parental genotype data are available, the false-positive evidence for linkage is usually not as strong as when parental genotype data are unavailable. Which family members will aid in the reduction of false-positive evidence of linkage depends strongly on which other family members are genotyped. For a pedigree with an affected proband whose first-cousin parents have been genotyped, a further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or from the proband's sibling-grandparents. When parental genotypes are not available, false-positive evidence for linkage can be reduced by including in the analysis genotype data from either unaffected siblings of the proband or the proband's married-in grandparents. Najaf Amin 1, Yurii Aulchenko 1. 1 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands. Keywords: Genomic Control, pedigree structure, quantitative traits. The Genomic Control (GC) method was originally developed to control for population stratification and cryptic relatedness in association studies.
This method assumes that the effect of population substructure on the test statistics is essentially constant across the genome, so that unassociated markers can be used to estimate the effect of confounding on the test statistic. The properties of the GC method have been extensively investigated for different stratification scenarios and compared to alternative methods, such as the transmission disequilibrium test. The potential of this method to correct not just for occasional cryptic relations but for regular pedigree structure, however, has not been investigated before. In this work we investigate the potential of the GC method for pedigree-based association analysis of quantitative traits. The power and type I error of the method were compared to those of standard methods, such as the measured genotype (MG) approach and the quantitative trait transmission disequilibrium test (TDT). In human pedigrees with trait heritability varying from 30 to 80%, the power of the MG and GC approaches was always higher than that of the TDT. GC had correct type I error, and its power was close to that of MG under moderate heritability (30%) but decreased with higher heritability. William Astle 1, Chris Holmes 2, David Balding 1. 1 Department of Epidemiology and Public Health, Imperial College London, UK; 2 Department of Statistics, University of Oxford, UK. Keywords: Population structure, association studies, genetic epidemiology, statistical genetics. In the analysis of population association studies, Genomic Control (Devlin & Roeder, 1999) (GC) adjusts the Armitage test statistic to correct the type I error for the effects of population substructure, but its power is often sub-optimal. Turbo Genomic Control (TGC) generalises GC to incorporate co-variation of relatedness and phenotype, retaining control over type I error while improving power. TGC is similar to the method of Yu et al.
(2006), but we extend it to binary (case-control) in addition to quantitative phenotypes, we implement improved estimation of relatedness coefficients, and we derive an explicit statistic that generalizes the Armitage test statistic and is fast to compute. TGC also has similarities to EIGENSTRAT (Price et al., 2006), a new method based on principal components analysis. The problems of population structure (Clayton et al., 2005) and cryptic relatedness (Voight & Pritchard, 2005) are essentially the same: if patterns of shared ancestry differ between cases and controls, whether distant (coancestry) or recent (cryptic relatedness), false positives can arise and power can be diminished. With large numbers of widely spaced genetic markers, coancestry can now be measured accurately for each pair of individuals via patterns of allele sharing. Instead of modelling subpopulations, we work with a coancestry coefficient for each pair of individuals in the study. We explain the relationships between TGC, GC and EIGENSTRAT. We present simulation studies and real data analyses to illustrate the power advantage of TGC in a range of scenarios incorporating both substructure and cryptic relatedness. References: Clayton, D. G. et al. (2005) Population structure, differential bias and genomic control in a large-scale case-control association study. Nature Genetics 37(11), November 2005. Devlin, B. & Roeder, K. (1999) Genomic control for association studies. Biometrics 55(4), December 1999. Price, A. L. et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8), August 2006. Voight, B. J. & Pritchard, J. K. (2005) Confounding from cryptic relatedness in case-control association studies. PLoS Genetics 1(3), September 2005. Yu, J. et al. (2006) A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), February 2006.
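The Genomic Control adjustment discussed in the two abstracts above can be sketched in a few lines: under the null, a 1-df chi-square statistic has median ≈ 0.456, so the median of the observed genome-wide statistics estimates the inflation factor λ. The statistics below are toy numbers:

```python
from statistics import median

CHI2_1DF_NULL_MEDIAN = 0.456   # median of a 1-df chi-square distribution

def genomic_control_lambda(chi2_stats):
    """Devlin & Roeder (1999) inflation factor: observed median of the 1-df
    association chi-square statistics over its null expectation."""
    return median(chi2_stats) / CHI2_1DF_NULL_MEDIAN

def gc_correct(chi2_stats):
    """Deflate every statistic by lambda (floored at 1, so tests are never
    made more significant by the correction)."""
    lam = max(1.0, genomic_control_lambda(chi2_stats))
    return [x / lam for x in chi2_stats]

# Toy genome scan whose statistics are uniformly doubled by stratification:
inflated = [2 * x for x in (0.10, 0.456, 3.00)]
assert abs(genomic_control_lambda(inflated) - 2.0) < 1e-9
assert abs(median(gc_correct(inflated)) - 0.456) < 1e-9
```

The constant-λ assumption is exactly what the Amin & Aulchenko abstract probes for regular pedigree structure, and what TGC and EIGENSTRAT relax by modelling per-pair or per-axis confounding instead.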
Hervé Perdry 1, Marie-Claude Babron 1, Françoise Clerget-Darpoux 1. 1 INSERM U535 and Univ. Paris Sud, UMR-S 535, Villejuif, France. Keywords: Modifier genes, case-parents trios, ordered transmission disequilibrium test. A modifying locus is a polymorphic locus, distinct from the disease locus, which leads to differences in the disease phenotype, either by modifying the penetrance of the disease allele or by modifying the expression of the disease. The effect of such a locus is a clinical heterogeneity that can be reflected in the values of an appropriate covariate, such as the age of onset or the severity of the disease. We designed the Ordered Transmission Disequilibrium Test (OTDT) to test for a relation between the clinical heterogeneity, expressed by the covariate, and marker genotypes of a candidate gene. The method applies to trio families with one affected child and his parents. Each family member is genotyped at a bi-allelic marker M of a candidate gene. To each family is associated a covariate value, and the families are ordered on the values of this covariate. Like the TDT (Spielman et al. 1993), the OTDT is based on the observation of the transmission rate T of a given allele at M. The OTDT aims to find a critical value of the covariate which separates the sample of families into two subsamples in which the transmission rates are significantly different. We investigate the power of the method by simulations under various genetic models and covariate distributions. Acknowledgments: H. Perdry is funded by ARSEP. Pascal Croiseau 1, Heather Cordell 2, Emmanuelle Génin 1. 1 INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France; 2 Institute of Human Genetics, Newcastle University, UK. Keywords: Association, missing data, conditional logistic regression. Missing data is an important problem in association studies. Several methods used to test for association require that individuals be genotyped at the full set of markers.
Individuals with missing data need to be excluded from the analysis. This can entail a substantial decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL. One may then falsely conclude that the marker is more likely to be the DSL. We recently developed a multiple imputation method to infer missing data on case-parent trios. Starting from the observed data, a number of complete data sets are generated by a Markov chain Monte Carlo approach. These complete data sets are analysed using standard statistical packages and the results are combined as described in Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually detects the DSL correctly even if the percentage of missing data is high. This is not the case for the naïve approach that consists of discarding trios with missing data. In conclusion, multiple imputation has the advantage of being easy to use and flexible, and is therefore a promising tool in the search for DSLs involved in complex diseases. Salma Kotti 1, Heike Bickeböller 2, Françoise Clerget-Darpoux 1. 1 University Paris Sud, UMR-S535, Villejuif, France; 2 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany. Keywords: Genotype relative risk, internal controls, family-based analyses. Family-based analyses using internal controls are very popular both for detecting the effect of a genetic factor and for estimating the relative disease risk for the corresponding genotypes. Two different procedures are often applied to reconstitute internal controls.
The first considers a single pseudocontrol genotype formed by the parental non-transmitted alleles (1:1 matching of alleles), while the second uses three pseudocontrols corresponding to all genotypes that can be formed from the parental alleles other than the case's own (1:3 matching). Many studies have compared the power of the two procedures and concluded that the difference depends on the underlying genetic model and the allele frequencies. However, estimation of the Genotype Relative Risk (GRR) under the two procedures has not been studied. Given that under 1:1 matching the control group is composed of the alleles untransmitted to the affected child, whereas under 1:3 matching the control group includes alleles already transmitted to the affected child, we expect a difference in the GRR estimation. In fact, we suspect that the second procedure leads to biased estimation of the GRRs. We will analytically derive the GRR estimators for the 1:1 and 1:3 matching and will present the results at the meeting. Luigi Palla 1, David Siegmund 2. 1 Department of Mathematics, Free University Amsterdam, The Netherlands; 2 Department of Statistics, Stanford University, California, USA. Keywords: TDT, assortative mating, inbreeding, statistical power. A substantial amount of assortative mating (AM) is often recorded for physical and psychological traits, dichotomous as well as quantitative, that are presumed to have a multifactorial genetic component. In particular, AM has the effect of increasing the genetic variance, even more than inbreeding does, because when the trait has a multifactorial origin AM acts across loci as well as within loci. Under the assumption of a polygenic model for AM dating back to Wright (1921) and refined by Crow and Felsenstein (1968, 1982), the effect of assortative mating on the power to detect genetic association in the Transmission Disequilibrium Test (TDT) is explored as parameters such as the effective number of genes and the allele frequency vary. The power is reflected by the noncentrality parameter of the TDT and is expressed as a function of the number of trios, the relative risk of the heterozygous genotype and the allele frequency (Siegmund and Yakir, 2007). The noncentrality parameter of the relevant score statistic is updated to incorporate the effect of AM, which is expressed in terms of an 'effective' inbreeding coefficient.
In particular, for dichotomous traits it is apparent that the higher the number of genes involved in the trait, the lower the loss in power due to AM. Finally, an attempt is made to extend this relation to the Q-TDT (Rabinowitz, 1997), which involves considering the effect of AM also on the phenotypic variance of the trait of interest, under the assumption that AM affects only its additive genetic component. References: Crow & Felsenstein (1968). The effect of assortative mating on the genetic composition of a population. Eugen. Quart. 15, 87–97. Rabinowitz (1997). A transmission disequilibrium test for quantitative trait loci. Human Heredity 47, 342–350. Siegmund & Yakir (2007). Statistics of Gene Mapping. Springer. Wright (1921). Systems of mating. III. Assortative mating based on somatic resemblance. Genetics 6, 144–161. Jérémie Nsengimana 1, Ben D. Brown 2, Alistair S. Hall 2, Jenny H. Barrett 1. 1 Leeds Institute of Molecular Medicine, University of Leeds, UK; 2 Leeds Institute for Genetics, Health and Therapeutics, University of Leeds, UK. Keywords: Inflammatory genes, haplotype, coronary artery disease. Genetic Risk of Acute Coronary Events (GRACE) is an initiative to collect cases of coronary artery disease (CAD) and their unaffected siblings in the UK and to use them to map genetic variants increasing disease risk. The aim of the present study was to test the association between CAD and 51 single nucleotide polymorphisms (SNPs), and their haplotypes, from 35 inflammatory genes. Genotype data were available for 1154 persons affected before age 66 (including 48% before age 50) and their 1545 unaffected siblings (891 discordant families). Each SNP was tested for association with CAD, and haplotypes within genes or gene clusters were tested using FBAT (Rabinowitz & Laird, 2000). For the most significant results, genetic effect size was estimated using conditional logistic regression (CLR) within STATA, adjusting for other risk factors.
Haplotypes were assigned using HAPLORE (Zhang et al., 2005), which considers all parental mating types consistent with offspring genotypes and assigns them a probability of occurrence. This probability was used in the CLR to weight the haplotypes. In the single-SNP analysis, several SNPs showed some evidence of association, including one SNP in the interleukin-1A gene. Analysing haplotypes in the interleukin-1 gene cluster, a common 3-SNP haplotype was found to increase the risk of CAD (P = 0.009). In an additive genetic model adjusting for covariates, the odds ratio (OR) for this haplotype is 1.56 (95% CI: 1.16–2.10, p = 0.004) for early-onset CAD (before age 50). This study illustrates the utility of haplotype analysis in family-based association studies to investigate candidate genes. References: Rabinowitz, D. & Laird, N. M. (2000) Hum Hered 50, 211–223. Zhang, K., Sun, F. & Zhao, H. (2005) Bioinformatics 21, 90–103. Andrea Foulkes 1, Recai Yucel 1, Xiaohong Li 1. 1 Division of Biostatistics, University of Massachusetts, USA. Keywords: Haplotype, high-dimensional, mixed modeling. The explosion of molecular-level information coupled with large epidemiological studies presents an exciting opportunity to uncover the genetic underpinnings of complex diseases; however, several analytical challenges remain to be addressed. Characterizing the components of complex diseases inevitably requires consideration of synergies across multiple genetic loci and environmental and demographic factors. In addition, it is critical to capture information on allelic phase, that is, whether alleles within a gene are in cis (on the same chromosome) or in trans (on different chromosomes). In association studies of unrelated individuals, this alignment of alleles within a chromosomal copy is generally not observed. We address the potential ambiguity in allelic phase in this high-dimensional data setting using mixed effects models.
Both a semi-parametric and a fully likelihood-based approach to estimation are considered to account for missingness in cluster identifiers. In the first case, we apply a multiple imputation procedure coupled with a first-stage expectation-maximization algorithm for parameter estimation. A bootstrap approach is employed to assess sensitivity to variability induced by parameter estimation. Second, a fully likelihood-based approach using an expectation conditional maximization algorithm is described. Notably, these models allow for characterizing high-order gene-gene interactions while providing a flexible statistical framework to account for the confounding or mediating role of person-specific covariates. The proposed method is applied to data arising from a cohort of human immunodeficiency virus type-1 (HIV-1) infected individuals at risk for therapy-associated dyslipidemia. Simulation studies demonstrate reasonable power and control of family-wise type I error rates. Vivien Marquard 1, Lars Beckmann 1, Jenny Chang-Claude 1. 1 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ), Heidelberg, Germany. Keywords: Genotyping errors, type I error, haplotype-based association methods. Several simulation studies have shown that genotyping errors may have a great impact on the type I error of statistical methods used in genetic association analysis of complex diseases. Our aim was to investigate type I error rates in a case-control study when differential and non-differential genotyping errors were introduced in realistic scenarios. We simulated case-control data sets in which individual genotypes were drawn from a haplotype distribution of 18 haplotypes with 15 markers in the APM1 gene. Genotyping errors were introduced following the 'unrestricted' and 'symmetric with 0 edges' error models described by Heid et al. (2006).
In six scenarios, errors resulted from changes of one allele to another with predefined probabilities of 1%, 2.5% or 10%, respectively. Multiple errors per haplotype were possible and could vary between 0 and 15, the number of markers investigated. We examined three association methods: Mantel statistics using haplotype sharing; a haplotype-specific score test; and the Armitage trend test for single markers. The type I error rates were not influenced for any of the three methods at genotyping error rates of less than 1%. For higher error rates and differential errors, the type I error of the Mantel statistic was only slightly increased, and that of the Armitage trend test moderately increased. The type I error rates of the score test were highly increased. The type I error rates were correct for all three methods for non-differential errors. Further investigations will be carried out with different frequencies of differential error rates and will focus on power.

Arne Neumann 1, Dörthe Malzahn 1, Martina Müller 2, Heike Bickeböller 1
45 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany
46 GSF-National Research Center for Environment and Health, Neuherberg & IBE-Institute of Epidemiology, Ludwig-Maximilians University München, Germany
Keywords: Interaction, longitudinal, nonparametric
Longitudinal data show the time-dependent course of phenotypic traits. In this contribution, we consider longitudinal cohort studies and investigate the association between two candidate genes and a dependent quantitative longitudinal phenotype. The set-up defines a factorial design which allows us to test simultaneously for the overall gene effect of the loci as well as for possible gene-gene and gene-time interactions. The latter would induce genetically based time-profile differences in the longitudinal phenotype. We adapt a non-parametric statistical test to genetic epidemiological cohort studies and investigate its performance by simulation studies.
The statistical test was originally developed for longitudinal clinical studies (Brunner, Munzel & Puri, 1999, J Multivariate Anal 70:286-317). It is non-parametric in the sense that no assumptions are made about the underlying distribution of the quantitative phenotype. Longitudinal observations belonging to the same individual can be arbitrarily dependent on one another across the different time points, whereas trait observations of different individuals are independent. The two loci are assumed to be statistically independent. Our simulations show that the nonparametric test is comparable with ANOVA in terms of power to detect gene-gene and gene-time interactions in an ANOVA-favourable setting.

Rebecca Hein 1, Lars Beckmann 1, Jenny Chang-Claude 1
47 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany
Keywords: Indirect association studies, interaction effects, linkage disequilibrium, marker allele frequency
Association studies accounting for gene-environment interactions (GxE) may be useful for detecting genetic effects and identifying important environmental effect modifiers. Current technology facilitates very dense marker spacing in genetic association studies; however, the true disease variant(s) may not be genotyped. In this situation, an association between a gene and a phenotype may still be detectable using genetic markers associated with the true disease variant(s) (indirect association). Zondervan and Cardon [2004] showed that the odds ratios (OR) of markers associated with the disease variant depend strongly on the linkage disequilibrium (LD) between the variant and the markers and on whether their allele frequencies match, which in turn influences the sample size needed to detect genetic association. We examined the influence of LD and allele frequencies on the sample size needed to detect GxE in indirect association studies, and provide tables for sample size estimation.
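The dependence of required sample size on LD and on allele-frequency matching can be illustrated with the standard approximation that testing a marker in LD with the causal variant inflates the required sample size by roughly 1/r². A minimal sketch (assuming biallelic loci and positive D; the numbers are illustrative):

```python
def r_squared(d_prime, p, q):
    """r^2 between a disease allele (freq p) and a marker allele (freq q),
    given D' and assuming D > 0."""
    d_max = min(p * (1 - q), (1 - p) * q)
    d = d_prime * d_max
    return d ** 2 / (p * (1 - p) * q * (1 - q))

def inflation(d_prime, p, q):
    """Approximate factor by which sample size grows when testing the
    marker instead of the true variant (n_marker ~ n_causal / r^2)."""
    return 1.0 / r_squared(d_prime, p, q)

print(inflation(1.0, 0.1, 0.1))  # matched frequencies, complete LD: no inflation
print(inflation(0.9, 0.1, 0.3))  # discordant frequencies: several-fold inflation
```

Since interaction tests effectively operate within exposure strata, the same inflation applies on top of the already larger sample sizes needed for GxE, which is why the combinations tabulated in the abstract can become unfeasibly large.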
For discordant allele frequencies and incomplete LD, sample sizes can be unfeasibly large. The influence of both factors is stronger for disease loci with small rather than moderate to high disease allele frequencies. A decline in D' of, e.g., 5% has less impact on sample size than increasing the difference in allele frequencies by the same percentage. Assuming 80% power, large interaction effects can be detected using smaller sample sizes than those needed for the detection of main effects. The detection of interaction effects involving rare alleles may not be possible. Focussing only on marker density can be a limited strategy in indirect association studies for GxE.

Cyril Dalmasso 1, Emmanuelle Génin 2, Catherine Bourgain 2, Philippe Broët 1
48 JE 2492, Univ. Paris-Sud, France
49 INSERM UMR-S 535 and University Paris Sud, Villejuif, France
Keywords: Linkage analysis, Multiple testing, False Discovery Rate, Mixture model
In the context of genome-wide linkage analyses, where a large number of statistical tests are performed simultaneously, the False Discovery Rate (FDR), defined as the expected proportion of false discoveries among all discoveries, is now widely used to take the multiple testing problem into account. Other related criteria have been considered, such as the local False Discovery Rate (lFDR), a variant of the FDR that gives each test its own measure of significance. The lFDR is defined as the posterior probability that a null hypothesis is true. Most of the proposed methods for estimating the lFDR or the FDR rely on distributional assumptions under the null hypothesis. However, in observational studies, the empirical null distribution may be very different from the theoretical one. In this work, we propose a mixture-model-based approach that provides estimates of the lFDR and the FDR in the context of large-scale variance component linkage analyses.
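The two-group model behind the lFDR can be sketched directly: with a fraction pi0 of null tests, the lFDR at statistic z is the posterior probability of the null. The Gaussian alternative and all numbers below are purely illustrative, not the authors' mixture model or its empirical-null estimation:

```python
from scipy import stats

def lfdr(z, pi0=0.9, mu1=3.0):
    """Local FDR in a two-group model: null z ~ N(0,1) with prior pi0,
    non-null z ~ N(mu1,1). Returns P(null | z)."""
    f0 = stats.norm.pdf(z)
    f1 = stats.norm.pdf(z, loc=mu1)
    return pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)

print(lfdr(0.0))  # near 1: a central statistic is almost certainly null
print(lfdr(4.0))  # near 0: an extreme statistic is almost certainly a signal
```

In practice pi0 and the null density are unknown and must be estimated from the data, which is exactly the point of the empirical-null estimation the abstract goes on to describe.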
In particular, this approach allows estimating the empirical null distribution, the latter being a key quantity for any simultaneous inference procedure. The proposed method is applied to a real dataset.

Arief Gusnanto 1, Frank Dudbridge 1
50 MRC Biostatistics Unit, Cambridge, UK
Keywords: Significance, genome-wide, association, permutation, multiplicity
Genome-wide association scans have introduced statistical challenges, mainly in the multiplicity of thousands of tests. The question of what constitutes a significant finding remains somewhat unresolved. Permutation testing is very time-consuming, whereas Bayesian arguments struggle to distinguish direct from indirect association. It seems attractive to summarise the multiplicity in a simple form that allows users to avoid time-consuming permutations. A standard significance level would facilitate reporting of results and reduce the need for permutation tests. This is potentially important because current scans do not have full coverage of the whole genome, and yet the implicit multiplicity is genome-wide. We discuss some proposed summaries, with reference to the empirical null distribution of the multiple tests, approximated through a large number of random permutations. Using genome-wide data from the Wellcome Trust Case-Control Consortium, we use a sub-sampling approach with increasing density to estimate the nominal p-value that gives family-wise significance of 5%. The results indicate that the significance level converges to about 1e-7 as the marker spacing becomes infinitely dense. We considered the concept of an effective number of independent tests, and showed that, when used in a Bonferroni correction, the number varies with the overall significance level but is roughly constant in the region of interest.
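One common family of eigenvalue-based estimators of the effective number of tests (here a Li & Ji (2005)-style count, chosen for illustration; it is related to, but not the same as, the estimators compared in this abstract) can be sketched as:

```python
import numpy as np

def effective_tests(corr):
    """Li & Ji (2005)-style eigenvalue-based count of effectively
    independent tests from a marker correlation matrix."""
    lam = np.clip(np.linalg.eigvalsh(np.asarray(corr, float)), 0.0, None)
    lam = np.round(lam, 10)  # guard against floating-point noise at integers
    return float(np.sum((lam >= 1).astype(float) + lam - np.floor(lam)))

def per_test_alpha(m_eff, fwer=0.05):
    """Sidak-style per-test level yielding the target family-wise rate."""
    return 1.0 - (1.0 - fwer) ** (1.0 / m_eff)

print(effective_tests(np.eye(4)))        # independent markers: 4 effective tests
print(effective_tests(np.ones((4, 4))))  # perfectly correlated: 1 effective test
print(per_test_alpha(1_000_000))         # dense-scan-scale per-test thresholds
```

For a million effective tests the Sidak level is on the order of 5e-8, i.e. the same order of magnitude as the roughly 1e-7 limit quoted above for infinitely dense marker spacing.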
We compared several estimators of the effective number of tests, and showed that in the region of significance of interest, Patterson's eigenvalue-based estimator gives approximately the right family-wise error rate.

Michael Nothnagel 1, Amke Caliebe 1, Michael Krawczak 1
51 Institute of Medical Informatics and Statistics, University Clinic Schleswig-Holstein, University of Kiel, Germany
Keywords: Association scans, Bayesian framework, posterior odds, genetic risk, multiplicative model
Whole-genome association scans have been suggested as a cost-efficient way to survey genetic variation and to map genetic disease factors. We used a Bayesian framework to investigate the posterior odds of a genuine association under multiplicative disease models. We demonstrate that the p value alone is not a sufficient means to evaluate the findings in association studies. We suggest that likelihood ratios should accompany p values in association reports. We argue that, given the reported results of whole-genome scans, more associations should have been successfully replicated if the consistently made assumptions about considerable genetic risks were correct. We conclude that it is very likely that the vast majority of relative genetic risks are only of the order of 1.2 or lower.

Clive Hoggart 1, Maria De Iorio 1, John Whittaker 2, David Balding 1
52 Department of Epidemiology and Public Health, Imperial College London, UK
53 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK
Keywords: Genome-wide association analyses, shrinkage priors, Lasso
Testing one SNP at a time does not fully realise the potential of genome-wide association studies to identify multiple causal variants of small effect, which is a plausible scenario for many complex diseases. Moreover, many simulation studies assume a single causal variant, and so more complex realities are ignored.
Analysing large numbers of variants simultaneously is now becoming feasible, thanks to developments in Bayesian stochastic search methods. We pose the problem of SNP selection as variable selection in a regression model. In contrast to single-SNP tests, this approach simultaneously models the effect of all SNPs. SNPs are selected via a Bayesian interpretation of the lasso (Tibshirani, 1996): the maximum a posteriori (MAP) estimate of the regression coefficients, which are given independent double-exponential prior distributions. The double-exponential distribution is an example of a shrinkage prior; MAP estimates with shrinkage priors can be exactly zero, so all SNPs with non-zero regression coefficients are selected. In addition to the commonly used double-exponential (Laplace) prior, we also implement the normal exponential gamma prior distribution. We show that use of the Laplace prior improves SNP selection in comparison with single-SNP tests, and that the normal exponential gamma prior leads to a further improvement. Our method is fast and can handle very large numbers of SNPs: we demonstrate its performance using both simulated and real genome-wide data sets with 500K SNPs, which can be analysed in 2 hours on a desktop workstation.

Mickael Guedj 1,2, Jerome Wojcik 2, Gregory Nuel 1
54 Laboratoire Statistique et Génome, Université d'Evry, Evry, France
55 Serono Pharmaceutical Research Institute, Plan-les-Ouates, Switzerland
Keywords: Local Replication, Local Score, Association
In gene-mapping, replication of initial findings has been put forward as the approach of choice for filtering false positives from true signals for underlying loci. In practice, however, such replications are rarely observed. Besides statistical and technical factors (lack of power, multiple testing, stratification, quality control, ...), inconsistent conclusions obtained from independent populations might result from real biological differences.
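MAP estimation under independent Laplace priors, as in the genome-wide lasso abstract above, is equivalent to L1-penalised regression, so a minimal sketch can lean on scikit-learn's Lasso. The simulated genotypes are a toy stand-in, and the alpha value (playing the role of the prior scale) is illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.binomial(2, 0.3, (n, p)).astype(float)  # genotype dosages 0/1/2
beta = np.zeros(p)
beta[[3, 17]] = 0.8                             # two causal SNPs (indices 3, 17)
y = X @ beta + rng.normal(0, 1, n)

# MAP under independent Laplace (double-exponential) priors == lasso;
# alpha corresponds to the prior scale, and zeroed coefficients drop SNPs.
fit = Lasso(alpha=0.1).fit(X, y)
print(np.flatnonzero(fit.coef_))                # the selected SNPs
```

The selected set contains the two causal SNPs (possibly with a few noise SNPs at this alpha); tightening the prior scale, or replacing the Laplace with a heavier-shouldered prior such as the normal exponential gamma, changes the sparsity of the solution.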
In particular, the high degree of variation in the strength of LD among populations of different origins is a major challenge to the discovery of genes. Seeking Local Replications (defined as the presence of a signal of association in the same genomic region across populations), instead of strict replications (same locus, same risk allele), may lead to more reliable results. Recently, a multi-marker approach based on the Local Score statistic has been proposed as a simple and efficient way to select candidate genomic regions at the first stage of genome-wide association studies. Here we propose an extension of this approach adapted to replicated association studies. Based on simulations, this method appears promising. In particular, it outperforms classical single-marker strategies in detecting modest-effect genes. Additionally, it constitutes, to our knowledge, the first framework dedicated to the detection of such Local Replications.

Juliet Chapman 1, Claudio Verzilli 1, John Whittaker 1
56 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK
Keywords: FDR, Association studies, Bayesian model selection
As genome-wide association studies become commonplace, there is debate as to how such studies might be analysed and what we might hope to gain from the data. It is clear that standard single-locus approaches are limited, in that they do not adjust for the effects of other loci, and problematic, since it is not obvious how to adjust for multiple comparisons. False discovery rates have been suggested, but it is unclear how well these will cope with highly correlated genetic data. We consider the validity of standard false discovery rates in large-scale association studies. We also show that a Bayesian procedure has advantages in detecting causal loci amongst a large number of dependent SNPs, and investigate properties of a Bayesian FDR.
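For reference, the "standard" false discovery rate procedure whose behaviour under correlated tests is questioned above is the Benjamini-Hochberg step-up rule; a minimal implementation with illustrative p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Standard BH step-up procedure: returns a boolean 'rejected' mask
    controlling the FDR at level q (for independent or PRDS tests)."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m        # i/m * q for the i-th smallest
    below = p[order] <= thresh
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest i meeting the bound
        rejected[order[: k + 1]] = True         # reject all up to that rank
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals, q=0.05))
```

Under strong positive dependence among SNPs the realised proportion of false discoveries can behave quite differently from the nominal q, which motivates the Bayesian FDR investigated in the abstract.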
Peter Kraft 1
57 Harvard School of Public Health, Boston, USA
Keywords: Gene-environment interaction, genome-wide association scans
Appropriately analyzed two-stage designs, in which a subset of available subjects is genotyped on a genome-wide panel of markers at the first stage and a much smaller subset of the most promising markers is then genotyped on the remaining subjects, can have nearly as much power as a single-stage study where all subjects are genotyped on the genome-wide panel, yet can be much less expensive. Typically, the "most promising" markers are selected based on evidence for a marginal association between genotypes and disease. Subsequently, the few markers found to be associated with disease at the end of the second stage are interrogated for evidence of gene-environment interaction, mainly to understand their impact on disease etiology and public health. However, this approach may miss variants which have a sizeable effect restricted to one exposure stratum and therefore only a modest marginal effect. We have proposed to use information on the joint effects of genes and a discrete list of environmental exposures at the initial screening stage to select promising markers for the second stage [Kraft et al., Hum Hered, 2007]. This approach optimizes power to detect variants that have a sizeable marginal effect and variants that have a small marginal effect but a sizeable effect in a stratum defined by an environmental exposure. As an example, I discuss a proposed genome-wide association scan for Type II diabetes susceptibility variants based in several large nested case-control studies.
Beate Glaser 1, Peter Holmans 1
58 Biostatistics and Bioinformatics Unit, Cardiff University, School of Medicine, Heath Park, Cardiff, UK
Keywords: Combined case-control and trios analysis, Power, False-positive rate, Simulation, Association studies
The statistical power of genetic association studies can be enhanced by combining the analysis of case-control samples with parent-offspring trio samples. Various combined analysis techniques have been developed recently; as yet, there have been no comparisons of their power. This work was performed with the aim of identifying the most powerful method among available combined techniques, including test statistics developed by Kazeem and Farrall (2005), Nagelkerke and colleagues (2004) and Dudbridge (2006), as well as a simple combination of χ2-statistics from single samples. Simulation studies were performed to investigate their power under different additive, multiplicative, dominant and recessive disease models. False-positive rates were determined by studying the type I error rates under null models, including models with unequal allele frequencies between the case-control and trio samples. We identified three techniques with equivalent power and false-positive rates: 1) the unmodified combined odds ratio estimate of Kazeem & Farrall (2005), 2) a modified version of the combined risk ratio estimate of Nagelkerke & colleagues (2004), and 3) a modified version of the combined risk ratio estimate of Dudbridge (2006). Our work highlights the importance of studies investigating test performance criteria of novel methods, as they will help users to select the optimal approach within a range of available analysis techniques.

David Almorza 1, M.V.
Kandus 2, Juan Carlos Salerno 2, Rafael Boggio 3
59 Facultad de Ciencias del Trabajo, University of Cádiz, Spain
60 Instituto de Genética IGEAF, Buenos Aires, Argentina
61 Universidad Nacional de La Plata, Buenos Aires, Argentina
Keywords: Principal component analysis, maize, ear weight, inbred lines
The objective of this work was to evaluate the relationships among different traits of the ear of maize inbred lines and to group genotypes according to their performance. Ten inbred lines developed at IGEAF (INTA Castelar) and five public inbred lines as checks were used. A field trial was carried out in Castelar, Buenos Aires (34° 36' S, 58° 39' W) using a completely randomized design with three replications. At harvest, individual weight (P.E.), diameter (D.E.), row number (N.H.) and length (L.E.) of the ear were assessed. A principal component analysis (PCA; Infostat 2005) was used, and the variability of the data was depicted with a biplot. Principal components 1 and 2 (CP1 and CP2) explained 90% of the data variability. CP1 was correlated with P.E., L.E. and D.E., while CP2 was correlated with N.H. We found that individual weight (P.E.) was more strongly correlated with ear diameter (D.E.) than with length (L.E.). Five groups of inbred lines were distinguished: with high P.E. and mean N.H. (04-70, 04-73, 04-101 and MO17), with high P.E. but lower N.H. (04-61 and B14), with mean P.E. and N.H. (B73, 04-123 and 04-96), with high N.H. but lower P.E. (LP109, 04-8, 04-91 and 04-76), and with low P.E. and low N.H. (LP521 and 04-104). The use of PCA showed which variables had the greatest influence on ear weight and how they are correlated. Moreover, the different groups found with this analysis allow the evaluation of inbred lines by several traits simultaneously.
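The PCA workflow described above (standardise the traits, extract components, read off explained variance and per-genotype scores) can be sketched with toy numbers standing in for the IGEAF measurements:

```python
import numpy as np

# Toy stand-in for the four ear traits (P.E., D.E., N.H., L.E.),
# one row per genotype; the values are illustrative, not the field data.
X = np.array([
    [120, 45, 14, 16], [135, 48, 14, 18], [110, 42, 16, 15],
    [150, 50, 12, 19], [125, 46, 16, 17], [105, 41, 12, 14],
], dtype=float)

Z = (X - X.mean(0)) / X.std(0)            # standardise each trait
eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigval)[::-1]          # sort components by variance
explained = eigval[order] / eigval.sum()  # proportion of variance per component
scores = Z @ eigvec[:, order]             # CP1, CP2, ... for each genotype

print(explained[:2].sum())                # share explained by CP1 + CP2
```

Plotting the first two columns of `scores` against the trait loadings in `eigvec` gives exactly the biplot used in the abstract to group the inbred lines.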
Sven Knüppel 1, Anja Bauerfeind 1, Klaus Rohde 1
62 Department of Bioinformatics, MDC Berlin, Germany
Keywords: Haplotypes, association studies, case-control, nuclear families
The era of gene chip technology provides a plethora of phase-unknown SNP genotypes in order to find significant associations with some genetic trait. To circumvent the possibly low information content of a single SNP, one groups successive SNPs and estimates haplotypes. Haplotype estimation, however, may reveal ambiguous haplotype pairs and bias the application of statistical methods. Zaykin et al. (Hum Hered, 53:79-91, 2002) proposed the construction of a design matrix to take this ambiguity into account. Here we present a set of functions written for the statistical package R, which carry out haplotype estimation on the basis of the EM algorithm for individuals (case-control) or nuclear families. The construction of a design matrix on the basis of estimated haplotypes or haplotype pairs allows the application of standard methods for association studies (linear and logistic regression), as well as methods such as haplotype-sharing statistics and the TDT. Applications of these methods to genome-wide association screens will be demonstrated.

Manuela Zucknick 1, Chris Holmes 2, Sylvia Richardson 1
63 Department of Epidemiology and Public Health, Imperial College London, UK
64 Department of Statistics, Oxford Center for Gene Function, University of Oxford, UK
Keywords: Bayesian, variable selection, MCMC, large p, small n, structured dependence
In large-scale genomic applications, vast numbers of markers or genes are scanned to find a few candidates which are linked to a particular phenotype. Statistically, this is a variable selection problem in the "large p, small n" situation, where many more variables than samples are available.
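The EM haplotype estimation underlying the Knüppel et al. functions above can be sketched from scratch for the simplest case of two biallelic SNPs, where only double heterozygotes are phase-ambiguous (a toy illustration, not the R package):

```python
import numpy as np

def em_hap_freqs(genotypes, iters=50):
    """EM haplotype-frequency estimation for two biallelic SNPs.
    `genotypes` is an (n, 2) array of 0/1/2 counts of the allele coded 1.
    Haplotypes are indexed 0:00, 1:01, 2:10, 3:11; only the double
    heterozygote (1,1) has ambiguous phase (11/00 vs 10/01)."""
    f = np.full(4, 0.25)
    counts_fixed = np.zeros(4)
    n_dh = 0
    for g1, g2 in genotypes:
        if g1 == 1 and g2 == 1:
            n_dh += 1                       # phase resolved inside EM
        else:
            # unambiguous: both chromosomes' haplotypes are determined
            h1 = (2 if g1 >= 1 else 0) + (1 if g2 >= 1 else 0)
            h2 = (2 if g1 == 2 else 0) + (1 if g2 == 2 else 0)
            counts_fixed[h1] += 1
            counts_fixed[h2] += 1
    for _ in range(iters):
        # E-step: split double heterozygotes between the two phasings
        cis, trans = f[3] * f[0], f[2] * f[1]
        w = cis / (cis + trans) if cis + trans > 0 else 0.5
        c = counts_fixed.copy()
        c[[3, 0]] += n_dh * w
        c[[2, 1]] += n_dh * (1 - w)
        f = c / c.sum()                     # M-step: renormalise counts
    return f

genos = np.array([[2, 2], [0, 0], [2, 2], [0, 0], [1, 1], [1, 1]])
print(em_hap_freqs(genos))                  # coupling evidence drives w -> 1
```

The resulting posterior phase weights (here `w` and `1 - w`) are exactly the kind of quantity Zaykin-style design matrices carry into downstream regression instead of a single hard haplotype assignment.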
An additional feature is the complex dependence structure which is often observed among the markers/genes, due to linkage disequilibrium or their joint involvement in biological processes. Bayesian variable selection methods using indicator variables are well suited to the problem. Binary phenotypes like disease status are common, and both Bayesian probit and logistic regression can be applied in this context. We argue that logistic regression models are both easier to tune and easier to interpret than probit models, and we implement the approach of Holmes & Held (2006). Because the model space is vast, MCMC methods are used as stochastic search algorithms with the aim of quickly finding regions of high posterior probability. In a trade-off between fast-updating but slow-moving single-gene Metropolis-Hastings samplers and computationally expensive full Gibbs sampling, we propose to employ the dependence structure among the genes/markers to help decide which variables to update together. Also, parallel tempering methods are used to aid bold moves and help avoid getting trapped in local optima. Mixing and convergence of the resulting Markov chains are evaluated and compared to standard samplers in both a simulation study and in an application to a gene expression data set. Reference: Holmes, C. C. & Held, L. (2006) Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis 1, 145-168.

Dawn Teare 1
65 MMGE, University of Sheffield, UK
Keywords: CNP, family-based analysis, MCMC
Evidence is accumulating that segmental copy number polymorphisms (CNPs) may represent a significant portion of human genetic variation. These highly polymorphic systems require handling as phenotypes rather than co-dominant markers, placing new demands on family-based analyses.
We present an integrated approach to meet these challenges in the form of a graphical model, where the underlying discrete CNP phenotype is inferred from the (single or replicate) quantitative measure within the analysis, whilst assuming an allele-based system segregating through the pedigree. [source]


    Atlantic Forest Butterflies: Indicators for Landscape Conservation,

    BIOTROPICA, Issue 4b 2000
    Keith S. Brown Jr.
ABSTRACT The Atlantic Forest region (wide sense) includes very complex tropical environments, increasingly threatened by extensive anthropogenic conversion (>90%). Ecologically specialized, short-generation insects (butterflies) are evaluated here as indicators for monitoring community richness, landscape integrity, and sustainable resource use in the region. The >2100 butterfly species in the Atlantic Forest region have been censused in many sites over 35 years, giving comparable daily, weekly, monthly, and long-term site lists. The 21 most thoroughly studied sites include 218 to 914 species, of which half can be censused in a week or less. The butterfly communities are divided into six relatively distinct faunal regions, centered in the northeast, the central coastal tablelands, the southeast coastal plain, the mountains plus interior of the southeastern states, the central plateau, and the southern states. Species richness shows the highest values in coastal mountains from 15 to 23°S. Local butterfly communities show a high turnover, with 20 to 40 percent of the species, especially small Lycaenidae and Hesperiidae, recorded only as unstable populations or "tourists." Easily sampled species in the family Nymphalidae, and especially its bait-attracted subfamilies, are best correlated with the entire butterfly fauna and can be used as surrogates for species diversity. In most butterfly groups, species richness is well predicted by landscape connectivity alone, or by composite indices of environmental heterogeneity, natural disturbance, and (negatively) anthropogenic disturbance. Principal components and redundancy analyses showed that the richness and proportions of different butterfly groups in the local fauna are variably explained by disturbance, seasonality, temperature, vegetation, soils, and landscape connectivity. Various groups thus can be used as rapid indicators of different types of change in the community, its environment, and the landscape.
Threatened and rare species also can be used as indicators of the most unique Atlantic Forest communities (paleoenvironments), which need special attention. [source]


    Assessing the joint effects of chlorinated dioxins, some pesticides and polychlorinated biphenyls on thyroid hormone status in Japanese breast-fed infants

    ENVIRONMETRICS, Issue 2 2003
    Takashi Yanagawa
Abstract Joint effects of dioxin-related chemicals (DXNs), hexachlorocyclohexanes (HCHs), DDT, dieldrin, heptachlor-epoxide (HCE), chlordane and polychlorinated biphenyls (PCB) on the levels of triiodothyronine (T3), thyroxine (T4), thyroid-stimulating hormone (TSH) and thyroid-binding globulin (TBG) in the peripheral blood of 101 breast-fed infants are studied. The statistical issue involved is how to estimate the effects based on data from volunteer subjects with possible measurement errors. A chain independence graph is applied to model the associations among factors, and dichotomizations of selected factors are performed to estimate the effects. Use of nonparametric methods with careful consideration of over-adjustment is suggested. It is shown that the estimated odds ratios of DXNs-DDT, the first principal component of DXNs and DDT, relative to TSH are 3.02 (p-value = 0.03) and 7.15 (p-value = 0.02) when PCB is not adjusted for and adjusted for, respectively. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    The isochronic band hypothesis and climbing fibre regulation of motricity: an experimental study

    EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 2 2001
    Masaji Fukuda
    Abstract The dynamic organization of the olivocerebellar afferent input to Purkinje cells was examined in rat cerebellar cortex. The distribution of synchronous Purkinje cell complex spike activity was characterized, bilaterally, utilizing multiple electrode recordings in crus IIa folium under ketamine anaesthesia. The results confirmed the existence of rostrocaudal complex spike isochronicity bands with a mediolateral width of 500 µm. For a given band, no finer spatial submicrostructures could be discerned at a first-order approximation (two-dimensional projection). Closer analysis determined that isochronicity between bands is not continuous in space but demonstrates discrete discontinuities at the mediolateral boundaries. Principal component multivariate analysis revealed that the first principal component of the spatio-temporal variance is synchronicity along the rostrocaudal band with a decreased level of coupling in the mediolateral direction at the band boundary. Furthermore, this discrete banding isochronicity is organized by the distribution of feedback inhibition from the cerebellar nuclei on to the inferior olive nucleus. The usual multiple band structure can be dynamically altered to a single wide-band dynamic architecture, or to other patterns of activity, as may be required by movement coordination. [source]


    Plasmodium falciparum growth is arrested by monoterpenes from eucalyptus oil

    FLAVOUR AND FRAGRANCE JOURNAL, Issue 5 2008
    Vanessa Su
Abstract Cerebral malaria is a major health problem in the developing world. Widespread resistance to existing drugs by the parasite Plasmodium falciparum has coincided with an increase in mortality, particularly in children. One potential source of new drugs comes from plant natural products. We found that commercially available, pharmaceutical-grade eucalyptus oil and its principal component 1,8-cineole inhibited the growth and development of chloroquine-sensitive and chloroquine-resistant P. falciparum. This was true both when the oil was added directly to the parasite cultures and when cultures were exposed to the vapours. The development of the parasite was arrested at the early trophozoite stage, irrespective of when the oil was introduced. We used a new approach in which the concentration of monoterpenes actually taken up by the cultures was measured directly using HS-GC. We found that the critical concentration required to inhibit and kill the parasite did not adversely affect the host erythrocytes, placing it in the range suitable for drug development. Given the ready availability and existing quality control of eucalyptus oils, this may represent an economically viable adjunct to current antimalarial therapies. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Volatile constituents of the flowerheads of three Echinacea species cultivated in Iran

    FLAVOUR AND FRAGRANCE JOURNAL, Issue 2 2006
    Mohammad Hossein Mirjalili
Abstract Three medicinal species of the genus Echinacea (Asteraceae), i.e. E. purpurea, E. pallida and E. angustifolia, were cultivated in the experimental field of the Medicinal Plants and Drugs Research Institute of Shahid Beheshti University (Tehran, Iran). The essential oil of the flowerheads of the studied species was isolated by hydrodistillation. The essential oils were analyzed by GC and GC-MS. In total, 36, 30 and 36 constituents were identified and quantified in E. purpurea, E. pallida and E. angustifolia, respectively. Sesquiterpene hydrocarbons were the main group of compounds in E. purpurea (70.9%), E. angustifolia (70%) and E. pallida (62.6%). Germacrene-D was the principal component in all samples; its content was higher in E. purpurea (57%) than in E. pallida (51.4%) and E. angustifolia (49.6%). Monoterpene hydrocarbons were observed in the oils of E. purpurea (6.4%) and E. angustifolia (1.2%), but were completely absent from the E. pallida oil. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Pleiotropy and principal components of heritability combine to increase power for association analysis

    GENETIC EPIDEMIOLOGY, Issue 1 2008
    Lambertus Klei
    Abstract When many correlated traits are measured, the potential exists to discover the coordinated control of these traits via genotyped polymorphisms. A common statistical approach to this problem assesses the relationship between each phenotype and each single nucleotide polymorphism (SNP) individually (PHN), applying a Bonferroni correction for the effective number of independent tests conducted. Alternatively, one can apply a dimension reduction technique, such as estimation of principal components, and test for an association with the principal components of the phenotypes (PCP) rather than with the individual phenotypes. Building on the work of Lange and colleagues, we develop an alternative method based on the principal component of heritability (PCH). For each SNP, the PCH approach reduces the phenotypes to a single trait that has a higher heritability than any other linear combination of the phenotypes. As a result, the association between a SNP and the derived trait is often easier to detect than an association with any of the individual phenotypes or the PCP. When applied to unrelated subjects, PCH has a drawback: for each SNP it is necessary to estimate the vector of loadings that maximizes the heritability over all phenotypes. We develop a method of iterated sample splitting that uses one portion of the data for training and the remainder for testing. This cross-validation approach maintains type I error control and yet utilizes the data efficiently, resulting in a powerful test for association. Genet. Epidemiol. 2007. © 2007 Wiley-Liss, Inc. [source]
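    The PCP idea described above — reducing correlated phenotypes to principal components before testing each component for association with a SNP — can be sketched on synthetic data. This is a minimal illustration, not the authors' PCH method: the sample size, genotype coding and effect sizes are invented, and NumPy/scikit-learn are assumed.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical data: one biallelic SNP (coded 0/1/2) drives a latent factor
    # shared by three correlated phenotypes.
    snp = rng.integers(0, 3, size=n)
    shared = 0.5 * snp + rng.normal(size=n)
    phenotypes = np.column_stack(
        [shared + rng.normal(scale=0.5, size=n) for _ in range(3)])

    # PCP: reduce the phenotypes to principal components, then score each
    # component's association with the SNP (here via squared correlation).
    pcs = PCA(n_components=3).fit_transform(phenotypes)
    r2 = [np.corrcoef(pcs[:, k], snp)[0, 1] ** 2 for k in range(3)]

    # The shared genetic signal concentrates in the first component.
    print(r2)
    ```

    In a real analysis the correlation score would be replaced by a proper test statistic, but the dimension-reduction step is the same.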


    Shift in birch leaf metabolome and carbon allocation during long-term open-field ozone exposure

    GLOBAL CHANGE BIOLOGY, Issue 5 2007
    SARI KONTUNEN-SOPPELA
    Abstract Current and future ozone concentrations have the potential to reduce plant growth and increase carbon demand for defence and repair processes, which may reduce the carbon sink strength of forest trees in the long term. Still, there is limited understanding of the alterations in plant metabolism and the variation in ozone tolerance among tree species and genotypes. This paper therefore aims to study changes in the birch leaf metabolome due to long-term realistic ozone stress and to relate these shifts in metabolism to growth responses. Two European white birch (Betula pendula Roth) genotypes showing different ozone sensitivity were grown under 1.4–1.7 × ambient ozone in open-field conditions in Central Finland. After seven growing seasons, the trees were analysed for changes in the leaf metabolite profile, based on 339 low-molecular-weight compounds (including phenolics, polar and lipophilic compounds, and pigments), and related whole-tree growth responses. Genotype accounted for most of the variance in metabolite concentrations, while ozone concentration was the second principal component explaining the metabolome profile. The main ozone-caused changes included increases in quercetin phenolic compounds and compounds related to the leaf cuticular wax layer, whereas several compounds related to carbohydrate metabolism and to the function of chloroplast membranes and pigments (such as chlorophyll-related phytol derivatives) decreased. Some candidate compounds providing growth-related tolerance against ozone, such as the surface-wax-related squalene, 1-dotriacontanol and dotriacontane, were identified. This study indicates that current growth-based ozone risk assessment methods are inadequate, because they ignore ecophysiological impacts due to alterations in leaf chemistry. [source]


    Estimating the spatial distribution of available biomass in grazing forests with a satellite image: A preliminary study

    GRASSLAND SCIENCE, Issue 2 2005
    Michio Tsutsumi
    Abstract We tested whether available biomass in grazing forests could be estimated by analyzing a satellite image with field data. Our study site was situated in north-eastern Japan and was composed of coniferous forest differing in afforested years, multilayered coniferous forest and deciduous broadleaf forest. The available-biomass data collected in previous studies were used to analyze a Landsat Thematic Mapper (TM) image acquired in summer, and we tried to depict a map of the spatial distribution of available biomass in the forest. It was suggested that the analysis should be conducted separately in each of the multilayered coniferous, the other coniferous and the broadleaf forests. Regression analysis of the relationship between available biomass and each of several parameters showed that the first principal component computed from the reflectance of the six Landsat TM bands was the most appropriate parameter for estimating available biomass. The answer to the question 'Can the spatial distribution of available biomass in a forest be estimated with a satellite image?' is 'Yes, in coniferous forests'. We propose a procedure for depicting a precise map of the distribution of available biomass in a forest from analysis of a satellite image. [source]
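    The regression step described above — biomass regressed on the first principal component of multiband reflectance — amounts to principal component regression with one component. A minimal sketch on synthetic data (the six "bands", their weights and the noise levels are invented stand-ins, not the study's measurements; scikit-learn is assumed):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n_plots = 60

    # Hypothetical stand-in for six Landsat TM band reflectances: one latent
    # "greenness" signal drives all bands, plus per-band sensor noise.
    greenness = rng.uniform(0, 1, size=n_plots)
    bands = np.column_stack(
        [greenness * w + rng.normal(scale=0.05, size=n_plots)
         for w in (0.9, 0.8, 0.7, 0.6, 0.8, 0.5)])

    # Simulated available biomass increases with greenness.
    biomass = 2.0 + 3.0 * greenness + rng.normal(scale=0.2, size=n_plots)

    # Principal component regression: regress biomass on PC 1 of the bands.
    pc1 = PCA(n_components=1).fit_transform(bands)
    model = LinearRegression().fit(pc1, biomass)
    r2 = model.score(pc1, biomass)
    print(round(r2, 3))
    ```

    Because PC 1 pools the correlated bands into one predictor, it suppresses band-level noise relative to any single band.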


    Short-term spatial and temporal patterns of suspended sediment transfer in proglacial channels, Small River Glacier, Canada

    HYDROLOGICAL PROCESSES, Issue 9 2004
    John F. Orwin
    Abstract Alpine glacial basins are a significant source and storage area for sediment exposed by glacial retreat. Recent research has indicated that short-term storage and release of sediment in proglacial channels may control the pattern of suspended sediment transfer from these basins. Custom-built, continuously recording turbidimeters installed at a network of nine gauging sites were used to characterize spatial and temporal variability in suspended sediment transfer patterns for the entire proglacial area at Small River Glacier, British Columbia, Canada. Discharge and suspended sediment concentration were measured at 5 min intervals over the ablation season of 2000. Differences in suspended sediment transfer patterns were then extracted using multivariate statistics (principal component and cluster analysis). Results showed that each gauging station was dominated on c. 80% of days by diurnal sediment transfer patterns and 'low' suspended sediment concentrations. 'Irregular' transfer patterns were generally associated with 'high' sediment concentrations during snowmelt and rainfall events, resulting in the transfer of up to 70% of the total seasonal suspended sediment load at some gauging stations. Suspended sediment enrichment of up to 600% from channel storage release and extrachannel inputs occurred between the glacial front and the distal proglacial boundary. However, these patterns differed significantly between gauging stations, as determined by the location of the gauging station within the catchment and meteorological conditions. Overall, the proglacial area was the source for up to 80% of the total suspended sediment yield transferred from the Small River Glacier basin. These results confirmed that sediment stored and released in the proglacial area, in particular from proglacial channels, was controlling suspended sediment transfer patterns.
To characterize this control accurately requires multiple gauging stations with high frequency monitoring of suspended sediment concentration. Accurate characterization of this proglacial control on suspended sediment transfer may therefore aid interpretation of suspended sediment yield patterns from glacierized basins. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Processes governing river water quality identified by principal component analysis

    HYDROLOGICAL PROCESSES, Issue 16 2002
    I. Haag
    Abstract The present study demonstrates the usefulness of principal component analysis in condensing and interpreting multivariate time-series of water quality data. In a case study the water quality system of the lock-regulated part of the River Neckar (Germany) was analysed, with special emphasis on the oxygen budget. Pooled data of ten water quality parameters and discharge, which had been determined at six stations along a 200 km reach of the river between the years 1993 and 1998, were subjected to principal component analysis. The analysis yielded four stable principal components, explaining 72% of the total variance of the 11 parameters. The four components could be interpreted confidently in terms of underlying processes: biological activity, dilution by high discharge, seasonal effects and the influence of wastewater. From analysing the data of single stations separately, these processes were found to be active throughout the complete reach. Considering the oxygen budget of the river, the variance of biological activity, representing the counteracting processes of primary production and microbial degradation, was found to be most important. This principal component explained 79% of the observed variance of oxygen saturation. In contrast, the analysis of a reduced data set from the 1970s showed that oxygen saturation was then dominated by discharge and temperature variations. The findings indicate that the oxygen budget used to be governed directly by the emission of degradable matter, whereas nowadays eutrophication is most important for extreme oxygen concentrations. Therefore, controlling eutrophication has to be the primary goal, in order to mitigate the rare episodes of pronounced oxygen over- and undersaturation in the future. Copyright © 2002 John Wiley & Sons, Ltd. [source]
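    The core of the analysis above — PCA on standardized multivariate water quality data, with components judged by their share of explained variance — can be sketched generically. The latent-process structure, dimensions and noise below are invented for illustration (not the Neckar data); scikit-learn is assumed.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    n_samples, n_params = 300, 11

    # Hypothetical water-quality table: 11 parameters driven by a few latent
    # processes (e.g. biological activity, dilution, season) plus noise.
    latent = rng.normal(size=(n_samples, 4))
    loadings = rng.normal(size=(4, n_params))
    data = latent @ loadings + 0.5 * rng.normal(size=(n_samples, n_params))

    # Standardize (i.e. PCA on the correlation matrix), then inspect how much
    # of the total variance the leading components condense.
    X = StandardScaler().fit_transform(data)
    pca = PCA().fit(X)
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    print(cumvar[:4])  # variance share carried by the first four PCs
    ```

    Interpreting each retained component in terms of an underlying process, as the study does, then rests on examining its loadings across the original parameters.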


    Can principal component analysis provide atmospheric circulation or teleconnection patterns?

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 6 2008
    Rosa H. Compagnucci
    Abstract This investigation examines principal component (PC) methodology and the interpretation of the displays, such as eigenvalue magnitude, loadings and scores, which the methodology provides. The key question posed is, to what extent can S- and T-mode decompositions of a dispersion matrix yield the kinds of interpretations placed on them typically? In particular, a series of experiments are designed based on various amalgamations of three distinct synoptic flow patterns. Since these flow patterns are known, a priori, this allows testing via subtle alterations of the methodology to determine whether there is equivalence between the S- and T-mode decompositions, the degree to which the flow patterns or teleconnections can be recovered by each mode, and the interpretation of each mode. The findings are examined in two contexts: how well they classify the flow patterns, and how well they provide meaningful teleconnections. Both correlation and covariance dispersion matrices are used to determine differences that arise from the standardization. Additionally, unrotated and rotated results are included. By examining a variety of commonly applied methodologies, the results hold for a wider range of studies. Key findings are that eigenvalue degeneracy can influence one mode (but not the other) or both modes for any set of flow patterns resulting in pattern intermixing at times. Similarly, such degeneracy is found in one or both dispersion matrices. Congruence coefficients are used to provide a measure of validity by matching the PC loadings to the parent correlations and covariances. This matching is vital as the loadings exhibit dipoles that have been interpreted historically as physically meaningful, but the present work indicates they may arise purely through the methodology. 
Overall, we observe that S-mode results can be interpreted as teleconnection patterns and T-mode as flow patterns for well-designed analyses that are meticulously scrutinized for methodological problems. Copyright © 2007 Royal Meteorological Society [source]
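    The S-mode/T-mode distinction discussed above comes down to which axis of the space-time data matrix is treated as the variables. A minimal sketch on a synthetic "flow field" (the spatial patterns, dimensions and noise are invented; scikit-learn's PCA stands in for the eigen-decomposition of the dispersion matrix):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    n_times, n_points = 120, 40

    # Hypothetical pressure field: two known spatial patterns that wax and
    # wane independently in time, plus noise.
    pattern_a = np.sin(np.linspace(0, np.pi, n_points))
    pattern_b = np.cos(np.linspace(0, 2 * np.pi, n_points))
    field = (rng.normal(size=(n_times, 1)) * pattern_a
             + rng.normal(size=(n_times, 1)) * pattern_b
             + 0.1 * rng.normal(size=(n_times, n_points)))

    # S-mode: grid points are the variables, times are the observations;
    # loadings are spatial patterns (read as teleconnections).
    s_mode = PCA(n_components=2).fit(field)

    # T-mode: times are the variables, grid points are the observations;
    # loadings are temporal patterns, scores map the flow types.
    t_mode = PCA(n_components=2).fit(field.T)

    print(s_mode.explained_variance_ratio_, t_mode.explained_variance_ratio_)
    ```

    The caution in the abstract applies here too: when the leading eigenvalues are close (degeneracy), the recovered loadings can mix the two planted patterns.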


    Formation of Lipid Emulsions and Clear Gels by Liquid Crystal Emulsification

    INTERNATIONAL JOURNAL OF COSMETIC SCIENCE, Issue 1 2007
    T. Suzuki
    Recently developed emulsion technologies for the formation of fine emulsions, lipid emulsions and clear gels by liquid crystal emulsification were reviewed. As basic information on liquid crystal emulsification, the structures and characteristic behaviours of lyotropic liquid crystals were summarized. Formation of a liquid crystalline phase is often seen in emulsions and biological systems. The significance of liquid crystal formation during emulsification was analysed by comparing the states and stabilities of emulsions prepared by different processes. Uses of liquid crystals for the formation of characteristic emulsions and gels were also discussed. In liquid crystal emulsification, an oil phase is dispersed directly into a lamellar liquid-crystalline phase composed of surfactant, glycerol and water to prepare a gel-like oil-in-liquid crystal emulsion. This is followed by dilution with the remaining water to produce an emulsion. From the phase behaviour during emulsification and analysis of the local motion of the liquid crystal membrane by fluorometry, it was confirmed that the interaction between the surfactant and a polyol molecule such as glycerol promotes hydrogen bonding and enhances the strength of the lamellar liquid crystal membranes, which results in the formation of oil-in-liquid crystal emulsions. The interaction between the liquid crystal and the oil was analysed from changes in the molecular motion of the membrane at the oil–liquid crystal interface, using the spin-label technique of electron spin resonance (ESR). The fluidity of the liquid crystal membrane did not change when oil was added, and therefore oil-in-liquid crystal emulsions of various oils could be prepared by an identical process. This independence of the liquid crystal membrane from the oil gives liquid crystal emulsification its unique versatility: it can be used for oils of various polarities and different molecular constituents.
    When a self-organizing artificial stratum corneum lipid containing pseudo-ceramide was used as the principal component of the oil, a multilamellar emulsion with a concentric lamellar structure was formed. The multilamellar emulsion supplements the physiological function of the stratum corneum by the same mechanism as natural intercellular lipids. High-pressure treatment of the lipid emulsion produced a gel-like emulsion crystal, in which homogeneous nanoemulsion droplets were arranged in a hexagonal array. This review paper was presented at the Conference of the Asian Societies of Cosmetic Scientists 2005 in Bangkok. [source]


    Movement trajectories and habitat partitioning of small mammals in logged and unlogged rain forests on Borneo

    JOURNAL OF ANIMAL ECOLOGY, Issue 5 2006
    KONSTANS WELLS
    Summary
    1. Non-volant animals in tropical rain forests differ in their ability to exploit the habitat above the forest floor and also in their response to habitat variability. It is predicted that specific movement trajectories are determined both by intrinsic factors such as ecological specialization, morphology and body size, and by structural features of the surrounding habitat such as undergrowth and the availability of supportive structures.
    2. We applied spool-and-line tracking to describe the movement trajectories and habitat segregation of eight species of small mammals from an assemblage of Muridae, Tupaiidae and Sciuridae in the rain forest of Borneo, following a total of 13 525 m of path. We also analysed specific changes in the movement patterns of the small mammals in relation to habitat stratification between logged and unlogged forests. Variables related to the climbing activity of the tracked species, as well as the supportive structures of the vegetation and undergrowth density, were measured along their tracks.
    3. Movement patterns of the small mammals differed significantly between species. Most similarities were found in congeneric species that converged strongly in body size and morphology. All species were affected in their movement patterns by the altered forest structure in logged forests, with most differences found in Leopoldamys sabanus. However, the large proportions of short step lengths found in all species for both forest types, and similar path tortuosity, suggest that the main movement strategies of the small mammals were not influenced by logging but comprised generally a response to the heterogeneous habitat, as opposed to the random movement strategies predicted for homogeneous environments.
    4. Overall shifts in microhabitat use showed no coherent trend among species. Multivariate (principal component) analysis revealed contrasting trends for convergent species, in particular for Maxomys rajah and M. surifer as well as for Tupaia longipes and T. tana, suggesting that each species was uniquely affected in its movement trajectories by a multiple set of environmental and intrinsic features. [source]


    Endomyocardial biopsy derived adherent proliferating cells,A potential cell source for cardiac tissue engineering

    JOURNAL OF CELLULAR BIOCHEMISTRY, Issue 3 2010
    Marion Haag
    Abstract Heart diseases are a leading cause of morbidity and mortality. Cardiac stem cells (CSC) are considered candidates for cardiac-directed cell therapies. However, clinical translation is hampered because their isolation and expansion is complex. We describe a population of human cardiac-derived adherent proliferating (CAP) cells that can be reliably and efficiently isolated and expanded from endomyocardial biopsies (0.1 cm³). Growth kinetics revealed a mean cell doubling time of 49.9 h and a high cell number of 2.54 × 10⁷ in passage 3. Microarray analysis directed at investigating the gene expression profile of human CAP cells demonstrated the absence of the hematopoietic cell markers CD34 and CD45, and of CD90, which is expressed on mesenchymal stem cells (MSC) and fibroblasts. These data were confirmed by flow cytometry analysis. CAP cells could not be differentiated into adipocytes, osteoblasts, chondrocytes, or myoblasts, demonstrating the absence of multilineage potential. Moreover, despite the expression of heart muscle markers like α-sarcomeric actin and cardiac myosin, CAP cells cannot be differentiated into cardiomyocytes. Regarding functionality, CAP cells were especially positive for many genes involved in angiogenesis, like angiopoietin-1, VEGF, KDR, and neuropilins. Globally, principal component and hierarchical clustering analysis, and comparison with microarray data from many undifferentiated and differentiated reference cell types, revealed a unique identity of CAP cells. In conclusion, we have identified a unique cardiac tissue-derived cell type that can be isolated and expanded from endomyocardial biopsies and which presents a potential cell source for cardiac repair. Results indicate that these cells support angiogenesis rather than cardiomyocyte differentiation. J. Cell. Biochem. 109: 564–575, 2010. © 2009 Wiley-Liss, Inc. [source]
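    The analysis pattern used above — principal components for a low-dimensional view of expression profiles plus hierarchical clustering to group cell types — can be sketched generically. The expression matrix below is synthetic (three invented "cell types", not the authors' microarray data); SciPy and scikit-learn are assumed.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)

    # Hypothetical expression matrix: 30 samples (three cell types, 10 each)
    # by 50 genes, with type-specific mean expression plus noise.
    centers = rng.normal(scale=3.0, size=(3, 50))
    samples = np.vstack(
        [centers[i] + rng.normal(size=(10, 50)) for i in range(3)])

    # PCA gives a 2-D view of the samples; Ward hierarchical clustering
    # recovers the group structure from the full profiles.
    scores = PCA(n_components=2).fit_transform(samples)
    labels = fcluster(linkage(samples, method='ward'), t=3,
                      criterion='maxclust')

    print(sorted(set(labels)))
    ```

    A distinct cell type, as claimed for CAP cells, would show up as its samples forming their own cluster, well separated from the reference types in the score plot.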


    Hyperspectral imaging combined with principal component analysis for bruise damage detection on white mushrooms (Agaricus bisporus)

    JOURNAL OF CHEMOMETRICS, Issue 3-4 2008
    A. A. Gowen
    Abstract Hyperspectral imaging (HSI) combines conventional imaging and spectroscopy to simultaneously acquire both spatial and spectral information from an object. This technology has recently emerged as a powerful process analytical tool for rapid, non-contact and non-destructive food analysis. In this study, the potential application of HSI for damage detection on the caps of white mushrooms (Agaricus bisporus) was investigated. Mushrooms were damaged by controlled vibration to simulate damage caused by transportation. Hyperspectral images were obtained using a pushbroom line-scanning HSI instrument, operating in the wavelength range of 400–1000 nm with a spectroscopic resolution of 5 nm. The effective resolution of the CCD detector was 580 × 580 pixels at 12 bits. Two data reduction methods were investigated: in the first, principal component analysis (PCA) was applied to the hypercube of each sample, and the second principal component (PC 2) scores image was used to identify bruise-damaged regions on the mushroom surface; in the second method, PCA was applied to a dataset comprising average spectra from regions of normal and bruise-damaged tissue. In this case it was observed that normal and bruised tissue were separable along the resultant first principal component (PC 1) axis. Multiplying the PC 1 eigenvector by the hypercube data allowed reduction of the hypercube to a 2-D image, which showed maximal contrast between normal and bruise-damaged tissue. The second method performed better than the first when applied to a set of independent mushroom samples. The results from this study could be used for the development of a non-destructive monitoring system for rapid detection of damaged mushrooms on the processing line. Copyright © 2008 John Wiley & Sons, Ltd. [source]
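    The second data reduction method described above — fitting PCA to average spectra of normal and bruised tissue, then multiplying the hypercube by the PC 1 eigenvector to obtain a 2-D score image — can be sketched on a toy hypercube. The spectra, dimensions and bruise region below are invented stand-ins (far smaller than the 580 × 580 instrument data); NumPy and scikit-learn are assumed.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    rows, cols, n_bands = 64, 64, 20  # toy hypercube

    # Hypothetical spectra: bruised pixels (a central square) show a
    # spectral peak shifted relative to normal tissue.
    wavelengths = np.linspace(0, 1, n_bands)
    normal_spec = np.exp(-((wavelengths - 0.3) ** 2) / 0.02)
    bruise_spec = np.exp(-((wavelengths - 0.6) ** 2) / 0.02)
    cube = np.tile(normal_spec, (rows, cols, 1))
    cube[20:44, 20:44, :] = bruise_spec
    cube += 0.02 * rng.normal(size=cube.shape)

    # Fit PCA on the two class-average spectra, then project every pixel's
    # spectrum onto the PC 1 eigenvector to flatten the cube to a 2-D image.
    mean_spectra = np.vstack(
        [cube[:10, :10].reshape(-1, n_bands).mean(0),    # normal region
         cube[25:39, 25:39].reshape(-1, n_bands).mean(0)])  # bruised region
    pca = PCA(n_components=1).fit(mean_spectra)
    score_image = (cube.reshape(-1, n_bands) @ pca.components_[0])
    score_image = score_image.reshape(rows, cols)

    # Bruised and normal pixels separate strongly along PC 1.
    contrast = abs(float(score_image[32, 32] - score_image[5, 5]))
    print(round(contrast, 2))
    ```

    Thresholding the score image would then segment the bruised region, which is the basis for the on-line detection system the authors propose.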


    Interpolative biplots applied to principal component analysis and canonical correlation analysis

    JOURNAL OF CHEMOMETRICS, Issue 11 2003
    M. Rui Alves
    Abstract The multivariate statistical analysis of cis and trans isomers of fatty acid profiles of eight margarine brands obtained by HRGC/FID/capillary column was carried out based on biplots applied to principal component analysis (PCA) and canonical correlation analysis (CCA). It is shown that while predictive biplots are the best choice for interpretation purposes, interpolative biplots are very useful for classification of new observations that were not used for the construction of the principal component or canonical dimension axes. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Comparison of genetic (co)variance matrices within and between Scabiosa canescens and S. columbaria

    JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 5 2000
    Waldmann
    In the current study, we used bootstrap analyses and the common principal component (CPC) method of Flury (1988) to estimate and compare the G -matrix of Scabiosa columbaria and S. canescens populations. We found three major patterns in the G -matrices: (i) the magnitude of the (co)variances was more variable among characters than among populations, (ii) different populations showed high (co)variance for different characters, and (iii) there was a tendency for S. canescens to have higher genetic (co)variances than S. columbaria. The hypothesis of equal G -matrices was rejected in all comparisons and there was no evidence that the matrices differed by a proportional constant in any of the analyses. The two ,species matrices' were found to be unrelated, both for raw data and data standardized over populations, and there was significant between-population variation in the G -matrix in both species. Populations of S. canescens showed conservation of structure (principal components) in their G -matrices, contrasting with the lack of common structure among the S. columbaria matrices. Given these observations and the results from previous studies, we propose that selection may be responsible for some of the variation between the G -matrices, at least in S. columbaria and at the between-species level. [source]