Current Technology

Selected Abstracts


Fetal Mouse Imaging Using Echocardiography: A Review of Current Technology

ECHOCARDIOGRAPHY, Issue 10 2006
Christopher F. Spurney M.D.
Advances in genetic research have led to the need for phenotypic analysis of small animal models. However, these genetic alterations, especially when they affect the cardiovascular system, often result in fetal or perinatal death. Noninvasive ultrasound imaging is an ideal method for detecting and studying such congenital malformations, as it allows early recognition of abnormalities in the living fetus, and the progression of disease can be followed in utero with longitudinal studies. Two platforms for fetal mouse echocardiography exist: clinical systems with 15-MHz phased-array transducers and research systems with 20–55-MHz mechanical transducers. The clinical ultrasound system has limited two-dimensional (2D) resolution (axial resolution of 440 μm), but the availability of color and spectral Doppler allows quick interrogation of blood flow, facilitating the detection of structural abnormalities. M-mode imaging further provides important functional data, although the proper imaging planes are often difficult to obtain. In comparison, the research biomicroscope system has significantly improved 2D resolution (axial resolution of 28 μm). Spectral Doppler imaging is also available, but in the absence of color Doppler, imaging times are increased and the detection of flow abnormalities is more difficult. M-mode imaging is available and equivalent to the clinical ultrasound system. Overall, the research system, given its higher 2D resolution, is best suited for in-depth analysis of mouse fetal cardiovascular structure and function, while the clinical ultrasound systems, equipped with phased-array transducers and color Doppler imaging, are ideal for high-throughput fetal cardiovascular screens.


Metabolomics: Current technologies and future trends

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 17 2006
Katherine Hollywood
Abstract The ability to sequence whole genomes has taught us that our knowledge with respect to gene function is rather limited, with typically 30–40% of open reading frames having no known function. Thus, within the life sciences there is a need for determination of the biological function of these so-called orphan genes, some of which may be molecular targets for therapeutic intervention. The search for specific mRNAs, proteins, or metabolites that can serve as diagnostic markers has also intensified, as these biomarkers may be useful in following and predicting disease progression or response to therapy. Functional analyses have become increasingly popular. They include investigations at the level of gene expression (transcriptomics), protein translation (proteomics) and, more recently, the metabolite network (metabolomics). This article provides an overview of metabolomics and discusses its complementary role with transcriptomics and proteomics, and within systems biology. It highlights how metabolome analyses are conducted and how the highly complex data that are generated are analysed. Non-invasive footprinting analysis is also discussed, as this has many applications to in vitro cell systems. Finally, for studying biotic or abiotic stresses on animals, plants or microbes, we believe that metabolomics could very easily be applied to large populations, because this approach tends to be of higher throughput and generally lower cost than transcriptomics and proteomics, whilst also providing indications of which area of metabolism may be affected by external perturbation.


Testing association between disease and multiple SNPs in a candidate gene

GENETIC EPIDEMIOLOGY, Issue 5 2007
W. James Gauderman
Abstract Current technology allows investigators to obtain genotypes at multiple single nucleotide polymorphisms (SNPs) within a candidate locus. Many approaches have been developed for using such data in a test of association with disease, ranging from genotype-based to haplotype-based tests. We develop a new approach that involves two basic steps. In the first step, we use principal components (PCs) analysis to compute combinations of SNPs that capture the underlying correlation structure within the locus. The second step uses the PCs directly in a test of disease association. The PC approach captures linkage-disequilibrium information within a candidate region, but does not require the difficult computing implicit in a haplotype analysis. We demonstrate by simulation that the PC approach is typically at least as powerful as both genotype- and haplotype-based approaches. We also analyze association between respiratory symptoms in children and four SNPs in the Glutathione-S-Transferase P1 locus, based on data from the Children's Health Study. We observe stronger evidence of an association using the PC approach (p = 0.044) than using either a genotype-based (p = 0.13) or haplotype-based (p = 0.052) approach. Genet. Epidemiol. 2007. © 2007 Wiley-Liss, Inc.
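The two-step PC approach lends itself to a compact illustration. The sketch below is a minimal example and not the authors' implementation: it extracts principal components from a genotype matrix (individuals by SNPs, coded 0/1/2) and tests their joint association with a binary outcome by a likelihood-ratio test. The simulated data, the number of retained components and all names are assumptions made for illustration.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy data: 500 individuals, 4 correlated SNPs in one candidate locus (assumed for illustration).
n = 500
latent = rng.binomial(2, 0.3, size=n)
G = np.column_stack([np.clip(latent + rng.binomial(1, 0.1, n) - rng.binomial(1, 0.1, n), 0, 2)
                     for _ in range(4)]).astype(float)
logit_p = -1.0 + 0.4 * G[:, 0]                       # disease risk driven by the first SNP only
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Step 1: principal components of the standardized genotype matrix capture the LD structure.
Gs = (G - G.mean(0)) / G.std(0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Gs, rowvar=False))
order = np.argsort(eigvals)[::-1]
k = 2                                                # retain enough PCs to capture most variance (assumed)
PCs = Gs @ eigvecs[:, order[:k]]

# Step 2: likelihood-ratio test of the k PCs jointly in a logistic model for disease.
null = sm.Logit(y, np.ones((n, 1))).fit(disp=0)
full = sm.Logit(y, sm.add_constant(PCs)).fit(disp=0)
lrt = 2 * (full.llf - null.llf)
print(f"LRT = {lrt:.2f}, df = {k}, p = {stats.chi2.sf(lrt, k):.4f}")
```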


European Mathematical Genetics Meeting, Heidelberg, Germany, 12th–13th April 2007

ANNALS OF HUMAN GENETICS, Issue 4 2007
Article first published online: 28 MAY 200
Saurabh Ghosh 11 Indian Statistical Institute, Kolkata, India High correlations between two quantitative traits may be due either to common genetic factors, common environmental factors, or a combination of both. In this study, we develop statistical methods to extract the contribution of a common QTL to the total correlation between the components of a bivariate phenotype. Using data on bivariate phenotypes and marker genotypes for sib-pairs, we propose a test for linkage between a common QTL and a marker locus based on the conditional cross-sib trait correlations (trait 1 of sib 1 versus trait 2 of sib 2, and conversely) given the identity-by-descent sharing at the marker locus. The null hypothesis cannot be rejected unless there exists a common QTL. We use Monte-Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study on the Genetics of Alcoholism project.

Rémi Kazma 1 , Catherine Bonaïti-Pellié 1 , Emmanuelle Génin 12 INSERM UMR-S535 and Université Paris Sud, Villejuif, 94817, France Keywords: Gene-environment interaction, sibling recurrence risk, exposure correlation Gene-environment interactions may play important roles in complex disease susceptibility, but their detection is often difficult. Here we show how gene-environment interactions can be detected by investigating the degree of familial aggregation according to the exposure of the probands. In the case of gene-environment interaction, the distribution of genotypes of affected individuals, and consequently the risk in relatives, depends on their exposure. We developed a test comparing the risks in sibs according to the proband exposure. To evaluate the properties of this new test, we derived the formulas for calculating the expected risks in sibs according to the exposure of probands for various values of exposure frequency, relative risk due to exposure alone, frequencies of latent susceptibility genotypes, genetic relative risks and interaction coefficients. We find that the ratio of risks when the proband is exposed and not exposed is a good indicator of the interaction effect. We evaluate the power of the test for various sample sizes of affected individuals. We conclude that this test is valuable for diseases with moderate familial aggregation, but only when the role of the exposure has been clearly established. Since a correlation for exposure among sibs might lead to a difference in risks among sibs in the different proband exposure strata, we also add an exposure correlation coefficient to the model. Interestingly, we find that when this correlation is correctly accounted for, the power of the test is not decreased and might even be significantly increased.

Andrea Callegaro 1 , Hans J.C. Van Houwelingen 1 , Jeanine Houwing-Duistermaat 13 Dept. of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands Keywords: Survival analysis, age at onset, score test, linkage analysis Nonparametric linkage (NPL) analysis compares the identical-by-descent (IBD) sharing in sibling pairs to the expected IBD sharing under the hypothesis of no linkage. Often information is available on the marginal cumulative hazards (for example, breast cancer incidence curves). Our aim is to extend the NPL methods by taking into account the age at onset of selected sibling pairs using these known marginal hazards.
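As a toy illustration of the idea behind the cross-sib trait correlation test above (not the authors' method), the following simulation generates sib-pair bivariate phenotypes influenced by a shared QTL and shows that the correlation between trait 1 of sib 1 and trait 2 of sib 2 increases with IBD sharing at the QTL. The allele frequency, effect sizes and sample size are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, b1, b2 = 50_000, 0.3, 0.6, 0.6        # sib pairs, QTL allele frequency, effects on traits 1 and 2

# IBD sharing of full sibs at the QTL: 0, 1 or 2 alleles with probabilities 1/4, 1/2, 1/4.
ibd = rng.choice([0, 1, 2], size=n, p=[0.25, 0.5, 0.25])
shared = rng.binomial(ibd, p)                # allele count on the shared chromosomes
g1 = shared + rng.binomial(2 - ibd, p)       # sib 1 additive genotype score
g2 = shared + rng.binomial(2 - ibd, p)       # sib 2 additive genotype score

# Bivariate phenotypes: the common QTL influences both traits; residuals are independent.
trait1_sib1 = b1 * g1 + rng.normal(size=n)
trait2_sib2 = b2 * g2 + rng.normal(size=n)

# Cross-sib, cross-trait correlation conditional on IBD sharing at the marker (= QTL here).
for k in (0, 1, 2):
    mask = ibd == k
    r = np.corrcoef(trait1_sib1[mask], trait2_sib2[mask])[0, 1]
    print(f"IBD = {k}: corr(trait 1 of sib 1, trait 2 of sib 2) = {r:.3f}")
# Without a common QTL the three correlations coincide; an increasing trend with IBD signals linkage.
```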
Li and Zhong (2002) proposed a (retrospective) likelihood ratio test based on an additive frailty model for genetic linkage analysis. From their model we derive a score statistic for selected samples which turns out to be a weighed NPL method. The weights depend on the marginal cumulative hazards and on the frailty parameter. A second approach is based on a simple gamma shared frailty model. Here, we simply test whether the score function of the frailty parameter depends on the excess IBD. We compare the performance of these methods using simulated data. Céline Bellenguez 1 , Carole Ober 2 , Catherine Bourgain 14 INSERM U535 and University Paris Sud, Villejuif, France 5 Department of Human Genetics, The University of Chicago, USA Keywords: Linkage analysis, linkage disequilibrium, high density SNP data Compared with microsatellite markers, high density SNP maps should be more informative for linkage analyses. However, because they are much closer, SNPs present important linkage disequilibrium (LD), which biases classical nonparametric multipoint analyses. This problem is even stronger in population isolates where LD extends over larger regions with a more stochastic pattern. We investigate the issue of linkage analysis with a 500K SNP map in a large and inbred 1840-member Hutterite pedigree, phenotyped for asthma. Using an efficient pedigree breaking strategy, we first identified linked regions with a 5cM microsatellite map, on which we focused to evaluate the SNP map. The only method that models LD in the NPL analysis is limited in both the pedigree size and the number of markers (Abecasis and Wigginton, 2005) and therefore could not be used. Instead, we studied methods that identify sets of SNPs with maximum linkage information content in our pedigree and no LD-driven bias. Both algorithms that directly remove pairs of SNPs in high LD and clustering methods were evaluated. Null simulations were performed to control that Zlr calculated with the SNP sets were not falsely inflated. Preliminary results suggest that although LD is strong in such populations, linkage information content slightly better than that of microsatellite maps can be extracted from dense SNP maps, provided that a careful marker selection is conducted. In particular, we show that the specific LD pattern requires considering LD between a wide range of marker pairs rather than only in predefined blocks. Peter Van Loo 1,2,3 , Stein Aerts 1,2 , Diether Lambrechts 4,5 , Bernard Thienpont 2 , Sunit Maity 4,5 , Bert Coessens 3 , Frederik De Smet 4,5 , Leon-Charles Tranchevent 3 , Bart De Moor 2 , Koen Devriendt 3 , Peter Marynen 1,2 , Bassem Hassan 1,2 , Peter Carmeliet 4,5 , Yves Moreau 36 Department of Molecular and Developmental Genetics, VIB, Belgium 7 Department of Human Genetics, University of Leuven, Belgium 8 Bioinformatics group, Department of Electrical Engineering, University of Leuven, Belgium 9 Department of Transgene Technology and Gene Therapy, VIB, Belgium 10 Center for Transgene Technology and Gene Therapy, University of Leuven, Belgium Keywords: Bioinformatics, gene prioritization, data fusion The identification of genes involved in health and disease remains a formidable challenge. Here, we describe a novel bioinformatics method to prioritize candidate genes underlying pathways or diseases, based on their similarity to genes known to be involved in these processes. It is freely accessible as an interactive software tool, ENDEAVOUR, at http://www.esat.kuleuven.be/endeavour. 
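ENDEAVOUR's final step fuses the per-data-source rankings of the candidate genes into one global ranking using order statistics, as the abstract goes on to describe. The sketch below is a simplified stand-in for that fusion step: it aggregates illustrative per-source scores with a rank product rather than the tool's actual order-statistic formula, and the gene names, scores and aggregation rule are all assumptions.

```python
import numpy as np

# Toy prioritization: 6 candidate genes scored by 3 data sources (e.g. expression, text mining,
# sequence similarity); higher = more similar to the training genes. All numbers are made up.
genes = ["geneA", "geneB", "geneC", "geneD", "geneE", "geneF"]
scores = np.array([
    [0.9, 0.2, 0.7],
    [0.4, 0.8, 0.6],
    [0.1, 0.1, 0.2],
    [0.7, 0.9, 0.9],
    [0.3, 0.5, 0.1],
    [0.2, 0.4, 0.3],
])
n_genes, n_sources = scores.shape

# One ranking per data source: rank 1 = most similar to the training set for that source.
ranks = np.argsort(np.argsort(-scores, axis=0), axis=0) + 1
rank_ratios = ranks / n_genes

# Fuse the per-source rankings; here via the geometric mean of rank ratios (a rank product).
fused = rank_ratios.prod(axis=1) ** (1.0 / n_sources)

for gene, stat in sorted(zip(genes, fused), key=lambda t: t[1]):
    print(f"{gene}: fused rank statistic = {stat:.3f}")
```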
Unlike previous methods, ENDEAVOUR generates distinct prioritizations from multiple heterogeneous data sources, which are then integrated, or fused, into one global ranking using order statistics. ENDEAVOUR prioritizes candidate genes in a three-step process. First, information about a disease or pathway is gathered from a set of known "training" genes by consulting multiple data sources. Next, the candidate genes are ranked based on similarity with the training properties obtained in the first step, resulting in one prioritized list for each data source. Finally, ENDEAVOUR fuses each of these rankings into a single global ranking, providing an overall prioritization of the candidate genes. Validation of ENDEAVOUR revealed it was able to efficiently prioritize 627 genes in disease data sets and 76 genes in biological pathway sets, identify candidates of 16 mono- or polygenic diseases, and discover regulatory genes of myeloid differentiation. Furthermore, the approach identified YPEL1 as a novel gene involved in craniofacial development from a 2-Mb chromosomal region, deleted in some patients with DiGeorge-like birth defects. Finally, we are currently evaluating a pipeline combining array-CGH, ENDEAVOUR and in vivo validation in zebrafish to identify novel genes involved in congenital heart defects. Mark Broom 1 , Graeme Ruxton 2 , Rebecca Kilner 311 Mathematics Dept., University of Sussex, UK 12 Division of Environmental and Evolutionary Biology, University of Glasgow, UK 13 Department of Zoology, University of Cambridge, UK Keywords: Evolutionarily stable strategy, parasitism, asymmetric game Brood parasites chicks vary in the harm that they do to their companions in the nest. In this presentation we use game-theoretic methods to model this variation. Our model considers hosts which potentially abandon single nestlings and instead choose to re-allocate their reproductive effort to future breeding, irrespective of whether the abandoned chick is the host's young or a brood parasite's. The parasite chick must decide whether or not to kill host young by balancing the benefits from reduced competition in the nest against the risk of desertion by host parents. The model predicts that three different types of evolutionarily stable strategies can exist. (1) Hosts routinely rear depleted broods, the brood parasite always kills host young and the host never then abandons the nest. (2) When adult survival after deserting single offspring is very high, hosts always abandon broods of a single nestling and the parasite never kills host offspring, effectively holding them as hostages to prevent nest desertion. (3) Intermediate strategies, in which parasites sometimes kill their nest-mates and host parents sometimes desert nests that contain only a single chick, can also be evolutionarily stable. We provide quantitative descriptions of how the values given to ecological and behavioral parameters of the host-parasite system influence the likelihood of each strategy and compare our results with real host-brood parasite associations in nature. Martin Harrison 114 Mathematics Dept, University of Sussex, UK Keywords: Brood parasitism, games, host, parasite The interaction between hosts and parasites in bird populations has been studied extensively. Game theoretical methods have been used to model this interaction previously, but this has not been studied extensively taking into account the sequential nature of this game. 
We consider a model allowing the host and parasite to make a number of decisions, which depend on a number of natural factors. The host lays an egg; a parasite bird will arrive at the nest with a certain probability and then chooses to destroy a number of the host eggs and lay one of its own. With some destruction occurring, either naturally or through the actions of the parasite, the host chooses to continue, eject an egg (hoping to eject the parasite) or abandon the nest. Once the eggs have hatched, the game passes to the parasite chick versus the host. The chick chooses to destroy or eject a number of eggs. The final decision is made by the host, choosing whether to raise or abandon the chicks that are in the nest. We consider various natural parameters and probabilities which influence these decisions. We then use this model to look at real-world situations of the interactions of the Reed Warbler and two different parasites, the Common Cuckoo and the Brown-Headed Cowbird. These two parasites differ in the way they parasitize the nests of their hosts, and the hosts in turn react differently to these parasites.

Arne Jochens 1 , Amke Caliebe 2 , Uwe Roesler 1 , Michael Krawczak 215 Mathematical Seminar, University of Kiel, Germany 16 Institute of Medical Informatics and Statistics, University of Kiel, Germany Keywords: Stepwise mutation model, microsatellite, recursion equation, temporal behaviour We consider the stepwise mutation model which occurs, e.g., at microsatellite loci. Let X(t,i) denote the allelic state of individual i at time t. We compute expectation, variance and covariance of X(t,i), i = 1, ..., N, and provide a recursion equation for P(X(t,i) = z). Because the variance of X(t,i) goes to infinity as t grows, for the description of the temporal behaviour we consider the scaled process X(t,i) - X(t,1). The results furnish a better understanding of the behaviour of the stepwise mutation model and may in future be used to derive tests for neutrality under this model.

Paul O'Reilly 1 , Ewan Birney 2 , David Balding 117 Statistical Genetics, Department of Epidemiology and Public Health, Imperial College London, UK 18 European Bioinformatics Institute, EMBL, Cambridge, UK Keywords: Positive selection, recombination rate, LD, genome-wide, natural selection In recent years, efforts to develop population genetics methods that estimate rates of recombination and levels of natural selection in the human genome have intensified. However, since the two processes have an intimately related impact on genetic variation, their inference is vulnerable to confounding. Genomic regions subject to recent selection are likely to have a relatively recent common ancestor and consequently less opportunity for historical recombinations that are detectable in contemporary populations. Here we show that selection can reduce the population-based recombination rate estimate substantially. In genome-wide studies for detecting selection we observe a tendency to highlight loci that are subject to low levels of recombination. We find that the outlier approach commonly adopted in such studies may have low power unless variable recombination is accounted for. We introduce a new genome-wide method for detecting selection that exploits the sensitivity to recent selection of methods for estimating recombination rates, while accounting for variable recombination using pedigree data.
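The temporal behaviour described in the stepwise mutation abstract above can be checked by simulation. The sketch below (illustrative parameters only, not the authors' derivation) evolves replicate Wright-Fisher populations under symmetric single-step mutation and reports the ensemble variance of X(t,i), which grows roughly like the mutation rate times t, alongside the variance of the scaled process X(t,i) - X(t,1), which stabilises once shared ancestry dominates.

```python
import numpy as np

rng = np.random.default_rng(7)
N, mu, R = 100, 0.05, 300                 # individuals per population, mutation rate, replicate populations
checkpoints = (200, 400, 800)

pops = np.zeros((R, N), dtype=int)         # X(0, i) = 0 in every replicate
for t in range(1, max(checkpoints) + 1):
    # Wright-Fisher resampling of parents within each replicate, then symmetric +/-1 mutations.
    idx = rng.integers(0, N, size=(R, N))
    pops = np.take_along_axis(pops, idx, axis=1)
    pops = pops + rng.choice([-1, 0, 1], size=(R, N), p=[mu / 2, 1 - mu, mu / 2])
    if t in checkpoints:
        var_x = pops[:, 0].var()                            # ensemble variance of X(t,i): grows ~ mu * t
        var_scaled = (pops[:, 1:] - pops[:, [0]]).var()     # variance of X(t,i) - X(t,1): stabilises
        print(f"t={t}: Var X(t,i) = {var_x:.1f} (mu*t = {mu * t:.1f}), "
              f"Var X(t,i)-X(t,1) = {var_scaled:.1f}")
```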
Through simulations we demonstrate the high power of the Ped/Pop approach to discriminate between neutral and adaptive evolution, particularly in the context of choosing outliers from a genome-wide distribution. Although methods have been developed showing good power to detect selection 'in action', the corresponding window of opportunity is small. In contrast, the power of the Ped/Pop method is maintained for many generations after the fixation of an advantageous variant.

Sarah Griffiths 1 , Frank Dudbridge 120 MRC Biostatistics Unit, Cambridge, UK Keywords: Genetic association, multimarker tag, haplotype, likelihood analysis In association studies it is generally too expensive to genotype all variants in all subjects. We can exploit linkage disequilibrium between SNPs to select a subset that captures the variation in a training data set obtained either through direct resequencing or a public resource such as the HapMap. These 'tag SNPs' are then genotyped in the whole sample. Multimarker tagging is a more aggressive adaptation of pairwise tagging that allows combinations of two or more tag SNPs to predict an untyped SNP. Here we describe a new method for directly testing the association of an untyped SNP using a multimarker tag. Previously, other investigators have suggested testing a specific tag haplotype, or performing a weighted analysis using weights derived from the training data. However, these approaches do not properly account for the imperfect correlation between the tag haplotype and the untyped SNP. Here we describe a straightforward approach to testing untyped SNPs using a missing-data likelihood analysis, including the tag markers as nuisance parameters. The training data are stacked on top of the main body of genotype data so there is information on how the tag markers predict the genotype of the untyped SNP. The uncertainty in this prediction is automatically taken into account in the likelihood analysis. This approach yields more power and also a more accurate prediction of the odds ratio of the untyped SNP.

Anke Schulz 1 , Christine Fischer 2 , Jenny Chang-Claude 1 , Lars Beckmann 121 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany 22 Institute of Human Genetics, University of Heidelberg, Germany Keywords: Haplotype, haplotype sharing, entropy, Mantel statistics, marker selection We previously introduced a new method to map genes involved in complex diseases, using haplotype sharing-based Mantel statistics to correlate genetic and phenotypic similarity. Although the Mantel statistic is powerful in narrowing down candidate regions, the precise localization of a gene is hampered in genomic regions where linkage disequilibrium is so high that neighboring markers are found to be significant at similar magnitude and we are not able to discriminate between them. Here, we present a new approach to localize susceptibility genes by combining haplotype sharing-based Mantel statistics with an iterative entropy-based marker selection algorithm. For each marker at which the Mantel statistic is evaluated, the algorithm selects a subset of surrounding markers. The subset is chosen to maximize multilocus linkage disequilibrium, which is measured by the normalized entropy difference introduced by Nothnagel et al. (2002). We evaluated the algorithm with respect to type I error and power. Its ability to localize the disease variant was compared to the localization (i) without marker selection and (ii) considering haplotype block structure.
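The Mantel statistic used in the mapping approach above correlates a matrix of pairwise genetic (haplotype-sharing) similarities with a matrix of pairwise phenotypic similarities, assessing significance by permuting individuals. The sketch below is a generic Mantel permutation test on small made-up similarity matrices, not the haplotype-sharing implementation of the abstract; the matrix contents and permutation number are assumptions.

```python
import numpy as np

def mantel(A, B, n_perm=999, seed=0):
    """Permutation Mantel test: correlation between the off-diagonal entries of two similarity matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)                # each pair once, diagonal ignored
    a = A[iu]
    obs = np.corrcoef(a, B[iu])[0, 1]
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(A.shape[0])           # permute individuals jointly in rows and columns of B
        if abs(np.corrcoef(a, B[np.ix_(perm, perm)][iu])[0, 1]) >= abs(obs):
            exceed += 1
    return obs, (exceed + 1) / (n_perm + 1)

# Toy similarities for 5 individuals (made up): haplotype-sharing based vs. phenotypic.
gen_sim = np.array([[1.0, 0.8, 0.2, 0.1, 0.3],
                    [0.8, 1.0, 0.3, 0.2, 0.2],
                    [0.2, 0.3, 1.0, 0.7, 0.6],
                    [0.1, 0.2, 0.7, 1.0, 0.8],
                    [0.3, 0.2, 0.6, 0.8, 1.0]])
phe_sim = np.array([[1.0, 0.9, 0.1, 0.2, 0.3],
                    [0.9, 1.0, 0.2, 0.1, 0.2],
                    [0.1, 0.2, 1.0, 0.8, 0.7],
                    [0.2, 0.1, 0.8, 1.0, 0.9],
                    [0.3, 0.2, 0.7, 0.9, 1.0]])
r, p = mantel(gen_sim, phe_sim)
print(f"Mantel r = {r:.2f}, permutation p = {p:.3f}")
```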
Case-control samples were simulated from a set of 18 haplotypes, consisting of 15 SNPs in two haplotype blocks. The new algorithm gave correct type I error and yielded similar power to detect the disease locus compared to the alternative approaches. The neighboring markers were clearly less often significant than the causal locus, and also less often significant compared to the alternative approaches. Thus the new algorithm improved the precision of the localization of susceptibility genes. Mark M. Iles 123 Section of Epidemiology and Biostatistics, LIMM, University of Leeds, UK Keywords: tSNP, tagging, association, HapMap Tagging SNPs (tSNPs) are commonly used to capture genetic diversity cost-effectively. However, it is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be insufficient. If the pilot sample from which tSNPs are chosen is too small or the initial marker map too sparse, tSNP efficacy may be overestimated. An existing estimation method based on bootstrapping goes some way to correct for insufficient sample size and overfitting, but does not completely solve the problem. We describe a novel method, based on exclusion of haplotypes, that improves on the bootstrap approach. Using simulated data, the extent of the sample size problem is investigated and the performance of the bootstrap and the novel method are compared. We incorporate an existing method adjusting for marker density by ,SNP-dropping'. We find that insufficient sample size can cause large overestimates in tSNP efficacy, even with as many as 100 individuals, and the problem worsens as the region studied increases in size. Both the bootstrap and novel method correct much of this overestimate, with our novel method consistently outperforming the bootstrap method. We conclude that a combination of insufficient sample size and overfitting may lead to overestimation of tSNP efficacy and underpowering of studies based on tSNPs. Our novel approach corrects for much of this bias and is superior to the previous method. Sample sizes larger than previously suggested may still be required for accurate estimation of tSNP efficacy. This has obvious ramifications for the selection of tSNPs from HapMap data. Claudio Verzilli 1 , Juliet Chapman 1 , Aroon Hingorani 2 , Juan Pablo-Casas 1 , Tina Shah 2 , Liam Smeeth 1 , John Whittaker 124 Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK 25 Division of Medicine, University College London, UK Keywords: Meta-analysis, Genetic association studies We present a Bayesian hierarchical model for the meta-analysis of candidate gene studies with a continuous outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping markers (typically SNPs) in the same genetic region. Meta analyses of the results at each marker in isolation are seldom appropriate as they ignore the correlation that may exist between markers due to linkage disequlibrium (LD) and cannot assess the relative importance of variants at each marker. Also such marker-wise meta analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power. A better strategy is one which incorporates information about the LD between markers so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. 
Here we develop a Bayesian hierarchical linear regression that models the observed genotype group means and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach is applied to the meta-analysis of 24 studies assessing the effect of 7 variants in the C-reactive protein (CRP) gene region on plasma CRP levels, an inflammatory biomarker shown in observational studies to be positively associated with cardiovascular disease.

Cathryn M. Lewis 1 , Christopher G. Mathew 1 , Theresa M. Marteau 226 Dept. of Medical and Molecular Genetics, King's College London, UK 27 Department of Psychology, King's College London, UK Keywords: Risk, genetics, CARD15, smoking, model Recently, progress has been made in identifying mutations that confer susceptibility to complex diseases, with the potential to use these mutations in determining disease risk. We developed methods to estimate disease risk based on genotype relative risks (for a gene G), exposure to an environmental factor (E), and family history (with recurrence risk λR for a relative of type R). λR must be partitioned into the risk due to G (which is modelled independently) and the residual risk. The risk model was then applied to Crohn's disease (CD), a severe gastrointestinal disease for which smoking increases disease risk approximately 2-fold, and mutations in CARD15 confer increased risks of 2.25 (for carriers of a single mutation) and 9.3 (for carriers of two mutations). CARD15 accounts for only a small proportion of the genetic component of CD, with a gene-specific λS,CARD15 of 1.16, from a total sibling relative risk of λS = 27. CD risks were estimated for high-risk individuals who are siblings of a CD case and who also smoke. The CD risk to such individuals who carry two CARD15 mutations is approximately 0.34, and for those carrying a single CARD15 mutation the risk is 0.08, compared to a population prevalence of approximately 0.001. These results imply that complex disease genes may allow disease risks in specific, easily identified subgroups of the population to be estimated with greater precision than has hitherto been possible, with a view to prevention.

Yurii Aulchenko 128 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Compression, information, bzip2, genome-wide SNP data, statistical genetics With advances in molecular technology, studies accessing millions of genetic polymorphisms in thousands of study subjects will soon become common. Such studies generate large amounts of data, whose effective storage and management is a challenge for modern statistical genetics. Standard file compression utilities, such as Zip, Gzip and Bzip2, may be helpful to minimise the storage requirements. Less obvious is the fact that data compression techniques may also be used in the analysis of genetic data. It is known that the efficiency of a particular compression algorithm depends on the probability structure of the data. In this work, we compared different standard and customised tools using data from the human HapMap project.
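As a rough illustration of the storage point made in the compression abstract above (which continues below), the sketch compresses a simulated genotype matrix with gzip and bzip2 and shows how the achievable ratio depends on the probability structure of the data, here the allele frequency. The matrix dimensions, the one-byte-per-genotype text encoding and the allele frequencies are assumptions; this is not the HapMap comparison performed by the authors.

```python
import bz2
import gzip
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_snp = 500, 10_000          # matrix size kept small for the example

def genotype_bytes(maf):
    """One byte per genotype ('0'/'1'/'2'), Hardy-Weinberg proportions at the given allele frequency."""
    g = (rng.binomial(2, maf, size=(n_ind, n_snp)) + ord("0")).astype(np.uint8)
    return g.tobytes()

for maf in (0.5, 0.1, 0.01):
    raw = genotype_bytes(maf)
    print(f"MAF={maf}: raw {len(raw) / 1e6:.1f} MB, "
          f"gzip {len(gzip.compress(raw)) / 1e6:.2f} MB, "
          f"bzip2 {len(bz2.compress(raw)) / 1e6:.2f} MB")
# Rarer alleles mean lower-entropy data and hence better compression; this dependence of the
# achieved ratio on the probability structure is what makes compression useful beyond storage.
```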
Secondly, we investigate the potential uses of data compression techniques for the analysis of linkage, association and linkage disequilibrium Suzanne Leal 1 , Bingshan Li 129 Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, USA Keywords: Consanguineous pedigrees, missing genotype data Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD). Previously it was demonstrated by Huang et al (2005) that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data is available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. The false-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage is highly dependent on which family members are genotyped. When parental genotype data is available, the false-positive evidence for linkage is usually not as strong as when parental genotype data is unavailable. Which family members will aid in the reduction of false-positive evidence of linkage is highly dependent on which other family members are genotyped. For a pedigree with an affected proband whose first-cousin parents have been genotyped, further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or genotype data from the proband's sibling-grandparents. When parental genotypes are not available, false-positive evidence for linkage can be reduced by including in the analysis genotype data from either unaffected siblings of the proband or the proband's married-in-grandparents. Najaf Amin 1 , Yurii Aulchenko 130 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Genomic Control, pedigree structure, quantitative traits The Genomic Control (GC) method was originally developed to control for population stratification and cryptic relatedness in association studies. This method assumes that the effect of population substructure on the test statistics is essentially constant across the genome, and therefore unassociated markers can be used to estimate the effect of confounding onto the test statistic. The properties of GC method were extensively investigated for different stratification scenarios, and compared to alternative methods, such as the transmission-disequilibrium test. The potential of this method to correct not for occasional cryptic relations, but for regular pedigree structure, however, was not investigated before. In this work we investigate the potential of the GC method for pedigree-based association analysis of quantitative traits. The power and type one error of the method was compared to standard methods, such as the measured genotype (MG) approach and quantitative trait transmission-disequilibrium test. In human pedigrees, with trait heritability varying from 30 to 80%, the power of MG and GC approach was always higher than that of TDT. GC had correct type 1 error and its power was close to that of MG under moderate heritability (30%), but decreased with higher heritability. 
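The basic Genomic Control correction discussed in the abstract above (and generalised by the TGC method described next) can be written in a few lines: compute the Armitage trend statistic at each marker, estimate the inflation factor lambda from the median of the genome-wide null statistics, and deflate the candidate statistic by lambda. The stratified toy data, marker counts and the median-based lambda estimator below follow the usual textbook recipe and are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, n_null = 2000, 2000

# A simple stratification scenario (assumed): two subpopulations that differ both in
# disease risk and in allele frequencies at unassociated markers.
pop = rng.binomial(1, 0.5, n)
y = rng.binomial(1, np.where(pop == 1, 0.6, 0.4))
freqs = np.where(pop[:, None] == 1, 0.3, 0.5)
G = rng.binomial(2, freqs, size=(n, n_null))          # null markers, coded 0/1/2

def armitage_trend(g, y):
    """Cochran-Armitage trend statistic (1 df) for a 0/1/2 genotype vector and a binary outcome."""
    r = np.corrcoef(g, y)[0, 1]
    return len(y) * r ** 2

null_stats = np.array([armitage_trend(G[:, j], y) for j in range(n_null)])
lam = max(1.0, np.median(null_stats) / stats.chi2.ppf(0.5, 1))   # chi2(1 df) median is about 0.455
print(f"genomic inflation factor lambda = {lam:.2f}")

candidate = armitage_trend(G[:, 0], y)                # a null marker standing in for a candidate SNP
print(f"uncorrected p  = {stats.chi2.sf(candidate, 1):.4f}")
print(f"GC-corrected p = {stats.chi2.sf(candidate / lam, 1):.4f}")
```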
William Astle 1 , Chris Holmes 2 , David Balding 131 Department of Epidemiology and Public Health, Imperial College London, UK 32 Department of Statistics, University of Oxford, UK Keywords: Population structure, association studies, genetic epidemiology, statistical genetics In the analysis of population association studies, Genomic Control (GC; Devlin & Roeder, 1999) adjusts the Armitage test statistic to correct the type I error for the effects of population substructure, but its power is often sub-optimal. Turbo Genomic Control (TGC) generalises GC to incorporate co-variation of relatedness and phenotype, retaining control over type I error while improving power. TGC is similar to the method of Yu et al. (2006), but we extend it to binary (case-control) in addition to quantitative phenotypes, we implement improved estimation of relatedness coefficients, and we derive an explicit statistic that generalizes the Armitage test statistic and is fast to compute. TGC also has similarities to EIGENSTRAT (Price et al., 2006), which is a new method based on principal components analysis. The problems of population structure (Clayton et al., 2005) and cryptic relatedness (Voight & Pritchard, 2005) are essentially the same: if patterns of shared ancestry differ between cases and controls, whether distant (coancestry) or recent (cryptic relatedness), false positives can arise and power can be diminished. With large numbers of widely-spaced genetic markers, coancestry can now be measured accurately for each pair of individuals via patterns of allele-sharing. Instead of modelling subpopulations, we work with a coancestry coefficient for each pair of individuals in the study. We explain the relationships between TGC, GC and EIGENSTRAT. We present simulation studies and real data analyses to illustrate the power advantage of TGC in a range of scenarios incorporating both substructure and cryptic relatedness.

References Clayton, D. G. et al. (2005) Population structure, differential bias and genomic control in a large-scale case-control association study. Nature Genetics 37(11), November 2005. Devlin, B. & Roeder, K. (1999) Genomic control for association studies. Biometrics 55(4), December 1999. Price, A. L. et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8), August 2006. Voight, B. J. & Pritchard, J. K. (2005) Confounding from cryptic relatedness in case-control association studies. Public Library of Science Genetics 1(3), September 2005. Yu, J. et al. (2006) A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), February 2006.

Hervé Perdry 1 , Marie-Claude Babron 1 , Françoise Clerget-Darpoux 133 INSERM U535 and Univ. Paris Sud, UMR-S 535, Villejuif, France Keywords: Modifier genes, case-parents trios, ordered transmission disequilibrium test A modifying locus is a polymorphic locus, distinct from the disease locus, which leads to differences in the disease phenotype, either by modifying the penetrance of the disease allele or by modifying the expression of the disease. The effect of such a locus is a clinical heterogeneity that can be reflected by the values of an appropriate covariate, such as the age of onset or the severity of the disease. We designed the Ordered Transmission Disequilibrium Test (OTDT) to test for a relation between the clinical heterogeneity, expressed by the covariate, and marker genotypes of a candidate gene.
The method applies to trio families with one affected child and their parents. Each family member is genotyped at a bi-allelic marker M of a candidate gene. To each of the families is associated a covariate value, and the families are ordered on the values of this covariate. Like the TDT (Spielman et al., 1993), the OTDT is based on the observation of the transmission rate T of a given allele at M. The OTDT aims to find a critical value of the covariate which separates the sample of families into two subsamples in which the transmission rates are significantly different. We investigate the power of the method by simulations under various genetic models and covariate distributions. Acknowledgments: H Perdry is funded by ARSEP.

Pascal Croiseau 1 , Heather Cordell 2 , Emmanuelle Génin 134 INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France 35 Institute of Human Genetics, Newcastle University, UK Keywords: Association, missing data, conditional logistic regression Missing data is an important problem in association studies. Several methods used to test for association require that individuals be genotyped at the full set of markers, and individuals with missing data need to be excluded from the analysis. This can lead to a substantial decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL. One may then falsely conclude that the marker is more likely to be the DSL. We recently developed a multiple imputation method to infer missing data on case-parent trios. Starting from the observed data, a small number of complete data sets are generated by a Markov chain Monte Carlo approach. These complete datasets are analysed using standard statistical packages and the results are combined as described in Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually correctly detects the DSL site even if the percentage of missing data is high. This is not the case for the naïve approach, which consists of discarding trios with missing data. In conclusion, multiple imputation has the advantage of being easy to use and flexible, and is therefore a promising tool in the search for DSLs involved in complex diseases.

Salma Kotti 1 , Heike Bickeböller 2 , Françoise Clerget-Darpoux 136 University Paris Sud, UMR-S535, Villejuif, France 37 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany Keywords: Genotype relative risk, internal controls, family-based analyses Family-based analyses using internal controls are very popular both for detecting the effect of a genetic factor and for estimating the relative disease risk associated with the corresponding genotypes. Two different procedures are often applied to reconstitute internal controls. The first considers one pseudocontrol genotype formed by the parental non-transmitted alleles, also called 1:1 matching of alleles, while the second corresponds to three pseudocontrols, one for each genotype that can be formed from the parental alleles other than that of the case (1:3 matching). Many studies have compared the two procedures in terms of power and have concluded that the difference depends on the underlying genetic model and the allele frequencies.
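The transmission counts that the OTDT orders by covariate value are those of the standard TDT. Below is a minimal TDT sketch for case-parent trios: it counts transmitted versus untransmitted copies of the candidate allele from heterozygous parents and applies the McNemar-type chi-square test. The trio simulation and its parameters are assumptions, and this is not the OTDT itself, which additionally searches for a covariate threshold separating families with different transmission rates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_trios, p_allele, grr = 500, 0.3, 1.5        # trios, risk-allele frequency, allelic relative risk (assumed)

b = c = 0                                      # b: transmissions of the risk allele, c: non-transmissions
trios = 0
while trios < n_trios:
    parents = rng.binomial(1, p_allele, size=(2, 2))        # two alleles per parent (1 = risk allele)
    transmitted = parents[np.arange(2), rng.integers(0, 2, size=2)]
    if rng.random() > 0.05 * grr ** transmitted.sum():       # retain trios with an affected child only
        continue
    trios += 1
    for parent, t in zip(parents, transmitted):
        if parent.sum() == 1:                                 # only heterozygous parents are informative
            b += int(t)
            c += 1 - int(t)

tdt = (b - c) ** 2 / (b + c)                                  # McNemar-type 1-df chi-square
print(f"transmitted = {b}, untransmitted = {c}, chi2 = {tdt:.2f}, p = {stats.chi2.sf(tdt, 1):.4g}")
```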
However, the estimation of the genotype relative risk (GRR) under the two procedures has not been studied. Given that in the 1:1 matching the control group is composed of the alleles untransmitted to the affected child, whereas in the 1:3 matching the control group includes alleles already transmitted to the affected child, we expect a difference in the GRR estimation. In fact, we suspect that the second procedure leads to biased estimation of the GRRs. We will analytically derive the GRR estimators for the 1:1 and 1:3 matching and will present the results at the meeting.

Luigi Palla 1 , David Siegmund 239 Department of Mathematics, Free University Amsterdam, The Netherlands 40 Department of Statistics, Stanford University, California, USA Keywords: TDT, assortative mating, inbreeding, statistical power A substantial amount of assortative mating (AM) is often recorded for physical and psychological traits, dichotomous as well as quantitative, that are thought to have a multifactorial genetic component. In particular, AM has the effect of increasing the genetic variance, even more than inbreeding, because when the trait has a multifactorial origin it acts across loci as well as within loci. Under the assumption of a polygenic model for AM, dating back to Wright (1921) and refined by Crow and Felsenstein (1968, 1982), the effect of assortative mating on the power to detect genetic association in the Transmission Disequilibrium Test (TDT) is explored as parameters such as the effective number of genes and the allele frequency vary. The power is reflected by the non-centrality parameter of the TDT and is expressed as a function of the number of trios, the relative risk of the heterozygous genotype and the allele frequency (Siegmund and Yakir, 2007). The non-centrality parameter of the relevant score statistic is updated to take into account the effect of AM, which is expressed in terms of an 'effective' inbreeding coefficient. In particular, for dichotomous traits it is apparent that the higher the number of genes involved in the trait, the lower the loss in power due to AM.
Finally, an attempt is made to extend this relation to the Q-TDT (Rabinowitz, 1997), which involves considering the effect of AM also on the phenotypic variance of the trait of interest, under the assumption that AM affects only its additive genetic component. References Crow & Felsenstein (1968). The effect of assortative mating on the genetic composition of a population. Eugen. Quart. 15, 87–97. Rabinowitz (1997). A Transmission Disequilibrium Test for Quantitative Trait Loci. Human Heredity 47, 342–350. Siegmund & Yakir (2007). Statistics of Gene Mapping. Springer. Wright (1921). Systems of mating. III. Assortative mating based on somatic resemblance. Genetics 6, 144–161.

Jérémie Nsengimana 1 , Ben D Brown 2 , Alistair S Hall 2 , Jenny H Barrett 141 Leeds Institute of Molecular Medicine, University of Leeds, UK 42 Leeds Institute for Genetics, Health and Therapeutics, University of Leeds, UK Keywords: Inflammatory genes, haplotype, coronary artery disease Genetic Risk of Acute Coronary Events (GRACE) is an initiative to collect cases of coronary artery disease (CAD) and their unaffected siblings in the UK and to use them to map genetic variants increasing disease risk. The aim of the present study was to test the association between CAD and 51 single nucleotide polymorphisms (SNPs) and their haplotypes from 35 inflammatory genes. Genotype data were available for 1154 persons affected before age 66 (including 48% before age 50) and their 1545 unaffected siblings (891 discordant families). Each SNP was tested for association with CAD, and haplotypes within genes or gene clusters were tested using FBAT (Rabinowitz & Laird, 2000). For the most significant results, genetic effect size was estimated using conditional logistic regression (CLR) within STATA, adjusting for other risk factors. Haplotypes were assigned using HAPLORE (Zhang et al., 2005), which considers all parental mating types consistent with offspring genotypes and assigns them a probability of occurrence. This probability was used in CLR to weight the haplotypes. In the single-SNP analysis, several SNPs showed some evidence for association, including one SNP in the interleukin-1A gene. Analysing haplotypes in the interleukin-1 gene cluster, a common 3-SNP haplotype was found to increase the risk of CAD (P = 0.009). In an additive genetic model adjusting for covariates, the odds ratio (OR) for this haplotype is 1.56 (95% CI: 1.16-2.10, p = 0.004) for early-onset CAD (before age 50). This study illustrates the utility of haplotype analysis in family-based association studies to investigate candidate genes. References Rabinowitz, D. & Laird, N. M. (2000) Hum Hered 50, 211–223. Zhang, K., Sun, F. & Zhao, H. (2005) Bioinformatics 21, 90–103.

Andrea Foulkes 1 , Recai Yucel 1 , Xiaohong Li 143 Division of Biostatistics, University of Massachusetts, USA Keywords: Haplotype, high-dimensional, mixed modeling The explosion of molecular-level information coupled with large epidemiological studies presents an exciting opportunity to uncover the genetic underpinnings of complex diseases; however, several analytical challenges remain to be addressed. Characterizing the components of complex diseases inevitably requires consideration of synergies across multiple genetic loci and environmental and demographic factors. In addition, it is critical to capture information on allelic phase, that is, whether alleles within a gene are in cis (on the same chromosome) or in trans (on different chromosomes).
In associations studies of unrelated individuals, this alignment of alleles within a chromosomal copy is generally not observed. We address the potential ambiguity in allelic phase in this high dimensional data setting using mixed effects models. Both a semi-parametric and fully likelihood-based approach to estimation are considered to account for missingness in cluster identifiers. In the first case, we apply a multiple imputation procedure coupled with a first stage expectation maximization algorithm for parameter estimation. A bootstrap approach is employed to assess sensitivity to variability induced by parameter estimation. Secondly, a fully likelihood-based approach using an expectation conditional maximization algorithm is described. Notably, these models allow for characterizing high-order gene-gene interactions while providing a flexible statistical framework to account for the confounding or mediating role of person specific covariates. The proposed method is applied to data arising from a cohort of human immunodeficiency virus type-1 (HIV-1) infected individuals at risk for therapy associated dyslipidemia. Simulation studies demonstrate reasonable power and control of family-wise type 1 error rates. Vivien Marquard 1 , Lars Beckmann 1 , Jenny Chang-Claude 144 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany Keywords: Genotyping errors, type I error, haplotype-based association methods It has been shown in several simulation studies that genotyping errors may have a great impact on the type I error of statistical methods used in genetic association analysis of complex diseases. Our aim was to investigate type I error rates in a case-control study, when differential and non-differential genotyping errors were introduced in realistic scenarios. We simulated case-control data sets, where individual genotypes were drawn from a haplotype distribution of 18 haplotypes with 15 markers in the APM1 gene. Genotyping errors were introduced following the unrestricted and symmetric with 0 edges error models described by Heid et al. (2006). In six scenarios, errors resulted from changes of one allele to another with predefined probabilities of 1%, 2.5% or 10%, respectively. A multiple number of errors per haplotype was possible and could vary between 0 and 15, the number of markers investigated. We examined three association methods: Mantel statistics using haplotype-sharing; a haplotype-specific score test; and Armitage trend test for single markers. The type I error rates were not influenced for any of all the three methods for a genotyping error rate of less than 1%. For higher error rates and differential errors, the type I error of the Mantel statistic was only slightly and of the Armitage trend test moderately increased. The type I error rates of the score test were highly increased. The type I error rates were correct for all three methods for non-differential errors. Further investigations will be carried out with different frequencies of differential error rates and focus on power. Arne Neumann 1 , Dörthe Malzahn 1 , Martina Müller 2 , Heike Bickeböller 145 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany 46 GSF-National Research Center for Environment and Health, Neuherberg & IBE-Institute of Epidemiology, Ludwig-Maximilians University München, Germany Keywords: Interaction, longitudinal, nonparametric Longitudinal data show the time dependent course of phenotypic traits. 
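The impact of differential genotyping error described in the abstract above can be demonstrated directly for a single-marker test. The simulation below (assumed error rates, sample sizes and error model, simpler than the haplotype scenarios of the abstract) introduces allele miscalls at a higher rate in cases than in controls at a null marker and records how often the Armitage trend test rejects at the nominal 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
n_cases = n_controls = 1000
maf, err_cases, err_controls, n_reps = 0.3, 0.05, 0.0, 1000   # differential error rates (assumed)

def miscall(alleles, rate):
    """Flip each allele call with the given probability (simple symmetric error model)."""
    flips = rng.random(alleles.shape) < rate
    return np.where(flips, 1 - alleles, alleles)

rejections = 0
for _ in range(n_reps):
    case_alleles = miscall(rng.binomial(1, maf, (n_cases, 2)), err_cases)
    ctrl_alleles = miscall(rng.binomial(1, maf, (n_controls, 2)), err_controls)
    g = np.concatenate([case_alleles.sum(axis=1), ctrl_alleles.sum(axis=1)])
    y = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])
    trend = len(y) * np.corrcoef(g, y)[0, 1] ** 2              # Armitage trend statistic, 1 df
    rejections += stats.chi2.sf(trend, 1) < 0.05

print(f"empirical type I error at nominal 5%: {rejections / n_reps:.3f}")
```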
In this contribution, we consider longitudinal cohort studies and investigate the association between two candidate genes and a dependent quantitative longitudinal phenotype. The set-up defines a factorial design which allows us to test simultaneously for the overall gene effect of the loci as well as for possible gene-gene and gene time interaction. The latter would induce genetically based time-profile differences in the longitudinal phenotype. We adopt a non-parametric statistical test to genetic epidemiological cohort studies and investigate its performance by simulation studies. The statistical test was originally developed for longitudinal clinical studies (Brunner, Munzel, Puri, 1999 J Multivariate Anal 70:286-317). It is non-parametric in the sense that no assumptions are made about the underlying distribution of the quantitative phenotype. Longitudinal observations belonging to the same individual can be arbitrarily dependent on one another for the different time points whereas trait observations of different individuals are independent. The two loci are assumed to be statistically independent. Our simulations show that the nonparametric test is comparable with ANOVA in terms of power of detecting gene-gene and gene-time interaction in an ANOVA favourable setting. Rebecca Hein 1 , Lars Beckmann 1 , Jenny Chang-Claude 147 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany Keywords: Indirect association studies, interaction effects, linkage disequilibrium, marker allele frequency Association studies accounting for gene-environment interactions (GxE) may be useful for detecting genetic effects and identifying important environmental effect modifiers. Current technology facilitates very dense marker spacing in genetic association studies; however, the true disease variant(s) may not be genotyped. In this situation, an association between a gene and a phenotype may still be detectable, using genetic markers associated with the true disease variant(s) (indirect association). Zondervan and Cardon [2004] showed that the odds ratios (OR) of markers which are associated with the disease variant depend highly on the linkage disequilibrium (LD) between the variant and the markers, and whether the allele frequencies match and thereby influence the sample size needed to detect genetic association. We examined the influence of LD and allele frequencies on the sample size needed to detect GxE in indirect association studies, and provide tables for sample size estimation. For discordant allele frequencies and incomplete LD, sample sizes can be unfeasibly large. The influence of both factors is stronger for disease loci with small rather than moderate to high disease allele frequencies. A decline in D' of e.g. 5% has less impact on sample size than increasing the difference in allele frequencies by the same percentage. Assuming 80% power, large interaction effects can be detected using smaller sample sizes than those needed for the detection of main effects. The detection of interaction effects involving rare alleles may not be possible. Focussing only on marker density can be a limited strategy in indirect association studies for GxE. Cyril Dalmasso 1 , Emmanuelle Génin 2 , Catherine Bourgain 2 , Philippe Broët 148 JE 2492 , Univ. 
Paris-Sud, France 49 INSERM UMR-S 535 and University Paris Sud, Villejuif, France Keywords: Linkage analysis, Multiple testing, False Discovery Rate, Mixture model In the context of genome-wide linkage analyses, where a large number of statistical tests are simultaneously performed, the False Discovery Rate (FDR) that is defined as the expected proportion of false discoveries among all discoveries is nowadays widely used for taking into account the multiple testing problem. Other related criteria have been considered such as the local False Discovery Rate (lFDR) that is a variant of the FDR giving to each test its own measure of significance. The lFDR is defined as the posterior probability that a null hypothesis is true. Most of the proposed methods for estimating the lFDR or the FDR rely on distributional assumption under the null hypothesis. However, in observational studies, the empirical null distribution may be very different from the theoretical one. In this work, we propose a mixture model based approach that provides estimates of the lFDR and the FDR in the context of large-scale variance component linkage analyses. In particular, this approach allows estimating the empirical null distribution, this latter being a key quantity for any simultaneous inference procedure. The proposed method is applied on a real dataset. Arief Gusnanto 1 , Frank Dudbridge 150 MRC Biostatistics Unit, Cambridge UK Keywords: Significance, genome-wide, association, permutation, multiplicity Genome-wide association scans have introduced statistical challenges, mainly in the multiplicity of thousands of tests. The question of what constitutes a significant finding remains somewhat unresolved. Permutation testing is very time-consuming, whereas Bayesian arguments struggle to distinguish direct from indirect association. It seems attractive to summarise the multiplicity in a simple form that allows users to avoid time-consuming permutations. A standard significance level would facilitate reporting of results and reduce the need for permutation tests. This is potentially important because current scans do not have full coverage of the whole genome, and yet, the implicit multiplicity is genome-wide. We discuss some proposed summaries, with reference to the empirical null distribution of the multiple tests, approximated through a large number of random permutations. Using genome-wide data from the Wellcome Trust Case-Control Consortium, we use a sub-sampling approach with increasing density to estimate the nominal p-value to obtain family-wise significance of 5%. The results indicate that the significance level is converging to about 1e-7 as the marker spacing becomes infinitely dense. We considered the concept of an effective number of independent tests, and showed that when used in a Bonferroni correction, the number varies with the overall significance level, but is roughly constant in the region of interest. We compared several estimators of the effective number of tests, and showed that in the region of significance of interest, Patterson's eigenvalue based estimator gives approximately the right family-wise error rate. 
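The family-wise threshold discussed above can be estimated with a straightforward permutation scheme: permute the phenotype, record the smallest p-value across all markers, and take the 5th percentile of those minima as the per-marker significance level. The sketch below applies this to a small simulated panel; the marker count, the crude LD structure and the number of permutations are assumptions chosen so the example runs quickly, and real genome-wide data need far denser panels and many more permutations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(17)
n, n_snp, n_perm = 500, 200, 200               # kept small so the example runs in seconds

# Null data with crude 'LD': blocks of 10 columns share a common component, phenotype unrelated.
base = rng.binomial(1, 0.3, size=(n, n_snp // 10))
G = np.repeat(base, 10, axis=1) + rng.binomial(1, 0.1, size=(n, n_snp))
y = rng.binomial(1, 0.5, n)

def min_p(G, y):
    """Smallest trend-test p-value across all markers for a given phenotype vector."""
    ps = [stats.chi2.sf(len(y) * np.corrcoef(G[:, j], y)[0, 1] ** 2, 1) for j in range(G.shape[1])]
    return min(ps)

minima = [min_p(G, rng.permutation(y)) for _ in range(n_perm)]
threshold = np.quantile(minima, 0.05)           # per-marker p giving 5% family-wise error
print(f"permutation threshold: {threshold:.2e}")
print(f"Bonferroni threshold:  {0.05 / n_snp:.2e}")
# With correlated markers the permutation threshold is typically less stringent than Bonferroni,
# reflecting an effective number of independent tests smaller than the marker count.
```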
Michael Nothnagel 1 , Amke Caliebe 1 , Michael Krawczak 151 Institute of Medical Informatics and Statistics, University Clinic Schleswig-Holstein, University of Kiel, Germany Keywords: Association scans, Bayesian framework, posterior odds, genetic risk, multiplicative model Whole-genome association scans have been suggested to be a cost-efficient way to survey genetic variation and to map genetic disease factors. We used a Bayesian framework to investigate the posterior odds of a genuine association under multiplicative disease models. We demonstrate that the p value alone is not a sufficient means to evaluate the findings in association studies. We suggest that likelihood ratios should accompany p values in association reports. We argue, that, given the reported results of whole-genome scans, more associations should have been successfully replicated if the consistently made assumptions about considerable genetic risks were correct. We conclude that it is very likely that the vast majority of relative genetic risks are only of the order of 1.2 or lower. Clive Hoggart 1 , Maria De Iorio 1 , John Whittakker 2 , David Balding 152 Department of Epidemiology and Public Health, Imperial College London, UK 53 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK Keywords: Genome-wide association analyses, shrinkage priors, Lasso Testing one SNP at a time does not fully realise the potential of genome-wide association studies to identify multiple causal variants of small effect, which is a plausible scenario for many complex diseases. Moreover, many simulation studies assume a single causal variant and so more complex realities are ignored. Analysing large numbers of variants simultaneously is now becoming feasible, thanks to developments in Bayesian stochastic search methods. We pose the problem of SNP selection as variable selection in a regression model. In contrast to single SNP tests this approach simultaneously models the effect of all SNPs. SNPs are selected by a Bayesian interpretation of the lasso (Tibshirani, 1996); the maximum a posterior (MAP) estimate of the regression coefficients, which have been given independent, double exponential prior distributions. The double exponential distribution is an example of a shrinkage prior, MAP estimates with shrinkage priors can be zero, thus all SNPs with non zero regression coefficients are selected. In addition to the commonly-used double exponential (Laplace) prior, we also implement the normal exponential gamma prior distribution. We show that use of the Laplace prior improves SNP selection in comparison with single -SNP tests, and that the normal exponential gamma prior leads to a further improvement. Our method is fast and can handle very large numbers of SNPs: we demonstrate its performance using both simulated and real genome-wide data sets with 500 K SNPs, which can be analysed in 2 hours on a desktop workstation. Mickael Guedj 1,2 , Jerome Wojcik 2 , Gregory Nuel 154 Laboratoire Statistique et Génome, Université d'Evry, Evry France 55 Serono Pharmaceutical Research Institute, Plan-les-Ouates, Switzerland Keywords: Local Replication, Local Score, Association In gene-mapping, replication of initial findings has been put forwards as the approach of choice for filtering false-positives from true signals for underlying loci. In practice, such replications are however too poorly observed. 
Besides the statistical and technical-related factors (lack of power, multiple-testing, stratification, quality control,) inconsistent conclusions obtained from independent populations might result from real biological differences. In particular, the high degree of variation in the strength of LD among populations of different origins is a major challenge to the discovery of genes. Seeking for Local Replications (defined as the presence of a signal of association in a same genomic region among populations) instead of strict replications (same locus, same risk allele) may lead to more reliable results. Recently, a multi-markers approach based on the Local Score statistic has been proposed as a simple and efficient way to select candidate genomic regions at the first stage of genome-wide association studies. Here we propose an extension of this approach adapted to replicated association studies. Based on simulations, this method appears promising. In particular it outperforms classical simple-marker strategies to detect modest-effect genes. Additionally it constitutes, to our knowledge, a first framework dedicated to the detection of such Local Replications. Juliet Chapman 1 , Claudio Verzilli 1 , John Whittaker 156 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK Keywords: FDR, Association studies, Bayesian model selection As genomewide association studies become commonplace there is debate as to how such studies might be analysed and what we might hope to gain from the data. It is clear that standard single locus approaches are limited in that they do not adjust for the effects of other loci and problematic since it is not obvious how to adjust for multiple comparisons. False discovery rates have been suggested, but it is unclear how well these will cope with highly correlated genetic data. We consider the validity of standard false discovery rates in large scale association studies. We also show that a Bayesian procedure has advantages in detecting causal loci amongst a large number of dependant SNPs and investigate properties of a Bayesian FDR. Peter Kraft 157 Harvard School of Public Health, Boston USA Keywords: Gene-environment interaction, genome-wide association scans Appropriately analyzed two-stage designs,where a subset of available subjects are genotyped on a genome-wide panel of markers at the first stage and then a much smaller subset of the most promising markers are genotyped on the remaining subjects,can have nearly as much power as a single-stage study where all subjects are genotyped on the genome-wide panel yet can be much less expensive. Typically, the "most promising" markers are selected based on evidence for a marginal association between genotypes and disease. Subsequently, the few markers found to be associated with disease at the end of the second stage are interrogated for evidence of gene-environment interaction, mainly to understand their impact on disease etiology and public health impact. However, this approach may miss variants which have a sizeable effect restricted to one exposure stratum and therefore only a modest marginal effect. We have proposed to use information on the joint effects of genes and a discrete list of environmental exposures at the initial screening stage to select promising markers for the second stage [Kraft et al Hum Hered 2007]. 
This approach optimizes power to detect variants that have a sizeable marginal effect and variants that have a small marginal effect but a sizeable effect in a stratum defined by an environmental exposure. As an example, I discuss a proposed genome-wide association scan for Type II diabetes susceptibility variants based in several large nested case-control studies. Beate Glaser 1 , Peter Holmans 158 Biostatistics and Bioinformatics Unit, Cardiff University, School of Medicine, Heath Park, Cardiff, UK Keywords: Combined case-control and trios analysis, Power, False-positive rate, Simulation, Association studies The statistical power of genetic association studies can be enhanced by combining the analysis of case-control with parent-offspring trio samples. Various combined analysis techniques have been recently developed; as yet, there have been no comparisons of their power. This work was performed with the aim of identifying the most powerful method among available combined techniques including test statistics developed by Kazeem and Farrall (2005), Nagelkerke and colleagues (2004) and Dudbridge (2006), as well as a simple combination of ,2-statistics from single samples. Simulation studies were performed to investigate their power under different additive, multiplicative, dominant and recessive disease models. False-positive rates were determined by studying the type I error rates under null models including models with unequal allele frequencies between the single case-control and trios samples. We identified three techniques with equivalent power and false-positive rates, which included modifications of the three main approaches: 1) the unmodified combined Odds ratio estimate by Kazeem & Farrall (2005), 2) a modified approach of the combined risk ratio estimate by Nagelkerke & colleagues (2004) and 3) a modified technique for a combined risk ratio estimate by Dudbridge (2006). Our work highlights the importance of studies investigating test performance criteria of novel methods, as they will help users to select the optimal approach within a range of available analysis techniques. David Almorza 1 , M.V. Kandus 2 , Juan Carlos Salerno 2 , Rafael Boggio 359 Facultad de Ciencias del Trabajo, University of Cádiz, Spain 60 Instituto de Genética IGEAF, Buenos Aires, Argentina 61 Universidad Nacional de La Plata, Buenos Aires, Argentina Keywords: Principal component analysis, maize, ear weight, inbred lines The objective of this work was to evaluate the relationship among different traits of the ear of maize inbred lines and to group genotypes according to its performance. Ten inbred lines developed at IGEAF (INTA Castelar) and five public inbred lines as checks were used. A field trial was carried out in Castelar, Buenos Aires (34° 36' S , 58° 39' W) using a complete randomize design with three replications. At harvest, individual weight (P.E.), diameter (D.E.), row number (N.H.) and length (L.E.) of the ear were assessed. A principal component analysis, PCA, (Infostat 2005) was used, and the variability of the data was depicted with a biplot. Principal components 1 and 2 (CP1 and CP2) explained 90% of the data variability. CP1 was correlated with P.E., L.E. and D.E., meanwhile CP2 was correlated with N.H. We found that individual weight (P.E.) was more correlated with diameter of the ear (D.E.) than with length (L.E). Five groups of inbred lines were distinguished: with high P.E. and mean N.H. (04-70, 04-73, 04-101 and MO17), with high P.E. but less N.H. (04-61 and B14), with mean P.E. 
and N.H. (B73, 04-123 and 04-96), with high N.H. but less P.E. (LP109, 04-8, 04-91 and 04-76) and with low P.E. and low N.H. (LP521 and 04-104). The use of PCA showed which variables had more incidence in ear weight and how is the correlation among them. Moreover, the different groups found with this analysis allow the evaluation of inbred lines by several traits simultaneously. Sven Knüppel 1 , Anja Bauerfeind 1 , Klaus Rohde 162 Department of Bioinformatics, MDC Berlin, Germany Keywords: Haplotypes, association studies, case-control, nuclear families The area of gene chip technology provides a plethora of phase-unknown SNP genotypes in order to find significant association to some genetic trait. To circumvent possibly low information content of a single SNP one groups successive SNPs and estimates haplotypes. Haplotype estimation, however, may reveal ambiguous haplotype pairs and bias the application of statistical methods. Zaykin et al. (Hum Hered, 53:79-91, 2002) proposed the construction of a design matrix to take this ambiguity into account. Here we present a set of functions written for the Statistical package R, which carries out haplotype estimation on the basis of the EM-algorithm for individuals (case-control) or nuclear families. The construction of a design matrix on basis of estimated haplotypes or haplotype pairs allows application of standard methods for association studies (linear, logistic regression), as well as statistical methods as haplotype sharing statistics and TDT-Test. Applications of these methods to genome-wide association screens will be demonstrated. Manuela Zucknick 1 , Chris Holmes 2 , Sylvia Richardson 163 Department of Epidemiology and Public Health, Imperial College London, UK 64 Department of Statistics, Oxford Center for Gene Function, University of Oxford, UK Keywords: Bayesian, variable selection, MCMC, large p, small n, structured dependence In large-scale genomic applications vast numbers of markers or genes are scanned to find a few candidates which are linked to a particular phenotype. Statistically, this is a variable selection problem in the "large p, small n" situation where many more variables than samples are available. An additional feature is the complex dependence structure which is often observed among the markers/genes due to linkage disequilibrium or their joint involvement in biological processes. Bayesian variable selection methods using indicator variables are well suited to the problem. Binary phenotypes like disease status are common and both Bayesian probit and logistic regression can be applied in this context. We argue that logistic regression models are both easier to tune and to interpret than probit models and implement the approach by Holmes & Held (2006). Because the model space is vast, MCMC methods are used as stochastic search algorithms with the aim to quickly find regions of high posterior probability. In a trade-off between fast-updating but slow-moving single-gene Metropolis-Hastings samplers and computationally expensive full Gibbs sampling, we propose to employ the dependence structure among the genes/markers to help decide which variables to update together. Also, parallel tempering methods are used to aid bold moves and help avoid getting trapped in local optima. Mixing and convergence of the resulting Markov chains are evaluated and compared to standard samplers in both a simulation study and in an application to a gene expression data set. Reference Holmes, C. C. & Held, L. 
(2006) Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis1, 145,168. Dawn Teare 165 MMGE, University of Sheffield, UK Keywords: CNP, family-based analysis, MCMC Evidence is accumulating that segmental copy number polymorphisms (CNPs) may represent a significant portion of human genetic variation. These highly polymorphic systems require handling as phenotypes rather than co-dominant markers, placing new demands on family-based analyses. We present an integrated approach to meet these challenges in the form of a graphical model, where the underlying discrete CNP phenotype is inferred from the (single or replicate) quantitative measure within the analysis, whilst assuming an allele based system segregating through the pedigree. [source]
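The Bayesian-lasso SNP selection described above rests on the equivalence between the MAP estimate under independent Laplace priors and an L1-penalised regression fit. The following is a minimal sketch of that correspondence using scikit-learn's Lasso on a simulated genotype matrix; the simulation set-up, the penalty value and all names are illustrative assumptions, not the authors' software or data.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 500, 2000                                        # individuals, SNPs (p >> n)
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)     # additive genotype codes 0/1/2
beta = np.zeros(p)
beta[[10, 250, 1400]] = [0.4, -0.3, 0.5]                # three assumed causal SNPs
y = X @ beta + rng.normal(size=n)

# With residual variance sigma^2 and Laplace prior rate lambda on each coefficient,
# the MAP estimate equals the lasso solution with alpha = sigma^2 * lambda / n
# (given sklearn's 1/(2n) squared-error scaling).
model = Lasso(alpha=0.05)
model.fit(X, y)

selected = np.flatnonzero(model.coef_)                  # SNPs with non-zero MAP coefficients
print(selected)

Under a shrinkage prior of this kind, most coefficients are driven exactly to zero, which is why "selection" reduces to reading off the non-zero entries; the normal exponential gamma prior mentioned in the abstract plays the same role with heavier shrinkage of small effects.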


Raman-FISH: combining stable-isotope Raman spectroscopy and fluorescence in situ hybridization for the single cell analysis of identity and function

ENVIRONMENTAL MICROBIOLOGY, Issue 8 2007
Wei E. Huang
Summary We have coupled fluorescence in situ hybridization (FISH) with Raman microscopy for simultaneous cultivation-independent identification and determination of 13C incorporation into microbial cells. Highly resolved Raman confocal spectra were generated for individual cells which were grown in minimal medium where the ratio of 13C to 12C content of the sole carbon source was incrementally varied. Cells which were 13C-labelled through anabolic incorporation of the isotope exhibited key red-shifted spectral peaks, the calculated 'red shift ratio' (RSR) being highly correlated with the 13C-content of the cells. Subsequently, Raman instrumentation and FISH protocols were optimized to allow combined epifluorescence and Raman imaging of Fluos-, Cy3- and Cy5-labelled microbial populations at the single cell level. Cellular 13C-content determinations exhibited good congruence between fresh cells and FISH-hybridized cells, indicating that spectral peaks, including the phenylalanine resonance, which were used to determine 13C-labelling, were preserved during fixation and hybridization. In order to demonstrate the suitability of this technology for structure-function analyses in complex microbial communities, Raman-FISH was deployed to show the importance of Pseudomonas populations during naphthalene degradation in groundwater microcosms. Raman-FISH extends and complements current technologies such as FISH-microautoradiography and stable isotope probing in that it can be applied at the resolution of single cells in complex communities, is quantitative if suitable calibrations are performed, can be used with stable isotopes, and has analysis times of typically 1 min per cell. [source]
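As a rough illustration of how a red-shift ratio of this kind can be computed from a single-cell spectrum, the sketch below integrates a 13C-shifted and an unshifted phenylalanine band and takes their ratio. The band windows and the exact ratio definition are assumptions made for illustration, not values taken from the paper.

import numpy as np

def red_shift_ratio(wavenumbers, intensities,
                    shifted_band=(946.0, 980.0),      # assumed window for the 13C-shifted peak
                    unshifted_band=(988.0, 1020.0)):  # assumed window for the 12C phenylalanine peak
    """Integrate the two bands and return I_shifted / (I_shifted + I_unshifted)."""
    def band_area(lo, hi):
        mask = (wavenumbers >= lo) & (wavenumbers <= hi)
        return np.trapz(intensities[mask], wavenumbers[mask])
    shifted = band_area(*shifted_band)
    unshifted = band_area(*unshifted_band)
    return shifted / (shifted + unshifted)

# Toy spectrum: two Gaussian peaks on a flat baseline, representing ~60% labelling.
wn = np.linspace(900, 1100, 2000)
spec = 0.6 * np.exp(-((wn - 967) / 6) ** 2) + 0.4 * np.exp(-((wn - 1003) / 6) ** 2) + 0.01
print(round(red_shift_ratio(wn, spec), 2))   # close to 0.6 for this synthetic cell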


Molecular imaging: The application of small animal positron emission tomography

JOURNAL OF CELLULAR BIOCHEMISTRY, Issue S39 2002
Douglas J. Rowland
Abstract The extraordinary advances in genomic technologies over the last decade have led to the establishment of new animal models of disease. The use of molecular imaging techniques to examine these models, preferably with non-destructive imaging procedures such as those offered by positron emission tomography (PET), is especially valuable for the timely advancement of research. With small animal PET imaging it is possible to follow individual subjects of a sample population over an extended time period by using highly specific molecular probes and radiopharmaceuticals. In this Prospect, small animal PET imaging is described, focusing specifically on the current technologies, its applications in molecular imaging, and the logistics of performing small animal PET. J. Cell. Biochem. Suppl. 39: 110-115, 2002. © 2002 Wiley-Liss, Inc. [source]


Knowledge exploitation, knowledge exploration, and competency trap

KNOWLEDGE AND PROCESS MANAGEMENT: THE JOURNAL OF CORPORATE TRANSFORMATION, Issue 3 2006
Weiping Liu
It is no surprise that knowledge exploitation and knowledge exploration have become a consistent theme in the organizational learning literature. Strategy and organization theorists have similarly observed that dynamic capabilities are anchored in a firm's ability to simultaneously exploit current technologies and resources to secure efficiency benefits, and to create variation through exploratory innovation. While some studies argue that excessive exploration or excessive exploitation can lead to a competency trap, the 'competency trap' construct itself has received comparatively little empirical scrutiny. This paper studies how competency traps are formed in the process of knowledge exploration and exploitation, as well as their effects on business performance. The paper includes three main sections: first, the theoretical interpretation of the 'competency trap' construct is broadened by investigating the formation of competency traps on the basis of organizational learning theory; second, factors leading to the formation of different competency traps are identified; and third, the relationship between an organization's competency trap and business performance is investigated. The article ends with a discussion of implications for the organizational learning literature. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Exploring snake venom proteomes: multifaceted analyses for complex toxin mixtures

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 4 2008
Jay W. Fox Dr.
Abstract Snake venom proteomes are complex mixtures of a large number of distinct proteins. In a sense, the field of snake venom proteomics has been under investigation since the very earliest biochemical studies on venoms, in which peptides and proteins were isolated and structurally and biologically characterized. With the recent developments in mass spectrometry for the identification of proteins, coupled with venom gland transcriptomes, the field of snake venom proteomics has begun to flourish. These developments have led to exciting insights into the protein composition of venoms and subsequently their pathological activities. In this review, we discuss the state of the art of snake venom proteomics. Although we have not reached the ultimate goal of characterizing and quantifying all unique proteins in a venom proteome, current technologies have opened many opportunities for high-throughput proteomic studies that go beyond simple protein identification to analyzing various functional aspects, such as post-translational modifications, proteolytic processing and toxin-target interactions. We also discuss the technological approaches used in the study of venom proteomics, highlighting the advances made and future directions. [source]


Microbial systems engineering: First successes and the way ahead

BIOESSAYS, Issue 4 2010
Sven Dietz
Abstract The first promising results from "streamlined," minimal genomes tend to support the notion that these are a useful tool in biological systems engineering. However, compared with the speed with which microbial genome sequencing has provided us with a wealth of data to study biological functions, it is a slow process. So far only a few projects have emerged whose synthetic ambition even remotely matches our analytic capabilities. Here, we survey current technologies converging into a future ability to engineer large-scale biological systems. We argue that the underlying synthetic technology, de novo DNA synthesis, is already rather mature, in particular relative to the scope of our current synthetic ambitions. Furthermore, technologies for rationalizing the design of the newly synthesized DNA fragment are emerging. These include techniques to implement complex regulatory circuits, suites of parts at the DNA and RNA level to fine-tune gene expression, and supporting computational tools. As such DNA fragments will, in most cases, be destined to operate in a cellular context, attention has to be paid to the potential interactions of the host with the functions encoded on the engineered DNA fragment. Here, the need of biological systems engineering for a robust and predictable bacterial host coincides with current scientific efforts to theoretically and experimentally explore minimal bacterial genomes. [source]


Effects of Near-Surface Absorption on Reflection Characteristics of Continental Interbedded Strata: the Dagang Oilfield as an Example

ACTA GEOLOGICA SINICA (ENGLISH EDITION), Issue 5 2010
LI Guofa
Abstract: Due to the effects of seismic wave-field interference, the reflection events generated from interbedded and superposed sand and shale strata no longer have an explicit corresponding relationship with the geological interface. The absorption of the near-surface layer decreases the resolution of the seismic wavelet, intensifies the interference of seismic reflections from different sand bodies, and makes seismic data interpretation of thin interbedded strata more complex and difficult. In order to investigate the effects of near-surface absorption on the seismic reflection characteristics of interbedded strata, and to establish how well current technologies can compensate for near-surface absorption, a geological model of continental interbedded strata with near-surface absorption was designed, and the prestack seismic wave field was numerically simulated with wave equations. The simulated wave field was then processed by prestack time migration, the effects of near-surface absorption on prestack and poststack reflection characteristics were analyzed, and the near-surface absorption was compensated for by inverse Q-filtering. The model test shows that: (1) the reliability of prediction and delineation of a continental reservoir with AVO inversion is degraded by the lateral variation of the near-surface structure; (2) the corresponding relationships between seismic reflection events and geological interfaces are further weakened as a result of near-surface absorption; and (3) the current technology of absorption compensation can introduce false geological structures and anomalies. Based on the model experiment, real seismic data from the Dagang Oilfield were analyzed and processed. The seismic reflection characteristics of continental interbedded strata were improved, and the reliability of geological interpretation from seismic data was enhanced. [source]
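For readers unfamiliar with the compensation step mentioned above, the sketch below applies amplitude-only inverse Q filtering to one windowed trace segment in the frequency domain. The constant Q, the travel time and the gain limit are illustrative assumptions, and a full inverse Q filter of the kind applied to the Dagang data would also include a phase correction.

import numpy as np

def inverse_q_amplitude(segment, dt, travel_time, q=80.0, max_gain_db=30.0):
    """Amplitude-only inverse Q compensation of a short trace segment.

    Attenuation over travel_time multiplies each frequency f by exp(-pi*f*t/Q);
    the inverse filter applies the reciprocal gain, clipped to avoid amplifying noise.
    """
    n = len(segment)
    freqs = np.fft.rfftfreq(n, d=dt)                        # Hz
    gain = np.exp(np.pi * freqs * travel_time / q)          # inverse of the attenuation
    gain = np.minimum(gain, 10.0 ** (max_gain_db / 20.0))   # stabilisation
    spectrum = np.fft.rfft(segment)
    return np.fft.irfft(spectrum * gain, n=n)

# Example: a 40 Hz Ricker-like wavelet recorded at 1.0 s two-way time, 1 ms sampling.
dt = 0.001
t = np.arange(-0.064, 0.064, dt)
wavelet = (1 - 2 * (np.pi * 40 * t) ** 2) * np.exp(-(np.pi * 40 * t) ** 2)
compensated = inverse_q_amplitude(wavelet, dt, travel_time=1.0)
print(compensated.shape)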


3236: Corneal grafting assisted by wavelength-optimised ultrashort pulse lasers

ACTA OPHTHALMOLOGICA, Issue 2010
TAL MARCIANO
Purpose We have developed an innovative device for ocular surgery based on an ultrafast pulsed laser optimised for corneal grafting. Methods We constructed a demonstrator device that reproduces the surgical conditions of corneal transplantation. With an easy-to-handle automated interface, it is possible to perform all existing kinds of corneal transplant. In addition, in order to maximize the spatial quality of the beam, a wavefront correction system using a deformable mirror module has been added. The demonstrator contains an erbium fibre laser emitting at 1.6 microns. This laser delivers a beam of a few joules with a pulse duration of 700 femtoseconds and a repetition rate of 100-200 kHz. It includes deformable mirrors permitting horizontal displacements and a wavefront sensor. It also contains the delivery system for the laser beam. Results The experiments carried out with a tunable surgical source confirmed the initial assumptions: the penetration depth is limited at wavelengths close to 1 micron. As the wavelength increases, the drop in scattering compensates for the absorption, and the penetration depth therefore varies slowly with wavelength. The laser does not penetrate near the maximum of the water absorption band located at 1.45 microns. However, the use of a wavelength of 1.6 microns enables a substantial increase in penetration depth (a factor of 3) while conserving the same energy as current technologies. Conclusion The use of a laser source with a wavelength corresponding to the window of transparency of the cornea (1.65 microns) makes it possible to increase both the penetration depth of an ultrafast laser source and the quality of the cut. [source]


When does a protein become an allergen?

CLINICAL & EXPERIMENTAL ALLERGY, Issue 7 2008
Searching for a dynamic definition based on most advanced technology tools
Summary Since the beginnings of allergology as a science, considerable efforts have been made by clinicians and researchers to identify and characterize allergic triggers as raw allergenic materials, allergenic sources and tissues, and, more recently, basic allergenic structures defined as molecules. Over the last 15-20 years many centres have focused on the identification and characterization of allergenic molecules, leading to an expanding wealth of knowledge. The need to organize this information leads to the most important question: 'when does a protein become an allergen?' In this article, I try to address this question by reviewing a few basic concepts of the immunology of IgE-mediated diseases, reporting on the current diagnostic and epidemiological tools used for allergic disease studies, and discussing the usefulness of novel biotechnology tools (i.e. proteomics and molecular biology approaches), information technology tools (i.e. Internet-based resources) and microtechnology tools (i.e. proteomic microarrays for IgE testing on molecular allergens). A step-wise staging of the identification and characterization process, including bench, clinical and epidemiological aspects, is proposed in order to classify allergenic molecules dynamically. This proposal reflects the application and use of all the new tools available from current technologies. [source]


Estimating the Maximum Attainable Efficiency in Dye-Sensitized Solar Cells

ADVANCED FUNCTIONAL MATERIALS, Issue 1 2010
Henry J. Snaith
Abstract For an ideal solar cell, a maximum solar-to-electrical power conversion efficiency of just over 30% is achievable by harvesting UV to near-IR photons up to 1.1 eV. Dye-sensitized solar cells (DSCs) are, however, not ideal. Here, the electrical and optical losses in the dye-sensitized system are reviewed, and the main losses in potential from the conversion of an absorbed photon at the optical bandgap of the sensitizer to the open-circuit voltage generated by the solar cell are specifically highlighted. In the first instance, the maximum power conversion efficiency attainable is estimated as a function of the optical bandgap of the sensitizer and the "loss-in-potential" from the optical bandgap to the open-circuit voltage. For the best performing DSCs with current technology, the loss-in-potential is ~0.75 eV, which leads to a maximum power-conversion efficiency of 13.4% with an optical bandgap of 1.48 eV (840 nm absorption onset). Means by which the loss-in-potential could be reduced to 0.4 eV are discussed; a maximum efficiency of 20.25% with an optical bandgap of 1.31 eV (940 nm) is possible if this is achieved. [source]
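The loss-in-potential argument above can be turned into a back-of-the-envelope calculation. The sketch below approximates the solar spectrum by a 5778 K black body and assumes a fixed fill factor of 0.72 and unit quantum efficiency above the optical bandgap; these assumptions and the function names are illustrative, so the numbers only roughly track the AM1.5-based figures quoted in the abstract.

import numpy as np
from scipy.constants import h, c, k, e
from scipy.integrate import quad

T_SUN = 5778.0  # K, black-body stand-in for the AM1.5 spectrum (an approximation)

def photon_flux_per_ev(energy_ev):
    """Spectral photon flux of a black body (arbitrary overall scale; it cancels below)."""
    E = energy_ev * e
    return (2 * np.pi / (h ** 3 * c ** 2)) * E ** 2 / np.expm1(E / (k * T_SUN)) * e

def max_efficiency(bandgap_ev, loss_in_potential_ev, fill_factor=0.72):
    """Every above-gap photon yields one electron at V_oc = E_gap - loss-in-potential."""
    absorbed_flux, _ = quad(photon_flux_per_ev, bandgap_ev, 10.0)
    incident_power, _ = quad(lambda E: E * photon_flux_per_ev(E), 0.01, 10.0)  # in eV units
    v_oc = max(bandgap_ev - loss_in_potential_ev, 0.0)
    return absorbed_flux * v_oc * fill_factor / incident_power

print(max_efficiency(1.48, 0.75))  # roughly the current-technology case (13.4% in the abstract)
print(max_efficiency(1.31, 0.40))  # roughly the reduced-loss case (20.25% in the abstract)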


On removing the primary field from fixed-wing time-domain airborne electromagnetic data: some consequences for quantitative modelling, estimating bird position and detecting perfect conductors

GEOPHYSICAL PROSPECTING, Issue 4 2001
Richard Smith
In the process of removing the primary field from fixed-wing time-domain airborne EM data, the response is decomposed into two parts, referred to here as the time-domain 'in-phase' and 'quadrature' components. The time-domain in-phase component is dominated by the primary field, which varies significantly as the transmitter-receiver separation changes. The time-domain quadrature component comes solely from the secondary response associated with currents induced in the ground, and this is the component that has traditionally been used in the interpretation of data from fixed-wing towed-bird time-domain EM systems. In the off-time, the quadrature response is very similar to the total secondary response. However, there are large differences in the on-time and even some small differences in the off-time. One consequence of these differences is that when airborne EM data are to be interpreted using a synthetic mathematical model, the synthetic data calculated should also be the quadrature component. A second consequence relates to the time-domain in-phase component, which is sometimes used to estimate the receiver-sensor (bird) position. The bird-position estimation process assumes there is no secondary field in the in-phase component. If the ground is resistive, the secondary field contained in the in-phase component is small, so the bird-position estimate is accurate to about 30 cm, but in highly conductive areas the secondary contribution can be large and the position estimate can be out by as much as 5 m. A third consequence arises for highly conductive bodies, the response of which is predominantly in-phase. This means that any response from these types of body is lost in the component removed in the primary-field extraction procedure. However, if the bird position is measured very accurately, the actual free-space primary field can be estimated. If this is then subtracted from the estimated primary (actually the free-space primary plus the secondary in-phase response), the residual is the secondary in-phase response of the ground. Using this methodology, very conductive ore bodies could be detected. However, a sensitivity analysis shows that detection of a large, vertically dipping, very conductive body at 150 m depth would require that the bird position be measured to an accuracy of about 1.4 cm and the aircraft attitude to within about 0.01°. Such tolerances are very stringent and not easily attainable with current technology. [source]
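To make the quoted positional tolerance concrete, the sketch below works through the 1/r^3 sensitivity of a magnetic-dipole primary field: a small error dr in the assumed transmitter-receiver separation r changes the estimated primary by roughly 3*dr/r, which must stay below the secondary-to-primary ratio of the target. The separation and target ratio used here are illustrative assumptions, not values taken from the paper.

# Rough sensitivity of the estimated free-space primary field to bird-position error.
# The dipole primary falls off as 1/r^3, so d(ln B) = -3 d(ln r).

def primary_relative_error(separation_m: float, position_error_m: float) -> float:
    """Relative change in the estimated primary for a small separation error."""
    return 3.0 * position_error_m / separation_m

separation = 125.0          # assumed transmitter-receiver (bird) separation in metres
secondary_ratio = 3.5e-4    # assumed secondary/primary ratio of a deep, very conductive body

for err_cm in (5.0, 1.4, 0.5):
    rel = primary_relative_error(separation, err_cm / 100.0)
    flag = "OK" if rel < secondary_ratio else "masks target"
    print(f"position error {err_cm:4.1f} cm -> primary error {rel:.1e} ({flag})")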


Soil arthropods as indicators of water stress in Antarctic terrestrial habitats?

GLOBAL CHANGE BIOLOGY, Issue 12 2003
Peter Convey
Abstract Abiotic features of Antarctic terrestrial habitats, particularly low temperatures and limited availability of liquid water, strongly influence the ecophysiology and life histories of the resident biota. However, while the temperature regimes of a range of land microhabitats are reasonably well characterized, much less is known of patterns of soil water stress, as current technology does not allow measurement at the required scale. An alternative approach is to use the water status of individual organisms as a proxy for habitat water status and to sample a population over several years to identify seasonal or long-term patterns. This broad generalization for terrestrial invertebrates was tested on arthropods in the maritime Antarctic. We present analyses of a long-term data set of body water content generated by monthly sampling, over 8-11 years, of seven species of soil arthropods (four species of Acari, two Collembola and one Diptera) on maritime Antarctic Signy Island, South Orkney Islands. In all species, there was considerable within- and between-sample variability. Despite this, clear seasonal patterns were present in five species, particularly the two collembolans and a prostigmatid mite. Analyses of monthly water content trends across the entire study period identified several statistically significant trends of either increase or decrease in body water content, which we interpret in the context of regional climate change. The data further support the separation of the species into two groups: firstly, the soft-bodied Collembola and Prostigmata, with limited cuticular sclerotization, which are sensitive to changes in soil moisture and are potentially rapid sensors of microhabitat water status; and secondly, more heavily sclerotized forms such as Cryptostigmata (=Oribatida) and Mesostigmata mites, which are much less sensitive and responsive to short-term fluctuations in soil water availability. The significance of these findings is discussed, and it is concluded that annual cycles of water content were driven by temperature, mediated via radiation and precipitation, and constituted reliable indicators of habitat moisture regimes. However, detailed ecophysiological studies are required on particular species before such information can be used to predict over long timescales. [source]


Haemophilia 2002: emerging risks of treatment

HAEMOPHILIA, Issue 3 2002
B. L. EVATT
Haemophilia care and treatment products have greatly improved over the past two decades. The transitions in treatment produced by these changes were accompanied by the emergence of unexpected risks and new complications. In order to provide the best comprehensive care to patients with haemophilia, healthcare providers periodically need to re-evaluate and adjust their management and therapeutic products to prevent or minimize the effects produced by emerging issues. For example, reducing the effects of infectious agents remains the highest priority for the haemophilia community because of the high level of morbidity and mortality that resulted from earlier therapeutic agents. In many countries, the goal has been to achieve absolutely zero risk from infectious agents. In some instances, the screening procedures adopted to achieve these goals reduced the availability of plasma needed for manufactured derivatives and produced another emerging risk: shortages of clotting factor preparations. Similarly, better diagnostic methods identified other potential agents that were not inactivated by current technology. Likewise, immune tolerance regimens and the prophylactic management of haemophilia introduced different therapeutic delivery systems with their own risks. The drugs used to manage diseases such as human immunodeficiency virus (HIV), which were transmitted by products manufactured before the mid-1980s, create their own set of risks for this community. Topical emerging risks of treatment are discussed, including variant Creutzfeldt-Jakob disease and an assessment of its risks and impact, the complications of using indwelling catheters, and the role that protease inhibitors used to treat HIV may have in the bleeding complications of haemophilia. [source]


Tidal Current Energy Technologies

IBIS, Issue 2006
PETER L FRAENKEL
This paper sets the context for the development of tidal current technology in the face of impending climate change and so-called 'peak oil'. Siting requirements are specified for tidal turbines and a general overview of the different technologies under development is given. Specific and detailed descriptions of the leading Marine Current Turbines technology are also highlighted. The paper considers the likely environmental impact of the technology, considering in particular possible (perceived and real) risks to marine wildlife, including birds. It concludes by indicating the planned future developments, and the scale and speed of implementation that might be achieved. [source]


Large-scale cultivation of fingerlings of the Chinese Sturgeon Acipenser sinensis for re-stocking: a description of current technology

JOURNAL OF APPLIED ICHTHYOLOGY, Issue 2006
Y. Zhu
First page of article [source]


Marking high-technology market evolution through the foci of market stories: the case of local area networks

THE JOURNAL OF PRODUCT INNOVATION MANAGEMENT, Issue 6 2002
Vasilis Theoharakis
Previous research suggests that changing consumer and producer knowledge structures play a role in market evolution and that the sociocognitive processes of product markets are revealed in the sensemaking stories of market actors that are rebroadcast in commercial publications. In this article, the authors lend further support to the story-based nature of market sensemaking and the use of the sociocognitive approach in explaining the evolution of high-technology markets. They examine the content (i.e., subject matter or topic) and volume (i.e., the number) of market stories and the extent to which the content and volume of market stories evolve as a technology emerges. Data were obtained from a content analysis of 10,412 article abstracts, published in key trade journals, pertaining to Local Area Network (LAN) technologies and spanning the period 1981 to 2000. Hypotheses concerning the evolving nature (content and volume) of market stories in technology evolution are tested. The analysis identified four categories of market stories: technical, product availability, product adoption, and product discontinuation. The findings show that the emerging technology passes initially through a 'technical-intensive' phase, in which technology-related stories dominate; then through a 'supply-push' phase, in which stories presenting products embracing the technology tend to exceed technical stories while the number of product adoption stories rises; and finally into a 'product-focus' phase, with stories predominantly focusing on product availability. Overall story volume declines when a technology matures, as the need for sensemaking reduces. When stories about product discontinuation surface, these signal the decline of the current technology. New technologies that fail to maintain the 'product-focus' stage also reflect limited market acceptance. The article also discusses the theoretical and managerial implications of the study's findings. [source]


Novel nickel-based catalyst for low temperature hydrogen production from methane steam reforming in membrane reformer

ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 1 2010
Yazhong Chen
Abstract Hydrogen production from various hydrocarbon fuels, particularly biomass-derived fuels, has attracted worldwide attention due to its potential for application to fuel cells, devices which convert chemical energy into electricity efficiently and cleanly. However, current technology, such as natural gas steam reforming, cannot meet the specific requirements of hydrogen for fuel cells. Therefore, novel processes are being intensively investigated with the aim of developing economic and efficient routes for this specific purpose. An important direction is the integrated membrane reformer for one-step high-purity hydrogen production. For the commercial realization of this technology, however, some difficulties remain to be overcome. By comparison with previous investigations using a similar membrane, this work showed that the catalyst also plays an important role in determining membrane reformer performance. We propose that when the membrane is only a few micrometres thick, membrane permeance becomes less important than catalyst kinetics, because under such conditions the hydrogen permeation rate is faster than the steam reforming reaction over a commercial catalyst; further evidence is, however, indispensable. In this initial work, we focused on developing an efficient nickel catalyst for low-temperature steam reforming. A nickel-based catalyst was developed by deposition-coprecipitation and used in pre-reduced form, showing high performance for methane steam reforming at low temperatures and good durability, which may find practical application in the integrated membrane reforming process. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]


Large-scale production, harvest and logistics of switchgrass (Panicum virgatum L.): current technology and envisioning a mature technology

BIOFUELS, BIOPRODUCTS AND BIOREFINING, Issue 2 2009
Shahab Sokhansanj
Abstract Switchgrass (Panicum virgatum L.) is a promising cellulosic biomass feedstock for biorefineries and biofuel production. This paper reviews current and future potential technologies for production, harvest, storage, and transportation of switchgrass. Our analysis indicates that for a yield of 10 Mg ha-1, the current cost of producing switchgrass (after establishment) is about $41.50 Mg-1. The costs may be reduced to about half this if the yield is increased to 30 Mg ha-1 through genetic improvement, intensive crop management, and/or optimized inputs. At a yield of 10 Mg ha-1, we estimate that harvesting costs range from $23.72 Mg-1 for current baling technology to less than $16 Mg-1 when using a loafing collection system. At yields of 20 and 30 Mg ha-1 with an improved loafing system, harvesting costs are even lower, at $12.75 Mg-1 and $9.59 Mg-1, respectively. Transport costs vary depending upon yield and fraction of land under switchgrass, bulk density of biomass, and total annual demand of a biorefinery. For a 2000 Mg d-1 plant and an annual yield of 10 Mg ha-1, the transport cost is an estimated $15.42 Mg-1, assuming 25% of the land is under switchgrass production. The total delivered cost of switchgrass using current baling technology is $80.64 Mg-1, requiring an energy input of 8.5% of the feedstock higher heating value (HHV). With mature technology, for example a large loaf-collection system, the total delivered cost is reduced to about $71.16 Mg-1, with 7.8% of the feedstock HHV required as input. Further cost reduction can be achieved by combining mature technology with increased crop productivity. Delivered cost and energy input do not vary significantly as biorefinery capacity increases from 2000 Mg d-1 to 5000 Mg d-1, because the cost of the increased haul distance needed to access a larger volume of feedstock offsets the gains from increased biorefinery capacity. This paper outlines possible scenarios for the expansion of switchgrass handling to 30 Tg (million Mg) in 2015 and 100 Tg in 2030 based on predicted growth of the biorefinery industry in the USA. The value of switchgrass collection operations is estimated at more than $0.6 billion in 2015 and more than $2.1 billion in 2030. The estimated value of post-harvest operations is $0.6-$2.0 billion in 2015, and $2.0-$6.5 billion in 2030, depending on the degree of preprocessing. The need for power equipment (tractors) will increase from 100 MW in 2015 to 666 MW in 2030, with corresponding annual values of $150 and $520 million, respectively. © 2009 Society of Chemical Industry and John Wiley & Sons, Ltd [source]
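The delivered-cost figures above decompose into production, harvest and transport components, and the quoted totals can be checked directly. The sketch below tallies the current-technology case and backs out the harvest cost implied by the mature-technology total; it simply re-uses the numbers quoted in the abstract, with no additional data.

# All values in $ per Mg, for a 10 Mg/ha yield, a 2000 Mg/day biorefinery,
# and 25% of the surrounding land under switchgrass (as stated above).
production = 41.50      # post-establishment production cost
harvest_baling = 23.72  # current baling technology
transport = 15.42

delivered_current = production + harvest_baling + transport
print(delivered_current)            # 80.64, matching the quoted total

# Harvest cost implied by the $71.16 mature-technology (loaf collection) total;
# the abstract only states that it is below $16 per Mg.
delivered_mature = 71.16
harvest_loaf = delivered_mature - production - transport
print(round(harvest_loaf, 2))       # 14.24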


Transcatheter Aortic Valve Replacement: A Potential Option for the Nonsurgical Patient

CLINICAL CARDIOLOGY, Issue 6 2009
Jigar H. Patel MD
With improved life expectancy, the incidence of aortic stenosis is rising. However, up to one-third of patients who require lifesaving surgical aortic valve replacement are denied surgery because of a high operative mortality risk. Such patients can only be treated with medical therapy or percutaneous aortic valvuloplasty, neither of which has been shown to improve mortality. With advances in interventional cardiology, transcatheter methods have been developed for aortic valve replacement. Clinical trials are investigating these devices in patients with severe aortic stenosis who have been denied surgery. Preliminary results from these trials suggest that transcatheter aortic valve replacement (TAVR) is not only feasible but an effective way to improve symptoms. In this review, we describe the current technology and present the available outcome data. Although technical challenges and the operator learning curve limit optimal use of the current technology, continued experience and advancements in technology may one day make TAVR a viable alternative to traditional surgical aortic valve replacement. Copyright © 2009 Wiley Periodicals, Inc. [source]