Potential Loss (potential + loss)



Selected Abstracts


Identifying the Potential Loss of Monitoring Wells Using an Uncertainty Analysis

GROUND WATER, Issue 6 2005
Vicky L. Freedman
From the mid-1940s through the 1980s, large volumes of waste water were discharged at the Hanford Site in southeastern Washington State, causing a large-scale rise (>20 m) in the water table. When waste water discharges ceased in 1988, ground water mounds began to dissipate. This caused a large number of wells to go dry and has made it difficult to monitor contaminant plume migration. To identify monitoring wells that will need replacement, a methodology has been developed using a first-order uncertainty analysis with UCODE, a nonlinear parameter estimation code. Using a three-dimensional, finite-element ground water flow code, key parameters were identified by calibrating to historical hydraulic head data. Results from the calibration period were then used to check model predictions by comparing monitoring wells' wet/dry status with field data. This status was analyzed using a methodology that incorporated the 0.3 cumulative probability derived from the confidence and prediction intervals. For comparison, a nonphysically based trend model was also used as a predictor of wells' wet/dry status. Although the numerical model outperformed the trend model, for both models, the central value of the intervals was a better predictor of a wet well status. The prediction interval, however, was more successful at identifying dry wells. Predictions made through the year 2048 indicated that 46% of the wells in the monitoring well network are likely to go dry in areas near the river and where the ground water mound is dissipating. [source]
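The wet/dry classification described above turns an uncertain head prediction into a binary status by applying a cumulative-probability cutoff to the interval around the simulated head. A minimal sketch of that idea, assuming normally distributed prediction errors; the predicted head, interval width, and well-bottom elevation below are illustrative values, not results from the paper:

```python
from scipy.stats import norm

def well_status(predicted_head, interval_halfwidth, well_bottom_elev,
                cum_prob=0.3, z=1.96):
    """Classify a monitoring well as 'wet' or 'dry' from an uncertain head prediction."""
    # Back out a standard error from a 95% prediction interval,
    # assuming normally distributed prediction errors.
    se = interval_halfwidth / z
    # Head elevation at the chosen lower-tail cumulative probability.
    head_at_cutoff = norm.ppf(cum_prob, loc=predicted_head, scale=se)
    return "wet" if head_at_cutoff > well_bottom_elev else "dry"

# Illustrative numbers only: with a wide (+/- 5 m) prediction interval, a well whose
# central prediction sits 1.2 m above its screen bottom is still flagged "dry".
print(well_status(predicted_head=121.2, interval_halfwidth=5.0,
                  well_bottom_elev=120.0))
```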


Return to work following unilateral enucleation in 34 horses (2000–2008)

EQUINE VETERINARY JOURNAL, Issue 2 2010
M. E. UTTER
Summary. Reasons for study: The effect of unilateral enucleation on vision and potential loss of performance in horses has received little study. Objective: To evaluate the likelihood of return to prior discipline following unilateral enucleation in horses, assessing the role of age at enucleation, equine discipline, reason for enucleation, time to vision loss and eye enucleated. Hypothesis: Unilateral enucleation has no significant effect on likelihood of return to work in horses, for both right and left eyes, across age and discipline. Method: A retrospective review of medical records identified 92 horses that underwent unilateral enucleation at the University of Pennsylvania New Bolton Center from April 2000–April 2008. Case variables determined from the medical record included breed and sex of horse, age at enucleation, which eye was enucleated, reason for enucleation and onset of vision loss. Pre- and postoperative occupations were determined by telephone interview with the owner or trainer of each horse. Results: Based on hospital surgery logs, 92 enucleations were performed over the 8-year period and 77 records were available for review, with follow-up information available for 34 horses. Of these, 29/34 (85%) horses returned to work in pleasure or trail riding (11/13), flat racing (7/10), hunter/jumpers (4/4), dressage (3/3), group lessons (1/1), eventing (1/1), steeplechase (1/1) and as a broodmare (1/1). Four of the 5 horses (4/34, or 12% of the sample) that did not return to work (2 pleasure and 2 racing) were retired due to anticipated or perceived decrease in performance or behaviour change following unilateral enucleation; the remaining horse was retired from racing for lameness issues unrelated to enucleation. Twenty-two of 25 horses (88%) with acute vision loss and 7/9 horses (78%) with gradual vision loss returned to their previous discipline. Conclusions: Horses are able to return to a variety of occupations after unilateral enucleation. [source]


A kinetic perspective on extracellular electron transfer by anode-respiring bacteria

FEMS MICROBIOLOGY REVIEWS, Issue 1 2010
César I. Torres
Abstract In microbial fuel cells and electrolysis cells (MXCs), anode-respiring bacteria (ARB) oxidize organic substrates to produce electrical current. In order to develop an electrical current, ARB must transfer electrons to a solid anode through extracellular electron transfer (EET). ARB use various EET mechanisms to transfer electrons to the anode, including direct contact through outer-membrane proteins, diffusion of soluble electron shuttles, and electron transport through solid components of the extracellular biofilm matrix. In this review, we perform a novel kinetic analysis of each EET mechanism by analyzing the results available in the literature. Our goal is to evaluate how well each EET mechanism can produce a high current density (>10 A m−2) without a large anode potential loss (less than a few hundred millivolts), which are feasibility goals of MXCs. Direct contact of ARB to the anode cannot achieve high current densities due to the limited number of cells that can come in direct contact with the anode. Slow diffusive flux of electron shuttles at commonly observed concentrations limits current generation and results in high potential losses, as has been observed experimentally. Only electron transport through a solid conductive matrix can explain observations of high current densities and low anode potential losses. Thus, a study of the biological components that create a solid conductive matrix is of critical importance for understanding the function of ARB. [source]
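The shuttle-diffusion limitation described above is, at its core, a Fick's-law estimate: the current density a dissolved mediator can sustain is bounded by its diffusive flux to the anode. A minimal sketch of that bound (the concentration, diffusion coefficient, boundary-layer thickness, and electrons per molecule are illustrative placeholders, not values from the review):

```python
# Maximum current density sustainable by a diffusing electron shuttle,
# estimated from steady-state Fick's-law flux: j = n * F * D * dC / delta.
F = 96485.0          # Faraday constant, C per mol of electrons

def shuttle_current_density(n_electrons, diff_coeff_m2_s,
                            conc_mol_m3, boundary_layer_m):
    """Return the flux-limited current density in A/m^2."""
    flux = diff_coeff_m2_s * conc_mol_m3 / boundary_layer_m   # mol m^-2 s^-1
    return n_electrons * F * flux

# Illustrative numbers: a 2-electron shuttle at 10 uM (0.01 mol/m^3),
# D = 4e-10 m^2/s, diffusing across a 50 um boundary layer.
j = shuttle_current_density(n_electrons=2, diff_coeff_m2_s=4e-10,
                            conc_mol_m3=0.01, boundary_layer_m=50e-6)
print(f"{j:.3f} A/m^2")   # well below the >10 A/m^2 target discussed in the review
```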


Youth, AIDS and Rural Livelihoods in Southern Africa

GEOGRAPHY COMPASS (ELECTRONIC), Issue 3 2008
Lorraine Van Blerk
AIDS, in interaction with other factors, is impacting on the livelihood activities, opportunities and choices of young people in southern Africa. This article explores these linkages firstly by reviewing what is known about the impacts of AIDS on young people, before looking more specifically at how this impinges on their future ability to secure livelihoods. Within the home and family, AIDS often results in youth taking on a heavy burden of responsibilities. This can include caring for sick relatives, helping with chores and taking on paid employment. This burden of care and work can have further impacts on young people's future livelihoods as they find they have reduced access to schooling, potential loss of inheritance and a breakdown in the intergenerational transfer of knowledge, which is especially important for sustained agricultural production. The article ends by suggesting that the sustainable livelihoods approach can be useful for understanding the complexity of the issues surrounding the impacts of AIDS on young people's livelihoods and calls for further research to explore how their access to future sustainable livelihoods in rural southern Africa might be supported. [source]


Human leukocyte antigen–associated sequence polymorphisms in hepatitis C virus reveal reproducible immune responses and constraints on viral evolution

HEPATOLOGY, Issue 2 2007
Joerg Timm
CD8+ T cell responses play a key role in governing the outcome of hepatitis C virus (HCV) infection, and viral evolution enabling escape from these responses may contribute to the inability to resolve infection. To more comprehensively examine the extent of CD8 escape and adaptation of HCV to human leukocyte antigen (HLA) class I restricted immune pressures on a population level, we sequenced all non-structural proteins in a cohort of 70 chronic HCV genotype 1a-infected subjects (28 subjects with HCV monoinfection and 42 with HCV/human immunodeficiency virus [HIV] coinfection). Linking of sequence polymorphisms with HLA allele expression revealed numerous HLA-associated polymorphisms across the HCV proteome. Multiple associations resided within relatively conserved regions, highlighting attractive targets for vaccination. Additional mutations provided evidence of HLA-driven fixation of sequence polymorphisms, suggesting potential loss of some CD8 targets from the population. In a subgroup analysis of mono- and co-infected subjects some associations lost significance partly due to reduced power of the utilized statistics. A phylogenetic analysis of the data revealed the substantial influence of founder effects upon viral evolution and HLA associations, cautioning against simple statistical approaches to examine the influence of host genetics upon sequence evolution of highly variable pathogens. Conclusion: These data provide insight into the frequency and reproducibility of viral escape from CD8+ T cell responses in human HCV infection, and clarify the combined influence of multiple forces shaping the sequence diversity of HCV and other highly variable pathogens. (HEPATOLOGY 2007.) [source]


Accounting for Joint Ventures and Associates in Canada, UK, and US: Do US Rules Hide Information?

JOURNAL OF BUSINESS FINANCE & ACCOUNTING, Issue 3-4 2006
Kazbi Soonawalla
Abstract: Unlike US GAAP, accounting principles in Canada and the UK require disclosure of disaggregated components of joint ventures and associates. Using comparative analysis of Canadian, UK and US data, this study investigates the potential loss of forecasting and valuation relevant information from aggregating joint venture and associate accounting amounts. Findings show that aggregating joint venture and associate investment numbers, and aggregating joint venture revenues and expenses, each leads to loss of forecasting and valuation relevant information. Thus, current US accounting principles likely mask information that financial statement users could use to predict future earnings and explain share prices. [source]


Increasing Expression of the Retinoic X Receptor-B During Malignant Melanoma Progression

JOURNAL OF CUTANEOUS PATHOLOGY, Issue 1 2005
S.J. McAlhany
Retinoic X receptor-b (RXR-b) is a heterodimerization partner for vitamin D receptor (VDR). 1,25-dihydroxyvitamin D3 activation of VDR leads to growth inhibition in numerous cell lines, including some melanoma lines. Evaluation of VDR and RXR-b expression in vivo in melanocytic neoplasms will increase our understanding of this pathway's potential role in growth control. Previous studies in our laboratory showed decreased VDR expression in superficially invasive melanoma, and progressive loss of expression in deeply invasive melanomas and metastatic melanomas (MET). We next sought to evaluate RXR-b expression. Twenty-eight melanocytic neoplasms including 8 melanomas in situ (MIS), 9 primary invasive melanomas (PIM), and 11 MET were evaluated for RXR-b expression by immunohistochemistry. Nuclear labeling was assessed as 0 (0%), 1+ (<5%), 2+ (>5% but <50%), or 3+ (≥50%). A significant increase in RXR-b expression from low (0–1+) to high (>1+) was found when comparing MIS to PIM and MET (χ2, p < 0.05). These data suggest: 1) potential loss of 1,25-dihydroxyvitamin D3-induced growth inhibition during melanoma progression may be due to decreased VDR expression without concomitant loss of RXR-b; and 2) increased RXR-b expression during melanoma progression may offer a selective advantage through alternative signaling pathways. [source]
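The reported comparison is a 2 x 2 test of low (0–1+) versus high (>1+) labeling in MIS versus invasive/metastatic lesions. A minimal sketch of such a test (the cell counts below are hypothetical; only the group sizes, 8 MIS and 20 PIM+MET, come from the abstract):

```python
from scipy.stats import chi2_contingency, fisher_exact

# Rows: MIS, PIM+MET; columns: low (0-1+), high (>1+) RXR-b nuclear labeling.
# Counts are hypothetical; only the row totals (8 and 20) come from the abstract.
table = [[6, 2],
         [5, 15]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# With cells this small, Fisher's exact test is the safer choice.
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher exact p = {p_exact:.3f}")
```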


RESIDUAL PECTINESTERASE ACTIVITY IN DEHYDRATED ONION AND GARLIC PRODUCTS

JOURNAL OF FOOD PROCESSING AND PRESERVATION, Issue 1 2002
ELISABETH GARCIA
During the dehydration of onion and garlic products, use of high temperatures is undesirable due to the potential loss of aroma and flavor characteristics. As a consequence, residual pectinesterase (PE) activity may be found in these dehydrated spices. This study reports the presence of PE activity in raw onions and in dehydrated onion and garlic products. Pectinesterase activity is higher in the raw onion stem disks, and dehydrated products made from this tissue, than in the bulbs. Dehydrated onion products induced gelation of citrus pectin solutions and tomato purees. Although some inactivation of PE in dehydrated onion water suspensions and extracts was observed after 10 min at 50°C, complete inactivation required 2 min at 82°C. Commercial dehydration operations may require reevaluation to eliminate residual PE activity in dehydrated onion and garlic products. [source]
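Thermal inactivation of enzymes such as PE is commonly summarized with first-order kinetics, which is one way to relate time-temperature combinations like those reported above. A minimal sketch under that assumption (the rate constant is a hypothetical placeholder, not a value fitted to the onion data):

```python
import math

def residual_activity(k_per_min, minutes):
    """First-order inactivation: A/A0 = exp(-k * t)."""
    return math.exp(-k_per_min * minutes)

# Hypothetical rate constant at a given holding temperature.
k = 1.5   # 1/min
for t in (0.5, 1.0, 2.0):
    print(f"{t:.1f} min -> {100 * residual_activity(k, t):.1f}% activity remaining")

# Time needed to reach 1% residual activity under the same assumption.
print(f"time to 1% residual: {math.log(100) / k:.2f} min")
```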


Nordihydroguaiaretic acid induces astroglial death via glutathione depletion

JOURNAL OF NEUROSCIENCE RESEARCH, Issue 14 2007
Joo-Young Im
Abstract Nordihydroguaiaretic acid (NDGA) is known to cause cell death in certain cell types that is independent of its activity as a lipoxygenase inhibitor; however, the underlying mechanisms are not fully understood. In the present study, we examined the cellular responses of cultured primary astroglia to NDGA treatment. Continuous treatment of primary astroglia with 30 µM NDGA caused >85% cell death within 24 hr. Cotreatment with the lipoxygenase products 5-HETE, 12-HETE, and 15-HETE did not override the cytotoxic effects of NDGA. In assays employing the mitochondrial membrane potential-sensitive dye JC-1, NDGA was found to induce a rapid and almost complete loss of mitochondrial membrane potential. However, the mitochondrial permeability transition pore inhibitors cyclosporin A and bongkrekic acid did not block NDGA-induced astroglial death. We found that treatment with N-acetyl cysteine (NAC), glutathione (GSH), and GSH ethyl ester (GSH-EE) did inhibit NDGA-induced astroglial death. Consistently, NDGA-induced astroglial death proceeded in parallel with intracellular GSH depletion. Pretreatment with GSH-EE and NAC did not block NDGA-induced mitochondrial membrane potential loss, and there was no evidence that reactive oxygen species (ROS) production was involved in NDGA-induced astroglial death. Together, these results suggest that NDGA-induced astroglial death occurs via a mechanism that involves GSH depletion independent of lipoxygenase activity inhibition and ROS stress. © 2007 Wiley-Liss, Inc. [source]


PHYLOGENY OF THE DASYCLADALES (CHLOROPHYTA, ULVOPHYCEAE) BASED ON ANALYSES OF RUBISCO LARGE SUBUNIT (rbcL) GENE SEQUENCES,

JOURNAL OF PHYCOLOGY, Issue 4 2003
Frederick W. Zechman
The phylogeny of the green algal Order Dasycladales was inferred by maximum parsimony and Bayesian analyses of chloroplast-encoded rbcL sequence data. Bayesian analysis suggested that the tribe Acetabularieae is monophyletic but that some genera within the tribe, such as Acetabularia Lamouroux and Polyphysa Lamouroux, are not. Bayesian analysis placed Halicoryne Harvey as the sister group of the Acetabularieae, a result consistent with limited fossil evidence and with monophyly of the family Acetabulariaceae, but one not supported by significant posterior probability. Bayesian analysis further suggested that the family Dasycladaceae is a paraphyletic assemblage at the base of the Dasycladales radiation, casting doubt on the current family-level classification. The genus Cymopolia Lamouroux was inferred to be the basal-most dasycladalean genus, which is also consistent with limited fossil evidence. Unweighted parsimony analyses provided similar results but primarily differed by the sister relationship between Halicoryne Lamouroux and Bornetella Munier-Chalmas, thus supporting the monophyly of neither the Acetabulariaceae nor the Dasycladaceae. This result, however, was supported by low bootstrap values. Low transition-to-transversion ratios, potential loss of phylogenetic signal in third codon positions, and the 550-million-year-old dasycladalean lineage suggest that dasyclad rbcL sequences may be saturated due to deep-time divergences. Such factors may have contributed to inaccurate reconstruction of phylogeny, particularly with respect to potential inconsistency of parsimony analyses. Regardless, strongly negative g1 values were obtained in analyses including all codon positions, indicating the presence of considerable phylogenetic signal in dasyclad rbcL sequence data. Morphological features relevant to the separation of taxa within the Dasycladales and the possible effects of extinction on phylogeny reconstruction are discussed relative to the inferred phylogenies. [source]
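The saturation argument rests on the transition-to-transversion ratio: repeated substitutions at the same sites erode the usual excess of transitions, pulling the ratio toward the random expectation of about 0.5. A minimal sketch of counting that ratio from a pair of aligned sequences (the sequences are toy fragments, not rbcL data):

```python
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def ti_tv_counts(seq1, seq2):
    """Count transitions and transversions between two aligned DNA sequences."""
    transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b or a not in "ACGT" or b not in "ACGT":
            continue   # skip identical sites, gaps and ambiguity codes
        if {a, b} <= PURINES or {a, b} <= PYRIMIDINES:
            transitions += 1
        else:
            transversions += 1
    return transitions, transversions

# Toy aligned fragments, not real rbcL sequences.
ti, tv = ti_tv_counts("ATGGCCTTACGT", "ACGGTCTTACGA")
print(f"transitions={ti}, transversions={tv}, Ti/Tv={ti / tv:.2f}")
```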


New product decision making: How chance and size of loss influence what marketing managers see and do

PSYCHOLOGY & MARKETING, Issue 11 2002
David Forlani
This article empirically examines, in a new-product decision context, the relationships among risk propensity, perceived risk, and risky choice decisions, when risk is operationalized as the chance of loss and the size of loss. The results indicate that perceptions of chance of loss directly influence choice among alternatives possessing different chances of loss and gain, whereas risk propensity directly influences choice among alternatives that differ in their size of loss and gain. The findings extend previous research by identifying dimension-specific effects (a) between who the decision maker is and the size of an investment's potential loss, and (b) between what the decision maker sees and the chance that an investment will experience a loss. These results not only contribute to theory, but also provide marketing managers with guidance for their risky choice decisions. The composition of a new product's risk has implications for the decisions marketing managers make, for the placement of managers in risk-sensitive positions, and for the presentation of information to individuals with oversight responsibility for the firm's product strategy decisions. © 2002 Wiley Periodicals, Inc. [source]


Why Current Breast Pathology Practices Must Be Evaluated.

THE BREAST JOURNAL, Issue 5 2007
A Susan G. Komen for the Cure White Paper: June 200
To this end, the organization has a strong interest and proven track record in ensuring public investment in quality breast health and breast cancer care. Recently, Susan G. Komen for the Cure identified major issues in the practice of pathology that have a negative impact on the lives of thousands of breast cancer patients in the United States. These issues were identified through a comprehensive literature review and interviews conducted in 2005–2006 with experts in oncology, breast pathology, surgery, and radiology. The interviewees practiced in community, academic, and cooperative group settings. Komen for the Cure has identified four areas that have a direct impact on the quality of care breast cancer patients receive in the United States: the accuracy of breast pathology diagnostics; the effects of current health insurance and reimbursement policies on patients who are evaluated for a possible breast cancer diagnosis; the substantial decrease in tissue banking participation, particularly during a time of rapid advances in biologically correlated clinical science; and the role for Susan G. Komen for the Cure, pathology professional societies, and the Federal government in ensuring that breast pathology practices meet the highest possible standards in the United States. Concerns surrounding the quality and practice of breast pathology are not limited to diagnostic accuracy. Other considerations include: training and proficiency of pathologists who are evaluating breast specimens; the lack of integration of pathologists in the clinical care team; inadequate compensation for the amount of work required to thoroughly analyze specimens; potential loss in translational research as a result of medical privacy regulations; and the lack of mandatory uniform pathology practice standards without any way to measure the degree of variation or to remedy it. [source]


A probability and decision-model analysis of PROVOST seasonal multi-model ensemble integrations

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 567 2000
T. N. Palmer
Abstract A probabilistic analysis is made of seasonal ensemble integrations from the PROVOST project (PRediction Of climate Variations On Seasonal to interannual Time-scales), with emphasis on the Brier score and related Murphy decomposition, and the relative operating characteristic. To illustrate the significance of these results to potential users, results from the analysis of the relative operating characteristic are input to a simple decision model. The decision-model analysis is used to define a user-specific objective measure of the economic value of seasonal forecasts. The analysis is made for two simple meteorological forecast conditions or 'events', E, based on 850 hPa temperature. The ensemble integrations result from integrating four different models over the period 1979–93. For each model a set of 9-member ensembles is generated by running from consecutive analyses. Results from the Brier skill score analysis taken over all northern hemisphere grid points indicate that, whilst the skill of individual-model ensembles is only marginally higher than a probabilistic forecast of climatological frequencies, the multi-model ensemble is substantially more skilful than climatology. Both reliability and resolution are better for the multi-model ensemble than for the individual-model ensembles. This improvement arises both from the use of different models in the ensemble, and from the enhanced ensemble size obtained by combining individual-model ensembles; the latter reason was found to be the more important. Brier skill scores are higher for years in which there were moderate or strong El Niño Southern Oscillation (ENSO) events. Over Europe, only the multi-model ensembles showed skill over climatology. Similar conclusions are reached from an analysis of the relative operating characteristic. Results from the decision-model analysis show that the economic value of seasonal forecasts is strongly dependent on the cost, C, to the user of taking precautionary action against E, in relation to the potential loss, L, if precautionary action is not taken and E occurs. However, based on the multi-model ensemble data, the economic value can be as much as 50% of the value of a hypothetical perfect deterministic forecast. For the hemisphere as a whole, value is enhanced by restriction to ENSO years. It is shown that there is potential economic value in seasonal forecasts for European users. However, the impact of ENSO on economic value over Europe is mixed; value is enhanced by El Niño only for some potential users with specific C/L. The techniques developed are applicable to complex E for arbitrary regions. Hence these techniques are proposed as the basis of an objective probabilistic and decision-model evaluation of operational seasonal ensemble forecasts. [source]
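The decision model summarized above is the standard cost-loss framework: a user pays C to protect whenever the event is forecast, or loses L when it occurs unprotected, and forecast value is measured against climatology and a perfect forecast. A minimal sketch of that calculation from a hit rate and false-alarm rate, assuming the usual formulation; the hit rate, false-alarm rate, and climatological frequency below are illustrative, not PROVOST results:

```python
def economic_value(hit_rate, false_alarm_rate, clim_freq, cost_loss_ratio):
    """Relative economic value of a forecast in the simple cost-loss decision model.

    Expenses are expressed per unit loss L, so C becomes the cost/loss ratio r:
      climatology : min(r, clim_freq)  -- always protect or never protect
      perfect     : clim_freq * r
      forecast    : protect on a 'yes' forecast, take the full loss on misses
    Value = (E_clim - E_forecast) / (E_clim - E_perfect); 1 = perfect, 0 = climatology.
    """
    r, p = cost_loss_ratio, clim_freq
    e_clim = min(r, p)
    e_perfect = p * r
    e_forecast = (false_alarm_rate * (1 - p) * r   # protect, event does not occur
                  + hit_rate * p * r               # protect, event occurs
                  + (1 - hit_rate) * p)            # miss: pay the full loss
    return (e_clim - e_forecast) / (e_clim - e_perfect)

# Illustrative numbers only.
for r in (0.05, 0.2, 0.5):
    v = economic_value(hit_rate=0.7, false_alarm_rate=0.3, clim_freq=0.33,
                       cost_loss_ratio=r)
    print(f"C/L = {r:.2f} -> value = {v:.2f}")
```

As the loop suggests, value depends strongly on the user's C/L ratio and can be negative for users whose cost-loss ratio sits outside the range the forecast serves well.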


European Mathematical Genetics Meeting, Heidelberg, Germany, 12th–13th April 2007

ANNALS OF HUMAN GENETICS, Issue 4 2007
Article first published online: 28 MAY 200
Saurabh Ghosh 11 Indian Statistical Institute, Kolkata, India High correlations between two quantitative traits may be either due to common genetic factors or common environmental factors or a combination of both. In this study, we develop statistical methods to extract the contribution of a common QTL to the total correlation between the components of a bivariate phenotype. Using data on bivariate phenotypes and marker genotypes for sib-pairs, we propose a test for linkage between a common QTL and a marker locus based on the conditional cross-sib trait correlations (trait 1 of sib 1 , trait 2 of sib 2 and conversely) given the identity-by-descent sharing at the marker locus. The null hypothesis cannot be rejected unless there exists a common QTL. We use Monte-Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study On The Genetics Of Alcoholism project. Rémi Kazma 1 , Catherine Bonaïti-Pellié 1 , Emmanuelle Génin 12 INSERM UMR-S535 and Université Paris Sud, Villejuif, 94817, France Keywords: Gene-environment interaction, sibling recurrence risk, exposure correlation Gene-environment interactions may play important roles in complex disease susceptibility but their detection is often difficult. Here we show how gene-environment interactions can be detected by investigating the degree of familial aggregation according to the exposure of the probands. In case of gene-environment interaction, the distribution of genotypes of affected individuals, and consequently the risk in relatives, depends on their exposure. We developed a test comparing the risks in sibs according to the proband exposure. To evaluate the properties of this new test, we derived the formulas for calculating the expected risks in sibs according to the exposure of probands for various values of exposure frequency, relative risk due to exposure alone, frequencies of latent susceptibility genotypes, genetic relative risks and interaction coefficients. We find that the ratio of risks when the proband is exposed and not exposed is a good indicator of the interaction effect. We evaluate the power of the test for various sample sizes of affected individuals. We conclude that this test is valuable for diseases with moderate familial aggregation, only when the role of the exposure has been clearly evidenced. Since a correlation for exposure among sibs might lead to a difference in risks among sibs in the different proband exposure strata, we also add an exposure correlation coefficient in the model. Interestingly, we find that when this correlation is correctly accounted for, the power of the test is not decreased and might even be significantly increased. Andrea Callegaro 1 , Hans J.C. Van Houwelingen 1 , Jeanine Houwing-Duistermaat 13 Dept. of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands Keywords: Survival analysis, age at onset, score test, linkage analysis Non parametric linkage (NPL) analysis compares the identical by descent (IBD) sharing in sibling pairs to the expected IBD sharing under the hypothesis of no linkage. Often information is available on the marginal cumulative hazards (for example breast cancer incidence curves). Our aim is to extend the NPL methods by taking into account the age at onset of selected sibling pairs using these known marginal hazards. 
Li and Zhong (2002) proposed a (retrospective) likelihood ratio test based on an additive frailty model for genetic linkage analysis. From their model we derive a score statistic for selected samples which turns out to be a weighed NPL method. The weights depend on the marginal cumulative hazards and on the frailty parameter. A second approach is based on a simple gamma shared frailty model. Here, we simply test whether the score function of the frailty parameter depends on the excess IBD. We compare the performance of these methods using simulated data. Céline Bellenguez 1 , Carole Ober 2 , Catherine Bourgain 14 INSERM U535 and University Paris Sud, Villejuif, France 5 Department of Human Genetics, The University of Chicago, USA Keywords: Linkage analysis, linkage disequilibrium, high density SNP data Compared with microsatellite markers, high density SNP maps should be more informative for linkage analyses. However, because they are much closer, SNPs present important linkage disequilibrium (LD), which biases classical nonparametric multipoint analyses. This problem is even stronger in population isolates where LD extends over larger regions with a more stochastic pattern. We investigate the issue of linkage analysis with a 500K SNP map in a large and inbred 1840-member Hutterite pedigree, phenotyped for asthma. Using an efficient pedigree breaking strategy, we first identified linked regions with a 5cM microsatellite map, on which we focused to evaluate the SNP map. The only method that models LD in the NPL analysis is limited in both the pedigree size and the number of markers (Abecasis and Wigginton, 2005) and therefore could not be used. Instead, we studied methods that identify sets of SNPs with maximum linkage information content in our pedigree and no LD-driven bias. Both algorithms that directly remove pairs of SNPs in high LD and clustering methods were evaluated. Null simulations were performed to control that Zlr calculated with the SNP sets were not falsely inflated. Preliminary results suggest that although LD is strong in such populations, linkage information content slightly better than that of microsatellite maps can be extracted from dense SNP maps, provided that a careful marker selection is conducted. In particular, we show that the specific LD pattern requires considering LD between a wide range of marker pairs rather than only in predefined blocks. Peter Van Loo 1,2,3 , Stein Aerts 1,2 , Diether Lambrechts 4,5 , Bernard Thienpont 2 , Sunit Maity 4,5 , Bert Coessens 3 , Frederik De Smet 4,5 , Leon-Charles Tranchevent 3 , Bart De Moor 2 , Koen Devriendt 3 , Peter Marynen 1,2 , Bassem Hassan 1,2 , Peter Carmeliet 4,5 , Yves Moreau 36 Department of Molecular and Developmental Genetics, VIB, Belgium 7 Department of Human Genetics, University of Leuven, Belgium 8 Bioinformatics group, Department of Electrical Engineering, University of Leuven, Belgium 9 Department of Transgene Technology and Gene Therapy, VIB, Belgium 10 Center for Transgene Technology and Gene Therapy, University of Leuven, Belgium Keywords: Bioinformatics, gene prioritization, data fusion The identification of genes involved in health and disease remains a formidable challenge. Here, we describe a novel bioinformatics method to prioritize candidate genes underlying pathways or diseases, based on their similarity to genes known to be involved in these processes. It is freely accessible as an interactive software tool, ENDEAVOUR, at http://www.esat.kuleuven.be/endeavour. 
Unlike previous methods, ENDEAVOUR generates distinct prioritizations from multiple heterogeneous data sources, which are then integrated, or fused, into one global ranking using order statistics. ENDEAVOUR prioritizes candidate genes in a three-step process. First, information about a disease or pathway is gathered from a set of known "training" genes by consulting multiple data sources. Next, the candidate genes are ranked based on similarity with the training properties obtained in the first step, resulting in one prioritized list for each data source. Finally, ENDEAVOUR fuses each of these rankings into a single global ranking, providing an overall prioritization of the candidate genes. Validation of ENDEAVOUR revealed it was able to efficiently prioritize 627 genes in disease data sets and 76 genes in biological pathway sets, identify candidates of 16 mono- or polygenic diseases, and discover regulatory genes of myeloid differentiation. Furthermore, the approach identified YPEL1 as a novel gene involved in craniofacial development from a 2-Mb chromosomal region, deleted in some patients with DiGeorge-like birth defects. Finally, we are currently evaluating a pipeline combining array-CGH, ENDEAVOUR and in vivo validation in zebrafish to identify novel genes involved in congenital heart defects. Mark Broom 1 , Graeme Ruxton 2 , Rebecca Kilner 311 Mathematics Dept., University of Sussex, UK 12 Division of Environmental and Evolutionary Biology, University of Glasgow, UK 13 Department of Zoology, University of Cambridge, UK Keywords: Evolutionarily stable strategy, parasitism, asymmetric game Brood parasites chicks vary in the harm that they do to their companions in the nest. In this presentation we use game-theoretic methods to model this variation. Our model considers hosts which potentially abandon single nestlings and instead choose to re-allocate their reproductive effort to future breeding, irrespective of whether the abandoned chick is the host's young or a brood parasite's. The parasite chick must decide whether or not to kill host young by balancing the benefits from reduced competition in the nest against the risk of desertion by host parents. The model predicts that three different types of evolutionarily stable strategies can exist. (1) Hosts routinely rear depleted broods, the brood parasite always kills host young and the host never then abandons the nest. (2) When adult survival after deserting single offspring is very high, hosts always abandon broods of a single nestling and the parasite never kills host offspring, effectively holding them as hostages to prevent nest desertion. (3) Intermediate strategies, in which parasites sometimes kill their nest-mates and host parents sometimes desert nests that contain only a single chick, can also be evolutionarily stable. We provide quantitative descriptions of how the values given to ecological and behavioral parameters of the host-parasite system influence the likelihood of each strategy and compare our results with real host-brood parasite associations in nature. Martin Harrison 114 Mathematics Dept, University of Sussex, UK Keywords: Brood parasitism, games, host, parasite The interaction between hosts and parasites in bird populations has been studied extensively. Game theoretical methods have been used to model this interaction previously, but this has not been studied extensively taking into account the sequential nature of this game. 
We consider a model allowing the host and parasite to make a number of decisions, which depend on a number of natural factors. The host lays an egg, a parasite bird will arrive at the nest with a certain probability and then chooses to destroy a number of the host eggs and lay one of it's own. With some destruction occurring, either natural or through the actions of the parasite, the host chooses to continue, eject an egg (hoping to eject the parasite) or abandon the nest. Once the eggs have hatched the game then falls to the parasite chick versus the host. The chick chooses to destroy or eject a number of eggs. The final decision is made by the host, choosing whether to raise or abandon the chicks that are in the nest. We consider various natural parameters and probabilities which influence these decisions. We then use this model to look at real-world situations of the interactions of the Reed Warbler and two different parasites, the Common Cuckoo and the Brown-Headed Cowbird. These two parasites have different methods in the way that they parasitize the nests of their hosts. The hosts in turn have a different reaction to these parasites. Arne Jochens 1 , Amke Caliebe 2 , Uwe Roesler 1 , Michael Krawczak 215 Mathematical Seminar, University of Kiel, Germany 16 Institute of Medical Informatics and Statistics, University of Kiel, Germany Keywords: Stepwise mutation model, microsatellite, recursion equation, temporal behaviour We consider the stepwise mutation model which occurs, e.g., in microsatellite loci. Let X(t,i) denote the allelic state of individual i at time t. We compute expectation, variance and covariance of X(t,i), i=1,,,N, and provide a recursion equation for P(X(t,i)=z). Because the variance of X(t,i) goes to infinity as t grows, for the description of the temporal behaviour, we regard the scaled process X(t,i)-X(t,1). The results furnish a better understanding of the behaviour of the stepwise mutation model and may in future be used to derive tests for neutrality under this model. Paul O'Reilly 1 , Ewan Birney 2 , David Balding 117 Statistical Genetics, Department of Epidemiology and Public Health, Imperial, College London, UK 18 European Bioinformatics Institute, EMBL, Cambridge, UK Keywords: Positive selection, Recombination rate, LD, Genome-wide, Natural Selection In recent years efforts to develop population genetics methods that estimate rates of recombination and levels of natural selection in the human genome have intensified. However, since the two processes have an intimately related impact on genetic variation their inference is vulnerable to confounding. Genomic regions subject to recent selection are likely to have a relatively recent common ancestor and consequently less opportunity for historical recombinations that are detectable in contemporary populations. Here we show that selection can reduce the population-based recombination rate estimate substantially. In genome-wide studies for detecting selection we observe a tendency to highlight loci that are subject to low levels of recombination. We find that the outlier approach commonly adopted in such studies may have low power unless variable recombination is accounted for. We introduce a new genome-wide method for detecting selection that exploits the sensitivity to recent selection of methods for estimating recombination rates, while accounting for variable recombination using pedigree data. 
Through simulations we demonstrate the high power of the Ped/Pop approach to discriminate between neutral and adaptive evolution, particularly in the context of choosing outliers from a genome-wide distribution. Although methods have been developed showing good power to detect selection ,in action', the corresponding window of opportunity is small. In contrast, the power of the Ped/Pop method is maintained for many generations after the fixation of an advantageous variant Sarah Griffiths 1 , Frank Dudbridge 120 MRC Biostatistics Unit, Cambridge, UK Keywords: Genetic association, multimarker tag, haplotype, likelihood analysis In association studies it is generally too expensive to genotype all variants in all subjects. We can exploit linkage disequilibrium between SNPs to select a subset that captures the variation in a training data set obtained either through direct resequencing or a public resource such as the HapMap. These ,tag SNPs' are then genotyped in the whole sample. Multimarker tagging is a more aggressive adaptation of pairwise tagging that allows for combinations of two or more tag SNPs to predict an untyped SNP. Here we describe a new method for directly testing the association of an untyped SNP using a multimarker tag. Previously, other investigators have suggested testing a specific tag haplotype, or performing a weighted analysis using weights derived from the training data. However these approaches do not properly account for the imperfect correlation between the tag haplotype and the untyped SNP. Here we describe a straightforward approach to testing untyped SNPs using a missing-data likelihood analysis, including the tag markers as nuisance parameters. The training data is stacked on top of the main body of genotype data so there is information on how the tag markers predict the genotype of the untyped SNP. The uncertainty in this prediction is automatically taken into account in the likelihood analysis. This approach yields more power and also a more accurate prediction of the odds ratio of the untyped SNP. Anke Schulz 1 , Christine Fischer 2 , Jenny Chang-Claude 1 , Lars Beckmann 121 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany 22 Institute of Human Genetics, University of Heidelberg, Germany Keywords: Haplotype, haplotype sharing, entropy, Mantel statistics, marker selection We previously introduced a new method to map genes involved in complex diseases, using haplotype sharing-based Mantel statistics to correlate genetic and phenotypic similarity. Although the Mantel statistic is powerful in narrowing down candidate regions, the precise localization of a gene is hampered in genomic regions where linkage disequilibrium is so high that neighboring markers are found to be significant at similar magnitude and we are not able to discriminate between them. Here, we present a new approach to localize susceptibility genes by combining haplotype sharing-based Mantel statistics with an iterative entropy-based marker selection algorithm. For each marker at which the Mantel statistic is evaluated, the algorithm selects a subset of surrounding markers. The subset is chosen to maximize multilocus linkage disequilibrium, which is measured by the normalized entropy difference introduced by Nothnagel et al. (2002). We evaluated the algorithm with respect to type I error and power. Its ability to localize the disease variant was compared to the localization (i) without marker selection and (ii) considering haplotype block structure. 
Case-control samples were simulated from a set of 18 haplotypes, consisting of 15 SNPs in two haplotype blocks. The new algorithm gave correct type I error and yielded similar power to detect the disease locus compared to the alternative approaches. The neighboring markers were clearly less often significant than the causal locus, and also less often significant compared to the alternative approaches. Thus the new algorithm improved the precision of the localization of susceptibility genes. Mark M. Iles 123 Section of Epidemiology and Biostatistics, LIMM, University of Leeds, UK Keywords: tSNP, tagging, association, HapMap Tagging SNPs (tSNPs) are commonly used to capture genetic diversity cost-effectively. However, it is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be insufficient. If the pilot sample from which tSNPs are chosen is too small or the initial marker map too sparse, tSNP efficacy may be overestimated. An existing estimation method based on bootstrapping goes some way to correct for insufficient sample size and overfitting, but does not completely solve the problem. We describe a novel method, based on exclusion of haplotypes, that improves on the bootstrap approach. Using simulated data, the extent of the sample size problem is investigated and the performance of the bootstrap and the novel method are compared. We incorporate an existing method adjusting for marker density by ,SNP-dropping'. We find that insufficient sample size can cause large overestimates in tSNP efficacy, even with as many as 100 individuals, and the problem worsens as the region studied increases in size. Both the bootstrap and novel method correct much of this overestimate, with our novel method consistently outperforming the bootstrap method. We conclude that a combination of insufficient sample size and overfitting may lead to overestimation of tSNP efficacy and underpowering of studies based on tSNPs. Our novel approach corrects for much of this bias and is superior to the previous method. Sample sizes larger than previously suggested may still be required for accurate estimation of tSNP efficacy. This has obvious ramifications for the selection of tSNPs from HapMap data. Claudio Verzilli 1 , Juliet Chapman 1 , Aroon Hingorani 2 , Juan Pablo-Casas 1 , Tina Shah 2 , Liam Smeeth 1 , John Whittaker 124 Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK 25 Division of Medicine, University College London, UK Keywords: Meta-analysis, Genetic association studies We present a Bayesian hierarchical model for the meta-analysis of candidate gene studies with a continuous outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping markers (typically SNPs) in the same genetic region. Meta analyses of the results at each marker in isolation are seldom appropriate as they ignore the correlation that may exist between markers due to linkage disequlibrium (LD) and cannot assess the relative importance of variants at each marker. Also such marker-wise meta analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power. A better strategy is one which incorporates information about the LD between markers so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. 
Here we develop a Bayesian hierarchical linear regression that models the observed genotype group means and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach is applied to the meta analysis of 24 studies assessing the effect of 7 variants in the C-reactive protein (CRP) gene region on plasma CRP levels, an inflammatory biomarker shown in observational studies to be positively associated with cardiovascular disease. Cathryn M. Lewis 1 , Christopher G. Mathew 1 , Theresa M. Marteau 226 Dept. of Medical and Molecular Genetics, King's College London, UK 27 Department of Psychology, King's College London, UK Keywords: Risk, genetics, CARD15, smoking, model Recently progress has been made in identifying mutations that confer susceptibility to complex diseases, with the potential to use these mutations in determining disease risk. We developed methods to estimate disease risk based on genotype relative risks (for a gene G), exposure to an environmental factor (E), and family history (with recurrence risk λR for a relative of type R). λR must be partitioned into the risk due to G (which is modelled independently) and the residual risk. The risk model was then applied to Crohn's disease (CD), a severe gastrointestinal disease for which smoking increases disease risk approximately 2-fold, and mutations in CARD15 confer increased risks of 2.25 (for carriers of a single mutation) and 9.3 (for carriers of two mutations). CARD15 accounts for only a small proportion of the genetic component of CD, with a gene-specific λS,CARD15 of 1.16, from a total sibling relative risk of λS = 27. CD risks were estimated for high-risk individuals who are siblings of a CD case, and who also smoke. The CD risk to such individuals who carry two CARD15 mutations is approximately 0.34, and for those carrying a single CARD15 mutation the risk is 0.08, compared to a population prevalence of approximately 0.001. These results imply that complex disease genes may be valuable in estimating with greater precision than has hitherto been possible disease risks in specific, easily identified subgroups of the population with a view to prevention. Yurii Aulchenko 128 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Compression, information, bzip2, genome-wide SNP data, statistical genetics With advances in molecular technology, studies accessing millions of genetic polymorphisms in thousands of study subjects will soon become common. Such studies generate large amounts of data, whose effective storage and management is a challenge to the modern statistical genetics. Standard file compression utilities, such as Zip, Gzip and Bzip2, may be helpful to minimise the storage requirements. Less obvious is the fact that the data compression techniques may be also used in the analysis of genetic data. It is known that the efficiency of a particular compression algorithm depends on the probability structure of the data. In this work, we compared different standard and customised tools using the data from human HapMap project. 
Secondly, we investigate the potential uses of data compression techniques for the analysis of linkage, association and linkage disequilibrium Suzanne Leal 1 , Bingshan Li 129 Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, USA Keywords: Consanguineous pedigrees, missing genotype data Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD). Previously it was demonstrated by Huang et al (2005) that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data is available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. The false-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage is highly dependent on which family members are genotyped. When parental genotype data is available, the false-positive evidence for linkage is usually not as strong as when parental genotype data is unavailable. Which family members will aid in the reduction of false-positive evidence of linkage is highly dependent on which other family members are genotyped. For a pedigree with an affected proband whose first-cousin parents have been genotyped, further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or genotype data from the proband's sibling-grandparents. When parental genotypes are not available, false-positive evidence for linkage can be reduced by including in the analysis genotype data from either unaffected siblings of the proband or the proband's married-in-grandparents. Najaf Amin 1 , Yurii Aulchenko 130 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Genomic Control, pedigree structure, quantitative traits The Genomic Control (GC) method was originally developed to control for population stratification and cryptic relatedness in association studies. This method assumes that the effect of population substructure on the test statistics is essentially constant across the genome, and therefore unassociated markers can be used to estimate the effect of confounding onto the test statistic. The properties of GC method were extensively investigated for different stratification scenarios, and compared to alternative methods, such as the transmission-disequilibrium test. The potential of this method to correct not for occasional cryptic relations, but for regular pedigree structure, however, was not investigated before. In this work we investigate the potential of the GC method for pedigree-based association analysis of quantitative traits. The power and type one error of the method was compared to standard methods, such as the measured genotype (MG) approach and quantitative trait transmission-disequilibrium test. In human pedigrees, with trait heritability varying from 30 to 80%, the power of MG and GC approach was always higher than that of TDT. GC had correct type 1 error and its power was close to that of MG under moderate heritability (30%), but decreased with higher heritability. 
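The Genomic Control correction discussed in the preceding abstract rescales association test statistics by an inflation factor λ estimated from the bulk of presumed-null markers. A minimal sketch of that standard correction on simulated 1-df statistics (the inflation factor and marker count are illustrative, not values from the study):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulated 1-df association chi-square statistics for ~10,000 "null" markers,
# inflated by a modest amount to mimic confounding (illustrative only).
true_inflation = 1.15
stats = chi2.rvs(df=1, size=10_000, random_state=rng) * true_inflation

# Genomic Control: estimate lambda from the median of the null statistics
# (the median of a 1-df chi-square is about 0.4549) and divide every statistic by it.
lambda_gc = np.median(stats) / chi2.ppf(0.5, df=1)
corrected = stats / lambda_gc

print(f"estimated lambda = {lambda_gc:.3f}")
print(f"median corrected statistic = {np.median(corrected):.3f}")  # back near 0.455
```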
William Astle 1 , Chris Holmes 2 , David Balding 131 Department of Epidemiology and Public Health, Imperial College London, UK 32 Department of Statistics, University of Oxford, UK Keywords: Population structure, association studies, genetic epidemiology, statistical genetics In the analysis of population association studies, Genomic Control (Devlin & Roeder, 1999) (GC) adjusts the Armitage test statistic to correct the type I error for the effects of population substructure, but its power is often sub-optimal. Turbo Genomic Control (TGC) generalises GC to incorporate co-variation of relatedness and phenotype, retaining control over type I error while improving power. TGC is similar to the method of Yu et al. (2006), but we extend it to binary (case-control) in addition to quantitative phenotypes, we implement improved estimation of relatedness coefficients, and we derive an explicit statistic that generalizes the Armitage test statistic and is fast to compute. TGC also has similarities to EIGENSTRAT (Price et al., 2006), which is a new method based on principal components analysis. The problems of population structure (Clayton et al., 2005) and cryptic relatedness (Voight & Pritchard, 2005) are essentially the same: if patterns of shared ancestry differ between cases and controls, whether distant (coancestry) or recent (cryptic relatedness), false positives can arise and power can be diminished. With large numbers of widely-spaced genetic markers, coancestry can now be measured accurately for each pair of individuals via patterns of allele-sharing. Instead of modelling subpopulations, we work instead with a coancestry coefficient for each pair of individuals in the study. We explain the relationships between TGC, GC and EIGENSTRAT. We present simulation studies and real data analyses to illustrate the power advantage of TGC in a range of scenarios incorporating both substructure and cryptic relatedness. References Clayton, D. G. et al. (2005) Population structure, differential bias and genomic control in a large-scale case-control association study. Nature Genetics 37(11), November 2005. Devlin, B. & Roeder, K. (1999) Genomic control for association studies. Biometrics 55(4), December 1999. Price, A. L. et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8), August 2006. Voight, B. J. & Pritchard, J. K. (2005) Confounding from cryptic relatedness in case-control association studies. Public Library of Science Genetics 1(3), September 2005. Yu, J. et al. (2006) A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), February 2006. Hervé Perdry 1 , Marie-Claude Babron 1 , Françoise Clerget-Darpoux 133 INSERM U535 and Univ. Paris Sud, UMR-S 535, Villejuif, France Keywords: Modifier genes, case-parents trios, ordered transmission disequilibrium test A modifying locus is a polymorphic locus, distinct from the disease locus, which leads to differences in the disease phenotype, either by modifying the penetrance of the disease allele, or by modifying the expression of the disease. The effect of such a locus is a clinical heterogeneity that can be reflected by the values of an appropriate covariate, such as the age of onset, or the severity of the disease. We designed the Ordered Transmission Disequilibrium Test (OTDT) to test for a relation between the clinical heterogeneity, expressed by the covariate, and marker genotypes of a candidate gene. 
The method applies to trio families with one affected child and his parents. Each family member is genotyped at a bi-allelic marker M of a candidate gene. To each of the families is associated a covariate value; the families are ordered on the values of this covariate. As the TDT (Spielman et al. 1993), the OTDT is based on the observation of the transmission rate T of a given allele at M. The OTDT aims to find a critical value of the covariate which separates the sample of families in two subsamples in which the transmission rates are significantly different. We investigate the power of the method by simulations under various genetic models and covariate distributions. Acknowledgments H Perdry is funded by ARSEP. Pascal Croiseau 1 , Heather Cordell 2 , Emmanuelle Génin 134 INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France 35 Institute of Human Genetics, Newcastle University, UK Keywords: Association, missing data, conditionnal logistic regression Missing data is an important problem in association studies. Several methods used to test for association need that individuals be genotyped at the full set of markers. Individuals with missing data need to be excluded from the analysis. This could involve an important decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL. One may then falsely conclude that the marker is more likely to be the DSL. We recently developed a Multiple Imputation method to infer missing data on case-parent trios Starting from the observed data, a few number of complete data sets are generated by Markov-Chain Monte Carlo approach. These complete datasets are analysed using standard statistical package and the results are combined as described in Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually correctly detect the DSL site even if the percentage of missing data is high. This is not the case for the naïve approach that consists in discarding trios with missing data. In conclusion, Multiple imputation presents the advantage of being easy to use and flexible and is therefore a promising tool in the search for DSL involved in complex diseases. Salma Kotti 1 , Heike Bickeböller 2 , Françoise Clerget-Darpoux 136 University Paris Sud, UMR-S535, Villejuif, France 37 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany Keywords: Genotype relative risk, internal controls, Family based analyses Family based analyses using internal controls are very popular both for detecting the effect of a genetic factor and for estimating the relative disease risk on the corresponding genotypes. Two different procedures are often applied to reconstitute internal controls. The first one considers one pseudocontrol genotype formed by the parental non-transmitted alleles called also 1:1 matching of alleles, while the second corresponds to three pseudocontrols corresponding to all genotypes formed by the parental alleles except the one of the case (1:3 matching). Many studies have compared between the two procedures in terms of the power and have concluded that the difference depends on the underlying genetic model and the allele frequencies. 
However, the estimation of the Genotype Relative Risk (GRR) under the two procedures has not been studied. Because in the 1:1 matching the control group is composed of the alleles not transmitted to the affected child, whereas in the 1:3 matching the control group includes alleles already transmitted to the affected child, we expect a difference in the GRR estimation. In fact, we suspect that the second procedure leads to biased estimation of the GRRs. We will analytically derive the GRR estimators for the 1:1 and 1:3 matching and will present the results at the meeting. Luigi Palla 1, David Siegmund 2; 1 Department of Mathematics, Free University Amsterdam, The Netherlands; 2 Department of Statistics, Stanford University, California, USA. Keywords: TDT, assortative mating, inbreeding, statistical power. A substantial amount of assortative mating (AM) is often recorded for physical and psychological traits, both dichotomous and quantitative, that are supposed to have a multifactorial genetic component. In particular, AM has the effect of increasing the genetic variance, even more than inbreeding does, because when the trait has a multifactorial origin it acts across loci as well as within loci. Under the assumption of a polygenic model for AM dating back to Wright (1921) and refined by Crow and Felsenstein (1968, 1982), the effect of assortative mating on the power to detect genetic association with the Transmission Disequilibrium Test (TDT) is explored as parameters such as the effective number of genes and the allele frequency vary. The power is reflected by the noncentrality parameter of the TDT and is expressed as a function of the number of trios, the relative risk of the heterozygous genotype and the allele frequency (Siegmund and Yakir, 2007). The noncentrality parameter of the relevant score statistic is updated to account for the effect of AM, which is expressed in terms of an 'effective' inbreeding coefficient. In particular, for dichotomous traits it is apparent that the higher the number of genes involved in the trait, the lower the loss in power due to AM.
Finally, an attempt is made to extend this relation to the Q-TDT (Rabinowitz, 1997), which involves considering the effect of AM also on the phenotypic variance of the trait of interest, under the assumption that AM affects only its additive genetic component. References Crow, J. F. & Felsenstein, J. (1968) The effect of assortative mating on the genetic composition of a population. Eugen. Quart. 15, 87-97. Rabinowitz, D. (1997) A transmission disequilibrium test for quantitative trait loci. Human Heredity 47, 342-350. Siegmund, D. & Yakir, B. (2007) The Statistics of Gene Mapping. Springer. Wright, S. (1921) Systems of mating. III. Assortative mating based on somatic resemblance. Genetics 6, 144-161. Jérémie Nsengimana 1, Ben D Brown 2, Alistair S Hall 2, Jenny H Barrett 1; 1 Leeds Institute of Molecular Medicine, University of Leeds, UK; 2 Leeds Institute for Genetics, Health and Therapeutics, University of Leeds, UK. Keywords: Inflammatory genes, haplotype, coronary artery disease. Genetic Risk of Acute Coronary Events (GRACE) is an initiative to collect cases of coronary artery disease (CAD) and their unaffected siblings in the UK and to use them to map genetic variants increasing disease risk. The aim of the present study was to test the association between CAD and 51 single nucleotide polymorphisms (SNPs) and their haplotypes from 35 inflammatory genes. Genotype data were available for 1154 persons affected before age 66 (including 48% before age 50) and their 1545 unaffected siblings (891 discordant families). Each SNP was tested for association with CAD, and haplotypes within genes or gene clusters were tested using FBAT (Rabinowitz & Laird, 2000). For the most significant results, genetic effect size was estimated using conditional logistic regression (CLR) within STATA, adjusting for other risk factors. Haplotypes were assigned using HAPLORE (Zhang et al., 2005), which considers all parental mating types consistent with offspring genotypes and assigns them a probability of occurrence. This probability was used in CLR to weight the haplotypes. In the single-SNP analysis, several SNPs showed some evidence for association, including one SNP in the interleukin-1A gene. Analysing haplotypes in the interleukin-1 gene cluster, a common 3-SNP haplotype was found to increase the risk of CAD (P = 0.009). In an additive genetic model adjusting for covariates, the odds ratio (OR) for this haplotype is 1.56 (95% CI: 1.16-2.10, P = 0.004) for early-onset CAD (before age 50). This study illustrates the utility of haplotype analysis in family-based association studies to investigate candidate genes. References Rabinowitz, D. & Laird, N. M. (2000) Hum Hered 50, 211-223. Zhang, K., Sun, F. & Zhao, H. (2005) Bioinformatics 21, 90-103. Andrea Foulkes 1, Recai Yucel 1, Xiaohong Li 1; 1 Division of Biostatistics, University of Massachusetts, USA. Keywords: Haplotype, high-dimensional, mixed modeling. The explosion of molecular-level information coupled with large epidemiological studies presents an exciting opportunity to uncover the genetic underpinnings of complex diseases; however, several analytical challenges remain to be addressed. Characterizing the components of complex diseases inevitably requires consideration of synergies across multiple genetic loci and environmental and demographic factors. In addition, it is critical to capture information on allelic phase, that is, whether alleles within a gene are in cis (on the same chromosome) or in trans (on different chromosomes).
In association studies of unrelated individuals, this alignment of alleles within a chromosomal copy is generally not observed. We address the potential ambiguity in allelic phase in this high-dimensional data setting using mixed effects models. Both a semi-parametric and a fully likelihood-based approach to estimation are considered to account for missingness in cluster identifiers. In the first case, we apply a multiple imputation procedure coupled with a first-stage expectation-maximization algorithm for parameter estimation. A bootstrap approach is employed to assess sensitivity to variability induced by parameter estimation. Secondly, a fully likelihood-based approach using an expectation conditional maximization algorithm is described. Notably, these models allow for characterizing high-order gene-gene interactions while providing a flexible statistical framework to account for the confounding or mediating role of person-specific covariates. The proposed method is applied to data arising from a cohort of human immunodeficiency virus type-1 (HIV-1) infected individuals at risk for therapy-associated dyslipidemia. Simulation studies demonstrate reasonable power and control of family-wise type I error rates. Vivien Marquard 1, Lars Beckmann 1, Jenny Chang-Claude 1; 1 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ), Heidelberg, Germany. Keywords: Genotyping errors, type I error, haplotype-based association methods. It has been shown in several simulation studies that genotyping errors may have a great impact on the type I error of statistical methods used in genetic association analysis of complex diseases. Our aim was to investigate type I error rates in a case-control study when differential and non-differential genotyping errors were introduced in realistic scenarios. We simulated case-control data sets in which individual genotypes were drawn from a haplotype distribution of 18 haplotypes with 15 markers in the APM1 gene. Genotyping errors were introduced following the 'unrestricted' and 'symmetric with 0 edges' error models described by Heid et al. (2006). In six scenarios, errors resulted from changes of one allele to another with predefined probabilities of 1%, 2.5% or 10%, respectively. Multiple errors per haplotype were possible and could vary between 0 and 15, the number of markers investigated. We examined three association methods: Mantel statistics using haplotype sharing; a haplotype-specific score test; and the Armitage trend test for single markers. The type I error rates of all three methods were unaffected by genotyping error rates below 1%. For higher error rates and differential errors, the type I error of the Mantel statistic was only slightly increased and that of the Armitage trend test moderately increased, whereas the type I error rates of the score test were highly increased. The type I error rates were correct for all three methods for non-differential errors. Further investigations will be carried out with different frequencies of differential error rates and will focus on power. Arne Neumann 1, Dörthe Malzahn 1, Martina Müller 2, Heike Bickeböller 1; 1 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany; 2 GSF-National Research Center for Environment and Health, Neuherberg & IBE-Institute of Epidemiology, Ludwig-Maximilians University München, Germany. Keywords: Interaction, longitudinal, nonparametric. Longitudinal data show the time-dependent course of phenotypic traits.
In this contribution, we consider longitudinal cohort studies and investigate the association between two candidate genes and a dependent quantitative longitudinal phenotype. The set-up defines a factorial design which allows us to test simultaneously for the overall gene effect of the loci as well as for possible gene-gene and gene-time interaction. The latter would induce genetically based time-profile differences in the longitudinal phenotype. We adapt a non-parametric statistical test to genetic epidemiological cohort studies and investigate its performance by simulation studies. The statistical test was originally developed for longitudinal clinical studies (Brunner, Munzel & Puri, 1999, J Multivariate Anal 70:286-317). It is non-parametric in the sense that no assumptions are made about the underlying distribution of the quantitative phenotype. Longitudinal observations belonging to the same individual can be arbitrarily dependent on one another for the different time points, whereas trait observations of different individuals are independent. The two loci are assumed to be statistically independent. Our simulations show that the nonparametric test is comparable with ANOVA in terms of power to detect gene-gene and gene-time interaction in an ANOVA-favourable setting. Rebecca Hein 1, Lars Beckmann 1, Jenny Chang-Claude 1; 1 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ), Heidelberg, Germany. Keywords: Indirect association studies, interaction effects, linkage disequilibrium, marker allele frequency. Association studies accounting for gene-environment interactions (GxE) may be useful for detecting genetic effects and identifying important environmental effect modifiers. Current technology facilitates very dense marker spacing in genetic association studies; however, the true disease variant(s) may not be genotyped. In this situation, an association between a gene and a phenotype may still be detectable using genetic markers associated with the true disease variant(s) (indirect association). Zondervan and Cardon [2004] showed that the odds ratios (OR) of markers associated with the disease variant depend strongly on the linkage disequilibrium (LD) between the variant and the markers and on whether their allele frequencies match, and thereby influence the sample size needed to detect genetic association. We examined the influence of LD and allele frequencies on the sample size needed to detect GxE in indirect association studies, and provide tables for sample size estimation. For discordant allele frequencies and incomplete LD, sample sizes can be unfeasibly large. The influence of both factors is stronger for disease loci with small rather than moderate to high disease allele frequencies. A decline in D' of e.g. 5% has less impact on sample size than increasing the difference in allele frequencies by the same percentage. Assuming 80% power, large interaction effects can be detected using smaller sample sizes than those needed for the detection of main effects. The detection of interaction effects involving rare alleles may not be possible. Focussing only on marker density can be a limited strategy in indirect association studies for GxE. Cyril Dalmasso 1, Emmanuelle Génin 2, Catherine Bourgain 2, Philippe Broët 1; 1 JE 2492, Univ.
Paris-Sud, France; 2 INSERM UMR-S 535 and University Paris Sud, Villejuif, France. Keywords: Linkage analysis, multiple testing, False Discovery Rate, mixture model. In the context of genome-wide linkage analyses, where a large number of statistical tests are performed simultaneously, the False Discovery Rate (FDR), defined as the expected proportion of false discoveries among all discoveries, is now widely used to take the multiple testing problem into account. Other related criteria have been considered, such as the local False Discovery Rate (lFDR), a variant of the FDR that gives each test its own measure of significance. The lFDR is defined as the posterior probability that a null hypothesis is true. Most of the proposed methods for estimating the lFDR or the FDR rely on distributional assumptions under the null hypothesis. However, in observational studies, the empirical null distribution may be very different from the theoretical one. In this work, we propose a mixture-model-based approach that provides estimates of the lFDR and the FDR in the context of large-scale variance component linkage analyses. In particular, this approach allows estimation of the empirical null distribution, the latter being a key quantity for any simultaneous inference procedure. The proposed method is applied to a real dataset. Arief Gusnanto 1, Frank Dudbridge 1; 1 MRC Biostatistics Unit, Cambridge, UK. Keywords: Significance, genome-wide, association, permutation, multiplicity. Genome-wide association scans have introduced statistical challenges, mainly in the multiplicity of thousands of tests. The question of what constitutes a significant finding remains somewhat unresolved. Permutation testing is very time-consuming, whereas Bayesian arguments struggle to distinguish direct from indirect association. It therefore seems attractive to summarise the multiplicity in a simple form that allows users to avoid time-consuming permutations. A standard significance level would facilitate reporting of results and reduce the need for permutation tests. This is potentially important because current scans do not have full coverage of the whole genome and yet the implicit multiplicity is genome-wide. We discuss some proposed summaries, with reference to the empirical null distribution of the multiple tests, approximated through a large number of random permutations. Using genome-wide data from the Wellcome Trust Case-Control Consortium, we use a sub-sampling approach with increasing density to estimate the nominal p-value required to obtain family-wise significance of 5%. The results indicate that the significance level converges to about 1e-7 as the marker spacing becomes infinitely dense. We considered the concept of an effective number of independent tests and showed that, when used in a Bonferroni correction, this number varies with the overall significance level but is roughly constant in the region of interest. We compared several estimators of the effective number of tests and showed that, in the region of significance of interest, Patterson's eigenvalue-based estimator gives approximately the right family-wise error rate.
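To make the permutation-based reasoning in the preceding abstract concrete, the following toy sketch estimates a family-wise 5% significance threshold from the empirical null distribution of the minimum p-value across markers, and converts it into an effective number of independent tests. The simulated genotypes, the simple score-form trend test and the permutation count are illustrative assumptions; this is not the authors' sub-sampling implementation.

```python
# Toy sketch: family-wise 5% threshold from the empirical null of the
# minimum p-value under phenotype permutations. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, m = 500, 2000                               # individuals, markers (toy sizes)
geno = rng.binomial(2, 0.3, size=(n, m))       # 0/1/2 genotype counts
pheno = rng.binomial(1, 0.5, size=n)           # binary case/control status

def trend_pvalues(y, g):
    """Armitage-type trend test p-value for each marker (score form, n*r^2)."""
    y_c = y - y.mean()
    g_c = g - g.mean(axis=0)
    score = y_c @ g_c
    var = y.var() * (g_c ** 2).sum(axis=0)
    chi2 = score ** 2 / var
    return stats.chi2.sf(chi2, df=1)

# Empirical null of the minimum p-value under permuted phenotypes.
n_perm = 200                                   # use many more in practice
min_p = np.array([trend_pvalues(rng.permutation(pheno), geno).min()
                  for _ in range(n_perm)])

# The 5th percentile of min-p is the per-marker threshold giving roughly
# 5% family-wise error; 0.05 / threshold is the Bonferroni-style
# "effective number of independent tests".
threshold = np.quantile(min_p, 0.05)
effective_tests = 0.05 / threshold
print(f"per-marker threshold ~ {threshold:.2e}, "
      f"effective number of tests ~ {effective_tests:.0f}")
```

With real genotype data the same recipe applies with many more permutations; the sub-sampling-with-increasing-density step described in the abstract then extrapolates the threshold towards an infinitely dense marker map.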
Michael Nothnagel 1, Amke Caliebe 1, Michael Krawczak 1; 1 Institute of Medical Informatics and Statistics, University Clinic Schleswig-Holstein, University of Kiel, Germany. Keywords: Association scans, Bayesian framework, posterior odds, genetic risk, multiplicative model. Whole-genome association scans have been suggested to be a cost-efficient way to survey genetic variation and to map genetic disease factors. We used a Bayesian framework to investigate the posterior odds of a genuine association under multiplicative disease models. We demonstrate that the p value alone is not a sufficient means to evaluate the findings in association studies. We suggest that likelihood ratios should accompany p values in association reports. We argue that, given the reported results of whole-genome scans, more associations should have been successfully replicated if the consistently made assumptions about considerable genetic risks were correct. We conclude that it is very likely that the vast majority of relative genetic risks are only of the order of 1.2 or lower. Clive Hoggart 1, Maria De Iorio 1, John Whittaker 2, David Balding 1; 1 Department of Epidemiology and Public Health, Imperial College London, UK; 2 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK. Keywords: Genome-wide association analyses, shrinkage priors, lasso. Testing one SNP at a time does not fully realise the potential of genome-wide association studies to identify multiple causal variants of small effect, which is a plausible scenario for many complex diseases. Moreover, many simulation studies assume a single causal variant, so more complex realities are ignored. Analysing large numbers of variants simultaneously is now becoming feasible, thanks to developments in Bayesian stochastic search methods. We pose the problem of SNP selection as variable selection in a regression model. In contrast to single-SNP tests, this approach simultaneously models the effect of all SNPs. SNPs are selected via a Bayesian interpretation of the lasso (Tibshirani, 1996): the maximum a posteriori (MAP) estimate of the regression coefficients, which are given independent double exponential prior distributions. The double exponential distribution is an example of a shrinkage prior; MAP estimates under shrinkage priors can be exactly zero, so all SNPs with non-zero regression coefficients are selected. In addition to the commonly used double exponential (Laplace) prior, we also implement the normal exponential gamma prior distribution. We show that use of the Laplace prior improves SNP selection in comparison with single-SNP tests, and that the normal exponential gamma prior leads to a further improvement. Our method is fast and can handle very large numbers of SNPs: we demonstrate its performance using both simulated and real genome-wide data sets with 500K SNPs, which can be analysed in 2 hours on a desktop workstation. Mickael Guedj 1,2, Jerome Wojcik 2, Gregory Nuel 1; 1 Laboratoire Statistique et Génome, Université d'Evry, Evry, France; 2 Serono Pharmaceutical Research Institute, Plan-les-Ouates, Switzerland. Keywords: Local replication, local score, association. In gene mapping, replication of initial findings has been put forward as the approach of choice for filtering false positives from true signals for underlying loci. In practice, however, such replications are observed too rarely.
Besides statistical and technical factors (lack of power, multiple testing, stratification, quality control, ...), inconsistent conclusions obtained from independent populations might result from real biological differences. In particular, the high degree of variation in the strength of LD among populations of different origins is a major challenge to the discovery of genes. Seeking Local Replications (defined as the presence of a signal of association in the same genomic region across populations) instead of strict replications (same locus, same risk allele) may lead to more reliable results. Recently, a multi-marker approach based on the Local Score statistic has been proposed as a simple and efficient way to select candidate genomic regions at the first stage of genome-wide association studies. Here we propose an extension of this approach adapted to replicated association studies. Based on simulations, this method appears promising; in particular, it outperforms classical single-marker strategies in detecting modest-effect genes. Additionally, it constitutes, to our knowledge, the first framework dedicated to the detection of such Local Replications. Juliet Chapman 1, Claudio Verzilli 1, John Whittaker 1; 1 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK. Keywords: FDR, association studies, Bayesian model selection. As genome-wide association studies become commonplace, there is debate as to how such studies might be analysed and what we might hope to gain from the data. It is clear that standard single-locus approaches are limited in that they do not adjust for the effects of other loci, and problematic since it is not obvious how to adjust for multiple comparisons. False discovery rates have been suggested, but it is unclear how well these will cope with highly correlated genetic data. We consider the validity of standard false discovery rates in large-scale association studies. We also show that a Bayesian procedure has advantages in detecting causal loci amongst a large number of dependent SNPs, and we investigate properties of a Bayesian FDR. Peter Kraft 1; 1 Harvard School of Public Health, Boston, USA. Keywords: Gene-environment interaction, genome-wide association scans. Appropriately analyzed two-stage designs, where a subset of available subjects are genotyped on a genome-wide panel of markers at the first stage and then a much smaller subset of the most promising markers are genotyped on the remaining subjects, can have nearly as much power as a single-stage study in which all subjects are genotyped on the genome-wide panel, yet can be much less expensive. Typically, the "most promising" markers are selected based on evidence for a marginal association between genotypes and disease. Subsequently, the few markers found to be associated with disease at the end of the second stage are interrogated for evidence of gene-environment interaction, mainly to understand their role in disease etiology and their public health impact. However, this approach may miss variants which have a sizeable effect restricted to one exposure stratum and therefore only a modest marginal effect. We have proposed to use information on the joint effects of genes and a discrete list of environmental exposures at the initial screening stage to select promising markers for the second stage [Kraft et al., Hum Hered 2007].
This approach optimizes power to detect variants that have a sizeable marginal effect and variants that have a small marginal effect but a sizeable effect in a stratum defined by an environmental exposure. As an example, I discuss a proposed genome-wide association scan for Type II diabetes susceptibility variants based on several large nested case-control studies. Beate Glaser 1, Peter Holmans 1; 1 Biostatistics and Bioinformatics Unit, Cardiff University, School of Medicine, Heath Park, Cardiff, UK. Keywords: Combined case-control and trio analysis, power, false-positive rate, simulation, association studies. The statistical power of genetic association studies can be enhanced by combining the analysis of case-control and parent-offspring trio samples. Various combined analysis techniques have recently been developed; as yet, there have been no comparisons of their power. This work was performed with the aim of identifying the most powerful method among available combined techniques, including test statistics developed by Kazeem and Farrall (2005), Nagelkerke and colleagues (2004) and Dudbridge (2006), as well as a simple combination of χ2 statistics from the single samples. Simulation studies were performed to investigate their power under different additive, multiplicative, dominant and recessive disease models. False-positive rates were determined by studying the type I error rates under null models, including models with unequal allele frequencies between the separate case-control and trio samples. We identified three techniques with equivalent power and false-positive rates, which included modifications of the three main approaches: 1) the unmodified combined odds ratio estimate by Kazeem & Farrall (2005), 2) a modified approach to the combined risk ratio estimate by Nagelkerke & colleagues (2004) and 3) a modified technique for a combined risk ratio estimate by Dudbridge (2006). Our work highlights the importance of studies investigating test performance criteria of novel methods, as they will help users to select the optimal approach within a range of available analysis techniques. David Almorza 1, M.V. Kandus 2, Juan Carlos Salerno 2, Rafael Boggio 3; 1 Facultad de Ciencias del Trabajo, University of Cádiz, Spain; 2 Instituto de Genética IGEAF, Buenos Aires, Argentina; 3 Universidad Nacional de La Plata, Buenos Aires, Argentina. Keywords: Principal component analysis, maize, ear weight, inbred lines. The objective of this work was to evaluate the relationship among different traits of the ear of maize inbred lines and to group genotypes according to their performance. Ten inbred lines developed at IGEAF (INTA Castelar) and five public inbred lines as checks were used. A field trial was carried out in Castelar, Buenos Aires (34° 36' S, 58° 39' W) using a completely randomized design with three replications. At harvest, individual weight (P.E.), diameter (D.E.), row number (N.H.) and length (L.E.) of the ear were assessed. A principal component analysis (PCA; Infostat 2005) was used, and the variability of the data was depicted with a biplot. Principal components 1 and 2 (CP1 and CP2) explained 90% of the data variability. CP1 was correlated with P.E., L.E. and D.E., while CP2 was correlated with N.H. We found that individual weight (P.E.) was more strongly correlated with diameter of the ear (D.E.) than with length (L.E.). Five groups of inbred lines were distinguished: with high P.E. and mean N.H. (04-70, 04-73, 04-101 and MO17), with high P.E. but lower N.H. (04-61 and B14), with mean P.E.
and N.H. (B73, 04-123 and 04-96), with high N.H. but lower P.E. (LP109, 04-8, 04-91 and 04-76), and with low P.E. and low N.H. (LP521 and 04-104). The use of PCA showed which variables contributed most to ear weight and how they were correlated with one another. Moreover, the different groups found with this analysis allow the evaluation of inbred lines by several traits simultaneously. Sven Knüppel 1, Anja Bauerfeind 1, Klaus Rohde 1; 1 Department of Bioinformatics, MDC Berlin, Germany. Keywords: Haplotypes, association studies, case-control, nuclear families. Gene-chip technology provides a plethora of phase-unknown SNP genotypes for finding significant associations with genetic traits. To circumvent the possibly low information content of a single SNP, successive SNPs are grouped and haplotypes estimated. Haplotype estimation, however, may reveal ambiguous haplotype pairs and bias the application of statistical methods. Zaykin et al. (Hum Hered, 53:79-91, 2002) proposed the construction of a design matrix to take this ambiguity into account. Here we present a set of functions written for the statistical package R which carries out haplotype estimation on the basis of the EM algorithm for individuals (case-control) or nuclear families. The construction of a design matrix on the basis of estimated haplotypes or haplotype pairs allows the application of standard methods for association studies (linear and logistic regression) as well as methods such as haplotype-sharing statistics and the TDT. Applications of these methods to genome-wide association screens will be demonstrated. Manuela Zucknick 1, Chris Holmes 2, Sylvia Richardson 1; 1 Department of Epidemiology and Public Health, Imperial College London, UK; 2 Department of Statistics, Oxford Center for Gene Function, University of Oxford, UK. Keywords: Bayesian, variable selection, MCMC, large p, small n, structured dependence. In large-scale genomic applications vast numbers of markers or genes are scanned to find a few candidates which are linked to a particular phenotype. Statistically, this is a variable selection problem in the "large p, small n" situation where many more variables than samples are available. An additional feature is the complex dependence structure which is often observed among the markers/genes due to linkage disequilibrium or their joint involvement in biological processes. Bayesian variable selection methods using indicator variables are well suited to the problem. Binary phenotypes like disease status are common, and both Bayesian probit and logistic regression can be applied in this context. We argue that logistic regression models are easier both to tune and to interpret than probit models, and we implement the approach of Holmes & Held (2006). Because the model space is vast, MCMC methods are used as stochastic search algorithms with the aim of quickly finding regions of high posterior probability. In a trade-off between fast-updating but slow-moving single-gene Metropolis-Hastings samplers and computationally expensive full Gibbs sampling, we propose to employ the dependence structure among the genes/markers to help decide which variables to update together. Also, parallel tempering methods are used to aid bold moves and help avoid getting trapped in local optima. Mixing and convergence of the resulting Markov chains are evaluated and compared to standard samplers in both a simulation study and an application to a gene expression data set. Reference: Holmes, C. C. & Held, L.
(2006) Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis 1, 145-168. Dawn Teare 1; 1 MMGE, University of Sheffield, UK. Keywords: CNP, family-based analysis, MCMC. Evidence is accumulating that segmental copy number polymorphisms (CNPs) may represent a significant portion of human genetic variation. These highly polymorphic systems require handling as phenotypes rather than co-dominant markers, placing new demands on family-based analyses. We present an integrated approach to meet these challenges in the form of a graphical model, where the underlying discrete CNP phenotype is inferred from the (single or replicate) quantitative measure within the analysis, whilst assuming an allele-based system segregating through the pedigree. [source]
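Several of the abstracts above start from the Armitage trend statistic and its Genomic Control (GC) adjustment. The sketch below shows the standard GC recipe of Devlin & Roeder (1999) applied to simulated statistics; the inflation level, marker count and truncation of lambda at 1 are illustrative conventions, not details taken from any of the abstracts.

```python
# Illustrative sketch only: Genomic Control adjustment of per-marker
# Armitage trend statistics. Statistic values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Suppose chi2_stats holds one trend statistic per marker, inflated by
# population substructure (simulated here with a 1.3x inflation).
m = 5000
inflation = 1.3
chi2_stats = inflation * rng.chisquare(df=1, size=m)

# GC inflation factor: observed median divided by the expected median of
# a 1-df chi-squared variable (~0.456).
lam = np.median(chi2_stats) / stats.chi2.ppf(0.5, df=1)
lam = max(lam, 1.0)                    # common convention: never deflate
adjusted = chi2_stats / lam
p_adjusted = stats.chi2.sf(adjusted, df=1)

print(f"estimated lambda = {lam:.2f}")
print(f"smallest adjusted p-value = {p_adjusted.min():.2e}")
```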


Genetic covariation in production traits of sub-adult black bream Acanthopagrus butcheri after grow-out

AQUACULTURE RESEARCH, Issue 11 2005
Robert G Doupé
Abstract Predicting the suitability and reliability of traits associated with juvenile growth as indirect selection criteria for choosing future broodstock requires accurate and repeatable estimates of genetic (co)variation for growth traits at different ages. We compared juvenile wet weight of black bream Acanthopagrus butcheri (Munro) at 6 months of age with wet weight, dressed weight, fillet yield and gonad weight in tagged individuals at 18 months of age, following 12 months of farm grow-out. Fish survival and tag retention were high, and there was significant among-family variation for all traits. The phenotypic correlations among wet weight, dressed weight and fillet yield at 18 months of age were very high (0.93-0.97) and similar to their genetic correlations (0.96). Importantly, the phenotypic correlations between wet weight at 6 months and wet weight, dressed weight and fillet yield at 18 months were high (0.63-0.65), and so too were their genetic correlations (0.66-0.73), indicating the potential for using wet weight in the hatchery as a selection criterion for improved weight and meat yield of fish at harvest. Gonad weight shared little or no phenotypic or genetic correlation with these other traits, suggesting that selection for faster-growing fish will not affect fecundity or sexual maturation rate. It appears, however, that cultured black bream do become sexually mature more rapidly than wild fish, as 78% of all fish harvested in this study had developing or mature gonads, whereas less than 50% of fish in wild populations are reproductively mature by the same age. Precocious sexual development may lead to uncontrolled spawning in grow-out ponds and a potential loss of selection gains. [source]
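As a rough illustration of the distinction the abstract draws between phenotypic and genetic correlations, the sketch below computes an ordinary phenotypic correlation across individual fish and a family-mean correlation, which is sometimes used as a crude proxy for the genetic correlation when families are large. The data frame, trait values and family structure are invented; the study's actual (co)variance estimation would use a proper quantitative-genetic model.

```python
# Hedged sketch (not the study's analysis): phenotypic vs family-mean
# correlation between wet weight at 6 and 18 months. Data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "family":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "wt_6mo":  [12.1, 13.4, 11.8, 15.2, 14.9, 16.0, 10.5, 11.1, 10.9],
    "wt_18mo": [220, 231, 215, 262, 255, 270, 198, 205, 201],
})

# Phenotypic correlation: across individual fish.
r_phenotypic = df["wt_6mo"].corr(df["wt_18mo"])

# Family-mean correlation: across family averages (an upward-biased proxy
# for the genetic correlation unless family sizes are large).
fam_means = df.groupby("family")[["wt_6mo", "wt_18mo"]].mean()
r_family = fam_means["wt_6mo"].corr(fam_means["wt_18mo"])

print(f"phenotypic r = {r_phenotypic:.2f}, family-mean r = {r_family:.2f}")
```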


Uncovering Symptom Progression History from Disease Registry Data with Application to Young Cystic Fibrosis Patients

BIOMETRICS, Issue 2 2010
Jun Yan
Summary The growing availability of various disease registry data has brought precious opportunities to epidemiologists to understand the natural history of the registered diseases. It also presents challenges to the traditional data analysis techniques because of complicated censoring/truncation schemes and temporal dynamics of covariate influences. In a case study of the Cystic Fibrosis Foundation Patient Registry data, we propose analyses of progressive symptoms using temporal process regressions, as an alternative to the commonly employed proportional hazards models. Two endpoints are considered, the prevalence of ever positive and currently positive for Pseudomonas aeruginosa (PA) infection in the lungs, which capture different aspects of the disease process. The analysis of ever PA positive via a time-varying coefficient model demonstrates the lack of fit, as well as the potential loss of information, in the standard proportional hazards analysis. The analysis of currently PA positive yields results that are clinically meaningful and have not previously been reported in the cystic fibrosis literature. Our analyses demonstrate that prenatal/neonatal screening results in lower prevalence of PA infection compared to traditional diagnosis via signs and symptoms, but this benefit attenuates with age. Calendar year of diagnosis also affects the risk of PA infection; patients diagnosed in more recent cohorts show higher prevalence of ever PA positive but lower prevalence of currently PA positive. [source]
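The contrast the abstract draws between a time-varying effect and a single proportional-hazards coefficient can be illustrated crudely by estimating prevalence within age windows for each diagnosis group, which is the kind of age-indexed quantity a temporal process regression models directly. Everything below, including the simulated outcome that mimics the reported attenuation with age, is hypothetical and is not derived from the registry data.

```python
# Rough illustration only (not the registry analysis): prevalence of
# currently PA positive by age and diagnosis mode. Data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "age": rng.uniform(0, 10, n),            # years
    "screened": rng.binomial(1, 0.4, n),     # 1 = prenatal/neonatal screening
})
# Simulated outcome: screening lowers prevalence, benefit shrinks with age.
p = 0.2 + 0.05 * df["age"] - 0.15 * df["screened"] * np.exp(-df["age"] / 4)
df["pa_positive"] = rng.binomial(1, p.clip(0.01, 0.99))

for a in np.arange(1, 10):
    window = df[(df["age"] >= a - 0.5) & (df["age"] < a + 0.5)]
    prev = window.groupby("screened")["pa_positive"].mean()
    diff = prev.get(1, np.nan) - prev.get(0, np.nan)
    print(f"age {a}: prevalence difference (screened - symptomatic) = {diff:+.2f}")
```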


Transformative Knowledge Transfer Through Empowering and Paying Community Researchers

BIOTROPICA, Issue 5 2009
Stephen T. Garnett
ABSTRACT Environmental research is often conducted independently of the community in which the environment is situated, with transfer of results into policy and on-ground action occurring independently of the community's interests or aspirations. Increasingly, the need for greater community involvement in the research process has been recognized. For community members, however, such engagement usually involves trade-offs. While it is often assumed that community members should participate voluntarily because they will gain from the research, any benefits from knowledge, understanding and a capacity to influence the research have to be offset against time and potential loss of unremunerated intellectual property. We argue, using case studies from tropical Australia and Africa, that a more effective means of engagement and knowledge transfer is training and remuneration of community members as coresearchers. This engagement is much more than payment for labor; it is investment in local intellectual property and requires researcher humility, power-sharing and recognition that access to research funding provides no moral or intellectual authority. Further, we argue that, for effective adoption of research results, community members need to be part of negotiated agreements on the initial nature of the research to ensure it answers questions of genuine local relevance and that local researchers have the capacity to place locally conducted research into a wider context. We argue that immediate rewards for involvement not only secure engagement but, where appropriate, are likely to lead to effective implementation of research results, enhanced local capacity and greater equity in intellectual power-sharing. [source]


Influence of implant diameter on surrounding bone

CLINICAL ORAL IMPLANTS RESEARCH, Issue 5 2007
Jeff Brink
Abstract Objectives: Implant osseointegration is dependent upon various factors, such as bone quality and type of implant surface. It is also subject to adaptation in response to changes in bone metabolism or transmission of masticatory forces. Understanding of long-term physiologic adjustment is critical to prevention of potential loss of osseointegration, especially because excessive occlusal forces lead to failure. To address this issue, wide-diameter implants were introduced in part with the hope that a greater total implant surface would offer mechanical resistance. Yet, there is little evidence that variation in diameter translates into a different bone response in the implant vicinity. Therefore, this study aimed at comparing the impact of implant diameter on surrounding bone. Material and methods: Twenty standard (3.75 mm) and 20 wide (5 mm) implants were placed using an animal model. Histomorphometry was performed to establish initial bone density (IBD), bone-to-implant contact (BIC) and adjacent bone density (ABD). Results: BIC was 71% and 73%, whereas ABD was 65% and 52%, for standard and wide implants, respectively. These differences were not statistically significant (P>0.05). Correlation with IBD was then investigated. BIC was not correlated with IBD. ABD was not correlated with IBD for standard implants (r2=0.126), but it was for wide implants (r2=0.82). In addition, a 1 : 1 ratio between IBD and ABD was found for wide implants. It can be concluded, within the limits of this study, that ABD may be influenced by implant diameter, perhaps due to differences in force dissipation. [source]


A kinetic perspective on extracellular electron transfer by anode-respiring bacteria

FEMS MICROBIOLOGY REVIEWS, Issue 1 2010
César I. Torres
Abstract In microbial fuel cells and electrolysis cells (MXCs), anode-respiring bacteria (ARB) oxidize organic substrates to produce electrical current. In order to develop an electrical current, ARB must transfer electrons to a solid anode through extracellular electron transfer (EET). ARB use various EET mechanisms to transfer electrons to the anode, including direct contact through outer-membrane proteins, diffusion of soluble electron shuttles, and electron transport through solid components of the extracellular biofilm matrix. In this review, we perform a novel kinetic analysis of each EET mechanism by analyzing the results available in the literature. Our goal is to evaluate how well each EET mechanism can produce a high current density (>10 A m-2) without a large anode potential loss (less than a few hundred millivolts), which are feasibility goals of MXCs. Direct contact of ARB to the anode cannot achieve high current densities due to the limited number of cells that can come in direct contact with the anode. Slow diffusive flux of electron shuttles at commonly observed concentrations limits current generation and results in high potential losses, as has been observed experimentally. Only electron transport through a solid conductive matrix can explain observations of high current densities and low anode potential losses. Thus, a study of the biological components that create a solid conductive matrix is of critical importance for understanding the function of ARB. [source]
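The review's argument that shuttle diffusion cannot sustain high current densities amounts to a flux calculation. A back-of-the-envelope version using Fick's law, j = n F D C / delta, is sketched below; the shuttle concentration, diffusion coefficient, electron number and boundary-layer thickness are assumed values chosen only to show the order of magnitude.

```python
# Back-of-the-envelope sketch (illustrative numbers only): the maximum
# current density sustainable by diffusion of a soluble electron shuttle.
F = 96485.0          # C per mol of electrons (Faraday constant)
n = 2                # electrons carried per shuttle molecule (assumed)
D = 4.5e-10          # shuttle diffusion coefficient, m^2/s (assumed)
C = 10e-6 * 1000     # bulk shuttle concentration: 10 uM -> mol/m^3
delta = 50e-6        # diffusion layer thickness, m (assumed 50 um)

j = n * F * D * C / delta          # A/m^2
print(f"diffusion-limited current density ~ {j:.2f} A/m^2")
# With these assumed values j is well below the ~10 A/m^2 feasibility
# target quoted in the abstract, consistent with its conclusion that
# shuttle diffusion alone cannot explain high current densities.
```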


Current and Future Trends of Climatic Extremes in Switzerland

GEOGRAPHY COMPASS (ELECTRONIC), Issue 4 2007
Martin Beniston
This article provides an overview of extreme climatic events, a feature of both current and future climate that requires full understanding if such events are to be assessed in terms of social and economic costs. A review is made of the types of events that are important in mid-latitudes, with examples taken from the heat waves, floods and wind-storms that have affected Switzerland during the twentieth century. Regional climate model results are also presented for a scenario simulation conducted over Europe. These simulations suggest that there may be significant shifts in the frequency and intensity of many forms of extremes as a warmer global climate progressively replaces current climate. In view of the potential losses in human, economic and environmental terms, extreme events and their future evolution need to be carefully assessed in order to formulate appropriate adaptation strategies aimed at minimizing the negative impacts that extremes are capable of generating. [source]


The importance of rapid, disturbance-induced losses in carbon management and sequestration

GLOBAL ECOLOGY, Issue 1 2002
David D. Breshears
Abstract Management of terrestrial carbon fluxes is being proposed as a means of increasing the amount of carbon sequestered in the terrestrial biosphere. This approach is generally viewed only as an interim strategy for the coming decades while other longer-term strategies are developed and implemented, the most important being the direct reduction of carbon emissions. We are concerned that the potential for rapid, disturbance-induced losses may be much greater than is currently appreciated, especially by the decision-making community. Here we wish to: (1) highlight the complex and threshold-like nature of disturbances, such as fire and drought, as well as the erosion associated with each, that could lead to carbon losses; (2) note the global extent of ecosystems that are at risk of such disturbance-induced carbon losses; and (3) call for increased consideration of and research on the mechanisms by which large, rapid disturbance-induced losses of terrestrial carbon could occur. Our lack of ability as a scientific community to predict such ecosystem dynamics is precluding the effective incorporation of these processes into strategies and policies related to carbon management and sequestration. Consequently, scientists need to do more to improve quantification of these potential losses and to integrate them into sound, sustainable policy options. [source]


Estimating the effectiveness of a rotational irrigation delivery system: A case study from Pakistan

IRRIGATION AND DRAINAGE, Issue 3 2010
Noor ul Hassan Zardari
Keywords: warabandi; water allocation; Indus Basin; Pakistan. Abstract In this study, the basic principles of the rotational irrigation water delivery system of Pakistan (the warabandi) and the performance of the warabandi system under current socio-economic conditions have been investigated through a survey of 154 farmers located on five watercourses of the lower Indus River Basin. It is shown that irrigation water allocation based on very limited criteria does not give farmers much incentive to improve agricultural income. The survey results also suggest that the productivity of limited irrigation water cannot be maximized under the warabandi system. We therefore suggest that the basic principles of the warabandi system should be revised to suit current socio-economic conditions. We propose that the existence or non-existence of fresh groundwater resources, along with other critical variables, should be taken into consideration when making canal water allocation decisions. A framework to allow distribution equity and efficiency in water allocations, considering factors such as the gross area of a tertiary canal, the sensitivity of the crop growth stage to water shortage, crop value, bias of allocation towards the most water-use-efficient areas, the potential losses from water deficiency, etc., should be developed as a tool to improve water productivity for Pakistan and for individual farmers. The contribution of groundwater to farmers' income from agriculture and the economic value of irrigation water have also been estimated. Copyright © 2009 John Wiley & Sons, Ltd. [source]
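The multi-criteria allocation framework proposed in the abstract could take many forms; one minimal sketch is to score each tertiary canal on the listed criteria and divide the available supply in proportion to the weighted scores. The watercourses, criterion values, weights and supply volume below are entirely hypothetical.

```python
# Hypothetical sketch of a weighted multi-criteria canal-water allocation.
watercourses = {
    #  name      gross_area  crop_value  stage_sensitivity  groundwater_access
    "WC-1": dict(area=320.0, value=0.8, sensitivity=0.9, groundwater=0.2),
    "WC-2": dict(area=450.0, value=0.6, sensitivity=0.5, groundwater=0.8),
    "WC-3": dict(area=280.0, value=0.9, sensitivity=0.7, groundwater=0.1),
}
weights = dict(area=0.3, value=0.3, sensitivity=0.3, groundwater=-0.1)
total_supply = 100.0   # arbitrary units of canal water per rotation

def score(wc):
    # Normalise area so the criteria are on comparable scales.
    max_area = max(w["area"] for w in watercourses.values())
    return (weights["area"] * wc["area"] / max_area
            + weights["value"] * wc["value"]
            + weights["sensitivity"] * wc["sensitivity"]
            + weights["groundwater"] * wc["groundwater"])

scores = {name: max(score(wc), 0.0) for name, wc in watercourses.items()}
total_score = sum(scores.values())
allocation = {name: total_supply * s / total_score for name, s in scores.items()}
for name, share in allocation.items():
    print(f"{name}: {share:.1f} units")
```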


Prospect theory analysis of guessing in multiple choice tests

JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 4 2002
Yoella Bereby-Meyer
Abstract The guessing of answers in multiple choice tests adds random error to the variance of the test scores, lowering their reliability. Formula scoring rules that penalize for wrong guesses are frequently used to solve this problem. This paper uses prospect theory to analyze scoring rules from a decision-making perspective and focuses on the effects of framing on the tendency to guess. In three experiments participants were presented with hypothetical test situations and were asked to indicate the degree of certainty that they thought was required for them to answer a question. In accordance with the framing hypothesis, participants tended to guess more when they anticipated a low grade and therefore considered themselves to be in the loss domain, or when the scoring rule caused the situation to be framed as entailing potential losses. The last experiment replicated these results with a task that resembles an actual test. Copyright © 2002 John Wiley & Sons, Ltd. [source]
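The interplay between formula scoring and loss framing can be made concrete with a small calculation: under the common rule of +1 for a correct answer and -1/(k-1) for a wrong one, a blind guess has zero expected score, yet its prospect-theory value is negative once losses loom larger than gains. The value-function parameters below are the frequently cited Tversky-Kahneman estimates and are used here only as an illustrative assumption, not as values from this paper.

```python
# Illustrative sketch: expected formula score of a blind guess and its
# prospect-theory value. Parameters are assumed, not taken from the paper.
k = 4                      # answer options
p = 1.0 / k                # probability a blind guess is correct
gain, loss = 1.0, -1.0 / (k - 1)

expected_score = p * gain + (1 - p) * loss
print(f"expected formula score of a blind guess: {expected_score:.3f}")

alpha = beta = 0.88        # curvature for gains/losses (assumed)
lam = 2.25                 # loss-aversion coefficient (assumed)

def value(x):
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Prospect-theory value of guessing, taking probabilities at face value
# (probability weighting is ignored for simplicity).
pt_value = p * value(gain) + (1 - p) * value(loss)
print(f"prospect-theory value of guessing: {pt_value:.3f}")
# The negative value illustrates why a penalty that makes guessing fair
# in expectation can still feel like a loss and deter guessing.
```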


Securitization, Insurance, and Reinsurance

JOURNAL OF RISK AND INSURANCE, Issue 3 2009
J. David Cummins
This article considers strengths and weaknesses of reinsurance and securitization in managing insurable risks. Traditional reinsurance operates efficiently in managing relatively small, uncorrelated risks and in facilitating efficient information sharing between cedants and reinsurers. However, when the magnitude of potential losses and the correlation of risks increase, the efficiency of the reinsurance model breaks down, and the cost of capital may become uneconomical. At this juncture, securitization has a role to play by passing the risks along to broader capital markets. Securitization also serves as a complement for reinsurance in other ways such as facilitating regulatory arbitrage and collateralizing low-frequency risks. [source]


The effects of one night of sleep deprivation on known-risk and ambiguous-risk decisions

JOURNAL OF SLEEP RESEARCH, Issue 3 2007
BENJAMIN S. MCKENNA
Summary Sleep deprivation has been shown to alter decision-making abilities. The majority of research has utilized fairly complex tasks with the goal of emulating 'real-life' scenarios. Here, we use a Lottery Choice Task (LCT) which assesses risk and ambiguity preference for both decisions involving potential gains and those involving potential losses. We hypothesized that one night of sleep deprivation would make subjects more risk seeking in both gains and losses. Both a control group and an experimental group took the LCT on two consecutive days, with an intervening night of either sleep or sleep deprivation. The control group demonstrated that there was no effect of repeated administration of the LCT. For the experimental group, results showed significant interactions of night (normal sleep versus total sleep deprivation, TSD) by frame (gains versus losses), which demonstrate that following as little as 23 h of TSD, the prototypical response to decisions involving risk is altered. Following TSD, subjects were willing to take more risk than they ordinarily would when they were considering a gain, but less risk than they ordinarily would when they were considering a loss. For ambiguity preferences, there seems to be no direct effect of TSD. These findings suggest that, overall, risk preference is moderated by TSD, but whether an individual is willing to take more or less risk than when well-rested depends on whether the decision is framed in terms of gains or losses. [source]


Principal Component Value at Risk

MATHEMATICAL FINANCE, Issue 1 2002
R. BRUMMELHUIS
Value at risk (VaR) is an industry standard for monitoring financial risk in an investment portfolio. It measures potential losses at a given confidence level. The implementation, calculation, and interpretation of VaR contain a wealth of mathematical issues that are not fully understood. In this paper we present a methodology for an approximation to value at risk that is based on the principal components of a sensitivity-adjusted covariance matrix. The result is an explicit expression in terms of portfolio deltas, gammas, and the variance/covariance matrix. It can be viewed as a nonlinear extension of the linear model given by the delta-normal VaR or RiskMetrics (J.P. Morgan, 1996). [source]
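A schematic version of the principal-component idea is sketched below for the linear (delta-normal) part only: the portfolio variance delta' Sigma delta is decomposed over the eigenvectors of the covariance matrix and truncated to the leading components. The covariance matrix, deltas and confidence level are assumed inputs, and the paper's gamma (second-order) terms are omitted.

```python
# Schematic sketch (assumed inputs, delta-only): delta-normal VaR and a
# principal-component approximation from the leading eigenvectors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

A = rng.normal(size=(5, 5))
Sigma = A @ A.T / 5                 # a positive-definite covariance matrix
delta = rng.normal(size=5)          # portfolio sensitivities (deltas)

z = stats.norm.ppf(0.99)            # 99% confidence level

# Full delta-normal VaR.
var_full = z * np.sqrt(delta @ Sigma @ delta)

# Principal-component approximation: keep the k largest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(Sigma)     # ascending order
k = 2
top = slice(-k, None)
loadings = eigvecs[:, top].T @ delta         # delta projected on top PCs
var_pc = z * np.sqrt(np.sum(eigvals[top] * loadings ** 2))

print(f"delta-normal VaR:            {var_full:.3f}")
print(f"{k}-component approximation: {var_pc:.3f}")
```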


Loss of genetic diversity in sea otters (Enhydra lutris) associated with the fur trade of the 18th and 19th centuries

MOLECULAR ECOLOGY, Issue 10 2002
Shawn Larson
Abstract Sea otter (Enhydra lutris) populations experienced widespread reduction and extirpation due to the fur trade of the 18th and 19th centuries. We examined genetic variation at four microsatellite markers and the mitochondrial DNA (mtDNA) d-loop in one pre-fur-trade population and compared it to five modern populations to determine potential losses in genetic variation. While mtDNA sequence variability was low within both modern and extinct populations, analysis of microsatellite allelic data revealed that the pre-fur-trade population had significantly more variation than all the extant sea otter populations. Reduced genetic variation may lead to inbreeding depression, and we believe sea otter populations should be closely monitored for potential associated negative effects. [source]
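A standard summary behind comparisons like the one in this abstract is expected heterozygosity at a microsatellite locus. The toy sketch below computes it from allele calls for two hypothetical samples; the allele data are invented solely to show how fewer, more skewed alleles translate into a lower value.

```python
# Toy sketch (allele counts invented): expected heterozygosity at a locus.
from collections import Counter

def expected_heterozygosity(alleles):
    """He = 1 - sum(p_i^2) over allele frequencies p_i."""
    counts = Counter(alleles)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical allele calls (repeat lengths) for one locus in two samples.
prefur = [148, 150, 150, 152, 154, 148, 156, 152, 150, 158]
modern = [150, 150, 150, 152, 150, 152, 150, 150, 152, 150]

print(f"pre-fur-trade He = {expected_heterozygosity(prefur):.2f}")
print(f"modern        He = {expected_heterozygosity(modern):.2f}")
# Fewer alleles and more skewed frequencies in the modern sample give a
# lower He, mirroring the reported loss of microsatellite variation.
```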


THE EFFECTS OF ITQ IMPLEMENTATION: A DYNAMIC APPROACH

NATURAL RESOURCE MODELING, Issue 4 2000
LEE G. ANDERSON
ABSTRACT. This paper investigates the intertemporal effects of introducing Individual Transferable Quota, ITQ, fishery management programs on stock size, fleet size and composition, and returns to quota holders and to vessel operators. Theoretical analysis is conducted using a specific version of a general dynamic model of a regulated fishery. It is demonstrated that the effects will differ depending upon the prevailing regulation program, current stock size, and existing fleet size, composition and mobility and upon how the stock and fleet change over time after the switch to ITQs. The paper expands upon previous works by modeling the dynamics of change in fleet and stock size and by allowing for changes in the TAC as stock size changes, by comparing ITQs to different regulations, and by allowing the status quo before ITQ implementation to be something other than a bioeconomic equilibrium. Specific cases are analyzed using a simulation model. The analysis shows that the annual return per unit harvest to quota owners can increase or decrease over the transition period due to counteracting effects of changes in stock and fleet size. With ITQs denominated as a percentage of the TAC, the current annual value of a quota share depends upon the annual return per unit of harvest and the annual amount of harvest rights. Because the per unit value can increase or decrease over time, it is also possible that the total value can do the same. Distribution effects are also studied and it is shown that while the gains from quota share received are the present value of a potentially infinite stream of returns, potential losses are the present value of a finite stream, the length of which depends upon the remaining life of the vessel and the expected time it will continue to operate. [source]
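The asymmetry described in the final sentences, a potentially infinite stream of returns for quota holders versus a finite stream of losses bounded by a vessel's remaining life, is a simple present-value calculation. The numbers below (annual return, discount rate, remaining vessel life) are assumptions chosen only to illustrate the comparison.

```python
# Worked illustration (all numbers assumed): perpetuity value of a quota
# share versus the finite-horizon loss of a displaced vessel.
annual_return = 50_000.0    # assumed annual net return per quota share
discount_rate = 0.07
vessel_years_left = 8       # assumed remaining operating life

# Present value of a (potentially) infinite stream of returns.
pv_quota_gain = annual_return / discount_rate

# Present value of the same annual amount over a finite horizon.
pv_vessel_loss = (annual_return
                  * (1 - (1 + discount_rate) ** -vessel_years_left)
                  / discount_rate)

print(f"PV of quota gain (perpetuity): {pv_quota_gain:,.0f}")
print(f"PV of vessel loss ({vessel_years_left} years): {pv_vessel_loss:,.0f}")
```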


Proteomic analysis of high-density lipoprotein

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 2 2006
Farhad Rezaee Dr.
Abstract Plasma lipoproteins, such as high-density lipoprotein (HDL), can serve as carriers for a wide range of proteins that are involved in processes such as lipid metabolism, thrombosis, inflammation and atherosclerosis. The identification of HDL-associated proteins is essential with regard to understanding these processes at the molecular level. In this study, a combination of proteomic approaches including 1-DE and 2-DE MALDI-TOF, isotope-coded affinity tag and Western blot analysis was employed to identify proteins associated with human HDL. To minimize potential losses of HDL-associated proteins during isolation, a one-step ultracentrifugation technique was applied and the quality of the purified HDL was confirmed by nephelometry, high-performance gel chromatography, and Western blot analysis. MS analysis revealed the presence of 56 HDL-associated proteins, including all known apolipoproteins and lipid transport proteins. Furthermore, proteins involved in hemostasis and thrombosis and in the immune and complement system were found. In addition, growth factors, receptors, hormone-associated proteins and many other proteins were found to be associated with HDL. Our approach thus resulted in the identification of a large number of proteins associated with HDL. The combination of proteomic technologies proved to be a powerful and comprehensive tool for the identification of proteins on HDL. [source]


A Landscape Approach for Ecologically Based Management of Great Basin Shrublands

RESTORATION ECOLOGY, Issue 5 2009
Michael J. Wisdom
Abstract Native shrublands dominate the Great Basin of western North America, and most of these communities are at moderate or high risk of loss from non-native grass invasion and woodland expansion. Landscape-scale management based on differences in the ecological resistance and resilience of shrublands can reduce these risks. We demonstrate this approach with an example that focuses on maintenance of sagebrush (Artemisia spp.) habitats for Greater Sage-grouse (Centrocercus urophasianus), a bird species threatened by habitat loss. The approach involves five steps: (1) identify the undesired disturbance processes affecting each shrubland community type; (2) characterize the resistance and resilience of each shrubland type in relation to the undesired processes; (3) assess potential losses of shrublands based on their resistance, resilience, and associated risk; (4) use knowledge from these steps to design a landscape strategy to mitigate the risk of shrubland loss; and (5) implement the strategy with a comprehensive set of active and passive management prescriptions. Results indicate that large areas of the Great Basin currently provide Sage-grouse habitats, but many areas of sagebrush with low resistance and resilience may be lost to continued woodland expansion or invasion by non-native annual grasses. Preventing these losses will require landscape strategies that prioritize management areas based on efficient use of limited resources to maintain the largest shrubland areas over time. Landscape-scale approaches, based on concepts of resistance and resilience, provide an essential framework for successful management of arid and semiarid shrublands and their native species. [source]
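Steps 2-4 of the approach amount to ranking candidate areas by their likelihood of retaining sagebrush. A deliberately simplified scoring sketch is shown below; the management units, resistance and resilience scores, and equal weighting are hypothetical and stand in for the spatial risk assessment the authors describe.

```python
# Hypothetical prioritization sketch: rank management units by shrubland
# area weighted by a simple persistence score. All values are invented.
areas = [
    # name, sagebrush_km2, resistance (0-1), resilience (0-1)
    ("Unit A", 1200.0, 0.7, 0.6),
    ("Unit B",  800.0, 0.3, 0.2),
    ("Unit C", 1500.0, 0.5, 0.4),
    ("Unit D",  400.0, 0.9, 0.8),
]

def priority(sagebrush_km2, resistance, resilience):
    # Weight area by an average persistence score: units with higher
    # resistance and resilience are more likely to retain sagebrush.
    persistence = 0.5 * resistance + 0.5 * resilience
    return sagebrush_km2 * persistence

ranked = sorted(areas, key=lambda a: priority(*a[1:]), reverse=True)
for name, km2, resist, resil in ranked:
    print(f"{name}: priority = {priority(km2, resist, resil):.0f}")
```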