Better Strategies
Selected Abstracts

Recording previous adverse drug reactions: a gap in the system
BRITISH JOURNAL OF CLINICAL PHARMACOLOGY, Issue 6 2001
Gillian M. Shenfield
Aims: To measure the accuracy of recording of previous adverse drug reaction (ADR) history in patients admitted to a teaching hospital before and after an education programme. Methods: A one-month survey of patients on one medical and one surgical ward, repeated after a one-month education programme. Patients answered a questionnaire about previous ADRs, and this information was compared with that in all relevant sections of their medical records and medication charts. Results: Of 117 patients at baseline, 50 had a total of 81 previous ADRs. Only 75% were recorded on medication charts, and 57% and 64%, respectively, in medical and nursing notes. In the post-education survey of 124 patients, 56 had 105 previous ADRs; 85% were recorded on medication charts, and 64% and 70% in medical and nursing records. These differences were not significant. Serious ADRs were also poorly recorded at baseline but, due to intervention by ward pharmacists, their recording on medication charts improved significantly after education. Pharmacists also significantly improved the quality of description of previous ADRs in both parts of the study. Conclusions: The previous ADR history obtainable from hospital patients is poorly recorded in medical records, and an intensive education programme produced a significant change in recording only by ward pharmacists. Better strategies are needed to improve this essential aspect of history taking. [source]

Dyslipidemia in patients with angiographically confirmed coronary artery disease: an opportunity for improvement
CLINICAL CARDIOLOGY, Issue 10 2004
Sanjaya Khanal M.D.
Abstract Background: There are few data about lipid profiles in unselected patients with angiographically confirmed coronary artery disease (CAD). Hypothesis: The study was undertaken to investigate the demographics, clinical characteristics, angiographic findings, and baseline lipid status of 1,000 consecutive unselected patients with angiographically confirmed CAD. Methods: Between April 2001 and July 2002, we obtained informed consent and prospectively collected clinical characteristics, fasting lipid profiles, and angiographic results from 1,000 sequential patients with CAD confirmed by angiography. Results: Of these patients with confirmed CAD, 78% had a history of hyperlipidemia. Although 62% were receiving lipid-lowering therapy, only 46% had reached the low-density lipoprotein target of < 100 mg/dl, and only 20% had achieved all four National Cholesterol Education Program-recommended lipid targets. Conclusions: Better strategies to ensure optimal lipid levels are required. One such method, using computerized workflow, is being evaluated in this population. [source]
Placebo-corrected efficacy of modern antiepileptic drugs for refractory epilepsy: Systematic review and meta-analysis
EPILEPSIA, Issue 1 2010
Stefan Beyenburg
Summary: Although adjunctive treatment with modern antiepileptic drugs (AEDs) is standard care in refractory epilepsy, it is unclear how much of the effect can be attributed directly to the AEDs and how much to the beneficial changes seen with placebo. Therefore, we performed a systematic review and meta-analysis of the evidence to determine the placebo-corrected net efficacy of adjunctive treatment with modern AEDs on the market for refractory epilepsy. Of 317 potentially eligible articles reviewed in full text, 124 (39%) fulfilled eligibility criteria. After excluding 69 publications, 55 publications of 54 studies in 11,106 adults and children with refractory epilepsy form the basis of evidence. The overall weighted pooled risk difference in favor of AEDs over placebo for seizure freedom in the total sample of adults and children was 6% [95% confidence interval (CI) 4-8, z = 6.47, p < 0.001], and 21% (95% CI 19-24, z = 17.13, p < 0.001) for 50% seizure reduction. Although the presence of moderate heterogeneity may reduce the validity of the results and limit generalizations from the findings, we conclude that the placebo-corrected efficacy of adjunctive treatment with modern AEDs is disappointingly small, and we suggest that better strategies of finding drugs are needed for refractory epilepsy, which is a major public health problem. [source]
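To make the pooling behind a result like the one above concrete, here is a minimal Python sketch of fixed-effect, inverse-variance pooling of per-study risk differences. The per-study counts are invented for illustration; they are not the trials behind the review.

```python
import math

# Hypothetical per-study data: (events_drug, n_drug, events_placebo, n_placebo).
studies = [
    (12, 150, 4, 148),
    (20, 201, 7, 99),
    (9, 96, 3, 101),
]

num, den = 0.0, 0.0
for e1, n1, e0, n0 in studies:
    p1, p0 = e1 / n1, e0 / n0
    rd = p1 - p0                                    # risk difference
    var = p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0   # its approximate variance
    w = 1.0 / var                                   # inverse-variance weight
    num += w * rd
    den += w

pooled = num / den
se = math.sqrt(1.0 / den)
z = pooled / se
print(f"pooled RD = {pooled:.3f}, "
      f"95% CI ({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}), z = {z:.2f}")
```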
New concepts in bilirubin encephalopathy
EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 11 2003
J. D. Ostrow
Abstract: Revised concepts of bilirubin encephalopathy have been revealed by studies of bilirubin toxicity in cultured CNS cells and in congenitally jaundiced Gunn rats. Bilirubin neurotoxicity is related to the unbound (free) fraction of unconjugated bilirubin (Bf), of which the dominant species at physiological pH is the protonated diacid, which can passively diffuse across cell membranes. As the binding affinity of plasma albumin for bilirubin decreases strikingly as albumin concentration increases, previously reported Bf values were underestimated. Newer diagnostic tests can detect reversible neurotoxicity before permanent damage occurs from precipitation of bilirubin (kernicterus). Early toxicity can occur at Bf only modestly above aqueous saturation and affects astrocytes and neurons, causing mitochondrial damage, resulting in impaired energy metabolism and apoptosis, plus cell-membrane perturbation, which causes enzyme leakage and hampers transport of neurotransmitters. The concentrations of unbound bilirubin in the cerebrospinal fluid and CNS cells are probably limited mainly by active export of bilirubin back into plasma, mediated by ABC transporters present in the brain capillary endothelium and choroid plexus epithelium. Intracellular bilirubin levels may be diminished also by oxidation, conjugation and binding to cytosolic proteins. These new concepts may explain the varied susceptibility of neonates to develop encephalopathy at any given plasma bilirubin level and the selective distribution of CNS lesions in bilirubin encephalopathy. They can also suggest better strategies for predicting, preventing and treating this syndrome. [source]

Themes of liver transplantation
HEPATOLOGY, Issue 6 2010
Thomas E. Starzl
Liver transplantation was the product of five interlocking themes. These began in 1958-1959 with canine studies of then-theoretical hepatotrophic molecules in portal venous blood (Theme I) and with the contemporaneous parallel development of liver and multivisceral transplant models (Theme II). Further Theme I investigations showed that insulin was the principal, although not the only, portal hepatotrophic factor. In addition to resolving long-standing controversies about the pathophysiology of portacaval shunt, the hepatotrophic studies blazed new trails in the regulation of liver size, function, and regeneration. They also targeted inborn metabolic errors (e.g., familial hyperlipoproteinemia) whose palliation by portal diversion presaged definitive correction with liver replacement. Clinical use of the Theme II transplant models depended on multiple-drug immunosuppression (Theme III, Immunology), guided by an empirical algorithm of pattern recognition and therapeutic response. Successful liver replacement was first accomplished in 1967 with azathioprine, prednisone, and antilymphoid globulin. With this regimen, the world's longest-surviving liver recipient is now 40 years postoperative. Incremental improvements in survival outcome occurred (Theme IV) when azathioprine was replaced by cyclosporine (1979), which was replaced in turn by tacrolimus (1989). However, the biologic meaning of alloengraftment remained enigmatic until multilineage donor leukocyte microchimerism was discovered in 1992 in long-surviving organ recipients. Seminal mechanisms were then identified (clonal exhaustion-deletion and immune ignorance) that linked organ engraftment and the acquired tolerance of bone marrow transplantation and eventually clarified the relationship of transplantation immunology to the immunology of infections, neoplasms, and autoimmune disorders. With this insight, better strategies of immunosuppression have evolved. As liver and other kinds of organ transplantation became accepted as healthcare standards, the ethical, legal, equity, and other humanism issues of Theme V have been resolved less conclusively than the medical-scientific problems of Themes I-IV. (HEPATOLOGY 2010) [source]
Aβ aggregation and possible implications in Alzheimer's disease pathogenesis
JOURNAL OF CELLULAR AND MOLECULAR MEDICINE, Issue 3 2009
Prashant R. Bharadwaj
Outline: Introduction; Amyloid structure; Mechanism of amyloid aggregation; Aβ: a natively unfolded protein?; Ambiguities in synthetic Aβ studies; Formation of amyloid plaques; Role of Aβ in AD pathogenesis; Conclusion
Abstract: Amyloid β protein (Aβ) has been associated with Alzheimer's disease (AD) because it is a major component of the extracellular plaque found in AD brains. Increased Aβ levels correlate with the cognitive decline observed in AD. Sporadic AD cases are thought to be chiefly associated with lack of Aβ clearance from the brain, unlike familial AD, which shows increased Aβ production. Aβ aggregation leading to deposition is an essential event in AD. However, the factors involved in Aβ aggregation and accumulation in sporadic AD have not been completely characterized. This review summarizes studies that have examined the factors that affect Aβ aggregation and toxicity. By necessity these are studies performed with recombinant-derived or chemically synthesized Aβ. The studies therefore are done not in animals but in cell culture, which includes neuronal cells, other mammalian cells and, in some cases, non-mammalian cells that also appear susceptible to Aβ toxicity. An understanding of Aβ oligomerization may lead to better strategies to prevent AD. [source]

Over-expression of glycerol dehydrogenase and 1,3-propanediol oxidoreductase in Klebsiella pneumoniae and their effects on conversion of glycerol into 1,3-propanediol in a resting cell system
JOURNAL OF CHEMICAL TECHNOLOGY & BIOTECHNOLOGY, Issue 4 2009
Li Zhao
Abstract BACKGROUND: Glycerol dehydrogenase [EC 1.1.1.6] and 1,3-propanediol oxidoreductase [EC 1.1.1.202] have been shown to be two of the key enzymes for glycerol conversion to 1,3-propanediol in Klebsiella pneumoniae under anaerobic conditions. For insight into their significance for 1,3-propanediol production under micro-aerobic conditions, these two enzymes were over-expressed in K. pneumoniae individually, and their effects on conversion of glycerol into 1,3-propanediol in a resting cell system under micro-aerobic conditions were investigated. RESULTS: In the resting cell system, over-expression of 1,3-propanediol oxidoreductase led to faster glycerol conversion and 1,3-propanediol production. After a 12 h conversion process, it improved the yield of 1,3-propanediol by 20.4% (222.1 mmol L-1 versus 184.4 mmol L-1) and enhanced the conversion ratio of glycerol into 1,3-propanediol from 50.8% to 59.8% (mol mol-1). Over-expression of glycerol dehydrogenase in K. pneumoniae had no significant influence on either the 1,3-propanediol yield or the conversion ratio of glycerol into 1,3-propanediol in the resting cell system. CONCLUSION: The results are important for understanding the significance of glycerol dehydrogenase and 1,3-propanediol oxidoreductase in 1,3-propanediol production under micro-aerobic conditions, and for developing better strategies to improve 1,3-propanediol yield. Copyright © 2008 Society of Chemical Industry [source]

Perspectives on professional values among nurses in Taiwan
JOURNAL OF CLINICAL NURSING, Issue 10 2009
Fu-Jin Shih
Aim: The purpose of this study was to identify the most important contemporary professional nursing values for nursing clinicians and educators in Taiwan. Background: Nursing values are constructed by members of political and social systems, including professional nursing organisations and educational institutions. Nurses' personal value systems shape the development of these professional values. An understanding of nurses' perceptions of professional values will enable the profession to examine consistencies with those reflected in existing and emerging educational and practice environments. Design: A qualitative descriptive study was conducted using the focus-group discussion method. Methods: A purposive sample of 300 registered nurses in Taiwan, consisting of 270 nursing clinicians and 30 faculty members, participated in 22 focus-group interviews. Data were analysed using a systematic process of content analysis. Results: Six prominent values related to professional nursing were identified: (a) caring for clients with a humanistic spirit; (b) providing professionally competent and holistic care; (c) fostering growth and discovering the meaning of life; (d) experiencing the 'give-and-take' of caring for others; (e) receiving fair compensation; and (f) raising the public's awareness of health promotion. Four background contexts framed the way participants viewed the appropriation of these values: (a) appraising nursing values through multiple perspectives; (b) acquiring nursing values through self-realisation; (c) recognising nursing values through professional competency and humanistic concerns; and (d) fulfilling nursing values through coexisting self-actualisation. A conceptual framework was developed to represent this phenomenon. Conclusion: The most important professional nursing values according to the perspectives of nurses in Taiwan were identified. These values reflect benefits to society, to nurses themselves and to the interdisciplinary team. Relevance to clinical practice: Nurses' awareness of their own values, and of how these values influence their behaviour, is an essential component of humanistic nursing care. Nursing educators need to develop better strategies for reflection on, and integration of, both personal and professional philosophies and values. [source]
Strategic help in user interfaces for information retrieval
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 5 2002
Giorgio Brajnik
Although no unified definition of the concept of search strategy in information retrieval (IR) exists so far, its importance is manifest: nonexpert users directly interacting with an IR system apply a limited portfolio of simple actions, do not know how to react in critical situations, and often do not even realize that their difficulties are due to strategic problems. A user interface to an IR system should therefore provide some strategic help, focusing the user's attention on strategic issues and providing tools to generate better strategies. Because neither the user nor the system can autonomously solve the information problem, but rather they complement each other, we propose a collaborative coaching approach in which the two partners cooperate: the user retains control of the session and the system provides suggestions. The effectiveness of the approach is demonstrated by a conceptual analysis, a prototype knowledge-based system named FIRE, and its evaluation through informal laboratory experiments. [source]

Long-term effectiveness of diet-plus-exercise interventions vs. diet-only interventions for weight loss: a meta-analysis
OBESITY REVIEWS, Issue 3 2009
T. Wu
Summary: Diet and exercise are two of the commonest strategies to reduce weight. Whether a diet-plus-exercise intervention is more effective for weight loss than a diet-only intervention in the long term has not been conclusively established. The objective of this study was to systematically review the effect of diet-plus-exercise interventions vs. diet-only interventions on both long-term and short-term weight loss. Studies were retrieved by searching MEDLINE and the Cochrane Library (1966 to June 2008). Studies were included if they were randomized controlled trials comparing the effect of diet-plus-exercise interventions vs. diet-only interventions on weight loss for a minimum of 6 months among obese or overweight adults. Eighteen studies met our inclusion criteria. Data were independently extracted by two investigators using a standardized protocol. We found that the overall standardized mean difference between diet-plus-exercise interventions and diet-only interventions at the end of follow-up was -0.25 (95% confidence interval [CI] -0.36 to -0.14), with a P-value for heterogeneity of 0.4. Because there were two outcome measurements, weight (kg) and body mass index (kg m-2), we also stratified the results by weight and body mass index outcome. The pooled weight loss was 1.14 kg (95% CI 0.21 to 2.07) or 0.50 kg m-2 (95% CI 0.21 to 0.79) greater for the diet-plus-exercise group than for the diet-only group. We did not detect significant heterogeneity in either stratum. Even in studies lasting 2 years or longer, diet-plus-exercise interventions provided significantly greater weight loss than diet-only interventions. In summary, a combined diet-plus-exercise programme provided greater long-term weight loss than a diet-only programme. However, both diet-only and diet-plus-exercise programmes are associated with partial weight regain, and future studies should explore better strategies to limit weight regain and achieve greater long-term weight loss. [source]
Incidence and mortality of interstitial lung disease in rheumatoid arthritis: A population-based study
ARTHRITIS & RHEUMATISM, Issue 6 2010
Tim Bongartz
Objective: Interstitial lung disease (ILD) has been recognized as an important comorbidity in rheumatoid arthritis (RA). We undertook the current study to assess the incidence, predictors, and mortality of RA-associated ILD. Methods: We examined a population-based incidence cohort of patients with RA and a matched cohort of individuals without RA. All subjects were followed up longitudinally. The lifetime risk of ILD was estimated. Cox proportional hazards models were used to compare the incidence of ILD between cohorts, to investigate predictors, and to explore the impact of ILD on survival. Results: Patients with RA (n = 582) and subjects without RA (n = 603) were followed up for a mean of 16.4 and 19.3 years, respectively. The lifetime risk of developing ILD was 7.7% for RA patients and 0.9% for non-RA subjects. This difference translated into a hazard ratio (HR) of 8.96 (95% confidence interval [95% CI] 4.02-19.94). The risk of developing ILD was higher in RA patients who were older at the time of disease onset, in male patients, and in individuals with more severe RA. The risk of death for RA patients with ILD was 3 times higher than in RA patients without ILD (HR 2.86 [95% CI 1.98-4.12]). Median survival after ILD diagnosis was only 2.6 years. ILD contributed approximately 13% to the excess mortality of RA patients when compared with the general population. Conclusion: Our results emphasize the increased risk of ILD in patients with RA. The devastating impact of ILD on survival provides evidence that development of better strategies for the treatment of ILD could significantly lower the excess mortality among individuals with RA. [source]

Immunological characterization of a Blo t 12 isoallergen: identification of immunoglobulin E epitopes
CLINICAL & EXPERIMENTAL ALLERGY, Issue 4 2009
J. Zakzuk
Summary Background: Differences in the IgE response to isoallergens could have clinical implications; therefore, their analysis will contribute to the design of better strategies for the diagnosis and treatment of allergic respiratory diseases. Several isoforms have been described from mites, but there is no information about the clinical impact of Blomia tropicalis isoallergens. Objective: To evaluate the differences in the IgE response against two Blo t 12 isoallergens. Methods: The IgE-binding properties of Blo t 12 isoallergens were analysed by ELISA, a skin prick test and ELISA cross-inhibition. Epitope mapping was performed using synthetic overlapping peptides. Fold-recognition methods were used to model the chitin-binding domain of the two isoallergens. Results: The frequency and strength of the IgE response were greater for Blo t 12.0101 than for Blo t 12.0102. Three IgE-binding areas were identified in Blo t 12.0101; one of them included two residues that are different in Blo t 12.0102. Modelling of the chitin-binding domains of these allergens predicted that they have structural differences that could influence antibody recognition of one of these epitopes. Conclusion: In silico structural analysis and immunological characterization of Blo t 12 reveal that allergen polymorphism influences IgE reactivity. Blo t 12.0101 is the most IgE-reactive isoform in Cartagena. The identified IgE epitopes could be mutated to obtain hypoallergenic molecules of potential use for immunotherapy. [source]
[source] Crunch Time: A Policy to Avoid the ,Announcement Effect' when Terminating a SubsidyGERMAN ECONOMIC REVIEW, Issue 1 2010Marc Gürtler Irreversibility; investment; announcement effect; subsidy; tax Abstract. If the government announces the termination of a subsidy paid for an irreversible investment under uncertainty, investors might decide to realize their investment so as to obtain the subsidy. These investors might have postponed an investment if future payment were assured. Depending on the degree of uncertainty and the time preference, the termination of the subsidy might cost the government more in toto than granting the subsidy on a continuing basis. A better strategy would be to reduce the subsidy in parts rather than to terminate the subsidy in its entirety. [source] Combining inflation density forecastsJOURNAL OF FORECASTING, Issue 1-2 2010Christian Kascha Abstract In this paper, we empirically evaluate competing approaches for combining inflation density forecasts in terms of Kullback,Leibler divergence. In particular, we apply a similar suite of models to four different datasets and aim at identifying combination methods that perform well throughout different series and variations of the model suite. We pool individual densities using linear and logarithmic combination methods. The suite consists of linear forecasting models with moving estimation windows to account for structural change. We find that combining densities is a much better strategy than selecting a particular model ex ante. While combinations do not always perform better than the best individual model, combinations always yield accurate forecasts and, as we show analytically, provide insurance against selecting inappropriate models. Logarithmic combinations can be advantageous, in particular if symmetric densities are preferred. Copyright © 2010 John Wiley & Sons, Ltd. [source] Local and Foreign Models of Reproduction in Nyanza Province, KenyaPOPULATION AND DEVELOPMENT REVIEW, Issue 4 2000Susan Cotts Watkins This article uses colonial archival records, surveys conducted in the 1960s, and surveys and focus group discussions in the 1990s to describe three distinct but temporally overlapping cultural models of reproduction in a rural community in Kenya between the 1930s and the present. The first model, "large families are rich," was slowly undermined by developments brought about by the integration of Kenya into the British empire. This provoked the collective formulation of a second local model, "small families are progressive," which retained the same goal of wealth but viewed a smaller family as a better strategy for achieving it. The third model, introduced by the global networks of the international population movement in the 1960s, augmented the second model with the deliberate control of fertility using clinic provided methods of family planning. By the 1990s this global model had begun to be domesticated as local clinics routinely promoted family planning and as men and women in Nyanza began to use family planning and to tell others of their motivations and experiences. [source] Spot-futures spread, time-varying correlation, and hedging with currency futuresTHE JOURNAL OF FUTURES MARKETS, Issue 10 2006Donald Lien This article investigates the effects of the spot-futures spread on the return and risk structure in currency markets. 
Local and Foreign Models of Reproduction in Nyanza Province, Kenya
POPULATION AND DEVELOPMENT REVIEW, Issue 4 2000
Susan Cotts Watkins
This article uses colonial archival records, surveys conducted in the 1960s, and surveys and focus group discussions in the 1990s to describe three distinct but temporally overlapping cultural models of reproduction in a rural community in Kenya between the 1930s and the present. The first model, "large families are rich," was slowly undermined by developments brought about by the integration of Kenya into the British empire. This provoked the collective formulation of a second local model, "small families are progressive," which retained the same goal of wealth but viewed a smaller family as a better strategy for achieving it. The third model, introduced by the global networks of the international population movement in the 1960s, augmented the second model with the deliberate control of fertility using clinic-provided methods of family planning. By the 1990s this global model had begun to be domesticated, as local clinics routinely promoted family planning and as men and women in Nyanza began to use family planning and to tell others of their motivations and experiences. [source]

Spot-futures spread, time-varying correlation, and hedging with currency futures
THE JOURNAL OF FUTURES MARKETS, Issue 10 2006
Donald Lien
This article investigates the effects of the spot-futures spread on the return and risk structure in currency markets. With the use of a bivariate dynamic conditional correlation GARCH framework, evidence is found of asymmetric effects of positive and negative spreads on the return and risk structure of spot and futures markets. The implications of the asymmetric effects for futures hedging are examined, and the performance of hedging strategies generated from a model incorporating asymmetric effects is compared with several alternative models. The in-sample comparison results indicate that the asymmetric effect model provides the best hedging strategy for all currency markets examined, except for the Canadian dollar. Out-of-sample comparisons suggest that the asymmetric effect model provides the best strategy for the Australian dollar, the British pound, the deutsche mark, and the Swiss franc markets, and that the symmetric effect model provides a better strategy than the asymmetric effect model for the Canadian dollar and the Japanese yen. The worst performance is given by the naive hedging strategy in both in-sample and out-of-sample comparisons for all currency markets examined. © 2006 Wiley Periodicals, Inc. Jrl Fut Mark 26:1019-1038, 2006 [source]
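Full DCC-GARCH estimation is beyond a short example, but the sketch below shows the static minimum-variance hedge ratio that such time-varying models generalize, compared against no hedge and the naive one-for-one hedge the abstract finds worst. The spot and futures return series are simulated, not market data.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(0.0, 0.010, size=500)              # futures returns
s = 0.9 * f + rng.normal(0.0, 0.004, size=500)    # correlated spot returns

# Minimum-variance hedge ratio: Cov(spot, futures) / Var(futures).
# In a DCC-GARCH setting these moments would be conditional and time-varying.
h = np.mean((s - s.mean()) * (f - f.mean())) / np.var(f)

for name, ratio in [("unhedged", 0.0), ("naive", 1.0), ("min-variance", h)]:
    port = s - ratio * f                          # hedged portfolio return
    print(f"{name:13s} h = {ratio:5.2f}, portfolio variance = {np.var(port):.2e}")
```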
European Mathematical Genetics Meeting, Heidelberg, Germany, 12th-13th April 2007
ANNALS OF HUMAN GENETICS, Issue 4 2007
Article first published online: 28 MAY 200

Saurabh Ghosh (1)
(1) Indian Statistical Institute, Kolkata, India
High correlations between two quantitative traits may be due either to common genetic factors or to common environmental factors, or to a combination of both. In this study, we develop statistical methods to extract the contribution of a common QTL to the total correlation between the components of a bivariate phenotype. Using data on bivariate phenotypes and marker genotypes for sib-pairs, we propose a test for linkage between a common QTL and a marker locus based on the conditional cross-sib trait correlations (trait 1 of sib 1 with trait 2 of sib 2, and conversely) given the identity-by-descent sharing at the marker locus. The null hypothesis cannot be rejected unless there exists a common QTL. We use Monte-Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study on the Genetics of Alcoholism project.

Rémi Kazma (1), Catherine Bonaïti-Pellié (1), Emmanuelle Génin (1)
(1) INSERM UMR-S535 and Université Paris Sud, Villejuif, 94817, France
Keywords: Gene-environment interaction, sibling recurrence risk, exposure correlation
Gene-environment interactions may play important roles in complex disease susceptibility, but their detection is often difficult. Here we show how gene-environment interactions can be detected by investigating the degree of familial aggregation according to the exposure of the probands. In case of gene-environment interaction, the distribution of genotypes of affected individuals, and consequently the risk in relatives, depends on their exposure. We developed a test comparing the risks in sibs according to the proband exposure. To evaluate the properties of this new test, we derived formulas for calculating the expected risks in sibs according to the exposure of probands for various values of exposure frequency, relative risk due to exposure alone, frequencies of latent susceptibility genotypes, genetic relative risks and interaction coefficients. We find that the ratio of the risks when the proband is exposed versus not exposed is a good indicator of the interaction effect. We evaluate the power of the test for various sample sizes of affected individuals. We conclude that this test is valuable for diseases with moderate familial aggregation, but only when the role of the exposure has been clearly evidenced. Since a correlation for exposure among sibs might lead to a difference in risks among sibs in the different proband exposure strata, we also add an exposure correlation coefficient to the model. Interestingly, we find that when this correlation is correctly accounted for, the power of the test is not decreased and might even be significantly increased.

Andrea Callegaro (1), Hans J.C. Van Houwelingen (1), Jeanine Houwing-Duistermaat (1)
(1) Dept. of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands
Keywords: Survival analysis, age at onset, score test, linkage analysis
Non-parametric linkage (NPL) analysis compares the identical-by-descent (IBD) sharing in sibling pairs to the expected IBD sharing under the hypothesis of no linkage. Often information is available on the marginal cumulative hazards (for example, breast cancer incidence curves). Our aim is to extend the NPL methods by taking into account the age at onset of selected sibling pairs using these known marginal hazards. Li and Zhong (2002) proposed a (retrospective) likelihood ratio test based on an additive frailty model for genetic linkage analysis. From their model we derive a score statistic for selected samples, which turns out to be a weighted NPL method. The weights depend on the marginal cumulative hazards and on the frailty parameter. A second approach is based on a simple gamma shared frailty model. Here, we simply test whether the score function of the frailty parameter depends on the excess IBD. We compare the performance of these methods using simulated data.
Céline Bellenguez (1), Carole Ober (2), Catherine Bourgain (1)
(1) INSERM U535 and University Paris Sud, Villejuif, France
(2) Department of Human Genetics, The University of Chicago, USA
Keywords: Linkage analysis, linkage disequilibrium, high density SNP data
Compared with microsatellite markers, high-density SNP maps should be more informative for linkage analyses. However, because they are much closer together, SNPs present substantial linkage disequilibrium (LD), which biases classical nonparametric multipoint analyses. This problem is even stronger in population isolates, where LD extends over larger regions with a more stochastic pattern. We investigate the issue of linkage analysis with a 500K SNP map in a large and inbred 1840-member Hutterite pedigree, phenotyped for asthma. Using an efficient pedigree-breaking strategy, we first identified linked regions with a 5 cM microsatellite map, on which we focused to evaluate the SNP map. The only method that models LD in the NPL analysis is limited in both the pedigree size and the number of markers (Abecasis and Wigginton, 2005) and therefore could not be used. Instead, we studied methods that identify sets of SNPs with maximum linkage information content in our pedigree and no LD-driven bias. Both algorithms that directly remove pairs of SNPs in high LD and clustering methods were evaluated. Null simulations were performed to check that Zlr values calculated with the SNP sets were not falsely inflated. Preliminary results suggest that although LD is strong in such populations, linkage information content slightly better than that of microsatellite maps can be extracted from dense SNP maps, provided that a careful marker selection is conducted. In particular, we show that the specific LD pattern requires considering LD between a wide range of marker pairs rather than only within predefined blocks.

Peter Van Loo (1,2,3), Stein Aerts (1,2), Diether Lambrechts (4,5), Bernard Thienpont (2), Sunit Maity (4,5), Bert Coessens (3), Frederik De Smet (4,5), Leon-Charles Tranchevent (3), Bart De Moor (2), Koen Devriendt (3), Peter Marynen (1,2), Bassem Hassan (1,2), Peter Carmeliet (4,5), Yves Moreau (3)
(1) Department of Molecular and Developmental Genetics, VIB, Belgium
(2) Department of Human Genetics, University of Leuven, Belgium
(3) Bioinformatics group, Department of Electrical Engineering, University of Leuven, Belgium
(4) Department of Transgene Technology and Gene Therapy, VIB, Belgium
(5) Center for Transgene Technology and Gene Therapy, University of Leuven, Belgium
Keywords: Bioinformatics, gene prioritization, data fusion
The identification of genes involved in health and disease remains a formidable challenge. Here, we describe a novel bioinformatics method to prioritize candidate genes underlying pathways or diseases, based on their similarity to genes known to be involved in these processes. It is freely accessible as an interactive software tool, ENDEAVOUR, at http://www.esat.kuleuven.be/endeavour. Unlike previous methods, ENDEAVOUR generates distinct prioritizations from multiple heterogeneous data sources, which are then integrated, or fused, into one global ranking using order statistics. ENDEAVOUR prioritizes candidate genes in a three-step process. First, information about a disease or pathway is gathered from a set of known "training" genes by consulting multiple data sources. Next, the candidate genes are ranked based on similarity with the training properties obtained in the first step, resulting in one prioritized list for each data source. Finally, ENDEAVOUR fuses each of these rankings into a single global ranking, providing an overall prioritization of the candidate genes. Validation of ENDEAVOUR revealed that it was able to efficiently prioritize 627 genes in disease data sets and 76 genes in biological pathway sets, identify candidates for 16 mono- or polygenic diseases, and discover regulatory genes of myeloid differentiation. Furthermore, the approach identified YPEL1 as a novel gene involved in craniofacial development from a 2-Mb chromosomal region deleted in some patients with DiGeorge-like birth defects. Finally, we are currently evaluating a pipeline combining array-CGH, ENDEAVOUR and in vivo validation in zebrafish to identify novel genes involved in congenital heart defects.
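ENDEAVOUR's fusion step combines one ranking per data source into a global ranking. The sketch below uses the average rank ratio as a deliberately simplified stand-in for ENDEAVOUR's actual order-statistics formula; the gene names, the three sources and all rankings are invented.

```python
import numpy as np

# Three hypothetical data sources, each ranking the same five candidate genes.
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E"]
rankings = {
    "expression": ["GENE_B", "GENE_A", "GENE_D", "GENE_C", "GENE_E"],
    "literature": ["GENE_A", "GENE_B", "GENE_E", "GENE_D", "GENE_C"],
    "interaction": ["GENE_B", "GENE_D", "GENE_A", "GENE_E", "GENE_C"],
}

n = len(genes)
scores = {}
for g in genes:
    # Rank ratio per source: position / number of candidates, in (0, 1].
    ratios = [(r.index(g) + 1) / n for r in rankings.values()]
    scores[g] = np.mean(ratios)      # lower combined score = higher priority

for g in sorted(scores, key=scores.get):
    print(f"{g}: combined rank ratio {scores[g]:.2f}")
```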
Mark Broom (1), Graeme Ruxton (2), Rebecca Kilner (3)
(1) Mathematics Dept., University of Sussex, UK
(2) Division of Environmental and Evolutionary Biology, University of Glasgow, UK
(3) Department of Zoology, University of Cambridge, UK
Keywords: Evolutionarily stable strategy, parasitism, asymmetric game
Brood parasite chicks vary in the harm that they do to their companions in the nest. In this presentation we use game-theoretic methods to model this variation. Our model considers hosts that potentially abandon single nestlings and instead choose to re-allocate their reproductive effort to future breeding, irrespective of whether the abandoned chick is the host's young or a brood parasite's. The parasite chick must decide whether or not to kill host young by balancing the benefits of reduced competition in the nest against the risk of desertion by the host parents. The model predicts that three different types of evolutionarily stable strategies can exist. (1) Hosts routinely rear depleted broods, the brood parasite always kills host young, and the host never then abandons the nest. (2) When adult survival after deserting single offspring is very high, hosts always abandon broods of a single nestling and the parasite never kills host offspring, effectively holding them as hostages to prevent nest desertion. (3) Intermediate strategies, in which parasites sometimes kill their nest-mates and host parents sometimes desert nests that contain only a single chick, can also be evolutionarily stable. We provide quantitative descriptions of how the values given to ecological and behavioral parameters of the host-parasite system influence the likelihood of each strategy, and we compare our results with real host-brood parasite associations in nature.

Martin Harrison (1)
(1) Mathematics Dept, University of Sussex, UK
Keywords: Brood parasitism, games, host, parasite
The interaction between hosts and parasites in bird populations has been studied extensively. Game-theoretical methods have been used to model this interaction previously, but the sequential nature of the game has not been studied in depth. We consider a model allowing the host and parasite to make a number of decisions, which depend on a number of natural factors. The host lays an egg; a parasite bird arrives at the nest with a certain probability and then chooses to destroy a number of the host eggs and lay one of its own. With some destruction occurring, either natural or through the actions of the parasite, the host chooses to continue, eject an egg (hoping to eject the parasite's) or abandon the nest. Once the eggs have hatched, the game then falls to the parasite chick versus the host. The chick chooses to destroy or eject a number of eggs. The final decision is made by the host, which chooses whether to raise or abandon the chicks that are in the nest. We consider various natural parameters and probabilities that influence these decisions. We then use this model to look at real-world situations of the interactions of the Reed Warbler with two different parasites, the Common Cuckoo and the Brown-headed Cowbird. These two parasites have different methods of parasitizing the nests of their hosts, and the hosts in turn react differently to these parasites.

Arne Jochens (1), Amke Caliebe (2), Uwe Roesler (1), Michael Krawczak (2)
(1) Mathematical Seminar, University of Kiel, Germany
(2) Institute of Medical Informatics and Statistics, University of Kiel, Germany
Keywords: Stepwise mutation model, microsatellite, recursion equation, temporal behaviour
We consider the stepwise mutation model, which occurs, e.g., at microsatellite loci. Let X(t,i) denote the allelic state of individual i at time t. We compute the expectation, variance and covariance of X(t,i), i = 1, ..., N, and provide a recursion equation for P(X(t,i) = z). Because the variance of X(t,i) goes to infinity as t grows, for the description of the temporal behaviour we regard the scaled process X(t,i) - X(t,1). The results furnish a better understanding of the behaviour of the stepwise mutation model and may in future be used to derive tests for neutrality under this model.
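A stripped-down Monte-Carlo version of the stepwise mutation model just described, with independent lineages and no genetic drift (an assumption made here purely to keep the sketch short). It illustrates why the variance of X(t,i) grows roughly linearly in t while differences between individuals remain usable; population size, mutation rate and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, mu, T = 1000, 0.01, 2000
x = np.zeros(N, dtype=int)             # X(0, i) = 0 for all individuals

for t in range(T):
    m = rng.random(N) < mu             # who mutates this generation
    x[m] += rng.choice([-1, 1], size=m.sum())   # one repeat up or down

print("mean X(T,i)          ", x.mean())          # stays near 0
print("var  X(T,i)          ", x.var())           # grows roughly like mu * T
print("var  X(T,i) - X(T,1) ", (x - x[0]).var())  # scaled process stays informative
```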
Paul O'Reilly (1), Ewan Birney (2), David Balding (1)
(1) Statistical Genetics, Department of Epidemiology and Public Health, Imperial College London, UK
(2) European Bioinformatics Institute, EMBL, Cambridge, UK
Keywords: Positive selection, recombination rate, LD, genome-wide, natural selection
In recent years, efforts to develop population genetics methods that estimate rates of recombination and levels of natural selection in the human genome have intensified. However, since the two processes have an intimately related impact on genetic variation, their inference is vulnerable to confounding. Genomic regions subject to recent selection are likely to have a relatively recent common ancestor and consequently less opportunity for historical recombinations that are detectable in contemporary populations. Here we show that selection can reduce the population-based recombination rate estimate substantially. In genome-wide studies for detecting selection, we observe a tendency to highlight loci that are subject to low levels of recombination. We find that the outlier approach commonly adopted in such studies may have low power unless variable recombination is accounted for. We introduce a new genome-wide method for detecting selection that exploits the sensitivity to recent selection of methods for estimating recombination rates, while accounting for variable recombination using pedigree data. Through simulations we demonstrate the high power of the Ped/Pop approach to discriminate between neutral and adaptive evolution, particularly in the context of choosing outliers from a genome-wide distribution. Although methods have been developed showing good power to detect selection 'in action', the corresponding window of opportunity is small. In contrast, the power of the Ped/Pop method is maintained for many generations after the fixation of an advantageous variant.

Sarah Griffiths (1), Frank Dudbridge (1)
(1) MRC Biostatistics Unit, Cambridge, UK
Keywords: Genetic association, multimarker tag, haplotype, likelihood analysis
In association studies it is generally too expensive to genotype all variants in all subjects. We can exploit linkage disequilibrium between SNPs to select a subset that captures the variation in a training data set obtained either through direct resequencing or from a public resource such as the HapMap. These 'tag SNPs' are then genotyped in the whole sample. Multimarker tagging is a more aggressive adaptation of pairwise tagging that allows combinations of two or more tag SNPs to predict an untyped SNP. Here we describe a new method for directly testing the association of an untyped SNP using a multimarker tag. Previously, other investigators have suggested testing a specific tag haplotype, or performing a weighted analysis using weights derived from the training data. However, these approaches do not properly account for the imperfect correlation between the tag haplotype and the untyped SNP. Here we describe a straightforward approach to testing untyped SNPs using a missing-data likelihood analysis, including the tag markers as nuisance parameters. The training data are stacked on top of the main body of genotype data, so there is information on how the tag markers predict the genotype of the untyped SNP. The uncertainty in this prediction is automatically taken into account in the likelihood analysis. This approach yields more power and also a more accurate prediction of the odds ratio of the untyped SNP.
Anke Schulz (1), Christine Fischer (2), Jenny Chang-Claude (1), Lars Beckmann (1)
(1) Division of Cancer Epidemiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
(2) Institute of Human Genetics, University of Heidelberg, Germany
Keywords: Haplotype, haplotype sharing, entropy, Mantel statistics, marker selection
We previously introduced a new method to map genes involved in complex diseases, using haplotype-sharing-based Mantel statistics to correlate genetic and phenotypic similarity. Although the Mantel statistic is powerful in narrowing down candidate regions, the precise localization of a gene is hampered in genomic regions where linkage disequilibrium is so high that neighboring markers are found to be significant at similar magnitude and we are not able to discriminate between them. Here, we present a new approach to localize susceptibility genes by combining haplotype-sharing-based Mantel statistics with an iterative entropy-based marker selection algorithm. For each marker at which the Mantel statistic is evaluated, the algorithm selects a subset of surrounding markers. The subset is chosen to maximize multilocus linkage disequilibrium, which is measured by the normalized entropy difference introduced by Nothnagel et al. (2002). We evaluated the algorithm with respect to type I error and power. Its ability to localize the disease variant was compared to the localization (i) without marker selection and (ii) considering haplotype block structure. Case-control samples were simulated from a set of 18 haplotypes, consisting of 15 SNPs in two haplotype blocks. The new algorithm gave correct type I error and yielded power to detect the disease locus similar to that of the alternative approaches. The neighboring markers were clearly less often significant than the causal locus, and also less often significant compared to the alternative approaches. Thus the new algorithm improved the precision of the localization of susceptibility genes.

Mark M. Iles (1)
(1) Section of Epidemiology and Biostatistics, LIMM, University of Leeds, UK
Keywords: tSNP, tagging, association, HapMap
Tagging SNPs (tSNPs) are commonly used to capture genetic diversity cost-effectively. However, it is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be insufficient. If the pilot sample from which tSNPs are chosen is too small or the initial marker map too sparse, tSNP efficacy may be overestimated. An existing estimation method based on bootstrapping goes some way to correcting for insufficient sample size and overfitting, but does not completely solve the problem. We describe a novel method, based on exclusion of haplotypes, that improves on the bootstrap approach. Using simulated data, the extent of the sample-size problem is investigated and the performance of the bootstrap and the novel method are compared. We incorporate an existing method adjusting for marker density by 'SNP-dropping'. We find that insufficient sample size can cause large overestimates in tSNP efficacy, even with as many as 100 individuals, and the problem worsens as the region studied increases in size. Both the bootstrap and the novel method correct much of this overestimate, with our novel method consistently outperforming the bootstrap method. We conclude that a combination of insufficient sample size and overfitting may lead to overestimation of tSNP efficacy and underpowering of studies based on tSNPs. Our novel approach corrects for much of this bias and is superior to the previous method. Sample sizes larger than previously suggested may still be required for accurate estimation of tSNP efficacy. This has obvious ramifications for the selection of tSNPs from HapMap data.
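The overestimation effect described in the abstract above, tag selection overfitting a small pilot sample, can be reproduced in a few lines. The sketch below greedily picks tag SNPs on a 50-haplotype pilot and then re-evaluates coverage on a large independent sample; the LD structure comes from a latent AR(1) Gaussian and is purely illustrative, not a realistic coalescent model.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_haplotypes(n, m=20, rho=0.8):
    # Correlated binary SNPs via a latent AR(1) Gaussian (illustrative LD).
    z = rng.normal(size=(n, m))
    for j in range(1, m):
        z[:, j] = rho * z[:, j - 1] + np.sqrt(1 - rho**2) * z[:, j]
    return (z > 0).astype(int)

def r2(a, b):
    c = np.corrcoef(a, b)[0, 1]
    return c * c

def coverage(h, tags):
    # Mean over SNPs of the best r^2 with any chosen tag.
    return np.mean([max(r2(h[:, j], h[:, t]) for t in tags)
                    for j in range(h.shape[1])])

pilot = simulate_haplotypes(50)      # small pilot sample
popn = simulate_haplotypes(5000)     # stand-in for the truth

# Greedy selection on the pilot: repeatedly tag the worst-covered SNP
# until every SNP reaches r^2 >= 0.8 with some tag, in the pilot.
tags = []
while True:
    worst = min(range(pilot.shape[1]),
                key=lambda j: max((r2(pilot[:, j], pilot[:, t]) for t in tags),
                                  default=0.0))
    if tags and max(r2(pilot[:, worst], pilot[:, t]) for t in tags) >= 0.8:
        break
    tags.append(worst)

print("tags chosen:", sorted(tags))
print("apparent coverage (pilot):  ", round(coverage(pilot, tags), 3))
print("true coverage (population): ", round(coverage(popn, tags), 3))
```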
Claudio Verzilli (1), Juliet Chapman (1), Aroon Hingorani (2), Juan Pablo Casas (1), Tina Shah (2), Liam Smeeth (1), John Whittaker (1)
(1) Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK
(2) Division of Medicine, University College London, UK
Keywords: Meta-analysis, genetic association studies
We present a Bayesian hierarchical model for the meta-analysis of candidate gene studies with a continuous outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping, markers (typically SNPs) in the same genetic region. Meta-analyses of the results at each marker in isolation are seldom appropriate, as they ignore the correlation that may exist between markers due to linkage disequilibrium (LD) and cannot assess the relative importance of variants at each marker. Also, such marker-wise meta-analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power. A better strategy is one that incorporates information about the LD between markers, so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. Here we develop a Bayesian hierarchical linear regression that models the observed genotype group means and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach is applied to the meta-analysis of 24 studies assessing the effect of 7 variants in the C-reactive protein (CRP) gene region on plasma CRP levels, an inflammatory biomarker shown in observational studies to be positively associated with cardiovascular disease.

Cathryn M. Lewis (1), Christopher G. Mathew (1), Theresa M. Marteau (2)
(1) Dept. of Medical and Molecular Genetics, King's College London, UK
(2) Department of Psychology, King's College London, UK
Keywords: Risk, genetics, CARD15, smoking, model
Recently, progress has been made in identifying mutations that confer susceptibility to complex diseases, with the potential to use these mutations in determining disease risk. We developed methods to estimate disease risk based on genotype relative risks (for a gene G), exposure to an environmental factor (E), and family history (with recurrence risk λR for a relative of type R). λR must be partitioned into the risk due to G (which is modelled independently) and the residual risk. The risk model was then applied to Crohn's disease (CD), a severe gastrointestinal disease for which smoking increases disease risk approximately 2-fold and mutations in CARD15 confer increased risks of 2.25 (for carriers of a single mutation) and 9.3 (for carriers of two mutations). CARD15 accounts for only a small proportion of the genetic component of CD, with a gene-specific λS,CARD15 of 1.16, out of a total sibling relative risk of λS = 27. CD risks were estimated for high-risk individuals who are siblings of a CD case and who also smoke. The CD risk to such individuals who carry two CARD15 mutations is approximately 0.34, and for those carrying a single CARD15 mutation the risk is 0.08, compared to a population prevalence of approximately 0.001. These results imply that complex disease genes may be valuable in estimating, with greater precision than has hitherto been possible, disease risks in specific, easily identified subgroups of the population, with a view to prevention.
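As a back-of-envelope illustration of the risk model above, the sketch below multiplies the published ingredients under a crude independence assumption. It deliberately simplifies the λ partitioning, so it approximates rather than reproduces the quoted risks of 0.34 and 0.08; the full model is the authors', not this one.

```python
# Crude multiplicative approximation of the sibling-who-smokes risk.
prevalence = 0.001        # population prevalence of Crohn's disease
lambda_sib = 27.0         # total sibling recurrence risk ratio
lambda_card15 = 1.16      # share of lambda_sib attributable to CARD15
rr_smoking = 2.0          # relative risk conferred by smoking
rr_two_mutations = 9.3    # genotype relative risk, two CARD15 mutations

# Residual familial risk once CARD15's share is factored out
# (a simplifying assumption, not the paper's exact partition).
lambda_residual = lambda_sib / lambda_card15

risk = prevalence * lambda_residual * rr_smoking * rr_two_mutations
print(f"approximate risk, exposed sib with two mutations: {risk:.2f}")
# prints roughly 0.43, in the same ballpark as the reported 0.34
```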
Yurii Aulchenko (1)
(1) Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands
Keywords: Compression, information, bzip2, genome-wide SNP data, statistical genetics
With advances in molecular technology, studies accessing millions of genetic polymorphisms in thousands of study subjects will soon become common. Such studies generate large amounts of data, whose effective storage and management is a challenge to modern statistical genetics. Standard file compression utilities, such as Zip, Gzip and Bzip2, may be helpful in minimising the storage requirements. Less obvious is the fact that data compression techniques may also be used in the analysis of genetic data. It is known that the efficiency of a particular compression algorithm depends on the probability structure of the data. In this work, we compared different standard and customised tools using the data from the human HapMap project. Secondly, we investigate the potential uses of data compression techniques for the analysis of linkage, association and linkage disequilibrium.
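A minimal experiment in the spirit of the abstract above, comparing how two standard compressors shrink a toy genotype matrix. The matrix dimensions, coding and allele-frequency distribution are arbitrary choices, not the HapMap data the abstract uses.

```python
import bz2
import zlib
import numpy as np

# Toy genotype matrix: 1,000 individuals x 10,000 SNPs coded 0/1/2,
# with allele frequencies drawn uniformly between 0.05 and 0.5.
rng = np.random.default_rng(4)
p = rng.uniform(0.05, 0.5, size=10_000)
geno = rng.binomial(2, p, size=(1_000, 10_000)).astype(np.uint8)
raw = geno.tobytes()

for name, compress in [("zlib (Zip/Gzip)", zlib.compress),
                       ("bz2 (Bzip2)", bz2.compress)]:
    out = compress(raw)
    print(f"{name:16s} compressed to {len(out) / len(raw):.3f} of raw size")
```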
Suzanne Leal (1), Bingshan Li (1)
(1) Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, USA
Keywords: Consanguineous pedigrees, missing genotype data
Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD). Previously, it was demonstrated by Huang et al. (2005) that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data are available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. The false-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage is highly dependent on which family members are genotyped. When parental genotype data are available, the false-positive evidence for linkage is usually not as strong as when parental genotype data are unavailable. Which family members will aid in the reduction of false-positive evidence of linkage is highly dependent on which other family members are genotyped. For a pedigree with an affected proband whose first-cousin parents have been genotyped, a further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or genotype data from the proband's sibling-grandparents. When parental genotypes are not available, false-positive evidence for linkage can be reduced by including in the analysis genotype data from either unaffected siblings of the proband or the proband's married-in-grandparents.

Najaf Amin (1), Yurii Aulchenko (1)
(1) Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands
Keywords: Genomic Control, pedigree structure, quantitative traits
The Genomic Control (GC) method was originally developed to control for population stratification and cryptic relatedness in association studies. This method assumes that the effect of population substructure on the test statistics is essentially constant across the genome, and therefore unassociated markers can be used to estimate the effect of confounding on the test statistic. The properties of the GC method have been extensively investigated for different stratification scenarios and compared to alternative methods, such as the transmission-disequilibrium test. The potential of this method to correct not for occasional cryptic relations but for regular pedigree structure, however, has not been investigated before. In this work we investigate the potential of the GC method for pedigree-based association analysis of quantitative traits. The power and type I error of the method were compared to standard methods, such as the measured genotype (MG) approach and the quantitative trait transmission-disequilibrium test (TDT). In human pedigrees, with trait heritability varying from 30 to 80%, the power of the MG and GC approaches was always higher than that of the TDT. GC had correct type I error, and its power was close to that of MG under moderate heritability (30%) but decreased with higher heritability.

William Astle (1), Chris Holmes (2), David Balding (1)
(1) Department of Epidemiology and Public Health, Imperial College London, UK
(2) Department of Statistics, University of Oxford, UK
Keywords: Population structure, association studies, genetic epidemiology, statistical genetics
In the analysis of population association studies, Genomic Control (Devlin & Roeder, 1999) (GC) adjusts the Armitage test statistic to correct the type I error for the effects of population substructure, but its power is often sub-optimal. Turbo Genomic Control (TGC) generalises GC to incorporate co-variation of relatedness and phenotype, retaining control over type I error while improving power. TGC is similar to the method of Yu et al. (2006), but we extend it to binary (case-control) in addition to quantitative phenotypes, we implement improved estimation of relatedness coefficients, and we derive an explicit statistic that generalizes the Armitage test statistic and is fast to compute. TGC also has similarities to EIGENSTRAT (Price et al., 2006), which is a new method based on principal components analysis. The problems of population structure (Clayton et al., 2005) and cryptic relatedness (Voight & Pritchard, 2005) are essentially the same: if patterns of shared ancestry differ between cases and controls, whether distant (coancestry) or recent (cryptic relatedness), false positives can arise and power can be diminished. With large numbers of widely spaced genetic markers, coancestry can now be measured accurately for each pair of individuals via patterns of allele sharing. Instead of modelling subpopulations, we work with a coancestry coefficient for each pair of individuals in the study. We explain the relationships between TGC, GC and EIGENSTRAT. We present simulation studies and real data analyses to illustrate the power advantage of TGC in a range of scenarios incorporating both substructure and cryptic relatedness.
References
Clayton, D. G. et al. (2005) Population structure, differential bias and genomic control in a large-scale case-control association study. Nature Genetics 37(11), November 2005.
Devlin, B. & Roeder, K. (1999) Genomic control for association studies. Biometrics 55(4), December 1999.
Price, A. L. et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8), August 2006.
Voight, B. J. & Pritchard, J. K. (2005) Confounding from cryptic relatedness in case-control association studies. Public Library of Science Genetics 1(3), September 2005.
Yu, J. et al. (2006) A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), February 2006.
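A minimal sketch of the basic genomic-control correction that both of the preceding abstracts build on: estimate the inflation factor lambda from the median genome-wide 1-df association chi-square, then deflate every test statistic. The statistics here are simulated with artificial inflation rather than taken from real genotype data.

```python
import numpy as np
from scipy import stats

# Simulate 100,000 1-df chi-square statistics with 30% artificial inflation.
rng = np.random.default_rng(5)
chi2 = rng.chisquare(df=1, size=100_000) * 1.3

# Lambda = observed median / theoretical median of chi-square(1) (~0.4549).
lam = np.median(chi2) / stats.chi2.ppf(0.5, df=1)
lam = max(lam, 1.0)            # by convention, never inflate the statistics
corrected = chi2 / lam

print(f"estimated lambda = {lam:.3f}")
print(f"median corrected statistic = {np.median(corrected):.3f}")
```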
Hervé Perdry (1), Marie-Claude Babron (1), Françoise Clerget-Darpoux (1)
(1) INSERM U535 and Univ. Paris Sud, UMR-S 535, Villejuif, France
Keywords: Modifier genes, case-parents trios, ordered transmission disequilibrium test
A modifying locus is a polymorphic locus, distinct from the disease locus, which leads to differences in the disease phenotype, either by modifying the penetrance of the disease allele or by modifying the expression of the disease. The effect of such a locus is a clinical heterogeneity that can be reflected in the values of an appropriate covariate, such as the age of onset or the severity of the disease. We designed the Ordered Transmission Disequilibrium Test (OTDT) to test for a relation between the clinical heterogeneity, expressed by the covariate, and the marker genotypes of a candidate gene. The method applies to trio families with one affected child and his parents. Each family member is genotyped at a bi-allelic marker M of a candidate gene. To each of the families is associated a covariate value, and the families are ordered on the values of this covariate. Like the TDT (Spielman et al. 1993), the OTDT is based on the observation of the transmission rate T of a given allele at M. The OTDT aims to find a critical value of the covariate that separates the sample of families into two subsamples in which the transmission rates are significantly different. We investigate the power of the method by simulations under various genetic models and covariate distributions. Acknowledgments: H. Perdry is funded by ARSEP.

Pascal Croiseau (1), Heather Cordell (2), Emmanuelle Génin (1)
(1) INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France
(2) Institute of Human Genetics, Newcastle University, UK
Keywords: Association, missing data, conditional logistic regression
Missing data are an important problem in association studies. Several methods used to test for association require that individuals be genotyped at the full set of markers, so individuals with missing data need to be excluded from the analysis. This can involve an important decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL, and one may then falsely conclude that the marker is more likely to be the DSL. We recently developed a multiple imputation method to infer missing data on case-parent trios. Starting from the observed data, a small number of complete data sets are generated by a Markov chain Monte Carlo approach. These complete data sets are analysed using standard statistical packages, and the results are combined as described by Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually detects the DSL site correctly, even if the percentage of missing data is high. This is not the case for the naive approach, which consists of discarding trios with missing data. In conclusion, multiple imputation has the advantage of being easy to use and flexible, and is therefore a promising tool in the search for DSLs involved in complex diseases.
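The combination step credited to Little & Rubin (2002) in the abstract above is short enough to show directly: average the per-imputation estimates and add the between-imputation variance to the within-imputation variance. The per-imputation estimates and standard errors below are invented.

```python
import math

# Hypothetical per-imputation results (e.g., log odds ratios and variances).
estimates = [0.42, 0.38, 0.45, 0.40, 0.47]
variances = [0.010, 0.011, 0.009, 0.012, 0.010]

m = len(estimates)
q_bar = sum(estimates) / m                                # pooled estimate
w_bar = sum(variances) / m                                # within-imputation var
b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)    # between-imputation var
total_var = w_bar + (1 + 1 / m) * b                       # Rubin's total variance

se = math.sqrt(total_var)
print(f"pooled beta = {q_bar:.3f}, SE = {se:.3f}, "
      f"95% CI ({q_bar - 1.96 * se:.3f}, {q_bar + 1.96 * se:.3f})")
```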
Pascal Croiseau 1, Heather Cordell 2, Emmanuelle Génin 1; 1 INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France; 2 Institute of Human Genetics, Newcastle University, UK. Keywords: Association, missing data, conditional logistic regression

Missing data are an important problem in association studies. Several methods used to test for association require that individuals be genotyped at the full set of markers, so individuals with missing data must be excluded from the analysis. This can entail a substantial decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL, and one may then falsely conclude that the marker is more likely to be the DSL. We recently developed a multiple imputation method to infer missing data on case-parent trios. Starting from the observed data, a small number of complete data sets are generated by a Markov chain Monte Carlo approach. These complete data sets are analysed using standard statistical packages and the results are combined as described in Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually detects the DSL site correctly even if the percentage of missing data is high. This is not the case for the naïve approach that consists in discarding trios with missing data. In conclusion, multiple imputation has the advantage of being easy to use and flexible, and is therefore a promising tool in the search for DSLs involved in complex diseases.

Salma Kotti 1, Heike Bickeböller 2, Françoise Clerget-Darpoux 1; 1 University Paris Sud, UMR-S535, Villejuif, France; 2 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany. Keywords: Genotype relative risk, internal controls, family-based analyses

Family-based analyses using internal controls are very popular both for detecting the effect of a genetic factor and for estimating the relative disease risk on the corresponding genotypes. Two different procedures are often applied to reconstitute internal controls. The first considers one pseudocontrol genotype formed by the parental non-transmitted alleles, also called 1:1 matching of alleles, while the second corresponds to three pseudocontrols formed by all genotypes constructed from the parental alleles except that of the case (1:3 matching). Many studies have compared the two procedures in terms of power and have concluded that the difference depends on the underlying genetic model and the allele frequencies. However, the estimation of the genotype relative risk (GRR) under the two procedures has not been studied. Given that in the 1:1 matching the control group is composed of the alleles untransmitted to the affected child, whereas in the 1:3 matching the control group includes alleles already transmitted to the affected child, we expect a difference in the GRR estimation. In fact, we suspect that the second procedure leads to biased estimation of the GRRs. We will analytically derive the GRR estimators for the 1:1 and 1:3 matching and will present the results at the meeting.
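The two pseudocontrol constructions are easy to make concrete. A toy Python sketch follows (the helper is hypothetical and assumes transmission has been resolved for each parent, which is the informative situation for these designs):

```python
def pseudocontrols(father, mother):
    """Each parent is a pair (transmitted_allele, untransmitted_allele).
    Returns the case genotype plus the 1:1 and 1:3 pseudocontrol sets."""
    tf, uf = father
    tm, um = mother
    case = (tf, tm)
    ctrl_1to1 = [(uf, um)]                      # the non-transmitted genotype
    ctrl_1to3 = [(tf, um), (uf, tm), (uf, um)]  # the three other combinations
    return case, ctrl_1to1, ctrl_1to3

case, c11, c13 = pseudocontrols(father=("A", "a"), mother=("a", "a"))
print("case:", case)         # ('A', 'a')
print("1:1 controls:", c11)  # [('a', 'a')]
print("1:3 controls:", c13)  # [('A', 'a'), ('a', 'a'), ('a', 'a')]
```

Note how the 1:3 set reuses alleles the case has already received, which is exactly the feature the abstract suspects of biasing the GRR estimate; the matched sets would then be analysed by conditional logistic regression.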
Luigi Palla 1, David Siegmund 2; 1 Department of Mathematics, Free University Amsterdam, The Netherlands; 2 Department of Statistics, Stanford University, California, USA. Keywords: TDT, assortative mating, inbreeding, statistical power

A substantial amount of assortative mating (AM) is often recorded for physical and psychological traits, dichotomous as well as quantitative, that are supposed to have a multifactorial genetic component. In particular, AM has the effect of increasing the genetic variance, even more than inbreeding, because when the trait has a multifactorial origin it acts across loci as well as within loci. Under the assumption of a polygenic model for AM dating back to Wright (1921) and refined by Crow and Felsenstein (1968, 1982), the effect of assortative mating on the power to detect genetic association with the transmission disequilibrium test (TDT) is explored as parameters such as the effective number of genes and the allele frequency vary. The power is reflected by the noncentrality parameter of the TDT and is expressed as a function of the number of trios, the relative risk of the heterozygous genotype and the allele frequency (Siegmund and Yakir, 2007). The noncentrality parameter of the relevant score statistic is updated to incorporate the effect of AM, which is expressed in terms of an 'effective' inbreeding coefficient. In particular, for dichotomous traits it is apparent that the higher the number of genes involved in the trait, the lower the loss in power due to AM. Finally, an attempt is made to extend this relation to the Q-TDT (Rabinowitz, 1997), which involves considering the effect of AM also on the phenotypic variance of the trait of interest, under the assumption that AM affects only its additive genetic component.

References: Crow, J. F. & Felsenstein, J. (1968). The effect of assortative mating on the genetic composition of a population. Eugen. Quart. 15, 87-97. Rabinowitz, D. (1997). A transmission disequilibrium test for quantitative trait loci. Human Heredity 47, 342-350. Siegmund, D. & Yakir, B. (2007). Statistics of Gene Mapping. Springer. Wright, S. (1921). Systems of mating. III. Assortative mating based on somatic resemblance. Genetics 6, 144-161.

Jérémie Nsengimana 1, Ben D Brown 2, Alistair S Hall 2, Jenny H Barrett 1; 1 Leeds Institute of Molecular Medicine, University of Leeds, UK; 2 Leeds Institute for Genetics, Health and Therapeutics, University of Leeds, UK. Keywords: Inflammatory genes, haplotype, coronary artery disease

Genetic Risk of Acute Coronary Events (GRACE) is an initiative to collect cases of coronary artery disease (CAD) and their unaffected siblings in the UK and to use them to map genetic variants increasing disease risk. The aim of the present study was to test the association between CAD and 51 single nucleotide polymorphisms (SNPs), and their haplotypes, from 35 inflammatory genes. Genotype data were available for 1154 persons affected before age 66 (including 48% before age 50) and their 1545 unaffected siblings (891 discordant families). Each SNP was tested for association with CAD, and haplotypes within genes or gene clusters were tested using FBAT (Rabinowitz & Laird, 2000). For the most significant results, genetic effect size was estimated using conditional logistic regression (CLR) within STATA, adjusting for other risk factors. Haplotypes were assigned using HAPLORE (Zhang et al., 2005), which considers all parental mating types consistent with offspring genotypes and assigns them a probability of occurrence.
This probability was used in CLR to weight the haplotypes. In the single-SNP analysis, several SNPs showed some evidence for association, including one SNP in the interleukin-1A gene. Analysing haplotypes in the interleukin-1 gene cluster, a common 3-SNP haplotype was found to increase the risk of CAD (P = 0.009). In an additive genetic model adjusting for covariates, the odds ratio (OR) for this haplotype is 1.56 (95% CI: 1.16-2.10, P = 0.004) for early-onset CAD (before age 50). This study illustrates the utility of haplotype analysis in family-based association studies to investigate candidate genes.

References: Rabinowitz, D. & Laird, N. M. (2000) Hum Hered 50, 211-223. Zhang, K., Sun, F. & Zhao, H. (2005) Bioinformatics 21, 90-103.
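The weighting idea can be sketched directly: each haplotype configuration consistent with a family enters the conditional-logistic likelihood with its assignment probability as the weight. Below is a minimal Python illustration for 1:1 matched pairs with made-up numbers (this is our sketch, not the GRACE data or the STATA/HAPLORE pipeline):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy data: discordant sibling pairs; each pair may have several possible
# haplotype configurations (rows), each with an assignment probability w.
# x_case / x_ctrl = copies of the risk haplotype under that configuration.
# Columns: (pair_id, w, x_case, x_ctrl) -- purely illustrative numbers.
rows = [
    (0, 1.0, 1, 0),
    (1, 0.7, 2, 1), (1, 0.3, 1, 1),
    (2, 1.0, 0, 1),
    (3, 0.6, 1, 0), (3, 0.4, 0, 0),
]

def neg_loglik(beta):
    ll = 0.0
    for _, w, x1, x0 in rows:
        # Conditional likelihood that the case is the affected member of the
        # pair, weighted by the haplotype-assignment probability.
        ll += w * (beta * x1 - np.logaddexp(beta * x1, beta * x0))
    return -ll

fit = minimize_scalar(neg_loglik)
print(f"haplotype log-odds ratio estimate: {fit.x:.3f}")
```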
Andrea Foulkes 1, Recai Yucel 1, Xiaohong Li 1; 1 Division of Biostatistics, University of Massachusetts, USA. Keywords: Haplotype, high-dimensional, mixed modeling

The explosion of molecular-level information coupled with large epidemiological studies presents an exciting opportunity to uncover the genetic underpinnings of complex diseases; however, several analytical challenges remain to be addressed. Characterizing the components of complex diseases inevitably requires consideration of synergies across multiple genetic loci and environmental and demographic factors. In addition, it is critical to capture information on allelic phase, that is, whether alleles within a gene are in cis (on the same chromosome) or in trans (on different chromosomes). In association studies of unrelated individuals, this alignment of alleles within a chromosomal copy is generally not observed. We address the potential ambiguity in allelic phase in this high-dimensional data setting using mixed effects models. Both a semi-parametric and a fully likelihood-based approach to estimation are considered to account for missingness in cluster identifiers. In the first case, we apply a multiple imputation procedure coupled with a first-stage expectation-maximization algorithm for parameter estimation; a bootstrap approach is employed to assess sensitivity to variability induced by parameter estimation. Secondly, a fully likelihood-based approach using an expectation conditional maximization algorithm is described. Notably, these models allow for characterizing high-order gene-gene interactions while providing a flexible statistical framework to account for the confounding or mediating role of person-specific covariates. The proposed method is applied to data arising from a cohort of human immunodeficiency virus type-1 (HIV-1) infected individuals at risk for therapy-associated dyslipidemia. Simulation studies demonstrate reasonable power and control of family-wise type I error rates.

Vivien Marquard 1, Lars Beckmann 1, Jenny Chang-Claude 1; 1 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany. Keywords: Genotyping errors, type I error, haplotype-based association methods

It has been shown in several simulation studies that genotyping errors may have a great impact on the type I error of statistical methods used in genetic association analysis of complex diseases. Our aim was to investigate type I error rates in a case-control study when differential and non-differential genotyping errors were introduced in realistic scenarios. We simulated case-control data sets in which individual genotypes were drawn from a haplotype distribution of 18 haplotypes with 15 markers in the APM1 gene. Genotyping errors were introduced following the 'unrestricted' and 'symmetric with 0 edges' error models described by Heid et al. (2006). In six scenarios, errors resulted from changes of one allele to another with predefined probabilities of 1%, 2.5% or 10%, respectively. Multiple errors per haplotype were possible, varying between 0 and 15, the number of markers investigated. We examined three association methods: Mantel statistics using haplotype sharing, a haplotype-specific score test, and the Armitage trend test for single markers. For a genotyping error rate of less than 1%, the type I error rates of all three methods were unaffected. For higher error rates and differential errors, the type I error of the Mantel statistic was only slightly increased, that of the Armitage trend test moderately increased, and that of the score test highly increased. For non-differential errors, the type I error rates were correct for all three methods. Further investigations will be carried out with different frequencies of differential error rates and will focus on power.
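A minimal sketch of how such errors can be injected in a simulation (a plain symmetric per-allele flip; the error models of Heid et al. (2006) used in the study are richer, and for brevity we ignore the haplotype structure of the genotypes):

```python
import numpy as np

rng = np.random.default_rng(2)

def add_genotype_errors(alleles, rate):
    """alleles: (n_individuals, n_markers, 2) array of 0/1 alleles.
    Each allele independently flips to the other allele with prob. `rate`."""
    flips = rng.random(alleles.shape) < rate
    return np.where(flips, 1 - alleles, alleles)

# 1000 cases and 1000 controls, 15 biallelic markers.
cases = rng.integers(0, 2, size=(1000, 15, 2))
controls = rng.integers(0, 2, size=(1000, 15, 2))

# Non-differential errors: the same rate in both groups.
cases_nd = add_genotype_errors(cases, 0.01)
controls_nd = add_genotype_errors(controls, 0.01)

# Differential errors: e.g. cases genotyped on an older platform.
cases_d = add_genotype_errors(cases, 0.10)
controls_d = add_genotype_errors(controls, 0.01)

print("per-allele error rate introduced in cases:", np.mean(cases_d != cases))
```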
Arne Neumann 1, Dörthe Malzahn 1, Martina Müller 2, Heike Bickeböller 1; 1 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany; 2 GSF-National Research Center for Environment and Health, Neuherberg & IBE-Institute of Epidemiology, Ludwig-Maximilians University München, Germany. Keywords: Interaction, longitudinal, nonparametric

Longitudinal data show the time-dependent course of phenotypic traits. In this contribution, we consider longitudinal cohort studies and investigate the association between two candidate genes and a dependent quantitative longitudinal phenotype. The set-up defines a factorial design which allows us to test simultaneously for the overall gene effect of the loci as well as for possible gene-gene and gene-time interaction; the latter would induce genetically based time-profile differences in the longitudinal phenotype. We adapt a non-parametric statistical test to genetic epidemiological cohort studies and investigate its performance by simulation studies. The statistical test was originally developed for longitudinal clinical studies (Brunner, Munzel, Puri, 1999, J Multivariate Anal 70:286-317). It is non-parametric in the sense that no assumptions are made about the underlying distribution of the quantitative phenotype. Longitudinal observations belonging to the same individual can be arbitrarily dependent on one another across time points, whereas trait observations of different individuals are independent. The two loci are assumed to be statistically independent. Our simulations show that the nonparametric test is comparable with ANOVA in terms of power to detect gene-gene and gene-time interaction in an ANOVA-favourable setting.

Rebecca Hein 1, Lars Beckmann 1, Jenny Chang-Claude 1; 1 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany. Keywords: Indirect association studies, interaction effects, linkage disequilibrium, marker allele frequency

Association studies accounting for gene-environment interactions (GxE) may be useful for detecting genetic effects and identifying important environmental effect modifiers. Current technology facilitates very dense marker spacing in genetic association studies; however, the true disease variant(s) may not be genotyped. In this situation, an association between a gene and a phenotype may still be detectable using genetic markers associated with the true disease variant(s) (indirect association). Zondervan and Cardon [2004] showed that the odds ratios (OR) of markers associated with the disease variant depend strongly on the linkage disequilibrium (LD) between the variant and the markers, and on whether the allele frequencies match, and thereby influence the sample size needed to detect genetic association. We examined the influence of LD and allele frequencies on the sample size needed to detect GxE in indirect association studies, and provide tables for sample size estimation. For discordant allele frequencies and incomplete LD, sample sizes can be unfeasibly large. The influence of both factors is stronger for disease loci with small rather than moderate to high disease allele frequencies. A decline in D' of e.g. 5% has less impact on sample size than increasing the difference in allele frequencies by the same percentage. Assuming 80% power, large interaction effects can be detected using smaller sample sizes than those needed for the detection of main effects. The detection of interaction effects involving rare alleles may not be possible. Focussing only on marker density can be a limited strategy in indirect association studies for GxE.

Cyril Dalmasso 1, Emmanuelle Génin 2, Catherine Bourgain 2, Philippe Broët 1; 1 JE 2492, Univ. Paris-Sud, France; 2 INSERM UMR-S 535 and University Paris Sud, Villejuif, France. Keywords: Linkage analysis, multiple testing, False Discovery Rate, mixture model

In the context of genome-wide linkage analyses, where a large number of statistical tests are performed simultaneously, the False Discovery Rate (FDR), defined as the expected proportion of false discoveries among all discoveries, is now widely used to take the multiple testing problem into account. Related criteria have also been considered, such as the local False Discovery Rate (lFDR), a variant of the FDR that gives each test its own measure of significance. The lFDR is defined as the posterior probability that a null hypothesis is true. Most of the proposed methods for estimating the lFDR or the FDR rely on distributional assumptions under the null hypothesis. However, in observational studies, the empirical null distribution may be very different from the theoretical one. In this work, we propose a mixture-model-based approach that provides estimates of the lFDR and the FDR in the context of large-scale variance component linkage analyses. In particular, this approach allows estimation of the empirical null distribution, which is a key quantity for any simultaneous inference procedure. The proposed method is applied to a real dataset.
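The core of such a mixture approach can be sketched in a few lines: fit f(z) = pi0*f0(z) + pi1*f1(z) by EM, estimating the empirical null f0 rather than fixing it at N(0,1), and report the lFDR as the posterior null probability. The sketch below is ours (simulated z-statistics and Gaussian components, far simpler than a variance-component linkage analysis) but shows the mechanics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated test statistics on the z scale: 95% null, 5% signal.
z = np.concatenate([rng.normal(0, 1, 9500), rng.normal(3, 1, 500)])

# Two-component mixture f(z) = pi0*N(mu0, s0^2) + (1-pi0)*N(mu1, s1^2).
pi0, mu0, s0, mu1, s1 = 0.9, 0.0, 1.0, 2.0, 1.0
for _ in range(200):
    f0 = pi0 * stats.norm.pdf(z, mu0, s0)
    f1 = (1 - pi0) * stats.norm.pdf(z, mu1, s1)
    r = f0 / (f0 + f1)                              # E-step: P(null | z)
    pi0 = r.mean()                                  # M-step: weighted moments
    mu0 = np.sum(r * z) / r.sum()
    s0 = np.sqrt(np.sum(r * (z - mu0) ** 2) / r.sum())
    mu1 = np.sum((1 - r) * z) / (1 - r).sum()
    s1 = np.sqrt(np.sum((1 - r) * (z - mu1) ** 2) / (1 - r).sum())

# Local FDR: posterior probability of the (empirically estimated) null.
lfdr = pi0 * stats.norm.pdf(z, mu0, s0) / (
    pi0 * stats.norm.pdf(z, mu0, s0) + (1 - pi0) * stats.norm.pdf(z, mu1, s1))
print(f"estimated pi0 = {pi0:.3f}; discoveries at lfdr<0.2: {(lfdr < 0.2).sum()}")
```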
Arief Gusnanto 1, Frank Dudbridge 1; 1 MRC Biostatistics Unit, Cambridge, UK. Keywords: Significance, genome-wide, association, permutation, multiplicity

Genome-wide association scans have introduced statistical challenges, mainly in the multiplicity of thousands of tests. The question of what constitutes a significant finding remains somewhat unresolved. Permutation testing is very time-consuming, whereas Bayesian arguments struggle to distinguish direct from indirect association. It seems attractive to summarise the multiplicity in a simple form that allows users to avoid time-consuming permutations; a standard significance level would facilitate reporting of results and reduce the need for permutation tests. This is potentially important because current scans do not have full coverage of the whole genome, and yet the implicit multiplicity is genome-wide. We discuss some proposed summaries, with reference to the empirical null distribution of the multiple tests, approximated through a large number of random permutations. Using genome-wide data from the Wellcome Trust Case Control Consortium, we use a sub-sampling approach with increasing density to estimate the nominal p-value needed to obtain family-wise significance of 5%. The results indicate that the significance level converges to about 1e-7 as the marker spacing becomes infinitely dense. We considered the concept of an effective number of independent tests and showed that, when used in a Bonferroni correction, this number varies with the overall significance level but is roughly constant in the region of interest. We compared several estimators of the effective number of tests and showed that, in the region of significance of interest, Patterson's eigenvalue-based estimator gives approximately the right family-wise error rate.

Michael Nothnagel 1, Amke Caliebe 1, Michael Krawczak 1; 1 Institute of Medical Informatics and Statistics, University Clinic Schleswig-Holstein, University of Kiel, Germany. Keywords: Association scans, Bayesian framework, posterior odds, genetic risk, multiplicative model

Whole-genome association scans have been suggested as a cost-efficient way to survey genetic variation and to map genetic disease factors. We used a Bayesian framework to investigate the posterior odds of a genuine association under multiplicative disease models. We demonstrate that the p-value alone is not a sufficient means to evaluate the findings in association studies, and we suggest that likelihood ratios should accompany p-values in association reports. We argue that, given the reported results of whole-genome scans, more associations should have been successfully replicated if the consistently made assumptions about considerable genetic risks were correct. We conclude that it is very likely that the vast majority of relative genetic risks are only of the order of 1.2 or lower.
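The arithmetic behind that argument fits in a few lines. A hypothetical illustration with our own numbers (prior odds of one true locus per 100,000 markers, and a unit-variance alternative centred at z = 4): even a p-value near 5e-7 can leave the posterior probability of a genuine association quite modest.

```python
from scipy import stats

z = 5.0                                    # observed association z-statistic
p_value = 2 * stats.norm.sf(z)             # two-sided p, about 5.7e-7

prior_odds = 1 / 100_000                   # assumed: 1 true locus per 100k tests
# Likelihood ratio for a hypothetical alternative with mean z of 4:
lr = stats.norm.pdf(z, loc=4.0) / stats.norm.pdf(z, loc=0.0)
posterior_odds = lr * prior_odds
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"p = {p_value:.1e}, LR = {lr:.0f}, P(association | data) = {posterior_prob:.2f}")
# Roughly: p = 5.7e-07, LR ~ 160000, posterior probability ~ 0.62.
```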
Clive Hoggart 1, Maria De Iorio 1, John Whittaker 2, David Balding 1; 1 Department of Epidemiology and Public Health, Imperial College London, UK; 2 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK. Keywords: Genome-wide association analyses, shrinkage priors, Lasso

Testing one SNP at a time does not fully realise the potential of genome-wide association studies to identify multiple causal variants of small effect, which is a plausible scenario for many complex diseases. Moreover, many simulation studies assume a single causal variant, so more complex realities are ignored. Analysing large numbers of variants simultaneously is now becoming feasible, thanks to developments in Bayesian stochastic search methods. We pose the problem of SNP selection as variable selection in a regression model. In contrast to single-SNP tests, this approach simultaneously models the effect of all SNPs. SNPs are selected by a Bayesian interpretation of the lasso (Tibshirani, 1996): the maximum a posteriori (MAP) estimate of the regression coefficients, which are given independent double exponential prior distributions. The double exponential distribution is an example of a shrinkage prior; MAP estimates under shrinkage priors can be exactly zero, so all SNPs with non-zero regression coefficients are selected. In addition to the commonly used double exponential (Laplace) prior, we also implement the normal exponential gamma prior distribution. We show that use of the Laplace prior improves SNP selection in comparison with single-SNP tests, and that the normal exponential gamma prior leads to a further improvement. Our method is fast and can handle very large numbers of SNPs: we demonstrate its performance using both simulated and real genome-wide data sets with 500 K SNPs, which can be analysed in 2 hours on a desktop workstation.
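The MAP estimate under independent Laplace priors is exactly what an L1-penalised logistic regression computes, so the selection step can be sketched with standard tools. This is our illustration on simulated data, not the authors' software, and the normal exponential gamma prior has no off-the-shelf equivalent here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Simulated genotypes (0/1/2 minor-allele counts) for 1000 subjects at 500
# SNPs; the first 5 SNPs have small effects on a binary phenotype.
n, p = 1000, 500
maf = rng.uniform(0.1, 0.5, size=p)
X = rng.binomial(2, maf, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:5] = 0.4
eta = (X - X.mean(axis=0)) @ beta
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# L1-penalised logistic regression = MAP under independent Laplace priors
# on the coefficients; it can set coefficients exactly to zero.
# C controls the prior scale (smaller C = stronger shrinkage).
fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
selected = np.flatnonzero(fit.coef_[0])
print("selected SNPs:", selected)
```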
Mickael Guedj 1,2, Jerome Wojcik 2, Gregory Nuel 1; 1 Laboratoire Statistique et Génome, Université d'Evry, Evry, France; 2 Serono Pharmaceutical Research Institute, Plan-les-Ouates, Switzerland. Keywords: Local replication, local score, association

In gene mapping, replication of initial findings has been put forward as the approach of choice for filtering false positives from true signals for underlying loci. In practice, however, such replications are observed too rarely. Besides statistical and technical factors (lack of power, multiple testing, stratification, quality control, etc.), inconsistent conclusions obtained from independent populations might result from real biological differences. In particular, the high degree of variation in the strength of LD among populations of different origins is a major challenge to the discovery of genes. Seeking local replications (defined as the presence of an association signal in the same genomic region across populations) instead of strict replications (same locus, same risk allele) may lead to more reliable results. Recently, a multi-marker approach based on the local score statistic has been proposed as a simple and efficient way to select candidate genomic regions at the first stage of genome-wide association studies. Here we propose an extension of this approach adapted to replicated association studies. Based on simulations, this method appears promising; in particular, it outperforms classical single-marker strategies in detecting modest-effect genes. Additionally, it constitutes, to our knowledge, the first framework dedicated to the detection of such local replications.

Juliet Chapman 1, Claudio Verzilli 1, John Whittaker 1; 1 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK. Keywords: FDR, association studies, Bayesian model selection

As genome-wide association studies become commonplace, there is debate as to how such studies might be analysed and what we might hope to gain from the data. It is clear that standard single-locus approaches are limited in that they do not adjust for the effects of other loci, and problematic since it is not obvious how to adjust for multiple comparisons. False discovery rates have been suggested, but it is unclear how well these will cope with highly correlated genetic data. We consider the validity of standard false discovery rates in large-scale association studies. We also show that a Bayesian procedure has advantages in detecting causal loci amongst a large number of dependent SNPs, and we investigate properties of a Bayesian FDR.

Peter Kraft 1; 1 Harvard School of Public Health, Boston, USA. Keywords: Gene-environment interaction, genome-wide association scans

Appropriately analyzed two-stage designs, where a subset of available subjects are genotyped on a genome-wide panel of markers at the first stage and then a much smaller subset of the most promising markers are genotyped on the remaining subjects, can have nearly as much power as a single-stage study where all subjects are genotyped on the genome-wide panel, yet can be much less expensive. Typically, the "most promising" markers are selected based on evidence for a marginal association between genotypes and disease. Subsequently, the few markers found to be associated with disease at the end of the second stage are interrogated for evidence of gene-environment interaction, mainly to understand their impact on disease etiology and public health. However, this approach may miss variants which have a sizeable effect restricted to one exposure stratum and therefore only a modest marginal effect. We have proposed to use information on the joint effects of genes and a discrete list of environmental exposures at the initial screening stage to select promising markers for the second stage [Kraft et al., Hum Hered, 2007]. This approach optimizes power to detect variants that have a sizeable marginal effect as well as variants that have a small marginal effect but a sizeable effect in a stratum defined by an environmental exposure. As an example, I discuss a proposed genome-wide association scan for Type II diabetes susceptibility variants based in several large nested case-control studies.
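A schematic version of the joint-screening idea on simulated data (ours; the test construction in Kraft et al. (2007) may differ in detail): compare the usual 1-df test of the genotype with a 2-df likelihood-ratio test of genotype plus genotype-by-exposure, for a variant whose effect is confined to the exposed stratum.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)

# A marker with an effect only in the exposed stratum: weak marginal signal,
# strong joint gene / gene-environment signal. Purely simulated data.
n = 4000
g = rng.binomial(2, 0.3, n).astype(float)    # genotype (allele count)
e = rng.binomial(1, 0.5, n).astype(float)    # binary exposure
logit = -1.0 + 0.5 * g * e
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def loglik(cols):
    X = sm.add_constant(np.column_stack(cols))
    return sm.Logit(y, X).fit(disp=0).llf

ll_e = loglik([e])                   # baseline: exposure only
ll_marginal = loglik([e, g])         # adds G: 1-df marginal test
ll_joint = loglik([e, g, g * e])     # adds G and GxE: 2-df joint test

p_marginal = stats.chi2.sf(2 * (ll_marginal - ll_e), df=1)
p_joint = stats.chi2.sf(2 * (ll_joint - ll_e), df=2)
print(f"marginal p = {p_marginal:.2g}, joint 2-df p = {p_joint:.2g}")
```

In screening, markers would be ranked by the joint test, which keeps power for purely marginal effects while rescuing exposure-restricted ones.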
Beate Glaser 1, Peter Holmans 1; 1 Biostatistics and Bioinformatics Unit, Cardiff University, School of Medicine, Heath Park, Cardiff, UK. Keywords: Combined case-control and trios analysis, power, false-positive rate, simulation, association studies

The statistical power of genetic association studies can be enhanced by combining the analysis of case-control samples with that of parent-offspring trio samples. Various combined analysis techniques have been developed recently; as yet, there have been no comparisons of their power. This work was performed with the aim of identifying the most powerful method among available combined techniques, including test statistics developed by Kazeem and Farrall (2005), Nagelkerke and colleagues (2004) and Dudbridge (2006), as well as a simple combination of χ2-statistics from single samples. Simulation studies were performed to investigate their power under different additive, multiplicative, dominant and recessive disease models. False-positive rates were determined by studying the type I error rates under null models, including models with unequal allele frequencies between the single case-control and trio samples. We identified three techniques with equivalent power and false-positive rates, which included modifications of the three main approaches: 1) the unmodified combined odds ratio estimate by Kazeem & Farrall (2005), 2) a modified approach to the combined risk ratio estimate by Nagelkerke & colleagues (2004), and 3) a modified technique for the combined risk ratio estimate by Dudbridge (2006). Our work highlights the importance of studies investigating the test performance criteria of novel methods, as they will help users to select the optimal approach within a range of available analysis techniques.

David Almorza 1, M.V. Kandus 2, Juan Carlos Salerno 2, Rafael Boggio 3; 1 Facultad de Ciencias del Trabajo, University of Cádiz, Spain; 2 Instituto de Genética IGEAF, Buenos Aires, Argentina; 3 Universidad Nacional de La Plata, Buenos Aires, Argentina. Keywords: Principal component analysis, maize, ear weight, inbred lines

The objective of this work was to evaluate the relationships among different traits of the ear of maize inbred lines and to group genotypes according to their performance. Ten inbred lines developed at IGEAF (INTA Castelar) and five public inbred lines as checks were used. A field trial was carried out in Castelar, Buenos Aires (34° 36' S, 58° 39' W) using a completely randomized design with three replications. At harvest, individual weight (P.E.), diameter (D.E.), row number (N.H.) and length (L.E.) of the ear were assessed. A principal component analysis, PCA (Infostat 2005), was used, and the variability of the data was depicted with a biplot. Principal components 1 and 2 (CP1 and CP2) explained 90% of the data variability. CP1 was correlated with P.E., L.E. and D.E., while CP2 was correlated with N.H. We found that individual weight (P.E.) was more strongly correlated with ear diameter (D.E.) than with length (L.E.). Five groups of inbred lines were distinguished: with high P.E. and mean N.H. (04-70, 04-73, 04-101 and MO17); with high P.E. but lower N.H. (04-61 and B14); with mean P.E. and N.H. (B73, 04-123 and 04-96); with high N.H. but lower P.E. (LP109, 04-8, 04-91 and 04-76); and with low P.E. and low N.H. (LP521 and 04-104). The use of PCA showed which variables contributed most to ear weight and how they are correlated with one another. Moreover, the different groups found with this analysis allow the evaluation of inbred lines by several traits simultaneously.

Sven Knüppel 1, Anja Bauerfeind 1, Klaus Rohde 1; 1 Department of Bioinformatics, MDC Berlin, Germany. Keywords: Haplotypes, association studies, case-control, nuclear families

The era of gene chip technology provides a plethora of phase-unknown SNP genotypes with which to find significant association to some genetic trait. To circumvent the possibly low information content of a single SNP, one groups successive SNPs and estimates haplotypes. Haplotype estimation, however, may reveal ambiguous haplotype pairs and bias the application of statistical methods. Zaykin et al. (Hum Hered, 53:79-91, 2002) proposed the construction of a design matrix to take this ambiguity into account. Here we present a set of functions written for the statistical package R, which carries out haplotype estimation on the basis of the EM algorithm for individuals (case-control) or nuclear families. The construction of a design matrix on the basis of estimated haplotypes or haplotype pairs allows the application of standard methods for association studies (linear and logistic regression), as well as statistical methods such as haplotype-sharing statistics and the TDT test. Applications of these methods to genome-wide association screens will be demonstrated.
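The Zaykin-style design matrix can be illustrated compactly. The authors' implementation is in R; the sketch below does the same thing in Python for the simplest case, two biallelic SNPs in unrelated individuals: an EM estimate of haplotype frequencies, then one column of expected haplotype dosages per haplotype, ready for standard regression.

```python
import numpy as np

rng = np.random.default_rng(6)

HAPS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # two-SNP haplotypes

def hap_pairs(g1, g2):
    """All haplotype pairs consistent with the two-locus genotype (g1, g2),
    each genotype coded as minor-allele count 0/1/2 (ordered pairs)."""
    return [(h1, h2) for h1 in HAPS for h2 in HAPS
            if h1[0] + h2[0] == g1 and h1[1] + h2[1] == g2]

# Simulated unphased genotypes for 500 individuals.
true_f = np.array([0.4, 0.3, 0.2, 0.1])
h = rng.choice(4, size=(500, 2), p=true_f)
G = np.array([[HAPS[a][0] + HAPS[b][0], HAPS[a][1] + HAPS[b][1]] for a, b in h])

f = np.full(4, 0.25)                      # EM for haplotype frequencies
for _ in range(100):
    counts = np.zeros(4)
    for g1, g2 in G:
        pairs = hap_pairs(g1, g2)
        w = np.array([f[HAPS.index(a)] * f[HAPS.index(b)] for a, b in pairs])
        w /= w.sum()                      # E-step: posterior over phases
        for (a, b), wi in zip(pairs, w):
            counts[HAPS.index(a)] += wi   # M-step: expected haplotype counts
            counts[HAPS.index(b)] += wi
    f = counts / counts.sum()

def dosages(g1, g2):
    """Expected haplotype dosages: the design-matrix row for one subject."""
    pairs = hap_pairs(g1, g2)
    w = np.array([f[HAPS.index(a)] * f[HAPS.index(b)] for a, b in pairs])
    w /= w.sum()
    d = np.zeros(4)
    for (a, b), wi in zip(pairs, w):
        d[HAPS.index(a)] += wi
        d[HAPS.index(b)] += wi
    return d

X = np.array([dosages(g1, g2) for g1, g2 in G])
print("estimated haplotype freqs:", np.round(f, 3))
```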
Manuela Zucknick 1, Chris Holmes 2, Sylvia Richardson 1; 1 Department of Epidemiology and Public Health, Imperial College London, UK; 2 Department of Statistics, Oxford Center for Gene Function, University of Oxford, UK. Keywords: Bayesian, variable selection, MCMC, large p, small n, structured dependence

In large-scale genomic applications, vast numbers of markers or genes are scanned to find a few candidates which are linked to a particular phenotype. Statistically, this is a variable selection problem in the "large p, small n" situation, where many more variables than samples are available. An additional feature is the complex dependence structure often observed among the markers/genes, due to linkage disequilibrium or their joint involvement in biological processes. Bayesian variable selection methods using indicator variables are well suited to the problem. Binary phenotypes like disease status are common, and both Bayesian probit and logistic regression can be applied in this context. We argue that logistic regression models are both easier to tune and easier to interpret than probit models, and we implement the approach of Holmes & Held (2006). Because the model space is vast, MCMC methods are used as stochastic search algorithms with the aim of quickly finding regions of high posterior probability. In a trade-off between fast-updating but slow-moving single-gene Metropolis-Hastings samplers and computationally expensive full Gibbs sampling, we propose to employ the dependence structure among the genes/markers to help decide which variables to update together. Parallel tempering methods are also used to aid bold moves and to help avoid getting trapped in local optima. Mixing and convergence of the resulting Markov chains are evaluated and compared to standard samplers in both a simulation study and an application to a gene expression data set.

Reference: Holmes, C. C. & Held, L. (2006) Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis 1, 145-168.
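A stripped-down version of stochastic search over indicator variables (ours, for a continuous phenotype; single-variable flips and a BIC-type score standing in for the marginal likelihood, whereas the abstract proposes grouped updates, logistic likelihoods and parallel tempering):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated expression data: n=50 samples, p=200 correlated genes, 3 of
# which drive a continuous phenotype ("large p, small n").
n, p = 50, 200
X = rng.normal(size=(n, p)) + 0.5 * rng.normal(size=(n, 1))  # shared factor
y = X[:, 0] - X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=n)

def score(gamma):
    """BIC-type model score for the subset gamma: a cheap stand-in for the
    marginal likelihood a full Bayesian treatment would integrate."""
    idx = np.flatnonzero(gamma)
    Xg = np.column_stack([np.ones(n), X[:, idx]])
    resid = y - Xg @ np.linalg.lstsq(Xg, y, rcond=None)[0]
    rss = resid @ resid
    return -0.5 * n * np.log(rss / n) - 0.5 * len(idx) * np.log(n)

gamma = np.zeros(p, dtype=int)            # start from the empty model
current = score(gamma)
freq = np.zeros(p)                        # posterior inclusion frequencies
for it in range(20_000):                  # Metropolis-Hastings over gamma
    j = rng.integers(p)                   # propose flipping one indicator
    gamma[j] ^= 1
    proposed = score(gamma)
    if np.log(rng.random()) < proposed - current:
        current = proposed                # accept
    else:
        gamma[j] ^= 1                     # reject: flip back
    freq += gamma

print("top genes by inclusion frequency:", np.argsort(-freq)[:5])
```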
Dawn Teare 1; 1 MMGE, University of Sheffield, UK. Keywords: CNP, family-based analysis, MCMC

Evidence is accumulating that segmental copy number polymorphisms (CNPs) may represent a significant portion of human genetic variation. These highly polymorphic systems require handling as phenotypes rather than as co-dominant markers, placing new demands on family-based analyses. We present an integrated approach to meet these challenges in the form of a graphical model, where the underlying discrete CNP phenotype is inferred within the analysis from the (single or replicate) quantitative measure, whilst assuming an allele-based system segregating through the pedigree. [source]

Technological progresses in monoclonal antibody production systems. BIOTECHNOLOGY PROGRESS, Issue 2 2010. Maria Elisa Rodrigues. Abstract: Monoclonal antibodies (mAbs) have become vitally important to modern medicine and are currently one of the major biopharmaceutical products in development. However, the high clinical dose requirements of mAbs demand greater biomanufacturing capacity, leading to the development of new technologies for their large-scale production, with mammalian cell culture dominating the scenario. Although some companies have tried to meet these demands by creating bioreactors of increased capacity, optimizing cell culture productivity in normal bioreactors appears to be a better strategy. This review describes the main technological advances made with this intent, presenting the advantages and limitations of each production system, as well as suggestions for improvements. New and upgraded bioreactors have emerged both for adherent and suspension cell culture, with disposable reactors attracting increased interest in recent years. Furthermore, the strategies and technologies used to control culture parameters are in constant evolution, aiming at on-line multiparameter monitoring and now considering parameters not previously seen as relevant for process optimization. All of this progress has as its primary goal the development of highly productive and economical mAb manufacturing processes that will allow the rapid introduction of the product into the biopharmaceutical market at more accessible prices. © 2010 American Institute of Chemical Engineers Biotechnol. Prog., 2010 [source]

What if it is the other way around? Early introduction of peanut, fish seems to be better than avoidance. ACTA PAEDIATRICA, Issue 7 2009. Abstract: For many years, the advice to prevent food allergy was to postpone the introduction of allergens like egg, fish and peanut. However, elimination of food allergens during pregnancy and infancy failed to prevent food allergy. Instead, several studies indicate that early introduction of foods like fish and peanuts may be beneficial. The most compelling illustration of this has been presented for peanuts: the prevalence of peanut allergy is lower in children in Israel than in the UK, despite the introduction of peanut during infancy in Israel. Other studies have reported that early introduction of fish reduced the risk of allergic sensitization and of allergic diseases like eczema. Conclusion: Early introduction rather than avoidance may be a better strategy for the prevention of food allergy. The mechanism may be that early introduction of food allergens during infancy induces tolerance, thereby preventing the development of allergy. [source]

Carotenoid accumulation strategies for becoming a colourful House Finch: analyses of plasma and liver pigments in wild moulting birds. FUNCTIONAL ECOLOGY, Issue 4 2006. K. J. McGraw. Summary: 1. Male House Finches (Carpodacus mexicanus) colour their sexually selected plumage with carotenoid pigments, and there has been much interest in the factors that affect their ability to become bright red rather than drab yellow. 2. There is good support for the notions that health, nutritional condition and total carotenoid intake influence colour expression, but there are also suggestions that acquiring particular types of carotenoids from the diet may be important for developing red plumage. 3. We used high-performance liquid chromatography (HPLC) to analyse the types and amounts of endogenous (in plasma and liver) and integumentary (in newly grown feathers) carotenoids in a wild, native population of moulting male and female House Finches from the south-western United States, to determine the carotenoid-accumulation strategies for becoming optimally colourful. 4. Four plant carotenoids (lutein, zeaxanthin, β-cryptoxanthin and β-carotene) were detected in plasma and liver. However, as was found previously, 11 carotenoids were observed in colourful plumage, with xanthophylls (e.g. lutein, dehydrolutein) predominant in yellow feathers and ketocarotenoids (e.g. adonirubin, 3-hydroxy-echinenone) in red feathers. This indicates endogenous modification of ingested carotenoids. 5. Birds that accumulated more of one type of carotenoid in plasma and liver did not necessarily accumulate more of all other types, suggesting that individuals are not employing a simple "more is better" strategy for coloration.
Instead, when forward stepwise regression was used to examine the ability of individual types of carotenoids in plasma and liver to explain variation in red plumage pigments and plumage redness, we found that the lone variable remaining in all models was β-cryptoxanthin concentration. 6. This supports the idea that, unlike some other songbirds (e.g. yellow Carduelis finches), there is a specialized biochemical strategy that male House Finches follow to become red and most sexually attractive: to accumulate as much β-cryptoxanthin in the body as possible. β-Cryptoxanthin is a less common dietary carotenoid than the typical xanthophylls and carotenes in grains and fruits, and may be limited enough in the diet that, to become colourful, House Finches might adopt selective foraging strategies for the most β-cryptoxanthin-rich foods. [source]

Role of mineral nutrition in minimizing cadmium accumulation by plants. JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 6 2010. Nadeem Sarwar. Abstract: Cadmium (Cd) is a highly toxic heavy metal for both plants and animals. The presence of Cd in agricultural soils is of great concern regarding its entry into the food chain. Cadmium enters the soil-plant environment mainly through anthropogenic activities. Compounds of Cd are more soluble than those of other heavy metals, so Cd is more available and readily taken up by plants, accumulating in different edible plant parts through which it enters the food chain. A number of approaches are being used to minimize the entry of Cd into the food chain. Proper plant nutrition is one good strategy to alleviate the damaging effects of Cd on plants and to avoid its entry into the food chain. Plant nutrients play a very important role in developing plant tolerance to Cd toxicity and thus low Cd accumulation in different plant parts. In this report, the roles of some macronutrients (nitrogen, phosphorus, sulfur and calcium), micronutrients (zinc, iron and manganese) and silicon (a beneficial nutrient) in decreasing Cd uptake and accumulation in crop plants are discussed in detail. Copyright © 2010 Society of Chemical Industry [source]

Overexpression of profilin reduces the migration of invasive breast cancer cells. CYTOSKELETON, Issue 2 2004. Partha Roy. Abstract: The exact role profilin plays in cell migration is not clear. In this study, we have evaluated the effect of overexpression of profilin on the migration of breast cancer cells. Overexpression was achieved by stably expressing GFP-profilin in BT474 cells. It was observed that even a moderate level of overexpression of profilin significantly impaired the ability of BT474 cells to spread on a fibronectin-coated substrate and to migrate in response to EGF. GFP-profilin-expressing cells also showed increased resistance to detachment in response to trypsin, and increased tyrosine phosphorylation of focal adhesion kinase (FAK) and paxillin, compared with the parental and GFP-expressing (control) cell lines. These results suggest that perturbation of profilin levels may offer a good strategy for controlling the metastatic potential of breast cancer cells. Cell Motil. Cytoskeleton 57:84-95, 2004. © 2004 Wiley-Liss, Inc.
[source] Understanding the HER family in breast cancer: interaction with ligands, dimerization and treatments. HISTOPATHOLOGY, Issue 5 2010. Fabrício F T Barros. Barros F F T, Powe D G, Ellis I O & Green A R (2010) Histopathology 56, 560-572. Breast carcinoma is the most frequent type of cancer affecting women. Among the recently described molecular and phenotypic classes of breast cancer, human epidermal growth factor receptor 2 (HER2)-positive tumours are associated with a poor prognosis. HER2 plays an important role in cancer progression and is targeted to provide predictive and prognostic information. Moreover, HER2 is related to cancer resistance against a variety of therapies; however, trastuzumab (Herceptin) has proved successful in the treatment of this subgroup. Nevertheless, resistance to this drug may be acquired by patients after a period of treatment, which indicates that other molecular mechanisms might influence the success of this therapy. Dimerization between members of the HER family may contribute to resistance against treatments, because different combinations trigger different downstream pathways. Dimerization is promoted by ligands, which are expressed as transmembrane precursor protein molecules and have a conserved epidermal growth factor-like domain. In response to trastuzumab resistance, other drugs are being developed that interact with different domains of the HER2 protein. Applying new drugs simultaneously with trastuzumab might be a good strategy, since they act on different domains of HER2. The study of the interactions between receptors and ligands will characterize their specific signalling pathways and help determine which treatment strategy to adopt. [source]

Strategic decision-making in healthcare organizations: it is time to get serious. INTERNATIONAL JOURNAL OF HEALTH PLANNING AND MANAGEMENT, Issue 3 2006. David W. Young. Abstract: New and continuing environmental demands and competitive forces require healthcare organizations to be increasingly careful in thinking about their strategies. They must do so in a highly unusual (multi-actor) marketplace where a variety of system interdependencies complicate decision-making. A good strategy requires an attempt to understand the real, as distinct from the perceived, environment, and is characterized by explicit tradeoffs along three dimensions: service or program variety, patient needs, and patient access. The quality of these tradeoffs can be assessed in terms of whether the strategy is (a) attuned to critical success factors in the organization's environment, (b) highly focused, (c) linked to the organization's capabilities, and (d) accompanied by an activity set that is difficult for competitors to imitate. An organization also must be capable of adapting appropriately to changes in its environment. Thus, even the best strategy must be reviewed constantly if it is to remain viable. A strategy's sustainability can be adversely affected by increased buyer or supplier power, lowered barriers to entry, growing rivalry, the threat of substitutes, and increased slack in resource usage. By thinking more creatively in the future than they have in the past, healthcare organizations can make tradeoffs and choose a focused strategic position. They then can design an activity set that is appropriate for that position and that will assist them to achieve both financial viability and superior programmatic performance.
A well-designed activity set also will assist them to sustain their performance in the face of changing environmental demands and competitive forces. Copyright © 2006 John Wiley & Sons, Ltd. [source]

The haemodynamic response to propranolol in cirrhosis with arterial hypertension: a comparative analysis with normotensive cirrhotic patients. ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 1 2010. P. Sharma. Aliment Pharmacol Ther 2010; 32: 105-112. Summary. Background: Cirrhosis with arterial hypertension is not uncommon. The haemodynamic alterations in these patients, and the effects of beta-blockers on the hepatic venous pressure gradient (HVPG) and systemic haemodynamics, have not been evaluated. Aims: To compare the systemic haemodynamic alterations in hypertensive and normotensive cirrhotics, and to investigate the effects of propranolol on these parameters. Methods: A retrospective analysis was done of consecutive hypertensive cirrhotic patients (n = 33) who underwent haemodynamic assessment and paired HVPG measurement. Normotensive cirrhotics (n = 50) served as controls. Results: Hypertensive patients had a significantly higher heart rate and higher systemic (SVRI) and pulmonary vascular resistance. There was a significant reduction in mean arterial pressure (MAP) in the hypertensive cirrhotic group, from 112 (107-130) mmHg to 95 (77-114) mmHg (P < 0.01), but no change in the normotensives. SVRI remained the same in the hypertensive cirrhotic group, but increased in the normotensives. There was no correlation between MAP reduction and HVPG reduction. Conclusions: The frequency of HVPG response to propranolol treatment in hypertensive cirrhotics is similar to that in normotensive cirrhotics. Propranolol treatment reduces MAP significantly in hypertensive patients with cirrhosis. Treatment with a nonselective beta-blocker is a good strategy for hypertensive cirrhotic patients. [source]

Unemployment May Be Lower if Unions Bargain over Wages and Employment. LABOUR, Issue 1 2002. Hartmut Egger. This paper addresses the question of the circumstances under which unemployment can be lower if unions bargain over wages and employment in a general equilibrium framework. It turns out that the unemployment rate may depend negatively on the wage rate if the unemployment compensation scheme contains a constant real term in addition to the replacement-ratio component. Compared with a pure replacement-ratio scheme, this is the more plausible formalization of real-world compensation systems, at least for European countries. Besides the theoretical analysis, the paper also derives policy implications by identifying the parameters relevant to the decision on whether weakening unions would be a good strategy for an economy to overcome its unemployment problem. [source]

Student affairs professionals and the media. NEW DIRECTIONS FOR STUDENT SERVICES, Issue 100 2002. Ted Montgomery. A good strategy for working with external media is essential to the success of student affairs professionals. Examples of practices that lead to effective engagement with the various media are examined.
[source] Towards a consistent numerical compressible non-hydrostatic model using generalized Hamiltonian tools. THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 635 2008. Almut Gassmann. Abstract: A set of compressible non-hydrostatic equations for a turbulence-averaged model atmosphere comprising dry air and water in three phases plus precipitating fluxes is presented, in which common approximations are introduced in such a way that no inconsistencies occur in the associated budget equations for energy, mass and Ertel's potential vorticity. These conservation properties are a prerequisite for any climate simulation or NWP model. It is shown that a Poisson bracket form can be found for the ideal-fluid part of the full-physics equation set, while turbulent friction and diabatic heating are added as separate 'dissipative' terms. This Poisson bracket is represented as the sum of a two-fold antisymmetric triple bracket (a Nambu bracket represented as a helicity bracket) plus two antisymmetric brackets (so-called mass and thermodynamic brackets of the Poisson type). The advantage of this approach is that the given conservation properties and the structure of the brackets provide a good strategy for the construction of their discrete analogues. It is shown how discrete brackets are constructed to retain their antisymmetric properties throughout the spatial discretisation process, and a method is demonstrated by which the time scheme can also be incorporated in this philosophy. Copyright © 2008 Royal Meteorological Society [source]
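Schematically, and in our own notation rather than the paper's, the structure described in that abstract can be written as follows; the key point is that antisymmetry alone already delivers the conservation law, which is why preserving it in the discrete brackets matters.

```latex
% For any functional F of the model state, with H the total energy and h a
% second conserved (helicity-type) functional entering the Nambu bracket:
\frac{\mathrm{d}F}{\mathrm{d}t}
  = \underbrace{\{F, h, H\}}_{\text{helicity (Nambu) bracket}}
  + \underbrace{\{F, H\}_{\mathrm{mass}}}_{\text{Poisson type}}
  + \underbrace{\{F, H\}_{\mathrm{thermo}}}_{\text{Poisson type}}
  + \underbrace{\mathcal{D}[F]}_{\text{friction, heating}}
% Each bracket is antisymmetric in its arguments, so setting F = H gives
%   \frac{\mathrm{d}H}{\mathrm{d}t}\Big|_{\mathrm{ideal}}
%     = \{H, h, H\} + \{H, H\}_{\mathrm{mass}} + \{H, H\}_{\mathrm{thermo}} = 0,
% i.e. energy is conserved by the ideal part; discrete brackets that retain
% the antisymmetry inherit the same property automatically.
```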
A Synthetic Mechano Growth Factor E Peptide Enhances Myogenic Precursor Cell Transplantation Success. AMERICAN JOURNAL OF TRANSPLANTATION, Issue 10 2007. P. Mills. Myogenic precursor cell (MPC) transplantation is a good strategy to introduce dystrophin expression in the muscles of Duchenne muscular dystrophy (DMD) patients. Insulin-like growth factor 1 (IGF-1) promotes MPC activities such as survival, proliferation, migration and differentiation, which could enhance the success of their transplantation. Alternative splicing of the IGF-1 mRNA produces different muscle isoforms. The mechano growth factor (MGF) is an isoform expressed especially after mechanical stress. A 24-amino-acid peptide corresponding to the C-terminal part of the MGF E domain (MGF-Ct24E peptide) was synthesized. This peptide had been shown to enhance the proliferation and delay the terminal differentiation of C2C12 myoblasts. The present study showed that the MGF-Ct24E peptide improved human MPC transplantation by modulating MPC proliferation and differentiation. Indeed, intramuscular or systemic delivery of this synthetic peptide significantly promoted engraftment of human MPCs in mice. In vitro experiments demonstrated that the MGF-Ct24E peptide enhanced MPC proliferation by a mechanism other than binding to the IGF-1 receptor. Moreover, the MGF-Ct24E peptide delayed human MPC differentiation while having no effect on survival. These combined effects are probably responsible for the enhanced transplantation success. Thus, the MGF-Ct24E peptide is an interesting agent for increasing MPC transplantation success in DMD patients. [source]

Single-Molecule Behavior of Dendritic Poly(ethylene glycol) Structures towards Lithium Ions. CHEMISTRY - A EUROPEAN JOURNAL, Issue 40 2009. Daihua Tang. PEG-ged out! Dendritic poly(ethylene glycol) (PEG) D exhibits excellent single-molecule behavior towards lithium ions, and has been characterized by MALDI-TOF-MS and TOF-ESI-MS. Since commercially available linear PEG structures are not monocomponent, constructing dendritic structures may become a good strategy to achieve higher-molecular-weight PEG moieties. [source]