Mixing
Selected Abstracts

INCREASING WATER SUPPLY BY MIXING OF FRESH AND SALINE GROUND WATERS
JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 5 2003. Zekai Sen
ABSTRACT: The quality of ground water in any aquifer takes its final form due to natural mixture of waters, which may originate from different sources. Water quality varies from one aquifer to another and even within the same aquifer itself. Different ground water quality is obtained from wells and is mixed in a common reservoir prior to any consumption. This artificial mixing enables an increase in available ground water of a desired quality for agricultural or residential purposes. The question remains as to what proportions of water from different wells should be mixed together to achieve a desired water quality for this artificial mixture. Two sets of laboratory experiments were carried out, namely, the addition of saline water to a fixed volume of fresh water. After each addition, the mixture volume and the electric conductivity value of the artificially mixed water were recorded. The experiments were carried out under the same laboratory temperature of 20°C. A standard curve was developed first experimentally and then confirmed theoretically. This curve is useful in determining either the volume or discharge ratio from two wells to achieve a predetermined electrical conductivity value of the artificial mixture. The application of the curve is given for two wells within the Quaternary deposits in the western part of the Kingdom of Saudi Arabia. [source]

Hybrid Simulation of Miscible Mixing with Viscous Fingering
COMPUTER GRAPHICS FORUM, Issue 2 2010. Seung-Ho Shin
Abstract: By modeling mass transfer phenomena, we simulate solids and liquids dissolving or changing to other substances. We also deal with the very small-scale phenomena that occur when a fluid spreads out at the interface of another fluid. We model the pressure at the interfaces between fluids with Darcy's Law and represent the viscous fingering phenomenon in which a fluid interface spreads out with a fractal-like shape. We use hybrid grid-based simulation and smoothed particle hydrodynamics (SPH) to simulate intermolecular diffusion and attraction using particles at a computable scale. We have produced animations showing fluids mixing and objects dissolving. [source]

Bimolecular Crystals of Fullerenes in Conjugated Polymers and the Implications of Molecular Mixing for Solar Cells
ADVANCED FUNCTIONAL MATERIALS, Issue 8 2009. A. C. Mayer
The performance of polymer:fullerene bulk heterojunction solar cells is heavily influenced by the interpenetrating nanostructure formed by the two semiconductors because the size of the phases, the nature of the interface, and molecular packing affect exciton dissociation, recombination, and charge transport. Here, X-ray diffraction is used to demonstrate the formation of stable, well-ordered bimolecular crystals of fullerene intercalated between the side-chains of the semiconducting polymer poly(2,5-bis(3-tetradecylthiophen-2-yl)thieno[3,2-b]thiophene). It is shown that fullerene intercalation is general and is likely to occur in blends with both amorphous and semicrystalline polymers when there is enough free volume between the side-chains to accommodate the fullerene molecule.
These findings offer explanations for why luminescence is completely quenched in crystals much larger than exciton diffusion lengths, how the hole mobility of poly(2-methoxy-5-(3′,7′-dimethyloctyloxy)-p-phenylene vinylene) increases by over 2 orders of magnitude when blended with fullerene derivatives, and why large-scale phase separation occurs in some polymer:fullerene blend ratios while thermodynamically stable mixing on the molecular scale occurs for others. Furthermore, it is shown that intercalation of fullerenes between side chains mostly determines the optimum polymer:fullerene blending ratios. These discoveries suggest a method of intentionally designing bimolecular crystals and tuning their properties to create novel materials for photovoltaic and other applications. [source]

The Impact of Interfacial Mixing on Förster Transfer at Conjugated Polymer Heterojunctions
ADVANCED FUNCTIONAL MATERIALS, Issue 1 2009. Anthony M. Higgins
Abstract: Neutron reflectivity and photoluminescence measurements are reported on bilayers of polyfluorene-based conjugated polymers. By using a novel thermal processing procedure it is possible to control the width of the interface between poly(9,9-dioctylfluorene) (F8) and poly(9,9-dioctylfluorene-alt-benzothiadiazole) (F8BT), and measure the impact of interfacial roughness on the resonant energy transfer of excitons at the interface (Förster transfer). It is found that increasing the root mean square (rms) roughness of the F8/F8BT interface over the range of ~1 nm to ~5 nm leads to a greatly enhanced Förster transfer from F8 to F8BT molecules. By comparing photoluminescence measurements with simple calculations it is concluded that the level of enhancement of the F8BT peak at rough interfaces can only be adequately explained if mixing of F8 and F8BT at a molecular level dominates over the interfacial roughness due to thermally excited capillary waves. [source]

Strontium Isotopic Identification of Water-Rock Interaction and Ground Water Mixing
GROUND WATER, Issue 3 2004. Carol D. Frost
87Sr/86Sr ratios of ground waters in the Bighorn and Laramie basins' carbonate and carbonate-cemented aquifer systems, Wyoming, United States, reflect the distinctive strontium isotope signatures of the minerals in their respective aquifers. Well water samples from the Madison Aquifer (Bighorn Basin) have strontium isotopic ratios that match their carbonate host rocks. Casper Aquifer ground waters (Laramie Basin) have strontium isotopic ratios that differ from the bulk host rock; however, stepwise leaching of Casper Sandstone indicates that most of the strontium in Casper Aquifer ground waters is acquired from preferential dissolution of carbonate cement. Strontium isotope data from both Bighorn and Laramie basins, along with dye tracing experiments in the Bighorn Basin and tritium data from the Laramie Basin, suggest that waters in carbonate or carbonate-cemented aquifers acquire their strontium isotope composition very quickly, on the order of decades. Strontium isotopes were also used successfully to verify previously identified mixed Redbeds-Casper ground waters in the Laramie Basin. The strontium isotopic compositions of ground waters near Precambrian outcrops also suggest previously unrecognized mixing between Casper and Precambrian aquifers. These results demonstrate the utility of strontium isotopic ratio data in identifying ground water sources and aquifer interactions. [source]
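Both ground-water abstracts above lean on the same two-end-member mass-balance idea: a conservative property of a blend is the flow-weighted average of the end members. The short sketch below illustrates that relation for the electrical-conductivity case; the function names and the numerical values are ours and purely illustrative, and this is not the authors' calibrated standard curve.

def mixed_conductivity(q_fresh, ec_fresh, q_saline, ec_saline):
    """EC of a two-well blend under ideal conservative (flow-weighted) mixing."""
    return (q_fresh * ec_fresh + q_saline * ec_saline) / (q_fresh + q_saline)

def saline_fraction_for_target(ec_fresh, ec_saline, ec_target):
    """Fraction of the blended flow that must come from the saline well to reach a target EC."""
    if not min(ec_fresh, ec_saline) <= ec_target <= max(ec_fresh, ec_saline):
        raise ValueError("target EC must lie between the two end-member values")
    return (ec_target - ec_fresh) / (ec_saline - ec_fresh)

# Hypothetical values: fresh well at 0.8 dS/m, saline well at 6.0 dS/m, target 2.0 dS/m
fraction = saline_fraction_for_target(0.8, 6.0, 2.0)   # about 0.23 of the blended flow

Real mixtures depart somewhat from this straight-line behaviour because conductivity is not perfectly conservative at high ionic strength, which is presumably why the standard curve in the first abstract was established experimentally as well as theoretically.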
In situ Mixing of Organic Matter Decreases Hydraulic Conductivity of Denitrification Walls in Sand Aquifers
GROUND WATER MONITORING & REMEDIATION, Issue 1 2008. Gregory F. Barkle
In a previous study, a denitrification wall was constructed in a sand aquifer using sawdust as the carbon substrate. Ground water bypassed around this sawdust wall due to reduced hydraulic conductivity. We investigated potential reasons for this by testing two new walls and conducting laboratory studies. The first wall was constructed by mixing aquifer material in situ without substrate addition to investigate the effects of the construction technique (mixed wall). A second, biochip wall, was constructed using coarse wood chips to determine the effect of size of the particles in the amendment on hydraulic conductivity. The aquifer hydraulic conductivity was 35.4 m/d, while in the mixed wall it was 2.8 m/d and in the biochip wall 3.4 m/d. This indicated that the mixing of the aquifer sands below the water table allowed the particles to re-sort themselves into a matrix with a significantly lower hydraulic conductivity than the process that originally formed the aquifer. The addition of a coarser substrate in the biochip wall significantly increased total porosity and decreased bulk density, but hydraulic conductivity remained low compared to the aquifer. Laboratory cores of aquifer sand mixed under dry and wet conditions mimicked the reduction in hydraulic conductivity observed in the field within the mixed wall. The addition of sawdust to the laboratory cores resulted in a significantly higher hydraulic conductivity when mixed dry compared to cores mixed wet. This reduction in the hydraulic conductivity of the sand/sawdust cores mixed under saturated conditions repeated what occurred in the field in the original sawdust wall. This indicated that laboratory investigations can be a useful tool to highlight potential reductions in field hydraulic conductivities that may occur when differing materials are mixed under field conditions. [source]

Influence of Wet Mechanical Mixing on Microstructure and Vickers Hardness of Nanocrystalline Ceramic–Metal Composites
INTERNATIONAL JOURNAL OF APPLIED CERAMIC TECHNOLOGY, Issue 5 2008. Tatsuo Kumagai
Nanocrystalline (nc) ceramic–metal composite bulk samples have been fabricated by consolidation of a mixture of attrition-milled (AM) amorphous base ceramic ((ZrO2–3 mol% Y2O3)–20 mol% Al2O3) and AM amorphous base metallic (Ti–48 mol% Al) powders using a pulse-current pressure sintering system. Microstructural observations revealed that the ceramic and metallic colonies appear blocky in morphology in the composite bulk samples, and both the ceramic and the metallic colonies consist of a large number of equiaxed fine grains with sizes of 78–82 and 81–86 nm, respectively. Mechanical mixing treatments by wet ball milling in ethanol before the consolidation process are effective for refinement of the ceramic and metallic colonies. In all the obtained composite bulk samples, the ceramic colonies consist of the dominant phase of tetragonal (t) ZrO2 solid solution (ss) together with the minor phases of monoclinic (m) ZrO2ss and α-Al2O3. On the other hand, the dominant phase in the metallic colonies changes from Ti3Al (α2) to Tiss (α) with an increase in the t-ZrO2ss volume fraction by abrasion of 3 mol% yttria-stabilized tetragonal polycrystalline zirconia balls during wet mechanical mixing treatments.
Such a phase transformation from α2 to α is considered to be due to the decrease in the aluminum content in the metallic colonies by combination of aluminum with oxygen (i.e., the formation of α-Al2O3), which is probably taken from ethanol (C2H5OH) into the powders during wet mechanical mixing treatments. The obtained nc composite bulk samples show good Vickers hardness values, which are considerably higher than those estimated from the rule of mixture. [source]

Energy metabolism in young pigs as affected by establishment of new groups prior to transport
JOURNAL OF ANIMAL PHYSIOLOGY AND NUTRITION, Issue 5-6 2002. M. J. W. HEETKAMP
Energy metabolism was studied in 9-week-old pigs as affected by mixing just before transport. In each of three trials, two groups of 20 pigs (two litters of 10) were randomly assigned to one of two treatments: control and mixing. Each group was housed in one of two climatic chambers with each subgroup in one of two pens. In each trial, the two litters within the mixing treatment were mixed, just before transport, at the start of a 2-week experimental period. In the control treatment, the social structure of both litters in each trial was not altered. In both treatments, large alterations of energy partitioning from week 1 to week 2 are probably signs of recovery from transportation and/or adaptation to new feeding and housing conditions. Mixing just before transport did not change total energy metabolism but only increased nonactivity-related heat production by 3.1% for the total experimental period. Most likely, long-term performance is also not affected negatively by mixing. Animals seem to be able to change energy expenditure on activity when more energy is required for other physiological processes. This symptom of possible reallocation of energy between different vital life processes (e.g. behavior, protein turn-over) might be one of the first indications of an impaired well-being. [source]

Solid–liquid mass transfer characteristics of an unbaffled agitated vessel with an unsteadily forward–reverse rotating impeller
JOURNAL OF CHEMICAL TECHNOLOGY & BIOTECHNOLOGY, Issue 5 2008. Shuichi Tezura
Abstract: To develop an enhanced form of solid-liquid apparatus, an unbaffled agitated vessel has been constructed, fitted with an agitation system using an impeller whose rotation alternates unsteadily in direction, i.e. a forward-reverse rotating impeller. In this vessel, solid-liquid mass transfer was studied using a disc turbine impeller with six flat blades. The effect of impeller rotation rate as an operating variable on the mass transfer coefficient was evaluated experimentally using various geometrical conditions of the apparatus, such as impeller diameter and height, in relation to the impeller power consumption. Mixing of gas above the free surface into the bulk liquid, i.e. surface aeration, which accompanied the solid-liquid agitation, was also investigated. Comparison of the mass transfer characteristics between this type of vessel and a baffled vessel with a unidirectional rotating impeller underscored the sufficient solid-liquid contact for prevention of gas mixing in the forward-reverse rotation mode of the impeller. Copyright © 2008 Society of Chemical Industry [source]

Ultrasonic Investigation of the Effect of Vegetable Shortening and Mixing Time on the Mechanical Properties of Bread Dough
JOURNAL OF FOOD SCIENCE, Issue 9 2009.
K.L. Mehta
ABSTRACT: Mixing is a critical stage in breadmaking since it controls gluten development and nucleation of gas bubbles in the dough. Bubbles affect the rheology of the dough and largely govern the quality of the final product. This study used ultrasound (at a frequency where it is sensitive to the presence of bubbles) to nondestructively examine dough properties as a function of mixing time in doughs prepared from strong red spring wheat flour with various amounts of shortening (0%, 2%, 4%, 8% flour weight basis). The doughs were mixed for various times at atmospheric pressure or under vacuum (to minimize bubble nucleation). Ultrasonic velocity and attenuation (nominally at 50 kHz) were measured in the dough, and dough density was measured independently from specific gravity determinations. Ultrasonic velocity decreased substantially as mixing time increased (and more bubbles were entrained) for all doughs mixed in air; for example, in doughs made without shortening, velocity decreased from 165 to 105 m/s, although superimposed on this overall decrease was a peak in velocity at optimum mixing time. Changes in attenuation coefficient due to the addition of shortening were evident in both air-mixed and vacuum-mixed doughs, suggesting that ultrasound was sensitive to changes in the properties of the dough matrix during dough development and to plasticization of the gluten polymers by the shortening. Due to its ability to probe the effect of mixing times and ingredients on dough properties, ultrasound has the potential to be deployed as an online quality control tool in the baking industry. [source]

Mixing of two binary nonequilibrium phases in one dimension
AICHE JOURNAL, Issue 8 2009. Kjetil B. Haugen
Abstract: The mixing of nonequilibrium phases has important applications in improved oil recovery and geological CO2 storage. The rate of mixing is often controlled by diffusion, and modeling requires diffusion coefficients at subsurface temperature and pressure. High-pressure diffusion coefficients are commonly inferred from changes in bulk properties as two phases equilibrate in a PVT cell. However, models relating measured quantities to diffusion coefficients usually ignore convective mass transport. This work presents a comprehensive model of mixing of two nonequilibrium binary phases in one dimension. Mass transport due to bulk velocity triggered by compressibility and nonideality is taken into account. Ignoring this phenomenon violates local mass balance and does not allow for changes in phase volumes. Simulations of two PVT cell experiments show that models ignoring bulk velocity may significantly overestimate the diffusion coefficients. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

Mixing of components in two-component aggregation
AICHE JOURNAL, Issue 9 2006. Themis Matsoukas
Abstract: The problem of binary component aggregation with kernels that are independent of composition is considered. The bivariate distribution is studied as the product of two distributions, one that refers to the size of the aggregates and one that describes the distribution of the component of interest (solute), and the governing equations for all three are obtained. The distribution of solute within aggregates of size v has a steady-state solution that is independent of the size distribution: it is a Gaussian function whose mean and variance are both proportional to the aggregate size v.
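Written out, the steady-state result just quoted says that the amount of solute s carried by an aggregate of size v is distributed as (the symbols below are ours, not the authors' notation):

p(s \mid v) \;\propto\; \exp\!\left[-\frac{(s - \bar{c}\,v)^{2}}{2\sigma^{2} v}\right], \qquad \mathrm{E}[s \mid v] = \bar{c}\,v, \qquad \mathrm{Var}(s \mid v) = \sigma^{2} v,

where \bar{c} stands for the overall solute fraction and \sigma^{2} for a constant.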
To quantify the degree of blending, the sum-square X2 of the deviation of the amount of solute from its mean is studied. Two cases are identified for which X2 is constant during aggregation: (a) "partially mixed" seeds regardless of kernel; and (b) sum-type kernels regardless of seed distribution. Simulations confirm the results for these two cases, and further indicate that in the general case, X2 is nearly constant. The degree of mixing is determined solely by the initial distribution of components, but does not depend on the kernel. Optimum initial conditions that minimize the time required to reach a desired level of homogeneity between components are identified. © 2006 American Institute of Chemical Engineers AIChE J, 2006 [source]

Mixing of shear-thinning fluids with yield stress in stirred tanks
AICHE JOURNAL, Issue 7 2006. P. E. Arratia
Abstract: Mixing of shear-thinning fluids with yield stress is investigated in a three-dimensional (3-D) flow both in experiments and in simulations. Experiments are conducted in a stirred tank using tracer visualization and velocity measurements. Bulk flow visualization shows the familiar cavern formation around the impeller with stagnant zones surrounding it. Detailed flow visualization inside caverns reveals the main ingredients of chaotic flow: lobe formation, stretching, folding, and self-similar mixing patterns. For multiple impeller systems, however, we find strong compartmentalization characterized by robust segregation between adjacent caverns, hindering mixing performance. Mixing efficiency is enhanced by moving the shaft off-center, which breaks spatial symmetry. The displacement of the shaft from the tank centerline has a beneficial effect on manifold structure: segregated regions are destroyed, separatrices are eliminated, and axial circulation is improved. Numerical simulations are performed by solving the incompressible Reynolds-averaged Navier–Stokes equation with a Galerkin Least-Squares finite-element formulation and a macroscopic rheological model. Simulations are able to capture the main features of the flow and are used to investigate stretching statistics and scale behavior. © 2006 American Institute of Chemical Engineers AIChE J, 2006 [source]

Stereoregular P(MMA)-clay nanocomposites by metallocene catalysts: In situ synthesis and stereocomplex formation
JOURNAL OF POLYMER SCIENCE (IN TWO SECTIONS), Issue 13 2007. Wesley R. Mariott
Abstract: This contribution reports the synthesis and characterization of stereochemically controlled, as well as crystalline stereocomplex, P(MMA)-clay nanocomposites using metallocene complexes and alane-intercalated clay activators. The ligand elimination and exchange reactions involving Lewis acids E(C6F5)3 (E = Al, B) and an organically modified montmorillonite clay were employed to synthesize the alane-intercalated clay activators. When combined with dimethyl metallocenes of various symmetries, these clay activators brought about efficient MMA polymerizations leading to in situ polymerized, stereochemically controlled P(MMA)-intercalated clay nanocomposites. The most noticeable thermal property enhancement observed for the clay nanocomposite P(MMA), when compared with the pristine P(MMA) having similar molecular weight and stereomicrostructure, is a considerable increase in Tg (~10 °C).
Mixing of dilute THF solutions of two diastereomeric nanocomposites in a 1:2 isotactic to syndiotactic ratio, followed by reprecipitation or crystallization procedures, yielded unique double-stranded helical stereocomplex P(MMA)-clay nanocomposites with a predominantly exfoliated clay morphology. Remarkably, the resulting crystalline stereocomplex P(MMA) matrix is resistant to the boiling-THF extraction and its clay nanocomposites exhibit high Tm of 201 to 210 °C. Furthermore, the stereocomplex P(MMA)-clay nanocomposite shows a one-step, narrow decomposition temperature window and a single, high maximum rate decomposition temperature of 377 °C. © 2007 Wiley Periodicals, Inc. J Polym Sci Part A: Polym Chem 45: 2581–2592, 2007 [source]

Nano-Level Mixing of ZnO into Poly(methyl methacrylate)
MACROMOLECULAR CHEMISTRY AND PHYSICS, Issue 17 2010. Mukesh Agrawal
Abstract: A simple, facile and versatile approach is presented for the preparation of PMMA/ZnO nanocomposite materials, which possess high transparency, no color, good thermal stability, UV absorption and improved mechanical properties. The employed process involved mixing of ZnO nanoparticles dispersed in DMAc with the PMMA matrix dissolved in the same solvent. The effect of ZnO content on the physical properties of the PMMA matrix is studied. A significant improvement in mechanical properties was observed with the incorporation of 0.5 wt.-% ZnO particles. The beauty of the described approach lies in the fact that despite being a simple and facile approach, it offers nano-level (2–5 nm) mixing of ZnO nanoparticles into a polymer matrix. [source]

Melt Mixing of Ethylene/Butyl Acrylate/Glycidyl Methacrylate Terpolymers with LDPE and PET
MACROMOLECULAR MATERIALS & ENGINEERING, Issue 2 2009. Aida Benhamida
Abstract: The chemical modification by melt-mixing of an EBAGMA terpolymer with LDPE and PET was investigated with the aim to use these EBAGMA/LDPE and EBAGMA/PET blends (in equal weight quantities) as compatibilizer master batches to improve the compatibility of the LDPE/PET system. It is shown that when the EBAGMA terpolymer is melt blended with LDPE, almost 40% of the initial amount of EBAGMA is linked to the LDPE backbone. In contrast, in the case of EBAGMA/PET, FT-IR spectra indicate the total reactivity between the two components through the reaction of the epoxy group of EBAGMA with the PET terminal groups. SEM analysis shows that both master batches present two well-interconnected phases. [source]

Characterization of HDPE/Polyamide 6 Nanocomposites Using Scanning- and Transmission Electron Microscopy
MACROMOLECULAR SYMPOSIA, Issue 1 2007. Eleonora Erdmann
Abstract Summary: Preparation and morphology of high density polyethylene (HDPE)/polyamide 6 (PA 6)/modified clay nanocomposites were studied. The ability of PA 6 in dispersing clays was used to prepare modified delaminated clays, which were then mixed with HDPE. Mixing was performed using melt processing in a torque rheometer equipped with roller rotors. After etching the materials with boiling toluene and formic acid at room temperature, the morphology was examined by SEM analyses, showing that the PA 6 formed the continuous phase and HDPE the dispersed phase. X-ray diffraction patterns show that the (001) peak of the clay is dramatically decreased and shifted to lower angles, indicating that intercalated/exfoliated nanocomposites are obtained. TEM analyses confirmed the typical structure of exfoliated nanocomposites.
A scheme for the mechanism of exfoliation and/or intercalation of these HDPE/PA 6/organoclay nanocomposites is proposed. [source]

Quantifying Fluid Mixing with the Shannon Entropy
MACROMOLECULAR THEORY AND SIMULATIONS, Issue 8 2006. Marco Camesasca
Abstract Summary: We introduce a methodology to quantify the quality of mixing in various systems, including polymeric ones, by adapting the Shannon information entropy. For illustrative purposes we use particle advection of two species in a two-dimensional cavity flow. We compute the entropy by using the probability of finding a suitably chosen group/complex of particles of a given species at a given location. By choosing the size of the group to be in direct proportion to the overall concentration of the components in the mixture we ensure that the entropic measure is maximized for the case of perfect mixing, that is, when at each location the component concentration is equal to the corresponding overall component concentrations. The role of the scale of observation in evaluating mixing is analyzed using the entropic methodology. We also illustrate the effect of initial conditions on mixing in a laminar system, typical of operations involving polymers. [source]

Damage to DNA in Bacterioplankton: A Model of Damage by Ultraviolet Radiation and its Repair as Influenced by Vertical Mixing
PHOTOCHEMISTRY & PHOTOBIOLOGY, Issue 1 2000. Yannick Huot
ABSTRACT: A model of UV-induced DNA damage in oceanic bacterioplankton was developed and tested against previously published and novel measurements of cyclobutane pyrimidine dimers (CPD) in surface layers of the ocean. The model describes the effects of solar irradiance, wind-forced mixing of bacterioplankton and optical properties of the water on net DNA damage in the water column. The biological part includes the induction of CPD by UV radiation and repair of this damage through photoreactivation and excision. The modeled damage is compared with measured variability of CPD in the ocean: diel variation in natural bacterioplankton communities at the surface and in vertical profiles under different wind conditions (net damage as influenced by repair and mixing); in situ incubation of natural assemblages of bacterioplankton (damage and repair, no mixing); and in situ incubation of DNA solutions (no repair, no mixing). The model predictions are generally consistent with the measurements, showing similar patterns with depth, time and wind speed. A sensitivity analysis assesses the effect on net DNA damage of varying ozone thickness, colored dissolved organic matter concentration, chlorophyll concentration, wind speed and mixed layer depth. Ozone thickness and mixed layer depth are the most important factors affecting net DNA damage in the mixed layer. From the model, the total amplification factor (TAF; a relative measure of the increase of damage associated with a decrease in ozone thickness) for net DNA damage in the euphotic zone is 1.7, as compared with 2.1–2.2 for irradiance weighted for damage to DNA at the surface. [source]

Structure and performance of impact modified and oriented sPS/SEBS blends
POLYMER ENGINEERING & SCIENCE, Issue 4 2001. R. J. Yan
Blends of syndiotactic styrene/p-methyl styrene copolymer (SPMS) and poly(styrene)-block-poly(ethene-co-butylene)-block-polystyrene (SEBS), as well as their uniaxial drawing behavior and performance, were investigated. Mixing was performed using a batch mixer at 280°C.
Morphology was evaluated using scanning electron microscopy (SEM). Thermal properties, orientation and tensile properties were determined using differential scanning calorimetry (DSC), the spectrographic birefringence technique, and a tensile testing machine, respectively. The blends of SPMS/SEBS, 90/10 and 80/20, showed a two-phase structure with an SEBS disperse phase in the SPMS matrix. The average sizes of the SEBS particles and the tensile properties of the blends were affected by blending time and composition. No significant effects on the modulus and strength were observed for the blends containing 10% SEBS or below. The quenched SPMS and SPMS/SEBS (90/10) blends were drawn at 110°C and their crystallinity and orientation development compared. These were similar for both samples at low draw ratios (<3.2), but were much faster for SPMS at higher draw ratios. The orientation process is shown to substantially increase the strength and modulus in the drawing direction for SPMS and the blends. The toughness (energy under the stress-strain curve) increased upon addition of SEBS and orientation, with a marked effect of the latter. SEM observation reveals that the dispersed SEBS has been extended to about the same draw ratio as the bulk blend in the drawn blends, indicating efficient stress transfer at the interface. [source]

Energy Efficiency of Two-Phase Mixing in a Modified Bubble Column
THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 3 2007. Subrata Kumar Majumder
Abstract: Energy efficiency for gas-liquid mixing in a modified downflow bubble column reactor has been analyzed in this paper. Efficiencies of the different parts of the bubble column have been assessed on the basis of energy dissipation. Prediction of the energy dissipation coefficient as well as the energy utilization efficiency due to gas-liquid mixing as a function of different physical, geometric and dynamic variables of the system has been done by a correlation method. The distribution of energy utilization in the different zones of the column has also been analyzed. Experiments were carried out with air-water and air-aqueous solutions of carboxymethyl cellulose at different concentrations. In this article, the energy efficiency of gas-liquid mixing in a modified downflow bubble column reactor is analysed. The efficiencies of the different regions of the bubble column were evaluated on the basis of energy dissipation. A correlation method was used to predict the energy dissipation coefficient as well as the energy utilization efficiency due to gas-liquid mixing as a function of the different physical, geometric and dynamic variables of the system. The distribution of energy utilization in the different regions of the column was also analysed. Experiments were carried out with air-water and air-aqueous carboxymethylcellulose solutions at different concentrations. [source]

Against Segregation: Ethnic Mixing in Liberal States
THE JOURNAL OF POLITICAL PHILOSOPHY, Issue 3 2003. Margo Trappenburg
First page of article [source]

Guest Editorial: Genre Mixing
THE JOURNAL OF POPULAR CULTURE, Issue 4 2008. Kathryn Edney
No abstract is available for this article. [source]

Sensitivity of moist convection to environmental humidity
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 604 2004.
S. H. Derbyshire
Abstract: As part of the EUROCS (EUROpean Cloud Systems study) project, cloud-resolving model (CRM) simulations and parallel single-column model (SCM) tests of the sensitivity of moist atmospheric convection to midtropospheric humidity are presented. This sensitivity is broadly supported by observations and some previous model studies, but is still poorly quantified. Mixing between clouds and environment is a key mechanism, central to many of the fundamental differences between convection schemes. Here, we define an idealized quasi-steady 'testbed', in which the large-scale environment is assumed to adjust the local mean profiles on a timescale of one hour. We then test sensitivity to the target profiles at heights above 2 km. Two independent CRMs agree reasonably well in their response to the different background profiles and both show strong deep precipitating convection in the more moist cases, but only shallow convection in the driest case. The CRM results also appear to be numerically robust. All the SCMs, most of which are one-dimensional versions of global climate models (GCMs), show sensitivity to humidity but differ in various ways from the CRMs. Some of the SCMs are improved in the light of these comparisons, with GCM improvements documented elsewhere. © Crown copyright, 2004. [source]

FASTEX IOP 18: A very deep tropopause fold. I: Synoptic description and modelling
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 577 2001.
Abstract: The life cycle of a very deep tropopause fold (820 hPa) is documented with aircraft and ship observations during the Intensive Observing Period 18 of the Fronts and Atlantic Storm-Track EXperiment (FASTEX). The initial setting involves a coherent tropopause disturbance and an associated Arctic tropopause fold. The confluence episode that results from the phasing up of the tropopause disturbance and a southern ridge ends in the formation of an intense jet streak, the dynamics of which are associated with the development of a polar tropopause fold. A diagnostic analysis suggests that the final dramatic stratospheric intrusion is the consequence of the vertical superposition of the Arctic and polar tropopause folds. The Mesoscale Non-Hydrostatic (Meso-NH) model is used to discuss this hypothesis. Mixing of the passive stratospheric tracer within the marine boundary layer is investigated with sensitivity tests which unplug, in turn, the model physical parametrizations. Finally, upper-level forcings associated with the development of the tropopause fold are investigated in detail in a companion paper. [source]

Decay of a cut-off low and contribution to stratosphere-troposphere exchange
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 564 2000. H. Gouget
Abstract: We present a case study of the decay of a cut-off low over north-west Europe in June 1996, to establish how the stratospheric air initially contained within it was transferred to the troposphere. Two mechanisms for stratosphere-troposphere exchange are examined: direct convective erosion of the base of the low, and filamentation of the outer layers of the low along the flank of the polar jet stream. The approach taken relies on a combination of in-situ ozone and humidity measurements by MOZAIC (Measurement of Ozone and water vapour by Airbus In-service aircraft) aircraft and ozonesondes, and the European Centre for Medium-Range Weather Forecasts analyses.
MOZAIC ozone is used to choose two analyses eight days apart at the genesis (14 June 1996) and decay (22 June 1996) of the low which have a consistent ozone/potential-vorticity relationship. Trajectories (both isentropic and three dimensional (3D)) between these two analyses reveal a consistent pattern; at the base of the low (310 K, 450 mb) all the trajectories attain tropospheric PV values whereas, at 320 K, those trajectories that leave the low experience a decrease in PV and those that do not leave the low retain their initial PV. We propose that air parcels leaving the low were stretched into thin filaments along the flank of the jet stream, which made them vulnerable to 3D mixing. A MOZAIC flight on 21 June 1996 provides direct evidence for this process. Up to 22 June 1996 (by which time the low had lost its closed circulation) the satellite images showed very little convection beneath the corresponding PV anomaly. Mixing was only effective at the very base of the stratospheric air at 310 K. On 22 June the remaining remnant of high PV was advected into a region of deep convection over central and eastern Europe, mixing the remaining stratospheric air into the troposphere. Of the initial mass of 10¹⁵ kg of stratospheric air contained in the low, 6 × 10¹⁴ kg was stripped into filaments along the jet and 4 × 10¹⁴ kg remained to be mixed by convection during the period 22–23 June 1996. [source]

European Mathematical Genetics Meeting, Heidelberg, Germany, 12th–13th April 2007
ANNALS OF HUMAN GENETICS, Issue 4 2007. Article first published online: 28 MAY 200

Saurabh Ghosh 11 Indian Statistical Institute, Kolkata, India
High correlations between two quantitative traits may be either due to common genetic factors or common environmental factors or a combination of both. In this study, we develop statistical methods to extract the contribution of a common QTL to the total correlation between the components of a bivariate phenotype. Using data on bivariate phenotypes and marker genotypes for sib-pairs, we propose a test for linkage between a common QTL and a marker locus based on the conditional cross-sib trait correlations (trait 1 of sib 1, trait 2 of sib 2, and conversely) given the identity-by-descent sharing at the marker locus. The null hypothesis cannot be rejected unless there exists a common QTL. We use Monte-Carlo simulations to evaluate the performance of the proposed test under different trait parameters and quantitative trait distributions. An application of the method is illustrated using data on two alcohol-related phenotypes from the Collaborative Study On The Genetics Of Alcoholism project.

Rémi Kazma 1, Catherine Bonaïti-Pellié 1, Emmanuelle Génin 12 INSERM UMR-S535 and Université Paris Sud, Villejuif, 94817, France
Keywords: Gene-environment interaction, sibling recurrence risk, exposure correlation
Gene-environment interactions may play important roles in complex disease susceptibility but their detection is often difficult. Here we show how gene-environment interactions can be detected by investigating the degree of familial aggregation according to the exposure of the probands. In case of gene-environment interaction, the distribution of genotypes of affected individuals, and consequently the risk in relatives, depends on their exposure. We developed a test comparing the risks in sibs according to the proband exposure.
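The quantity being compared can be written compactly as a ratio of sibling recurrence risks stratified by the proband's exposure (a sketch in our notation, not the authors' exact formulation):

R = \frac{P(\text{sib affected} \mid \text{proband affected and exposed})}{P(\text{sib affected} \mid \text{proband affected and unexposed})}.

Under independent, multiplicative genetic and exposure effects with no exposure correlation between sibs, the exposure factor cancels from the genotype distribution of affected probands and R = 1; a gene-environment interaction shifts that distribution with proband exposure and pushes R away from 1.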
To evaluate the properties of this new test, we derived the formulas for calculating the expected risks in sibs according to the exposure of probands for various values of exposure frequency, relative risk due to exposure alone, frequencies of latent susceptibility genotypes, genetic relative risks and interaction coefficients. We find that the ratio of risks when the proband is exposed and not exposed is a good indicator of the interaction effect. We evaluate the power of the test for various sample sizes of affected individuals. We conclude that this test is valuable for diseases with moderate familial aggregation, only when the role of the exposure has been clearly evidenced. Since a correlation for exposure among sibs might lead to a difference in risks among sibs in the different proband exposure strata, we also add an exposure correlation coefficient in the model. Interestingly, we find that when this correlation is correctly accounted for, the power of the test is not decreased and might even be significantly increased.

Andrea Callegaro 1, Hans J.C. Van Houwelingen 1, Jeanine Houwing-Duistermaat 13 Dept. of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands
Keywords: Survival analysis, age at onset, score test, linkage analysis
Nonparametric linkage (NPL) analysis compares the identical-by-descent (IBD) sharing in sibling pairs to the expected IBD sharing under the hypothesis of no linkage. Often information is available on the marginal cumulative hazards (for example, breast cancer incidence curves). Our aim is to extend the NPL methods by taking into account the age at onset of selected sibling pairs using these known marginal hazards. Li and Zhong (2002) proposed a (retrospective) likelihood ratio test based on an additive frailty model for genetic linkage analysis. From their model we derive a score statistic for selected samples which turns out to be a weighted NPL method. The weights depend on the marginal cumulative hazards and on the frailty parameter. A second approach is based on a simple gamma shared frailty model. Here, we simply test whether the score function of the frailty parameter depends on the excess IBD. We compare the performance of these methods using simulated data.

Céline Bellenguez 1, Carole Ober 2, Catherine Bourgain 14 INSERM U535 and University Paris Sud, Villejuif, France 5 Department of Human Genetics, The University of Chicago, USA
Keywords: Linkage analysis, linkage disequilibrium, high density SNP data
Compared with microsatellite markers, high density SNP maps should be more informative for linkage analyses. However, because they are much closer, SNPs present important linkage disequilibrium (LD), which biases classical nonparametric multipoint analyses. This problem is even stronger in population isolates where LD extends over larger regions with a more stochastic pattern. We investigate the issue of linkage analysis with a 500K SNP map in a large and inbred 1840-member Hutterite pedigree, phenotyped for asthma. Using an efficient pedigree breaking strategy, we first identified linked regions with a 5 cM microsatellite map, on which we focused to evaluate the SNP map. The only method that models LD in the NPL analysis is limited in both the pedigree size and the number of markers (Abecasis and Wigginton, 2005) and therefore could not be used. Instead, we studied methods that identify sets of SNPs with maximum linkage information content in our pedigree and no LD-driven bias.
Both algorithms that directly remove pairs of SNPs in high LD and clustering methods were evaluated. Null simulations were performed to check that the Zlr statistics calculated with the SNP sets were not falsely inflated. Preliminary results suggest that although LD is strong in such populations, linkage information content slightly better than that of microsatellite maps can be extracted from dense SNP maps, provided that a careful marker selection is conducted. In particular, we show that the specific LD pattern requires considering LD between a wide range of marker pairs rather than only in predefined blocks.

Peter Van Loo 1,2,3, Stein Aerts 1,2, Diether Lambrechts 4,5, Bernard Thienpont 2, Sunit Maity 4,5, Bert Coessens 3, Frederik De Smet 4,5, Leon-Charles Tranchevent 3, Bart De Moor 2, Koen Devriendt 3, Peter Marynen 1,2, Bassem Hassan 1,2, Peter Carmeliet 4,5, Yves Moreau 36 Department of Molecular and Developmental Genetics, VIB, Belgium 7 Department of Human Genetics, University of Leuven, Belgium 8 Bioinformatics group, Department of Electrical Engineering, University of Leuven, Belgium 9 Department of Transgene Technology and Gene Therapy, VIB, Belgium 10 Center for Transgene Technology and Gene Therapy, University of Leuven, Belgium
Keywords: Bioinformatics, gene prioritization, data fusion
The identification of genes involved in health and disease remains a formidable challenge. Here, we describe a novel bioinformatics method to prioritize candidate genes underlying pathways or diseases, based on their similarity to genes known to be involved in these processes. It is freely accessible as an interactive software tool, ENDEAVOUR, at http://www.esat.kuleuven.be/endeavour. Unlike previous methods, ENDEAVOUR generates distinct prioritizations from multiple heterogeneous data sources, which are then integrated, or fused, into one global ranking using order statistics. ENDEAVOUR prioritizes candidate genes in a three-step process. First, information about a disease or pathway is gathered from a set of known "training" genes by consulting multiple data sources. Next, the candidate genes are ranked based on similarity with the training properties obtained in the first step, resulting in one prioritized list for each data source. Finally, ENDEAVOUR fuses each of these rankings into a single global ranking, providing an overall prioritization of the candidate genes. Validation of ENDEAVOUR revealed it was able to efficiently prioritize 627 genes in disease data sets and 76 genes in biological pathway sets, identify candidates of 16 mono- or polygenic diseases, and discover regulatory genes of myeloid differentiation. Furthermore, the approach identified YPEL1 as a novel gene involved in craniofacial development from a 2-Mb chromosomal region, deleted in some patients with DiGeorge-like birth defects. Finally, we are currently evaluating a pipeline combining array-CGH, ENDEAVOUR and in vivo validation in zebrafish to identify novel genes involved in congenital heart defects.

Mark Broom 1, Graeme Ruxton 2, Rebecca Kilner 311 Mathematics Dept., University of Sussex, UK 12 Division of Environmental and Evolutionary Biology, University of Glasgow, UK 13 Department of Zoology, University of Cambridge, UK
Keywords: Evolutionarily stable strategy, parasitism, asymmetric game
Brood parasite chicks vary in the harm that they do to their companions in the nest. In this presentation we use game-theoretic methods to model this variation.
Our model considers hosts which potentially abandon single nestlings and instead choose to re-allocate their reproductive effort to future breeding, irrespective of whether the abandoned chick is the host's young or a brood parasite's. The parasite chick must decide whether or not to kill host young by balancing the benefits from reduced competition in the nest against the risk of desertion by host parents. The model predicts that three different types of evolutionarily stable strategies can exist. (1) Hosts routinely rear depleted broods, the brood parasite always kills host young and the host never then abandons the nest. (2) When adult survival after deserting single offspring is very high, hosts always abandon broods of a single nestling and the parasite never kills host offspring, effectively holding them as hostages to prevent nest desertion. (3) Intermediate strategies, in which parasites sometimes kill their nest-mates and host parents sometimes desert nests that contain only a single chick, can also be evolutionarily stable. We provide quantitative descriptions of how the values given to ecological and behavioral parameters of the host-parasite system influence the likelihood of each strategy and compare our results with real host-brood parasite associations in nature.

Martin Harrison 114 Mathematics Dept, University of Sussex, UK
Keywords: Brood parasitism, games, host, parasite
The interaction between hosts and parasites in bird populations has been studied extensively. Game theoretical methods have been used to model this interaction previously, but this has not been studied extensively taking into account the sequential nature of this game. We consider a model allowing the host and parasite to make a number of decisions, which depend on a number of natural factors. The host lays an egg, a parasite bird will arrive at the nest with a certain probability and then chooses to destroy a number of the host eggs and lay one of its own. With some destruction occurring, either natural or through the actions of the parasite, the host chooses to continue, eject an egg (hoping to eject the parasite) or abandon the nest. Once the eggs have hatched the game then falls to the parasite chick versus the host. The chick chooses to destroy or eject a number of eggs. The final decision is made by the host, choosing whether to raise or abandon the chicks that are in the nest. We consider various natural parameters and probabilities which influence these decisions. We then use this model to look at real-world situations of the interactions of the Reed Warbler and two different parasites, the Common Cuckoo and the Brown-Headed Cowbird. These two parasites have different methods in the way that they parasitize the nests of their hosts. The hosts in turn have a different reaction to these parasites.

Arne Jochens 1, Amke Caliebe 2, Uwe Roesler 1, Michael Krawczak 215 Mathematical Seminar, University of Kiel, Germany 16 Institute of Medical Informatics and Statistics, University of Kiel, Germany
Keywords: Stepwise mutation model, microsatellite, recursion equation, temporal behaviour
We consider the stepwise mutation model which occurs, e.g., in microsatellite loci. Let X(t,i) denote the allelic state of individual i at time t. We compute expectation, variance and covariance of X(t,i), i = 1, …, N, and provide a recursion equation for P(X(t,i) = z). Because the variance of X(t,i) goes to infinity as t grows, for the description of the temporal behaviour we regard the scaled process X(t,i) - X(t,1).
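A toy forward simulation makes the behaviour described above easy to picture. This is an editorial sketch under simplifying assumptions (Wright-Fisher-style reproduction, symmetric single-step mutation, arbitrary parameter values), not the authors' recursion:

import random

def simulate_stepwise_mutation(n=500, generations=200, mu=1e-3, seed=1):
    """Allelic states X(t, i) under the symmetric stepwise mutation model:
    each copy descends from a uniformly chosen parent and, with probability
    mu per generation, gains or loses one repeat unit."""
    rng = random.Random(seed)
    x = [0] * n                                   # X(0, i) = 0 for every individual i
    for _ in range(generations):
        x = [x[rng.randrange(n)] for _ in range(n)]                                # reproduction
        x = [xi + rng.choice((-1, 1)) if rng.random() < mu else xi for xi in x]    # mutation
    return x

states = simulate_stepwise_mutation()
# Var[X(t, i)] grows with t, so one studies the scaled differences X(t, i) - X(t, 1):
diffs = [xi - states[0] for xi in states]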
The results furnish a better understanding of the behaviour of the stepwise mutation model and may in future be used to derive tests for neutrality under this model.

Paul O'Reilly 1, Ewan Birney 2, David Balding 117 Statistical Genetics, Department of Epidemiology and Public Health, Imperial College London, UK 18 European Bioinformatics Institute, EMBL, Cambridge, UK
Keywords: Positive selection, Recombination rate, LD, Genome-wide, Natural Selection
In recent years efforts to develop population genetics methods that estimate rates of recombination and levels of natural selection in the human genome have intensified. However, since the two processes have an intimately related impact on genetic variation, their inference is vulnerable to confounding. Genomic regions subject to recent selection are likely to have a relatively recent common ancestor and consequently less opportunity for historical recombinations that are detectable in contemporary populations. Here we show that selection can reduce the population-based recombination rate estimate substantially. In genome-wide studies for detecting selection we observe a tendency to highlight loci that are subject to low levels of recombination. We find that the outlier approach commonly adopted in such studies may have low power unless variable recombination is accounted for. We introduce a new genome-wide method for detecting selection that exploits the sensitivity to recent selection of methods for estimating recombination rates, while accounting for variable recombination using pedigree data. Through simulations we demonstrate the high power of the Ped/Pop approach to discriminate between neutral and adaptive evolution, particularly in the context of choosing outliers from a genome-wide distribution. Although methods have been developed showing good power to detect selection 'in action', the corresponding window of opportunity is small. In contrast, the power of the Ped/Pop method is maintained for many generations after the fixation of an advantageous variant.

Sarah Griffiths 1, Frank Dudbridge 120 MRC Biostatistics Unit, Cambridge, UK
Keywords: Genetic association, multimarker tag, haplotype, likelihood analysis
In association studies it is generally too expensive to genotype all variants in all subjects. We can exploit linkage disequilibrium between SNPs to select a subset that captures the variation in a training data set obtained either through direct resequencing or a public resource such as the HapMap. These 'tag SNPs' are then genotyped in the whole sample. Multimarker tagging is a more aggressive adaptation of pairwise tagging that allows for combinations of two or more tag SNPs to predict an untyped SNP. Here we describe a new method for directly testing the association of an untyped SNP using a multimarker tag. Previously, other investigators have suggested testing a specific tag haplotype, or performing a weighted analysis using weights derived from the training data. However, these approaches do not properly account for the imperfect correlation between the tag haplotype and the untyped SNP. Here we describe a straightforward approach to testing untyped SNPs using a missing-data likelihood analysis, including the tag markers as nuisance parameters. The training data is stacked on top of the main body of genotype data so there is information on how the tag markers predict the genotype of the untyped SNP. The uncertainty in this prediction is automatically taken into account in the likelihood analysis.
This approach yields more power and also a more accurate prediction of the odds ratio of the untyped SNP.

Anke Schulz 1, Christine Fischer 2, Jenny Chang-Claude 1, Lars Beckmann 121 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany 22 Institute of Human Genetics, University of Heidelberg, Germany
Keywords: Haplotype, haplotype sharing, entropy, Mantel statistics, marker selection
We previously introduced a new method to map genes involved in complex diseases, using haplotype sharing-based Mantel statistics to correlate genetic and phenotypic similarity. Although the Mantel statistic is powerful in narrowing down candidate regions, the precise localization of a gene is hampered in genomic regions where linkage disequilibrium is so high that neighboring markers are found to be significant at similar magnitude and we are not able to discriminate between them. Here, we present a new approach to localize susceptibility genes by combining haplotype sharing-based Mantel statistics with an iterative entropy-based marker selection algorithm. For each marker at which the Mantel statistic is evaluated, the algorithm selects a subset of surrounding markers. The subset is chosen to maximize multilocus linkage disequilibrium, which is measured by the normalized entropy difference introduced by Nothnagel et al. (2002). We evaluated the algorithm with respect to type I error and power. Its ability to localize the disease variant was compared to the localization (i) without marker selection and (ii) considering haplotype block structure. Case-control samples were simulated from a set of 18 haplotypes, consisting of 15 SNPs in two haplotype blocks. The new algorithm gave correct type I error and yielded similar power to detect the disease locus compared to the alternative approaches. The neighboring markers were clearly less often significant than the causal locus, and also less often significant compared to the alternative approaches. Thus the new algorithm improved the precision of the localization of susceptibility genes.

Mark M. Iles 123 Section of Epidemiology and Biostatistics, LIMM, University of Leeds, UK
Keywords: tSNP, tagging, association, HapMap
Tagging SNPs (tSNPs) are commonly used to capture genetic diversity cost-effectively. However, it is important that the efficacy of tSNPs is correctly estimated, otherwise coverage may be insufficient. If the pilot sample from which tSNPs are chosen is too small or the initial marker map too sparse, tSNP efficacy may be overestimated. An existing estimation method based on bootstrapping goes some way to correct for insufficient sample size and overfitting, but does not completely solve the problem. We describe a novel method, based on exclusion of haplotypes, that improves on the bootstrap approach. Using simulated data, the extent of the sample size problem is investigated and the performance of the bootstrap and the novel method are compared. We incorporate an existing method adjusting for marker density by 'SNP-dropping'. We find that insufficient sample size can cause large overestimates in tSNP efficacy, even with as many as 100 individuals, and the problem worsens as the region studied increases in size. Both the bootstrap and novel method correct much of this overestimate, with our novel method consistently outperforming the bootstrap method. We conclude that a combination of insufficient sample size and overfitting may lead to overestimation of tSNP efficacy and underpowering of studies based on tSNPs.
Our novel approach corrects for much of this bias and is superior to the previous method. Sample sizes larger than previously suggested may still be required for accurate estimation of tSNP efficacy. This has obvious ramifications for the selection of tSNPs from HapMap data.

Claudio Verzilli 1, Juliet Chapman 1, Aroon Hingorani 2, Juan Pablo-Casas 1, Tina Shah 2, Liam Smeeth 1, John Whittaker 124 Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, UK 25 Division of Medicine, University College London, UK
Keywords: Meta-analysis, Genetic association studies
We present a Bayesian hierarchical model for the meta-analysis of candidate gene studies with a continuous outcome. Such studies often report results from association tests for different, possibly study-specific and non-overlapping markers (typically SNPs) in the same genetic region. Meta-analyses of the results at each marker in isolation are seldom appropriate as they ignore the correlation that may exist between markers due to linkage disequilibrium (LD) and cannot assess the relative importance of variants at each marker. Also, such marker-wise meta-analyses are restricted to only those studies that have typed the marker in question, with a potential loss of power. A better strategy is one which incorporates information about the LD between markers so that any combined estimate of the effect of each variant is corrected for the effect of other variants, as in multiple regression. Here we develop a Bayesian hierarchical linear regression that models the observed genotype group means and uses pairwise LD measurements between markers as prior information to make posterior inference on adjusted effects. The approach is applied to the meta-analysis of 24 studies assessing the effect of 7 variants in the C-reactive protein (CRP) gene region on plasma CRP levels, an inflammatory biomarker shown in observational studies to be positively associated with cardiovascular disease.

Cathryn M. Lewis 1, Christopher G. Mathew 1, Theresa M. Marteau 226 Dept. of Medical and Molecular Genetics, King's College London, UK 27 Department of Psychology, King's College London, UK
Keywords: Risk, genetics, CARD15, smoking, model
Recently progress has been made in identifying mutations that confer susceptibility to complex diseases, with the potential to use these mutations in determining disease risk. We developed methods to estimate disease risk based on genotype relative risks (for a gene G), exposure to an environmental factor (E), and family history (with recurrence risk λR for a relative of type R). λR must be partitioned into the risk due to G (which is modelled independently) and the residual risk. The risk model was then applied to Crohn's disease (CD), a severe gastrointestinal disease for which smoking increases disease risk approximately 2-fold, and mutations in CARD15 confer increased risks of 2.25 (for carriers of a single mutation) and 9.3 (for carriers of two mutations). CARD15 accounts for only a small proportion of the genetic component of CD, with a gene-specific λS,CARD15 of 1.16, from a total sibling relative risk of λS = 27. CD risks were estimated for high-risk individuals who are siblings of a CD case, and who also smoke. The CD risk to such individuals who carry two CARD15 mutations is approximately 0.34, and for those carrying a single CARD15 mutation the risk is 0.08, compared to a population prevalence of approximately 0.001.
These results imply that complex disease genes may be valuable in estimating disease risks in specific, easily identified subgroups of the population with greater precision than has hitherto been possible, with a view to prevention. Yurii Aulchenko 128 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Compression, information, bzip2, genome-wide SNP data, statistical genetics With advances in molecular technology, studies accessing millions of genetic polymorphisms in thousands of study subjects will soon become common. Such studies generate large amounts of data, whose effective storage and management is a challenge for modern statistical genetics. Standard file compression utilities, such as Zip, Gzip and Bzip2, may be helpful to minimise the storage requirements. Less obvious is the fact that data compression techniques may also be used in the analysis of genetic data. It is known that the efficiency of a particular compression algorithm depends on the probability structure of the data. In this work, we compared different standard and customised tools using data from the human HapMap project. We also investigate the potential uses of data compression techniques for the analysis of linkage, association and linkage disequilibrium. Suzanne Leal 1 , Bingshan Li 129 Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, USA Keywords: Consanguineous pedigrees, missing genotype data Missing genotype data can increase false-positive evidence for linkage when either parametric or nonparametric analysis is carried out ignoring intermarker linkage disequilibrium (LD). Previously it was demonstrated by Huang et al. (2005) that no bias occurs in this situation for affected sib-pairs with unrelated parents when either both parents are genotyped or genotype data is available for two additional unaffected siblings when parental genotypes are missing. However, this is not the case for consanguineous pedigrees, where missing genotype data for any pedigree member within a consanguinity loop can increase false-positive evidence of linkage. The false-positive evidence for linkage is further increased when cryptic consanguinity is present. The amount of false-positive evidence for linkage is highly dependent on which family members are genotyped. When parental genotype data is available, the false-positive evidence for linkage is usually not as strong as when parental genotype data is unavailable. Which family members will aid in the reduction of false-positive evidence of linkage is highly dependent on which other family members are genotyped. For a pedigree with an affected proband whose first-cousin parents have been genotyped, further reduction in the false-positive evidence of linkage can be obtained by including genotype data from additional affected siblings of the proband or genotype data from the proband's sibling-grandparents. When parental genotypes are not available, false-positive evidence for linkage can be reduced by including in the analysis genotype data from either unaffected siblings of the proband or the proband's married-in-grandparents. Najaf Amin 1 , Yurii Aulchenko 130 Department of Epidemiology & Biostatistics, Erasmus Medical Centre Rotterdam, The Netherlands Keywords: Genomic Control, pedigree structure, quantitative traits The Genomic Control (GC) method was originally developed to control for population stratification and cryptic relatedness in association studies.
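For orientation, the basic GC correction that the following abstracts build on can be sketched as below; this is a minimal version assuming 1-df chi-square association statistics, not the pedigree-specific variant investigated here.

```python
from statistics import median

def genomic_control(chi2_stats):
    """Basic Genomic Control: estimate the inflation factor lambda from the median of
    the observed 1-df chi-square statistics and rescale every statistic by it.

    With no confounding, lambda is close to 1; population substructure or
    (cryptic) relatedness inflates it.
    """
    null_median = 0.4549          # median of the chi-square distribution with 1 df
    lam = max(1.0, median(chi2_stats) / null_median)
    return lam, [s / lam for s in chi2_stats]

# Hypothetical genome-wide statistics showing mild inflation.
stats = [0.2, 0.7, 1.4, 0.5, 3.9, 0.6, 0.9, 5.1, 0.4, 0.8]
lam, corrected = genomic_control(stats)
print(round(lam, 2))              # ~1.65 for this toy set
```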
This method assumes that the effect of population substructure on the test statistics is essentially constant across the genome, and therefore unassociated markers can be used to estimate the effect of confounding on the test statistic. The properties of the GC method were extensively investigated for different stratification scenarios, and compared to alternative methods, such as the transmission-disequilibrium test. The potential of this method to correct not for occasional cryptic relations, but for regular pedigree structure, however, has not been investigated before. In this work we investigate the potential of the GC method for pedigree-based association analysis of quantitative traits. The power and type I error of the method were compared to those of standard methods, such as the measured genotype (MG) approach and the quantitative trait transmission-disequilibrium test (TDT). In human pedigrees, with trait heritability varying from 30 to 80%, the power of the MG and GC approaches was always higher than that of the TDT. GC had correct type I error and its power was close to that of MG under moderate heritability (30%), but decreased with higher heritability. William Astle 1 , Chris Holmes 2 , David Balding 131 Department of Epidemiology and Public Health, Imperial College London, UK 32 Department of Statistics, University of Oxford, UK Keywords: Population structure, association studies, genetic epidemiology, statistical genetics In the analysis of population association studies, Genomic Control (Devlin & Roeder, 1999) (GC) adjusts the Armitage test statistic to correct the type I error for the effects of population substructure, but its power is often sub-optimal. Turbo Genomic Control (TGC) generalises GC to incorporate co-variation of relatedness and phenotype, retaining control over type I error while improving power. TGC is similar to the method of Yu et al. (2006), but we extend it to binary (case-control) in addition to quantitative phenotypes, we implement improved estimation of relatedness coefficients, and we derive an explicit statistic that generalizes the Armitage test statistic and is fast to compute. TGC also has similarities to EIGENSTRAT (Price et al., 2006), which is a new method based on principal components analysis. The problems of population structure (Clayton et al., 2005) and cryptic relatedness (Voight & Pritchard, 2005) are essentially the same: if patterns of shared ancestry differ between cases and controls, whether distant (coancestry) or recent (cryptic relatedness), false positives can arise and power can be diminished. With large numbers of widely-spaced genetic markers, coancestry can now be measured accurately for each pair of individuals via patterns of allele-sharing. Instead of modelling subpopulations, we work with a coancestry coefficient for each pair of individuals in the study. We explain the relationships between TGC, GC and EIGENSTRAT. We present simulation studies and real data analyses to illustrate the power advantage of TGC in a range of scenarios incorporating both substructure and cryptic relatedness. References Clayton, D. G. et al. (2005) Population structure, differential bias and genomic control in a large-scale case-control association study. Nature Genetics 37(11), November 2005. Devlin, B. & Roeder, K. (1999) Genomic control for association studies. Biometrics 55(4), December 1999. Price, A. L. et al. (2006) Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8), August 2006. Voight, B. J.
& Pritchard, J. K. (2005) Confounding from cryptic relatedness in case-control association studies. Public Library of Science Genetics 1(3), September 2005. Yu, J. et al. (2006) A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics 38(2), February 2006. Hervé Perdry 1 , Marie-Claude Babron 1 , Françoise Clerget-Darpoux 133 INSERM U535 and Univ. Paris Sud, UMR-S 535, Villejuif, France Keywords: Modifier genes, case-parents trios, ordered transmission disequilibrium test A modifying locus is a polymorphic locus, distinct from the disease locus, which leads to differences in the disease phenotype, either by modifying the penetrance of the disease allele, or by modifying the expression of the disease. The effect of such a locus is a clinical heterogeneity that can be reflected by the values of an appropriate covariate, such as the age of onset, or the severity of the disease. We designed the Ordered Transmission Disequilibrium Test (OTDT) to test for a relation between the clinical heterogeneity, expressed by the covariate, and marker genotypes of a candidate gene. The method applies to trio families with one affected child and his parents. Each family member is genotyped at a bi-allelic marker M of a candidate gene. To each of the families is associated a covariate value; the families are ordered on the values of this covariate. Like the TDT (Spielman et al. 1993), the OTDT is based on the observation of the transmission rate T of a given allele at M. The OTDT aims to find a critical value of the covariate which separates the sample of families into two subsamples in which the transmission rates are significantly different. We investigate the power of the method by simulations under various genetic models and covariate distributions. Acknowledgments H Perdry is funded by ARSEP. Pascal Croiseau 1 , Heather Cordell 2 , Emmanuelle Génin 134 INSERM U535 and University Paris Sud, UMR-S535, Villejuif, France 35 Institute of Human Genetics, Newcastle University, UK Keywords: Association, missing data, conditional logistic regression Missing data is an important problem in association studies. Several methods used to test for association require individuals to be genotyped at the full set of markers. Individuals with missing data need to be excluded from the analysis. This can involve a substantial decrease in sample size and a loss of information. If the disease susceptibility locus (DSL) is poorly typed, it is also possible that a marker in linkage disequilibrium gives a stronger association signal than the DSL. One may then falsely conclude that the marker is more likely to be the DSL. We recently developed a multiple imputation method to infer missing data on case-parent trios. Starting from the observed data, a small number of complete data sets are generated by a Markov chain Monte Carlo approach. These complete datasets are analysed using standard statistical packages and the results are combined as described in Little & Rubin (2002). Here we report the results of simulations performed to examine, for different patterns of missing data, how often the true DSL gives the highest association score among different loci in LD. We found that multiple imputation usually correctly detects the DSL site even if the percentage of missing data is high. This is not the case for the naïve approach that consists of discarding trios with missing data.
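The final combination step mentioned above (Little & Rubin, 2002) amounts to Rubin's rules; a minimal sketch, with made-up per-imputation estimates:

```python
from statistics import mean, variance

def combine_rubin(estimates, variances):
    """Combine per-imputation estimates with Rubin's rules.

    Returns the pooled point estimate and its total variance: the average
    within-imputation variance plus the between-imputation variance inflated
    by (1 + 1/m) for m imputed data sets.
    """
    m = len(estimates)
    q_bar = mean(estimates)               # pooled point estimate
    u_bar = mean(variances)               # average within-imputation variance
    b = variance(estimates)               # between-imputation (sample) variance
    return q_bar, u_bar + (1 + 1 / m) * b

# Hypothetical log-odds-ratio estimates from five imputed trio data sets.
est = [0.42, 0.39, 0.47, 0.44, 0.40]
var = [0.010, 0.011, 0.010, 0.012, 0.011]
print(combine_rubin(est, var))
```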
In conclusion, multiple imputation has the advantage of being easy to use and flexible, and is therefore a promising tool in the search for DSLs involved in complex diseases. Salma Kotti 1 , Heike Bickeböller 2 , Françoise Clerget-Darpoux 136 University Paris Sud, UMR-S535, Villejuif, France 37 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany Keywords: Genotype relative risk, internal controls, family-based analyses Family-based analyses using internal controls are very popular both for detecting the effect of a genetic factor and for estimating the relative disease risk on the corresponding genotypes. Two different procedures are often applied to reconstitute internal controls. The first one considers one pseudocontrol genotype formed by the parental non-transmitted alleles, also called 1:1 matching of alleles, while the second corresponds to three pseudocontrols corresponding to all genotypes formed by the parental alleles except that of the case (1:3 matching). Many studies have compared the two procedures in terms of power and have concluded that the difference depends on the underlying genetic model and the allele frequencies. However, the estimation of the Genotype Relative Risk (GRR) under the two procedures has not been studied. Because in the 1:1 matching the control group is composed of the alleles not transmitted to the affected child, whereas in the 1:3 matching the control group also includes alleles already transmitted to the affected child, we expect a difference in the GRR estimation. In fact, we suspect that the second procedure leads to biased estimation of the GRRs. We will analytically derive the GRR estimators for the 1:1 and 1:3 matching and will present the results at the meeting.
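To make the two matching schemes concrete, the sketch below builds the 1:1 and 1:3 pseudocontrol genotypes for one trio at a biallelic marker, assuming the parental origin of the child's alleles is known; this is an illustrative simplification, not the authors' analytical derivation.

```python
from itertools import product

def pseudocontrols(father, mother, transmitted):
    """Pseudocontrol genotypes for a case-parent trio at a biallelic marker.

    father, mother : tuples with the two parental alleles, e.g. ('A', 'a')
    transmitted    : (allele transmitted by the father, allele transmitted by the mother),
                     i.e. the affected child's genotype with known parental origin.
    Returns (one_to_one, one_to_three):
      1:1 matching -> the single genotype formed by the two non-transmitted alleles;
      1:3 matching -> the three other genotypes the parents could have transmitted.
    """
    t_f, t_m = transmitted
    nt_f = list(father)
    nt_f.remove(t_f)                       # father's non-transmitted allele
    nt_m = list(mother)
    nt_m.remove(t_m)                       # mother's non-transmitted allele
    one_to_one = tuple(sorted((nt_f[0], nt_m[0])))

    case = tuple(sorted((t_f, t_m)))
    combos = [tuple(sorted(g)) for g in product(father, mother)]
    combos.remove(case)                    # drop one copy of the case genotype
    return one_to_one, combos

# Both parents heterozygous, child homozygous for allele 'A'.
print(pseudocontrols(('A', 'a'), ('A', 'a'), ('A', 'A')))
# -> (('a', 'a'), [('A', 'a'), ('A', 'a'), ('a', 'a')])
```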
Luigi Palla 1 , David Siegmund 239 Department of Mathematics, Free University Amsterdam, The Netherlands 40 Department of Statistics, Stanford University, California, USA Keywords: TDT, assortative mating, inbreeding, statistical power A substantial amount of Assortative Mating (AM) is often recorded for physical and psychological traits, dichotomous as well as quantitative, that are presumed to have a multifactorial genetic component. In particular, AM has the effect of increasing the genetic variance, even more than inbreeding, because it acts across loci as well as within loci when the trait has a multifactorial origin. Under the assumption of a polygenic model for AM dating back to Wright (1921) and refined by Crow and Felsenstein (1968, 1982), the effect of assortative mating on the power to detect genetic association in the Transmission Disequilibrium Test (TDT) is explored as parameters such as the effective number of genes and the allele frequency vary. The power is reflected by the non-centrality parameter of the TDT and is expressed as a function of the number of trios, the relative risk of the heterozygous genotype and the allele frequency (Siegmund and Yakir, 2007). The non-centrality parameter of the relevant score statistic is updated considering the effect of AM, which is expressed in terms of an 'effective' inbreeding coefficient. In particular, for dichotomous traits it is apparent that the higher the number of genes involved in the trait, the lower the loss in power due to AM. Finally, an attempt is made to extend this relation to the Q-TDT (Rabinowitz, 1997), which involves considering the effect of AM also on the phenotypic variance of the trait of interest, under the assumption that AM affects only its additive genetic component. References Crow & Felsenstein (1968). The effect of assortative mating on the genetic composition of a population. Eugen. Quart. 15, 87–97. Rabinowitz (1997). A transmission disequilibrium test for quantitative trait loci. Human Heredity 47, 342–350. Siegmund & Yakir (2007). Statistics of Gene Mapping, Springer. Wright (1921). Systems of mating. III. Assortative mating based on somatic resemblance. Genetics 6, 144–161. Jérémie Nsengimana 1 , Ben D Brown 2 , Alistair S Hall 2 , Jenny H Barrett 141 Leeds Institute of Molecular Medicine, University of Leeds, UK 42 Leeds Institute for Genetics, Health and Therapeutics, University of Leeds, UK Keywords: Inflammatory genes, haplotype, coronary artery disease Genetic Risk of Acute Coronary Events (GRACE) is an initiative to collect cases of coronary artery disease (CAD) and their unaffected siblings in the UK and to use them to map genetic variants increasing disease risk. The aim of the present study was to test the association between CAD and 51 single nucleotide polymorphisms (SNPs) and their haplotypes from 35 inflammatory genes. Genotype data were available for 1154 persons affected before age 66 (including 48% before age 50) and their 1545 unaffected siblings (891 discordant families). Each SNP was tested for association with CAD, and haplotypes within genes or gene clusters were tested using FBAT (Rabinowitz & Laird, 2000). For the most significant results, genetic effect size was estimated using conditional logistic regression (CLR) within STATA, adjusting for other risk factors. Haplotypes were assigned using HAPLORE (Zhang et al., 2005), which considers all parental mating types consistent with offspring genotypes and assigns them a probability of occurrence.
This probability was used in CLR to weight the haplotypes. In the single SNP analysis, several SNPs showed some evidence for association, including one SNP in the interleukin-1A gene. Analysing haplotypes in the interleukin-1 gene cluster, a common 3-SNP haplotype was found to increase the risk of CAD (P = 0.009). In an additive genetic model adjusting for covariates the odds ratio (OR) for this haplotype is 1.56 (95% CI: 1.16-2.10, p = 0.004) for early-onset CAD (before age 50). This study illustrates the utility of haplotype analysis in family-based association studies to investigate candidate genes. References Rabinowitz, D. & Laird, N. M. (2000) Hum Hered 50, 211–223. Zhang, K., Sun, F. & Zhao, H. (2005) Bioinformatics 21, 90–103. Andrea Foulkes 1 , Recai Yucel 1 , Xiaohong Li 143 Division of Biostatistics, University of Massachusetts, USA Keywords: Haplotype, high-dimensional, mixed modeling The explosion of molecular-level information coupled with large epidemiological studies presents an exciting opportunity to uncover the genetic underpinnings of complex diseases; however, several analytical challenges remain to be addressed. Characterizing the components of complex diseases inevitably requires consideration of synergies across multiple genetic loci and environmental and demographic factors. In addition, it is critical to capture information on allelic phase, that is, whether alleles within a gene are in cis (on the same chromosome) or in trans (on different chromosomes). In association studies of unrelated individuals, this alignment of alleles within a chromosomal copy is generally not observed. We address the potential ambiguity in allelic phase in this high-dimensional data setting using mixed effects models. Both a semi-parametric and a fully likelihood-based approach to estimation are considered to account for missingness in cluster identifiers. In the first case, we apply a multiple imputation procedure coupled with a first-stage expectation-maximization algorithm for parameter estimation. A bootstrap approach is employed to assess sensitivity to variability induced by parameter estimation. Secondly, a fully likelihood-based approach using an expectation conditional maximization algorithm is described. Notably, these models allow for characterizing high-order gene-gene interactions while providing a flexible statistical framework to account for the confounding or mediating role of person-specific covariates. The proposed method is applied to data arising from a cohort of human immunodeficiency virus type-1 (HIV-1) infected individuals at risk for therapy-associated dyslipidemia. Simulation studies demonstrate reasonable power and control of family-wise type I error rates. Vivien Marquard 1 , Lars Beckmann 1 , Jenny Chang-Claude 144 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany Keywords: Genotyping errors, type I error, haplotype-based association methods It has been shown in several simulation studies that genotyping errors may have a great impact on the type I error of statistical methods used in genetic association analysis of complex diseases. Our aim was to investigate type I error rates in a case-control study, when differential and non-differential genotyping errors were introduced in realistic scenarios. We simulated case-control data sets, where individual genotypes were drawn from a haplotype distribution of 18 haplotypes with 15 markers in the APM1 gene.
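A minimal sketch of how random allele-flip errors of this kind can be introduced into simulated haplotypes; the actual error models of Heid et al. (2006) used in the study are more elaborate than this simple symmetric flip.

```python
import random

def add_genotyping_errors(haplotype, error_rate):
    """Flip each 0/1 marker allele to the other allele with probability error_rate.

    A simple symmetric, non-differential error model for illustration only.
    """
    return [1 - a if random.random() < error_rate else a for a in haplotype]

random.seed(1)
clean = [0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # one 15-marker haplotype
print(add_genotyping_errors(clean, error_rate=0.025))
```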
Genotyping errors were introduced following the 'unrestricted' and the 'symmetric with 0 edges' error models described by Heid et al. (2006). In six scenarios, errors resulted from changes of one allele to another with predefined probabilities of 1%, 2.5% or 10%, respectively. Multiple errors per haplotype were possible and could vary between 0 and 15, the number of markers investigated. We examined three association methods: Mantel statistics using haplotype-sharing; a haplotype-specific score test; and the Armitage trend test for single markers. The type I error rates were not affected for any of the three methods at a genotyping error rate of less than 1%. For higher error rates and differential errors, the type I error of the Mantel statistic was only slightly increased and that of the Armitage trend test moderately increased. The type I error rates of the score test were highly increased. The type I error rates were correct for all three methods for non-differential errors. Further investigations will be carried out with different frequencies of differential error rates and will focus on power. Arne Neumann 1 , Dörthe Malzahn 1 , Martina Müller 2 , Heike Bickeböller 145 Department of Genetic Epidemiology, Medical School, University of Göttingen, Germany 46 GSF-National Research Center for Environment and Health, Neuherberg & IBE-Institute of Epidemiology, Ludwig-Maximilians University München, Germany Keywords: Interaction, longitudinal, nonparametric Longitudinal data show the time-dependent course of phenotypic traits. In this contribution, we consider longitudinal cohort studies and investigate the association between two candidate genes and a dependent quantitative longitudinal phenotype. The set-up defines a factorial design which allows us to test simultaneously for the overall gene effect of the loci as well as for possible gene-gene and gene-time interaction. The latter would induce genetically based time-profile differences in the longitudinal phenotype. We adapt a non-parametric statistical test to genetic epidemiological cohort studies and investigate its performance by simulation studies. The statistical test was originally developed for longitudinal clinical studies (Brunner, Munzel, Puri, 1999 J Multivariate Anal 70:286-317). It is non-parametric in the sense that no assumptions are made about the underlying distribution of the quantitative phenotype. Longitudinal observations belonging to the same individual can be arbitrarily dependent on one another for the different time points whereas trait observations of different individuals are independent. The two loci are assumed to be statistically independent. Our simulations show that the nonparametric test is comparable with ANOVA in terms of power of detecting gene-gene and gene-time interaction in an ANOVA-favourable setting. Rebecca Hein 1 , Lars Beckmann 1 , Jenny Chang-Claude 147 Division of Cancer Epidemiology, German Cancer Research Center (DKFZ) Heidelberg, Germany Keywords: Indirect association studies, interaction effects, linkage disequilibrium, marker allele frequency Association studies accounting for gene-environment interactions (GxE) may be useful for detecting genetic effects and identifying important environmental effect modifiers. Current technology facilitates very dense marker spacing in genetic association studies; however, the true disease variant(s) may not be genotyped.
In this situation, an association between a gene and a phenotype may still be detectable using genetic markers associated with the true disease variant(s) (indirect association). Zondervan and Cardon [2004] showed that the odds ratios (OR) of markers which are associated with the disease variant depend strongly on the linkage disequilibrium (LD) between the variant and the markers and on whether the allele frequencies match, and thereby influence the sample size needed to detect genetic association. We examined the influence of LD and allele frequencies on the sample size needed to detect GxE in indirect association studies, and provide tables for sample size estimation. For discordant allele frequencies and incomplete LD, sample sizes can be unfeasibly large. The influence of both factors is stronger for disease loci with small rather than moderate to high disease allele frequencies. A decline in D' of e.g. 5% has less impact on sample size than increasing the difference in allele frequencies by the same percentage. Assuming 80% power, large interaction effects can be detected using smaller sample sizes than those needed for the detection of main effects. The detection of interaction effects involving rare alleles may not be possible. Focussing only on marker density can be a limited strategy in indirect association studies for GxE. Cyril Dalmasso 1 , Emmanuelle Génin 2 , Catherine Bourgain 2 , Philippe Broët 148 JE 2492, Univ. Paris-Sud, France 49 INSERM UMR-S 535 and University Paris Sud, Villejuif, France Keywords: Linkage analysis, Multiple testing, False Discovery Rate, Mixture model In the context of genome-wide linkage analyses, where a large number of statistical tests are simultaneously performed, the False Discovery Rate (FDR), defined as the expected proportion of false discoveries among all discoveries, is nowadays widely used to take the multiple testing problem into account. Other related criteria have been considered, such as the local False Discovery Rate (lFDR), a variant of the FDR that gives each test its own measure of significance. The lFDR is defined as the posterior probability that a null hypothesis is true. Most of the proposed methods for estimating the lFDR or the FDR rely on distributional assumptions under the null hypothesis. However, in observational studies, the empirical null distribution may be very different from the theoretical one. In this work, we propose a mixture model based approach that provides estimates of the lFDR and the FDR in the context of large-scale variance component linkage analyses. In particular, this approach allows estimating the empirical null distribution, the latter being a key quantity for any simultaneous inference procedure. The proposed method is applied to a real dataset. Arief Gusnanto 1 , Frank Dudbridge 150 MRC Biostatistics Unit, Cambridge UK Keywords: Significance, genome-wide, association, permutation, multiplicity Genome-wide association scans have introduced statistical challenges, mainly in the multiplicity of thousands of tests. The question of what constitutes a significant finding remains somewhat unresolved. Permutation testing is very time-consuming, whereas Bayesian arguments struggle to distinguish direct from indirect association. It seems attractive to summarise the multiplicity in a simple form that allows users to avoid time-consuming permutations. A standard significance level would facilitate reporting of results and reduce the need for permutation tests.
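For context, the permutation approach that such a standard threshold would largely replace can be sketched as follows; the squared-correlation statistic is a cheap stand-in for a real association test, and the data layout is an assumption made for illustration.

```python
import numpy as np

def familywise_threshold(genotypes, phenotype, n_perm=1000, alpha=0.05, seed=0):
    """Permutation estimate of a family-wise significant per-marker threshold.

    genotypes : (individuals x markers) array of allele counts, no monomorphic markers
    phenotype : (individuals,) outcome vector
    Returns the (1 - alpha) quantile of the maximum squared marker-phenotype
    correlation across markers under permuted phenotypes.
    """
    rng = np.random.default_rng(seed)
    g = (genotypes - genotypes.mean(0)) / genotypes.std(0)
    n = len(phenotype)

    def max_stat(y):
        yc = (y - y.mean()) / y.std()
        return np.max((g.T @ yc / n) ** 2)     # max squared correlation over markers

    null_max = [max_stat(rng.permutation(phenotype)) for _ in range(n_perm)]
    return float(np.quantile(null_max, 1 - alpha))

# Toy data: 200 individuals, 500 markers, a quantitative phenotype.
rng = np.random.default_rng(1)
geno = rng.binomial(2, 0.3, size=(200, 500)).astype(float)
pheno = rng.normal(size=200)
print(familywise_threshold(geno, pheno, n_perm=200))
```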
This is potentially important because current scans do not have full coverage of the whole genome, and yet the implicit multiplicity is genome-wide. We discuss some proposed summaries, with reference to the empirical null distribution of the multiple tests, approximated through a large number of random permutations. Using genome-wide data from the Wellcome Trust Case-Control Consortium, we use a sub-sampling approach with increasing density to estimate the nominal p-value needed to obtain family-wise significance of 5%. The results indicate that the significance level is converging to about 1e-7 as the marker spacing becomes infinitely dense. We considered the concept of an effective number of independent tests, and showed that when used in a Bonferroni correction, the number varies with the overall significance level, but is roughly constant in the region of interest. We compared several estimators of the effective number of tests, and showed that in the region of significance of interest, Patterson's eigenvalue-based estimator gives approximately the right family-wise error rate. Michael Nothnagel 1 , Amke Caliebe 1 , Michael Krawczak 151 Institute of Medical Informatics and Statistics, University Clinic Schleswig-Holstein, University of Kiel, Germany Keywords: Association scans, Bayesian framework, posterior odds, genetic risk, multiplicative model Whole-genome association scans have been suggested to be a cost-efficient way to survey genetic variation and to map genetic disease factors. We used a Bayesian framework to investigate the posterior odds of a genuine association under multiplicative disease models. We demonstrate that the p value alone is not a sufficient means to evaluate the findings in association studies. We suggest that likelihood ratios should accompany p values in association reports. We argue that, given the reported results of whole-genome scans, more associations should have been successfully replicated if the consistently made assumptions about considerable genetic risks were correct. We conclude that it is very likely that the vast majority of relative genetic risks are only of the order of 1.2 or lower. Clive Hoggart 1 , Maria De Iorio 1 , John Whittaker 2 , David Balding 152 Department of Epidemiology and Public Health, Imperial College London, UK 53 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK Keywords: Genome-wide association analyses, shrinkage priors, Lasso Testing one SNP at a time does not fully realise the potential of genome-wide association studies to identify multiple causal variants of small effect, which is a plausible scenario for many complex diseases. Moreover, many simulation studies assume a single causal variant and so more complex realities are ignored. Analysing large numbers of variants simultaneously is now becoming feasible, thanks to developments in Bayesian stochastic search methods. We pose the problem of SNP selection as variable selection in a regression model. In contrast to single SNP tests this approach simultaneously models the effect of all SNPs. SNPs are selected by a Bayesian interpretation of the lasso (Tibshirani, 1996); the maximum a posteriori (MAP) estimate of the regression coefficients, which are given independent double-exponential prior distributions. The double-exponential distribution is an example of a shrinkage prior; MAP estimates with shrinkage priors can be exactly zero, so all SNPs with non-zero regression coefficients are selected.
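The MAP estimate under independent Laplace priors mentioned here coincides with the lasso; a minimal sketch using an off-the-shelf L1-penalised regression on simulated allele counts (the penalty value and the causal effect sizes are arbitrary illustrations, not the authors' settings).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n, p = 500, 200                                         # individuals, SNPs (toy scale)
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)     # allele counts 0/1/2
beta = np.zeros(p)
beta[[10, 50, 120]] = [0.5, -0.4, 0.6]                  # three hypothetical causal SNPs
y = X @ beta + rng.normal(size=n)

# L1 penalty = MAP estimation under independent double-exponential (Laplace) priors.
model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)                  # SNPs with non-zero coefficients
print("selected SNPs:", selected)
```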
In addition to the commonly used double-exponential (Laplace) prior, we also implement the normal-exponential-gamma prior distribution. We show that use of the Laplace prior improves SNP selection in comparison with single-SNP tests, and that the normal-exponential-gamma prior leads to a further improvement. Our method is fast and can handle very large numbers of SNPs: we demonstrate its performance using both simulated and real genome-wide data sets with 500K SNPs, which can be analysed in 2 hours on a desktop workstation. Mickael Guedj 1,2 , Jerome Wojcik 2 , Gregory Nuel 154 Laboratoire Statistique et Génome, Université d'Evry, Evry France 55 Serono Pharmaceutical Research Institute, Plan-les-Ouates, Switzerland Keywords: Local Replication, Local Score, Association In gene-mapping, replication of initial findings has been put forward as the approach of choice for filtering false positives from true signals at underlying loci. In practice, however, such replications are observed too rarely. Besides statistical and technical factors (lack of power, multiple testing, stratification, quality control, ...), inconsistent conclusions obtained from independent populations might result from real biological differences. In particular, the high degree of variation in the strength of LD among populations of different origins is a major challenge to the discovery of genes. Seeking Local Replications (defined as the presence of a signal of association in the same genomic region across populations) instead of strict replications (same locus, same risk allele) may lead to more reliable results. Recently, a multi-marker approach based on the Local Score statistic has been proposed as a simple and efficient way to select candidate genomic regions at the first stage of genome-wide association studies. Here we propose an extension of this approach adapted to replicated association studies. Based on simulations, this method appears promising. In particular, it outperforms classical single-marker strategies in detecting modest-effect genes. Additionally, it constitutes, to our knowledge, a first framework dedicated to the detection of such Local Replications. Juliet Chapman 1 , Claudio Verzilli 1 , John Whittaker 156 Department of Epidemiology and Public Health, London School of Hygiene and Tropical Medicine, UK Keywords: FDR, Association studies, Bayesian model selection As genome-wide association studies become commonplace, there is debate as to how such studies might be analysed and what we might hope to gain from the data. It is clear that standard single locus approaches are limited, in that they do not adjust for the effects of other loci, and problematic, since it is not obvious how to adjust for multiple comparisons. False discovery rates have been suggested, but it is unclear how well these will cope with highly correlated genetic data. We consider the validity of standard false discovery rates in large-scale association studies. We also show that a Bayesian procedure has advantages in detecting causal loci amongst a large number of dependent SNPs and investigate properties of a Bayesian FDR.
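For reference, the standard FDR procedure whose behaviour with correlated SNPs is being questioned is the Benjamini-Hochberg step-up rule; a minimal sketch:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: indices of rejected hypotheses.

    Controls the FDR at level q under independence; its behaviour under the
    strong correlation typical of dense SNP data is exactly what is at issue above.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    n_reject = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= q * rank / m:
            n_reject = rank
    return sorted(order[:n_reject])

pvals = [0.0001, 0.004, 0.03, 0.2, 0.5, 0.8]
print(benjamini_hochberg(pvals, q=0.05))   # -> [0, 1]
```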
Peter Kraft 157 Harvard School of Public Health, Boston USA Keywords: Gene-environment interaction, genome-wide association scans Appropriately analyzed two-stage designs, where a subset of available subjects is genotyped on a genome-wide panel of markers at the first stage and then a much smaller subset of the most promising markers is genotyped on the remaining subjects, can have nearly as much power as a single-stage study where all subjects are genotyped on the genome-wide panel, yet can be much less expensive. Typically, the "most promising" markers are selected based on evidence for a marginal association between genotypes and disease. Subsequently, the few markers found to be associated with disease at the end of the second stage are interrogated for evidence of gene-environment interaction, mainly to understand their impact on disease etiology and public health impact. However, this approach may miss variants which have a sizeable effect restricted to one exposure stratum and therefore only a modest marginal effect. We have proposed to use information on the joint effects of genes and a discrete list of environmental exposures at the initial screening stage to select promising markers for the second stage [Kraft et al., Hum Hered 2007]. This approach optimizes power to detect variants that have a sizeable marginal effect and variants that have a small marginal effect but a sizeable effect in a stratum defined by an environmental exposure. As an example, I discuss a proposed genome-wide association scan for Type II diabetes susceptibility variants based on several large nested case-control studies. Beate Glaser 1 , Peter Holmans 158 Biostatistics and Bioinformatics Unit, Cardiff University, School of Medicine, Heath Park, Cardiff, UK Keywords: Combined case-control and trios analysis, Power, False-positive rate, Simulation, Association studies The statistical power of genetic association studies can be enhanced by combining the analysis of case-control with parent-offspring trio samples. Various combined analysis techniques have been developed recently; as yet, there have been no comparisons of their power. This work was performed with the aim of identifying the most powerful method among available combined techniques, including test statistics developed by Kazeem and Farrall (2005), Nagelkerke and colleagues (2004) and Dudbridge (2006), as well as a simple combination of χ2-statistics from single samples. Simulation studies were performed to investigate their power under different additive, multiplicative, dominant and recessive disease models. False-positive rates were determined by studying the type I error rates under null models, including models with unequal allele frequencies between the case-control and trio samples. We identified three techniques with equivalent power and false-positive rates, which included modifications of the three main approaches: 1) the unmodified combined odds ratio estimate by Kazeem & Farrall (2005), 2) a modified approach of the combined risk ratio estimate by Nagelkerke & colleagues (2004) and 3) a modified technique for a combined risk ratio estimate by Dudbridge (2006). Our work highlights the importance of studies investigating test performance criteria of novel methods, as they will help users to select the optimal approach within a range of available analysis techniques. David Almorza 1 , M.V.
Kandus 2 , Juan Carlos Salerno 2 , Rafael Boggio 359 Facultad de Ciencias del Trabajo, University of Cádiz, Spain 60 Instituto de Genética IGEAF, Buenos Aires, Argentina 61 Universidad Nacional de La Plata, Buenos Aires, Argentina Keywords: Principal component analysis, maize, ear weight, inbred lines The objective of this work was to evaluate the relationship among different traits of the ear of maize inbred lines and to group genotypes according to their performance. Ten inbred lines developed at IGEAF (INTA Castelar) and five public inbred lines as checks were used. A field trial was carried out in Castelar, Buenos Aires (34° 36' S, 58° 39' W) using a completely randomized design with three replications. At harvest, individual weight (P.E.), diameter (D.E.), row number (N.H.) and length (L.E.) of the ear were assessed. A principal component analysis, PCA (Infostat 2005), was used, and the variability of the data was depicted with a biplot. Principal components 1 and 2 (CP1 and CP2) explained 90% of the data variability. CP1 was correlated with P.E., L.E. and D.E., while CP2 was correlated with N.H. We found that individual weight (P.E.) was more correlated with diameter of the ear (D.E.) than with length (L.E.). Five groups of inbred lines were distinguished: with high P.E. and mean N.H. (04-70, 04-73, 04-101 and MO17), with high P.E. but less N.H. (04-61 and B14), with mean P.E. and N.H. (B73, 04-123 and 04-96), with high N.H. but less P.E. (LP109, 04-8, 04-91 and 04-76) and with low P.E. and low N.H. (LP521 and 04-104). The use of PCA showed which variables had the greatest influence on ear weight and how they were correlated. Moreover, the different groups found with this analysis allow the evaluation of inbred lines by several traits simultaneously. Sven Knüppel 1 , Anja Bauerfeind 1 , Klaus Rohde 162 Department of Bioinformatics, MDC Berlin, Germany Keywords: Haplotypes, association studies, case-control, nuclear families The era of gene chip technology provides a plethora of phase-unknown SNP genotypes in order to find significant association to some genetic trait. To circumvent the possibly low information content of a single SNP, one groups successive SNPs and estimates haplotypes. Haplotype estimation, however, may reveal ambiguous haplotype pairs and bias the application of statistical methods. Zaykin et al. (Hum Hered, 53:79-91, 2002) proposed the construction of a design matrix to take this ambiguity into account. Here we present a set of functions written for the statistical package R, which carry out haplotype estimation on the basis of the EM algorithm for individuals (case-control) or nuclear families. The construction of a design matrix on the basis of estimated haplotypes or haplotype pairs allows the application of standard methods for association studies (linear and logistic regression), as well as methods such as haplotype-sharing statistics and the TDT test. Applications of these methods to genome-wide association screens will be demonstrated. Manuela Zucknick 1 , Chris Holmes 2 , Sylvia Richardson 163 Department of Epidemiology and Public Health, Imperial College London, UK 64 Department of Statistics, Oxford Center for Gene Function, University of Oxford, UK Keywords: Bayesian, variable selection, MCMC, large p, small n, structured dependence In large-scale genomic applications vast numbers of markers or genes are scanned to find a few candidates which are linked to a particular phenotype.
Statistically, this is a variable selection problem in the "large p, small n" situation where many more variables than samples are available. An additional feature is the complex dependence structure which is often observed among the markers/genes due to linkage disequilibrium or their joint involvement in biological processes. Bayesian variable selection methods using indicator variables are well suited to the problem. Binary phenotypes like disease status are common and both Bayesian probit and logistic regression can be applied in this context. We argue that logistic regression models are both easier to tune and to interpret than probit models and implement the approach by Holmes & Held (2006). Because the model space is vast, MCMC methods are used as stochastic search algorithms with the aim of quickly finding regions of high posterior probability. In a trade-off between fast-updating but slow-moving single-gene Metropolis-Hastings samplers and computationally expensive full Gibbs sampling, we propose to employ the dependence structure among the genes/markers to help decide which variables to update together. Also, parallel tempering methods are used to aid bold moves and help avoid getting trapped in local optima. Mixing and convergence of the resulting Markov chains are evaluated and compared to standard samplers in both a simulation study and in an application to a gene expression data set. Reference Holmes, C. C. & Held, L. (2006) Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis 1, 145–168. Dawn Teare 165 MMGE, University of Sheffield, UK Keywords: CNP, family-based analysis, MCMC Evidence is accumulating that segmental copy number polymorphisms (CNPs) may represent a significant portion of human genetic variation. These highly polymorphic systems require handling as phenotypes rather than co-dominant markers, placing new demands on family-based analyses. We present an integrated approach to meet these challenges in the form of a graphical model, where the underlying discrete CNP phenotype is inferred from the (single or replicate) quantitative measure within the analysis, whilst assuming an allele-based system segregating through the pedigree. [source] Ceramic-On-Metal for Total Hip Replacement: Mixing and Matching Can Lead to High Wear ARTIFICIAL ORGANS, Issue 4 2010 Saverio Affatato Abstract Ceramic-on-ceramic and metal-on-metal bearing surfaces are often employed for total hip replacement because of their resistance to wear. However, they have some limits: brittleness is a major concern for ceramic, and ion release is a drawback for metal. To reduce the effect of these limitations, a hybrid coupling of ceramic-on-metal has been proposed. The theoretical advantage of this new coupling might lead orthopedic surgeons to use it indiscriminately. We asked whether the wear rate of this innovative solution was comparable with that of ceramic-on-ceramic, which is considered to be the gold standard for wear resistance. In a hip simulator study, we tested the wear pattern of a hybrid ceramic-on-metal coupling supplied by the same distributor; in particular, three different configurations were tested for 5 million cycles: 36-mm ceramic-on-ceramic, 32-mm and 36-mm ceramic-on-metal. These combinations were gravimetrically and geometrically evaluated.
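The gravimetric measurements mentioned here are typically converted to volumetric loss by dividing the mass loss by the density of the bearing material; a back-of-the-envelope sketch with approximate densities and hypothetical mass losses (not the values measured in the study).

```python
def volumetric_loss_mm3(mass_loss_mg, density_g_cm3):
    """Convert a gravimetric wear measurement (mass loss in mg) to volume in mm^3.

    1 g/cm^3 equals 1 mg/mm^3, so the volume in mm^3 is the mass in mg divided
    by the density.
    """
    return mass_loss_mg / density_g_cm3

# Approximate densities: CoCrMo alloy ~8.3 g/cm^3, alumina ceramic ~3.9 g/cm^3.
print(volumetric_loss_mm3(36.0, 8.3))   # ~4.3 mm^3 for a hypothetical 36 mg metal loss
print(volumetric_loss_mm3(1.0, 3.9))    # ~0.26 mm^3 for a hypothetical 1 mg ceramic loss
```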
After 5 million cycles, the volumetric loss for the metal acetabular cups (Ø 36 mm) was 20-fold greater than that of the ceramic cups of the same size (Ø 36 mm); a volumetric loss of 4.35 mm3 and 0.26 mm3 was observed, respectively, for the ceramic-on-metal and ceramic-on-ceramic combinations. Statistically significant differences were observed between the different 36-mm combinations (P < 0.0001). The increased diameter of the 36-mm ceramic-on-metal configuration resulted in a lower volumetric loss compared with that of the 32-mm ceramic-on-metal configuration. Our findings showed an increase in wear for the proposed hybrid specimens with respect to that of the ceramic-on-ceramic ones. This confirms that, even in the case of ceramic-on-metal bearings, mixing and matching does not guarantee effective wear behavior, let alone behavior comparable with that of the ceramic-on-ceramic gold standard. [source] Effect of radial angle on mixing time for a double jet mixer ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 3 2010 P. Manjula Abstract Mixing is one of the common unit operations employed in chemical industries. It is used for blending of liquids, flocculation, homogenization of mixtures, ensuring proper heat and mass transfer in various operations, prevention of deposition of solid particles, etc. Earlier research focused on the experimental estimation of mixing time and on proposing suitable correlations for its prediction; more recent work has focused on flow visualization. However, most of the results reported in the literature deal with liquid flow with multiple jets, whereas the effect of radial angle on mixing time has not been studied. This study describes the effect of radial angle on mixing time as determined by experiment and simulation. Computational fluid dynamics (CFD) modeling was carried out for a jet mixing tank having two jets for a water–water system. The nozzle configuration for jet1 was fixed on the basis of our earlier studies (2/3rd position, flow rate 9 L/min, nozzle angle 45° and nozzle diameter 10 mm). Mixing times were estimated for different jet2 configurations (jet angle 30°, 45° and 60°; radial angles 60°, 120°, 180°) located at different tank heights (2/3rd and 1/3rd from the bottom of the tank). The results obtained for mixing time for jet mixing in a tank with two jets are analyzed, and a suitable nozzle angle, radial angle and position are proposed for jet2 of the jet mixer considered in the present study. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source] Photosynthetic efficiency of Chlorella sorokiniana in a turbulently mixed short light-path photobioreactor BIOTECHNOLOGY PROGRESS, Issue 3 2010 Anna M. J. Kliphuis Abstract To be able to study the effect of mixing as well as any other parameter on the productivity of algal cultures, we designed a lab-scale photobioreactor in which a short light path (SLP) of 12 mm is combined with controlled mixing and aeration. Mixing is provided by rotating an inner tube in the cylindrical cultivation vessel, creating Taylor vortex flow, and as such mixing can be uncoupled from aeration. Gas exchange is monitored on-line to gain insight into growth and productivity. The maximal productivity, hence photosynthetic efficiency, of Chlorella sorokiniana cultures at high light intensities (1,500 µmol m−2 s−1) was investigated in this Taylor vortex flow SLP photobioreactor. We performed duplicate batch experiments at three different mixing rates: 70, 110, and 140 rpm, all in the turbulent Taylor vortex flow regime.
For the mixing rate of 140 rpm, we calculated a quantum requirement for oxygen evolution of 21.2 mol PAR photons per mol O2 and a yield of biomass on light energy of 0.8 g biomass per mol PAR photons. The maximal photosynthetic efficiency was found at relatively low biomass densities (2.3 g L−1) at which light was just attenuated before reaching the rear of the culture. When increasing the mixing rate twofold, we only found a small increase in productivity. On the basis of these results, we conclude that the maximal productivity and photosynthetic efficiency for C. sorokiniana can be found at that biomass concentration where no significant dark zone can develop and that the influence of mixing-induced light/dark fluctuations is marginal. © 2010 American Institute of Chemical Engineers Biotechnol. Prog., 2010 [source]
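A back-of-the-envelope version of the yield-on-light figure reported above can be computed from the photon flux, the illuminated surface area and the biomass formed; all numbers in the example are illustrative assumptions, not the experimental values.

```python
def yield_on_light(biomass_produced_g, photon_flux_umol_m2_s, illuminated_area_m2, duration_h):
    """Biomass yield on light in g dry weight per mol PAR photons supplied."""
    photons_mol = photon_flux_umol_m2_s * 1e-6 * illuminated_area_m2 * duration_h * 3600
    return biomass_produced_g / photons_mol

# e.g. 10 g of biomass formed over 24 h at 1,500 umol m-2 s-1 on 0.1 m2 of reactor surface
print(yield_on_light(10.0, 1500.0, 0.1, 24.0))   # ~0.77 g per mol photons
```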