Major Bottleneck

Selected Abstracts


Sparsely Precomputing The Light Transport Matrix for Real-Time Rendering

COMPUTER GRAPHICS FORUM, Issue 4 2010
Fu-Chung Huang
Precomputation-based methods have enabled real-time rendering with natural illumination, all-frequency shadows, and global illumination. However, a major bottleneck is the precomputation time, which can take hours to days. While the final real-time data structures are typically heavily compressed with clustered principal component analysis and/or wavelets, a full light transport matrix still needs to be precomputed for a synthetic scene, often by exhaustive sampling and raytracing. This is expensive and makes rapid prototyping of new scenes prohibitive. In this paper, we show that the precomputation can be made much more efficient by adaptive and sparse sampling of light transport. We first select a small subset of "dense vertices", where we sample the angular dimensions more completely (but still adaptively). The remaining "sparse vertices" require only a few angular samples, isolating features of the light transport. They can then be interpolated from nearby dense vertices using locally low-rank approximations. We demonstrate sparse sampling and precomputation 5× faster than previous methods. [source]


Increasing data reuse of sparse algebra codes on simultaneous multithreading architectures

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2009
J. C. Pichel
Abstract In this paper the locality problem of sparse algebra codes on simultaneous multithreading (SMT) architectures is studied. In these kinds of architectures many hardware structures are dynamically shared among the running threads. This puts a lot of stress on the memory hierarchy, and poor locality, both inter-thread and intra-thread, may become a major bottleneck in the performance of a code. This behavior is even more pronounced when the code is irregular, as is the case for sparse matrix codes. Therefore, techniques that increase the locality of irregular codes on SMT architectures are important for achieving high performance. This paper proposes a data reordering technique specially tuned for these kinds of architectures and codes. It is based on a locality model developed by the authors in previous works. The technique has been tested, first, using a simulator of an SMT architecture, and subsequently, on a real architecture, Intel's Hyper-Threading. Important reductions in the number of cache misses have been achieved, even when the number of running threads grows. When applying the locality improvement technique, we also decrease the total execution time and improve the scalability of the code. Copyright © 2009 John Wiley & Sons, Ltd. [source]
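The irregular access pattern the abstract refers to is visible in a basic CSR sparse matrix-vector product, where the indexed gather into the dense vector defeats spatial locality; reordering rows/columns so that nonzeros cluster is what improves cache behavior. A sketch of the baseline kernel only (the authors' model-based reordering is not reproduced here):

```python
import numpy as np

def spmv_csr(indptr, indices, data, x):
    """y = A @ x for a matrix in Compressed Sparse Row form.
    The gather x[indices[lo:hi]] jumps irregularly through memory;
    this is the locality problem that data reordering tries to reduce."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

# A = [[1, 0, 2],
#      [0, 3, 0],
#      [4, 0, 5]]  stored in CSR form:
indptr = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(spmv_csr(indptr, indices, data, np.array([1.0, 1.0, 1.0])))  # [3. 3. 9.]
```

On an SMT core, several threads running this kernel share the same caches, so scattered `indices` patterns from different threads compound each other, which is why inter-thread locality matters as well.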


Neuropharmaceuticals in the environment: Mianserin-induced neuroendocrine disruption in zebrafish (Danio rerio) using cDNA microarrays

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 10 2006
Karlijn van der Ven
Abstract Because of their environmental occurrence and high biological activity, human pharmaceuticals have received increasing attention from environmental and health agencies. A major bottleneck in their risk assessment is the lack of relevant and specific effect data. We developed an approach using gene expression analysis in quantifying adverse effects of neuroendocrine pharmaceuticals in the environment. We studied effects of mianserin on zebrafish (Danio rerio) gene expression using a brain-specific, custom microarray, with real-time polymerase chain reaction as confirmation. After exposure (0, 25, and 250 µg/L) for 2, 4, and 14 d, RNA was extracted from brain tissue and used for microarray hybridization. In parallel, we investigated the impact of exposure on egg production, fertilization, and hatching. After 2 d of exposure, microarray analysis showed a clear effect of mianserin on important neuroendocrine-related genes (e.g., aromatase and estrogen receptor), indicating that antidepressants can modulate neuroendocrine processes. This initial neuroendocrine effect was followed by a "late gene expression effect" on neuronal plasticity, supporting the current concept regarding the mode of action for antidepressants in mammals. Clear adverse effects on egg viability were seen after 14 d of exposure at the highest concentration tested. Based on the specific molecular impact and the effects on reproduction, we conclude that further investigation of the adverse effects on the brain-liver-gonad axis is needed for a correct ecological risk assessment of antidepressants. [source]


In search for more accurate alignments in the twilight zone

PROTEIN SCIENCE, Issue 7 2002
Lukasz Jaroszewski
Abstract A major bottleneck in comparative modeling is alignment quality; this is especially true for proteins whose distant relationships could be reliably recognized only by recent advances in fold recognition. The best algorithms excel in recognizing distant homologs but often produce incorrect alignments for over 50% of protein pairs in large fold-prediction benchmarks. The alignments obtained by sequence-sequence or sequence-structure matching algorithms differ significantly from the structural alignments. To study this problem, we developed a simplified method to explicitly enumerate all possible alignments for a pair of proteins. This allowed us to estimate, for a given scoring method, the number of significantly different alignments that score better than the structural alignment. Using several examples of distantly related proteins, we show that for standard sequence-sequence alignment methods, the number of significantly different alignments is usually large, often about 10¹⁰ alternatives. This number decreases when the alignment method is improved, but it is still too large for the brute-force enumeration approach. More effective strategies were needed, so we evaluated and compared two well-known approaches for searching the space of suboptimal alignments. We combined their best features and produced a hybrid method, which yielded alignments that surpassed the original alignments for about 50% of protein pairs with minimal computational effort. [source]
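The scale of the space being enumerated can be checked directly: the number of global alignments of two sequences satisfies a simple three-way recurrence (the Delannoy numbers) and explodes combinatorially. This is an illustrative count, not the paper's enumeration method:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def n_alignments(m, n):
    """Number of global alignments of sequences of lengths m and n:
    each alignment column consumes a residue from the first sequence,
    from the second, or from both (Delannoy-number recurrence)."""
    if m == 0 or n == 0:
        return 1
    return (n_alignments(m - 1, n)
            + n_alignments(m, n - 1)
            + n_alignments(m - 1, n - 1))

print(n_alignments(10, 10))  # 8097453 already for two 10-residue sequences
```

For realistic protein lengths the count dwarfs the ~10¹⁰ "significantly different" alternatives the abstract cites, which is why suboptimal-alignment search strategies are needed instead of brute force.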


Automated reporting from gel-based proteomics experiments using the open source Proteios database application

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 5 2007
Fredrik Levander Dr.
Abstract The assembly of data from the different parts of a proteomics workflow is often a major bottleneck. Furthermore, there is an increasing demand for the publication of details about protein identifications, owing to the problems of false-positive and false-negative identifications. In this report, we describe how the open-source Proteios software has been expanded to automate the assembly of the different parts of a gel-based proteomics workflow. In Proteios it is possible to generate protein identification reports that contain all the information currently required by proteomics journals. It is also possible for the user to specify a maximum allowed false-positive ratio, and reports are automatically generated with the corresponding score cut-offs calculated. When protein identification is conducted using multiple search engines, the score thresholds that correspond to the predetermined error rate are also explicitly calculated for proteins that appear on the result lists of more than one search engine. [source]
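The score-cutoff calculation described, choosing a threshold so the false-positive ratio stays within a user-set maximum, can be sketched with the standard target-decoy estimate. This is a hypothetical helper for illustration, not Proteios' actual API:

```python
def score_cutoff(target_scores, decoy_scores, max_fdr):
    """Return the most permissive score threshold whose estimated FDR
    (decoy matches >= t divided by target matches >= t) does not
    exceed max_fdr, or None if no threshold qualifies."""
    best = None
    for t in sorted(set(target_scores), reverse=True):
        n_target = sum(s >= t for s in target_scores)
        n_decoy = sum(s >= t for s in decoy_scores)
        if n_target and n_decoy / n_target <= max_fdr:
            best = t  # keep lowering the cutoff while the FDR stays acceptable
    return best

targets = [10, 9, 8, 7, 6, 5, 4, 3]   # search-engine scores, target database
decoys = [6, 4, 3, 2]                 # scores from the decoy database
print(score_cutoff(targets, decoys, 0.10))  # 7
```

With multiple search engines, the same calculation is simply repeated per engine on the subset of shared proteins, giving one explicit threshold per result list.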


An automatable screen for the rapid identification of proteins amenable to refolding

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 6 2006
Nathan P. Cowieson Dr.
Abstract Insoluble expression of heterologous proteins in Escherichia coli is a major bottleneck of many structural genomics and high-throughput protein biochemistry projects. Many of these proteins may be amenable to refolding, but their identification is hampered by a lack of high-throughput methods. We have developed a matrix-assisted refolding approach in which correctly folded proteins are distinguished from misfolded proteins by their elution from affinity resin under non-denaturing conditions. Misfolded proteins remain adhered to the resin, presumably via hydrophobic interactions. The assay can be applied to insoluble proteins on an individual basis but is particularly well suited to high-throughput applications because it is rapid, automatable, and has no rigorous sample preparation requirements. The efficacy of the screen is demonstrated on small-scale expression samples for 15 proteins. Refolding is then validated on large-scale expressions using size-exclusion chromatography (SEC) and circular dichroism. [source]


Hierarchical modeling of genome-wide Short Tandem Repeat (STR) markers infers Native American prehistory

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 2 2010
Cecil M. Lewis Jr.
Abstract This study examines a genome-wide dataset of 678 Short Tandem Repeat loci characterized in 444 individuals representing 29 Native American populations as well as the Tundra Nentsi and Yakut populations from Siberia. Using these data, the study tests four current hypotheses regarding the hierarchical distribution of neutral genetic variation in native South American populations: (1) the western region of South America harbors more variation than the eastern region of South America, (2) Central American and western South American populations cluster exclusively, (3) populations speaking the Chibchan-Paezan and Equatorial-Tucanoan language stock emerge as a group within an otherwise South American clade, (4) Chibchan-Paezan populations in Central America emerge together at the tips of the Chibchan-Paezan cluster. This study finds that hierarchical models with the best fit place Central American populations, and populations speaking the Chibchan-Paezan language stock, at a basal position or separated from the South American group, which is more consistent with a serial founder effect into South America than previously described. Western (Andean) South America is found to harbor similar levels of variation as eastern (Equatorial-Tucanoan and Ge-Pano-Carib) South America, which is inconsistent with an initial west coast migration into South America. Moreover, in all relevant models, the estimates of genetic diversity within geographic regions suggest a major bottleneck or founder effect occurring within the North American subcontinent, before the peopling of Central and South America. Am J Phys Anthropol 2010. © 2009 Wiley-Liss, Inc. [source]


Recombinant protein expression and solubility screening in Escherichia coli: a comparative study

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 10 2006
Nick S. Berrow
Producing soluble proteins in Escherichia coli is still a major bottleneck for structural proteomics. Therefore, screening for soluble expression on a small scale is an attractive way of identifying constructs that are likely to be amenable to structural analysis. A variety of expression-screening methods have been developed within the Structural Proteomics In Europe (SPINE) consortium and to assist the further refinement of such approaches, eight laboratories participating in the network have benchmarked their protocols. For this study, the solubility profiles of a common set of 96 His6-tagged proteins were assessed by expression screening in E. coli. The level of soluble expression for each target was scored according to estimated protein yield. By reference to a subset of the proteins, it is demonstrated that the small-scale result can provide a useful indicator of the amount of soluble protein likely to be produced on a large scale (i.e. sufficient for structural studies). In general, there was agreement between the different groups as to which targets were not soluble and which were the most soluble. However, for a large number of the targets there were wide discrepancies in the results reported from the different screening methods, which is correlated with variations in the procedures and the range of parameters explored. Given finite resources, it appears that the question of how to most effectively explore 'expression space' is similar to several other multi-parameter problems faced by crystallographers, such as crystallization. [source]


A novel microplate-based screening strategy to assess the cellulolytic potential of Trichoderma strains

BIOTECHNOLOGY & BIOENGINEERING, Issue 3 2010
Stefano Cianchetta
Abstract Bioconversion of lignocellulosic biomass to fuel requires a hydrolysis step to obtain fermentable sugars, generally accomplished by fungal enzymes. An assorted library of cellulolytic microbial strains should facilitate the development of optimal enzyme cocktails specific for locally available feedstocks. Only a limited number of strains can be simultaneously assayed in screening based on large-volume cultivation methods, as in shake flasks. This study describes a miniaturization strategy aimed at allowing parallel assessment of large numbers of fungal strains. Trichoderma strains were cultivated stationary on microcrystalline cellulose using flat-bottom 24-well plates containing an agarized medium. Supernatants obtained by a rapid centrifugation step of the whole culture plates were evaluated for extracellular total cellulase activity, measured as filter paper activity, using a microplate-based assay. The results obtained were consistent with those observed in shake-flask experiments and more than 300 Trichoderma strains were accordingly characterized for cellulase production. Five strains, displaying in shake flasks at least 80% of the activity shown by the hyper-cellulolytic mutant Trichoderma Rut-C30, were correctly recognized by the screening on 24-well plates, demonstrating the feasibility of this approach. Cellulase activity distribution for the entire Trichoderma collection is also reported. One strain (T. harzianum Ba8/86) displayed the closest profile to the reference strain Rut-C30 in time course experiments. The method is scalable and addresses a major bottleneck in screening programs, allowing small-scale parallel cultivation and rapid supernatant extraction. It can also be easily integrated with high-throughput enzyme assays and could be suitable for automation. Biotechnol. Bioeng. 2010;107:461-468. © 2010 Wiley Periodicals, Inc. [source]


Framework for the Rapid Optimization of Soluble Protein Expression in Escherichia coli Combining Microscale Experiments and Statistical Experimental Design

BIOTECHNOLOGY PROGRESS, Issue 4 2007
R. S. Islam
A major bottleneck in drug discovery is the production of soluble human recombinant protein in sufficient quantities for analysis. This problem is compounded by the complex relationship between protein yield and the large number of variables which affect it. Here, we describe a generic framework for the rapid identification and optimization of factors affecting soluble protein yield in microwell plate fermentations as a prelude to the predictive and reliable scale-up of optimized culture conditions. Recombinant expression of firefly luciferase in Escherichia coli was used as a model system. Two rounds of statistical design of experiments (DoE) were employed to first screen (D-optimal design) and then optimize (central composite face design) the yield of soluble protein. Biological variables from the initial screening experiments included medium type and growth and induction conditions. To provide insight into the impact of the engineering environment on cell growth and expression, plate geometry, shaking speed, and liquid fill volume were included as factors since these strongly influence oxygen transfer into the wells. Compared to standard reference conditions, both the screening and optimization designs gave up to 3-fold increases in the soluble protein yield, i.e., a 9-fold increase overall. In general the highest protein yields were obtained when cells were induced at a relatively low biomass concentration and then allowed to grow slowly up to a high final biomass concentration, >8 g·L⁻¹. Consideration and analysis of the model results showed 6 of the original 10 variables to be important at the screening stage and 3 after optimization. The latter included the microwell plate shaking speeds pre- and postinduction, indicating the importance of oxygen transfer into the microwells and identifying this as a critical parameter for subsequent scale translation studies. 
The optimization process, also known as response surface methodology (RSM), predicted there to be a distinct optimum set of conditions for protein expression which could be verified experimentally. This work provides a generic approach to protein expression optimization in which both biological and engineering variables are investigated from the initial screening stage. The application of DoE reduces the total number of experiments needed to be performed, while experimentation at the microwell scale increases experimental throughput and reduces cost. [source]
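The central composite face (CCF) design used in the optimization round has a simple coded-unit structure: all two-level factorial corners, face-centred axial points, and centre replicates. A sketch of how such design points are generated (illustrative only; the study's actual factor names and ranges are not reproduced):

```python
from itertools import product

def ccf_design(k, n_center=1):
    """Face-centred central composite design for k factors in coded
    units (-1, 0, +1): 2**k factorial corners, 2*k axial points lying
    on the cube faces, and n_center centre-point replicates."""
    corners = list(product((-1, 1), repeat=k))
    axial = []
    for i in range(k):
        for level in (-1, 1):
            point = [0] * k
            point[i] = level
            axial.append(tuple(point))
    center = [(0,) * k] * n_center
    return corners + axial + center

# Three factors retained after screening (hypothetical mapping, e.g.
# pre-induction shaking speed, post-induction shaking speed, fill volume):
print(len(ccf_design(3)))  # 8 corners + 6 axial + 1 center = 15 runs
```

Each coded point is then mapped back to physical units, and a quadratic response surface fitted to the measured yields locates the predicted optimum that the authors verified experimentally.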


Hydrogen in Porous Tetrahydrofuran Clathrate Hydrate

CHEMPHYSCHEM, Issue 9 2008
Fokko M. Mulder Dr.
Abstract The lack of practical methods for hydrogen storage is still a major bottleneck in the realization of an energy economy based on hydrogen as energy carrier.[1] Storage within solid-state clathrate hydrates,[2-4] and in the clathrate hydrate of tetrahydrofuran (THF), has been recently reported.[5,6] In the latter case, stabilization by THF is claimed to reduce the operation pressure by several orders of magnitude close to room temperature. Here, we apply in situ neutron diffraction to show that, in contrast to previous reports,[5,6] hydrogen (deuterium) occupies the small cages of the clathrate hydrate only to 30% (at 274 K and 90.5 bar). Such a D2 load is equivalent to 0.27 wt.% of stored H2. In addition, we show that a surplus of D2O results in the formation of additional D2O ice Ih instead of in the production of sub-stoichiometric clathrate that is stabilized by loaded hydrogen (as was reported in ref. 6). Structure-refinement studies show that [D8]THF is dynamically disordered, while it fills each of the large cages of [D8]THF·17D2O stoichiometrically. Our results show that the clathrate hydrate takes up hydrogen rapidly at pressures between 60 and 90 bar (at about 270 K). At temperatures above ~220 K, the H-storage characteristics of the clathrate hydrate have similarities with those of surface-adsorption materials, such as nanoporous zeolites and metal-organic frameworks,[7,8] but at lower temperatures, the adsorption rates slow down because of reduced D2 diffusion between the small cages. [source]


Neural Signal Manager: a collection of classical and innovative tools for multi-channel spike train analysis

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 11 2009
Antonio Novellino
Abstract Recent developments in the neuroengineering field and the widespread use of microelectrode arrays (MEAs) for electrophysiological investigations have made new approaches available for studying the dynamics of dissociated neuronal networks as well as acute/organotypic slices maintained ex vivo. Importantly, the extraction of relevant parameters from these neural populations is likely to involve long-term measurements, lasting from a few hours to entire days. The processing of huge amounts of electrophysiological data, in terms of computational time and automation of the procedures, is currently one of the major bottlenecks for both in vivo and in vitro recordings. In this paper we present a collection of algorithms implemented within a new software package, named the Neural Signal Manager (NSM), aimed at analyzing large quantities of MEA-recorded data in a fast and efficient way. The NSM offers different approaches for both spike and burst analysis, and integrates state-of-the-art statistical algorithms, such as the inter-spike interval histogram and the post-stimulus time histogram, with more recent ones, such as burst detection and its related statistics. To show the potential of the software, the application of the developed algorithms to a set of spontaneous activity recordings from dissociated cultures at different ages is presented in the Results section. Copyright © 2008 John Wiley & Sons, Ltd. [source]
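Of the statistics named, the inter-spike interval (ISI) histogram is the simplest to sketch: it is a histogram of the time gaps between consecutive spikes in one train. Illustrative only, with made-up spike times and assuming NumPy; NSM's own implementation is not shown in the abstract:

```python
import numpy as np

# Spike timestamps (seconds) from a single electrode channel (made-up data).
spike_times = np.array([0.010, 0.022, 0.031, 0.052, 0.060, 0.095])

# Inter-spike intervals: differences between consecutive spike times.
isis = np.diff(spike_times)

# ISI histogram with 5 ms bins up to 50 ms.
counts, edges = np.histogram(isis, bins=np.arange(0.0, 0.055, 0.005))
print(counts.sum())  # one interval per consecutive spike pair: 5
```

A post-stimulus time histogram is computed the same way, except the times binned are spike latencies relative to each stimulus onset, accumulated over trials.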