Running Time (running + time)

Selected Abstracts


DiFi: Fast 3D Distance Field Computation Using Graphics Hardware

COMPUTER GRAPHICS FORUM, Issue 3 2004
Avneesh Sud
We present an algorithm for fast computation of discretized 3D distance fields using graphics hardware. Given a set of primitives and a distance metric, our algorithm computes the distance field for each slice of a uniform spatial grid by rasterizing the distance functions of the primitives. We compute bounds on the spatial extent of the Voronoi region of each primitive. These bounds are used to cull and clamp the distance functions rendered for each slice. Our algorithm is applicable to all geometric models and does not make any assumptions about connectivity or a manifold representation. We have used our algorithm to compute distance fields of large models composed of tens of thousands of primitives on high resolution grids. Moreover, we demonstrate its application to medial axis evaluation and proximity computations. As compared to earlier approaches, we are able to achieve an order of magnitude improvement in the running time. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Distance fields, Voronoi regions, graphics hardware, proximity computations [source]
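
The slice-by-slice evaluation is easy to prototype on the CPU. Below is a minimal sketch, assuming point primitives and the Euclidean metric (the function name is illustrative, not from the paper): it computes the lower envelope of all primitive distance functions on each slice, which is the quantity DiFi rasterizes on graphics hardware with Voronoi-based culling.

```python
import numpy as np

def distance_field_slices(points, shape, spacing=1.0):
    """Brute-force CPU sketch of slice-by-slice distance field evaluation.

    points: (k, 3) array of point primitives (the paper handles general
            primitives; points keep the sketch short).
    shape:  (nx, ny, nz) grid resolution; returns an (nx, ny, nz) field.
    """
    nx, ny, nz = shape
    xs, ys = np.arange(nx) * spacing, np.arange(ny) * spacing
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    field = np.empty(shape)
    for k in range(nz):                         # one slice per iteration
        z = k * spacing
        grid = np.stack([gx, gy, np.full_like(gx, z)], axis=-1)
        # Lower envelope of all primitive distance functions on the slice;
        # DiFi rasterizes these functions on graphics hardware and culls
        # primitives whose Voronoi region cannot reach the slice.
        d = np.linalg.norm(grid[:, :, None, :] - points[None, None, :, :],
                           axis=-1)
        field[:, :, k] = d.min(axis=2)
    return field

# Example: distance field of 100 random points on a 64 x 64 x 64 grid.
pts = np.random.rand(100, 3) * 64
df = distance_field_slices(pts, (64, 64, 64))
```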


Hierarchical Context-based Pixel Ordering

COMPUTER GRAPHICS FORUM, Issue 3 2003
Ziv Bar-Joseph
Abstract We present a context-based scanning algorithm which reorders the input image using a hierarchical representation of the image. Our algorithm optimally orders (permutes) the leaves corresponding to the pixels, by minimizing the sum of distances between neighboring pixels. The reordering results in an improved autocorrelation between nearby pixels, which leads to a smoother image. This allows us, for the first time, to improve image compression rates using context-based scans. The results presented in this paper greatly improve upon previous work in both compression rate and running time. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling I.3.6 [Computer Graphics]: Methodology and Techniques [source]
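
The objective being minimized is concrete enough to state in a few lines. A toy sketch (the hierarchy and optimal leaf permutation of the paper are omitted; names are illustrative) shows how reordering the same pixels reduces the sum of neighboring differences, making the 1-D pixel stream smoother for a context-based coder:

```python
import numpy as np

def scan_cost(image, order):
    """Sum of absolute differences between consecutive pixels in a scan
    order -- the quantity the hierarchical reordering minimizes."""
    seq = image.reshape(-1)[order].astype(int)
    return int(np.abs(np.diff(seq)).sum())

img = np.array([[0, 0, 9],
                [0, 1, 9],
                [1, 1, 8]], dtype=np.uint8)
raster = np.arange(9)                            # row-by-row scan
snake = np.array([0, 1, 2, 5, 4, 3, 6, 7, 8])    # boustrophedon scan
print(scan_cost(img, raster), scan_cost(img, snake))   # 42 26
```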


An Adaptive Sampling Scheme for Out-of-Core Simplification

COMPUTER GRAPHICS FORUM, Issue 2 2002
Guangzheng Fei
Current out-of-core simplification algorithms can efficiently simplify large models that are too complex to be loaded into main memory at one time. However, these algorithms do not preserve surface details well, since adaptive sampling, a typical strategy for detail preservation, remains an open issue for out-of-core simplification. In this paper, we present an adaptive sampling scheme, called balanced retriangulation (BR), for out-of-core simplification. A key idea behind BR is that we can use Garland's quadric error matrix to analyze the global distribution of surface details. Based on this analysis, a local retriangulation achieves adaptive sampling by restoring detailed areas with cell split operations while further simplifying smooth areas with edge collapse operations. For a given triangle budget, BR preserves surface details significantly better than uniform sampling algorithms such as uniform clustering. Like uniform clustering, our algorithm has linear running time and a small memory requirement. [source]
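
The detail analysis leans on Garland's quadrics, which are compact enough to sketch. Assuming the standard plane-based formulation (a sketch, not the paper's code), each triangle contributes Q = p pᵀ for its supporting plane, and the error accumulated at a vertex separates detailed regions (large error) from smooth ones:

```python
import numpy as np

def triangle_quadric(v0, v1, v2):
    """Fundamental quadric Q = p p^T of the triangle's supporting plane
    p = (a, b, c, d), with ax + by + cz + d = 0 and (a, b, c) unit length."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    p = np.append(n, -n @ v0)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Sum of squared point-plane distances accumulated in Q."""
    h = np.append(v, 1.0)            # homogeneous coordinate
    return float(h @ Q @ h)

# A vertex on its incident triangles' planes has zero error; moving it
# off the planes (a "detailed" area) makes the error grow.
v = np.array([0.0, 0.0, 0.0])
Q = triangle_quadric(v, np.array([1.0, 0, 0]), np.array([0, 1.0, 0]))
print(vertex_error(Q, v), vertex_error(Q, v + [0, 0, 0.5]))   # 0.0 0.25
```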


Running Test of Contactwire-less Tramcar Using Lithium Ion Battery

IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, Issue 3 2008
Hironori Ozawa Non-member
Abstract The basic specification of a lithium ion battery tramcar was established using a DC 600 V tramcar with a weight of 40 t. A 45 kWh manganese-type lithium ion battery was used. The running test, the first of its kind in Japan, was carried out on a commercial line. The relation between running time and voltage, current, and integrated watt-hours was investigated in detail. The tramcar was run with the lithium ion battery module discharging between 660 and 480 V; on one charge, the tramcar could run about 25 km. The mileage of the contactwire-less tramcar was twice that of the conventional tramcar, and its running performance was equivalent. Copyright © 2008 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]


Portfolio management using value at risk: A comparison between genetic algorithms and particle swarm optimization

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 7 2009
V. A. F. Dallagnol
In this paper, we present a comparison of the application of particle swarm optimization (PSO) and genetic algorithms (GA) to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk, calculated using historical simulation; several strategies for handling the constraints of the problem were implemented. The results of the experiments performed show that, generally speaking, the methods are capable of consistently finding good solutions quite close to the best solution found in a reasonable amount of time. In addition, it is demonstrated statistically that the algorithms, on average, do not all consistently achieve the same best solution. PSO turned out to be faster than GA, both in terms of number of iterations and in terms of total running time. However, PSO appears to be much more sensitive to the initial position of the particles than GA. Tests were also made regarding the number of particles needed to solve the problem, and 50 particles/chromosomes seem to be enough for problems of up to 20 assets. © 2009 Wiley Periodicals, Inc. [source]
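
Historical-simulation VaR, the objective both metaheuristics minimize, is simple to state. The sketch below (illustrative names; long-only weights per the no-short-sales constraint) scores one candidate portfolio:

```python
import numpy as np

def historical_var(weights, returns, alpha=0.95):
    """Value at risk by historical simulation: replay past asset returns
    on today's weights and take the (1 - alpha) loss quantile.

    weights: (n_assets,) nonnegative weights summing to 1 (no short sales).
    returns: (n_days, n_assets) historical returns.
    """
    portfolio = returns @ weights
    return float(-np.quantile(portfolio, 1.0 - alpha))

# Scoring one PSO particle / GA chromosome on synthetic history:
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(1000, 20))   # 1000 days, 20 assets
w = np.full(20, 1 / 20)                               # equal-weight candidate
print(historical_var(w, returns, alpha=0.95))
```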


Checking identities is computationally intractable (NP-hard), and therefore human provers will always be needed

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1-2 2004
Vladik Kreinovich
A 1990 article in the American Mathematical Monthly showed that most combinatorial identities of the type described in Monthly problems can be solved by known identity-checking algorithms. A natural question arises: are these algorithms always feasible, or can the number of computational steps be so large that applying them is sometimes not physically feasible? We prove that the problem of checking identities is NP-hard, and thus (unless NP = P) for every algorithm that solves it, there are cases in which the algorithm would require exponentially long running time and thus would not be feasible. This means that no matter how successful computers are in checking identities, human mathematicians will always be needed to check some of them. © 2004 Wiley Periodicals, Inc. [source]


Parallel Algorithms for Dynamic Shortest Path Problems

INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 3 2002
Ismail Chabini
The development of intelligent transportation systems (ITS) and the resulting need for the solution of a variety of dynamic traffic network models and management problems require faster-than-real-time computation of shortest path problems in dynamic networks. Recently, a sequential algorithm was developed to compute shortest paths in discrete time dynamic networks from all nodes and all departure times to one destination node. The algorithm is known as algorithm DOT and has an optimal worst-case running-time complexity. This implies that no algorithm with a better worst-case computational complexity can be discovered. Consequently, in order to derive faster algorithms for all-to-one shortest path problems in dynamic networks, one needs to explore avenues other than the design of sequential solution algorithms. The use of commercially available high-performance computing platforms to develop parallel implementations of sequential algorithms is an example of such an avenue. This paper reports on the design, implementation, and computational testing of parallel dynamic shortest path algorithms. We develop two shared-memory and two message-passing dynamic shortest path algorithm implementations, derived from algorithm DOT using the following parallelization strategies: decomposition by destination and decomposition by transportation network topology. The algorithms are coded using two types of parallel computing environments: a message-passing environment based on the parallel virtual machine (PVM) library and a multi-threading environment based on the SUN Microsystems Multi-Threads (MT) library. We also develop a time-based parallel version of algorithm DOT for the case of minimum time paths in FIFO networks, and a theoretical parallelization of algorithm DOT on an 'ideal' theoretical parallel machine. Performance of the implementations is analyzed and evaluated using large transportation networks and two types of parallel computing platforms: a distributed network of Unix workstations and a SUN shared-memory machine containing eight processors. Satisfactory speed-ups in the running time of the sequential algorithm are achieved, in particular on shared-memory machines. Numerical results indicate that shared-memory computers constitute the most appropriate type of parallel computing platform for the computation of dynamic shortest paths for real-time ITS applications. [source]
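
The core of algorithm DOT is a single backward pass over departure times: with positive integer travel times, a label at time t depends only on labels at strictly later times, so each label is set once. A minimal sequential sketch (our notation, not the authors' code; travel times are assumed static at and beyond the end of the horizon):

```python
import math

def dot_all_to_one(n, arcs, T, dest):
    """All-to-one dynamic shortest paths in decreasing order of time.

    n:    nodes 0..n-1; dest is the single destination.
    arcs: dict {(i, j): d} where d(t) > 0 is the integer travel time on
          arc (i, j) when leaving i at time t.
    T:    horizon; travel times are assumed static for t >= T-1.
    Returns cost[i][t], the least travel time from i to dest leaving at t.
    """
    cost = [[math.inf] * T for _ in range(n)]
    for t in range(T):
        cost[dest][t] = 0
    # Static boundary problem at t = T-1 (plain Bellman-Ford for brevity).
    for _ in range(n - 1):
        for (i, j), d in arcs.items():
            cand = d(T - 1) + cost[j][T - 1]
            if cand < cost[i][T - 1]:
                cost[i][T - 1] = cand
    # DOT main loop: one pass per departure time, in decreasing order;
    # positive travel times guarantee cost[j][t + w] is already final.
    for t in range(T - 2, -1, -1):
        for (i, j), d in arcs.items():
            w = d(t)
            cand = w + cost[j][min(t + w, T - 1)]
            if cand < cost[i][t] and i != dest:
                cost[i][t] = cand
    return cost

# Two-node example: arc 0 -> 1 is slow during "congestion" (t < 5).
arcs = {(0, 1): lambda t: 4 if t < 5 else 1}
print(dot_all_to_one(2, arcs, T=10, dest=1)[0])   # [4, 4, 4, 4, 4, 1, ...]
```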


Rotamer optimization for protein design through MAP estimation and problem-size reduction

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 12 2009
Eun-Jong Hong
Abstract The search for the global minimum energy conformation (GMEC) of protein side chains is an important computational challenge in protein structure prediction and design. Using rotamer models, the problem is formulated as an NP-hard optimization problem. Dead-end elimination (DEE) methods combined with systematic A* search (DEE/A*) have proven useful, but may not be strong enough as we attempt to solve protein design problems where a large number of similar rotamers is eligible and the network of interactions between residues is dense. In this work, we present an exact solution method, named BroMAP (branch-and-bound rotamer optimization using MAP estimation), for such protein design problems. The design goal of BroMAP is to expand smaller search trees than conventional branch-and-bound methods while performing only a moderate amount of computation in each node, thereby reducing the total running time. To achieve that, BroMAP attempts reduction of the problem size within each node through DEE and elimination by lower bounds from approximate maximum-a-posteriori (MAP) estimation. The lower bounds are also exploited in branching and subproblem selection for fast discovery of strong upper bounds. Our computational results show that BroMAP tends to be faster than DEE/A* for large protein design cases. BroMAP also solved cases that were not solved by DEE/A* within the maximum allowed time, and did not incur significant disadvantage for cases where DEE/A* performed well. Therefore, BroMAP is particularly applicable to large protein design problems where DEE/A* struggles, and can also substitute for DEE/A* in general GMEC search. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2009 [source]


Use of Lysozyme, Nisin, and EDTA Combined Treatments for Maintaining Quality of Packed Ostrich Patties

JOURNAL OF FOOD SCIENCE, Issue 3 2010
Marianna Mastromatteo
ABSTRACT: The antimicrobial effectiveness of lysozyme, nisin, and ethylene diamine tetraacetic acid (EDTA) combination treatments (Mix1: 250 ppm lysozyme, 250 ppm nisin, 5 mM EDTA; Mix2: 500 ppm lysozyme, 500 ppm nisin, 5 mM EDTA) on bacterial growth of ostrich patties packaged in air, vacuum, and 2 different modified atmospheres (MAP1: 80% O2, 20% CO2; MAP2: 5% O2, 30% CO2, 65% N2) was evaluated. Lipid oxidation was also evaluated, as were color and sensory characteristics. The growth of total viable counts and lactic acid bacteria was strongly inhibited by the antimicrobial treatments throughout the running time (Inhibition Index >97%), whereas for Enterobacteriaceae and Pseudomonas spp. lower inhibition indices, from 12% to about 28%, were observed. Lipid oxidation was more pronounced in the control than in the treated meat patties. Moreover, the mixture with low concentrations of lysozyme and nisin showed the best antioxidative effect. High concentrations of lysozyme and nisin showed the greatest color loss. Also, off-odors for the untreated patties developed faster than for the treated samples. Practical Application: Great interest is developing in food bio-preservation, because of the ever-increasing need to protect consumers' health and to valorize the naturalness and safety of food products. [source]


Poster Sessions AP13: Novel Techniques and Technologies

JOURNAL OF NEUROCHEMISTRY, Issue 2002
J. K. Yao
Studies of the antioxidant defense system and the monoamine metabolic pathways are often complicated by cumbersome analytical methods, which require separate and multistep extraction and chemical reaction procedures. Thus, measurements of multiple parameters are limited in relatively small biological samples. High performance liquid chromatography (HPLC) coupled with a Coulometric Multi-Electrode Array System (CMEAS) provides a convenient and highly sensitive tool to measure low molecular weight, redox-active compounds in biological samples. The deproteinized sample was analyzed on an HPLC coupled with a 16-channel CMEAS, with detector potentials incremented from 60 to 960 mV in 60 mV steps. Each sample was run on a single column (Meta-250, 4.6 × 250 mm) under a 150-minute complex gradient that ranged from 0% B (A: 1.1% pentane sulfonic acid) to 20% B (B: 0.1 M lithium acetate in a mixture of methanol, acetonitrile, and isopropanol), with a flow rate of 0.5 mL/min. We have developed an automated procedure to simultaneously measure various antioxidants, oxidative stress markers, and monoamine metabolites in a single column with a binary gradient. No other chemical reactions are necessary. In order to reduce the running time and yet achieve a reproducible retention time with autosampler injection, our gradient elution profile was modified to produce a shorter equilibration time and to compensate for the initial contamination of mobile phase B following the first injection. Without the use of two columns in series and a peak suppressor/gradient mixer, we have simplified the previously published method to measure over 20 different antioxidants, oxidative stress markers, and monoamine metabolites simultaneously in biological samples. [source]


Quantitative detection of hepatitis B virus DNA in serum by a new rapid real-time fluorescence PCR assay

JOURNAL OF VIRAL HEPATITIS, Issue 6 2001
R. Jardi
A sensitive and accurate HBV DNA quantification assay is essential for monitoring hepatitis B virus (HBV) replication. This study evaluated a real-time PCR method performed in the LightCycler™ analyser for quantitative HBV DNA assay. HBV DNA results with this method were compared with those obtained using a branched-chain DNA (bDNA) solution hybridization assay. Real-time PCR was performed using two adjacent fluorescently labelled probes and primers corresponding to the HBV core gene. The same standard employed in the bDNA assay was used for calibration. Serum samples came from 193 HBV surface antigen (HBsAg)-positive patients (34 HBV e antigen (HBeAg)-positive and 93 with antibody to HBeAg (anti-HBe)), and 66 asymptomatic HBV carriers. In addition, we analysed serum samples from 8 anti-HBe-positive patients who had been receiving lamivudine treatment for more than three years. A linear standard curve was seen in the range from 10³ to 10⁸ copies/mL. In the reproducibility analysis, intra-assay coefficients of variation (CVs) at two known HBV DNA concentrations were 4% and 2%, and interassay CVs were 6% and 4%. The median serum HBV DNA level by real-time PCR was 9.2 × 10⁸ copies/mL in HBeAg-positive patients with persistently elevated alanine aminotransferase (ALT) levels, 1.3 × 10⁷ copies/mL in anti-HBe-positive cases with persistently elevated ALT levels, 3.7 × 10⁴ copies/mL in anti-HBe-positive patients with fluctuating ALT levels, and 10⁴ copies/mL in asymptomatic HBV carriers. The differences in HBV DNA levels among the various groups studied were statistically significant (P < 0.05). The cut-off between chronic hepatitis patients and asymptomatic carriers was found to be at a serum HBV DNA concentration of 5 × 10⁴ copies/mL. Of the 109 serum samples with a viral load < 7.5 × 10⁵ copies/mL (negative by bDNA assay), 44 (40%) were positive by real-time PCR: 24 (56%) chronic hepatitis and 20 (33%) asymptomatic carriers. There was a positive association between HBV DNA levels determined by real-time PCR and ALT levels (P < 0.05), which was not observed with the bDNA assay for HBV DNA quantification. At 12 months of lamivudine treatment, 6 patients (75%) showed HBV DNA levels < 5 × 10⁴ copies/mL (range < 10³–2 × 10³), significantly lower than at baseline. At 36 months, 2 of 8 (25%) showed HBV DNA levels persistently lower than 5 × 10⁴ copies/mL (1.7 × 10³, 6 × 10³). The LightCycler quantitative real-time PCR is a practical, sensitive, reproducible single-tube assay with a wide dynamic range of detection. The assay is automatic except for DNA extraction, and the running time is only 70 min. The LightCycler real-time PCR is useful for identifying different states of HBV infection and for evaluating the efficacy of antiviral therapy. [source]
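
Quantification of this kind rests on a linear standard curve relating crossing cycle (Ct) to log concentration. A sketch with hypothetical calibrator readings (the numbers below are illustrative, not from the study) shows the arithmetic:

```python
import numpy as np

# Hypothetical calibrators spanning the reported linear range
# (copies/mL vs. observed crossing cycle Ct) -- illustrative values only.
copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7, 1e8])
ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])

m, b = np.polyfit(np.log10(copies), ct, 1)    # Ct = m * log10(copies) + b

def quantify(sample_ct):
    """Invert the standard curve to estimate copies/mL for a sample."""
    return 10 ** ((sample_ct - b) / m)

print(round(m, 2), round(quantify(25.0)))     # slope ~ -3.36, ~2.6e5 copies/mL
```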


Minimum boundary touching tilings of polyominoes

MLQ- MATHEMATICAL LOGIC QUARTERLY, Issue 1 2006
Andreas Spillner
Abstract We study the problem of tiling a polyomino P with as few squares as possible such that every square in the tiling has a non-empty intersection with the boundary of P. Our main result is an algorithm which, given a simply connected polyomino P, computes such a tiling of P. We indicate how the running time of this algorithm can be improved for the more restricted row-column-convex polyominoes. Finally, we show that a related decision problem is in NP for rectangular polyominoes. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Algorithms for the multi-item multi-vehicles dynamic lot sizing problem

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 2 2006
Shoshana Anily
Abstract We consider a two-stage supply chain, in which multiple items are shipped from a manufacturing facility or a central warehouse to a downstream retailer that faces deterministic external demand for each of the items over a finite planning horizon. The items are shipped through identical capacitated vehicles, each incurring a fixed cost per trip. In addition, there exist item-dependent variable shipping costs and inventory holding costs at the retailer for items stored at the end of the period; these costs are constant over time. The sum of all costs must be minimized while satisfying the external demand without backlogging. In this paper we develop a search algorithm to solve the problem optimally. Our search algorithm, although exponential in the worst case, is very efficient empirically, owing to new properties of the optimal solution that we found, which allow us to restrict the number of solutions examined. We then perform a computational study that compares the empirical running time of our search method to other available exact solution methods for the problem. Finally, we characterize the conditions under which each of the solution methods is likely to be faster than the others, and suggest efficient heuristic solutions that we recommend using when the problem is large in all dimensions. © 2005 Wiley Periodicals, Inc. Naval Research Logistics, 2006. [source]


Exact algorithms for the master ring problem

NETWORKS: AN INTERNATIONAL JOURNAL, Issue 2 2008
Hadas Shachnai
Abstract We consider the master ring problem (MRP), which often arises in optical network design. Given a network which consists of a collection of interconnected rings R1, …, RK, with n1, …, nK distinct nodes, respectively, we need to find an ordering of the nodes in the network that respects the ordering of every individual ring, if one exists. We show that MRP is NP-complete, and therefore it is unlikely to be solvable by a polynomial time algorithm. Our main result is an algorithm which solves MRP in steps, for some polynomial Q, as the nk values become large. For the ring clearance problem, a special case of practical interest, our algorithm achieves this running time for rings of any size nk ≥ 2. This yields the first nontrivial improvement over the running time of the naive algorithm, which exhaustively enumerates all possible solutions. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008 [source]


A network flow algorithm to minimize beam-on time for unconstrained multileaf collimator problems in cancer radiation therapy

NETWORKS: AN INTERNATIONAL JOURNAL, Issue 1 2005
Ravindra K. Ahuja
Abstract In this article, we study the modulation of intensity matrices arising in cancer radiation therapy using multileaf collimators. This problem can be formulated as decomposing a given m × n integer matrix into a positive linear combination of (0, 1) matrices with the strict consecutive-ones property in rows. We consider a special case in which no technical constraints have to be taken into account. In this situation, the rows of the intensity matrix are independent of each other, and the problem is equivalent to decomposing m intensity rows, independently of each other, into positive linear combinations of (0, 1) rows with the consecutive-ones property. We demonstrate that this problem can be transformed into a minimum cost flow problem in a directed network that has the following special structure: (1) the network is acyclic; (2) it is a complete graph (that is, there is an arc (i, j) whenever i < j); (3) each arc cost is 1; and (4) each arc is uncapacitated (that is, it has infinite capacity). We show that using this special structure, the minimum cost flow problem can be solved in O(n) time. Because we need to solve m such problems, the total running time of our algorithm is O(nm), which is optimal for decomposing a given m × n integer matrix into a positive linear combination of (0, 1) matrices. © 2004 Wiley Periodicals, Inc. NETWORKS, Vol. 45(1), 36–41, 2005 [source]
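
For a single row the optimum has a well-known closed form: the minimum beam-on time equals the sum of the row's positive jumps, sum(max(0, a[i] - a[i-1])) with a[0] = 0. The greedy sketch below (not the paper's O(n) network-flow algorithm) attains this bound by repeatedly subtracting the minimum over a maximal positive run:

```python
def decompose_row(row):
    """Decompose one intensity row into weighted (0,1) intervals (rows with
    the consecutive-ones property); the total weight equals the sum of the
    row's positive jumps, the minimum beam-on time for that row."""
    a = list(row)
    segments = []
    while any(a):
        i = next(k for k, v in enumerate(a) if v > 0)    # start of a run
        j = i
        while j + 1 < len(a) and a[j + 1] > 0:           # extend the run
            j += 1
        c = min(a[i:j + 1])                              # usable weight
        for k in range(i, j + 1):
            a[k] -= c
        segments.append((c, (i, j)))                     # c copies of [i, j]
    return segments

row = [0, 2, 3, 1, 0, 2]
print(decompose_row(row))
# [(1, (1, 3)), (1, (1, 2)), (1, (2, 2)), (2, (5, 5))]: total weight 5,
# matching the positive jumps 2 + 1 + 2.
```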


On d-threshold graphs and d-dimensional bin packing

NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2004
Alberto Caprara
Abstract We illustrate efficient algorithms to find a maximum stable set and a maximum matching in a graph with n nodes given by the edge union of d threshold graphs on the same node set, in case the d graphs in the union are known. Actually, because the edge set of a threshold graph can be implicitly represented by assigning values to the nodes, we assume that we know these values for each of the d graphs in the union. We present an O(n log n + n^(d−1)) time algorithm to find a maximum stable set and an O(n^2) time algorithm to find a maximum matching, in case d is constant. For the case d = 2, the running time of the latter is reduced to O(n log n), provided an additional technical condition is satisfied. The natural application of our results is the fast computation of lower bounds for the d-dimensional bin packing problem, in which the compatibility relations between items are represented by the edge union of d threshold graphs with one node for each item, the value of the node for the i-th graph being equal to the size of the item in the i-th dimension. © 2004 Wiley Periodicals, Inc. NETWORKS, Vol. 44(4), 266–280, 2004 [source]
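
The bin-packing connection is direct enough to sketch: with item sizes in [0, 1], graph k joins items i and j when their sizes in dimension k sum to more than 1, so the edge union below (illustrative code, not the paper's algorithms) is the pairwise-conflict graph whose stable sets are sets of mutually compatible items:

```python
def conflict_edges(sizes):
    """Edge union of d threshold graphs for d-dimensional bin packing:
    sizes[i][k] is the size of item i in dimension k (all in [0, 1]);
    items i and j are joined iff they overflow in some dimension."""
    n, d = len(sizes), len(sizes[0])
    return {(i, j)
            for i in range(n) for j in range(i + 1, n)
            if any(sizes[i][k] + sizes[j][k] > 1 for k in range(d))}

items = [(0.6, 0.2), (0.5, 0.3), (0.3, 0.9), (0.2, 0.2)]   # d = 2
print(sorted(conflict_edges(items)))
# [(0, 1), (0, 2), (1, 2), (2, 3)]; e.g. {0, 3} is a stable (compatible) set
```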


A randomized algorithm for gossiping in radio networks

NETWORKS: AN INTERNATIONAL JOURNAL, Issue 2 2004
Marek Chrobak
Abstract We present an O(n log^4 n)-time randomized algorithm for gossiping in radio networks with unknown topology. This is the first algorithm for gossiping in this model whose running time is only a polylogarithmic factor away from the optimum. The fastest previously known (deterministic) algorithm for this problem works in time O(n^(3/2) log^2 n). © 2004 Wiley Periodicals, Inc. [source]


Speeding up the FMMR perfect sampling algorithm: A case study revisited

RANDOM STRUCTURES AND ALGORITHMS, Issue 4 2003
Robert P. Dobrow
Abstract In a previous paper by the second author, two Markov chain Monte Carlo perfect sampling algorithms, one called coupling from the past (CFTP) and the other (FMMR) based on rejection sampling, are compared using as a case study the move-to-front (MTF) self-organizing list chain. Here we revisit that case study and, in particular, exploit the dependence of FMMR on the user-chosen initial state. We give a stochastic monotonicity result for the running time of FMMR applied to MTF and thus identify the initial state that gives the stochastically smallest running time; by contrast, the initial state used in the previous study gives the stochastically largest running time. By changing from worst choice to best choice of initial state we achieve remarkable speedup of FMMR for MTF; for example, we reduce the running time (as measured in Markov chain steps) from exponential in the length n of the list nearly down to n when the items in the list are requested according to a geometric distribution. For this same example, the running time for CFTP grows exponentially in n. © 2003 Wiley Periodicals, Inc. Random Struct. Alg., 2003 [source]
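
The chain itself is simple to simulate. The sketch below is a plain forward simulation, not the FMMR or CFTP perfect-sampling machinery; it shows the move-to-front dynamics, the geometric request distribution of the example, and the initial state as an explicit input, which is the knob the speedup above turns on:

```python
import random

def run_mtf(initial, weights, steps, seed=0):
    """Forward-simulate the move-to-front (MTF) self-organizing list chain:
    items are requested i.i.d. with the given weights and moved to the
    front.  FMMR's running time depends on the user-chosen `initial`."""
    rng = random.Random(seed)
    items = list(initial)
    labels = sorted(items)
    for _ in range(steps):
        i = rng.choices(labels, weights=weights)[0]   # request an item
        items.remove(i)
        items.insert(0, i)                            # move it to the front
    return items

n = 8
geometric = [0.5 ** (k + 1) for k in range(n)]   # item k requested w.p. ~ 2^-(k+1)
print(run_mtf(range(n), geometric, steps=1000))
```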


High-throughput determination of carbocysteine in human plasma by liquid chromatography/tandem mass spectrometry: application to a bioequivalence study of two formulations in healthy volunteers

RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 7 2006
Hui-chang Bi
A rapid and sensitive liquid chromatography/tandem mass spectrometry (LC/MS/MS) method to determine carbocysteine in human plasma was developed and fully validated. After methanol-induced protein precipitation of the plasma samples, carbocysteine was subjected to LC/MS/MS analysis using electrospray ionization (ESI). The MS system was operated in the selected reaction monitoring (SRM) mode. Chromatographic separation was performed on a Hypurity C18 column (i.d. 2.1 mm × 50 mm, particle size 5 µm). The method had a chromatographic running time of 2.0 min and linear calibration curves over the concentration range of 0.1–20 µg/mL for carbocysteine. The lower limit of quantification (LLOQ) of the method was 0.1 µg/mL for carbocysteine. The intra- and inter-day precision was less than 7% for all quality control samples at concentrations of 0.5, 2.0, and 10.0 µg/mL. These results indicate that the method is efficient, with a simple preparation procedure and a very short running time (2.0 min) compared with methods reported in the literature, and has high selectivity and acceptable accuracy, precision, and sensitivity. The validated LC/MS/MS method has been successfully applied to a bioequivalence study of two tablet formulations of carbocysteine in healthy volunteers. Copyright © 2006 John Wiley & Sons, Ltd. [source]


PedStr Software for Cutting Large Pedigrees for Haplotyping, IBD Computation and Multipoint Linkage Analysis

ANNALS OF HUMAN GENETICS, Issue 5 2009
Anatoly V. Kirichenko
Summary We propose an automatic heuristic algorithm for splitting large pedigrees into fragments of no more than a user-specified bit size. The algorithm specifically aims to split large pedigrees in which many close relatives are genotyped, producing a set of sub-pedigrees for haplotype reconstruction, IBD computation, or multipoint linkage analysis with the Lander-Green-Kruglyak algorithm. We demonstrate that a set of overlapping pedigree fragments constructed with our algorithm allows fast and effective haplotype reconstruction and detection of an allele's parental origin. Moreover, we compared pedigree fragments constructed with our algorithm against those from the existing programs PedCut and Jenti for multipoint linkage analysis. Our algorithm demonstrated significantly higher linkage power than the algorithm of Jenti and a significantly shorter running time than the algorithm of PedCut. The software package PedStr implementing our algorithms is available at http://mga.bionet.nsc.ru/soft/index.html. [source]


Strategy of Circulatory Support with Percutaneous Cardiopulmonary Support

ARTIFICIAL ORGANS, Issue 8 2000
Mitsumasa Hata
Abstract: We evaluated the efficacy and problems of circulatory support with percutaneous cardiopulmonary support (PCPS) for severe cardiogenic shock and discuss our strategy of mechanical circulatory assist for severe cardiopulmonary failure. We also describe the effects of an alternative PCPS configuration, venoarterial (VA) bypass from the right atrium (RA) to the ascending aorta (Ao), used recently in 3 patients. Over the past 9 years, 30 patients (20 men and 10 women; mean age: 61 years) received perioperative PCPS at our institution. Indications for PCPS were cardiopulmonary bypass weaning in 13 patients, postoperative low output syndrome (LOS) in 14 patients, and preoperative cardiogenic shock in 3 patients. Approaches of the PCPS system were the femoral artery to the femoral vein (F-F) in 21 patients, the RA to the femoral artery (RA-FA) in 5 patients, the RA to the Ao (RA-Ao) in 3 patients, and the right and left atrium to the Ao in 1 patient. Seventeen (56.7%) patients were weaned from mechanical circulatory support (Group 1); the remaining 13 patients were not (Group 2). In Group 1, PCPS running time was 33.1 ± 13.6 h, significantly shorter than that of Group 2 (70.6 ± 44.4 h). Left ventricular ejection fraction improved from 34.8 ± 12.0% at the pump to 42.5 ± 4.6% after 24 h of support in Group 1, significantly better than that of Group 2 (21.6 ± 3.5%); in particular, it was 48.6 ± 5.7% in the patients with RA-Ao. Two of the 3 patients with RA-Ao were discharged. Thrombectomy was carried out for ischemic complications of the lower extremity in 5 patients with F-F and 1 patient with RA-FA. One patient with F-F needed amputation of the leg due to necrosis. Thirteen patients (43.3%) were discharged; hospital mortality was 56.7% (17 patients), and fifteen of these patients died of multiple organ failure. In conclusion, our strategy of assisted circulation for severe cardiac failure is as follows. In patients with postcardiotomy cardiogenic shock or LOS, PCPS should be applied first, under intraaortic balloon pumping (IABP) assist, for a maximum of 2 or 3 days. In older patients particularly, the RA-Ao approach to PCPS is superior: the flow rate is easily controlled, with less left ventricular afterload and fewer ischemic complications of the lower extremity. If native cardiac function does not recover and longer support is necessary, a ventricular assist device should be introduced, chosen according to end-organ function and the expected support period. [source]


A rapid assay for angiotensin-converting enzyme activity using ultra-performance liquid chromatography–mass spectrometry

BIOMEDICAL CHROMATOGRAPHY, Issue 3 2010
Fang Geng
Abstract Angiotensin-converting enzyme (ACE) plays an important role in the renin–angiotensin system, and ACE activity is usually assayed in vitro by monitoring the transformation of a substrate to its product, catalyzed by ACE. A rapid and sensitive analysis method for ACE activity, quantifying simultaneously the substrate hippuryl-histidyl-leucine and its product hippuric acid by ultra-performance liquid chromatography coupled with electrospray ionization mass spectrometry (UPLC-MS), was developed for the first time and applied to assay the inhibitory activities of several natural phenolic compounds against ACE. The established UPLC-MS method showed obvious advantages over conventional HPLC analysis: a shortened running time (3.5 min), lower limit of detection (5 pg) and limit of quantification (18 pg), and high selectivity aided by MS detection in selected ion monitoring (SIM) mode. Among the six natural products screened, five compounds, caffeic acid, caffeoyl acetate, ferulic acid, chlorogenic acid, and resveratrol, showed potent in vitro ACE inhibitory activity, with IC50 values of 2.527 ± 0.032, 3.129 ± 0.016, 10.898 ± 0.430, 15.076 ± 1.211, and 6.359 ± 0.086 mM, respectively. A structure–activity relationship estimation suggested that the number and position of the hydroxyls on the benzene rings and the acrylic acid groups may play the most predominant role in ACE inhibitory activity. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Specific PCR product primer design using memetic algorithm

BIOTECHNOLOGY PROGRESS, Issue 3 2009
Cheng-Hong Yang
Abstract To provide feasible primer sets for performing a polymerase chain reaction (PCR) experiment, many primer design methods have been proposed. However, the majority of these methods require a relatively long time to obtain an optimal solution since large quantities of template DNA need to be analyzed. Furthermore, the designed primer sets usually do not provide a specific PCR product size. In recent years, evolutionary computation has been applied to PCR primer design and yielded promising results. In this article, a memetic algorithm (MA) is proposed to solve primer design problems associated with providing a specific product size for PCR experiments. The MA is compared with a genetic algorithm (GA) using an accuracy formula to estimate the quality of the primer design and test the running time. Overall, 50 accession nucleotide sequences were sampled for the comparison of the accuracy of the GA and MA for primer design. Five hundred runs of the GA and MA primer design were performed with PCR product lengths of 150–300 bps and 500–800 bps, and two different methods of calculating Tm for each accession nucleotide sequence were tested. A comparison of the accuracy results for the GA and MA primer design showed that the MA primer design yielded better results than the GA primer design. The results further indicate that the proposed method finds optimal or near-optimal primer sets and effective PCR products in a dry dock experiment. Related materials are available online at http://bio.kuas.edu.tw/ma-pd/. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009 [source]
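
One ingredient of such a fitness function is easy to make concrete. The sketch below uses the Wallace rule, a common simple Tm estimate (the abstract tests two Tm methods without specifying them here; the window and thresholds are illustrative):

```python
def wallace_tm(primer):
    """Wallace-rule melting temperature: Tm = 2(A+T) + 4(G+C), in deg C."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_score(fwd, rev, lo=52.0, hi=62.0, max_diff=5.0):
    """Toy fitness term a GA/MA could use: count violations of the Tm
    window and of the forward/reverse Tm difference (illustrative only)."""
    t1, t2 = wallace_tm(fwd), wallace_tm(rev)
    penalty = sum(1 for t in (t1, t2) if not lo <= t <= hi)
    return penalty + (1 if abs(t1 - t2) > max_diff else 0)

print(wallace_tm("AGCTAGCTGGATCCAGCT"))                      # 56
print(tm_score("AGCTAGCTGGATCCAGCT", "GGCCAATTGGCCAATTGG"))  # 0 (no violations)
```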


Progressive Simplification of Tetrahedral Meshes Preserving All Isosurface Topologies

COMPUTER GRAPHICS FORUM, Issue 3 2003
Yi-Jen Chiang
In this paper, we propose a novel technique for constructing multiple levels of a tetrahedral volume dataset while preserving the topologies of all isosurfaces embedded in the data. Our simplification technique has two major phases. In the segmentation phase, we segment the volume data into topological-equivalence regions, that is, the sub-volumes within each of which all isosurfaces have the same topology. In the simplification phase, we simplify each topological-equivalence region independently, one by one, by collapsing edges from the smallest to the largest errors (within the user-specified error tolerance, for a given error metric), and ensure that we do not collapse edges that may cause an isosurface-topology change. We also avoid creating a tetrahedral cell of negative volume (i.e., avoid the fold-over problem). In this way, we guarantee to preserve all isosurface topologies in the entire simplification process, with a controlled geometric error bound. Our method also involves several additional novel ideas, including using Morse theory and the implicit fully augmented contour tree, identifying types of edges that are not allowed to be collapsed, and developing efficient techniques to avoid many unnecessary or expensive checks, all in an integrated manner. The experiments show that all the resulting isosurfaces preserve the topologies, and have good accuracy in their geometric shapes. Moreover, we obtain nice data-reduction rates, with competitively fast running times. [source]


A new multiphasic buffer system for benzyldimethyl-n-hexadecylammonium chloride polyacrylamide gel electrophoresis of proteins providing efficient stacking

ELECTROPHORESIS, Issue 2 2006
Michael L. Kramer Dr.
Abstract Acidic PAGE systems using cationic detergents such as benzyldimethyl-n-hexadecylammonium chloride (16-BAC) or CTAB have proven useful for the detection of methoxy esters sensitive to alkaline pH and for resolving basic proteins such as histones and membrane proteins. However, the interesting phosphate-based system suffered from poor stacking, resulting in broadened bands and long running times. Therefore, a new 16-BAC PAGE system, based on the theory of moving boundary electrophoresis and with properties comparable to the classical SDS-PAGE system, was designed. The resulting multiphasic analytical 16-BAC PAGE system, presented here, provides efficient stacking and significantly shorter running times. It is based on acetic acid and methoxyacetic acid as common ion constituents, and takes advantage of an additional counterstacking effect due to a cross-boundary electrophoresis system resulting from the selected buffer constituents. Furthermore, the concentration of 16-BAC was optimized by determining its previously unknown CMC. Owing to efficient focusing of the introduced tracking dye, methyl green, termination of electrophoresis can now be followed more easily than with the Schlieren line. [source]


A Genetic Approach to Detecting Clusters in Point Data Sets

GEOGRAPHICAL ANALYSIS, Issue 3 2005
Jamison Conley
Spatial analysis techniques are widely used throughout geography. However, as the size of geographic data sets increases exponentially, limitations to the traditional methods of spatial analysis become apparent. To overcome some of these limitations, many algorithms for exploratory spatial analysis have been developed. This article presents both a new cluster detection method based on a genetic algorithm, and Programs for Cluster Detection, a toolkit application containing the new method as well as implementations of three established methods: Openshaw's Geographical Analysis Machine (GAM), case point-centered searching (proposed by Besag and Newell), and randomized GAM (proposed by Fotheringham and Zhan). We compare the effectiveness of cluster detection and the runtime performance of these four methods and Kulldorff's spatial scan statistic on a synthetic point data set simulating incidence of a rare disease among a spatially variable background population. The proposed method has faster average running times than the other methods and significantly reduces overreporting of the underlying clusters, thus reducing the user's postprocessing burden. Therefore, the proposed method improves upon previous methods for automated cluster detection. The results of our method are also compared with those of Map Explorer (MAPEX), a previous attempt to develop a genetic algorithm for cluster detection. The results of these comparisons indicate that our method overcomes many of the problems faced by MAPEX, thus, we believe, establishing that genetic algorithms can indeed offer a viable approach to cluster detection. [source]
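
The exhaustive search being improved upon is easy to sketch, and its cost explains the need for smarter methods. A naive GAM-style scan (illustrative code; GAM proper assesses significance by Monte Carlo testing rather than a fixed ratio threshold):

```python
def gam_scan(cases, background, x0, x1, y0, y1, step, radii, ratio=3.0):
    """Naive GAM-style scan: slide circles of several radii over a grid and
    flag those where observed cases exceed `ratio` times the expectation
    implied by the background population points."""
    def count_in(pts, cx, cy, r):
        return sum((x - cx) ** 2 + (y - cy) ** 2 <= r * r for x, y in pts)

    rate = len(cases) / len(background)      # cases per background point
    flagged = []
    y = y0
    while y <= y1:
        x = x0
        while x <= x1:
            for r in radii:
                expected = rate * count_in(background, x, y, r)
                observed = count_in(cases, x, y, r)
                if expected > 0 and observed > ratio * expected:
                    flagged.append((x, y, r, observed, expected))
            x += step
        y += step
    return flagged

# A dense cluster near (1, 1) on a uniform background is flagged; the
# exhaustive grid x radii x points cost is what the genetic search avoids.
cases = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (5.0, 5.0)]
background = [(i * 0.5, j * 0.5) for i in range(13) for j in range(13)]
print(len(gam_scan(cases, background, 0, 6, 0, 6, step=0.5, radii=[0.5, 1.0])))
```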


Comparative study of the continuous phase flow in a cyclone separator using different turbulence models,

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2005
H. Shalaby
Abstract Numerical calculations were carried out at the apex cone and various axial positions of a gas cyclone separator for industrial applications. Two different NS-solvers (a commercial one (CFX 4.4, ANSYS GmbH, Munich, Germany, CFX Solver Documentation, 1998) and a research code (post-doctoral thesis, Technical University of Chemnitz, Germany, September 2002)), based on a pressure correction algorithm of the SIMPLE method, have been applied to predict the flow behaviour. The flow was assumed to be unsteady, incompressible, and isothermal. A k–ε turbulence model was applied first, using the commercial code, to investigate the gas flow. Due to the nature of cyclone flows, which exhibit highly curved streamlines and anisotropic turbulence, advanced turbulence models such as the Reynolds stress model (RSM) and large eddy simulation (LES) have been used as well. The RSM simulation was performed using the commercial package activating the approach of Launder et al. (J. Fluid Mech. 1975; 68(3):537–566), while for the LES calculations the research code was applied, utilizing the Smagorinsky model. It was found that the k–ε model cannot properly predict flow phenomena inside the cyclone due to the strong curvature of the streamlines. The RSM results are comparable with the LES results in the area of the apex cone plane. However, the application of LES reveals qualitative agreement with the experimental data, but requires higher computer capacity and longer running times than RSM. This paper is organized into five sections. The first section consists of an introduction and a summary of previous work. Section 2 deals with turbulence modelling, including the governing equations and the three turbulence models used. In Section 3, computational parameters are discussed, such as computational grids, boundary conditions, and the solution algorithm with respect to the use of MISTRAL/PartFlow-3D. In Section 4, predicted profiles of the gas flow at axial and apex cone positions are presented and discussed. Section 5 summarizes and concludes the paper. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Digit ratio (2D:4D) and sprinting speed in boys

AMERICAN JOURNAL OF HUMAN BIOLOGY, Issue 2 2009
J.T. Manning
Digit ratio (2D:4D), a putative correlate of prenatal testosterone, has been found to relate to performance in sport and athletics such that low 2D:4D (high prenatal testosterone) correlates with high performance. Speed in endurance races is strongly related to 2D:4D, and may be one factor that underlies the link between sport and 2D:4D, but nothing is known of the relationship between 2D:4D and sprinting speed. Here we show that running times over 50 m were positively correlated with 2D:4D in a sample of 241 boys (i.e. runners with low 2D:4D ran faster than runners with high 2D:4D). The relationship was also found for 50 m split times (at 20, 30, and 40 m) and was independent of age, BMI, and an index of maturity. However, associations between 2D:4D and sprinting speed were much weaker than those reported for endurance running. This suggests that 2D:4D is a relatively weak predictor of strength and a stronger predictor of efficiency in aerobic exercise. We discuss the effect sizes for relationships between 2D:4D and sport and target traits in general, and identify areas of strength and weakness in digit ratio research. Am. J. Hum. Biol. 2009. © 2008 Wiley-Liss, Inc. [source]