Throughput
Selected Abstracts

Defining and maximizing PPT, a novel performance parameter for IEEE 802.11 DCF
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2006. Yun Li.
Abstract: Much research has been conducted on the saturation throughput of IEEE 802.11 DCF, and it has led to some improvement. However, increasing the successful transmission probability of a packet is also important for saving stations' battery energy and decreasing packet delay. In this paper, we define a new performance parameter for 802.11 DCF, the Product of successful transmission Probability and saturation Throughput (PPT), which binds successful transmission probability and saturation throughput together. An analysis is given to maximize PPT, and an expression for the optimal minimum contention window (CWmin) is obtained analytically. For simplicity, we give the name DCF-PPT to the 802.11 DCF that sets its CWmin according to this expression. The performance of DCF-PPT is simulated for different numbers of stations in terms of saturation throughput, successful transmission probability and PPT. The simulation results indicate that, compared to 802.11 DCF, DCF-PPT significantly increases the PPT and the successful transmission probability (to about 0.95) on condition that the saturation throughput is not decreased. Copyright © 2006 John Wiley & Sons, Ltd. [source]
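The PPT analysis itself is not reproduced in the abstract above. The sketch below only illustrates the idea of the metric using Bianchi's well-known saturation model for 802.11 DCF: it computes a success-probability-times-throughput quantity and sweeps CWmin for the maximiser. The timing constants, retry limit, and the use of the per-attempt non-collision probability as a stand-in for the paper's "successful transmission probability" are assumptions made here for illustration, not the paper's values.

```python
# Illustrative sketch only: a PPT-like metric from Bianchi's saturation model,
# swept over CWmin. Timing constants are in arbitrary slot units (assumptions).
def bianchi_fixed_point(n, cw_min, retry_stages=5, iters=2000):
    """Return (tau, p): per-slot transmission probability and collision probability."""
    W, m, tau = cw_min, retry_stages, 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_new = 2*(1 - 2*p) / ((1 - 2*p)*(W + 1) + p*W*(1 - (2*p)**m))
        tau = 0.5*tau + 0.5*tau_new          # damped iteration for numerical stability
    return tau, 1.0 - (1.0 - tau) ** (n - 1)

def ppt(n, cw_min, payload=8184.0, sigma=1.0, t_success=300.0, t_collision=280.0):
    """Per-attempt success probability multiplied by Bianchi saturation throughput."""
    tau, p = bianchi_fixed_point(n, cw_min)
    p_tr = 1.0 - (1.0 - tau) ** n                    # at least one station transmits in a slot
    p_s = n*tau*(1.0 - tau)**(n - 1) / p_tr          # that transmission is successful
    thr = p_tr*p_s*payload / ((1 - p_tr)*sigma + p_tr*p_s*t_success + p_tr*(1 - p_s)*t_collision)
    return (1.0 - p) * thr

best_ppt, best_w = max((ppt(n=20, cw_min=w), w) for w in (16, 32, 64, 128, 256, 512, 1024))
print(f"CWmin={best_w} maximises the illustrative PPT ({best_ppt:.1f} bits per slot unit)")
```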
In vitro assessment of cytochrome P450 inhibition: Strategies for increasing LC/MS-based assay throughput using a one-point IC50 method and multiplexing high-performance liquid chromatography
JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 9 2007. Tong Lin.
Abstract: A fast and robust LC/MS-based cytochrome P450 (CYP) inhibition assay, using human liver microsomes, has been fully developed and validated for the major human liver CYPs. Probe substrates were phenacetin, diclofenac, S-mephenytoin, and dextromethorphan for CYP1A2, CYP2C9, CYP2C19, and CYP2D6, respectively. Midazolam and testosterone were chosen for CYP3A4. Furafylline, sulfaphenazole, tranylcypromine, quinidine, and ketoconazole were identified as positive control inhibitors for CYP1A2, CYP2C9, CYP2C19, CYP2D6, and CYP3A4, respectively. To increase the throughput of the assay, a one-point method was developed, using data from CYP inhibition assays conducted at one concentration (i.e., 10 µM), to estimate the drug concentration at which the metabolism of the CYP probe substrate was reduced by 50% (IC50). The IC50 values from the one-point assay were validated by correlating the results with IC50 values that were obtained with a traditional eight-point concentration–response curve. Good correlation was achieved, with the slopes of the trendlines between 0.95 and 1.02 and with R² between 0.77 and 1.0. Throughput was increased twofold by using a Cohesive multiplexing high-performance liquid chromatography system. The one-point IC50 estimate is useful for initial compound screening, while the full concentration–response IC50 method provides detailed CYP inhibition data for later stages of drug development. © 2007 Wiley-Liss, Inc. and the American Pharmacists Association. J Pharm Sci 96:2485–2493, 2007. [source]
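The abstract above does not state the estimator used. A common way to turn a single-concentration inhibition measurement into an IC50 estimate, assuming a one-site inhibition curve with a Hill slope of 1, is shown below; the 10 µM test concentration matches the abstract, while the 30% inhibition figure is only an example.

```python
def one_point_ic50(inhibitor_conc_uM, pct_inhibition):
    """Single-point IC50 estimate assuming a one-site curve with Hill slope 1.

    fraction_activity = 1 / (1 + [I]/IC50)  =>  IC50 = [I] * a / (1 - a)
    """
    a = 1.0 - pct_inhibition / 100.0        # fraction of control activity remaining
    if not 0.0 < a < 1.0:
        raise ValueError("percent inhibition must lie strictly between 0 and 100")
    return inhibitor_conc_uM * a / (1.0 - a)

# Example: 30% inhibition of probe-substrate turnover at the single 10 uM screen concentration.
print(f"estimated IC50 ~ {one_point_ic50(10.0, 30.0):.1f} uM")   # ~23.3 uM
```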
Liquid chromatography/tandem mass spectrometric quantification with metabolite screening as a strategy to enhance the early drug discovery process
RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 12 2002. Philip R. Tiller.
Throughput for early discovery drug metabolism studies can be increased by the concomitant acquisition of metabolite screening information and quantitative analysis using ultra-fast gradient chromatographic methods. Typical ultra-fast high-performance liquid chromatography (HPLC) parameters used during early discovery pharmacokinetic (PK) studies, for example, employ full linear gradients over 1–2 min at very high flow rates (1.5–2 mL/min) on very short HPLC columns (2 × 20 mm). These conditions increase sample throughput by reducing analytical run time without sacrificing chromatographic integrity and may be used to analyze samples generated from a variety of in vitro and in vivo studies. This approach allows acquisition of more information about a lead candidate while maintaining rapid analytical turn-around time. Some examples of this approach are discussed in further detail. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Emergency Department Throughput, Crowding, and Financial Outcomes for Hospitals
ACADEMIC EMERGENCY MEDICINE, Issue 8 2010. Daniel A. Handel MD. ACADEMIC EMERGENCY MEDICINE 2010; 17:840–847. © 2010 by the Society for Academic Emergency Medicine.
Abstract: Emergency department (ED) crowding has been identified as a major public health problem in the United States by the Institute of Medicine. ED crowding not only is associated with poorer patient outcomes, but it also contributes to lost demand for ED services when patients leave without being seen and hospitals must go on ambulance diversion. However, somewhat paradoxically, ED crowding may financially benefit hospitals, because crowding allows hospitals to maximize occupancy with well-insured, elective patients while other patients wait in the ED. In this article, the authors propose a more holistic model of hospital flow and revenue that contradicts this notion and offer suggestions for improvements in ED and hospital management that may not only reduce crowding and improve quality, but also increase hospital revenues. Also proposed is that increased efficiency and quality in U.S. hospitals will require changes in the systematic microeconomic and macroeconomic incentives that drive the delivery of health services in the United States. Finally, the authors address several questions to propose mutually beneficial solutions to ED crowding that include the realignment of hospital incentives, changing culture to promote flow, and several ED-based strategies to improve ED efficiency. [source]

Decreasing Lab Turnaround Time Improves Emergency Department Throughput and Decreases Emergency Medical Services Diversion: A Simulation Model
ACADEMIC EMERGENCY MEDICINE, Issue 11 2008. Alan B. Storrow MD.
Abstract: Background: The effect of decreasing lab turnaround times on emergency department (ED) efficiency can be estimated through system-level simulation models and can help identify important outcome measures to study prospectively. Furthermore, such models may suggest the advantage of bedside or point-of-care testing and how it might affect efficiency measures. Objectives: The authors used a sophisticated simulation model in place at an adult urban ED with an annual census of 55,000 patient visits. The effect of decreasing lab turnaround times on emergency medical services (EMS) diversion, ED patient throughput, and total ED length of stay (LOS) was determined. Methods: Data were generated using a system dynamics analytic modeling and simulation approach on 90 separate days from December 2, 2007, through February 29, 2008. The model was a continuous simulation of ED flow, driven by real-time actual patient data, with intrinsic error checking to assure reasonable goodness of fit. Return of complete laboratory results incrementally at 120, 100, 80, 60, 40, 20, and 10 minutes was compared. The diversion calculation assumed EMS closure when more than 10 patients were in the waiting room and 100% ED bed occupancy had been reached for longer than 30 minutes, as per local practice. LOS was generated from data insertion into the patient flow stream and calculation of time to specific predefined gates. The average accuracy of four separate measurement channels (waiting room volume, ED census, inpatient admit stream, and ED discharge stream), all across 24 hours, was measured by comparing the area under the simulated curve against the area under the measured curve. Each channel's accuracy was summed and averaged for an overall accuracy rating. Results: As lab turnaround time decreased from 120 to 10 minutes, the total number of diversion days (maximum 57 at 120 minutes, minimum 29 at 10 minutes), average diversion hours per day (10.8 hours vs. 6.0 hours), percentage of days with diversion (63% vs. 32%), and average ED LOS (2.77 hours vs. 2.17 hours) incrementally decreased, while average daily throughput (104 patients vs. 120 patients) increased. All runs were at least 85% accurate. Conclusions: This simulation model suggests compelling improvement in ED efficiency with decreasing lab turnaround time. Outcomes such as time on EMS diversion, ED LOS, and ED throughput represent important but understudied areas that should be evaluated prospectively. EDs should consider processes that will improve turnaround time, such as point-of-care testing, to obtain these goals. [source]
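As a concrete illustration of the diversion rule stated in the Methods above (more than 10 patients waiting and 100% bed occupancy sustained for over 30 minutes), a minimal per-minute update could look as follows. The function and variable names, and the 1-minute tick, are illustrative assumptions and are not taken from the authors' model.

```python
# Minimal sketch of the stated diversion trigger, evaluated once per simulated minute.
def update_diversion(waiting_count, beds_occupied, beds_total, minutes_at_full_occupancy):
    """Return (on_diversion, updated_minutes_at_full_occupancy)."""
    if beds_occupied >= beds_total:                 # 100% ED bed occupancy this minute
        minutes_at_full_occupancy += 1
    else:
        minutes_at_full_occupancy = 0               # occupancy dipped below 100%, reset timer
    on_diversion = waiting_count > 10 and minutes_at_full_occupancy > 30
    return on_diversion, minutes_at_full_occupancy

# Example tick: 12 patients waiting, all 40 beds full for the last 45 minutes.
print(update_diversion(12, 40, 40, 45))             # (True, 46)
```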
Impact of a Triage Liaison Physician on Emergency Department Overcrowding and Throughput: A Randomized Controlled Trial
ACADEMIC EMERGENCY MEDICINE, Issue 8 2007. Brian R. Holroyd MD.
Background: Triage liaison physicians (TLPs) have been employed in overcrowded emergency departments (EDs); however, their effectiveness remains unclear. Objectives: To evaluate the implementation of TLP shifts at an academic tertiary care adult ED using comprehensive outcome reporting. Methods: A six-week TLP clinical research project was conducted between December 9, 2005, and February 9, 2006. A TLP was deployed for nine hours (11 am to 8 pm) daily to initiate patient management, assist triage nurses, answer all medical consult or transfer calls, and manage ED administrative matters. The study was divided into three two-week blocks; within each block, seven days were randomized to TLP shifts and the other seven to control shifts. Outcomes included patient length of stay, proportion of patients who left without complete assessment, staff satisfaction, and episodes of ambulance diversion. Results: TLPs assessed a median of 14 patients per shift (interquartile range 13–17), received 15 telephone calls per shift (interquartile range 14–20), and spent 17–81 minutes per shift consulting on the telephone. The number of patients and their age, gender, and triage score during the TLP and control shifts were similar. Overall, length of stay was decreased by 36 minutes compared with control days (4:21 vs. 4:57; p = 0.001). Left-without-complete-assessment cases decreased from 6.6% to 5.4% (a 20% relative decrease) during TLP coverage. The ambulance wait time and number of episodes of ambulance diversion were similar on TLP and control days. Conclusions: A TLP improved important outcomes in an overcrowded ED and could improve delivery of emergency medical care in similar tertiary care EDs. [source]

The Impact of Input and Output Factors on Emergency Department Throughput
ACADEMIC EMERGENCY MEDICINE, Issue 3 2007. Phillip V. Asaro MD.
Objectives: To quantify the impact of input and output factors on emergency department (ED) process outcomes while controlling for patient-level variables. Methods: Using patient- and system-level data from multiple sources, multivariate linear regression models were constructed with length of stay (LOS), wait time, treatment time, and boarding time as dependent variables. The products of the 20th-to-80th-percentile ranges of the input and output factor variables and their regression coefficients demonstrate the actual impact (in minutes) of each of these factors on throughput outcomes. Results: An increase from the 20th to the 80th percentile in ED arrivals resulted in increases of 42 minutes in wait time, 49 minutes in LOS (admitted patients), and 24 minutes in ED boarding time (admitted patients). For admit percentage (20th to 80th percentile), the increases were 12 minutes in wait time, 15 minutes in LOS, and 1 minute in boarding time. For inpatient bed utilization as of 7 am (20th to 80th percentile), the increases were 4 minutes in wait time, 19 minutes in LOS, and 16 minutes in boarding time. For admitted patients boarded in the ED as of 7 am (20th to 80th percentile), the increases were 35 minutes in wait time, 94 minutes in LOS, and 75 minutes in boarding time. Conclusions: Achieving significant improvement in ED throughput is unlikely without determining the most important factors affecting process outcomes and taking measures to address variations in ED input and bottlenecks in the ED output stream. [source]
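The per-factor impacts reported above are simply the fitted regression coefficient multiplied by the factor's 20th-to-80th-percentile range. A small illustration of that calculation, with invented numbers rather than the study's data:

```python
import numpy as np

def percentile_range_impact(factor_values, coefficient, lo=20, hi=80):
    """Impact (in outcome units, here minutes) of moving a factor from its
    20th to its 80th percentile, given its fitted regression coefficient."""
    p_lo, p_hi = np.percentile(factor_values, [lo, hi])
    return coefficient * (p_hi - p_lo)

# Hypothetical example: daily ED arrival counts and a fitted coefficient of
# 0.35 minutes of waiting time per additional arrival (illustrative numbers only).
rng = np.random.default_rng(0)
daily_arrivals = rng.poisson(150, size=365)
print(percentile_range_impact(daily_arrivals, coefficient=0.35))
```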
Myriad: scalable VR via peer-to-peer connectivity, PC clustering, and transient inconsistency
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2007. Benjamin Schaeffer.
Abstract: Distributed scene graphs are important in virtual reality, both in collaborative virtual environments and in cluster rendering. Modern scalable visualization systems have high local throughput, but collaborative virtual environments (VEs) over a wide-area network (WAN) share data at much lower rates. This complicates the use of one scene graph across the whole application. Myriad is an extension of the Syzygy VR toolkit in which individual scene graphs form a peer-to-peer network. Myriad connections filter scene graph updates and create flexible relationships between nodes of the scene graph. Myriad's sharing is fine-grained: the properties of individual scene graph nodes to share are dynamically specified (in C++ or Python). Myriad permits transient inconsistency, relaxing resource requirements in collaborative VEs. A test application, WorldWideCrowd, demonstrates collaborative prototyping of a 300-avatar crowd animation viewed on two PC-cluster displays and edited on low-powered laptops, desktops, and over a WAN. We have further used our framework to facilitate collaborative educational experiences and as a vehicle for undergraduates to experiment with shared virtual worlds. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Fragment-Parallel Composite and Filter
COMPUTER GRAPHICS FORUM, Issue 4 2010. Anjul Patney.
We present a strategy for parallelizing the composite and filter operations suitable for an order-independent rendering pipeline implemented on a modern graphics processor. Conventionally, this task is parallelized across pixels/subpixels, but serialized along individual depth layers. However, our technique extends the domain of parallelization to individual fragments (samples), avoiding a serial dependence on the number of depth layers, which can be a constraint for scenes with high depth complexity. As a result, our technique scales with the number of fragments and can sustain a consistent and predictable throughput in scenes with both low and high depth complexity, including those with a high variability of depth complexity within a single frame. We demonstrate composite/filter performance in excess of 50M fragments/sec for scenes with more than 1500 semi-transparent layers. [source]
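The abstract above does not describe the mechanism, but the property that makes per-fragment parallelization possible is that the "over" compositing operator on premultiplied-alpha colors is associative, so a pixel's depth-sorted fragments can be combined by a parallel reduction or scan rather than a serial front-to-back loop. The following is a CPU-side illustration of that property only, not the paper's GPU implementation.

```python
# Illustration only: with premultiplied alpha, "over" is associative, so any grouping of
# a pixel's depth-sorted fragments yields the same result. This is what allows compositing
# to be expressed as a parallel (e.g. segmented-scan) reduction instead of a per-layer loop.
from functools import reduce

def over(front, back):
    """Associative 'over' operator on premultiplied RGBA tuples (r, g, b, a)."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    k = 1.0 - fa
    return (fr + k*br, fg + k*bg, fb + k*bb, fa + k*ba)

# Depth-sorted (front to back) premultiplied fragments belonging to one pixel.
frags = [(0.2, 0.0, 0.0, 0.2), (0.0, 0.3, 0.0, 0.5), (0.0, 0.0, 0.4, 0.8)]

serial = reduce(over, frags)                             # conventional front-to-back loop
regrouped = over(frags[0], over(frags[1], frags[2]))     # a different association
assert all(abs(a - b) < 1e-9 for a, b in zip(serial, regrouped))
print(serial)
```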
Maximizing revenue in Grid markets using an economically enhanced resource manager
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2010. M. Macías.
Abstract: Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness. Copyright © 2008 John Wiley & Sons, Ltd. [source]

A standards-based Grid resource brokering service supporting advance reservations, coallocation, and cross-Grid interoperability
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2009. Erik Elmroth.
Abstract: The problem of Grid-middleware interoperability is addressed by the design and analysis of a feature-rich, standards-based framework for all-to-all cross-middleware job submission. The architecture is designed with a focus on generality and flexibility and builds on extensive use, internally and externally, of (proposed) Web and Grid services standards such as WSRF, JSDL, GLUE, and WS-Agreement. The external use provides the foundation for easy integration into specific middlewares, which is performed by the design of a small set of plugins for each middleware. Currently, plugins are provided for integration into Globus Toolkit 4 and NorduGrid/ARC. The internal use of standard formats facilitates customization of the job submission service by replacement of custom components for performing specific well-defined tasks. Most importantly, this enables the easy replacement of resource selection algorithms by algorithms that address the specific needs of a particular Grid environment and job submission scenario. By default, the service implements a decentralized brokering policy, striving to optimize the performance for the individual user by minimizing the response time for each job submitted. The algorithms in our implementation perform resource selection based on performance predictions, and provide support for advance reservations as well as coallocation of multiple resources for coordinated use. The performance of the system is analyzed with a focus on overall service throughput (up to over 250 jobs per minute) and individual job submission response time (down to under 1 s). Copyright © 2009 John Wiley & Sons, Ltd. [source]

Dynamic data replication in LCG 2008
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2008. C. Nicholson.
Abstract: To provide high-performance access to data from high-energy physics experiments such as the Large Hadron Collider (LHC), controlled replication of files among grid sites is required. Dynamic, automated replication in response to jobs may also be useful and has been investigated using the grid simulator OptorSim. In this paper, results are presented from simulations of the LHC Computing Grid in 2008, in a physics analysis scenario. These show, first, that dynamic replication does give improved job throughput; second, that for this complex grid system, simple replication strategies such as Least Recently Used and Least Frequently Used are as effective as more advanced economic models; third, that grid site policies that allow maximum resource sharing are more effective; and lastly, that dynamic replication is particularly effective when data access patterns include some files being accessed more often than others, such as with a Zipf-like distribution. Copyright © 2008 John Wiley & Sons, Ltd. [source]
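To make the last finding above concrete, the sketch below pairs a Zipf-like file-popularity distribution with a Least Recently Used replica cache at a single site. It is a toy illustration, not OptorSim; the file counts, cache size, and skew parameter are assumptions.

```python
# Toy illustration of the two ingredients highlighted in the abstract: Zipf-like access
# and an LRU replica cache at a grid site (not OptorSim; all parameters are assumed).
import random
from collections import OrderedDict

def zipf_like_choice(n_files, skew=1.0, rng=random):
    """Pick a file index with probability proportional to 1/rank**skew."""
    weights = [1.0 / (rank ** skew) for rank in range(1, n_files + 1)]
    return rng.choices(range(n_files), weights=weights, k=1)[0]

class LRUReplicaCache:
    """Keep at most `capacity` local replicas; evict the least recently used one."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
    def access(self, file_id):
        hit = file_id in self.store
        if hit:
            self.store.move_to_end(file_id)        # refresh recency
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)     # evict least recently used replica
            self.store[file_id] = True             # replicate the file locally
        return hit

cache, hits = LRUReplicaCache(capacity=50), 0
for _ in range(10_000):
    hits += cache.access(zipf_like_choice(1000, skew=0.9))
print(f"local replica hit rate: {hits / 10_000:.2%}")
```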
High-level distribution for the rapid production of robust telecoms software: comparing C++ and ERLANG
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8 2008. J. H. Nyström.
Abstract: Currently most distributed telecoms software is engineered using low- and mid-level distributed technologies, but there is a drive to use high-level distribution. This paper reports the first systematic comparison of a high-level distributed programming language in the context of substantial commercial products. Our research strategy is to reengineer some C++/CORBA telecoms applications in ERLANG, a high-level distributed language, and make comparative measurements. Investigating the potential advantages of the high-level ERLANG technology shows that two significant benefits are realized. Firstly, robust configurable systems are easily developed using the high-level constructs for fault tolerance and distribution. The ERLANG code exhibits resilience, sustaining throughput at extreme loads and automatically recovering when load drops; availability, remaining available despite repeated and multiple failures; and dynamic reconfigurability, with throughput scaling near-linearly when resources are added or removed. Secondly, ERLANG delivers significant productivity and maintainability benefits: the ERLANG components are less than one-third of the size of their C++ counterparts. The productivity gains are attributed to specific language features; for example, high-level communication saves 22% and automatic memory management saves 11%, compared with the C++ implementation. Investigating the feasibility of the high-level ERLANG technology demonstrates that it fulfils several essential requirements. The requisite distributed functionality is readily specified, even though control of low-level distributed coordination aspects is abrogated to the ERLANG implementation. At the expense of additional memory residency, excellent time performance is achieved, e.g. three times faster than the C++ implementation, due to ERLANG's lightweight processes. ERLANG interoperates at low cost with conventional technologies, allowing incremental reengineering of large distributed systems. The technology is available on the required hardware/operating system platforms, and is well supported. Copyright © 2007 John Wiley & Sons, Ltd. [source]

DS/CDMA throughput of a multi-hop sensor network in a Rayleigh fading underwater acoustic channel
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8 2007. Choong Hock Mar.
Abstract: Asynchronous half-duplex Direct-Sequence Code-Division Multiple-Access (DS/CDMA) is a suitable candidate for the MAC protocol design of underwater acoustic (UWA) sensor networks owing to its many attractive features. Our ad-hoc multi-hop network is infrastructureless in that it is without centralized base stations or power control. Hence, we develop an asynchronous distributed half-duplex control protocol to regulate the transmitting and receiving phases of transmissions. Furthermore, multi-hop communications are very sensitive to the time variability of the received signal strength in the fading channel and to the ambient noise dominated by snapping shrimp in harsh underwater environments, because a single broken link in the multi-hop path is enough to disrupt communications and initiate new route searches. In our configuration, we use the Ad hoc On-demand Distance Vector (AODV) routing protocol optimized for UWA networks. Empirical studies show that we can model the channel as a slowly varying, frequency non-selective Rayleigh fading channel. We theoretically analyze the throughput of our configuration by considering three salient features: the ability of the receiver to demodulate the data, the effect of our control protocol, and the effect of disconnections on the generation of routing packets. The throughput under various operating conditions is then examined. It is observed that at the optimal node separation, the throughput is improved by a factor of 10. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A performance study of job management systems
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2004. Tarek El-Ghazawi.
Abstract: Job Management Systems (JMSs) efficiently schedule and monitor jobs in parallel and distributed computing environments. They are therefore critical for improving the utilization of expensive resources in high-performance computing systems and centers, and an important component of Grid software infrastructure. With many JMSs available commercially and in the public domain, it is difficult to choose an optimum JMS for a given computing environment. In this paper, we present the results of the first empirical study of JMSs reported in the literature. Four commonly used systems, LSF, PBS Pro, Sun Grid Engine/CODINE, and Condor, were considered. The study has revealed important strengths and weaknesses of these JMSs under different operational conditions. For example, LSF was shown to exhibit excellent throughput for a wide range of job types and submission rates.
Alternatively, CODINE appeared to outperform the other systems in terms of the average turn-around time for small jobs, and PBS appeared to excel in terms of turn-around time for relatively larger jobs. Copyright © 2004 John Wiley & Sons, Ltd. [source]

The J2EE ECperf benchmark results: transient trophies or technology treasures?
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2004. Paul Brebner.
Abstract: ECperf, the widely recognized industry-standard J2EE benchmark, has attracted a large number of results submissions and their subsequent publications. However, ECperf places little restriction on the hardware platforms, operating systems and databases utilized in the benchmarking process. This, combined with the existence of only two primary metrics, makes it difficult to accurately compare the performance of the Application Server products themselves. By mining the full-disclosure archives for trends and correlations, we have discovered that J2EE technology is very scalable, both in a scale-up and a scale-out manner. Other observed trends include a linear correlation between middle-tier total processing power and throughput, as well as between J2EE Application Server license costs and throughput. However, the results clearly indicate that there is an increasing cost per user with increasing capacity systems, and scale-up is proportionately more expensive than scale-out. Finally, the correlation between middle-tier processing power and throughput, combined with results obtained from a different 'lighter-weight' benchmark, facilitates an estimate of throughput for different types of J2EE applications. Copyright © 2004 John Wiley & Sons, Ltd. [source]
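The observation above of a roughly linear relationship between middle-tier processing power and throughput implies a simple least-squares estimate of throughput for a new configuration. The numbers below are invented placeholders used only to show the calculation; they are not actual ECperf submissions.

```python
# Fit a line through published (processing power, throughput) pairs and use it to
# estimate throughput for a new middle-tier configuration. Data are illustrative only.
import numpy as np

cpu_power = np.array([4, 8, 16, 24, 32], dtype=float)           # middle-tier processing units
throughput = np.array([110, 230, 450, 640, 880], dtype=float)   # benchmark operations/sec

slope, intercept = np.polyfit(cpu_power, throughput, deg=1)

def predict_throughput(power):
    """Linear throughput estimate for a given middle-tier processing power."""
    return slope * power + intercept

print(f"estimated throughput at 20 units: {predict_throughput(20):.0f} ops/sec")
```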
Differentiation trapping screen in live culture for genes expressed in cardiovascular lineages
DEVELOPMENTAL DYNAMICS, Issue 2 2004. Weisheng V. Chen.
Abstract: We have developed a gene trap vector that transduces an EGFP-neo fusion gene (Eno) to monitor the expression of trapped genes in living cells and embryos. Upon in vitro differentiation, most gene-trapped embryonic stem (ES) cell clones exhibited detectable green fluorescence in various specialized cell types, which can be followed in live culture in real time. Populations of ES cell-derived cardiomyocytes, smooth muscle cells, vascular endothelial cells, and hematopoietic cells were readily recognized by their distinctive morphologies coupled with unique activities, allowing efficient screening for clones with trapped genes expressed in cardiovascular lineages. Applying G418 selection in parallel differentiation cultures further increased detection sensitivity and screening throughput by enriching reporter-expressing cells with intensified green fluorescent protein signals. Sequence analyses and chimera studies demonstrated that the expression of trapped genes in vivo closely correlated with the observed lineage specificity in vitro. This provides a strategy to identify and mutate genes expressed in lineages of interest for further functional studies. Developmental Dynamics 229:319–327, 2004. © 2004 Wiley-Liss, Inc. [source]

Historical review of sample preparation for chromatographic bioanalysis: pros and cons
DRUG DEVELOPMENT RESEARCH, Issue 3 2007. Min S. Chang.
Abstract: Sample preparation is a major task in a regulated bioanalytical laboratory. The sample preparation procedure significantly impacts assay throughput, data quality, analysis cost, and employee satisfaction. Therefore, selecting and optimizing an appropriate sample preparation method is essential for successful method development. Because of our recent expertise, this article is focused on sample preparation for high-performance liquid chromatography with mass spectrometric detection. Liquid chromatography with mass spectrometric detection (LC-MS) is the most common detection technique for small molecules used in regulated bioanalytical laboratories. The sample preparation technologies discussed are pre-extraction and post-extraction sample processing, protein precipitation (PPT), liquid–liquid extraction (LLE), offline solid-phase extraction (SPE), and online solid-phase extraction. Since all these techniques have been in use for more than two decades, numerous applications and variations exist for each technique. We do not attempt to categorize each variation; rather, the development history, a brief theoretical background, and selected references are presented. The strengths and the limitations of each method are discussed, including the potential for throughput improvement. Where available, illustrations from presentations at various meetings by our laboratory are used to clarify our opinion. Drug Dev Res 68:107–133, 2007. © 2007 Wiley-Liss, Inc. [source]

Precipitation control over inorganic nitrogen import–export budgets across watersheds: a synthesis of long-term ecological research
ECOHYDROLOGY, Issue 2 2008. E. S. Kane.
Abstract: We investigated long-term and seasonal patterns of N imports and exports, as well as patterns following climate perturbations, across biomes using data from 15 watersheds at nine Long-Term Ecological Research (LTER) sites in North America. Mean dissolved inorganic nitrogen (DIN) import–export budgets (N import via precipitation minus N export via stream flow) for common years across all watersheds were highly variable, ranging from a net loss of −0.17 ± 0.09 kg N ha⁻¹ mo⁻¹ to net retention of 0.68 ± 0.08 kg N ha⁻¹ mo⁻¹. The net retention of DIN decreased (smaller import–export budget) with increasing precipitation, as well as with increasing variation in precipitation during the winter, spring, and fall. Averaged across all seasons, net DIN retention decreased as the coefficient of variation (CV) in precipitation increased across all sites (r² = 0.48, p = 0.005). This trend was made stronger when the disturbed watersheds were withheld from the analysis (r² = 0.80, p < 0.001, n = 11). Thus, DIN exports were either similar to or exceeded imports in the tropical, boreal, and wet coniferous watersheds, whereas imports exceeded exports in temperate deciduous watersheds. In general, forest harvesting, hurricanes, or floods corresponded with periods of increased DIN exports relative to imports. Periods when water throughput within a watershed was likely to be lower (i.e. low snow pack or El Niño years) corresponded with decreased DIN exports relative to imports. These data provide a basis for ranking diverse sites in terms of their ability to retain DIN in the context of changing precipitation regimes likely to occur in the future. Copyright © 2008 John Wiley & Sons, Ltd. [source]
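The two quantities related in the abstract above are straightforward to compute: the budget is deposition input minus stream export for each watershed, and precipitation variability is summarized as a coefficient of variation. A worked illustration with invented numbers, not the study's data:

```python
# DIN import-export budget (net retention = deposition input - stream export) and the
# coefficient of variation (CV) of precipitation. All values are invented for illustration.
import statistics

din_import = [0.62, 0.41, 0.55]     # kg N per ha per month, wet deposition, per watershed
din_export = [0.10, 0.48, 0.21]     # kg N per ha per month, stream-flow export, per watershed
net_retention = [i - e for i, e in zip(din_import, din_export)]

monthly_precip = [35, 80, 120, 60, 15, 95]   # mm, one watershed's record
cv_precip = statistics.stdev(monthly_precip) / statistics.mean(monthly_precip)

print(net_retention, round(cv_precip, 2))
```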
The impact of a supervised injecting facility on ambulance call-outs in Sydney, Australia
ADDICTION, Issue 4 2010. Allison M. Salmon.
Abstract: Aims: Supervised injecting facilities (SIFs) are effective in reducing the harms associated with injecting drug use among their clientele, but do SIFs ease the burden on ambulance services of attending to overdoses in the community? This study addresses this question, which is yet to be answered in the growing body of international evidence supporting SIFs' efficacy. Design: Ecological study of patterns in ambulance attendances at opioid-related overdoses, before and after the opening of a SIF in Sydney, Australia. Setting: A SIF opened as a pilot in Sydney's 'red light' district with the aim of accommodating a high throughput of injecting drug users (IDUs) for supervised injecting episodes, recovery and the management of overdoses. Measurements: A total of 20,409 ambulance attendances at opioid-related overdoses before and after the opening of the Sydney SIF; average monthly ambulance attendances at suspected opioid-related overdoses, before (36 months) and after (60 months) the opening of the Sydney Medically Supervised Injecting Centre (MSIC), in the vicinity of the centre and in the rest of New South Wales (NSW). Results: The burden on ambulance services of attending to opioid-related overdoses declined significantly in the vicinity of the Sydney SIF after it opened, compared to the rest of NSW. This effect was greatest during operating hours and in the immediate MSIC area, suggesting that SIFs may be most effective in reducing the impact of opioid-related overdose in their immediate vicinity. Conclusions: By providing environments in which IDUs receive supervised injection and overdose management and education, SIFs can reduce the demand for ambulance services, thereby freeing them to attend other medical emergencies within the community. [source]

Determination of DNA methylation by COBRA: A comparative study of CGE with LIF detection and conventional gel electrophoresis
ELECTROPHORESIS, Issue 17 2009. Simon Goedecke.
Abstract: DNA methylation, as an epigenetic modification of the human genome, is under intensive investigation. Several studies have demonstrated a role of DNA methylation in oncogenesis. In conjunction with histone modifications, DNA methylation may cause the formation of heterochromatin and thus mediate the inactivation of gene transcription. It is important to develop methods that allow for an accurate quantification of the amount of DNA methylation in particular DNA regions, to gain information concerning the threshold of methylation levels necessary for gene inactivation. In this article, a CGE method with on-column LIF detection using SYBR Green is compared with conventional slab-gel electrophoresis, and we investigate the validity of analyzing DNA methylation in samples from a combined bisulfite restriction analysis (COBRA). It is demonstrated that CGE is superior to gel electrophoresis in terms of linearity, precision, accuracy, automation (high throughput), and sample consumption. However, gel electrophoresis is easier to perform (simple devices, no PC required), and its running costs are comparatively low. A further advantage of CGE is the sparing use of toxic compounds (MeOH and SYBR Green), whereas gel electrophoresis is performed in polyacrylamide gels with ethidium bromide staining. [source]
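In COBRA, bisulfite conversion preserves the restriction site only where the CpG was methylated, so the methylation level is commonly estimated as the digested fraction of the total fragment signal. A minimal illustration of that calculation; the peak areas are invented and are not the paper's data.

```python
# Methylation fraction from a COBRA electropherogram or gel trace:
# fraction methylated = sum(cut fragment signal) / (cut + uncut signal).
def cobra_methylation_fraction(cut_peak_areas, uncut_peak_area):
    cut = sum(cut_peak_areas)
    return cut / (cut + uncut_peak_area)

# Illustrative peak areas such as might be read from a CGE-LIF trace.
print(f"{cobra_methylation_fraction([1.8e4, 1.1e4], uncut_peak_area=4.3e4):.1%}")
```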
Joule heating in electrokinetic flow
ELECTROPHORESIS, Issue 1 2008. Xiangchun Xuan.
Abstract: Electrokinetic flow is an efficient means to manipulate liquids and samples in lab-on-a-chip devices. It has a number of significant advantages over conventional pressure-driven flow. However, Joule heating is inevitable in electrokinetic flow; it causes temperature variations in the liquid and introduces disturbances to the electric, flow and concentration fields via temperature-dependent material properties. Therefore, both the throughput and the resolution of analytical studies performed in microfluidic devices are affected. This article reviews recent progress on the topic of Joule heating and its effects in electrokinetic flow, particularly the theoretical and experimental accomplishments from the perspectives of fluid mechanics and heat/mass transfer. The primary focus is placed on the temperature-induced flow variations and the accompanying phenomena at the whole-channel or chip level. [source]

Amino acid profiling in plant cell cultures: An inter-laboratory comparison of CE-MS and GC-MS
ELECTROPHORESIS, Issue 9 2007. Brad J. Williams.
Abstract: A CE-MS method for metabolic profiling of amino acids was developed and used in an integrated functional genomics project to study the response of Medicago truncatula liquid suspension cell cultures to stress. This project required the analysis of more than 500 root cell culture extracts. The CE-MS method profiled 20 biologically important amino acids, required no sample derivatization prior to injection, and used minimal sample preparation. The method is described in terms of CE and MS operational parameters, reproducibility of migration times and response ratios, sample preparation, sample throughput, and reliability. The method was then compared with a previously published report that used GC-MS metabolic profiling for the same tissues. The data reveal a high level of similarity between the CE-MS and GC-MS amino acid profiling methods, supporting these as complementary technologies for metabolomics. We conclude that CE-MS is a valid alternative to GC-MS for targeted profiling of metabolites such as amino acids, and possesses some significant advantages over GC-MS. [source]

Determination of ethyl sulfate – a marker for recent ethanol consumption – in human urine by CE with indirect UV detection
ELECTROPHORESIS, Issue 23 2006. Francesc A. Esteve-Turrillas.
Abstract: A CE method for the determination of the ethanol consumption marker ethyl sulfate (EtS) in human urine was developed. Analysis was performed in negative polarity mode with a background electrolyte composed of 15 mM maleic acid, 1 mM phthalic acid, and 0.05 mM cetyltrimethylammonium bromide (CTAB) at pH 2.5, with indirect UV detection at 220 nm (300 nm reference wavelength). This buffer system provided selective separation of EtS and vinylsulfonic acid, employed as the internal standard, from urine matrix components. Sample pretreatment of urine was minimized to a 1:5 dilution with water. The optimized CE method was validated in the range of 5–700 mg/L using seven lots of urine. Intra- and inter-day precision and accuracy values, determined at 5, 60, and 700 mg/L with each lot of urine, fulfilled the requirements of common guidelines for bioanalytical method validation. The application to forensic urine samples collected at autopsies, as well as a successful cross-validation with an LC-MS/MS-based method, confirmed the overall validity and real-world suitability of the developed expeditious CE assay (sample throughput 130 per day). [source]
Determination of dissociation constants of folic acid, methotrexate, and other photolabile pteridines by pressure-assisted capillary electrophoresis
ELECTROPHORESIS, Issue 17 2006. Zoltán Szakács.
Abstract: Pressure-assisted CE (PACE) was applied to determine the previously inaccessible complete set of pK values for folic acid and eight related multiprotic compounds. PACE allowed the determination of all acidity macroconstants at low (~0.1 mM) concentration, without interference from self-association or photodegradation, throughout the pH range. The accuracy of the constants was verified by NMR-pH, UV-pH, and potentiometric titrations, and the data could be converted to physiological ionic strength. It was shown that even three overlapping pK values can be determined by CE with good precision (<0.06) and accuracy if an appropriately low sample throughput is used. Experimental aspects of PACE for the quantitation of acid–base properties are analyzed. The site-specific basicity data obtained for folic acid and methotrexate (MTX) reveal that apparently slight constitutional differences between folic acid and MTX carry highly different proton-binding propensities at analogous moieties, especially at the pteridine N1 locus, providing a straightforward explanation for their distinctive binding to dihydrofolate reductase at the molecular level. [source]
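The working relationship behind CE-based pK determination is not given in the abstract above: the effective mobility measured at each pH is, to a good approximation, the mole-fraction-weighted average of the mobilities of the ionization states, and the pK values are obtained by fitting that model to mobility-versus-pH data. A hedged sketch for a simple diprotic acid follows; all pKa values, mobilities, and data points are illustrative, not the paper's.

```python
# Fit pKa values from effective-mobility-vs-pH data (diprotic acid H2A, neutral form
# has zero mobility). Numbers are synthetic; this is not the paper's data or code.
import numpy as np
from scipy.optimize import curve_fit

def effective_mobility(pH, pKa1, pKa2, mu1, mu2):
    """mu_eff = x(HA-)*mu1 + x(A2-)*mu2, with ionization fractions from pKa1, pKa2."""
    h = 10.0 ** (-np.asarray(pH))
    k1, k2 = 10.0 ** -pKa1, 10.0 ** -pKa2
    denom = 1.0 + k1 / h + k1 * k2 / h**2
    x1, x2 = (k1 / h) / denom, (k1 * k2 / h**2) / denom
    return x1 * mu1 + x2 * mu2

pH_obs = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0, 7.0])
mu_obs = effective_mobility(pH_obs, 3.1, 4.8, -18.0, -33.0) \
         + np.random.default_rng(1).normal(0, 0.3, pH_obs.size)   # noisy synthetic data

popt, _ = curve_fit(effective_mobility, pH_obs, mu_obs, p0=[3.0, 5.0, -15.0, -30.0])
print("fitted pKa values:", popt[:2])
```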
Applications of the rep-PCR DNA fingerprinting technique to study microbial diversity, ecology and evolution
ENVIRONMENTAL MICROBIOLOGY, Issue 4 2009. Satoshi Ishii.
Summary: A large number of repetitive DNA sequences are found at multiple sites in the genomes of numerous bacteria, archaea and eukarya. While the functions of many of these repetitive sequence elements are unknown, they have proven useful as the basis of several powerful tools for molecular diagnostics, medical microbiology, epidemiological analyses and environmental microbiology. The repetitive sequence-based PCR (rep-PCR) DNA fingerprinting technique uses primers targeting several of these repetitive elements, together with PCR, to generate unique DNA profiles or 'fingerprints' of individual microbial strains. Although this technique has been used extensively to examine diversity among a variety of prokaryotic microorganisms, rep-PCR DNA fingerprinting can also be applied to microbial ecology and microbial evolution studies, since it has the power to distinguish microbes at the strain or isolate level. Recent advances in rep-PCR methodology have resulted in increased accuracy, reproducibility and throughput. In this minireview, we summarize recent improvements in rep-PCR DNA fingerprinting methodology and discuss its applications to address fundamentally important questions in microbial ecology and evolution. [source]

A new rapid micromethod for the assay of phenobarbital from dried blood spots by LC-tandem mass spectrometry
EPILEPSIA, Issue 12 2009. Giancarlo La Marca.
Summary: Advantages of dried blood spots include low invasiveness and the ease and low cost of sample collection, transport, and storage. We used liquid chromatography-tandem mass spectrometry (LC-MS/MS) to determine phenobarbital levels in dried blood spot specimens and compared this methodology to a commercially available particle-enhanced turbidimetric inhibition immunoassay (PETINIA) in plasma/serum samples. The calibration curve in matrix, using D5-phenobarbital as the internal standard, was linear over the phenobarbital concentration range of 1–100 mg/L (correlation coefficient 0.9996). The coefficients of variation in blood spots ranged from 2.29% to 6.71%, and the accuracy ranged from 96.54% to 103.87%. There were no significant differences between the concentrations measured using PETINIA and LC-MS/MS (both had similar precision and accuracy); however, LC-MS/MS allows at least 1.5 times higher throughput of phenobarbital analysis and additionally offers ease of sample collection, which is particularly important for newborns or small infants. [source]
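A brief sketch of the internal-standard calibration typically used in this kind of LC-MS/MS assay: the analyte/IS peak-area ratio is regressed on the calibrator concentration, and the fitted line is inverted for unknowns. The peak areas below are invented for illustration only and are not the paper's data.

```python
# Internal-standard calibration: ratio = slope*conc + intercept, then invert for unknowns.
import numpy as np

cal_conc = np.array([1, 5, 10, 25, 50, 100], dtype=float)                 # mg/L calibrators
analyte_area = np.array([0.021, 0.105, 0.212, 0.529, 1.06, 2.11]) * 1e6   # analyte peak areas
is_area = np.full(cal_conc.size, 1.0e6)                                   # D5-phenobarbital IS areas

slope, intercept = np.polyfit(cal_conc, analyte_area / is_area, deg=1)

def quantify(sample_analyte_area, sample_is_area):
    """Back-calculate concentration (mg/L) from a sample's analyte/IS area ratio."""
    ratio = sample_analyte_area / sample_is_area
    return (ratio - intercept) / slope

print(f"{quantify(0.84e6, 0.98e6):.1f} mg/L")
```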
[source] Experiments on space diversity effect in MIMO channel transmission with maximum data rate of 1,Gbps in downlink OFDM radio accessEUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2006Hidekazu Taoka This paper presents experimental results on the space diversity effect in MIMO multiplexing/diversity with the target data rate up to 1,Gbps using OFDM radio access based on laboratory and field experiments including realistic impairments using the implemented MIMO transceivers with the maximum of four transmitter/receiver branches. The experimental results using multipath fading simulators show that at the frequency efficiency of less than approximately 2,bits/second/Hz, MIMO diversity using the space-time block code (STBC) increases the measured throughput compared to MIMO multiplexing owing to the high transmission space diversity effect. At a higher frequency efficiency than approximately 2--3,bits/second/Hz, however, MIMO multiplexing exhibits performance superior to that of MIMO diversity since the impairments using higher data modulation and a higher channel coding rate in MIMO diversity overcomes the space diversity effect. The results also show that the receiver space diversity effect is very effective in MIMO multiplexing for maximum likelihood detection employing QR-decomposition and the M-algorithm (QRM-MLD) signal detection. Finally, we show that the real-time throughput of 500,Mbps and 1,Gbps in a 100-MHz transmission bandwidth is achieved at the average received Eb/N0 per receiver antenna of approximately 8.0 and 14.0,dB using 16QAM modulation and Turbo coding with the coding rate of 1/2 and 8/9 respectively in 4-by-4 MIMO multiplexing in a real propagation environment. Copyright © 2006 AEIT. [source] |