System Performance (system + performance)

Distribution by Scientific Domains
Distribution within Engineering

Kinds of System Performance

  • communication system performance
  • control system performance
  • health system performance

Terms modified by System Performance

  • system performance measure

Selected Abstracts


    Decentralisation, Governance and Health-System Performance: 'Where You Stand Depends on Where You Sit'

    DEVELOPMENT POLICY REVIEW, Issue 6 2010
    Andrew Mitchell
    Advocates of local government often argue that when decentralisation is accompanied by adequate mechanisms of accountability, particularly those responsive to local preferences, improved service delivery will result. From the perspective of the health sector, the appropriate degree of decentralisation and the necessary mechanisms of accountability depend upon the achievement of health system goals. Drawing on evidence from six countries (Bolivia, Chile, India, Pakistan, Philippines, Uganda), this article comes to the conclusion that a balance between centralisation of some functions and decentralisation of others, along with improved mechanisms of accountability, is needed to achieve health system objectives. [source]


    An integrated pneumatic tactile feedback actuator array for robotic surgery

    THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY, Issue 1 2009
    Miguel L. Franco
    Abstract Background A pneumatically controlled balloon actuator array has been developed to provide tactile feedback to the fingers during robotic surgery. Methods The actuator and pneumatics were integrated onto a robotic surgical system. Potential interference of the inactive system was evaluated using a timed robotic peg transfer task. System performance was evaluated by measuring human perception at the thumb and index finger. Results No significant difference was found between performance with and without the inactive mounted actuator blocks. Subjects were able to determine inflation location with > 95% accuracy and to distinguish five discrete inflation levels with the index finger and thumb with accuracies of 94% and 92%, respectively. Temporal tests revealed that an 80 ms temporal separation was sufficient to detect balloon stimuli with high accuracy. Conclusions The mounted balloon actuators successfully transmitted tactile information to the index finger and thumb, while not hindering performance of robotic surgical movements. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Transcritical CO2 refrigerator and sub-critical R134a refrigerator: A comparison of the experimental results

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 12 2009
    Ciro Aprea
    Abstract This paper describes experiments comparing a commercially available R134a refrigeration plant connected to a cold store and a prototype R744 (carbon dioxide) system working as a classical 'split-system' to cool air in residential applications in a transcritical cycle. Both plants are able to develop a refrigeration power equal to 3000 W. The R744 system utilizes aluminium heat exchangers, a semi-hermetic compressor, a back-pressure valve and a thermostatic expansion valve. The R134a refrigeration plant operates using a semi-hermetic reciprocating compressor, an air condenser followed by a liquid receiver, a manifold with two expansion valves, a thermostatic one and a manual one mounted in parallel, and an air cooling evaporator inside the cold store. System performances are compared for two evaporation temperatures, varying the temperature of the external air running over the gas-cooler and over the condenser. The refrigeration load in the cold store is simulated by means of electrical resistances, whereas the air evaporator of the R744 plant is placed in a very large ambient. The results of the comparison are discussed in terms of the refrigerant temperature at the compressor discharge line, the refrigerant mass flow rate and the coefficient of performance (COP). The COPs measured for the R744 plant are lower than those of the R134a plant working at the same external and internal conditions. Further improvements to the components of the cycle are necessary before carbon dioxide 'split-systems' can be used on a large scale. Copyright © 2009 John Wiley & Sons, Ltd. [source]
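
    The comparison above is expressed in terms of the coefficient of performance (COP). As a reminder of the quantity being compared, a minimal Python sketch of the definition follows; the 3000 W refrigeration capacity is taken from the abstract, while the compressor powers are assumptions for illustration, not the paper's measurements.

        def cop(refrigeration_power_w: float, compressor_power_w: float) -> float:
            """Coefficient of performance: useful cooling delivered per unit of compressor work."""
            return refrigeration_power_w / compressor_power_w

        # Illustrative values only: both plants deliver about 3000 W of refrigeration power;
        # the compressor powers below are assumptions, not measurements from the paper.
        cop_r134a = cop(3000.0, 1100.0)
        cop_r744 = cop(3000.0, 1400.0)
        print(f"COP R134a = {cop_r134a:.2f}, COP R744 = {cop_r744:.2f}")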


    Application of Visual Analytics for Thermal State Management in Large Data Centres

    COMPUTER GRAPHICS FORUM, Issue 6 2010
    M. C. Hao
    I.3.3 [Computer Graphics]: Picture/Image Generation - Display Algorithms; H.5.0 [Information Systems]: Information Interfaces and Presentation - General. Abstract Today's large data centres are the computational hubs of the next generation of IT services. With the advent of dynamic smart cooling and rack-level sensing, the need for visual data exploration is growing. If administrators know the rack-level thermal state changes and catch problems in real time, energy consumption can be greatly reduced. In this paper, we apply a cell-based spatio-temporal overall view with high-resolution time series to simultaneously analyze complex thermal state changes over time across hundreds of racks. We employ cell-based visualization techniques for troubleshooting and abnormal state detection. These techniques are based on the detection of sensor temperature relations and events to help identify the root causes of problems. In order to optimize the data centre cooling system performance, we derive new non-overlapped scatter plots to visualize the correlations between the temperatures and chiller utilization. All these techniques have been used successfully to monitor various time-critical thermal states in real-world large-scale production data centres and to derive cooling policies. We are starting to embed these visualization techniques into a handheld device to add mobile monitoring capability. [source]


    Integrating Messy Genetic Algorithms and Simulation to Optimize Resource Utilization

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2009
    Tao-ming Cheng
    Various resource distribution modeling scenarios were tested in simulation to determine their system performances. MGA operations were then applied in the selection of the best resource utilization schemes based on those performances. A case study showed that this new modeling mechanism, along with the implemented computer program, could not only ease the process of developing optimal resource utilization, but could also improve the system performance of the simulation model. [source]
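
    To make the coupling between optimization and simulation concrete, a heavily simplified Python sketch follows: each candidate resource scheme is scored by a toy stand-in for the simulation model, and an evolutionary loop keeps the fittest candidates. The fleet-sizing example, the fitness function and the plain GA operators are all assumptions for illustration; they are not the authors' messy GA or their simulation model.

        import random

        def simulate_throughput(n_loaders: int, n_trucks: int) -> float:
            """Toy stand-in for a discrete-event simulation of a resource-constrained operation."""
            service = min(n_loaders * 12.0, n_trucks * 5.0)   # bottleneck of the two resources
            cost_penalty = 0.8 * (n_loaders + n_trucks)       # penalise oversized fleets
            return service - cost_penalty

        def evolve(pop_size=20, generations=30, seed=1):
            rng = random.Random(seed)
            pop = [(rng.randint(1, 5), rng.randint(1, 12)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda s: simulate_throughput(*s), reverse=True)
                parents = pop[: pop_size // 2]                # selection: keep the better half
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = rng.sample(parents, 2)
                    child = (a[0], b[1])                      # crude crossover
                    if rng.random() < 0.2:                    # mutation on the first gene
                        child = (max(1, child[0] + rng.choice([-1, 1])), child[1])
                    children.append(child)
                pop = parents + children
            return max(pop, key=lambda s: simulate_throughput(*s))

        best = evolve()
        print("best (loaders, trucks):", best,
              "throughput:", round(simulate_throughput(*best), 1))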


    Bi-level Programming Formulation and Heuristic Solution Approach for Dynamic Traffic Signal Optimization

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2006
    Dazhi Sun
    Conventional methods of signal timing optimization assume a given traffic flow pattern, whereas traffic assignment is performed under the assumption of fixed signal timing. This study develops a bi-level programming formulation and heuristic solution approach (HSA) for dynamic traffic signal optimization in networks with time-dependent demand and stochastic route choice. In the bi-level programming model, the upper-level problem represents the decision-making behavior (signal control) of the system manager, while user travel behavior is represented at the lower level. The HSA consists of a Genetic Algorithm (GA) and a Cell Transmission Simulation (CTS) based Incremental Logit Assignment (ILA) procedure. GA is used to seek the upper-level signal control variables. ILA is developed to find the user-optimal flow pattern at the lower level, and CTS is implemented to propagate traffic and collect real-time traffic information. The performance of the HSA is investigated in numerical applications in a sample network. These applications compare the efficiency and quality of the global optima achieved by Elitist GA and Micro GA. Furthermore, the impact of different frequencies of updating information and different GA population sizes on system performance is analyzed. [source]
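
    The bi-level structure can be sketched in a few lines of Python: a lower-level route-choice model reacts to the delays produced by a candidate signal setting, and an upper-level search picks the setting that minimizes total delay once flows and delays have settled. The two-route junction, the delay formula and the random search standing in for the GA/CTS/ILA machinery are illustrative assumptions, not the authors' formulation.

        import math, random

        def logit_assignment(demand, delay_a, delay_b, theta=0.1):
            """Lower level (traveller behaviour): logit split of demand between two routes."""
            ua, ub = -theta * delay_a, -theta * delay_b
            pa = math.exp(ua) / (math.exp(ua) + math.exp(ub))
            return demand * pa, demand * (1.0 - pa)

        def total_delay(green_split, demand=1800.0, cycle=90.0):
            """Upper-level objective for a hypothetical two-approach signalized junction."""
            cap_a, cap_b = 1900.0 * green_split, 1900.0 * (1.0 - green_split)
            da = db = cycle / 2.0
            for _ in range(50):                       # damped fixed-point loop: flows vs. delays
                qa, qb = logit_assignment(demand, da, db)
                da_new = cycle * (1 - green_split) / 2 + 3600.0 / max(cap_a - qa, 1.0)
                db_new = cycle * green_split / 2 + 3600.0 / max(cap_b - qb, 1.0)
                da, db = 0.5 * (da + da_new), 0.5 * (db + db_new)
            qa, qb = logit_assignment(demand, da, db)
            return qa * da + qb * db

        # Upper level (system manager): crude random search standing in for the GA.
        rng = random.Random(0)
        best = min((rng.uniform(0.2, 0.8) for _ in range(200)), key=total_delay)
        print(f"near-optimal green split for approach A: {best:.2f}")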


    Scheduling time-critical requests for multiple data objects in on-demand broadcast

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2010
    Victor C. S. Lee
    Abstract On-demand broadcast is an effective data dissemination approach in mobile computing environments. Most of the recent studies on on-demand data broadcast assume that clients request only a single data object at a time. This assumption may not be practical for increasingly sophisticated mobile applications. In this paper, we investigate the scheduling problem of time-critical requests for multiple data objects in on-demand broadcast environments and observe that existing scheduling algorithms designed for single-data-object requests perform unsatisfactorily in this new setting. Based on our analysis, we propose new algorithms to improve the system performance. Copyright © 2010 John Wiley & Sons, Ltd. [source]
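
    To make the multi-item setting concrete, here is a small Python sketch of one possible urgency-based heuristic: at every broadcast tick the server transmits the data object that contributes to the most urgent, still-satisfiable pending requests. This is an illustrative heuristic only, not one of the algorithms proposed in the paper.

        from dataclasses import dataclass, field

        @dataclass
        class Request:
            deadline: int                              # tick by which all items must be received
            items: set = field(default_factory=set)    # data objects still missing

        def choose_next_item(pending, now):
            """Score each object by the urgency of the satisfiable requests that still need it."""
            scores = {}
            for req in pending:
                slack = req.deadline - now
                if slack < len(req.items):             # cannot finish even if served every tick
                    continue
                for item in req.items:
                    scores[item] = scores.get(item, 0.0) + 1.0 / (slack + 1)
            return max(scores, key=scores.get) if scores else None

        # Tiny example: three multi-item requests over objects 'a'..'d'.
        pending = [Request(3, {"a", "b"}), Request(5, {"b", "c"}), Request(9, {"d"})]
        for t in range(6):
            item = choose_next_item(pending, t)
            if item is None:
                break
            for req in pending:
                req.items.discard(item)
            pending = [r for r in pending if r.items]
            print(f"tick {t}: broadcast {item}")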


    An optimal multimedia object allocation solution in multi-powermode storage systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2010
    Yingwei Jin
    Abstract Given a set of multimedia objects R = {o1, o2, …, ok}, each of which has a set of multiple versions oi.v = {Ai.0, Ai.1, …, Ai.m}, i = 1, 2, …, k, there is a problem of distributing these objects in a server system so that user requests for accessing specified multimedia objects can be fulfilled with the minimum energy consumption and without significantly degrading system performance. This paper considers the allocation problem of multimedia objects in multi-powermode storage systems, where the objects are distributed among multi-powermode storages based on the access pattern to the objects. We design an underlying storage-system infrastructure, propose a dynamic multimedia object allocation policy based on the designed infrastructure, and prove the optimality of the proposed policy. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Network-aware selective job checkpoint and migration to enhance co-allocation in multi-cluster systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2009
    William M. Jones
    Abstract Multi-site parallel job schedulers can improve average job turn-around time by making use of fragmented node resources available throughout the grid. By mapping jobs across potentially many clusters, jobs that would otherwise wait in the queue for local resources can begin execution much earlier; thereby improving system utilization and reducing average queue waiting time. Recent research in this area of scheduling leverages user-provided estimates of job communication characteristics to more effectively partition the job across system resources. In this paper, we address the impact of inaccuracies in these estimates on system performance and show that multi-site scheduling techniques benefit from these estimates, even in the presence of considerable inaccuracy. While these results are encouraging, there are instances where these errors result in poor job scheduling decisions that cause network over-subscription. This situation can lead to significantly degraded application performance and turnaround time. Consequently, we explore the use of job checkpointing, termination, migration, and restart (CTMR) to selectively stop offending jobs to alleviate network congestion and subsequently restart them when (and where) sufficient network resources are available. We then characterize the conditions and the extent to which the process of CTMR improves overall performance. We demonstrate that this technique is beneficial even when the overhead of doing so is costly. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    A dynamic admission control scheme to manage contention on shared computing resources

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2009
    Percival Xavier
    Abstract A virtual organization is established when physical organizations collaborate to share their computing resources with the aim of serving each other when there is a likelihood of insufficient local resources during peak resource usage periods at any organization. Contention becomes a potential problem when a large number of requests, which can overwhelm the aggregate capacity of shared resources, are submitted from the participating organizations coincidentally at the same period. In particular, when a small number of requests that require large amounts of computing resources are admitted in place of a large number of requests that require less computing resources, the overall system performance, in terms of admission ratio, can deteriorate significantly. Hence, admission control is necessary to reduce resource oversubscription. Because domain-shared computing resources are likely to be combined to form a large-scale system, it is not possible to define a fixed admission policy solely based on the request's CPU and execution time requirements. In this paper, we introduce an admission control framework, based on a pricing model, for a multi-domain-shared computing infrastructure. The performance of the admission control framework is evaluated under different scenarios that contribute to the overall degree of competition for shared resources. The results are presented and analyzed in this paper. Copyright © 2008 John Wiley & Sons, Ltd. [source]
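
    The idea of coupling admission to a congestion-sensitive price can be illustrated with a few lines of Python. The price curve, the budget check and all numbers are assumptions made for this sketch; they are not the pricing model evaluated in the paper.

        def price_per_cpu_hour(utilization, base=1.0, steepness=2.0):
            """Illustrative congestion price: cheap when the shared pool is idle, steep near saturation."""
            utilization = min(max(utilization, 0.0), 0.999)
            return base / (1.0 - utilization) ** steepness

        def admit(cpus, hours, budget, used, capacity):
            """Admit a request only if its cost at the current congestion price fits its budget."""
            if used + cpus > capacity:
                return False
            cost = cpus * hours * price_per_cpu_hour(used / capacity)
            return cost <= budget

        capacity, used = 1000, 700
        print(admit(cpus=16, hours=2.0, budget=500.0, used=used, capacity=capacity))    # small job: admitted
        print(admit(cpus=256, hours=10.0, budget=5000.0, used=used, capacity=capacity)) # large job: rejected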


    A brainlike learning system with supervised, unsupervised, and reinforcement learning

    ELECTRICAL ENGINEERING IN JAPAN, Issue 1 2008
    Takafumi Sasakawa
    Abstract According to Hebb's cell assembly theory, the brain has the capability of function localization. On the other hand, it is suggested that in the brain there are three different learning paradigms: supervised, unsupervised, and reinforcement learning, which are deeply related to three parts of the brain: the cerebellum, cerebral cortex, and basal ganglia, respectively. Inspired by this knowledge of the brain, in this paper we present a brainlike learning system consisting of three parts: a supervised learning (SL) part, an unsupervised learning (UL) part, and a reinforcement learning (RL) part. The SL part is the main part, learning an input-output mapping; the UL part is a competitive network dividing the input space into subspaces and realizes the capability of function localization by controlling the firing strength of neurons in the SL part based on input patterns; the RL part is a reinforcement learning scheme, which optimizes system performance by adjusting the parameters in the UL part. Numerical simulations have been carried out and the simulation results confirm the effectiveness of the proposed brainlike learning system. © 2007 Wiley Periodicals, Inc. Electr Eng Jpn, 162(1): 32-39, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20600 [source]
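
    The division of labour between the UL and SL parts can be illustrated with a toy gating scheme: a competitive (winner-take-all) layer partitions the input space, and only the winning local supervised model fires and is updated. The Python sketch below is a generic mixture-of-local-experts illustration under these assumptions; it is not the authors' architecture, and the RL part that tunes the UL parameters is omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        prototypes = rng.uniform(-3, 3, size=4)   # UL part: competitive prototypes over the input space
        weights = np.zeros(4)                     # SL part: one local linear model per prototype region
        biases = np.zeros(4)

        def region(x):
            """Winner-take-all: index of the closest prototype (the 'firing' subnetwork)."""
            return int(np.argmin((prototypes - x) ** 2))

        def train_step(x, y, lr_sl=0.05, lr_ul=0.02):
            k = region(x)
            prototypes[k] += lr_ul * (x - prototypes[k])       # UL: move the winner toward the input
            err = y - (weights[k] * x + biases[k])             # SL: gradient step on the local model
            weights[k] += lr_sl * err * x
            biases[k] += lr_sl * err

        # Learn a piecewise mapping (here |x|) with locally linear experts.
        for _ in range(5000):
            x = rng.uniform(-3, 3)
            train_step(x, abs(x))

        for x in (-2.0, -0.5, 0.5, 2.0):
            k = region(x)
            print(f"f({x:+.1f}) is approximated as {weights[k] * x + biases[k]:.2f}")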


    Salt-water recycling for brine production at road-salt-storage facilities

    ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 4 2009
    G. Michael Fitch
    Abstract This research examines storm-water quality at road-salt-storage facilities located at Virginia Department of Transportation (VDOT) winter maintenance sites and investigates the feasibility of a sustainable solution for better managing the salt-contaminated storm-water runoff. Collection ponds are currently used at most salt-storage sites to contain highly saline runoff and prevent its release into the environment. During a synoptic winter-time sampling, chloride-ion concentrations in these ponds were found to be significantly greater than state and federal regulatory guidelines for surface-water-quality criteria, with individual values exceeding 2000 mg/L. The pond water is currently treated as a waste product by VDOT, resulting in significant disposal costs. However, this saline pond water can potentially be recycled to produce concentrated brine solutions, which can then be used by VDOT either for prewetting dry salt during application to roadways or for direct brine application. Laboratory and field tests have been performed using a bench-scale brine generation system to quantify the effects of hydraulic retention time, temperature, and influent-water quality on system performance. Results of these studies show that the storm-water runoff captured in collection ponds requires no pretreatment before entering the brine generation system and can effectively produce brine at the target salt concentration. Results of a cost-benefit analysis indicate that it is possible under multiple scenarios to recover the capital investment of implementing brine generation at all VDOT winter maintenance locations, typically within a 4-year horizon. © 2009 American Institute of Chemical Engineers Environ Prog, 2009 [source]


    Thermal modeling and simulation of an integrated solid oxide fuel cell and charcoal gasification system

    ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 3 2009
    C. Ozgur Colpan
    Abstract In this study, we propose a novel integrated charcoal gasification and solid oxide fuel cell (SOFC) system, which is intended to produce electricity and heat simultaneously. This system mainly consists of an updraft gasifier using air and steam as the gasification agents, a planar, direct internal reforming SOFC, and a low temperature gas cleanup system. The performance of this system is assessed through numerical modeling using a pre-developed and validated heat transfer model of the SOFC and thermodynamic models for the rest of the components. These models are used to simulate the performance of the cell and system for a case study. In addition, a parametric study is conducted to assess the effect of the Reynolds number at the fuel channel inlet of the SOFC on the cell performance, e.g., fuel utilization and power density, and the system performance, e.g., electrical efficiency, exergetic efficiency, and power-to-heat ratio. The number of stacks is also calculated for different Reynolds numbers to discuss the economic feasibility of the integrated system. The results show that the electrical efficiency, exergetic efficiency and power-to-heat ratio of this system are 33.31%, 45.72%, and 1.004, respectively, for the base case. The parametric study points out that choosing a low Reynolds number yields higher electrical and exergetic efficiencies for the system, but it also increases the cost of the system. © 2009 American Institute of Chemical Engineers Environ Prog, 2009 [source]
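
    The three performance figures quoted above follow directly from the net electrical output, the useful heat output and the fuel input. A minimal Python sketch of the definitions, using clearly hypothetical input values (not the paper's simulation results):

        def electrical_efficiency(net_power_kw, fuel_lhv_kw):
            """Net electrical output divided by fuel energy input (lower-heating-value basis)."""
            return net_power_kw / fuel_lhv_kw

        def exergetic_efficiency(net_power_kw, useful_heat_kw, heat_exergy_factor, fuel_exergy_kw):
            """Electricity counts as pure exergy; heat counts only through its exergy (Carnot) factor."""
            return (net_power_kw + heat_exergy_factor * useful_heat_kw) / fuel_exergy_kw

        def power_to_heat_ratio(net_power_kw, useful_heat_kw):
            return net_power_kw / useful_heat_kw

        # Hypothetical inputs, for illustration only (not taken from the paper):
        P, Q = 100.0, 120.0            # kW electricity, kW useful heat
        print(round(electrical_efficiency(P, fuel_lhv_kw=300.0), 3))
        print(round(exergetic_efficiency(P, Q, heat_exergy_factor=0.35, fuel_exergy_kw=310.0), 3))
        print(round(power_to_heat_ratio(P, Q), 3))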


    In-Situ ozonation of contaminated groundwater

    ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 3 2000
    Michael A. Nimmer
    This paper presents case studies in the application of in-situ ozone sparging to remediate petroleum-contaminated groundwater. This technology was developed and installed due to shortcomings with other conventional remedial technologies evaluated for groundwater remediation. The main objective of this study was to develop a system to supply ozone to the groundwater aquifer and to evaluate the system performance in the field. Three different applications were evaluated for this study, all containing petroleum-contaminated groundwater. The ozone sparging system consists of an air compressor, an ozone generator, a programmable logic controller, and associated gauges and controls. The mixture of air and ozone is injected into the groundwater aquifer through microporous sparge points contained in various sparge well designs. The initial results from the three applications demonstrated that ozone sparging is a viable alternative for remediating petroleum-contaminated groundwater. Significant reductions in petroleum constituents were observed shortly after system start-up at all sites. During the one to two years of operation at the three sites, a number of maintenance items were identified; these items were addressed by modifications to the system design and operation. A long-term evaluation of the system operation has not yet been performed. [source]


    New passive filter design for neutral current cancellation in balanced 3-phase 4-wire non-linear distribution systems

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 2 2003
    E. F. El-Saadany
    Different types of non-linear loads are expected to proliferate in the distribution system, causing the harmonic distortion levels on these systems to increase. Third-harmonic and all other triplen-harmonic currents have little diversity among different loads and add in the neutral. The neutral current in a low-voltage three-phase four-wire distribution system is therefore expected to increase, resulting in significant problems. The factors that affect the neutral current magnitude as well as the phase current distortion are investigated. A new technique, namely a reactance one-port compensator, is presented in order to cancel the neutral current and improve the overall system distortion levels. Attenuation and diversity effects are considered in this study. The analysis uses the electromagnetic transients program (EMTP) to model the loads as well as the overall system. The proposed filter drastically improves the system performance and substantially reduces the neutral current. [source]
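
    The mechanism behind the neutral-current problem is that balanced fundamental currents cancel in the neutral while third-harmonic (and other triplen) components are co-phasal and add arithmetically. A short numerical sketch of that effect in Python (the amplitudes are arbitrary example values, and the proposed compensator itself is not modelled):

        import numpy as np

        f1 = 50.0                                  # fundamental frequency, Hz
        t = np.linspace(0.0, 0.04, 4000)           # two cycles
        I1, I3 = 10.0, 3.0                         # per-phase amplitudes: fundamental and 3rd harmonic

        def phase_current(shift_deg):
            shift = np.deg2rad(shift_deg)
            # The 3rd-harmonic terms of the three phases are co-phasal: 3 x (+/-120 deg) is a multiple of 360 deg.
            return I1 * np.sin(2 * np.pi * f1 * t - shift) + I3 * np.sin(3 * (2 * np.pi * f1 * t - shift))

        i_neutral = sum(phase_current(s) for s in (0.0, 120.0, 240.0))
        print(f"peak phase current  : {np.max(np.abs(phase_current(0.0))):.2f} A")
        print(f"peak neutral current: {np.max(np.abs(i_neutral)):.2f} A (about 3 x I3 = {3 * I3:.1f} A)")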


    Multiuser detection techniques with decision statistics combining for layered space-time coded CDMA systems

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 4 2006
    Slavica Marinkovic
    This paper considers a low-complexity iterative multiuser detection/decoding algorithm in single-user layered space-time coded (LST) systems and LST coded code division multiple access (CDMA) systems. We concentrate on the iterative multiuser receiver based on parallel interference cancellation (PIC) and compare it to the iterative PIC with minimum mean square error (MMSE) detection, as these two approaches seem to be most efficient in meeting the performance-complexity trade-off required by practical systems. In iterative PIC structures, a decision statistics bias severely limits the system performance for a large number of multiple access (MA) or multiple-input multiple-output (MIMO) interferers. A decision statistics combining (DSC) method, originally proposed for iterative PIC receivers in CDMA systems, is used to minimize the bias effect in space-time coded systems for iterative PIC receivers. Significant performance improvements have been confirmed with the iterative PIC receiver with DSC (PIC-DSC) relative to standard iterative PIC receivers in LST systems for both flat and frequency-selective fading channels. This advantage is retained in layered coded CDMA systems as well. The proposed iterative PIC-DSC detector approaches the performance of the much more complex iterative PIC-MMSE receiver. Copyright © 2006 AEIT. [source]


    Optimum adaptive OFDM systems

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2003
    Lorenzo Piazzo
    When Orthogonal Frequency Division Multiplexing (OFDM) is used to transmit information over a frequency selective channel, it is convenient to vary the power and the number of bits allocated to each subcarrier in order to optimize the system performance. In this paper, the three classical problems of transmission power minimization, error rate minimization and throughput maximization are investigated in a unified manner. The relations existing among these three problems are clarified and a precise definition of optimum system is given. A general and rigorous way to extend the solution of any of the three problems in order to obtain the solution of the other two is presented. This result is used to devise an efficient algorithm for the error rate minimization. Copyright © 2003 AEI. [source]
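
    A standard way to see the power-minimization problem in practice is greedy (Hughes-Hartogs style) bit loading: bits are added one at a time to whichever subcarrier can carry the next bit for the least extra power. The Python sketch below illustrates that classical routine under an assumed SNR-gap model; it is not the optimum algorithm derived in the paper.

        import math

        def greedy_bit_loading(channel_gains, target_bits, gap_db=6.0, max_bits=8):
            """Load `target_bits` onto the subcarriers so that total transmit power is (greedily) minimized."""
            gap = 10 ** (gap_db / 10)

            def power(bits, gain):                 # power needed to carry `bits` bits on one subcarrier
                return gap * (2 ** bits - 1) / gain

            bits = [0] * len(channel_gains)
            for _ in range(target_bits):
                costs = [power(b + 1, g) - power(b, g) if b < max_bits else math.inf
                         for b, g in zip(bits, channel_gains)]
                k = costs.index(min(costs))        # cheapest next bit
                bits[k] += 1
            total_power = sum(power(b, g) for b, g in zip(bits, channel_gains))
            return bits, total_power

        # Frequency-selective channel: per-subcarrier gains (arbitrary example values).
        gains = [8.0, 3.0, 1.0, 0.2, 6.0, 0.05]
        alloc, p = greedy_bit_loading(gains, target_bits=16)
        print("bits per subcarrier:", alloc, " total power:", round(p, 1))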


    Adaptive group detection for DS/CDMA systems over frequency-selective fading channels

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2003
    Stefano Buzzi
    In this paper we consider the problem of group detection for asynchronous Direct-Sequence Code Division Multiple Access (DS/CDMA) systems operating over frequency-selective fading channels. A two-stage near-far resistant detection structure is proposed. The first stage is a linear filter, aimed at suppressing the effect of the unwanted user signals, while the second stage is a non-linear block, implementing a maximum likelihood detection rule on the set of desired user signals. As to the linear stage, we consider both the Zero-Forcing (ZF) and the Minimum Mean Square Error (MMSE) approaches; in particular, based on the amount of prior knowledge on the interference parameters which is available to the receiver and on the affordable computational complexity, we come up with several receiving structures, which trade system performance for complexity and needed channel state information. We also present adaptive implementations of these receivers, wherein only the parameters from the users to be decoded are assumed to be known. The case that the channel fading coefficients of the users to be decoded are not known a priori is also considered. In particular, based on the transmission of pilot signals, we adopt a least-squares criterion in order to obtain estimates of these coefficients. The result is thus a fully adaptive structure, which can be implemented with no prior information on the interfering signals and on the channel state. As to the performance assessment, the new receivers are shown to be near-far resistant, and simulation results confirm their superiority with respect to previously derived detection structures. Copyright © 2003 AEI. [source]
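
    The two-stage idea can be illustrated on a toy synchronous CDMA model in Python: a linear MMSE first stage estimates and cancels the unwanted users, and an exhaustive maximum-likelihood second stage decides the bits of the small desired group. Everything below (synchronous model, random codes, known amplitudes and channel) is a simplifying assumption for illustration; the paper treats the asynchronous, frequency-selective case with adaptive and channel-estimating variants.

        import itertools
        import numpy as np

        rng = np.random.default_rng(3)
        N, K, desired = 16, 6, [0, 1]                             # chips per symbol, users, desired group
        S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)     # random spreading codes
        b = rng.choice([-1.0, 1.0], size=K)                       # transmitted bits
        sigma = 0.3
        r = S @ b + sigma * rng.standard_normal(N)                # received chip vector

        # Stage 1: linear MMSE estimates of all users, used to cancel the unwanted ones.
        mmse = np.linalg.solve(S.T @ S + sigma**2 * np.eye(K), S.T @ r)
        others = [k for k in range(K) if k not in desired]
        r_clean = r - S[:, others] @ mmse[others]

        # Stage 2: exhaustive maximum-likelihood search over the small desired group.
        best = min(itertools.product([-1.0, 1.0], repeat=len(desired)),
                   key=lambda cand: np.sum((r_clean - S[:, desired] @ np.array(cand)) ** 2))
        print("true bits   :", b[desired])
        print("decided bits:", np.array(best))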


    Partial imaginary precursor cancelling in DFE for BPSK and GMSK modulations

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2001
    Dan Raphaeli
    This paper examines the effect of partial imaginary precursor cancelling on the performance of a system which uses a DFE and employs a one dimensional modulation. As opposed to QAM, BPSK or GMSK (after proper manipulations) require minimization of the mean square (MS) of the error real part only. Therefore, a lower MSE is achieved if the DFE coefficients are computed taking this fact into account, by trying to minimize as few imaginary precursors as possible. However, this introduces a delay by the imaginary part of the feedback filter (FBF) which is no longer causal. In this paper we investigate the influence of the length of imaginary precursor cancelling on system performance, and its effect on carrier phase tracking. The equations for the computation of the DFE coefficients for none or partially cancelled imaginary precursors are derived and performance on GMSK channels is presented. [source]


    Signal Dependence of Cross-Phase Modulation in WDM Systems

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 2 2000
    Lutz Rapp
    In intensity modulated direct detection wavelength division multiplexing (WDM) systems, the effect of cross-phase modulation (XPM) combined with group-velocity dispersion causes signal distortion, which depends on the transmitted signals. The influence of the mutual dependence of these signals on the resulting degradation of the system performance is investigated theoretically and by means of simulations. Considering the propagation of two digital signals, the eye-closure penalty is determined for different bit patterns and consequences for system design are pointed out. An approximation method is described in order to provide a better understanding of the signal dependence of XPM. Finally, a technique reducing the impact of XPM on data transmission in WDM systems is proposed. [source]


    Environmental and economic analysis of the fully integrated biorefinery

    GCB BIOENERGY, Issue 5 2009
    ELIZABETH D. SENDICH
    Abstract Cellulosic biofuel systems have the potential to significantly reduce the environmental impact of the world's transportation energy requirements. However, realizing this potential will require systems level thinking and scale integration. Until now, we have lacked modeling tools for studying the behavior of integrated cellulosic biofuel systems. In this paper, we describe a new research tool, the Biorefinery and Farm Integration Tool (BFIT) in which the production of fuel ethanol from cellulosic biomass is integrated with crop and animal (agricultural) production models. Uniting these three subsystems in a single combined model has allowed, for the first time, basic environmental and economic analysis of biomass production, possible secondary products, fertilizer production, and bioenergy production across various regions of the United States. Using BFIT, we simulate cellulosic ethanol production embedded in realistic agricultural landscapes in nine locations under a collection of farm management scenarios. This combined modeling approach permits analysis of economic profitability and highlights key areas for environmental improvement. These results show the advantages of introducing integrated biorefinery systems within agricultural landscapes. This is particularly true in the Midwest, which our results suggest is a good setting for the cellulosic ethanol industry. Specifically, results show that inclusion of cellulosic biofuel systems into existing agriculture enhances farm economics and reduces total landscape emissions. Model results also indicate a limited ethanol price effect from increased biomass transportation distance. Sensitivity analysis using BFIT revealed those variables having the strongest effects on the overall system performance, namely: biorefinery size, switchgrass yield, and biomass farm gate price. [source]


    Gravity gradiometer systems - advances and challenges

    GEOPHYSICAL PROSPECTING, Issue 4 2009
    Daniel DiFrancesco
    ABSTRACT The past few years have witnessed significant advances and unparalleled interest in gravity gradiometer instrument technology as well as new deployment scenarios for various applications. Gravity gradiometry is now routinely considered as a viable component for resource exploration activities as well as being deployed for global information gathering. Since the introduction of the torsion balance in the 1890s, it has been recognized that gravity gradient information is valuable, yet difficult and time-consuming to obtain. The recent acceptance and routine use of airborne gravity gradiometry for exploration has inspired many new technology developments. This paper summarizes advances in gravity gradient sensor development and also looks at deployment scenarios and gradiometer systems that have been successfully fielded. With projected improved system performance on the horizon, new challenges will also come to the forefront. Included in these challenges are aspects of instrument and system intrinsic noise, vehicle dynamic noise, terrain noise, geologic noise and other noise sources. Each of these aspects is briefly reviewed herein and recommendations for improvements are presented. [source]


    Funnel-and-Gate Performance in a Moderately Heterogeneous Flow Domain

    GROUND WATER MONITORING & REMEDIATION, Issue 3 2001
    Lacrecia C. Bilbrey
    The funnel-and-gate ground water remediation technology (Starr and Cherry 1994) has received increased attention and application as an in situ alternative to the typical pump-and-treat system. Understanding the effects of heterogeneity on system performance can mean the difference between a successful remediation project and one that fails to meet its cleanup goals. In an attempt to characterize and quantify the effects of heterogeneity on funnel-and-gate system performance, a numerical modeling study of 15 simulated heterogeneous flow domains was conducted. Each realization was tested to determine if the predicted capture width met the capture width expected for a homogeneous flow domain with the same bulk properties. This study revealed that the capture width of the funnel-and-gate system varied significantly with the level of heterogeneity of the aquifer. Two possible remedies were investigated for bringing systems with less than acceptable capture widths to acceptable levels of performance. First, it was determined that enlarging the funnel and gate via a factor of safety applied to the design capture width could compensate for the capture width variation in the heterogeneous flow domains. In addition, it was shown that the use of a pumping well downstream of the funnel and gate could compensate for the effects of aquifer heterogeneity on the funnel-and-gate capture width. However, if a pumping well is placed downstream of the funnel and gate to control the hydraulic gradient through the gate, consideration should be given to the gate residence time in relation to the geochemistry of the contaminant removal or destruction process in the gate. [source]


    A critique of the World Health Organisation's evaluation of health system performance

    HEALTH ECONOMICS, Issue 5 2003
    Jeff Richardson
    Abstract The World Health Organisation's (WHO) approach to the measurement of health system efficiency is briefly described. Four arguments are then presented. First, equity of finance should not be a criterion for the evaluation of a health system and, more generally, the same objectives and importance weights should not be imposed upon all countries. Secondly, the numerical values of the importance weights do not reflect their true importance in the country rankings. Thirdly, the model for combining the different objectives into a single index of system performance is problematical, and alternative models are shown to alter system rankings. The WHO statistical analysis is replicated and used to support the fourth argument, which is that, contrary to the authors' assertion, their methods cannot separate true inefficiency from random error. The procedure is also subject to omitted variable bias. The econometric model for all countries has very poor predictive power for the subset of OECD countries and it is outperformed by two simpler algorithms. Country rankings based upon the model are correspondingly unreliable. It is concluded that, despite these problems, the study is a landmark in the evolution of system evaluation, but one which requires significant revision. Copyright © 2002 John Wiley & Sons, Ltd. [source]
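
    The third argument concerns the aggregation model: overall attainment in the WHO report is a fixed-weight linear composite of five goal scores, and the resulting ranking is sensitive to both the weights and the functional form. A small Python sketch of that sensitivity, using the WHO weights but invented country scores:

        # Attainment scores on the five WHO goals, scaled 0-100. Countries and scores are invented.
        countries = {
            "A": [90, 70, 80, 75, 60],
            "B": [80, 85, 70, 80, 90],
            "C": [85, 80, 75, 70, 75],
        }
        # WHO weights: health level, health distribution, responsiveness level,
        # responsiveness distribution, fairness of financial contribution.
        who_weights = [0.25, 0.25, 0.125, 0.125, 0.25]

        def composite(scores, weights):
            """Linear additive composite index of goal attainment."""
            return sum(w * s for w, s in zip(weights, scores))

        def rank(weights):
            return sorted(countries, key=lambda c: composite(countries[c], weights), reverse=True)

        print("WHO weights         :", rank(who_weights))
        print("financing goal drop :", rank([0.3, 0.3, 0.2, 0.2, 0.0]))   # the ranking changes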


    Evaluation of a job-aiding tool in inspection systems

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 1 2008
    Edem Tetteh
    Visual inspection plays a very important role in ensuring quality in the manufacturing and service industries. Two determinants of inspection performance are visual search and decision-making. Improvement in any one of the components will have an impact on system performance. Job-aids, accompanied by training, have proven to be effective in enhancing accuracy and reducing search time in visual inspection systems. This article aims to investigate the effects of search strategy along with task complexity and pacing on inspection performance using a job-aiding tool. To facilitate the experiments, an enhanced job-aiding tool in a simulated visual inspection environment was developed. This tool enables an inspector to track his or her search path in visual inspection systems. A pilot study and two experiments were conducted using this tool. The pilot study examined the effectiveness of the job-aiding tool. The first experiment studied the effect of search strategy and task complexity on inspection system performance and the second experiment studied the impact of search strategy, task complexity, and pacing on system performance. Results from this research can be used to better design an inspection system. © 2007 Wiley Periodicals, Inc. [source]


    Retracted: The manufacturing enterprise diagnostic instrument: A tool for assessment of enterprise system manufacturers

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 6 2007
    Ash Genaidy
    Small- to medium-size manufacturers constitute the bulk of the U.S. manufacturing sector and are in dire need of a tool that will help them face the challenges of globalization and the increasing costs of manufacturing in the United States. The objectives were to develop and validate a diagnostic tool for manufacturing firms to evaluate their enterprise system performance, to eliminate trial-and-error decisions, and to allow quick and informed actions. A pilot version was developed and tested in a small manufacturer. A revised version was then developed and further tested in another firm. Testing of the tool's pilot version provided invaluable feedback for developing the main tool. When tested in a small manufacturing firm, the results demonstrated management's misalignment and identified the root causes hindering performance. The tool demonstrated a quick and effective way to evaluate the manufacturing system's performance and to develop an aligned improvement action plan to guide the managerial team in devising a winning business strategy. © 2007 Wiley Periodicals, Inc. Hum Factors Man 17: 521-574, 2007. [source]


    A case study of serial-flow car disassembly: Ergonomics, productivity and potential system performance

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 4 2007
    Karolina Kazmierczak
    A recent European Union (EU) directive increases demands on car recycling. Thus, present craft-type disassembly systems need reconfiguration in order to be more efficient. A line-based system tested in the Netherlands was investigated regarding system performance and ergonomics. The system had reduced performance compared to the design specifications due to such factors as system losses, operator inexperience, and teamwork deficiencies. Operators' peak low back loads were lower than in Swedish craft-type systems. Direct, value-adding work comprised 30% of the workday, compared to about 70% in the Swedish manufacturing industry. Alternative system configurations were simulated and discussed using a novel combination of flow and human simulations. For example, a smaller variation in cycle time implied higher output in number of cars per week and larger operator cumulative loading on the low back. In all models the cumulative load was high compared to the loads previously recorded in assembly work. © 2007 Wiley Periodicals, Inc. Hum Factors Man 17: 331-351, 2007. [source]


    Evaluation of best system performance: Human, automated, and hybrid inspection systems

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 2 2003
    Xiaochun Jiang
    Recently, 100% inspection with automated systems has seen more frequent application than traditional sampling inspection with human inspectors. Nevertheless, humans still outperform machines in most attribute inspection tasks. Because neither humans nor automation alone can achieve superior inspection system performance, hybrid inspection systems where humans work cooperatively with machines merit study. In response to this situation, this research was conducted to evaluate the following three inspection systems: (1) a human inspection system, (2) a computer search/human decision-making inspection system, and (3) a human/computer share search/decision-making inspection system. Results from this study showed that the human/computer share search/decision-making system achieved the best system performance, suggesting that both should be used in inspection tasks rather than either alone. Furthermore, this study looked at the interaction between human inspectors and computers, specifically the effect of system response bias on inspection quality performance. These results revealed that the risky system was the best in terms of accuracy measures. Although this study demonstrated how recent advances in computer technology have modified previously prescribed notions about function allocation alternatives in a hybrid inspection environment, the adaptability of humans was again demonstrated, indicating that they will continue to play a vital role in future hybrid systems. © 2003 Wiley Periodicals, Inc. Hum Factors Man 13: 137-152, 2003. [source]


    The performance of constructed wetlands for wastewater treatment: a case study of Splash wetland in Nairobi, Kenya

    HYDROLOGICAL PROCESSES, Issue 17 2001
    Daniel Muasya Nzengy'a
    Abstract The performance of a constructed wetland for wastewater treatment was examined for four months (December 1995 to March 1996). The study area, hereafter referred to as the Splash wetland, is approximately 0·5 ha and is located in the southern part of Nairobi city. The Splash wetland continuously receives domestic sewage from two busy restaurants. Treated wastewater is recycled for re-use for various purposes in the restaurants. Both wet and dry season data were analysed with a view to determining the impact of seasonal variation on the system performance. The physical and chemical properties of water were measured at a common intake, at a series of seven other points established along the wetland gradient, and at the outlet where the water is collected and pumped for re-use at the restaurants. The physico-chemical characteristics of the wastewater changed significantly as the wastewater flowed through the respective wetland cells. A comparison of wastewater influent versus effluent from the wetland revealed the system's apparent success in water treatment, especially in pH modification and removal of suspended solids, organic load and nutrients: mean influent pH = 5·7 ± 0·5, mean effluent pH = 7·7 ± 0·3; mean influent BOD5 = 1603·0 ± 397·6 mg/l, mean effluent BOD5 = 15·1 ± 2·5 mg/l; mean influent COD = 3749·8 ± 206·8 mg/l, mean effluent COD = 95·6 ± 7·2 mg/l; mean influent TSS = 195·4 ± 58·7 mg/l, mean effluent TSS = 4·7 ± 1·9 mg/l. As the wastewater flowed through the wetland system, dissolved free and saline ammonia (NH4+) decreased from 14·6 ± 4·1 mg/l to undetectable levels at the outlet. Dissolved oxygen increased progressively through the wetland system. Analysis of the available data did not reveal temporal variation in the system's performance. However, significant spatial variation was evident as the wetland removed most of the common pollutants and considerably improved the quality of the water, making it safe for re-use at the restaurants. Copyright © 2001 John Wiley & Sons, Ltd. [source]
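
    The treatment performance implied by the influent and effluent means reported above can be summarized as removal efficiencies; a short Python calculation using the concentrations quoted in the abstract:

        def removal_efficiency(influent, effluent):
            """Percentage of a pollutant removed between the wetland inlet and outlet."""
            return 100.0 * (influent - effluent) / influent

        # Mean influent/effluent concentrations reported in the abstract (mg/l).
        for name, c_in, c_out in [("BOD5", 1603.0, 15.1), ("COD", 3749.8, 95.6), ("TSS", 195.4, 4.7)]:
            print(f"{name}: {removal_efficiency(c_in, c_out):.1f}% removed")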


    Design of a near-optimal adaptive filter in digital signal processor for active noise control

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2008
    S. M. Yang
    Abstract Adaptive filter has been applied in adaptive feedback and feedforward control systems, where the filter dimension is often determined by trial-and-error. The controller design based on a near-optimal adaptive filter in digital signal processor (DSP) is developed in this paper for real-time applications. The design integrates the adaptive filter and the experimental design such that their advantages in stability and robustness can be combined. The near-optimal set of controller parameters, including the sampling rate, the dimension of system identification model, the dimension (order) of adaptive controller in the form of an FIR filter, and the convergence rate of adaptation is shown to achieve the best possible system performance. In addition, the sensitivity of each design parameter can be determined by analysis of means and analysis of variance. Effectiveness of the adaptive controller on a DSP is validated by an active noise control experiment. Copyright © 2007 John Wiley & Sons, Ltd. [source]