System Design (system + design)


Kinds of System Design

  • control system design


  • Selected Abstracts


    EFFECTS OF RANDOM SHIFTS OF TESTING EQUIPMENT ON PROCESS CONTROL SYSTEM DESIGN AND SELECTION OF PROCESS CONTROL POLICIES

    PRODUCTION AND OPERATIONS MANAGEMENT, Issue 2 2002
    JIE DING
    This paper studies issues associated with designing process control systems when the testing equipment is subjected to random shifts. We consider a production process with two states: in control and out of control. The process may shift randomly to the out-of-control state over time. The process is monitored by periodically sampling finished items from the process. The equipment used to test sampled items is also assumed to have two states and may shift randomly during the testing process. We formulate a cost model for finding the optimal process control policy that minimizes the expected unit time cost. Numerical results show that shifts of the testing equipment may significantly affect the performance of a process control policy. We also study the effects of the testing equipment's shifts on the selection of process control policies. [source]


    A Screening Model for Injection-Extraction Treatment Well Recirculation System Design

    GROUND WATER MONITORING & REMEDIATION, Issue 4 2008
    Monica Y. Wu
    Implementation of injection-extraction treatment well pairs for in situ, in-well, or on-site remediation may be facilitated by development and application of modeling tools to aid in hydraulic design and remediation technology selection. In this study, complex potential theory was employed to derive a simple one-step design equation and related type curves that permit the calculation of the extraction well capture zone and the hydraulic recirculation between an injection and extraction well pair oriented perpendicular to regional flow. This equation may be used to aid in the design of traditional fully screened injection-extraction wells as well as innovative tandem recirculating wells when an adequate geologic barrier to vertical ground water flow exists. Simplified models describing in situ bioremediation, in-well vapor stripping, and in-well metal reactor treatment efficiency were adapted from the literature and coupled with the hydraulic design equation presented here. Equations and type curves that combine the remediation treatment efficiency with the hydraulic design equation are presented to simulate overall system treatment efficiency under various conditions. The combined model is applied to predict performance of in situ bioremediation and in-well palladium reactor designs that were previously described in the literature. This model is expected to aid practitioners in treatment system screening and evaluation. [source]
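
    The hydraulic part of such a design equation can be sketched with complex potential theory: superpose a uniform regional flow on an injection-extraction doublet and locate the stagnation points that bound the capture and recirculation zones. The short Python sketch below does this for an assumed well pair perpendicular to regional flow; the symbols (q0, Q, d) and all values are illustrative and are not the paper's notation or type curves.

    ```python
    # Minimal sketch: complex potential of a uniform regional flow superposed on an
    # injection-extraction well pair oriented perpendicular to the flow (per unit
    # aquifer thickness). Parameter values and symbols are illustrative assumptions.
    import numpy as np

    q0 = 0.1                     # regional specific discharge in +x (m/d)
    Q = 5.0                      # well pumping/injection rate per unit thickness (m^2/d)
    d = 10.0                     # well spacing (m)
    z_ext = 0.0 - 1j * d / 2     # extraction well location
    z_inj = 0.0 + 1j * d / 2     # injection well location

    def omega(z):
        """Complex potential: uniform flow + extraction well (sink) + injection well (source)."""
        return (-q0 * z
                + Q / (2 * np.pi) * np.log(z - z_ext)
                - Q / (2 * np.pi) * np.log(z - z_inj))

    # Stagnation points satisfy d(omega)/dz = 0; clearing denominators gives a quadratic in z.
    stag = np.roots([-q0, 0.0, -q0 * d**2 / 4 - 1j * Q * d / (2 * np.pi)])

    # The streamlines through the stagnation points (contours of Im(omega) at these values)
    # bound the extraction well's capture zone and the recirculation cell between the wells.
    print("stagnation points       :", np.round(stag, 2))
    print("bounding streamline psi :", np.round(np.imag(omega(stag)), 3))
    ```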


    Applying XBRL in an Accounting Information System Design Using the REA Approach: An Instructional Case

    ACCOUNTING PERSPECTIVES, Issue 1 2010
    JACOB PENG
    Keywords: relational database; instance document; REA modeling; XBRL. Abstract The Church in Somewhere (CIS) is a small community church which uses an Excel spreadsheet to keep its financial records. The church administrator is considering moving from a spreadsheet accounting system to a relational database system that can easily be expanded to include more information in the future. In this paper we examine the transformation process in this hypothetical case by following a resource-event-agent (REA) modeling paradigm to create a database. We then link the REA model to financial reporting using Microsoft Access. In addition, using the financial report in the database, students prepare and validate an eXtensible Business Reporting Language (XBRL) document for CIS. Instead of applying the complex U.S. Generally Accepted Accounting Principles (GAAP) Taxonomies, Release 2009, the case uses a dedicated CIS Taxonomy to complete the mapping and tagging processes. [source]
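
    For readers unfamiliar with REA modeling, the sketch below shows what a minimal relational schema of resources, events, and agents might look like; the table and column names are hypothetical illustrations, not the actual CIS database or the case's taxonomy.

    ```python
    # A hypothetical minimal REA (resource-event-agent) schema in SQLite, illustrating
    # the modeling step described in the case. Names and values are illustrative only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE resource (resource_id INTEGER PRIMARY KEY, name TEXT);          -- e.g. Cash
    CREATE TABLE agent    (agent_id    INTEGER PRIMARY KEY, name TEXT, role TEXT);
    CREATE TABLE event    (event_id    INTEGER PRIMARY KEY, event_type TEXT,     -- e.g. Cash Receipt
                           event_date  TEXT, amount REAL,
                           resource_id INTEGER REFERENCES resource(resource_id),
                           internal_agent_id INTEGER REFERENCES agent(agent_id),
                           external_agent_id INTEGER REFERENCES agent(agent_id));
    -- Duality links pair increment events (receipts) with decrement events (disbursements).
    CREATE TABLE duality  (increment_event_id INTEGER REFERENCES event(event_id),
                           decrement_event_id INTEGER REFERENCES event(event_id));
    """)

    conn.execute("INSERT INTO resource VALUES (1, 'Cash')")
    conn.execute("INSERT INTO agent VALUES (1, 'Church Administrator', 'internal')")
    conn.execute("INSERT INTO agent VALUES (2, 'Member Donor', 'external')")
    conn.execute("INSERT INTO event VALUES (1, 'Cash Receipt', '2010-01-03', 250.0, 1, 1, 2)")

    # A query like this feeds the financial report that is later tagged in XBRL.
    total = conn.execute(
        "SELECT SUM(amount) FROM event WHERE event_type = 'Cash Receipt'").fetchone()[0]
    print("Total cash receipts:", total)
    ```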


    Performance Measure Properties and Incentive System Design

    INDUSTRIAL RELATIONS, Issue 2 2009
    MICHAEL J. GIBBS
    We analyze effects of performance measure properties (controllable and uncontrollable risk, distortion, and manipulation) on incentive plan design, using data from auto dealership manager incentive systems. Dealerships put the most weight on measures that are "better" with respect to these properties. Additional measures are more likely to be used for a second or third bonus if they can mitigate distortion or manipulation in the first performance measure. Implicit incentives are used to provide ex post evaluation, to motivate the employee to use controllable risk on behalf of the firm, and to deter manipulation of performance measures. Overall, our results indicate that firms use incentive systems of multiple performance measures, incentive instruments, and implicit evaluation and rewards as a response to weaknesses in available performance measures. [source]


    System design in normative and actual practice: A comparative study of cognitive task allocation in advanced manufacturing systems

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 2 2004
    Sotiris Papantonopoulos
    The Human Factors Engineering approach to human-machine system design is based largely on normative design methods. This article suggests that the scope of Human Factors Engineering shall be extended to the descriptive study of system design in actual practice by the application of theoretical frameworks that emphasize the role of the system-design practitioner and organization in the design process. A comparative study of system design in normative and actual practice was conducted in the design of cognitive task allocation in a Flexible Manufacturing System (FMS) cell. The study showed that the designers' allocation decisions were influenced strongly by factors related to their own design practices, yet exogenous to the tasks to be allocated. Theoretical frameworks from Design Research were applied to illustrate differences between normative and actual practice of system design. © 2004 Wiley Periodicals, Inc. Hum Factors Man 14: 181-196, 2004. [source]


    A Framework for New Scholarship in Human Performance Technology

    PERFORMANCE IMPROVEMENT QUARTERLY, Issue 2 2006
    Thomas M. Schwen
    This article introduces a strategic argument and examples, in subsequent articles in this special issue, about sociocultural research opportunities for HPT practitioners and scholars. The authors take the view that recent criticisms of Instructional Systems Design have merit when considered from an organizational performance point of view. We see the problem as historic overuse of one theoretical perspective at a microlevel of theory and application. We argue that adding recent sociocultural perspectives and expanding the levels of theory to include groups and complex organizational structures will offer an opportunity for a more rigorous and diverse research agenda and create new insights for problem solving in practice. [source]


    A formalized approach for designing a P2P-based dynamic load balancing scheme

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2010
    Hengheng Xie
    Abstract Quality of service (QoS) is attracting more and more attention in many areas, including entertainment, emergency services, transaction services, and so on. Therefore, the study of QoS-aware systems is becoming an important research topic in the area of distributed systems. In terms of load balancing, most of the existing QoS-related load balancing algorithms focus on Routing Mechanism and Traffic Engineering. However, research on QoS-aware task scheduling and service migration is very limited. In this paper, we propose a task scheduling algorithm using dynamic QoS properties, and we develop a Genetic Algorithm-based Services Migration scheme aiming to optimize the performance of our proposed QoS-aware distributed service-based system. In order to verify the efficiency of our scheme, we implement a prototype of our algorithm using a P2P-based JXTA technique, and do an emulation test and a simulation test in order to analyze our proposed solution. We compare our service-migration-based algorithm with non-migration and non-load-balancing approaches, and find that our solution is much better than the other two in terms of QoS success rate. Furthermore, in order to provide more solid proofs of our research, we use DEVS to validate our system design. Copyright © 2010 John Wiley & Sons, Ltd. [source]
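
    To make the service-migration idea concrete, the toy sketch below uses a genetic algorithm to search for a placement of services on peers that minimizes load imbalance; the encoding, fitness function, and parameters are illustrative assumptions and do not reproduce the authors' QoS-aware scheme.

    ```python
    # Toy genetic-algorithm service placement: chromosomes map each service to a peer,
    # fitness is the variance of per-peer load. Everything here is an assumed example.
    import random

    random.seed(1)
    NUM_PEERS, NUM_SERVICES = 5, 20
    load = [random.uniform(1, 10) for _ in range(NUM_SERVICES)]   # per-service load

    def fitness(assign):
        """Lower is better: variance of per-peer load (imbalance)."""
        peer_load = [0.0] * NUM_PEERS
        for svc, peer in enumerate(assign):
            peer_load[peer] += load[svc]
        mean = sum(peer_load) / NUM_PEERS
        return sum((l - mean) ** 2 for l in peer_load) / NUM_PEERS

    def crossover(a, b):
        cut = random.randrange(1, NUM_SERVICES)
        return a[:cut] + b[cut:]

    def mutate(assign, rate=0.05):
        return [random.randrange(NUM_PEERS) if random.random() < rate else p for p in assign]

    pop = [[random.randrange(NUM_PEERS) for _ in range(NUM_SERVICES)] for _ in range(40)]
    for _ in range(100):                        # generations
        pop.sort(key=fitness)
        elite = pop[:10]                        # keep the best placements
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(30)]

    print("best imbalance (variance):", round(fitness(min(pop, key=fitness)), 3))
    ```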


    Broad Beam Ion Sources for Electrostatic Space Propulsion and Surface Modification Processes: From Roots to Present Applications

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 7 2007
    H. Neumann
    Abstract Ion thrusters or broad beam ion sources are widely used in electrostatic space propulsion and in high-end surface modification processes. A short historical review of the roots of electric space propulsion is given. In the following, we introduce the electrostatic ion thrusters and broad beam ion sources based on different plasma excitation principles and describe the similarities as well as the differences briefly. Furthermore, an overview of source plasma and ion beam characterisation methods is presented. Apart from that, a beam profile modelling strategy with the help of numerical trajectory codes as a basis for a special grid system design is outlined. This modelling represents the basis for the adaptation of a grid system to required technological demands. Examples of model validation demonstrate their reliability. One of the main challenges in improving ion beam technologies is the customisation of the ion beam properties, e.g. the ion current density profile, for specific demands. Methods of ex-situ and in-situ beam profile control are demonstrated. Examples of the use of ion beam technologies in space and on earth (the RIT-10 rescue mission of ESA's satellite Artemis, the RIT-22 for the BepiColombo mission, and the deposition of multilayer stacks for EUVL (Extreme Ultra Violet Lithography) mask blank application) are provided in order to illustrate the potential of plasma-based ion beam sources. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Baghouse system design based on economic optimization

    ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 4 2000
    Antonio C. Caputo
    In this paper a method is described for using economic optimization in the design of baghouse systems. That is, for a given emission control problem, the total filtration surface area, the overall pressure drop, fabric material effects, and the cleaning cycle frequency may all be evaluated simultaneously. In fact, as baghouse design parameters affect capital and operating expenses in interrelated and counteracting manners, a minimum total cost may be sought, defining the best arrangement of dust collection devices. With this in mind, detailed cost functions have been developed with the aim of providing an overall economic model. As a result, a discounted total annual cost has been obtained that may be minimized by allowing for optimal baghouse characterization. Finally, in order to highlight the capabilities of the proposed methodology, some optimized solutions are also presented, which consider the economic impact of both bag materials and dust properties. [source]
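
    The trade-off described above can be illustrated with a deliberately simplified cost model: annualized capital cost grows with cloth area while fan energy cost falls as the air-to-cloth ratio (and hence pressure drop) decreases. The cost functions and coefficients below are illustrative assumptions, not the paper's detailed economic model.

    ```python
    # Minimal sketch: search for the filtration area that minimizes total annual cost,
    # trading annualized capital cost against fan energy. All coefficients are assumed.
    import numpy as np

    Qg = 50.0                 # gas flow rate (m^3/s)
    hours = 8000.0            # operating hours per year
    energy_cost = 0.10        # $/kWh
    fan_eff = 0.65            # fan/motor efficiency
    capital_per_m2 = 60.0     # annualized capital + bag cost ($/m^2/yr), assumed
    K1, K2 = 500.0, 8.0e4     # fabric and dust-cake resistance coefficients, assumed

    area = np.linspace(500.0, 5000.0, 200)      # candidate total cloth areas (m^2)
    V = Qg / area                               # air-to-cloth ratio (m/s)
    dP = K1 * V + K2 * V**2                     # simplified pressure drop (Pa)
    fan_kW = Qg * dP / fan_eff / 1000.0         # fan power (kW)
    annual_cost = capital_per_m2 * area + fan_kW * hours * energy_cost

    best = int(np.argmin(annual_cost))
    print(f"optimal area ~ {area[best]:.0f} m^2, dP ~ {dP[best]:.0f} Pa, "
          f"cost ~ ${annual_cost[best]:,.0f}/yr")
    ```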


    In-situ ozonation of contaminated groundwater

    ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 3 2000
    Michael A. Nimmer
    This paper presents case studies in the application of in situ ozone sparging to remediate petroleum-contaminated groundwater. This technology was developed and installed due to shortcomings with other conventional remedial technologies evaluated for groundwater remediation. The main objective of this study was to develop a system to supply ozone to the groundwater aquifer and to evaluate the system performance in the field. Three different applications were evaluated for this study, all containing petroleum-contaminated groundwater. The ozone sparging system consists of an air compressor, ozone generator, a programmable logic controller, and associated gauges and controls. The mixture of air and ozone is injected into the groundwater aquifer through microporous sparge points contained in various sparge well designs. The initial results from the three applications demonstrated that ozone sparging is a viable alternative to remediate petroleum-contaminated groundwater. Significant reductions in petroleum constituents were observed shortly after system start-up at all sites. During the one to two years of operation at the three sites, a number of maintenance items were identified; these items were addressed by modifications to the system design and operation. A long-term evaluation of the system operation has not yet been performed. [source]


    Enhanced system design for download and streaming services using Raptor codes

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 2 2009
    Tiago Gasiba
    Raptor codes have been recently standardised by the 3rd Generation Partnership Project (3GPP) to be used in the application layer (AL) for multimedia broadcast and multicast services (MBMS), including download delivery and streaming delivery. Furthermore, digital video broadcast (DVB) has also recommended the inclusion of these Raptor codes for IP-datacast services. In this paper, enhancements of the system and receiver design using Raptor codes are studied, namely the permeable layer receiver (PLR) and the individual post-repair mechanism. With the PLR, partial information that is ignored in a conventional receiver is passed from the lower layer to the higher layer. We show how a practical and efficient implementation of the Raptor decoder as a PLR can be realized, which not only achieves large performance gains but does so at an affordably low decoding complexity. Whereas the PLR is employed for enhancing both download and streaming services, the post-repair aims at guaranteeing reliable download delivery when a feedback channel is available. We propose here two efficient post-repair algorithms which fully exploit the properties of the Raptor codes. One finds a minimum set of source symbols to be requested in the post-delivery phase, and the other finds a sufficient number of consecutive repair symbols. Selected simulations verify the good performance of the proposed techniques. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Signal Dependence of Cross-Phase Modulation in WDM Systems

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 2 2000
    Lutz Rapp
    In intensity modulated direct detection wavelength division multiplexing (WDM) systems, the effect of cross-phase modulation (XPM) combined with group-velocity dispersion causes signal distortion, which depends on the transmitted signals. The influence of the mutual dependence of these signals on the resulting degradation of the system performance is investigated theoretically and by means of simulations. Considering the propagation of two digital signals, the eye-closure penalty is determined for different bit patterns and consequences for system design are pointed out. An approximation method is described in order to provide a better understanding of the signal dependence of XPM. Finally, a technique reducing the impact of XPM on data transmission in WDM systems is proposed. [source]


    Numerical Modeling of Unsaturated Flow in Wastewater Soil Absorption Systems

    GROUND WATER MONITORING & REMEDIATION, Issue 2 2003
    Deborah N. Huntzinger Beach
    It is common practice in the United States to use wastewater soil absorption systems (WSAS) to treat domestic wastewater. WSAS are expected to provide efficient, long-term removal of wastewater contaminants prior to ground water recharge. Soil clogging at the infiltrative surface of WSAS occurs due to the accumulation of suspended solids, organic matter, and chemical precipitates during continued wastewater infiltration. This clogging zone (CZ) creates an impedance to flow, restricting the hydraulic conductivity and rate of infiltration. A certain degree of clogging may improve the treatment of wastewater by enhancing purification processes, in part because unsaturated flow is induced and residence times are significantly increased. However, if clogging becomes excessive, the wastewater pond height at the infiltrative surface can rise to a level where system failure occurs. The numerical model HYDRUS-2D is used to simulate unsaturated flow within WSAS to better understand the effect of CZs on unsaturated flow behavior and hydraulic retention times in sandy and silty soil. The simulations indicate that sand-based WSAS with mature CZs are characterized by a more widely distributed flow regime and longer hydraulic retention times. The impact of clogging on water flow within the silt is not as substantial. For sand, increasing the hydraulic resistance of the CZ by a factor of three to four requires an increase in the pond height by as much as a factor of five to achieve the same wastewater loading. Because the degree of CZ resistance directly influences the pond height within a system, understanding the influence of the CZ on flow regimes in WSAS is critical in optimizing system design to achieve the desired pollutant-treatment efficiency and to prolong system life. [source]
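
    The coupling between clogging-zone resistance and pond height can be seen from a one-dimensional Darcy calculation across a thin clogging zone; the numbers below are illustrative assumptions, and the sketch is not a substitute for the HYDRUS-2D simulations reported in the paper.

    ```python
    # Back-of-the-envelope sketch (not the HYDRUS-2D model): steady Darcy flow through
    # a thin clogging zone (CZ) of resistance R = L_cz / K_cz. Values are assumed.
    L_cz = 0.02            # clogging zone thickness (m)
    h_sub = -0.30          # pressure head just below the CZ (m), unsaturated soil
    q_target = 0.05        # design wastewater loading (m/day)

    def pond_height(R):
        """Pond height (m) needed to push q_target through a CZ of resistance R (days)."""
        # Darcy across the CZ: q = (H_pond + L_cz - h_sub) / R, solved for H_pond.
        return q_target * R - L_cz + h_sub

    R_young, R_mature = 10.0, 35.0          # CZ resistance (days); mature ~3.5x young
    print("pond height, young CZ :", round(pond_height(R_young), 3), "m")
    print("pond height, mature CZ:", round(pond_height(R_mature), 3), "m")
    # A modest increase in CZ resistance demands a disproportionately higher pond,
    # consistent with the trend reported in the abstract.
    ```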


    Multilevel Analysis of the Chronic Care Model and 5A Services for Treating Tobacco Use in Urban Primary Care Clinics

    HEALTH SERVICES RESEARCH, Issue 1 2009
    Dorothy Y. Hung
    Objective. To examine the chronic care model (CCM) as a framework for improving provider delivery of 5A tobacco cessation services. Methods. Cross-sectional surveys were used to obtain data from 497 health care providers in 60 primary care clinics serving low-income patients in New York City. A hierarchical generalized linear modeling approach to ordinal regression was used to estimate the probability of full 5A service delivery, adjusting for provider covariates and clustering effects. We examined associations between provider delivery of 5A services, clinic implementation of CCM elements tailored for treating tobacco use, and the degree of CCM integration in clinics. Principal Findings. Providers practicing in clinics with enhanced delivery system design, clinical information systems, and self-management support for cessation were 2.04 to 5.62 times more likely to perform all 5A services (p < .05). CCM integration in clinics was also positively associated with 5As delivery. Compared with none, implementation of one to six CCM elements corresponded with 3.69 to 30.9 times greater odds of providers delivering the full spectrum of 5As (p < .01). Conclusions. Findings suggest that the CCM facilitates provider adherence to the Public Health Service 5A clinical guideline. Achieving the full benefits of systems change may require synergistic adoption of all model components. [source]


    From generative fit to generative capacity: exploring an emerging dimension of information systems design and task performance

    INFORMATION SYSTEMS JOURNAL, Issue 4 2009
    Michel Avital
    Abstract Information systems (IS) research has been long concerned with improving task-related performance. The concept of fit is often used to explain how system design can improve performance and overall value. So far, the literature has focused mainly on performance evaluation criteria that are based on measures of task efficiency, accuracy, or productivity. However, nowadays, productivity gain is no longer the single evaluation criterion. In many instances, computer systems are expected to enhance our creativity, reveal opportunities and open new vistas of uncharted frontiers. To address this void, we introduce the concept of generativity in the context of IS design and develop two corresponding design considerations: 'generative capacity' that refers to one's ability to produce something ingenious or at least new in a particular context, and 'generative fit' that refers to the extent to which an IT artefact is conducive to evoking and enhancing that generative capacity. We offer an extended view of the concept of fit and realign the prevailing approaches to human-computer interaction design with current leading-edge applications and users' expectations. Our findings guide systems designers who aim to enhance creative work, unstructured syntheses, serendipitous discoveries, and any other form of computer-aided tasks that involve unexplored outcomes or aim to enhance our ability to go boldly where no one has gone before. In this paper, we explore the underpinnings of 'generative capacity' and argue that it should be included in the evaluation of task-related performance. Then, we briefly explore the role of fit in IS research, position 'generative fit' in that context, explain its role and impact on performance, and provide key design considerations that enhance generative fit. Finally, we demonstrate our thesis with an illustrative vignette of good generative fit, and conclude with ideas for further research. [source]


    Traffic locality characteristics in a parallel forwarding system

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2003
    W. Shi
    Abstract Due to the widening gap between the performance of microprocessors and that of memory, using caches in a system to take advantage of locality in its workload has become a standard approach to improving overall system performance. At the same time, many performance problems ultimately reduce to cache performance issues. Locality in the system workload is what makes caching possible. In this paper, we first use the reuse distance model to characterize temporal locality in Internet traffic. We develop a model that closely matches the empirical data. We then extend the work to investigate temporal locality in the workload of multi-processor forwarding systems by comparing locality under different packet scheduling schemes. Our simulations show that for systems with hash-based schedulers, caching can be an effective way to improve forwarding performance. Based on flow-level traffic characteristics, we further discuss the relationship between load-balancing and hash-scheduling, which yields insights into system design. Copyright © 2003 John Wiley & Sons, Ltd. [source]
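
    As a concrete illustration of the reuse distance model, the sketch below computes LRU-stack reuse distances for a synthetic flow-ID trace and compares the cache hit ratio of a single stream with the per-engine streams produced by hash-based scheduling; the trace, flow popularity model, and cache size are illustrative assumptions, not the paper's traces.

    ```python
    # Minimal sketch of reuse-distance-based locality analysis on a synthetic flow trace.
    import random

    random.seed(0)

    def reuse_distances(trace):
        """LRU-stack reuse distance per reference (inf for the first reference of a flow)."""
        stack, dists = [], []               # stack: flows ordered most-recent-first
        for flow in trace:
            if flow in stack:
                d = stack.index(flow)       # distinct flows seen since the last reference
                stack.remove(flow)
            else:
                d = float("inf")
            stack.insert(0, flow)
            dists.append(d)
        return dists

    def hit_ratio(trace, cache_entries):
        """Fully associative LRU cache: a reference hits iff its reuse distance < capacity."""
        dists = reuse_distances(trace)
        return sum(d < cache_entries for d in dists) / len(dists)

    # Synthetic trace with Zipf-like flow popularity (a few heavy flows, many light ones).
    flows = [f"flow{i}" for i in range(50)]
    trace = random.choices(flows, weights=[1.0 / (i + 1) for i in range(50)], k=5000)

    print("single-stream hit ratio (8-entry cache):", round(hit_ratio(trace, 8), 3))

    # Hash-based scheduling sends all packets of a flow to the same forwarding engine,
    # so per-engine traces retain the flow-level temporal locality.
    P = 4
    per_engine = [[f for f in trace if hash(f) % P == p] for p in range(P)]
    print("hash-scheduled hit ratios:", [round(hit_ratio(t, 8), 3) for t in per_engine])
    ```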


    Capacity analysis for underlaying CDMA microcell/macrocell systems

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 4 2001
    Jen-Kung Chung
    Abstract The CDMA system can provide more capacity than the conventional AMPS system, and hierarchical layers of cells will be required for future system designs. However, the question is whether the same RF channels, when used in a CDMA underlaying macrocell/microcell structure, can achieve capacity as high as in a homogeneous structure. This paper investigates the uplink and downlink interference from both the microcell and the macrocell in a hierarchical structure. Downlink power control is also considered. The results show that the capacity of a microcell in a hierarchical structure is 23 per cent less than in homogeneous cells. The capacity of a macrocell in a hierarchical structure decreases dramatically in proportion to the number of microcells. The capacities of the microcell and macrocell are limited by the downlink and the uplink, respectively. In addition, more effort is required for the microcell, such as transmitting more power from the microcell base station, if the same RF channel is used in a hierarchical structure. The results suggest that different RF channels should be used in a two-tier cellular environment. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Integrating electrical and aerodynamic characteristics for DFIG wind energy extraction and control study

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 12 2010
    Shuhui Li
    Abstract A doubly fed induction generator (DFIG) wind turbine depends on the control of the system at both generator and turbine levels, and the operation of the turbine is affected by the electrical characteristics of the generator and the aerodynamic characteristics of the turbine blades. This paper presents a DFIG energy extraction and control study by combining the two characteristics together in one integrative environment to examine various factors that are critical for an optimal DFIG system design. The generator characteristics are examined for different d-q control conditions, and the extracted power characteristics of the turbine blades versus generator slip are presented. Then, the two characteristics are analyzed in a joint environment. An integrative study is conducted to examine a variety of parametric data simultaneously for DFIG maximum wind power extraction evaluation. A closed-loop transient simulation using SimPowerSystem is developed to validate the effectiveness of steady-state results and to further investigate the wind energy extraction and speed control in a feedback control environment. Copyright © 2009 John Wiley & Sons, Ltd. [source]
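
    The aerodynamic side of the coupling can be sketched with the standard wind-power relation P = 1/2 rho pi R^2 Cp(lambda, beta) v^3, using a commonly cited empirical Cp approximation; the turbine data and coefficients below are illustrative assumptions rather than the paper's parameters, and the generator-side d-q control is not modeled.

    ```python
    # Minimal sketch: rotor power versus rotor speed at fixed wind speed, using a widely
    # cited empirical power-coefficient approximation. All parameter values are assumed.
    import numpy as np

    rho, R, beta = 1.225, 35.0, 0.0          # air density (kg/m^3), blade radius (m), pitch (deg)

    def cp(lmbda, beta):
        """Empirical power coefficient Cp(tip-speed ratio, pitch angle)."""
        lam_i = 1.0 / (1.0 / (lmbda + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
        return 0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * np.exp(-21.0 / lam_i) + 0.0068 * lmbda

    def rotor_power(v_wind, omega_rotor):
        lmbda = omega_rotor * R / v_wind                      # tip-speed ratio
        return 0.5 * rho * np.pi * R**2 * max(cp(lmbda, beta), 0.0) * v_wind**3

    # Sweep rotor speed at one wind speed: the peak is the maximum power point that the
    # DFIG speed control is designed to track.
    v = 10.0
    speeds = np.linspace(0.5, 4.0, 200)                       # rotor speed (rad/s)
    powers = [rotor_power(v, w) for w in speeds]
    best = int(np.argmax(powers))
    print(f"optimal rotor speed ~ {speeds[best]:.2f} rad/s, "
          f"P ~ {powers[best]/1e6:.2f} MW, Cp ~ {cp(speeds[best]*R/v, beta):.3f}")
    ```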


    A feasibility study of using thermal energy storage in a conventional air-conditioning system

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 11 2004
    M. M. Hussain
    Abstract An Erratum has been published for this article in International Journal of Energy Research 2004; 28 (13): 1213. This paper deals with the simulation of a thermal energy storage (TES) system for HVAC applications. TES is considered to be one of the most preferred demand side management technologies for shifting cooling electrical demand from peak daytime hours to off-peak night hours. TES is incorporated into the conventional HVAC system to store cooling capacity by chilling ethylene glycol, which is used as a storage medium. The thermodynamic performance is assessed using exergy and energy analyses. The effects of various parameters, such as ambient temperature, cooling load, and storage mass, on the performance of the TES are studied. A full storage cycle, with charging, storing and discharging stages, is considered. In addition, energy and exergy analysis of the TES is carried out for system design and optimization. The temperature in the storage is found to be as low as 6.4°C after 1 day of charging without load for a mass of 250 000 kg. It is found that the COP of the HVAC system increases with the decrease of storage temperature. The energy efficiency of the TES is found to be 80% for all mass flow rates of the discharging fluid, whereas the exergy efficiency varies from 14 to 0.5%. This is because irreversibilities in a TES process destroy a significant amount of the input exergy, and the TES exergy efficiencies are therefore always lower than the corresponding energy efficiencies. Copyright © 2004 John Wiley & Sons, Ltd. [source]
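
    The gap between the 80% energy efficiency and the much lower exergy efficiency can be made intuitive with a short calculation: cold stored only a few degrees below ambient carries far less exergy than energy. The property values below are illustrative assumptions (only the 250 000 kg mass and 6.4°C storage temperature are taken from the abstract).

    ```python
    # Minimal sketch: energy vs. exergy content of the chilled glycol store, relative to
    # an assumed ambient (dead-state) temperature.
    import math

    T0 = 308.15          # dead-state (ambient) temperature, K (35 C), assumed
    cp = 3.3e3           # specific heat of the glycol solution, J/(kg K), assumed
    m = 250_000.0        # storage mass, kg (from the abstract)
    T_store = 279.55     # storage temperature after charging, K (6.4 C, from the abstract)

    energy = m * cp * (T0 - T_store)                                   # stored cooling capacity
    exergy = m * cp * ((T_store - T0) - T0 * math.log(T_store / T0))   # thermal exergy of the cold fluid

    print(f"stored cooling energy ~ {energy/1e9:.1f} GJ")
    print(f"stored cold exergy    ~ {exergy/1e9:.2f} GJ "
          f"({100*exergy/energy:.1f}% of the energy figure)")
    # Charging/discharging irreversibilities are paid out of this already small exergy
    # budget, which is why exergy efficiencies far below the energy efficiency are expected.
    ```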


    Performance prediction of a refrigerating machine using R-407C: the effect of the circulating composition on system performance

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 15 2002
    P. Haberschill
    Abstract This article presents a steady-state model of a vapour compression refrigerating machine using the ternary refrigerant mixture R-407C. When using a zeotropic mixture in a refrigerant cycle, the circulating composition does not agree with the composition of the original charged mixture. This is mainly due to the temperature glide and the vapour-liquid slip ratio. As a result of the composition shift and its magnitude, the system performance changes depending on the system design, especially the presence of liquid receiving vessels. In this paper, a method that predicts the circulating composition has been combined with a refrigerating machine model. The results obtained with this model show an enrichment in the most volatile components of about 1% for the circulating composition, which is sufficient to decrease the system performance by about 3%. Factors affecting the overall performance have been investigated. The results show a very strong performance dependence on the refrigerant charge. The COP can decrease by 25% when the refrigerant charge is insufficient. An initial charged composition variation of 2% leads to variations in the cooling capacity of about 5%. Furthermore, our model was employed to compare the performance of both R-22 and R-407C. The cooling capacity for R-22 is slightly greater in comparison to R-407C and the COP is almost constant. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Adaptive recurrent neural network control of biological wastewater treatment

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 2 2005
    Ieroham S. Baruch
    Three adaptive neural network control structures to regulate a biological wastewater treatment process are introduced: indirect, inverse model, and direct adaptive neural control. The objective is to keep the concentration of the recycled biomass proportional to the influent flow rate in the presence of periodically acting disturbances, process parameter variations, and measurement noise. This is achieved by the so-called Jordan Canonical Recurrent Trainable Neural Network, which is a completely parallel and parametric neural structure, permitting the use of the obtained parameters, during the learning phase, directly for control system design. Comparative simulation results confirmed the applicability of the proposed control schemes. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 173-193, 2005. [source]
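
    For orientation, the sketch below shows a generic Jordan-style recurrent cell, in which the previous output is fed back through context units; it is only meant to make the recurrent controller structure concrete and does not reproduce the paper's Jordan Canonical Recurrent Trainable Neural Network or its learning law. Sizes, weights, and the input signals are arbitrary assumptions.

    ```python
    # Generic Jordan-style recurrent cell: hidden state from current inputs plus the
    # fed-back previous output, then a new output. Everything here is an assumed example.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 2, 6, 1
    Wx = rng.normal(scale=0.3, size=(n_hidden, n_in))
    Wc = rng.normal(scale=0.3, size=(n_hidden, n_out))   # context (previous output) weights
    Wo = rng.normal(scale=0.3, size=(n_out, n_hidden))

    def step(x, context):
        """One forward step: hidden activation from input + fed-back output, then new output."""
        h = np.tanh(Wx @ x + Wc @ context)
        return Wo @ h

    # Run the recurrent model over a short sequence (e.g. influent flow rate and measured
    # biomass concentration at each sample time); the output becomes the next context.
    context = np.zeros(n_out)
    for t in range(5):
        x_t = np.array([np.sin(0.3 * t), 0.1 * t])       # dummy measurements
        context = step(x_t, context)
        print(f"t={t}: u={context[0]:+.3f}")
    ```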


    Quantum computing measurement and intelligence

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 5 2010
    Zoheir Ezziane
    Abstract One of the grand challenges in the nanoscopic computing era is guarantees of robustness. Robust computing system design is confronted with quantum physical, probabilistic, and even biological phenomena, and guaranteeing high-reliability is much more difficult than ever before. Scaling devices down to the level of single electron operation will bring forth new challenges due to probabilistic effects and uncertainty in guaranteeing "zero-one" based computing. Minuscule devices imply billions of devices on a single chip, which may help mitigate the challenge of uncertainty by replication and redundancy. However, such device densities will create a design and validation nightmare with the sheer scale. The questions that confront computer engineers regarding the current status of nanocomputing material and the reliability of systems built from such minuscule devices are difficult to articulate and answer. This article illustrates and discusses two types of quantum algorithms as follows: (1) a simple quantum algorithm and (2) a quantum search algorithm. This article also presents a review of recent advances in quantum computing and intelligence and presents major achievements and obstacles for researchers in the near future. © 2009 Wiley Periodicals, Inc. Int J Quantum Chem, 2010 [source]
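
    The abstract does not name a specific search algorithm; a Grover-style amplitude amplification is the canonical quantum search example, and its well-known success probability after k iterations is easy to sketch.

    ```python
    # Grover-style search on an unstructured database of N items with one marked item:
    # success probability after k iterations is sin^2((2k+1)*theta), theta = asin(1/sqrt(N)).
    import math

    def grover_success(N, k):
        """Probability of measuring the marked item after k Grover iterations."""
        theta = math.asin(1.0 / math.sqrt(N))
        return math.sin((2 * k + 1) * theta) ** 2

    N = 1_000_000
    k_opt = math.floor(math.pi / (4 * math.asin(1.0 / math.sqrt(N))))
    print(f"N={N}: ~{k_opt} iterations (vs ~N/2 classical probes), "
          f"success probability {grover_success(N, k_opt):.4f}")
    ```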


    Stable robust feedback control system design for unstable plants with input constraints using robust right coprime factorization

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 18 2007
    Mingcong Deng
    Abstract A stable robust control system design problem for unstable plants with input constraints is considered using robust right coprime factorization of nonlinear operators. For obtaining strong stability of the closed-loop system of unstable plants with input constraints, a design scheme for a robust non-linear control system is given based on robust right coprime factorization. Some conditions for the robustness and system output tracking of the unstable plant with input constraints are derived. Numerical examples are given to demonstrate the validity of the theoretical results. Copyright © 2007 John Wiley & Sons, Ltd. [source]
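
    For readers unfamiliar with the framework, a right coprime factorization of a (possibly nonlinear) plant can be summarized as below; the notation is generic and the robust conditions derived in the paper are not reproduced here.

    ```latex
    % Generic right coprime factorization setting (illustrative notation, not the paper's).
    \begin{align*}
      P &= N D^{-1}, \qquad y = N(\omega), \quad u = D(\omega),
          && \text{$N$, $D$ stable operators, $D$ invertible,} \\
      A N &+ B D = M,
          && \text{Bezout identity with $A$, $B$ stable and $M$ unimodular.}
    \end{align*}
    % When the Bezout identity is preserved (up to a unimodular factor) for the perturbed,
    % input-constrained plant, the closed loop remains stable; establishing conditions of
    % this kind is what "robust right coprime factorization" refers to above.
    ```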


    Polynomial control: past, present, and future

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 8 2007
    Vladimír Kučera
    Abstract Polynomial techniques have made important contributions to systems and control theory. Engineers in industry often find polynomial and frequency domain methods easier to use than state equation-based techniques. Control theorists show that results obtained in isolation using either approach are in fact closely related. Polynomial system description provides input-output models for linear systems with rational transfer functions. These models display two important system properties, namely poles and zeros, in a transparent manner. A performance specification in terms of polynomials is natural in many situations; see pole allocation techniques. A specific control system design technique, called the polynomial equation approach, was developed in the 1960s and 1970s. The distinguishing feature of this technique is a reduction of controller synthesis to a solution of linear polynomial equations of a specific (Diophantine or Bézout) type. In most cases, control systems are designed to be stable and meet additional specifications, such as optimality and robustness. It is therefore natural to design the systems step by step: stabilization first, then the additional specifications each at a time. For this it is obviously necessary to have any and all solutions of the current step available before proceeding any further. This motivates the need for a parametrization of all controllers that stabilize a given plant. In fact this result has become a key tool for the sequential design paradigm. The additional specifications are met by selecting an appropriate parameter. This is simple, systematic, and transparent. However, the strategy suffers from an excessive growth of the controller order. This article is a guided tour through polynomial control system design. The origins of the parametrization of stabilizing controllers, called the Youla-Kučera parametrization, are explained. Standard results on reference tracking, disturbance elimination, pole placement, deadbeat control, H2 control, l1 control and robust stabilization are summarized. New and exciting applications of the Youla-Kučera parametrization are then discussed: stabilization subject to input constraints, output overshoot reduction, and fixed-order stabilizing controller design. Copyright © 2006 John Wiley & Sons, Ltd. [source]
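
    The parametrization at the heart of this tour can be stated compactly in coprime-factor form; the SISO notation below is the standard textbook one over the ring of stable proper rational functions, not tied to the article's own symbols.

    ```latex
    % All stabilizing controllers for a plant P with a coprime factorization over RH-infinity.
    \begin{align*}
      P &= \frac{N}{M}, \qquad X N + Y M = 1,
          && N, M, X, Y \in RH_\infty \quad \text{(Bezout identity)}, \\
      C(Q) &= \frac{X + M Q}{Y - N Q}, \qquad Q \in RH_\infty, \; Y - N Q \neq 0,
          && \text{Youla-Ku\v{c}era parametrization.}
    \end{align*}
    % Sequential design then amounts to choosing the free parameter Q: stabilization holds
    % for every admissible Q, and tracking, disturbance rejection, or optimality
    % requirements are imposed on Q afterwards.
    ```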


    A stability guaranteed active fault-tolerant control system against actuator failures

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 12 2004
    Midori Maki
    Abstract In this paper, a new strategy for fault-tolerant control system design has been proposed using multiple controllers. The design of such controllers is shown to be unique in the sense that the resulting control system neither suffers from the problem of conservativeness of conventional passive fault-tolerant control nor from the risk of instability associated with active fault-tolerant control in the case that an incorrect fault detection and isolation decision is made. In other words, the stability of the closed-loop system is always ensured regardless of the decision made by the fault detection and isolation scheme. A correct decision will further lead to optimal performance of the closed-loop system. This paper deals with the conflicting requirements among stability, redundancy, and graceful degradation in performance for fault-tolerant control systems by using robust control techniques. A detailed design procedure has been presented with consideration of parameter uncertainties. Both total and partial actuator failures have been considered. This new control strategy has been demonstrated by controlling a McDonnell F-4C airplane in the lateral direction through simulation. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Value-centric framework and pareto optimality for design and acquisition of communication satellites

    INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 6 2009
    Joy Brathwaite
    Abstract Investments in space systems are substantial, indivisible, and irreversible, characteristics of high-risk investments. Traditional approaches to system design, acquisition, and risk mitigation are derived from a cost-centric mindset, and as such they incorporate little information about the value of the spacecraft to its stakeholders. These traditional approaches are appropriate in stable environments. However, the current technical and economic conditions are distinctly uncertain and rapidly changing. Consequently, these traditional approaches have to be revisited and adapted to the current context. We propose that in uncertain environments, decision-making with respect to design and acquisition choices should be value-based. We develop a value-centric framework, analytical tools, and an illustrative numerical example for communication satellites. Our two proposed metrics for decision-making are the system's expected value and value uncertainty. Expected value is calculated as the expected NPV of the satellite. The cash inflow is calculated as a function of the satellite loading, its transponder pricing, and market demand. The cash outflows are the various costs for owning and operating the satellite. Value uncertainty emerges due to uncertainties in the various cash flow streams, in particular because of market conditions. We propagate market uncertainty through Monte Carlo simulation, and translate it into value uncertainty for the satellite. The end result is a portfolio of Pareto-optimal satellite design alternatives. By using value and value uncertainty as decision metrics in the down-selection process, decision-makers draw on more information about the system in its environment, and in making value-based design and acquisition choices, they ultimately make more informed and better choices. Copyright © 2009 John Wiley & Sons, Ltd. [source]
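
    A stripped-down version of the proposed metrics can be sketched as a Monte Carlo NPV calculation in which market demand (transponder utilization) is the only uncertain driver; all costs, prices, and the demand model below are illustrative assumptions, not the paper's data or revenue model.

    ```python
    # Minimal sketch: expected value (mean NPV) and value uncertainty (NPV dispersion)
    # for a communication satellite under uncertain market demand. Figures are assumed.
    import numpy as np

    rng = np.random.default_rng(42)
    n_runs, years, rate = 10_000, 15, 0.10
    capex = 300e6                               # build + launch cost ($), paid at year 0
    opex = 15e6                                 # annual operating cost ($)
    transponders, price = 40, 2.5e6             # leasable transponders, $/transponder-year

    # Random-walk transponder utilization per scenario, clipped to [0, 1].
    util = np.clip(0.6 + np.cumsum(rng.normal(0, 0.08, (n_runs, years)), axis=1), 0.0, 1.0)
    cash = util * transponders * price - opex                   # net cash inflow per year
    discount = (1 + rate) ** -np.arange(1, years + 1)
    npv = cash @ discount - capex

    print(f"expected value (mean NPV): ${npv.mean()/1e6:,.0f}M")
    print(f"value uncertainty (std)  : ${npv.std()/1e6:,.0f}M")
    print(f"P(NPV < 0)               : {np.mean(npv < 0):.2%}")
    ```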


    Development of mobile broadband interactive satellite access system for Ku/Ka band

    INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 2 2006
    Yun-Jeong Song
    Abstract It is difficult to implement broadband satellite Internet and broadcasting services in a mobile environment. The paper presents the design and implementation of a mobile broadband satellite access system. In the system design, mobile terminal services are treated as more critical than fixed terminal services with respect to issues such as resource management, antenna tracking, and weak-signal recovery. In the paper, the mobile broadband interactive satellite access technology system (MoBISAT) is presented. The system network, which is arranged as a star network, consists of a time division multiplexing-based forward link and a multi-frequency time division multiple access-based return link. The MoBISAT provides both Ku-band satellite TV and Ka-band high-speed Internet, based on the DVB-S/DVB-RCS standards, to the passengers and crews of land, maritime and air vehicles. The key factors of the hub and the mobile terminal are addressed for the design and implementation of the MoBISAT. In particular, the design and implementation of the return-link demodulation method, the resource management scheme, and the mobile terminal structure, including the mobile antenna, are described. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Distributional Impacts of Pension Policy in Argentina: Winners and Losers within and Across Generations

    INTERNATIONAL SOCIAL SECURITY REVIEW, Issue 3 2006
    Camila Arza
    The paper deals with the life-cycle intra- and intergenerational income transfers operated by the pension system in Argentina by estimating the internal rates of return obtained by different generations and types of workers from their participation in the system. The empirical analysis confirms that earlier generations of workers benefited from higher social security returns than later generations, which retired under a matured system with large deficits. The worst-affected cohorts were those born after 1920, particularly suffering from a social security crisis and falling real wages. For future generations retiring fully under the new mixed pension system, returns will more closely depend on financial market performance and the evolution of administration costs. Intragenerational transfers were also observed for all cohorts under study, as a result of the original system design as well as adjustments adopted during the implementation process. The real distributional impact of progressive benefit formulas could, however, be offset by state transfers to cover pension deficits and forward tax shifting in a context of unequal pension coverage. [source]
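
    The core calculation behind the analysis, a cohort's internal rate of return on its pension contributions and benefits, can be sketched as follows; the stylized contribution rate, benefit level, and career lengths are illustrative assumptions, not the paper's estimates for Argentina.

    ```python
    # Minimal sketch: internal rate of return of a stylized pension career, found by
    # bisection on the net present value of annual cash flows. All figures are assumed.
    def irr(cashflows, lo=-0.5, hi=1.0, tol=1e-8):
        """Discount rate r with sum(cf_t / (1+r)^t) = 0, assuming one sign change."""
        def npv(r):
            return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # A stylized worker: 40 years contributing 11% of a flat real wage, then 20 years
    # drawing a benefit equal to 60% of that wage.
    wage = 10_000.0
    flows = [-0.11 * wage] * 40 + [0.60 * wage] * 20
    print(f"internal rate of return ~ {irr(flows):.2%} per year (real)")
    ```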


    Health Financing in Singapore: A Case for Systemic Reforms

    INTERNATIONAL SOCIAL SECURITY REVIEW, Issue 1 2006
    Mukul G. Asher
    This paper assesses Singapore's healthcare financing arrangements in terms of their efficiency, fairness, and adequacy. Singapore represents an interesting case study because it is perhaps the only high-income, rapidly ageing country to rely on mandatory savings to finance healthcare, thus eschewing extensive risk-pooling arrangements, generally regarded as efficient and equitable. The paper argues that parametric reforms, i.e. relatively minor changes in the parameters of current schemes which preserve the existing philosophy and system design, will not be sufficient to meet healthcare financing objectives. Systemic reforms, which will bring Singapore into the mainstream of health financing arrangements found in the OECD countries, are urgently needed. Their design and timing should be based on good quality, timely and relevant data, and an environment conducive to vigorous debate. [source]