Design Methodology

Kinds of Design Methodology

  • robust design methodology

  • Selected Abstracts

    Club Cinemetrics: New Post-Perspectival Design Methodologies

    Brian McGrath
    Abstract Cinemetrics is a post-perspectival, cinematically inspired drawing system that encourages a way of working that is multilayered and multiscalar, responding to the complexities of contemporary life and the city. Here Brian McGrath, in collaboration with Hsueh Cheng Leun, Paul Chu Hoi Shan, José De Jesús Zamora and Victoria Marshall, demonstrates how in field work in the US, Thailand and Taiwan, Cinemetrics enabled them to adopt an interdisciplinary process addressing transdisciplinary issues. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Seismic reliability of V-braced frames: Influence of design methodologies

    Alessandra Longo
    Abstract According to the most modern trend, performance-based seismic design is aimed at the evaluation of the seismic structural reliability, defined as the mean annual frequency (MAF) of exceeding a threshold level of damage, i.e. a limit state. The methodology for the evaluation of the MAF of exceeding a limit state is herein applied with reference to concentrically 'V'-braced steel frames designed according to different criteria. In particular, two design approaches are examined. The first approach corresponds to the provisions suggested by Eurocode 8 (prEN 1998, Eurocode 8: design of structures for earthquake resistance. Part 1: general rules, seismic actions and rules for buildings), while the second approach is based on a rigorous application of capacity design criteria aiming at the control of the failure mode (J. Earthquake Eng. 2008; 12:1246–1266; J. Earthquake Eng. 2008; 12:728–759). The aim of the presented work is to focus on the seismic reliability obtained through these design methodologies. The probabilistic performance evaluation is based on an appropriate combination of probabilistic seismic hazard analysis, probabilistic seismic demand analysis (PSDA) and probabilistic seismic capacity analysis. Regarding PSDA, nonlinear dynamic analyses have been carried out in order to obtain the parameters describing the probability distribution laws of demand, conditioned on given values of the earthquake intensity measure. Copyright © 2009 John Wiley & Sons, Ltd. [source]
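
    In its simplest form, the MAF of limit-state exceedance described above is the integral of a fragility function against the slope of a seismic hazard curve. The sketch below illustrates that convolution with entirely hypothetical hazard and fragility parameters (none of the values are from the paper):

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative hazard curve (hypothetical, not from the paper):
# MAF of exceeding intensity-measure level im, nu(im) = k0 * im**(-k).
k0, k = 1e-4, 2.5
im = np.linspace(0.05, 2.0, 400)   # intensity measure grid (e.g. Sa in g)
nu = k0 * im ** (-k)               # hazard: MAF of IM > im

# Illustrative lognormal fragility: P(limit state exceeded | IM = im).
median, beta = 0.6, 0.4
p_fail = lognorm.cdf(im, s=beta, scale=median)

# MAF of limit-state exceedance: integrate the fragility against the
# (positive) slope of the hazard curve, trapezoidal rule by hand.
dnu = -np.gradient(nu, im)
f = p_fail * dnu
maf_ls = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(im)))
print(f"MAF of limit-state exceedance: {maf_ls:.3e}")
```

    The paper obtains the demand-side probability distributions from nonlinear dynamic analyses; here the fragility is simply assumed lognormal to keep the convolution step visible.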

    Automating hierarchical environmentally-conscious design using integrated software: VOC recovery case study

    Hui Chen
    Traditionally, chemical process design and optimization has mainly been based on economic considerations. Currently, the scope is being extended to include environmentally-conscious process design (ECD). ECD will be facilitated by the emergence of integrated design methodologies and tools. The objectives of this paper are to present a hierarchical design methodology for environmentally-conscious process design, and integrated assessment and optimization software. An application for the recovery of VOCs from a gaseous waste stream is presented using this design methodology and software. Revenue increased and environmental impacts were reduced. The net present value for the optimum design is approximately $900,000, which is much higher than that of the base case design, −$2,498,200. A composite environmental index decreases from 1.19 × 10⁻⁴ in the base case to about 1.30 × 10⁻⁵ in the optimum case. This automated tool, along with the embedded design methodology, provides an effective and efficient way to perform environmentally-conscious chemical process design and optimization. [source]
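
    The NPV comparison quoted above is a standard discounted cash flow calculation. The sketch below reproduces the idea with hypothetical cash flows (the paper's actual capital costs, revenues and discount rate are not given in the abstract; the figures are chosen only so the base case comes out negative and the optimized case positive, as reported):

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical cash flows: capital cost up front, then yearly revenue
# from recovered VOCs minus operating cost, over a 10-year horizon.
base_case = [-3_000_000] + [80_000] * 10     # vents VOCs, little revenue
optimized = [-1_500_000] + [350_000] * 10    # VOC recovery adds revenue

rate = 0.08
print(f"Base case NPV:  {npv(rate, base_case):,.0f}")
print(f"Optimized NPV:  {npv(rate, optimized):,.0f}")
```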

    Thermoeconomic analysis of household refrigerators

    Arif Hepbasli
    Abstract This study deals with the thermoeconomic analysis of household refrigerators, providing useful insights into the relations between thermodynamics and economics. In the analysis, the EXCEM method, based on the quantities exergy, cost, energy and mass, is applied to a household refrigerator using the refrigerant R134a. The performance evaluation of the refrigerator is conducted in terms of exergoeconomic aspects based on various reference state temperatures ranging from 0 to 20°C. The exergy destructions in each of the components of the overall system are determined for average values of experimentally measured parameters. Exergy efficiencies of the system components are determined to assess their performances and to elucidate potentials for improvement. Thermodynamic loss rate-to-capital cost ratios for each component of the refrigerator are investigated. Correlations are developed to estimate exergy efficiencies and ratios of exergy loss rate to capital cost as a function of reference (dead) state temperature. The ratios of exergy loss rates to capital cost values are found to vary from 2.949 × 10⁻⁴ to 3.468 × 10⁻⁴ kW US$⁻¹. The exergy efficiency values are also found to range from 13.69 to 28.00% and 58.15 to 68.88% on the basis of net rational efficiency and product/fuel, respectively, at the reference state temperatures considered. It is expected that the results obtained will be useful to those involved in the development of analysis and design methodologies that integrate thermodynamics and economics. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Micro-fuel cell power sources

    Jeffrey D. Morse
    Abstract This paper presents a review and discussion of micro-fuel cell technologies, providing insight into the innovations that have been made to date. Discussion of concepts and results leading towards increased levels of integration and performance for micro-fuel cell systems will elucidate the potential of thin film and microfabrication methods in meeting the challenges and requirements necessary for consumer applications. While the amount of literature in this area is substantial, a representative sampling of key developments will be presented in this paper, in order to gain a sense of the design methodologies being implemented for micro-fuel cell power sources. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    Demand side management for water heating installations in South African commercial buildings

    P. G. Rousseau
    Abstract The largest percentage of the sanitary hot water used in South African buildings is heated by means of direct electrical resistance heaters. This is one of the major contributing factors to the undesirably high morning and afternoon peaks imposed on the national electricity supply grid. Water heating therefore continues to be of concern to ESKOM, the country's only electrical utility company. The so-called in-line water heating system design methodology was developed to address this problem. This paper investigates the potential impact of in-line systems on the national peak electrical demand. A computer simulation model was developed that combines a deterministic mathematical model with a statistical approach in order to predict the diversity factors associated with both the existing and in-line design methodologies. A study was also conducted to estimate the total installed water heating capacity in the national commercial building sector. This figure can be combined with the simulated diversity factor to determine the peak electrical demand. The deterministic model includes the detailed simulation of the hot water storage vessel, the electrical heater and the system control algorithm. The mathematical model for the storage vessel is based on an electrical analogue approach that includes the effects of conduction as well as forced and natural convection. This model was verified extensively with the aid of laboratory measurements and compared with existing storage vessel models. It was found that the new storage vessel model could predict the supply temperature within 2 per cent for a system configuration with the heater in parallel outside the reservoir and within 12 per cent for a configuration with the heater situated inside the reservoir. This compares favourably with existing models found in the literature. 
The complete simulation based on the statistical approach showed that extensive application of the new design methodology could result in a reduction of approximately 75 MW in the total maximum peak demand imposed on the electricity supply grid in wintertime. This is 58 per cent of the current peak demand due to commercial water heating and 12.5 per cent of the peak load reduction target set by ESKOM until the year 2015. Copyright © 2001 John Wiley & Sons, Ltd. [source]
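
    The headline figure above comes from combining the estimated installed capacity with a diversity factor for each design methodology. The arithmetic can be sketched as follows; the installed capacity and both diversity factors are illustrative assumptions, chosen only so the result reproduces the reported 75 MW / 58 per cent relationship:

```python
# Hypothetical figures (the paper's exact inputs are not in the abstract):
installed_capacity_mw = 430.0   # installed commercial water heating capacity
df_existing = 0.30              # diversity factor, conventional storage systems
df_inline = 0.125               # diversity factor, in-line design methodology

# Peak demand = installed capacity x diversity factor.
peak_existing = installed_capacity_mw * df_existing
peak_inline = installed_capacity_mw * df_inline
reduction = peak_existing - peak_inline

print(f"Peak demand (existing): {peak_existing:.1f} MW")
print(f"Peak demand (in-line):  {peak_inline:.1f} MW")
print(f"Reduction:              {reduction:.2f} MW "
      f"({100 * reduction / peak_existing:.0f}% of current peak)")
```

    Note that the percentage reduction depends only on the ratio of the two diversity factors, not on the assumed installed capacity.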

    Theoretical facet and experimental results of harmonic tuned PAs

    Paolo Colantonio
    Abstract High-efficiency power amplifier design criteria imply a synthesis of input and output networks with particular emphasis on their harmonic behavior. In this article, a simplified approach to clarify the relevance of such terminations is presented. Starting from the implications of power balance for stage performance, design criteria to improve the efficiency of high-frequency applications are presented. In order to validate the approach, comparisons among the performances of single-stage amplifiers, all operated at 5 GHz under a sinusoidal driving signal and synthesized by utilizing different design methodologies, are presented. Drain efficiencies at 1-dB compression of 44.5%, 53.3%, and 61.56% have been measured respectively for the tuned load and harmonically manipulated (2nd and 2nd & 3rd) realized amplifiers, compared with a simulated drain efficiency of 55% using the Class E approach. © 2003 Wiley Periodicals, Inc. Int J RF and Microwave CAE 13: 459–472, 2003. [source]

    A parametric insensitive H2 control design approach

    Philippe Chevrel
    Abstract H2 and H∞ control design methodologies are known to be efficient for dealing with multivariable control problems. However, most of them do not explicitly take parametric uncertainties into account. This paper proposes a low parametric sensitivity H2 control design method as an alternative to µ-synthesis or robust H2 control design. In addition to the standard H2 criterion, the H2 norm of the parametric sensitivity function is introduced in order to improve the robustness of the resulting controller. Unfortunately, this problem is a difficult one. Its equivalence to a structured feedback H2 control problem will be shown. The underlying BMI will be solved by making use of an iterative LMI procedure. Two examples will illustrate the interest of the approach. Copyright © 2004 John Wiley & Sons, Ltd. [source]
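
    The standard H2 criterion referenced above can be evaluated from an observability Gramian obtained by solving a Lyapunov equation. The sketch below shows only that building block (the paper's sensitivity-augmented BMI/iterative LMI machinery is beyond a short example); the system matrices are illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of a stable LTI system (A, B, C): sqrt(trace(B' Q B)),
    where Q is the observability Gramian solving A'Q + QA + C'C = 0."""
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    return float(np.sqrt(np.trace(B.T @ Q @ B)))

# Simple stable example: first-order system dx/dt = -x + u, y = x,
# whose H2 norm is analytically sqrt(1/2).
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
print(f"H2 norm: {h2_norm(A, B, C):.4f}")
```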

    New algorithms and an in silico benchmark for computational enzyme design

    PROTEIN SCIENCE, Issue 12 2006
    Alexandre Zanghellini
    Abstract The creation of novel enzymes capable of catalyzing any desired chemical reaction is a grand challenge for computational protein design. Here we describe two new algorithms for enzyme design that employ hashing techniques to allow searching through large numbers of protein scaffolds for optimal catalytic site placement. We also describe an in silico benchmark, based on the recapitulation of the active sites of native enzymes, that allows rapid evaluation and testing of enzyme design methodologies. In the benchmark test, which consists of designing sites for each of 10 different chemical reactions in backbone scaffolds derived from 10 enzymes catalyzing the reactions, the new methods succeed in identifying the native site in the native scaffold and ranking it within the top five designs for six of the 10 reactions. The new methods can be directly applied to the design of new enzymes, and the benchmark provides a powerful in silico test for guiding improvements in computational enzyme design. [source]
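
    The hashing idea described above, bucketing discretized geometry so that candidate catalytic site placements can be looked up instead of exhaustively scanned, can be sketched with a toy grid hash. Everything here (residue names, coordinates, cell size) is hypothetical and far simpler than the paper's actual matching machinery:

```python
from collections import defaultdict

def hash_key(coords, cell=1.5):
    """Discretize 3-D coordinates into a grid-cell key (cell size in angstroms)."""
    return tuple(int(round(c / cell)) for c in coords)

# Toy scaffold: residue positions that could anchor a catalytic group
# (coordinates are entirely hypothetical).
scaffold = {
    "ASP27": (1.2, 0.4, -2.1),
    "HIS64": (4.8, 1.1, 0.3),
    "SER195": (1.4, 0.2, -1.8),
}

# Build the hash table once; each placement query then costs O(1)
# instead of a scan over every residue in every scaffold.
table = defaultdict(list)
for res, xyz in scaffold.items():
    table[hash_key(xyz)].append(res)

query = (1.3, 0.3, -2.0)   # desired position of a catalytic residue
matches = table[hash_key(query)]
print("Candidate placements:", matches)
```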

    Robust sequential designs for nonlinear regression

    Sanjoy Sinha
    Abstract The authors introduce the formal notion of an approximately specified nonlinear regression model and investigate sequential design methodologies when the fitted model is possibly of an incorrect parametric form. They present small-sample simulation studies which indicate that their new designs can be very successful, relative to some common competitors, in reducing mean squared error due to model misspecification and to heteroscedastic variation. Their simulations also suggest that standard normal-theory inference procedures remain approximately valid under the sequential sampling schemes. The methods are illustrated both by simulation and in an example using data from an experiment described in the chemical engineering literature. [source]

    "Face of the Brand": A design methodology with global potential

    Dannielle Blumenthal
    To cope with the multiplicity of world markets, Dannielle Blumenthal presents a strategy known as "face of the brand." In this approach, brand is less about a uniform message and logo and more about a distinctive competitive position expressed through a palette of images, colors, shapes, and language that can, without losing its global impact, be designed and adapted to suit the nuances of individual cultures and consumer preferences. [source]

    Design of passive systems for control of inelastic structures

    Gian Paolo Cimellaro
    Abstract A design strategy for control of buildings experiencing inelastic deformations during seismic response is formulated. The strategy uses weakened and/or softened elements in a structural system while adding passive energy dissipation devices (e.g. viscous fluid devices) in order to simultaneously control acceleration and deformation response during seismic events. A design methodology is developed to determine the locations and the magnitude of weakening and/or softening of structural elements and the added damping while ensuring structural stability. A two-stage design procedure is suggested: (i) first, use a nonlinear active control algorithm to determine the new structural parameters while ensuring stability; then (ii) determine the properties of the equivalent structural parameters of the passive system, which can be implemented by removing or weakening some structural elements, or connections, and by the addition of energy dissipation systems. Passive dampers and weakened elements are designed using an optimization algorithm to obtain a response as close as possible to that of an actively controlled system. A case study of a five-story building subjected to the El Centro ground motion, as well as to an ensemble of simulated ground motions, is presented to illustrate the procedure. The results show that, following the design strategy, control of both peak inter-story drifts and total accelerations can be obtained. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Damage-based design with no repairs for multiple events and its sensitivity to seismicity model

    S. Das
    Abstract Conventional design methodology for earthquake-resistant structures is based on the concept of ensuring 'no collapse' during the most severe earthquake event. This methodology does not envisage the possibility of continuous damage accumulation during several not-so-severe earthquake events, as may be the case in areas of moderate to high seismicity, particularly when it is economically infeasible to carry out repairs after damaging events. As a result, the structure may collapse, or may necessitate large-scale repairs, well before the design life of the structure is over. This study considers the use of the design force ratio (DFR) spectrum for taking an informed decision on the extent to which yield strength levels should be raised to avoid such a scenario. The DFR spectrum gives the ratios by which the yield strength levels of single-degree-of-freedom oscillators of different initial periods should be increased in order to limit the total damage caused by all earthquake events during the lifetime to a specified level. The DFR spectra are compared for three different seismicity models in the case of elasto-plastic oscillators: one corresponding to the exponential distribution for return periods of large events, and the other two corresponding to the lognormal and Weibull distributions. It is shown through a numerical study for a hypothetical seismic region that the use of the simple exponential model may be acceptable only for small values of the seismic gap length. For moderately large to large seismic gap lengths, it may be conservative to use the lognormal model, while the Weibull model may be assumed for very large seismic gap lengths. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Evolutionary aseismic design and retrofit of structures with passive energy dissipation

    G. F. Dargush
    Abstract A new computational framework is developed for the design and retrofit of building structures by considering aseismic design as a complex adaptive process. For the initial phase of the development within this framework, genetic algorithms are employed for the discrete optimization of passively damped structural systems. The passive elements may include metallic plate dampers, viscous fluid dampers and viscoelastic solid dampers. The primary objective is to determine robust designs, including both the non-linearity of the structural system and the uncertainty of the seismic environment. Within the present paper, this computational design approach is applied to a series of model problems, involving sizing and placement of passive dampers for energy dissipation. In order to facilitate our investigations and provide a baseline for further study, we introduce several simplifications for these initial examples. In particular, we employ deterministic lumped parameter structural models, memoryless fitness function definitions and hypothetical seismic environments. Despite these restrictions, some interesting results are obtained from the simulations and we are able to gain an understanding of the potential for the proposed evolutionary aseismic design methodology. Copyright © 2005 John Wiley & Sons, Ltd. [source]
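
    The discrete sizing-and-placement optimization described above can be sketched with a minimal genetic algorithm. The fitness function below is a deliberately crude surrogate (hypothetical story weights and damper cost, nothing like the paper's nonlinear time-history evaluation), kept only to show the encode/select/crossover/mutate loop for damper placement:

```python
import random

N_STORIES = 5          # candidate damper locations (one per story)
POP, GENS = 30, 60
random.seed(0)

def fitness(genome):
    """Toy surrogate: reward drift reduction (dampers in lower stories
    help more in this hypothetical model), penalize total damper cost."""
    benefit = sum((N_STORIES - i) * g for i, g in enumerate(genome))
    cost = 2.0 * sum(genome)
    return benefit - cost

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_STORIES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP // 2]              # elitist truncation selection
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_STORIES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:             # bit-flip mutation
                j = random.randrange(N_STORIES)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("Best damper layout (1 = damper installed):", best)
```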

    Integrated Bienzyme Chip for Ethanol Monitoring

    ELECTROANALYSIS, Issue 12 2006
    Javier Gonzalo-Ruiz
    Abstract An ethanol chip biosensor based on a bienzymatic system has been developed. Horseradish peroxidase and alcohol oxidase have been co-immobilized, together with the mediator, into a polypyrrole matrix on the integrated working electrode. Variables that affect the chronoamperometric response to ethanol have been optimized through the experimental design methodology. Under these conditions, the slopes of several calibration curves show a reproducibility, repeatability and limit of detection of 6.09% (n=5), 9.03% (n=5) and 2.98±0.38 mmol dm⁻³ (α=β=0.05, n=3), respectively. Finally, the biosensors based on platinum chips were applied to the determination of ethanol in white wine samples, with successful results. [source]

    GDKAT: A goal-driven knowledge acquisition tool for knowledge base development

    EXPERT SYSTEMS, Issue 2 2000
    Chien-Hsing Wu
    While knowledge-based systems are being used extensively to assist in making decisions, a critical factor that affects their performance and reliability is the quantity and quality of their knowledge bases. Knowledge acquisition requires an in-depth comprehension of knowledge modeling and of the application domain. Many knowledge acquisition tools have been developed to support knowledge base development. However, a weakness revealed in these tools is a domain-dependent and complex acquisition process. Domain dependence limits the applicable areas, and a complex acquisition process makes a tool difficult to use. In this paper, we present a goal-driven knowledge acquisition tool (GDKAT) that helps elicit and store experts' declarative and procedural knowledge in knowledge bases for a user-defined domain. The designed tool is implemented using object-oriented design methodology in a C++ Windows environment. An example demonstrating the GDKAT is also delineated. While the application domain for the example presented is reflow soldering in surface-mount printed circuit board assembly, the GDKAT can be used to develop knowledge bases for other domains as well. [source]

    Thermo-Economic Modelling and Optimisation of Fuel Cell Systems,

    FUEL CELLS, Issue 1 2005
    F. Marechal
    Abstract This paper describes and illustrates the application of a methodology for thermo-economic design and optimisation of fuel cell systems. This methodology combines the use of process simulation and process integration techniques to compute thermo-economic performances of fuel cell systems that will be used in a multi-objective optimisation framework. The method allows the generation of integrated fuel cell system configurations and their corresponding optimal operating conditions. It should be used as a preliminary design methodology, allowing the identification of promising system configurations, which would be further analysed. The methodology and the thermo-economic models are described and demonstrated for the design of PEMFC hybrid systems, combining fuel cell and gas turbine technologies. [source]

    Reliability-based preform shape design in forging

    Jalaja Repalle
    Abstract A reliability-based optimization method is developed for preform shape design in forging. Forging is a plastic deformation process that transforms a simple workpiece into a predetermined complex shape through a number of intermediate shapes by the application of compressive forces. Traditionally, these intermediate shapes are designed in a deterministic manufacturing domain. In reality, various uncertainties exist in the forging environment, such as variations in process conditions, billet/die temperatures, and material properties. Randomness in these parameters can lead to variations in product quality and often induces heavy manufacturing losses. In this research, a robust preform design methodology is developed in which the various sources of randomness are quantified and incorporated through reliability analysis and uncertainty quantification techniques. The stochastic response surface approach is used to reduce computation time by establishing a relationship between process performance and the shape and random parameters. Finally, reliability-based optimization is utilized for the preform shape design of an engine component to improve product quality and robustness. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    Fully stressed frame structures unobtainable by conventional design methodology

    Keith M. Mueller
    Abstract A structure is said to be fully stressed if every member of the structure is stressed to its maximum allowable limit for at least one of the loading conditions. Fully stressed design is most commonly used for small and medium size frames where drift is not a primary concern. There are several potential methods available to the engineer to proportion a fully stressed frame structure. The most commonly used methods are those taught to all structural engineering students and are very easy to understand and to implement. These conventional methods are based on the intuitive idea that if a member is overstressed, it should be made larger. If a member is understressed, it can be made smaller, saving valuable material. It has been found that a large number of distinct fully stressed designs can exist for a single frame structure subjected to multiple loading conditions. This study will demonstrate that conventional methods are unable to converge to many, if not most, of these designs. These unobtainable designs are referred to as 'repellers' under the action of conventional methods. Other, more complicated methods can be used to locate these repelling fully stressed designs. For example, Newton's method can be used to solve a non-linear system of equations that defines the fully stressed state. However, Newton's method can be plagued by divergence and also by convergence to physically meaningless solutions. This study will propose a new fully stressed design technique that does not have these problems. Copyright © 2001 John Wiley & Sons, Ltd. [source]
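
    The "conventional method" referred to above is the stress-ratio iteration: scale each member area by its stress divided by the allowable stress. The toy model below (two parallel bars of equal length and modulus sharing a load by stiffness, all numbers hypothetical) also illustrates the non-uniqueness point, since different starting areas converge to different fully stressed designs:

```python
def stresses(areas, P):
    """Two parallel bars (equal length and modulus) sharing load P by
    stiffness: each bar carries P*A_i/(A1+A2), so every bar's stress
    is P / (A1 + A2) in this toy model."""
    total = sum(areas)
    return [P / total for _ in areas]

def stress_ratio_design(areas, P, sigma_allow, iters=20):
    """Conventional fully stressed design: scale each area by its
    stress ratio sigma/sigma_allow until stresses hit the limit."""
    for _ in range(iters):
        s = stresses(areas, P)
        areas = [a * si / sigma_allow for a, si in zip(areas, s)]
    return areas

P, sigma_allow = 100.0, 10.0
d1 = stress_ratio_design([1.0, 1.0], P, sigma_allow)
d2 = stress_ratio_design([3.0, 1.0], P, sigma_allow)
print("FSD from equal start:  ", d1)   # both bars at the stress limit
print("FSD from unequal start:", d2)   # a *different* fully stressed design
```

    Both results are fully stressed (every bar at the allowable stress), yet they are distinct designs, a small-scale version of the multiplicity the study examines.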

    Inverse optimal design of cooling conditions for continuous quenching processes

    Yimin Ruan
    Abstract This paper presents an inverse design methodology to obtain a required yield strength with an optimal cooling condition for the continuous quenching of precipitation hardenable sheet alloys. The yield strength of a precipitation hardenable alloy can be obtained by allowing solute to enter into solid solution at a proper temperature and rapidly cooling the alloy to hold the solute in the solid solution. An aging process may be needed for the alloy to develop the final mechanical property. The objective of the design is to optimize the quenching process so that the required yield strength can be achieved. With the inverse design method, the required yield strength is specified and the sheet thermal profile at the exit of the quenching chamber can also be specified. The conjugate gradient method is used to optimize the cooling boundary condition during quenching. The adjoint system is developed to compute the gradient of the objective functional. An aluminium sheet quenching problem is presented to demonstrate the inverse design method. Copyright © 2001 John Wiley & Sons, Ltd. [source]
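
    The paper optimizes a distributed cooling boundary condition with conjugate gradients and an adjoint system; a lumped-capacitance toy model makes the inverse-design idea concrete, since it can even be inverted in closed form. All constants below are hypothetical:

```python
from math import exp, log

# Lumped toy model of sheet cooling through a quench chamber:
# T_exit(h) = T_inf + (T0 - T_inf) * exp(-h * A * t_res / (m * cp))
T0, T_inf = 500.0, 25.0                  # inlet sheet / coolant temperature, degC
A, t_res, m, cp = 0.5, 10.0, 2.0, 900.0  # area, residence time, mass, heat capacity

c = A * t_res / (m * cp)

def t_exit(h):
    """Forward model: exit temperature for cooling coefficient h."""
    return T_inf + (T0 - T_inf) * exp(-c * h)

# Inverse design: choose h so the sheet leaves the chamber at T_target.
T_target = 60.0
h_star = -log((T_target - T_inf) / (T0 - T_inf)) / c
print(f"h = {h_star:.1f} W m^-2 K^-1 gives T_exit = {t_exit(h_star):.1f} degC")
```

    In the paper the design variable is a spatially varying boundary condition on a full thermal-metallurgical model, so no closed form exists and the gradient of the objective functional is computed via the adjoint system instead.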

    Robust diagnosis and fault-tolerant control of distributed processes over communication networks

    Sathyendra Ghantasala
    Abstract This paper develops a robust fault detection and isolation (FDI) and fault-tolerant control (FTC) structure for distributed processes modeled by nonlinear parabolic partial differential equations (PDEs) with control constraints, time-varying uncertain variables, and a finite number of sensors that transmit their data over a communication network. The network imposes limitations on the accuracy of the output measurements used for diagnosis and control purposes that need to be accounted for in the design methodology. To facilitate the controller synthesis and fault diagnosis tasks, a finite-dimensional system that captures the dominant dynamic modes of the PDE is initially derived and transformed into a form where each dominant mode is excited directly by only one actuator. A robustly stabilizing bounded output feedback controller is then designed for each dominant mode by combining a bounded Lyapunov-based robust state feedback controller with a state estimation scheme that relies on the available output measurements to provide estimates of the dominant modes. The controller synthesis procedure facilitates the derivation of: (1) an explicit characterization of the fault-free behavior of each mode in terms of a time-varying bound on the dissipation rate of the corresponding Lyapunov function, which accounts for the uncertainty and network-induced measurement errors and (2) an explicit characterization of the robust stability region where constraint satisfaction and robustness with respect to uncertainty and measurement errors are guaranteed. Using the fault-free Lyapunov dissipation bounds as thresholds for FDI, the detection and isolation of faults in a given actuator are accomplished by monitoring the evolution of the dominant modes within the stability region and declaring a fault when the threshold is breached. 
    The effects of network-induced measurement errors are mitigated by confining the FDI region to an appropriate subset of the stability region and enlarging the FDI residual thresholds appropriately. It is shown that these safeguards can be tightened or relaxed by proper selection of the sensor spatial configuration. Finally, the implementation of the networked FDI/FTC architecture on the infinite-dimensional system is discussed and the proposed methodology is demonstrated using a diffusion–reaction process example. Copyright © 2008 John Wiley & Sons, Ltd. [source]
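
    The core FDI mechanism, declaring a fault when a mode's Lyapunov function dissipates more slowly than its fault-free bound, can be sketched on a single scalar mode. The plant, gain, threshold and fault scenario below are all hypothetical:

```python
# Toy dominant mode of the distributed process: dx/dt = -a*x + u,
# with Lyapunov function V(x) = 0.5*x**2. All numbers are hypothetical.
a, k, eta = 1.0, 2.0, 4.0   # plant pole, feedback gain, dissipation threshold
dt, T = 0.01, 6.0
fault_time = 3.0            # actuator fails (u stuck at zero) at t = 3 s

x, t = 1.0, 0.0
detected_at = None
while t < T:
    u = 0.0 if t >= fault_time else -k * x
    dx = -a * x + u
    V = 0.5 * x * x
    dV = x * dx
    # Fault-free closed loop gives dV = -6V <= -eta*V; a breach of this
    # dissipation bound is declared a fault.
    if detected_at is None and dV > -eta * V + 1e-12:
        detected_at = t
    x += dt * dx
    t += dt

print(f"Fault injected at t = {fault_time}, detected at t ~ {detected_at:.2f}")
```

    The small additive constant in the threshold plays the role of the enlarged residual threshold mentioned above: it absorbs numerical (or, in the paper, network-induced) errors so the fault-free trajectory never triggers a false alarm.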

    Robust stabilization of a class of non-minimum-phase nonlinear systems in a generalized output feedback canonical form

    Jun Fu
    Abstract In this paper, a globally robust stabilizer for a class of uncertain non-minimum-phase nonlinear systems in generalized output feedback canonical form is designed. The system contains unknown parameters multiplied by output-dependent nonlinearities, and output-dependent nonlinearities enter the system both additively and multiplicatively. The proposed method relies on a recently developed parameter estimator and state observer design methodology, together with a combination of backstepping and the small-gain approach. Our design has three distinct features. First, the parameter estimator and state observer no longer need to follow the classical certainty-equivalence principle. Second, the design treats unknown parameters and unmeasured states in a unified way. Third, the technique of combining standard backstepping with the small-gain theorem ensures robustness with respect to dynamic uncertainties. Finally, two numerical examples are given, showing respectively that the proposed method is effective and that it can be applied to more general systems that do not satisfy the cascading upper diagonal dominance conditions developed in recent papers. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Fault diagnosis of a simulated industrial gas turbine via identification approach

    S. Simani
    Abstract In this paper, a model-based procedure exploiting the analytical redundancy principle for the detection and isolation of faults in a simulated process is presented. The main point of the work is the use of an identification scheme in connection with dynamic observer and Kalman filter designs for diagnostic purposes. The errors-in-variables identification technique and the output estimation approach for residual generation are particularly advantageous in terms of solution complexity and performance achievement. The proposed tools are analysed and tested on a single-shaft industrial gas turbine MATLAB/SIMULINK® simulator in the presence of disturbances, i.e. measurement errors and modelling mismatch. Selected performance criteria are used together with Monte Carlo simulations for robustness and performance evaluation. The suggested technique can constitute a reliable design methodology for real applications of industrial process FDI. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Issues, progress and new results in robust adaptive control

    Sajjad Fekri
    Abstract We review recent progress in the field of robust adaptive control with special emphasis on methodologies that use multiple-model architectures. We argue that the selection of the number of models, estimators and compensators in such architectures must be based on a precise definition of the robust performance requirements. We illustrate some of the concepts and outstanding issues by presenting a new methodology that blends robust non-adaptive mixed µ-synthesis designs and stochastic hypothesis-testing concepts, leading to the so-called robust multiple model adaptive control (RMMAC) architecture. A numerical example is used to illustrate the RMMAC design methodology, as well as its strengths and potential shortcomings. The latter motivated us to develop a variant architecture, denoted RMMAC/XI, that can be used effectively in highly uncertain exogenous plant-disturbance environments. Copyright © 2006 John Wiley & Sons, Ltd. [source]
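    The hypothesis-testing ingredient of a multiple-model architecture fits in a few lines: recursive Bayesian posteriors over a small candidate-model set, updated from Gaussian residual likelihoods. This is a toy static-gain plant, not the RMMAC µ-synthesis machinery; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = np.array([0.5, 1.0, 2.0])   # candidate model gains (illustrative)
theta_true, sigma = 1.0, 0.1         # true plant and residual noise level
posterior = np.full(len(thetas), 1 / len(thetas))

for _ in range(50):
    u = rng.normal()                       # excitation input
    y = theta_true * u + rng.normal(scale=sigma)
    resid = y - thetas * u                 # residual under each candidate model
    like = np.exp(-0.5 * (resid / sigma) ** 2)   # Gaussian likelihoods
    posterior = posterior * like
    posterior /= posterior.sum()           # recursive Bayes update

print(posterior)   # probability mass concentrates on the matching model
```

    In an RMMAC-style architecture, these posteriors would weight (or switch between) the compensators designed for each model's uncertainty subset.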

    Analytical comparison of reversed nested Miller frequency compensation techniques

    Alfio Dario Grasso
    Abstract In this paper, novel and previously proposed reversed nested Miller compensation (RNMC) networks are analyzed and compared, and their design equations are also presented. Hence, this paper is the natural extension of a previous paper by the authors (Int. J. Circ. Theor. Appl. 2008; 36(1):53–80), where only the nested Miller compensation topologies were treated. In particular, a coherent and comprehensive analytical comparison of the RNMC topologies, including two new networks presented for the first time, is performed by means of the figure of merit that expresses a trade-off among gain-bandwidth product, load capacitance and total transconductance, for equal values of phase margin (Int. J. Circ. Theor. Appl. 2008; 36(1):53–80). The analysis shows that there is no unique optimal solution among the RNMC topologies, as this depends on the load condition as well as on the relative transconductance magnitude of each amplifier stage. From this point of view, the proposed comparison also outlines useful design guidelines for the optimization of large-signal and small-signal performance. Simulations confirming the effectiveness of the proposed design methodology and analytical comparison are also included. Copyright © 2009 John Wiley & Sons, Ltd. [source]
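    The figure of merit trades gain-bandwidth product against the load capacitance driven and the total transconductance spent. One plausible normalization is FOM = GBW·C_L/g_mT; this exact form is an assumption here (check the cited paper for the precise definition), and the two compensation networks compared below are hypothetical.

```python
def fom(gbw_hz: float, c_load_f: float, g_m_total_s: float) -> float:
    """Assumed figure of merit FOM = GBW * C_L / g_mT: higher means more
    bandwidth per unit of load capacitance driven and transconductance
    (roughly, power) spent, at equal phase margin."""
    return gbw_hz * c_load_f / g_m_total_s

# Two hypothetical RNMC variants compared at the same phase margin
fom_a = fom(gbw_hz=10e6, c_load_f=10e-12, g_m_total_s=1.0e-3)
fom_b = fom(gbw_hz=12e6, c_load_f=10e-12, g_m_total_s=1.5e-3)
print(fom_a, fom_b)  # 0.1 vs 0.08: variant A is more efficient here
```

    Note how variant B's higher raw bandwidth does not win once the extra transconductance is charged against it, which is exactly the "no unique optimal topology" point the abstract makes.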

    Designing globally optimal delta-sigma modulator topologies via signomial programming

    Yuen-Hong Alvin Ho
    Abstract We present a design methodology for globally optimizing the topologies of delta-sigma modulators (DSMs). Previous work cast the design task as a general non-convex, nonlinear programming problem, whereas we propose to recast it as a signomial programming problem. Convexification strategies are presented for transforming the signomial programming problem into its equivalent convex counterpart, thereby enabling globally optimal design parameters to be obtained. Circuit non-idealities that affect the transfer function of the modulator can also be included in the formulation without affecting the computational efficiency. The proposed framework has been applied to topology synthesis problems of single-loop and multi-loop low-pass DSMs based on discrete-time circuitry. Numerical results confirm the effectiveness of the proposed approach over conventional nonlinear programming techniques. Copyright © 2008 John Wiley & Sons, Ltd. [source]
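    The convexification step can be shown on the smallest possible example (a toy, not the paper's signomial DSM formulation): f(x) = √x + 1/x is not convex over all of x > 0, but the geometric-programming substitution x = e^t yields g(t) = e^(t/2) + e^(−t), which is convex, so plain gradient descent finds the global optimum.

```python
import math

# Posynomial f(x) = sqrt(x) + 1/x is non-convex for x > 4, but the
# log-change-of-variables x = exp(t) gives the convex g(t) = e^{t/2} + e^{-t}.
def g_prime(t: float) -> float:
    return 0.5 * math.exp(t / 2) - math.exp(-t)

t, lr = 0.0, 0.2
for _ in range(500):
    t -= lr * g_prime(t)       # gradient descent on the convex surrogate

x_star = math.exp(t)                       # map back to the original variable
f_star = math.sqrt(x_star) + 1 / x_star
print(x_star, f_star)  # analytic optimum: x* = 2**(2/3) ≈ 1.5874
```

    The same transformation underlies geometric programming generally; signomial programs (which allow negative coefficients) need the additional strategies the abstract refers to, since the log-transform alone no longer guarantees convexity.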

    Function-in-layout: a demonstration with bio-inspired hyperacuity chip

    András Mozsáry
    Abstract Below 100 nm, a new scenario is emerging in VLSI design: floorplanning and function are inherently interrelated. By using mainly local connectivity, wire delay and crosstalk problems are eliminated. A new design methodology, called function-in-layout, is proposed that possesses a regular layout, mainly local connectivity, and functional 'parasitics'. A bio-inspired demonstration is presented: a hyperacuity chip with 30 ps time-difference detection using 0.35 µm complementary metal-oxide-semiconductor (CMOS) technology. Copyright © 2006 John Wiley & Sons, Ltd. [source]
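    Functionally, the hyperacuity demo is a time-difference-of-arrival measurement. This software sketch reproduces the same estimate by cross-correlating two pulses sampled on a 1 ps grid; it is a pure signal-processing toy, unrelated to the chip's analog circuitry, and all waveform parameters are made up.

```python
import numpy as np

dt_ps = 1.0                      # 1 ps sampling grid (illustrative)
t = np.arange(0, 1000, dt_ps)    # 1 ns observation window
delay_ps = 30.0                  # the time difference to recover

def pulse(t0):
    return np.exp(-((t - t0) / 50.0) ** 2)   # 50 ps Gaussian pulse

a, b = pulse(400.0), pulse(400.0 + delay_ps)
xc = np.correlate(b, a, mode="full")
lag = np.argmax(xc) - (len(a) - 1)   # lag of the correlation peak, in samples
print(lag * dt_ps)   # recovers the 30 ps time difference
```

    The chip performs the analogous discrimination directly in analog hardware, which is what makes the sub-gate-delay (30 ps) resolution notable.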

    Robust design of communication systems: The case of expedited forwarding of voice traffic in differentiated services networks

    Hyoup-Sang Yoon
    Abstract Design of experiments (DOE) has been gaining acceptance in the telecommunications research community, especially during the past several years. In this paper, a state-of-the-art review of the use of DOE in the field of communication networks is presented, and the need to introduce a systematic robust design methodology into network simulations or testbed experiments is identified, so as to ensure robust behaviour of a network against uncontrollable sources of variation. Then the Taguchi robust design methodology is applied to optimize the expedited forwarding (EF) of voice traffic in a differentiated services network, and its step-by-step procedure is described in detail. The experimental data are collected using the ns-2 simulator, and the SN ratio, a robustness measure, is analysed to determine an optimal design condition for each performance characteristic. The analysis shows that 'type of queue scheduling scheme' is the major control factor for ensuring robust behaviour of one-way delay and jitter, while 'EF queue size' is the major factor for throughput and loss rate. Finally, a compromise optimal design condition is identified using a desirability function approach adapted to multi-characteristic robust design problems. Copyright © 2007 John Wiley & Sons, Ltd. [source]
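    The SN ratios in the Taguchi step are standard formulas: smaller-the-better for delay, jitter and loss rate, and larger-the-better for throughput. This toy computes the smaller-the-better SN ratio for one-way delay under two hypothetical scheduling schemes and keeps the more robust one; the delay replicates are made up, not taken from the paper's ns-2 runs.

```python
import math

def sn_smaller_the_better(samples):
    """Taguchi SN ratio for a smaller-the-better characteristic (dB):
    SN = -10*log10(mean(y^2)); a higher SN means lower AND less variable y."""
    return -10 * math.log10(sum(y * y for y in samples) / len(samples))

# One-way delay replicates (ms) under two scheduling schemes (hypothetical)
delay = {"scheme_A": [10.0, 12.0, 11.0], "scheme_B": [5.0, 30.0, 10.0]}
sn = {lvl: sn_smaller_the_better(ys) for lvl, ys in delay.items()}
best = max(sn, key=sn.get)
print(sn, "->", best)  # A's delays are lower and steadier, so A wins
```

    Note that scheme B is occasionally faster but far more variable, and the squared term in the SN ratio penalizes that variability, which is precisely the robustness notion the methodology targets.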

    Layered view of QoS issues in IP-based mobile wireless networks

    Haowei Bai
    Abstract With the convergence of wireless communication and IP-based networking technologies, future IP-based wireless networks are expected to support real-time multimedia. IP services over wireless networks (e.g. wireless access to Internet) enhance the mobility and flexibility of traditional IP network users. Wireless networks extend the current IP service infrastructure to a mix of transmission media, bandwidth, costs, coverage, and service agreements, requiring enhancements to the IP protocol layers in wireless networks. Furthermore, QoS provisioning is required at various layers of the IP protocol stack to guarantee different types of service requests, giving rise to issues related to cross-layer design methodology. This paper reviews issues and prevailing solutions to performance enhancements and QoS provisioning for IP services over mobile wireless networks from a layered view. Copyright © 2006 John Wiley & Sons, Ltd. [source]