Performance Requirements

Selected Abstracts


Investigating static and dynamic characteristics of electromechanical actuators (EMA) with MATLAB GUIs

COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2010
Gursel Sefkat
Abstract This paper deals with the design of an electromechanical device to meet prescribed performance requirements; static and dynamic analyses of the device are carried out. In studying the transient response of such a system, as part of the dynamic analysis, two methods are most commonly used: the finite element method (FEM) and the finite difference method (FDM). These methods, however, require considerable CPU time. In this work, a computer simulation program is developed for an EMA. The technique is implemented in the MATLAB-Simulink environment and tested on different design tasks, such as electromagnetic valves and electromechanical brakes. Furthermore, using the GUIDE tools within MATLAB, a simple and user-friendly GUI is developed to provide a visual approach to the design and analysis process. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 383–396, 2010; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20279 [source]
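
The transient response computation such a simulator performs can be sketched with a lumped-parameter solenoid model. The paper works in MATLAB-Simulink; the following Python version is only an illustrative analogue, with all parameter values invented for the sketch.

```python
# Illustrative lumped-parameter EMA transient model (a Python analogue of the
# kind of model the paper builds in Simulink; every parameter value is hypothetical).
import numpy as np
from scipy.integrate import solve_ivp

R, m, k, c = 10.0, 0.05, 400.0, 2.0   # coil resistance, plunger mass, spring, damping
L0, L1, a = 0.05, 0.20, 2e-3          # inductance model: L(x) = L0 + L1 / (1 + x/a)
V, x0 = 24.0, 2e-3                    # step voltage (V); initial air gap (m)

L = lambda x: L0 + L1 / (1.0 + x / a)
dLdx = lambda x: -L1 / (a * (1.0 + x / a) ** 2)

def ema(t, s):
    i, x, v = s                                    # current, gap, gap velocity
    di = (V - R * i - i * dLdx(x) * v) / L(x)      # winding circuit equation
    F_mag = 0.5 * i ** 2 * dLdx(x)                 # reluctance force (closes the gap)
    dv = (F_mag - k * (x - x0) - c * v) / m        # Newton's law for the plunger
    return [di, v, dv]

hit = lambda t, s: s[1]                # stop when the plunger reaches the pole face
hit.terminal = True
sol = solve_ivp(ema, (0.0, 0.05), [0.0, x0, 0.0], events=hit, max_step=1e-4)
print(f"closing time ≈ {sol.t[-1] * 1e3:.1f} ms at current {sol.y[0, -1]:.2f} A")
```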


A large-scale monitoring and measurement campaign for web services-based applications

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2010
Riadh Ben Halima
Abstract Web Services (WS) can be considered the most influential enabling technology for the next generation of web applications. WS-based application providers face challenging features related to nonfunctional properties in general and to performance and QoS in particular. Moreover, WS-based developers have to extend such applications with the self-healing (SH) mechanisms required for autonomic computing, in order to cope with the complexity of interactions and to improve availability. Such solutions should be applicable whether the components implementing SH mechanisms are deployed on the WS provider side, the requester side, or both, depending on the deployment constraints. Associating application-specific performance requirements with monitoring-specific constraints leads to complex configurations in which fine tuning is needed to provide SH solutions. To help improve the design and assessment of such solutions for WS technology, we designed and implemented a monitoring and measurement framework, part of a larger Self-Healing Architecture (SHA) developed during the European WS-DIAMOND project. We implemented the Conference Management System (CMS), a realistic and complex WS-based application, and carried out a large-scale experimental campaign by deploying CMS on top of SHA on the French grid Grid'5000, approaching the problem as a service provider who has to tune reconfiguration strategies would. Our results are available on the web in a structured database for external use by the WS community. Copyright © 2010 John Wiley & Sons, Ltd. [source]
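
The per-invocation probes at the core of such a framework are simple to picture. The sketch below is a generic requester-side latency monitor, not the WS-DIAMOND API; the operation name and the stand-in service call are hypothetical.

```python
# Generic requester-side monitoring probe: wrap each web-service invocation and
# accumulate latency samples for later analysis (names are hypothetical).
import time, statistics
from functools import wraps

measurements = {}                          # operation name -> list of latencies (s)

def monitored(op_name):
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                measurements.setdefault(op_name, []).append(time.perf_counter() - t0)
        return wrapper
    return deco

@monitored("submit_paper")                 # e.g. one operation of a CMS-like service
def submit_paper(paper):
    time.sleep(0.01)                       # stand-in for the real SOAP/HTTP round trip
    return "ok"

for _ in range(100):
    submit_paper({"title": "t"})
lat = measurements["submit_paper"]
print(f"mean {statistics.mean(lat)*1e3:.1f} ms, max {max(lat)*1e3:.1f} ms")
```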


Measuring and modelling the performance of a parallel ODMG compliant object database server

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2006
Sandra de F. Mendes Sampaio
Abstract Object database management systems (ODBMSs) are now established as the database management technology of choice for a range of challenging data-intensive applications. The applications associated with object databases typically have stringent performance requirements, and some involve very large data sets. An important factor in the performance of object databases is the speed at which relationships can be explored; in queries, this depends on the effectiveness of the join algorithms into which relationship-following queries can be compiled. This paper presents a performance evaluation of the Polar parallel object database system, focusing in particular on the performance of parallel join algorithms. Polar is a parallel, shared-nothing implementation of the Object Database Management Group (ODMG) standard for object databases. The paper presents an empirical evaluation of queries expressed in the ODMG Object Query Language (OQL), as well as a cost model for the parallel algebra used to evaluate OQL queries. The cost model is validated against the empirical results for a collection of queries using four different join algorithms: one value-based and three pointer-based. Copyright © 2005 John Wiley & Sons, Ltd. [source]
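
The distinction between the two join families the cost model covers is easy to show in miniature. The sketch below is a schematic contrast on a toy schema, not Polar's implementation.

```python
# Value-based hash join vs. pointer-based join over object identifiers (OIDs);
# toy single-process illustration of the two families compared in the paper.
depts = {oid: {"oid": oid, "budget": oid * 10} for oid in range(5)}  # OID -> object
emps = [{"name": f"e{i}", "dept_oid": i % 5} for i in range(10)]

# Value-based: build a hash table on the join key, then probe it for each employee.
by_key = {}
for d in depts.values():
    by_key.setdefault(d["oid"], []).append(d)
value_join = [(e["name"], d["budget"])
              for e in emps for d in by_key.get(e["dept_oid"], [])]

# Pointer-based: follow the stored OID directly; no build phase, but every probe
# is a dereference that may be remote on a shared-nothing parallel store.
pointer_join = [(e["name"], depts[e["dept_oid"]]["budget"]) for e in emps]

assert value_join == pointer_join          # same result, different cost profile
```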


Dimensioning of data networks: a flow-level perspective

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2009
Pasi Lassila
Traditional network dimensioning formulations have applied the Erlang model, in which connections reserve capacity in the network. Until recently, tractable stochastic network models in which connections share the capacity did not exist. The latter are becoming increasingly important, as they can characterise file transfers in current data networks (e.g. IP networks); in particular, they can be used to dimension networks with respect to file transfer performance. To this end, we consider a model where the traffic consists of elastic flows (i.e. file transfers). Flows arrive randomly and share the network resources, resulting in stochastically varying transmission rates. Our contribution is to develop efficient methods for capacity planning that meet performance requirements expressed in terms of the average transmission rate of flows on a given route, i.e. the per-flow throughput. These methods are validated using ns2 simulations. We also discuss the effects of access-rate limitations and how to combine elastic traffic requirements with those of real-time traffic. Finally, we outline how the methods can be applied in wireless mesh networks. Our results enable a simple characterisation of the order of magnitude of the required capacities, which can be used as a first step in practical network planning and dimensioning. Copyright © 2008 John Wiley & Sons, Ltd. [source]
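
For a single bottleneck link, the flavour of such dimensioning rules is easy to state: under an M/G/1 processor-sharing model the per-flow throughput is C(1 − ρ), so the capacity that meets a throughput target follows directly. The sketch below works that single-link case; the paper itself handles whole networks, access-rate limits and mixed traffic.

```python
# Single-link flow-level dimensioning: with Poisson flow arrivals and processor
# sharing, per-flow throughput = C * (1 - rho) = C - lambda * E[F].
lam = 50.0        # flow arrival rate (flows/s)
mean_size = 1.0   # mean file size (Mbit)
target = 10.0     # required per-flow throughput (Mbit/s)

offered = lam * mean_size       # offered traffic (Mbit/s)
C = offered + target            # smallest capacity meeting the target
rho = offered / C
print(f"required capacity C = {C:.0f} Mbit/s (load rho = {rho:.2f})")
print(f"check: per-flow throughput = {C * (1 - rho):.1f} Mbit/s")
```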


Delay analysis of a probabilistic priority discipline

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2002
Yuming Jiang
In computer networks, the Strict Priority (SP) discipline is perhaps the most common and simplest method of scheduling packets from different classes of applications, each with diverse performance requirements. Under this discipline, however, packets at higher priority levels can starve packets at lower priority levels. To resolve this starvation problem, we propose assigning a parameter to each priority queue in the SP discipline; the parameter determines the probability, or extent, with which its queue is served when polled by the server. We thus form a new packet service discipline, referred to as the Probabilistic Priority (PP) discipline. By properly adjusting the assigned parameters, not only can the performance requirements of higher-priority classes be satisfied, but the performance of lower-priority classes can also be improved. This paper analyzes the delay performance of the PP discipline. A decomposition approach is proposed for calculating the average waiting times, and bounds on them are studied. Two approximation approaches are proposed to estimate the waiting times. Simulation results that validate the numerical analysis are presented and examined. A numerical example demonstrates the use of the PP discipline to achieve service differentiation and shows how the assigned parameters can be determined from the results of the analysis. [source]
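
The scheduling rule itself is compact enough to simulate. The toy slotted simulation below uses assumptions of mine, far simpler than the paper's analytical model; it shows how the parameter p_i trades delay between classes, and with p = [1.0, 1.0] it degenerates to Strict Priority.

```python
# Toy slotted simulation of the Probabilistic Priority (PP) idea: when queue i is
# polled and non-empty, it is served with probability p[i]; otherwise the server
# moves on to the next queue. At most one service per slot.
import random
from collections import deque

random.seed(1)
p = [0.8, 1.0]                   # PP parameters: the high class yields 20% of polls
arr = [0.30, 0.25]               # per-slot arrival probability for each class
queues = [deque(), deque()]
waits = [[], []]

for t in range(200_000):
    for c in range(2):
        if random.random() < arr[c]:
            queues[c].append(t)  # record the arrival slot
    for c in range(2):           # poll queues in priority order
        if queues[c] and random.random() < p[c]:
            waits[c].append(t - queues[c].popleft())
            break                # the slot is consumed by this service

for c in range(2):
    print(f"class {c}: mean wait = {sum(waits[c]) / len(waits[c]):.1f} slots")
```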


Issues, progress and new results in robust adaptive control

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 10 2006
Sajjad Fekri
Abstract We overview recent progress in the field of robust adaptive control, with special emphasis on methodologies that use multiple-model architectures. We argue that the selection of the number of models, estimators and compensators in such architectures must be based on a precise definition of the robust performance requirements. We illustrate some of the concepts and outstanding issues by presenting a new methodology that blends robust non-adaptive mixed µ-synthesis designs with stochastic hypothesis-testing concepts, leading to the so-called robust multiple-model adaptive control (RMMAC) architecture. A numerical example illustrates the RMMAC design methodology, as well as its strengths and potential shortcomings. The latter motivated us to develop a variant architecture, denoted RMMAC/XI, that can be used effectively in highly uncertain exogenous plant-disturbance environments. Copyright © 2006 John Wiley & Sons, Ltd. [source]
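
At the core of any such multiple-model scheme sits a recursive hypothesis test over a bank of plant models. The sketch below shows a generic MMAE-style posterior update with probability-weighted blending of local compensator commands; it illustrates the idea only, not the RMMAC design itself.

```python
# Generic multiple-model hypothesis testing: each model in the bank scores the
# current residual, posteriors are updated by Bayes' rule, and the local
# compensators' commands are blended by posterior probability.
import numpy as np

def update_posteriors(post, residuals, variances):
    """One Bayes step: likelihood of each model's residual under N(0, variance)."""
    like = np.exp(-0.5 * residuals**2 / variances) / np.sqrt(2 * np.pi * variances)
    post = post * like
    return post / post.sum()

post = np.array([0.5, 0.5])              # two plant hypotheses, equal priors
var = np.array([1.0, 1.0])
rng = np.random.default_rng(0)
for _ in range(50):
    r = rng.normal([0.0, 1.5], 1.0)      # model 0 matches the true plant; model 1 is biased
    post = update_posteriors(post, r, var)

u_local = np.array([0.7, -0.3])          # commands from the two local compensators
u = float(post @ u_local)                # probability-weighted control blend
print(post.round(3), round(u, 3))
```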


Neural bandwidth allocation function (NBAF) control scheme at WiMAX MAC layer interface

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2007
Mario Marchese
Abstract The paper proposes a bandwidth allocation scheme to be applied at the interface between the upper layers (IP, in this paper) and the Medium Access Control (MAC) layer of the IEEE 802.16 protocol stack. The aim is to tune resource allocation optimally to match objective QoS (Quality of Service) requirements. Traffic flows characterized by different performance requirements at the IP layer are conveyed to the IEEE 802.16 MAC layer, which must then provide enough bandwidth for each flow to receive its requested QoS. The proposed control algorithm is based on real measurements processed by a neural network and is studied within the framework of optimal bandwidth allocation and Call Admission Control in the presence of statistically heterogeneous flows. Specific implementation details show how the control algorithm can be applied using the existing features of the 802.16 request–grant protocol acting at the MAC layer. The performance evaluation reported in the paper shows the quick reaction of the bandwidth allocation scheme to traffic variations and the advantage it provides in the number of accepted calls. Copyright © 2006 John Wiley & Sons, Ltd. [source]
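
The measurement-driven mapping at the centre of the scheme can be caricatured with a small regressor: recent traffic measurements in, bandwidth to grant out. Everything below is synthetic; the features, the target function and the training data are invented for illustration, and the paper's network and training procedure differ.

```python
# Caricature of a measurement-based neural bandwidth allocator: learn a mapping
# from measured traffic statistics to the bandwidth needed to hold the target QoS.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Features per window: [mean rate (Mbit/s), peak rate (Mbit/s), loss ratio].
X = rng.uniform([1.0, 2.0, 0.0], [10.0, 30.0, 0.05], size=(2000, 3))
# Synthetic ground truth for "bandwidth needed"; unknown in a real system.
y = 1.1 * X[:, 0] + 0.3 * (X[:, 1] - X[:, 0]) + 40.0 * X[:, 2]

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
window = np.array([[6.0, 18.0, 0.01]])   # latest measurements for one flow
print(f"grant {net.predict(window)[0]:.2f} Mbit/s at the MAC layer")
```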


An adaptive load balancing scheme for web servers

INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 1 2002
Dr. James Aweya
This paper describes an overload control scheme for web servers that integrates admission control and load balancing. The admission control mechanism adaptively determines the client request acceptance rate needed to meet the web servers' performance requirements, while the load balancing (client request distribution) mechanism determines the fraction of requests to be assigned to each web server. The scheme requires no prior knowledge of the relative speeds of the web servers, nor of the work required to process each incoming request. Copyright © 2002 John Wiley & Sons, Ltd. [source]
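
The two coupled feedback loops can be caricatured in a few lines. The control laws below are invented stand-ins, chosen only to show the structure: adapt the acceptance probability toward a response-time target, and split admitted traffic according to measured server speed.

```python
# Toy coupled admission-control + load-balancing loop (the control laws are
# illustrative stand-ins, not the paper's scheme).
rates = [80.0, 40.0]                 # true server rates (req/s), unknown a priori
target, accept_p = 0.5, 1.0          # response-time target (s); acceptance prob.
weights = [0.5, 0.5]                 # request distribution across the two servers

def measured_response(load, w):      # stand-in for per-server measurements
    return [1.0 / max(r - load * wi, 1e-3) for r, wi in zip(rates, w)]

for _ in range(60):
    load = accept_p * 100.0          # offered load: 100 req/s
    resp = measured_response(load, weights)
    avg = sum(r * w for r, w in zip(resp, weights))
    accept_p = min(1.0, max(0.1, accept_p + 0.05 * (target - avg)))
    inv = [1.0 / r for r in resp]    # lower response time -> larger share
    new = [v / sum(inv) for v in inv]
    weights = [0.8 * w + 0.2 * n for w, n in zip(weights, new)]  # damped update

print(f"accept_p = {accept_p:.2f}, weights = {[round(w, 2) for w in weights]}")
```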


Quality of service for satellite IP networks: a survey

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4-5 2003
Sastri Kota
Abstract Future media-rich applications such as media streaming, content delivery and broadband access require a network infrastructure that offers greater bandwidth and service-level guarantees. As the demand for new applications increases, 'best effort' service is inadequate and leaves users unsatisfied. End-to-end quality of service (QoS) requires the functional co-operation of all network layers. To meet future application requirements, satellites are an excellent candidate owing to features such as global coverage, bandwidth flexibility, broadcast, multicast and reliability. At each layer, the user performance requirements should be achieved through efficient bandwidth allocation algorithms and techniques for mitigating satellite link impairments. In this paper, a QoS framework for satellite IP networks, including requirements, objectives and mechanisms, is described. To give a full end-to-end picture, the QoS parameters and current research at each layer are surveyed: the physical layer (modulation, adaptive coding), link layer (bandwidth allocation), network layer (IntServ/DiffServ, MPLS traffic engineering) and transport layer (TCP enhancements and alternative transport protocols), together with security issues. Some planned system examples, QoS simulations and experimental results are provided. The paper also covers the current status of satellite IP standardization in the ETSI, ITU and IETF organizations. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Effects of chitosan solution concentration and incorporation of chitin and glycerol on dense chitosan membrane properties

JOURNAL OF BIOMEDICAL MATERIALS RESEARCH, Issue 2 2007
Paula Rulf Marreco Dallan
Abstract The aim of this work was to perform a systematic study of the effects of chitosan solution concentration and of chitin or glycerol incorporation on dense chitosan membranes with potential use as burn dressings. The membrane properties analyzed were total raw-material cost, thickness, morphology, swelling ratio, tensile strength, percentage of strain at break, crystallinity, in vitro enzymatic degradation with lysozyme, and in vitro Vero cell adhesion. While the use of the most concentrated chitosan solution (2.5% w/w) increased membrane cost, it also improved the biomaterial's mechanical resistance and ductility and reduced membrane degradation over 2 months of exposure to lysozyme. The remaining properties were not affected by the initial chitosan solution concentration. Chitin incorporation, on the other hand, reduced the membranes' cost, swelling ratio, mechanical properties and crystallinity, resulting in thicker biomaterials with irregular surfaces that degraded more easily when exposed to lysozyme. Glycerol incorporation also reduced the membranes' cost and crystallinity and increased their degradability after exposure to lysozyme. Strong Vero cell adhesion was not observed for any of the tested membrane formulations. The overall results indicate that the majority of the prepared membranes meet the performance requirements of temporary nonbiodegradable burn dressings (e.g. adequate mechanical resistance and ductility, low in vitro cell adhesion on their surfaces, limited degradation when exposed to lysozyme solution, and high stability in aqueous solutions). © 2006 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 2007 [source]


Telerobotic systems design based on real-time CORBA

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 4 2005
Michele Amoretti
A new class of telerobotic applications is making its way into research laboratories, fine arts and science museums, and industrial installations. Virtual laboratories and remote equipment maintenance are examples of these applications, which are built by exploiting distributed computing systems and Internet technologies. Distributed computing technologies provide several advantages to telerobotic applications, such as dynamic and multiuser access to remote resources and arbitrary user locations. Nonetheless, building these applications remains a substantial endeavor, especially when performance requirements must be met. The aim of this paper is to investigate how mainstream and advanced features of the CORBA object-oriented middleware can be put to work to meet the requirements of novel telerobotic applications. We show that the Real-Time CORBA extensions and asynchronous method invocation of CORBA services can be relied upon to meet performance and functional requirements, thereby enabling teleoperation on local area networks. Furthermore, the CORBA services for concurrency control and large-scale data distribution enable geographic-scale access for robot teleprogramming. Limitations in the currently available implementations of the CORBA standard are also discussed, along with their implications. The effectiveness and suitability of several CORBA mechanisms for telerobotic applications are tested, first individually and then by means of a software framework that exploits CORBA services and supports component-based development, software reuse, low development cost, and fully portable real-time and communication facilities. A comprehensive telerobotic application built on the framework is described and evaluated on both local and wide area networks. The application includes a robot manipulator and several sensory subsystems under concurrent access by multiple competing or collaborating operators, one of whom is equipped with a multimodal user interface acting as the master device. © 2005 Wiley Periodicals, Inc. [source]
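
The asynchronous method invocation (AMI) pattern the paper leans on is worth seeing in miniature. CORBA AMI itself lives in the ORB; the sketch below is only a language-level analogue using Python futures, with a hypothetical robot operation standing in for a remote CORBA servant.

```python
# Language-level analogue of CORBA asynchronous method invocation: the master
# issues non-blocking calls and handles replies in callbacks (names hypothetical).
from concurrent.futures import ThreadPoolExecutor
import time

def move_joint(joint, angle):        # stand-in for a remote robot servant method
    time.sleep(0.05)                 # network plus actuation latency
    return f"joint {joint} -> {angle} deg"

def on_reply(fut):                   # AMI-style reply handler
    print("callback:", fut.result())

with ThreadPoolExecutor(max_workers=4) as pool:
    for j, a in [(1, 30), (2, -15), (3, 90)]:
        pool.submit(move_joint, j, a).add_done_callback(on_reply)
    print("master keeps processing sensor data while replies arrive")
```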


T-AKE: Acquiring the Environmentally Sound Ship of the 21st Century

NAVAL ENGINEERS JOURNAL, Issue 3 2006
Cdr. Stephen P. Markle USN (Ret.) P.E.
Department of Defense (DoD) program managers are increasingly challenged to balance the risks associated with cost, schedule, and performance in an era of intense competition for increasingly scarce resources. Environmental, safety, and health (ESH) requirements, in the context of thirty- to forty-year service lives, have not been consistently, or in some cases adequately, addressed in DoD programs. Environmental protection (EP) requirements generally do not fit the requirements-generation and product-synthesis model typically applied to weapon system development. As with all requirements, early identification is the key to integration into the total system. Recognition that EP requirements must be integrated at program conception led the U.S. Navy Lewis and Clark (T-AKE) Auxiliary Cargo Ammunition Ship Program to develop the ESH Integration Model. Institutionalizing this model has enabled the T-AKE Program to establish EP performance requirements for the twelve-ship class that substantially reduce the Navy's environmental footprint. Compared to the fifteen ships it will replace, T-AKE will require fifty percent less manning and reduce waste streams by seventy percent, enabling annual life-cycle savings of $5M in ashore waste disposal costs. The T-AKE Program is the first to achieve the Chief of Naval Operations' vision of the "Environmentally Sound Warship of the 21st Century" through design integration of EP requirements. [source]


Warfighter Needs in the 21st Century: Linking Fleet Operations to Required Capabilities

NAVAL ENGINEERS JOURNAL, Issue 4 2000
Capt. V.A. Myer USNR (Ret.)
ABSTRACT What the warfighter needs is not what he is getting, in terms of responsiveness to the emerging threat, interoperability among systems, and systems readiness and training. This disconnect between Fleet operations and the acquisition requirements process is becoming more pronounced as systems grow larger and more complex and as warfighting becomes more joint. Knowing what the warfighter wants, and how he envisions using it in a concept of operations, is fundamental to the requirements process. The source of this information is the commander in chief's (CINC's) operations plan (OPLAN), which contains the concept of operations (CONOPS) for each warfighting theater. It is critical that the CONOPS be used as the basis for determining performance requirements, because it carries the military judgment, context, and authority of the theater CINC. The defunct Arsenal Ship program, which was rightly vetoed by the theater CINCs because it would not meet their warfighting needs at acceptable risk, is a recent example of the mismatch between what is being asked for and what is being provided. [source]


Preparation of carbon nanofibres through electrospinning and thermal treatment

POLYMER INTERNATIONAL, Issue 12 2009
Cheng-Kun Liu
Abstract Electrospinning is a versatile process for obtaining continuous carbon nanofibres at low cost. Thermoplastic and thermosetting polymer precursors are used to prepare electrospun carbon nanofibres; activated carbon nanofibres, through chemical and/or physical activation; and functionalized composite carbon nanofibres, by surface coating or by electrospinning a precursor solution tailored with nanomaterials. Many promising applications of electrospun carbon nanofibres can be expected if appropriate microstructural, mechanical and electrical properties become available. This article provides an in-depth review of research activities regarding the several varieties and performance requirements of precursor nanofibres, polyacrylonitrile-based carbon nanofibres and their functionalized products, and carbon nanofibres from other precursors. Copyright © 2009 Society of Chemical Industry [source]


Multi-resolution analysis, entropic information and the performance of atmospheric sounding radiometers

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 569 2000
G. E. Peckham
Abstract The performance of remote-sounding radiometers that measure properties of the Earth's atmosphere is analysed through a multi-resolution wavelet transform. This technique allows the uncertainty in retrieved atmospheric profiles to be determined as a function of both altitude and the scale of patterns in the profile. Multi-resolution analysis may be applied to a number of indicators of measurement quality, including entropic information. Apportioning performance indicators to specific altitude ranges and pattern scales facilitates comparison with performance requirements. The analysis is illustrated through a simple model of a remote-sounding radiometer and by application to the Infrared Atmospheric Sounding Interferometer. [source]
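
The decomposition step can be sketched with PyWavelets on a synthetic retrieval-error profile; the mean squared coefficient in each band apportions error energy to a scale. The paper's radiometer model and the IASI application are, of course, far more detailed.

```python
# Multi-resolution attribution of retrieval error to pattern scales: decompose a
# synthetic retrieved-minus-true profile and report per-band energy densities.
import numpy as np
import pywt

rng = np.random.default_rng(0)
err = rng.normal(0.0, 1.0, 64)                       # fine-scale retrieval noise
err += 2.0 * np.sin(np.linspace(0, 2 * np.pi, 64))   # plus a broad, large-scale error

coeffs = pywt.wavedec(err, "db2", level=4)           # [approx, detail_4, ..., detail_1]
labels = ["approx L4"] + [f"detail L{lev}" for lev in range(4, 0, -1)]
for name, c in zip(labels, coeffs):
    print(f"{name:>9}: band energy density {np.mean(c**2):.2f} K^2")
```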


Modeling and Robust PI Control of a Fluidized Bed Combustor for Sewage Sludge

ASIAN JOURNAL OF CONTROL, Issue 4 2002
Yingmin Jia
ABSTRACT Based on experimental data, a fluidized bed combustor is modeled as an interval system. Severe model uncertainty and large time delays are the main difficulties in solving the control problem. The design uses the stability test of closed-loop systems as the main guideline for developing a robust PI controller. In particular, a new formula for computing the maximal magnitude of an edge rational function at a fixed frequency is derived, which provides a way to handle the time delay by transforming it into multiplicative uncertainty. Both the theoretical and the experimental results show that the designed PI controller satisfies the desired performance requirements. [source]
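
The quantity for which the paper derives a closed-form formula can be approximated by brute force, which makes the object concrete: fix the frequency, walk along one edge of the coefficient box of an interval plant, and track the largest magnitude. The plant below is invented for illustration.

```python
# Brute-force stand-in for the paper's formula: maximal |G(jw)| of an interval
# plant at a fixed frequency, scanned over one coefficient edge of the box.
import numpy as np

w = 2.0                                   # fixed frequency (rad/s)
s = 1j * w
num = [1.0]                               # fixed numerator
den_lo = [1.0, 0.8, 2.0]                  # denominator s^2 + a1*s + a0, with
den_hi = [1.0, 1.2, 3.0]                  # a1 in [0.8, 1.2] and a0 in [2, 3]

def mag(den):                             # |G(jw)| for one denominator choice
    return abs(np.polyval(num, s) / np.polyval(den, s))

best = 0.0
for t in np.linspace(0.0, 1.0, 2001):     # walk one edge: vary a1, fix a0 = 2.0
    den = [1.0, den_lo[1] + t * (den_hi[1] - den_lo[1]), den_lo[2]]
    best = max(best, mag(den))
print(f"max |G(j{w:g})| on this edge = {best:.3f}")
```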


Monitoring infrastructure for converged networks and services

BELL LABS TECHNICAL JOURNAL, Issue 2 2007
Shipra Agrawal
Network convergence is enabling service providers to deploy a wide range of services, such as Voice over Internet Protocol (VoIP), Internet Protocol television (IPTV), and push-to-talk, on the same underlying IP networks. Each service places unique performance requirements on the network, and IP networks were not designed to satisfy these diverse requirements easily. These requirements drive the need for a robust, scalable, and easy-to-use network management platform that enables service providers to monitor and manage their networks to provide the necessary quality, availability, and security. In this paper, we describe monitoring mechanisms that give service providers critical information on the performance of their networks at a per-user, per-service granularity in real time, allowing them to ensure that their networks adequately satisfy the requirements of the various services. We present various methods of acquiring data that can be analyzed to determine network performance. This platform enables service providers to offer carrier-grade services over their converged networks, giving their customers a high-quality experience. © 2007 Alcatel-Lucent. [source]
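
The per-user, per-service rollup such a platform maintains can be pictured with a toy aggregator over raw measurement records; the record format here is invented for illustration.

```python
# Toy real-time rollup: aggregate raw measurement records at (user, service)
# granularity, the view the monitoring platform exposes to the provider.
from collections import defaultdict

records = [                                # (user, service, latency_ms, lost)
    ("alice", "VoIP", 28, 0), ("alice", "VoIP", 95, 1),
    ("bob",   "IPTV", 310, 0), ("alice", "IPTV", 120, 0),
]

stats = defaultdict(lambda: {"n": 0, "lat": 0.0, "lost": 0})
for user, svc, lat, lost in records:
    s = stats[(user, svc)]
    s["n"] += 1
    s["lat"] += lat
    s["lost"] += lost

for (user, svc), s in sorted(stats.items()):
    print(f"{user}/{svc}: mean latency {s['lat'] / s['n']:.0f} ms, "
          f"loss {s['lost'] / s['n']:.0%} over {s['n']} samples")
```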