Network Performance


Selected Abstracts


This article offers insights into the complexity of assessing the performance of public networks. We identify three so-called exogenous factors: the form of the network, the type of inception (whether the network was initially formed as voluntary or mandated), and the developmental stage of the network. We argue that where a network stands on each of these factors determines which criteria are appropriate for assessing the performance of the network. [source]

Incrementalism before the Storm: Network Performance for the Evacuation of New Orleans

John J. Kiefer
Hurricane Katrina revealed a lack of preparedness in disaster management networks covering the New Orleans area. This paper focuses on the operation of networks in preparing to evacuate residents in advance of a major disaster. There are two cases: the relatively successful evacuation of residents who left by private conveyance and the widely publicized failure to provide for those who could not or would not leave on their own. We trace the actions and inactions of various players to reach conclusions about the strengths and weaknesses of networks in the special circumstances of disaster preparation. [source]

A Multiobjective and Stochastic System for Building Maintenance Management

Z. Lounis
Building maintenance management involves decision making under multiple objectives and uncertainty, in addition to budgetary constraints. This article presents the development of a multiobjective and stochastic optimization system for maintenance management of roofing systems that integrates stochastic condition-assessment and performance-prediction models with a multiobjective optimization approach. The maintenance optimization includes determination of the optimal allocation of funds and prioritization of roofs for maintenance, repair, and replacement that simultaneously satisfy the following conflicting objectives: (1) minimization of maintenance and repair costs, (2) maximization of network performance, and (3) minimization of risk of failure. A product model of the roof system provides the framework for collecting and processing data. Compromise programming is used to solve this multiobjective optimization problem and provides building managers with an effective decision support system that identifies the optimal projects for repair and replacement while achieving a satisfactory tradeoff between the conflicting objectives. [source]
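Compromise programming, as used in the abstract above, ranks alternatives by their distance from an ideal point in normalized objective space. The sketch below is a minimal illustration of that idea; the plan names, objective values, and weights are invented for illustration and are not from the article.

```python
# Hedged sketch of compromise programming for ranking maintenance plans.
# Objective values (cost, condition index, failure risk) are hypothetical.

def compromise_score(objectives, ideal, worst, weights, p=2):
    """L_p distance of a plan's normalized objectives from the ideal point."""
    total = 0.0
    for f, f_best, f_worst, w in zip(objectives, ideal, worst, weights):
        # Normalize so 0 = ideal, 1 = worst; works for both min- and
        # max-type objectives provided ideal/worst are set accordingly.
        total += (w * (f - f_best) / (f_worst - f_best)) ** p
    return total ** (1.0 / p)

# Candidate roof plans: (cost in k$, network condition index, failure risk)
plans = {
    "repair_A": (120.0, 0.85, 0.10),
    "replace_B": (300.0, 0.95, 0.02),
    "defer": (20.0, 0.60, 0.30),
}
ideal = (20.0, 0.95, 0.02)    # best observed value per objective
worst = (300.0, 0.60, 0.30)   # worst observed value per objective
weights = (1.0, 1.0, 1.0)

best_plan = min(plans, key=lambda k: compromise_score(plans[k], ideal, worst, weights))
print(best_plan)  # → repair_A (the best compromise among the three)
```

The plan closest to the ideal point wins, which is how the tradeoff among the three conflicting objectives is resolved without collapsing them into a single cost figure.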

Generalized window advertising for TCP congestion control

Mario Gerla
Congestion in the Internet is a major cause of network performance degradation. The Generalized Window Advertising (GWA) scheme proposed in this paper is a new approach for enhancing the congestion control properties of TCP. GWA requires only minor modifications to the existing protocol stack and is completely backward compatible, allowing GWA hosts to interact with non-GWA hosts without modifications. GWA exploits the notion of end-host-network cooperation, with the congestion level notified from the network to end hosts. It is based on solid control theory results that guarantee performance and stable network operation. GWA is able to avoid window oscillations and the related fluctuations in offered load and network performance. This makes it more robust to sustained network overload due to a large number of connections competing for the same bottleneck, a situation where traditional TCP implementations fail to provide satisfactory performance. GWA-TCP is compared with traditional TCP, TCP with RED, and ECN using the ns-2 simulator. Results show that in most cases GWA-TCP outperforms the traditional schemes. In particular, when compared with ECN, it provides smoother network operation and increased fairness. [source]
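The core GWA idea described above is that the advertised receive window also reflects a congestion level fed back from the network, so senders throttle before loss occurs. The scaling rule below is invented for illustration; the paper derives its update from control theory, not from this linear cap.

```python
# Hedged sketch of congestion-aware window advertising: cap the advertised
# window by a network-reported congestion level. The linear scaling is a
# hypothetical stand-in for the paper's control-theoretic rule.

def advertised_window(receiver_buffer, congestion_level, mss=1460):
    """congestion_level in [0, 1]: 0 = idle network, 1 = fully congested."""
    cap = int(receiver_buffer * (1.0 - congestion_level))
    return max(cap, mss)  # never advertise less than one segment

print(advertised_window(65535, 0.0))   # uncongested: full buffer advertised
print(advertised_window(65535, 0.75))  # congested: window cut to a quarter
```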

Clustering-based scheduling: A new class of scheduling algorithms for single-hop lightwave networks

Sophia G. Petridou
Abstract In wavelength division multiplexing (WDM) star networks, the construction of the transmission schedule is a key issue, which essentially affects the network performance. Up to now, classic scheduling techniques consider the nodes' requests in a sequential service order. However, these approaches are static and do not take into account the individual traffic pattern of each node. Owing to this major drawback, they suffer from low performance, especially when operating under asymmetric traffic. In this paper, a new class of scheduling algorithms for WDM star networks, which is based on the use of clustering techniques, is introduced. According to the proposed Clustering-Based Scheduling Algorithm (CBSA), the network's nodes are organized into clusters, based on the number of their requests per channel. Then, their transmission priority is defined, beginning with the nodes belonging to clusters with higher demands and ending with the nodes of clusters with fewer requests. The main objective of the proposed scheme is to minimize the length of the schedule by rearranging the nodes' service order. Furthermore, the proposed CBSA scheme adopts a prediction mechanism to minimize the computational complexity of the scheduling algorithm. Extensive simulation results are presented, which clearly indicate that the proposed approach leads to significantly better throughput-delay performance when compared with conventional scheduling algorithms. We believe that the proposed clustering-based approach can form the basis of a new generation of high-performance scheduling algorithms for WDM star networks. Copyright © 2008 John Wiley & Sons, Ltd. [source]
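The service-ordering step described above (cluster nodes by demand, serve heavy clusters first) can be sketched as follows. The 1-D clustering here is simple equal-width bucketing, a hypothetical stand-in for whatever clustering technique the paper actually uses.

```python
# Hedged sketch of CBSA-style service ordering: bucket nodes into demand
# clusters, then serve heavier clusters first, heavier nodes first within
# each cluster. Bucketing method and demand values are illustrative.

def cluster_service_order(requests, n_clusters=3):
    """requests: dict node -> total requests per channel.
    Returns nodes ordered heaviest-cluster-first, by demand within a cluster."""
    lo, hi = min(requests.values()), max(requests.values())
    width = (hi - lo) / n_clusters or 1  # avoid zero width if all equal
    def cluster_of(r):
        # Highest-demand cluster gets the highest index.
        return min(int((r - lo) / width), n_clusters - 1)
    return sorted(requests, key=lambda n: (-cluster_of(requests[n]), -requests[n]))

demand = {"n1": 2, "n2": 14, "n3": 9, "n4": 1, "n5": 13}
print(cluster_service_order(demand))  # → ['n2', 'n5', 'n3', 'n1', 'n4']
```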

Competitive flow control in general multi-node multi-link communication networks

Ismet Sahin
Abstract In this paper, we consider the flow control in a general multi-node multi-link communication network with competing users. Each user has a source node, a destination node, and an existing route for its data flow over any set of links in the network from its source to its destination node. The flow rate for each user is a control variable that is determined by optimizing a user-specific utility function which combines maximizing the flow rate and minimizing the network congestion for that user. A preference parameter in the utility function allows each user to adjust the trade-off between these two objectives. Since all users share the same network resources and are only interested in optimizing their own utility functions, the Nash equilibrium of game theory represents a reasonable solution concept for this multi-user general network. The existence and uniqueness of such an equilibrium is therefore very important for the network to admit an enforceable flow configuration. In this paper, we derive an expression for the Nash equilibrium and prove its uniqueness. We illustrate the results with an example and discuss some properties and observations related to the network performance when in the Nash equilibrium. Copyright © 2007 John Wiley & Sons, Ltd. [source]
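A Nash equilibrium of the kind described above can be found numerically by best-response iteration: each user in turn maximizes its own utility given the others' current flows, until no one wants to deviate. The utility form below, log(rate) minus a preference-weighted quadratic congestion cost, is an invented stand-in for the paper's utility, not the authors' exact function.

```python
# Hedged sketch of best-response iteration toward a Nash equilibrium in a
# flow-rate game. Utility u_i = log(x_i) - a_i * (total flow)^2 is a
# hypothetical example of a rate-vs-congestion tradeoff.
import math

def best_response(others_flow, a):
    # Maximize log(x) - a*(x + others)^2 over x > 0:
    # first-order condition 1/x = 2a(x + others), a quadratic in x.
    return (-others_flow + math.sqrt(others_flow**2 + 2.0 / a)) / 2.0

def nash_flows(prefs, iters=200):
    x = [1.0] * len(prefs)
    for _ in range(iters):
        for i, a in enumerate(prefs):
            x[i] = best_response(sum(x) - x[i], a)
    return x

flows = nash_flows([0.5, 1.0, 2.0])  # larger a = more congestion-averse
print([round(f, 3) for f in flows])  # congestion-averse users send less
```

At the fixed point, each rate is simultaneously a best response to the others, which is exactly the enforceability property the abstract argues for.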

On parameter estimation of a simple real-time flow aggregation model

Huirong Fu
Abstract There exists a clear need for a comprehensive framework for accurately analysing and realistically modelling the key traffic statistics that determine network performance. Recently, a novel traffic model, sinusoid with uniform noise (SUN), has been proposed, which outperforms other models in that it can simultaneously achieve tractability, parsimony, accuracy (in predicting network performance), and efficiency (in real-time capability). In this paper, we design, evaluate and compare several estimation approaches, including variance-based estimation (Var), minimum mean-square-error-based estimation (MMSE), MMSE with the constraint of variance (Var+MMSE), MMSE of the autocorrelation function with the constraint of variance (Var+AutoCor+MMSE), and variance of secondary demand-based estimation (Secondary Variance), for determining the key parameters in the SUN model. Integrated with the SUN model, all the proposed methods are able to capture the basic behaviour of the aggregation reservation system and closely approximate the system performance. In addition, we find that: (1) Var is very simple to operate and provides both upper and lower performance bounds. It can be integrated into other methods to provide a very accurate approximation to the aggregation's performance and thus obtain an accurate solution; (2) Var+AutoCor+MMSE is the most accurate of the proposed methods in determining system performance; and (3) Var+MMSE and Var+AutoCor+MMSE differ from the other three methods in that both adopt an experimental analysis method, which helps to improve prediction accuracy while reducing computation complexity. Copyright © 2005 John Wiley & Sons, Ltd. [source]
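A variance-based estimate for a sinusoid-with-uniform-noise signal can exploit the standard decomposition Var = A²/2 + b²/12 (sinusoid variance plus uniform-noise variance on [-b, b]). Treating this as the paper's Var method is an assumption; the sketch only shows the flavour of the approach.

```python
# Hedged sketch of a variance-based ("Var") amplitude estimate for a
# sinusoid-with-uniform-noise signal, assuming the noise halfwidth is known.
import math
import random

def estimate_amplitude(samples, noise_halfwidth):
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    # Subtract the uniform-noise variance b^2/12, then invert Var = A^2/2.
    sine_var = max(var - noise_halfwidth**2 / 12.0, 0.0)
    return math.sqrt(2.0 * sine_var)

random.seed(7)
A, b, period = 3.0, 0.5, 50.0
signal = [A * math.sin(2 * math.pi * t / period) + random.uniform(-b, b)
          for t in range(10000)]
print(round(estimate_amplitude(signal, b), 2))  # close to the true A = 3.0
```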

Modelling of wireless TCP for short-lived flows

Sangheon Pack
Abstract The transmission control protocol (TCP) is one of the most important Internet protocols. It provides reliable transport services between two end-hosts. Since TCP performance affects overall network performance, many studies have been done to model TCP performance in the steady state. However, recent research has shown that most TCP flows are short-lived. Therefore, it is more meaningful to model TCP performance in relation to the initial stage of short-lived flows. In addition, the next-generation Internet will be a unified all-IP network that integrates both wireless and wired networks. In short, modelling short-lived TCP flows in wireless networks constitutes an important axis of research. In this paper, we propose simple wireless TCP models for short-lived flows that extend the existing analytical model proposed in [IEEE Commun. Lett. 2002; 6(2):85-88]. In terms of wireless TCP, we categorize wireless TCP schemes into three types: end-to-end scheme, split connection scheme, and local retransmission scheme, which is similar to the classification proposed in [IEEE/ACM Trans. Networking 1997; 756-769]. To validate the proposed models, we performed ns-2 simulations. The average differences between the session completion time calculated using the proposed model and the simulation result for the three schemes are less than 9, 16, and 7 ms, respectively. Consequently, the proposed model provides a satisfactory means of modelling the TCP performance of short-lived wireless TCP flows. Copyright © 2005 John Wiley & Sons, Ltd. [source]
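For short-lived flows, completion time is dominated by slow start rather than steady-state throughput. The sketch below is the generic textbook slow-start latency model (one RTT per round, window doubling each round), not the paper's wireless-specific extension; the initial window value is an assumption.

```python
# Hedged sketch of a slow-start latency model for a short-lived TCP flow:
# completion time ~ handshake + one RTT per slow-start round.
import math

def completion_time(num_segments, rtt, init_window=2):
    # After r rounds, slow start has sent init_window * (2^r - 1) segments,
    # so we need the smallest r with init_window * (2^r - 1) >= num_segments.
    rounds = math.ceil(math.log2(num_segments / init_window + 1))
    handshake = rtt  # SYN / SYN-ACK exchange before data flows
    return handshake + rounds * rtt

print(completion_time(num_segments=30, rtt=0.1))  # → 0.5 (seconds)
```

A 30-segment transfer needs 4 doubling rounds at a 100 ms RTT, so the handshake RTT, not bandwidth, dominates, which is why steady-state models mis-predict short flows.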

Non-inferior Nash strategies for routing control in parallel-link communication networks

Yong Liu
Abstract We consider a routing control problem of two-node parallel-link communication network shared by competitive teams of users. Each team has various types of entities (traffics or jobs) to be routed on the network. The users in each team cooperate for the benefit of their team so as to achieve optimal routing over network links. The teams, on the other hand, compete among themselves for the network resources and each has an objective function that relates to the overall performance of the network. For each team, there is a centralized decision-maker, called the team leader or manager, who coordinates the routing strategies among all entities in his team. A game theoretic approach to deal with both cooperation within each team and competition among the teams, called the Non-inferior Nash strategy, is introduced. Considering the roles of a group manager in this context, the concept of a Non-inferior Nash strategy with a team leader is introduced. This multi-team solution provides a new framework for analysing hierarchically controlled systems so as to address complicated coordination problems among the various users. This strategy is applied to derive the optimal routing policies for all users in the network. It is shown that Non-inferior Nash strategies with a team leader is effective in improving the overall network performance. Various types of other strategies such as team optimization and Nash strategies are also discussed for the purpose of comparison. Copyright © 2005 John Wiley & Sons, Ltd. [source]

DRED: a random early detection algorithm for TCP/IP networks

James Aweya
Abstract It is now widely accepted that a RED [2] controlled queue performs better than a drop-tail queue. But an inherent weakness of RED is that its equilibrium queue length cannot be maintained at a preset value independent of the number of active TCP connections. In addition, RED's optimal parameter setting is largely correlated with the number of connections, the round-trip time, the buffer space, etc. In light of these observations, we propose DRED, a novel algorithm which uses the basic ideas of feedback control to randomly discard packets with a load-dependent probability when a buffer in a router gets congested. Over a wide range of load levels, DRED is able to stabilize a router's queue occupancy at a level independent of the number of active TCP connections. The benefits of stabilized queues in a network are high resource utilization, predictable maximum delays, more certain buffer provisioning, and traffic-load-independent network performance in terms of traffic intensity and number of connections. Copyright © 2002 John Wiley & Sons, Ltd. [source]
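The feedback-control idea above can be sketched as an incremental (integral) controller: filter the error between the current queue length and a target, and fold it into the drop probability each sampling interval. The update form follows the general description; the gains here are illustrative, and the exact constants in the paper may differ.

```python
# Hedged sketch of a DRED-style drop controller. Gains alpha/beta are
# illustrative, not the paper's values.

class DredController:
    def __init__(self, target, buffer_size, alpha=0.05, beta=0.002):
        self.target = target
        self.buffer_size = buffer_size
        self.alpha, self.beta = alpha, beta
        self.filtered_error = 0.0
        self.drop_prob = 0.0

    def update(self, queue_len):
        error = queue_len - self.target
        # Low-pass filter the error, then integrate it into the probability.
        self.filtered_error += self.beta * (error - self.filtered_error)
        self.drop_prob += self.alpha * self.filtered_error / self.buffer_size
        self.drop_prob = min(max(self.drop_prob, 0.0), 1.0)
        return self.drop_prob

ctrl = DredController(target=100, buffer_size=400)
for q in [150] * 50:          # sustained queue above target ...
    p = ctrl.update(q)
print(p > 0.0)                # ... drives the drop probability up
```

Because the controller reacts to queue error rather than to the connection count, the equilibrium queue settles near the target regardless of how many TCP flows share the link, which is the property the abstract claims for DRED.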

Blocking performance of fixed-paths least-congestion routing in multifibre WDM networks

Ling Li
Abstract Wavelength-routed all-optical networks have been receiving significant attention for high-capacity transport applications. Good routing and wavelength assignment (RWA) algorithms are critically important in order to improve the performance of wavelength-routed WDM networks. Multifibre WDM networks, in which each link consists of multiple fibres and each fibre carries information on multiple wavelengths, offer the advantage of reducing the effect of the wavelength continuity constraint without using wavelength converters. A wavelength that cannot continue on the next hop on the same fibre can be switched to another fibre using an optical cross-connect (OXC) if the same wavelength is free on one of the other fibres. However, the cost of a multifibre network is likely to be higher than that of a single-fibre network with the same capacity, because more amplifiers and multiplexers/demultiplexers may be required. The design goal of a multifibre network is to achieve high network performance with the minimum number of fibres. In this paper, we study the blocking performance of fixed-paths least-congestion (FPLC) routing in multifibre WDM networks. A new analytical model that takes link-load correlation into consideration is developed to evaluate the blocking performance of FPLC routing. The analytical model is a generalized model that can be used in both regular (e.g. mesh-torus) and irregular (e.g. NSFnet) networks. It is shown that the analytical results closely match the simulation results, which indicates that the model is adequate for analytically predicting the performance of FPLC routing in different networks. Two FPLC routing algorithms, wavelength trunk (WT)-based FPLC and lightpath (LP)-based FPLC, are developed and studied. Our analytical and simulation results show that the LP-based FPLC routing algorithm can use multiple fibres more efficiently than the WT-based FPLC and alternate path routing. In both the mesh-torus and NSFnet networks, a limited number of fibres is sufficient to guarantee high network performance. Copyright © 2002 John Wiley & Sons, Ltd. [source]
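The least-congestion selection step of FPLC can be sketched as follows: among a fixed set of candidate paths, pick the one whose most congested link has the most free wavelength channels. The data structures below (links as lists of fibres, fibres as lists of channel slots) are invented for illustration.

```python
# Hedged sketch of fixed-paths least-congestion (FPLC) path selection.
# Network representation is hypothetical; None marks an idle channel.

def free_channels(link, network):
    """Total idle wavelength channels on a link, over all its fibres."""
    return sum(fibre.count(None) for fibre in network[link])

def fplc_select(paths, network):
    # A path's bottleneck is its link with the fewest free channels.
    def bottleneck(path):
        return min(free_channels(link, network) for link in path)
    best = max(paths, key=bottleneck)
    return best if bottleneck(best) > 0 else None  # None = call blocked

# Two fibres per link, two wavelengths per fibre.
network = {
    "A-B": [[None, "w1"], [None, None]],  # 3 free channels
    "B-C": [["w0", "w1"], [None, "w1"]],  # 1 free channel (bottleneck)
    "A-D": [[None, None], [None, None]],  # 4 free channels
    "D-C": [[None, "w1"], [None, None]],  # 3 free channels
}
print(fplc_select([["A-B", "B-C"], ["A-D", "D-C"]], network))  # → ['A-D', 'D-C']
```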

An engineering approach to dynamic prediction of network performance from application logs

Zalal Uddin Mohammad Abusina
Network measurement traces contain information regarding network behavior over the period of observation. Research carried out in different contexts shows that predictions of network behavior can be made based on a network's past history. Existing works on network performance prediction use complicated stochastic modeling approaches that extrapolate past data to yield a rough estimate of long-term future network performance. However, prediction of network performance in the immediate future is still an unresolved problem. In this paper, we address network performance prediction as an engineering problem. The main contribution of this paper is to predict network performance dynamically for the immediate future. Our proposal also considers the practical implications of prediction. Therefore, instead of following the conventional approach of predicting one single value, we predict a range within which network performance may lie. This range is bounded by our two newly proposed indices, namely, the Optimistic Network Performance Index (ONPI) and the Robust Network Performance Index (RNPI). Experiments carried out using one-year-long traffic traces between several pairs of real-life networks validate the usefulness of our model. Copyright © 2005 John Wiley & Sons, Ltd. [source]
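Predicting a range rather than a single value can be sketched with an optimistic bound and a robust (conservative) bound taken from a sliding window of recent measurements. The percentile construction below is my stand-in for the paper's ONPI/RNPI definitions, which the abstract does not spell out.

```python
# Hedged sketch of range prediction from recent history: low/high
# percentiles of a sliding window of past delays stand in for the
# ONPI/RNPI indices. All values are illustrative.

def performance_range(history, window=20, optimistic_q=0.1, robust_q=0.9):
    recent = sorted(history[-window:])
    def quantile(q):
        idx = min(int(q * len(recent)), len(recent) - 1)
        return recent[idx]
    return quantile(optimistic_q), quantile(robust_q)  # (ONPI-like, RNPI-like)

delays_ms = [42, 40, 45, 41, 90, 43, 44, 39, 41, 46,
             42, 88, 40, 43, 41, 45, 42, 40, 44, 43]
low, high = performance_range(delays_ms)
print(low, high)  # the next interval's delay is expected to lie mostly within
```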

A reliable cooperative and distributed management for wireless industrial monitoring and control

Dr S. Manfredi
Abstract This paper is concerned with the analysis, design and validation of a reliable management strategy for industrial monitoring and control over wireless sensor networks (WSNs). First, we investigate the interactions between contention resolution and congestion control mechanisms in a Wireless Industrial Sensor Network (briefly, WISN). An extensive set of simulations is performed in order to quantify the impact of several network parameters (i.e. buffer size, sensor reporting rate) on the overall network performance (i.e. reliability, packet losses). This calls for cross-layer mechanisms for efficient data delivery over WISNs. Second, a reliable sink resource allocation strategy based on a log-utility fairness criterion is proposed. It is shown that the sink resource manager can plan strategies to better allocate the available resources among competing sensors. Finally, the analysis, design and validation of a reliable cooperative sink control scheme for WISNs are introduced. A sufficient condition for wireless network stability in the presence of multiple sinks and heterogeneous sensors with different time delays is given, and it is used for network parameter design. The stability condition and the resulting cooperative control performance in terms of fairness, link utilization, packet losses, reliability and latency are validated with the Matlab/Simulink-based simulator TrueTime, which facilitates co-simulation of controller task execution in real-time kernels and in the wireless network environment. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Integrating the scene length characteristics of MPEG video bitstreams into a direct broadcast satellite network with return channel system

Fatih Alagöz
Abstract In order to optimize network resources, we should incorporate all the available information into the network design. However, incorporating irrelevant information may increase the design complexity and/or decrease the performance of the network. In this paper, we investigate the relevance of integrating the scene length characteristics of moving pictures expert group (MPEG) coded video bitstreams into a direct broadcast satellite (DBS) network with return channel system (DVB-RCS). Due to the complexity of the studied system, it is hard to achieve a mathematical foundation for this integration unless disputable simplifications are made. Our analysis therefore relies on an extensive set of simulations. First, we obtain the scene length distributions for MPEG bitstreams based on the proposed scene change models and subjective observations of the actual video. We show that these models may be used to estimate the scene length of MPEG bitstreams. We then integrate this estimation into a DBS network simulator. Finally, we show that the scene length characteristics may be used to improve DBS network performance under certain conditions. Copyright © 2004 John Wiley & Sons, Ltd. [source]

A simulation-based reliability assessment approach for congested transit network

Yafeng Yin
This paper develops a generic simulation-based approach to assess transit service reliability, taking into account the interaction between network performance and passengers' route choice behaviour. Three types of reliability, namely system-wide travel time reliability, schedule reliability and direct boarding waiting-time reliability, are defined from the perspectives of the community or transit administration, the operator, and passengers. A Monte Carlo simulation approach with an embedded stochastic user equilibrium transit assignment model is proposed to quantify these three reliability measures of transit service. A simple transit network with a bus rapid transit (BRT) corridor is analysed as a case study, in which the impacts of BRT components on transit service reliability are evaluated preliminarily. [source]
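The Monte Carlo step above can be sketched as: draw random segment times, and report the fraction of simulated trips finishing within a threshold. The distributions, segment values, and threshold are illustrative; the paper additionally embeds a transit assignment model in each draw, which this sketch omits.

```python
# Hedged sketch of a Monte Carlo estimate of travel-time reliability.
# Uniform segment-time noise is a hypothetical stand-in for the paper's
# stochastic network model.
import random

def travel_time_reliability(segments, threshold, trials=20000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # Each segment: (mean minutes, spread minutes).
        total = sum(rng.uniform(m - s, m + s) for m, s in segments)
        ok += total <= threshold
    return ok / trials

segments = [(12, 3), (5, 2), (18, 6)]  # wait, ride, transfer + ride
print(round(travel_time_reliability(segments, threshold=40), 2))
```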

Cell phone roulette and "consumer interactive" quality

Peter Navarro
Under current policies, cell phone consumers face a lower probability of finding the best carrier for their usage patterns than winning at roulette. Corroborating survey data consistently show significant dissatisfaction among cell phone users, network performance is a major issue, and customer "churn" is high. This problem may be traced to a new form of "consumer interactive" quality characteristic of emergent high technology products such as cell phone and broadband services. This problem is unlikely to be resolved by effective search and sampling, efficient secondary markets, or voluntary carrier disclosure. Traditional one-dimensional disclosure responses to this new variation on an old asymmetric information problem should give way to a more multi-faceted and fine-grained policy approach. © 2005 by the Association for Public Policy Analysis and Management [source]

Multiple neural networks modeling techniques in process control: a review

Zainal Ahmad
Abstract This paper reviews new techniques to improve neural network model robustness for nonlinear process modeling and control. The focus is on multiple neural networks. Single neural networks have been dominating the neural network 'world'. Despite many advantages reported in the literature, problems that can deteriorate neural network performance, such as lack of generalization, have continued to trouble researchers. Driven by this, the neural network 'world' has evolved toward better representations of the modeled functions that lead to better generalization, sweeping away many of the glitches that have shadowed neural network applications. This evolution has led to a new approach to applying neural networks, called multiple neural networks. Recently, multiple neural networks have been broadly used in numerous applications since their performance in representing nonlinear systems is often better than that of single neural networks. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
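The central multiple-network idea, combining several imperfect models of the same function so their individual errors partially cancel, can be sketched as a weighted average of model outputs. The "networks" below are toy stand-in functions, not trained neural networks, and the weights are the simplest uniform choice.

```python
# Hedged sketch of combining multiple models by output averaging, the
# basic aggregation used in multiple-neural-network schemes.

def ensemble(models, weights=None):
    weights = weights or [1.0 / len(models)] * len(models)
    def combined(x):
        return sum(w * m(x) for m, w in zip(models, weights))
    return combined

# Three biased approximations of the target f(x) = x**2.
models = [lambda x: x**2 + 0.3, lambda x: x**2 - 0.2, lambda x: 0.9 * x**2]
f_hat = ensemble(models)
errors_single = [abs(m(2.0) - 4.0) for m in models]
error_combined = abs(f_hat(2.0) - 4.0)
print(error_combined < min(errors_single))  # → True: biases partly cancel
```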

Improved network performance via antagonism: From synthetic rescues to multi-drug combinations

BIOESSAYS, Issue 3 2010
Adilson E. Motter
Abstract Recent research shows that a faulty or sub-optimally operating metabolic network can often be rescued by the targeted removal of enzyme-coding genes: the exact opposite of what traditional gene therapy would suggest. Predictions go as far as to assert that certain gene knockouts can restore the growth of otherwise nonviable gene-deficient cells. Many questions follow from this discovery: What are the underlying mechanisms? How generalizable is this effect? What are the potential applications? Here, I approach these questions from the perspective of compensatory perturbations on networks. Relations are drawn between such synthetic rescues and naturally occurring cascades of reaction inactivation, as well as their analogs in physical and other biological networks. In particular, I discuss how rescue interactions can lead to the rational design of antagonistic drug combinations that select against resistance and how they can illuminate medical research on cancer, antibiotics, and metabolic diseases. Editor's suggested further reading in BioEssays: "The evolutionary context of robust and redundant cell biological mechanisms" and "Reprogramming cell fates: reconciling rarity with robustness". [source]


J.H. Ligtenberg
The Lower Eocene El Garia Formation forms the reservoir rock at the Ashtart oilfield, offshore Tunisia. It comprises a thick package of mainly nummulitic packstones and grainstones with variable reservoir quality. Although porosity is moderate to high, permeability is often poor to fair, with some high-permeability streaks. The aim of this study was to establish relationships between log-derived data and core data, and to apply these relationships in a predictive sense to uncored intervals. An initial objective was to predict from measured logs and core data the limestone depositional texture (as indicated by the Dunham classification), as well as porosity and permeability. A total of nine wells with complete logging suites, multiple cored intervals with core plug measurements, and detailed core interpretations were available. We used a fully-connected Multi-Layer-Perceptron network (a type of neural network) to establish possible non-linear relationships. Detailed analyses revealed that no relationship exists between log response and limestone texture (Dunham class). The initial idea to predict Dunham class, and subsequently to use the classification results to predict permeability, could not therefore be pursued. However, further analyses revealed that it was feasible to predict permeability without using the depositional fabric, using a combination of wireline logs and measured core porosity. Careful preparation of the training set for the neural network proved to be very important. Early experiments showed that low to fair permeability (1-35 mD) could be predicted with confidence, but that the network failed to predict the high-permeability streaks. "Balancing" the data set solved this problem. Balancing is a technique in which the training set is increased by adding more examples to the under-sampled part of the data space. Examples are created by random selection from the training set, and white noise is added. After balancing, the neural network's performance improved significantly. Testing the neural network on two wells indicated that this method is capable of predicting the entire range of permeability with confidence. [source]
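The balancing step described above can be sketched as: oversample the rare high-permeability examples and jitter each copy with white noise, so the network sees the under-represented region more often. The thresholds, noise level, and data values below are illustrative, not from the study.

```python
# Hedged sketch of "balancing" a training set by oversampling rare
# examples with added white noise. All numbers are hypothetical.
import random

def balance(samples, is_rare, target_count, noise_sd=0.05, seed=0):
    rng = random.Random(seed)
    rare = [s for s in samples if is_rare(s)]
    balanced = list(samples)
    while sum(is_rare(s) for s in balanced) < target_count:
        porosity, perm = rng.choice(rare)
        # Jitter each resampled copy; scale the permeability noise to value.
        balanced.append((porosity + rng.gauss(0, noise_sd),
                         perm + rng.gauss(0, noise_sd * perm)))
    return balanced

# (porosity fraction, permeability mD); high-perm streaks are under-sampled.
data = [(0.20, 5.0), (0.22, 12.0), (0.25, 30.0), (0.30, 250.0)]
augmented = balance(data, is_rare=lambda s: s[1] > 35.0, target_count=3)
print(len(augmented))  # original 4 plus 2 synthetic high-perm examples
```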