Protocol Stack


Selected Abstracts


Generalized window advertising for TCP congestion control,

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2002
Mario Gerla
Congestion in the Internet is a major cause of network performance degradation. The Generalized Window Advertising (GWA) scheme proposed in this paper is a new approach for enhancing the congestion control properties of TCP. GWA requires only minor modifications to the existing protocol stack and is completely backward compatible, allowing GWA hosts to interact with non-GWA hosts without modifications. GWA exploits the notion of end-host-network cooperation, with the congestion level notified from the network to the end hosts. It is based on solid control theory results that guarantee performance and stable network operation. GWA is able to avoid window oscillations and the related fluctuations in offered load and network performance. This makes it more robust to sustained network overload due to a large number of connections competing for the same bottleneck, a situation where traditional TCP implementations fail to provide satisfactory performance. GWA-TCP is compared with traditional TCP, TCP with RED and also ECN using the ns-2 simulator. Results show that in most cases GWA-TCP outperforms the traditional schemes. In particular, when compared with ECN, it provides smoother network operation and increased fairness. [source]
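
To make the mechanism concrete, here is a minimal receiver-side sketch in Python of how a generalized window advertisement might combine the ordinary flow-control window with a network-reported congestion level; the function name, the linear scaling rule and the congestion-level range are illustrative assumptions, not the controller derived in the paper.

    # Hypothetical sketch of Generalized Window Advertising (GWA): the receiver
    # lowers the window it advertises to the sender in proportion to a congestion
    # level reported by the network, instead of advertising only its free buffer
    # space.  Names and the scaling rule are assumptions, not the paper's controller.

    def gwa_advertised_window(free_buffer_bytes, congestion_level, mss=1460):
        """congestion_level is assumed to lie in [0.0, 1.0], 0 = no congestion."""
        # Classic TCP would advertise the free receive buffer directly.
        flow_control_window = free_buffer_bytes
        # GWA-style generalization: shrink the advertisement as congestion rises,
        # but never below one segment so the connection keeps making progress.
        congestion_window = max(mss, int(free_buffer_bytes * (1.0 - congestion_level)))
        return min(flow_control_window, congestion_window)

    # Example: a half-congested bottleneck halves the advertised window.
    print(gwa_advertised_window(free_buffer_bytes=64 * 1024, congestion_level=0.5))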


Neural bandwidth allocation function (NBAF) control scheme at WiMAX MAC layer interface

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2007
Mario Marchese
Abstract The paper proposes a bandwidth allocation scheme to be applied at the interface between upper layers (IP, in this paper) and the Medium Access Control (MAC) layer over the IEEE 802.16 protocol stack. The aim is to optimally tune the resource allocation to match objective QoS (Quality of Service) requirements. Traffic flows characterized by different performance requirements at the IP layer are conveyed to the IEEE 802.16 MAC layer. This process leads to the need for providing the necessary bandwidth at the MAC layer so that each traffic flow can receive the requested QoS. The proposed control algorithm is based on real measurements processed by a neural network, and it is studied within the framework of optimal bandwidth allocation and Call Admission Control in the presence of statistically heterogeneous flows. Specific implementation details are provided to match the application of the control algorithm to the existing features of the 802.16 request/grant protocol acting at the MAC layer. The performance evaluation reported in the paper shows the quick reaction of the bandwidth allocation scheme to traffic variations and the advantage provided in the number of accepted calls. Copyright © 2006 John Wiley & Sons, Ltd. [source]
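
As a rough illustration of the measurement-driven idea (not the paper's NBAF architecture), the sketch below maps traffic statistics observed at the IP/MAC boundary through a tiny feed-forward network to a bandwidth amount that the 802.16 MAC would then request through its request/grant mechanism; all weights and scalings are placeholders that the real scheme would adapt online from measurements.

    # Illustrative sketch only: a small feed-forward network turns measured traffic
    # statistics taken at the IP/MAC boundary into a bandwidth amount to request
    # via the 802.16 request/grant mechanism.  Weights are placeholders; in the
    # actual scheme they would be learned online.

    import math

    def neural_bandwidth_estimate(mean_rate_kbps, loss_ratio, queue_delay_ms,
                                  w_hidden, w_out):
        inputs = [mean_rate_kbps / 1000.0, loss_ratio, queue_delay_ms / 100.0]
        hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
        # Output neuron: a scaling factor applied to the measured mean rate.
        scale = 1.0 + max(0.0, sum(w * h for w, h in zip(w_out, hidden)))
        return mean_rate_kbps * scale   # bandwidth to request at the MAC layer

    # Placeholder weights (two hidden neurons); real values would be learned.
    w_hidden = [[0.8, 2.0, 1.0], [0.5, 1.5, 0.7]]
    w_out = [0.3, 0.2]
    print(neural_bandwidth_estimate(500.0, 0.01, 20.0, w_hidden, w_out))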


Empirical evaluation of receiver-based TCP delay control in CDMA2000 networks

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2007
Oh-keun Kwon
Abstract Wide-area broadband wireless technologies such as CDMA2000 often suffer from variable transfer rates and long latency. In particular, TCP's window-based rate control causes excessive buffering at the base station because the transfer rate of the wireless link is lower than that of the wired backhaul link. This behaviour further increases the end-to-end delay and requires additional resources at the base station. This paper presents a practical mechanism to control the end-to-end TCP delay in CDMA2000 networks (or other similar wireless technologies). The key idea is to reduce and stabilize the RTT (round-trip time) by dynamically controlling the TCP advertised window size, based on a runtime measurement of the wireless channel condition at the mobile station. The proposed system has been implemented by modifying the Linux protocol stack. Experiments conducted on a commercial CDMA2000 1x network show that the proposed scheme greatly reduces the TCP delay in non-congested networks, while not sacrificing TCP throughput in congested networks. Copyright © 2006 John Wiley & Sons, Ltd. [source]
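
The receiver-side control can be pictured as keeping the advertised window close to the wireless link's bandwidth-delay product so that packets do not accumulate in the base-station buffer; the sketch below is a simplification with assumed constants, whereas the paper implements the control inside the Linux protocol stack on the mobile station.

    # Hedged sketch of the receiver-side idea: size the advertised window from the
    # measured wireless rate and a target RTT (a bandwidth-delay product), so the
    # sender cannot build a long queue at the base station.  Constants and the
    # measurement source are assumptions for illustration.

    def advertised_window_bytes(measured_rate_bps, target_rtt_s, mss=1460):
        bdp = measured_rate_bps / 8.0 * target_rtt_s      # bytes allowed in flight
        segments = max(2, int(bdp / mss) + 1)             # never starve the sender
        return segments * mss

    # Example: a 1.5 Mbit/s channel with a 200 ms RTT target.
    print(advertised_window_bytes(1_500_000, 0.2))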


Layered view of QoS issues in IP-based mobile wireless networks

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2 2006
Haowei Bai
Abstract With the convergence of wireless communication and IP-based networking technologies, future IP-based wireless networks are expected to support real-time multimedia. IP services over wireless networks (e.g. wireless access to Internet) enhance the mobility and flexibility of traditional IP network users. Wireless networks extend the current IP service infrastructure to a mix of transmission media, bandwidth, costs, coverage, and service agreements, requiring enhancements to the IP protocol layers in wireless networks. Furthermore, QoS provisioning is required at various layers of the IP protocol stack to guarantee different types of service requests, giving rise to issues related to cross-layer design methodology. This paper reviews issues and prevailing solutions to performance enhancements and QoS provisioning for IP services over mobile wireless networks from a layered view. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Cross-layer protocol optimization for satellite communications networks: a survey

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 5 2006
Giovanni Giambene
Abstract Satellite links are expected to be one important component of the next-generation Internet. New satellite system architectures are being envisaged to be fully IP based and to support digital video broadcasting and return channel protocols (e.g. DVB-S, DVB-S2 and DVB-RCS). To make the upcoming satellite network systems fully realizable, meeting new service and application requirements, a complete system optimization is needed, spanning the different layers of the OSI and TCP/IP protocol stacks. This paper deals with the cross-layer approach to be adopted in novel satellite systems and architectures. Different cross-layer techniques are discussed, addressing the interactions among application, transport, MAC and physical layers. The impact of these techniques is investigated, and numerical examples dealing with the joint optimization of different transport control schemes and lower layers are considered with reference to a geostationary-based architecture. Our aim is to show that the interaction of different layers can improve higher-layer goodput as well as user satisfaction. Copyright © 2006 John Wiley & Sons, Ltd. [source]
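
A toy example of the kind of cross-layer interaction surveyed here (our own illustration, not taken from the paper): the physical layer exposes the capacity of the currently selected DVB-S2-style MODCOD, and the transport layer uses it to bound its sending rate and window over the long-delay GEO link.

    # Minimal cross-layer hook, for illustration only: lower layers report the
    # current MODCOD capacity, and the transport layer caps its rate and sizes its
    # window for the geostationary round-trip time.  All values are assumptions.

    GEO_RTT_S = 0.55  # round trip over a GEO link is roughly half a second

    def transport_rate_cap_bps(modcod_capacity_bps, utilization=0.9):
        # Leave headroom so lower-layer retransmissions and signalling still fit.
        return modcod_capacity_bps * utilization

    def max_window_bytes(modcod_capacity_bps):
        # Window needed to fill the satellite pipe at the capped rate.
        return int(transport_rate_cap_bps(modcod_capacity_bps) / 8.0 * GEO_RTT_S)

    print(max_window_bytes(10_000_000))   # e.g. a 10 Mbit/s MODCOD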


Fighting fire with fire: using randomized gossip to combat stochastic scalability limits

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 3 2002
Indranil Gupta
Abstract The mechanisms used to improve the reliability of distributed systems often limit performance and scalability. Focusing on one widely-used definition of reliability, we explore the origins of this phenomenon and conclude that it reflects a tradeoff arising deep within the typical protocol stack. Specifically, we suggest that protocol designs often disregard the high cost of infrequent events. When a distributed system is scaled, both the frequency and the overall cost of such events often grow with the size of the system. This triggers an O() phenomenon, which becomes visible above some threshold sizes. Our findings suggest that it would be more effective to construct large-scale reliable systems where, unlike traditional protocol stacks, lower layers use randomized mechanisms, with probabilistic guarantees, to overcome low-probability events. Reliability and other end-to-end properties are introduced closer to the application. We employ a back-of-the-envelope analysis to quantify this phenomenon for a class of strongly reliable multicast problems. We construct a non-traditional stack, as described above, that implements virtually synchronous multicast. Experimental results reveal that virtual synchrony over a non-traditional, probabilistic stack helps break through the scalability barrier faced by traditional implementations of the protocol. Copyright © 2002 John Wiley & Sons, Ltd. [source]
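
The lower-layer mechanism advocated here can be illustrated with a toy push-gossip simulation: each informed node relays the message to a few peers chosen uniformly at random every round, and with a small fan-out the message reaches essentially all nodes in O(log n) rounds with high probability. This generic sketch is not the virtually synchronous stack built in the paper.

    # Toy simulation of push-style randomized gossip with probabilistic delivery
    # guarantees.  Generic illustration of the technique, not the paper's stack.

    import random

    def gossip_rounds(n_nodes=1000, fanout=3, seed=1):
        random.seed(seed)
        informed = {0}                      # node 0 initially holds the multicast message
        rounds = 0
        while len(informed) < n_nodes:
            newly = set()
            for node in informed:
                for _ in range(fanout):
                    newly.add(random.randrange(n_nodes))
            informed |= newly
            rounds += 1
        return rounds

    print(gossip_rounds())                  # typically well under a dozen rounds for 1000 nodes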


Porting and performance aspects from IPv4 to IPv6: The case of OpenH323

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2005
Ch. Bouras
Abstract This paper is a summary of our experiences from a case study on porting applications to IPv6. We present the results of the effort to port OpenH323, an open-source H.323 platform, to IPv6, which we believe can serve as guidelines for other projects with similar goals. We briefly present the structure of the OpenH323 platform. We also discuss a number of issues arising during the porting of a platform to IPv6, such as which approach to the porting procedure is easiest, how compatibility with earlier, IPv4-only versions of the platform can be retained, whether there are useful tools for assisting the task, how and when one can be confident that the necessary modifications have been made, and which testing procedures should be followed. We then present a variety of experiments that we conducted in order to comparatively evaluate the IPv4 and IPv6 protocol stacks. We also present the results of some initial experiments comparing IPv4 and IPv6 performance over congested network links and the conclusions they led us to. Copyright © 2005 John Wiley & Sons, Ltd. [source]
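
One recurring porting lesson of this kind is to replace hard-coded AF_INET handling with address-family-agnostic lookups, so the same code path serves IPv4 and IPv6; the generic Python client below illustrates the pattern (OpenH323 itself is C++, so this is not code from the project).

    # Address-family-agnostic connect: getaddrinfo returns both A and AAAA
    # results, and we try each until one works, so the code runs unchanged over
    # IPv4-only, IPv6-only, or dual-stack hosts.  Generic illustration only.

    import socket

    def connect_any(host, port):
        last_err = None
        for family, socktype, proto, _canon, addr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(addr)
                return sock
            except OSError as err:
                last_err = err
        raise last_err or OSError("no usable address for %s" % host)

    # Example: works whether the name resolves to IPv4, IPv6, or both.
    # sock = connect_any("www.example.org", 80)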

