ABSTRACT
Asynchronous Transfer Mode (ATM) technology is the transfer mode for implementing a Broadband-Integrated Services Digital Network (B-ISDN). ATM defines multiplexing and switching techniques for broadband signals. Synchronous Optical Network (SONET) is employed in the B-ISDN backbone. The main objective of ATM is to guarantee Quality of Service (QoS) in the transfer of cell streams across networks. This technology, recommended as the transport vehicle for the B-ISDN, offers great flexibility in transmission bandwidth allocation to accommodate the diverse demands of multimedia connections. Dynamic Bandwidth Allocation (DBA) is a fundamental factor in network performance for bursty traffic on ATM-based networks. However, the fundamental problem in ATM networks is how to allocate bandwidth optimally, especially for unpredictable bursty traffic. This project develops an approach to deriving the bandwidth resource allocation in an ATM-based network. The main tool used for the simulation is Riverbed Modeler 2014 Academic Edition 17.5 (OPNET). The project developed a bandwidth allocation guide for N*64 kbps channels of Primary Rate Interface (PRI) T1 trunk lines for varying delay, loss and buffer occupancy. The allocation was developed for traffic intensity varying between 0 and 320 kbps, corresponding to five channels of 64 kbps each. Cell Loss Probability (CLP) standards between 0.0000875 and 0.002967 were considered. The Buffer Occupancy values were between 0.00000237KB and 0.0000117KB. Queuing Delay values ranging between 0.0000000501s and 0.000000103s and Queue Delay Deviation values between 0.0000000366s and 0.000000584s were also considered.
TABLE OF CONTENTS
Title page i
Approval Page ii
Certification Page iii
Declaration iv
Dedication v
Acknowledgement vi
Abstract vii
List of Abbreviations viii
List of Figures xi
List of Tables xii
Table of Contents xiii
CHAPTER ONE: INTRODUCTION
1.0 Background 1
1.1 Problem Statement 5
1.2 Objectives of the Project 6
1.3 Scope 6
1.4 Methodology 6
CHAPTER TWO: LITERATURE REVIEW
2.0 Background on ATM 8
2.1 ATM Cell Format 9
2.2 ATM Device Types 10
2.3 ATM Network Interface Types 10
2.4 ATM Cell Header Format 12
2.5 ATM Services 13
2.6 Virtual Paths (VP) and Virtual Channels (VC) 14
2.7 VPI-VCI Relationships 15
2.8 Point-To-Point and Point-To-Multipoint Connections in ATM 15
2.9 ATM Interoperation with LAN 16
2.10 ATM Switch Operation 17
2.11 ATM Reference Model 18
2.12 Functions of the ATM Layer 20
2.13 Management Plane Interactions 21
2.14 ATM Addressing 21
2.15 Traffic Contracts and Service Categories 21
2.16 Traffic Contract 22
2.17 ATM Service Categories 23
2.18 Service-Dependent ATM Adaptation Layers 24
2.19 Common Physical Interface Types 25
2.20 The Table for Common Physical Interface Types 27
2.21 ATM Signaling 28
2.22 Connection Setup and Signaling For ATM 28
2.23 ATM Signaling Protocol for UNI and NNI 29
2.24 ATM Addressing 30
2.25 ATM Addressing Formats 30
2.26 Challenges of ATM 30
2.27 Related Research Works 31
CHAPTER THREE: MODELLING AND SIMULATION METHODOLOGY
3.0 Introduction 42
3.1 Modeling Approach 42
3.2 Physical Model 44
3.3 Simulation Model 44
3.3.1 Brief on the Riverbed Modeler 45
3.4 Research Methodology Adopted for the Simulation 50
3.5 Simulation Configuration 51
3.5.1 Configuration of Application Config 52
3.5.2 Configuration of Profile Config 53
3.5.3 Definition and Application of the FTP_Profile 54
3.6 Description of Simulation Devices 56
3.6.1 Switch 56
3.6.2 Server Node 57
3.6.3 ATM Workstation Node 58
3.6.4 ATM Link Connections 59
3.6.5 Configuration of Nodes 60
3.7 Configuration and Running of the Simulation 64
3.7.1 Simulation Set 64
3.7.2 Simulation Execution 65
3.8 Performance Metric 66
3.8.1 Configuration of the Switches 66
3.8.2 Node Links 67
3.9 Model Validation 69
3.10 Conclusion 71
CHAPTER FOUR: SIMULATION RESULTS AND RESULTS ANALYSIS
4.0 Introduction 72
4.1 Data Computation and Analysis 72
4.2 Simulation Results 75
4.3 Result Analysis 75
4.3.1 Buffer Usage 75
4.3.2 Queue Delay Deviation 77
4.3.3 Queuing Delay 78
4.3.4 Traffic Dropped 79
CHAPTER FIVE: CONCLUSION, CHALLENGES AND RECOMMENDATIONS
5.0 Conclusion 85
5.1 Challenges 90
5.2 Recommendations 90
REFERENCES
APPENDIX A
APPENDIX B
CHAPTER ONE
INTRODUCTION
1.0 BACKGROUND
In digital communications, bandwidth as a concept refers to the amount of data a link or network path can deliver per unit of time. For many multimedia applications, the available bandwidth has a direct impact on the applications' performance. The terms bandwidth and throughput often characterize the amount of data that the network can transfer per unit of time [1]. Bandwidth has a significant influence on network communications, and several applications can benefit from knowing the bandwidth characteristics of their network paths. Network providers present lists of bandwidth bouquets from which interested users select and are billed. A customer's subscription with the service provider leads to a traffic contract, which finally results in the signing of a Service Level Agreement (SLA). The rate of bandwidth utilization by various customers guides the providers in planning capacity upgrades or network expansion to avoid congestion, traffic drops or total collapse of the network. As a rule of thumb, bandwidth utilization above 70% is an invitation to heavy congestion, and various methods are encouraged to avoid such a state. Although network providers can effectively monitor bandwidth utilization through traffic policing and shaping, the same is not true from the customers' point of view. To achieve this, a network administrator with administrative privileges and access to network devices such as routers or switches may connect to a link of interest and measure its bandwidth using the Simple Network Management Protocol (SNMP). However, such access is typically available only to administrators and not to end users. When congestion threatens to degrade or fail the network, end users can estimate the bandwidth of their links or paths from end-to-end measurements to ascertain the quality of service delivered by the network provider, without any information from network routers, to which they lack access. Even network administrators sometimes need to determine the bandwidth from hosts under their control to hosts outside their infrastructures, which makes them rely equally on end-to-end measurements. Bandwidth estimation tools try to identify the bottlenecks that adversely affect the performance of network communication; publicly available measurement tools include pathchar, pchar, nettimer, pathrate, pathload and AppareNet, among others. Due to demand by various users, communication network providers try to allocate bandwidth in order to optimize the network,
enhance network performance and guarantee quality of service delivery to the various users whose network demands differ [1].
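As an illustration of the SNMP-based monitoring described above, the following sketch computes average link utilization from two interface octet-counter readings and checks it against the 70% congestion rule of thumb. The counter values, sampling interval and link capacity are hypothetical and are not drawn from this work.

# Illustrative sketch (assumed values): estimating link utilization from two
# SNMP-style interface octet-counter samples and flagging the 70% threshold.

def utilization(octets_t1, octets_t2, interval_s, link_bps):
    """Return average link utilization as a fraction of capacity."""
    bits_sent = (octets_t2 - octets_t1) * 8        # octets -> bits
    return (bits_sent / interval_s) / link_bps     # average rate / capacity

if __name__ == "__main__":
    # Hypothetical counter readings taken 300 s apart on a 1.544 Mbps T1 trunk.
    u = utilization(octets_t1=1_000_000, octets_t2=44_000_000,
                    interval_s=300, link_bps=1_544_000)
    print(f"Utilization: {u:.1%}")
    if u > 0.70:
        print("Above the 70% rule of thumb: congestion likely, plan an upgrade.")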
The above scenario makes bandwidth allocation a very important issue in ATM networks, especially when there are randomly fluctuating demands for service and variations in the service rates. To make it reliable, ATM is designed not only to support a wide range of traffic classes with diverse flow characteristics, such as Unspecified Bit Rate (UBR), but also to guarantee these traffic classes Quality of Service (QoS). The QoS may be measured in terms of cell loss probability and maximum cell delay [2]. The performance of a network is dependent on the behaviour of the QoS parameters. However, the challenge is finding the best way to dynamically allocate bandwidth economically while maintaining low loss and delay [3]. This challenge necessitated investigating, through simulation, the QoS parameters of the UBR service category of an ATM network using Riverbed Modeler 2014 Academic Edition 17.5 (OPNET). The result of the simulation, which was based on the Dynamic Bandwidth Allocation (DBA) technique, was converted into graphs using Microsoft Excel. These graphs were used to derive the bandwidth resource allocation for the UBR service category of an ATM network.
Dynamic Bandwidth Allocation is a method of allocating bandwidth dynamically in a communication network. It allocates bandwidth among multiple applications almost instantaneously by providing each qualified service with only its fair share of the available bandwidth that it requests at a specific moment. DBA optimizes the use of available bandwidth without committing transmission capacity in advance; any engagement of bandwidth in advance amounts to static allocation.
One important feature of Dynamic Bandwidth Allocation is the ability to make bandwidth changes based on continuous monitoring of customer traffic. If the customer increases the amount of traffic being sent, the algorithm developed should be able to allocate more bandwidth, and if the customer reduces the traffic submitted to the network, the algorithm should reduce the amount of bandwidth that had been allocated to that particular customer [4] and should make such bandwidth available to others in the network.
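A minimal sketch of this behaviour is given below, assuming a simple proportional fair-share rule: each customer receives its measured demand when the link can carry it, and demands are scaled down proportionally when it cannot. The customer names, demands and fair-share rule are illustrative and are not the algorithm developed in this work.

# Minimal sketch of dynamic reallocation driven by monitored demand.
# Unused bandwidth automatically becomes available to other customers.

LINK_CAPACITY_KBPS = 320          # e.g. five 64 kbps channels, as in the abstract

def reallocate(demands_kbps):
    """Give each customer its demand if it fits; otherwise share the link
    proportionally to demand (a simple fair-share rule)."""
    total = sum(demands_kbps.values())
    if total <= LINK_CAPACITY_KBPS:
        return dict(demands_kbps)                  # everyone fully served
    scale = LINK_CAPACITY_KBPS / total
    return {c: d * scale for c, d in demands_kbps.items()}

# Hypothetical monitoring snapshots: customer B ramps down, A ramps up.
print(reallocate({"A": 96, "B": 256}))   # oversubscribed -> proportional shares
print(reallocate({"A": 128, "B": 64}))   # within capacity -> demands met in full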
Network resources such as bandwidth are shared by users who, because of distance, population size or simple greed, do not coordinate their actions enough to utilize allocated bandwidth effectively and allow others their fair share of the network resources. The reality in communication is that some unreasonable customers are capable of consuming the entire available bandwidth, without any sign of remorse, if no constraints are put in place [5]. Such consumption degrades QoS performance by introducing congestion and delay into the network, thereby hindering more important communications. Bandwidth is therefore one of the important network resources that requires management in order to avoid congestion and unnecessary delay. To ensure optimal allocations when unusual traffic demands occur, any developed algorithm needs to consider the time-varying nature of offered traffic, which makes such a network a complex dynamic system. In designing such a system, the following need to be considered:
(a) distribution of control functions in networks using Virtual Paths (VPs)
(b) monitoring of capacity usage on VPs
(c) calculation of capacity allocated to VPs
(d) frequency with which the capacity can/should be adjusted [6].
From the above, dynamic bandwidth allocation is an area that requires appropriate attention and proper management. Bandwidth management is therefore vital to bandwidth allocation.
Bandwidth management can be divided into three categories as follows:
Bandwidth reservation: Bandwidth reservation dedicates bandwidth to a customer such that even if only a fraction of the reserved bandwidth is utilized, the remaining portion of the reserved bandwidth is not available to any other user in the network.
Bandwidth limitation: Bandwidth limitation constrains the maximum amount of traffic that can be sent by a single user. If the customer attempts to send more traffic than the
upper limit allows, the traffic could be discarded by the service provider, buffered, or otherwise penalized because it does not conform to the specified limits.
Bandwidth allocation: Bandwidth allocation is similar to bandwidth reservation explained above because the bandwidth is “guaranteed” to be available to the customer. However, it differs in the sense that if the customer does not use all of the allocated bandwidth, the unused portion is made available to others, thereby dynamically optimizing the network. This is achieved by reclaiming dormant bandwidth from an idle user and giving it to someone who requires it [7].
Bandwidth management can be applied on the customer side or the network side of the ATM User Network Interface (UNI). Network providers are authorized to police traffic and to penalize nonconforming traffic by discarding and/or delaying it. To produce conforming traffic, the customer may shape the traffic between the source and the network. Traffic shaping or smoothing refers to queuing ATM cells and then releasing those cells so that the burstiness of the source is controlled.
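Traffic shaping of this kind is commonly realised with a leaky bucket, the scheduling discipline also considered later in this work. The sketch below queues arriving cells and drains them at a fixed rate so that bursty arrivals leave the shaper smoothed; the bucket depth, drain rate and arrival pattern are hypothetical, not parameters taken from this project.

# Minimal leaky-bucket shaper sketch: arriving ATM cells are queued (the bucket)
# and drained at a fixed rate, so the burstiness of the source is smoothed.

from collections import deque

class LeakyBucketShaper:
    def __init__(self, drain_per_tick, depth):
        self.queue = deque()
        self.drain_per_tick = drain_per_tick   # cells released per tick
        self.depth = depth                     # maximum queued cells

    def arrive(self, cells):
        """Queue arriving cells; anything beyond the bucket depth is dropped."""
        for c in range(cells):
            if len(self.queue) < self.depth:
                self.queue.append(c)

    def tick(self):
        """Release up to drain_per_tick cells this tick."""
        released = min(self.drain_per_tick, len(self.queue))
        for _ in range(released):
            self.queue.popleft()
        return released

shaper = LeakyBucketShaper(drain_per_tick=2, depth=10)
for burst in [8, 0, 0, 5, 0]:              # bursty arrivals per tick
    shaper.arrive(burst)
    print(f"in={burst} out={shaper.tick()} queued={len(shaper.queue)}")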
Although there are many different methods of bandwidth management, they fall into two categories: static and dynamic resource management. The static approach does not reflect the inherently changing nature of user requirements, which depend on many variables, including the customer's behaviour and the network state, both of which may change several times during the lifetime of a connection. This is of great concern to network administrators and company management. To address this shortcoming, dynamic bandwidth allocation adjusts the bandwidth allocated to a particular customer over time in response to these variables. This is achieved by shaping the traffic and allocating available bandwidth to other customers in need of it, which optimizes network traffic. The bandwidth usage characteristics of customers can be determined by monitoring the submitted traffic.
The challenge of dynamic bandwidth allocation has therefore attracted divergent opinions on best practices for its implementation, as well as significant attention to creating new scheduling disciplines and improving existing ones. This means that much work needs to be done by network service providers to ensure that
the various customer applications are given their fair share of the bandwidth, which can be achieved through an appropriate scheduling discipline implemented by a robust scheduling algorithm. Customers, on the other hand, need to use appropriate monitoring tools to ensure that they receive their fair share of the bandwidth. The signing of an SLA is one approach to keeping both sides in check. However, enforcing the content of such an SLA in practice is another matter, for which an independent bandwidth monitoring group, whose role can be likened to that of an auditor, may be engaged, with the auditor serving as an arbiter for both sides.
A network resource such as bandwidth is a vital and scarce commodity in telecommunications. For this reason it draws considerable interest, much as crude oil does in the oil and gas sector. Network operators are interested not only in profit but also in the Quality of Service rendered to customers, particularly given the sanctions the regulator imposes for non-compliance. Corporate organizations requiring bandwidth allocation are interested in quality of service, optimal use of their allocated bandwidth and its cost implications, which is much the same concern as that of individual network users. Since B-ISDN allocates bandwidth in multiples of 64 kbps (N*64), a serious challenge arises when the actual bandwidth needed by an institution is just 80 kbps (an additional 16 kbps) or 96 kbps (one and a half times the basic 64 kbps allocation). If the organisation steps its requirement down from 80 kbps to the standard 64 kbps bouquet, will the network still perform optimally, or will that decision cause performance problems or failure for the organization's traffic? Conversely, if it scales the allocation up to 128 kbps for either the 80 kbps or the 96 kbps demand, what happens to the excess bandwidth of 48 kbps and 32 kbps respectively? This is a significant source of wastage and a concern for communications managers, network administrators, the top echelons of organizations and research fellows.
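The arithmetic behind this dilemma can be made explicit. The sketch below rounds a bandwidth demand up to the next whole 64 kbps channel and reports the idle remainder, reproducing the 48 kbps and 32 kbps figures quoted above; it is an illustration, not a procedure from the project itself.

# Worked arithmetic for the N*64 kbps dilemma: demand is rounded up to the
# next whole 64 kbps channel, and the difference is idle (wasted) bandwidth.

import math

CHANNEL_KBPS = 64

def n64_allocation(demand_kbps):
    channels = math.ceil(demand_kbps / CHANNEL_KBPS)
    allocated = channels * CHANNEL_KBPS
    return channels, allocated, allocated - demand_kbps

for demand in (80, 96):
    n, alloc, waste = n64_allocation(demand)
    print(f"{demand} kbps demand -> {n} channels = {alloc} kbps, {waste} kbps idle")
# 80 kbps demand -> 2 channels = 128 kbps, 48 kbps idle
# 96 kbps demand -> 2 channels = 128 kbps, 32 kbps idle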
1.1 PROBLEM STATEMENT
Quality of Service (QoS) in ATM-based networks has for years been an important research topic. However, one of the critical challenges in achieving standard QoS requirements is finding an acceptable method for dynamically allocating bandwidth resources to the various multimedia applications whose demands are unpredictable.
Therefore, within the context of this research, the problem addressed by this work is to derive the bandwidth resource allocation for multimedia traffic of the UBR service category of an ATM-based network.
1.2 OBJECTIVES OF THE PROJECT
The objectives of this research work are as follows:
i. improving network performance,
ii. providing network vendors with a bandwidth allocation guide,
iii. providing the capability to predict bandwidth allocation and thereby improve trunk utilization,
iv. contributing to the telecommunication knowledge base and forming a basis for further research on QoS parameters in ATM networks.
1.3 SCOPE
This project was limited to the modelling, simulation and analysis of QoS parameters for the UBR service category in an ATM network using Riverbed Modeler. The physical transmission medium in the topology was a Synchronous Optical Network (SONET). The simulation and the results analysis made use of Riverbed Modeler 2014 Academic Edition 17.5 and Microsoft Excel respectively.
1.4 METHODOLOGY
The methods adopted during the course of the work included studying past literature in the related field, developing both the physical and simulation models for the ATM technology, and exporting the simulation results to Microsoft Excel for data computation and analysis. The primary research problem focused on monitoring the QoS parameters and the traffic (in and out) pattern on each of the switches for the trunk interface ports (Port 0 and Port 1). Within this perspective, the immediate interest was ensuring that the network satisfied the following QoS requirements:
Providing bandwidth guarantees and attempting bandwidth optimization, which is a challenge for UBR;
Maintaining fairness among all the network data services;
Accomplishing an increase in network bandwidth utilization.
In addressing the primary research problem in a step-by-step manner, a two-dimensional approach was adopted:
Exploring the possibility of using an existing scheduling algorithm (Leaky Bucket), as informed by the literature review, by developing a physical model;
Transforming the physical model into a simulation model and investigating the QoS parameter pattern of the Network.
The essence of the simulation model was to underscore the following:
i. Developing network topology model in Riverbed Modeler 2014 Academic Edition 17.5 and investigating the behaviour of the QoS parameters of UBR ATM network.
ii. Analysing the simulation result by exporting the graph from the simulation to Microsoft Excel.
iii. Studying the behaviour of the trunk paths for each of the switches in order to know to what extent they affect network performance and network utilization.
iv. Comparing the simulation results obtained for the QoS parameters of the network with the ATM Forum Traffic Management 4.1 standard (a comparison sketch follows below).
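As an illustration of the comparison in item (iv), the sketch below checks exported metric values against reference bounds. The bounds used here are the ranges reported in the abstract, standing in for the ATM Forum TM 4.1 targets, and the measured values are hypothetical; this is not the thesis's actual analysis procedure.

# Illustrative sketch (assumed): checking exported QoS metrics against
# reference bounds. Bounds below are the ranges reported in the abstract.

REFERENCE_BOUNDS = {
    "cell_loss_probability": (0.0000875, 0.002967),
    "queuing_delay_s":       (0.0000000501, 0.000000103),
}

def within_bounds(metric, value):
    low, high = REFERENCE_BOUNDS[metric]
    return low <= value <= high

# Hypothetical measured values exported from the simulation.
for metric, value in {"cell_loss_probability": 0.0015,
                      "queuing_delay_s": 9.0e-8}.items():
    status = "within" if within_bounds(metric, value) else "outside"
    print(f"{metric} = {value:g} is {status} the reference range")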
1.5 THESIS OUTLINE
This work was organised and presented as follows: Chapter two was a detailed literature review on ATM and other related works on ATM. Chapter three discussed the modeling approach adopted and the research methodology used for the work. In chapter four, simulation results of the research work were presented and the result analysis discussed. Conclusion, challenges and recommendations were presented in chapter five.