350-501 SPCOR Cisco CCNP Service Provider – Quality of Service (QoS) Part 1
June 11, 2023

1. Quality of Service (QoS)

Quality of service. Now, in this video we’ll see a basic introduction to quality of service, then why there is a need for it, what different network issues come up, and how to overcome them by using quality of service. Before we go ahead, let’s try to understand what quality of service is. Quality of service is a method of giving priority to some specific traffic as it goes over the network. Take an example: in my network, we normally have different types of traffic. You might be sending VoIP traffic, voice traffic, where you have some IP phones connected, and there are also some applications in my network doing video conferencing.

And I also have some data traffic moving in the network, maybe HTTP traffic, FTP traffic, or some database traffic, something like that. Now, when all of this goes through the network, there is a possibility that when the router sends it out on the link, there is not enough bandwidth. Maybe some users are downloading something, and that FTP traffic is eating up almost all of your bandwidth, and as a result your voice is not clear. The reason is that some of the packets are getting dropped, and when you are running video conferencing applications, the video does not come through properly.

Now, these are some of the issues you run into if you have only a small amount of bandwidth available on the link. What we can do here is give priority to specific traffic, saying that if traffic comes in as voice or video, it should be sent first, ahead of all the remaining traffic. Or we can define that the voice traffic should get a guaranteed bandwidth of 64 kbps on that particular link in case there is congestion due to low-speed links. That can be done by using something called quality of service. So quality of service is a method of giving priority to some specific traffic as it moves over the network.

We can define the traffic in different categories, reserve some amount of bandwidth for specific traffic, define priorities, and ensure that applications like FTP downloads do not utilize all the bandwidth; we can restrict them by using policing options. That’s what we are going to see in these sections. Now, before we go into more detail on the different quality of service mechanisms which help us improve network performance or give priority, we need to understand the general issues we have in the network. Network quality issues arise when, for example, small voice packets have to compete with your FTP downloads, and we need to ensure that some traffic gets high priority, typically your sensitive traffic like voice and video.
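Just as a quick preview of what this looks like on a Cisco router, here is a minimal MQC-style sketch that guarantees 64 kbps to a voice class and polices FTP; the class names, match criteria, interface, and rates are hypothetical examples for illustration, not values prescribed by the course.

! Hypothetical class names and rates, for illustration only
class-map match-all VOICE
 match ip dscp ef                          ! voice packets already marked EF
class-map match-all FTP-DATA
 match protocol ftp                        ! NBAR-based match
!
policy-map WAN-PREVIEW
 class VOICE
  bandwidth 64                             ! guarantee 64 kbps during congestion
 class FTP-DATA
  police 64000 conform-action transmit exceed-action drop   ! cap FTP at 64 kbps
!
interface Serial0/0
 service-policy output WAN-PREVIEW         ! apply on the congested WAN link

The later sections walk through each of these pieces (classification, queuing, policing) one by one.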

And we never want any downtime for these specific applications. For some database applications, we need to ensure there is no downtime in accessing those servers. That can be done by giving some level of quality to different applications, and that’s what we do with quality of service. Now, why is there a need, and what are the general problems in the network? The first common problem is lack of bandwidth. You might be trying to send some information over the network, but the WAN link connecting the routers is just two Mbps, and within those two Mbps we need to send all the traffic. If there is not enough bandwidth, then it will also lead to packets getting dropped and the network getting congested.

So first let’s try to understand in which scenarios you will have bandwidth issues. Take an example: a sender is connected to the LAN at a speed of 100 Mbps, and the receiver, or the server it is trying to reach, also supports 100 Mbps. Since the sender supports 100 Mbps, it will try to send at 100 Mbps. But on the WAN link we just have 256 kbps, so the router cannot forward the packets at 100 Mbps; it can only send at 256 kbps, and maybe the next link is 512 kbps. In the end you will not be able to send and receive at the LAN speed. The overall bandwidth from the sender to the receiver will be the least bandwidth along the path, so the maximum bandwidth equals the bandwidth of the slowest link.

That’s the maximum bandwidth you can send between the two devices. In this scenario we don’t have enough bandwidth, so the bottleneck is the link supporting only 256 kbps: you cannot send more than that between router one and router two. Now, the second problem is packet loss. Packet loss generally happens when the router receives packets faster than it can send them out. The output queue can hold only a certain number of packets before forwarding them, and once it reaches its maximum limit, the router automatically starts dropping packets. So that is also one common reason.

Let’s say voice traffic is coming in and it gets queued, and because the output queue is full, the router drops those packets. When you’re sending important, critical traffic, we don’t want that to happen. So that is one more problem here. If there is packet loss, you will notice issues: if you are on a phone call, the voice will break up because some packets are getting dropped due to congestion. If you’re using video conferencing applications, the voice and video will not be synchronized and the picture will not be clear. And if you are downloading files, the files may be corrupted, so the download will not be complete.

So when you try to open that particular file, it says it is corrupted. Or if you call a call center, they may tell you, “please hold on while my screen refreshes,” because the refresh takes time due to the network conditions. These are the general problems you commonly see when there is network congestion. Okay, so the first problem was lack of bandwidth, the second was packet loss. There is also the possibility that your packets get delayed. Now, delay comes in different types. We have processing delay, the time taken by the device to process the packet, and queuing delay, which is how long the packet waits in the output queue before it is sent out.

That waiting time depends on the output queue as well, and then there is serialization delay, the time taken to put the packet onto the wire, and propagation delay, the time the signal takes to travel across the link. These are the different types of delay that matter, so when you send information, it takes some time depending on all of these delays. And then the fourth problem is jitter. Jitter is a problem where packets from the same source reach the destination with varying delays. Normally, if you send packets as a steady stream, each packet experiences roughly the same delay between source and destination; but if there is jitter, some packets experience extra delay.

Jitter is generally caused by congestion in the network, and it can occur at the router interfaces or in the provider or carrier network if the circuit is not working properly. Jitter adds extra, variable delay, so the packets do not arrive as a steady stream, and that is what we call jitter. These are the four different problems we have when the network is congested, and to overcome them we use quality of service. Quality of service will ensure that, with whatever bandwidth is available, we give priority to specific traffic.

For example, if voice traffic is coming in and FTP traffic is also coming in, we’ll say the voice traffic should be sent first, before the FTP traffic. We can also say that FTP traffic should not utilize more than 64 kbps; if it exceeds that, the excess is automatically dropped or its priority is reduced, something like that. By doing this we are efficiently utilizing the available bandwidth by prioritizing specific traffic, and that is what we call quality of service. In the next section we’ll see the different QoS tools which make this possible, just a quick overview, and in the later topics we’ll also get into each and every mechanism used to overcome these problems.

2. QoS Mechanisms

Now, in this video we’ll talk about some of the QoS mechanisms which can be used to overcome the network issues. When you’re working in a converged network, you have voice traffic as well as video traffic, and maybe FTP traffic too. There is a possibility that your FTP traffic utilizes almost all the available bandwidth, and your voice traffic gets delayed or dropped, which is something we don’t want. To overcome this, we can implement QoS mechanisms which ensure that we give priority to specific traffic. Now, the major problem in the network is lack of bandwidth: packets get dropped or delayed normally because of lack of bandwidth, and if there is no bandwidth available, there is a possibility that your packets get dropped as well.

Delay and jitter are the other general issues you have with converged networks. Now we’ll see the different QoS mechanisms we can use to overcome these things. The first one is classification and marking. In your network we are sending voice traffic as well as video traffic, maybe critical traffic like database traffic, and other traffic like FTP or HTTP. The first thing we do is classify the traffic: traffic which is high priority, traffic which is medium priority, and traffic which is low priority. This method is called classification. So classification is a method of sorting the different types of traffic into different categories so that we can define what kind of priority should be given to which traffic. Okay? That’s what we call classification.

Then the next thing we can do is, as the packets move over the network, apply specific marking values to them. For example, I can say that all the voice traffic should go with a marking of, say, 7, and the video traffic with a marking of 6, or something like that; we’ll talk about marking in more detail in the next sections. As the traffic goes through the network and reaches the next device, that device decides, based on the marking values, what kind of priority to give that particular traffic. So marking is simple: it’s like coloring the packets as members of a specific class, and that coloring is recognized throughout the network. Classification is differentiating the traffic, like video traffic, voice traffic, mission-critical data, or signaling traffic.
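As a rough idea of how classification and marking look in Cisco MQC syntax, here is a hedged sketch that classifies voice and video and marks them on ingress; the class names, the DSCP values (EF and AF41), the NBAR matches, and the interface are illustrative assumptions, not prescribed values.

! Classify traffic (hypothetical class names)
class-map match-all VOICE
 match protocol rtp audio       ! NBAR match; an access list could be used instead
class-map match-all VIDEO
 match protocol rtp video
!
! Mark ("color") the packets as they enter the network
policy-map MARK-INGRESS
 class VOICE
  set dscp ef                   ! color voice packets as EF
 class VIDEO
  set dscp af41                 ! color video packets as AF41
!
interface GigabitEthernet0/1
 service-policy input MARK-INGRESS

Downstream devices can then trust these markings and simply match on DSCP rather than re-inspecting the traffic.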

The next mechanism we can use is called congestion management. In congestion management, we define a priority for specific traffic. Let’s say you have voice and video traffic and also FTP traffic coming in, and there is major congestion in the network: we want to ensure that the voice and video traffic is always sent first, before the FTP traffic, or we can place it in a separate queue so that it does not have to compete with the bulk traffic. This ensures that your voice or video traffic always gets high priority and has a much lower chance of getting dropped. We have different queueing mechanisms for that.

We’ll talk about the different mechanisms in more detail later: class-based weighted fair queuing (CBWFQ), low latency queuing (LLQ), and other options. Here we are just getting a basic introduction. There is one more mechanism that can be used, called congestion avoidance: before the queue actually fills up and reaches the limit where it starts tail-dropping packets, the device randomly detects and drops lower-priority packets ahead of the maximum threshold. That is what we call a congestion avoidance mechanism. We have RED (random early detection) and WRED (weighted random early detection), which are congestion avoidance mechanisms that drop packets before the network gets congested.
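To make the queuing and congestion-avoidance ideas concrete, here is a hedged IOS-style sketch combining LLQ, CBWFQ, and WRED in one policy; it assumes class maps like the VOICE and VIDEO ones sketched earlier, and the bandwidth figures and interface are illustrative assumptions.

policy-map WAN-QUEUING
 class VOICE
  priority 512                  ! LLQ: strict priority, up to 512 kbps
 class VIDEO
  bandwidth 1000                ! CBWFQ: guarantee about 1 Mbps during congestion
 class class-default
  fair-queue                    ! fair-queue the remaining traffic
  random-detect dscp-based      ! WRED: drop lower-priority packets early
!
interface Serial0/0
 service-policy output WAN-QUEUING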

Apart from that, we have some other options we can use, like policing and shaping. Policing and shaping are quite similar. In policing, we define the maximum bandwidth, or maximum rate, that specific traffic is allowed to send. For example, I can define a rule saying HTTP is allowed to utilize no more than 64 kbps, or maybe one Mbps, and anything exceeding that limit is automatically dropped. So we are enforcing a limit for specific traffic like HTTP: I say one Mbps, and whatever traffic exceeds one Mbps is either dropped or re-marked as low-priority traffic.
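A minimal policing sketch under those assumptions might look like the following; the one-Mbps rate, the class name, and the choice of dropping (rather than re-marking) the excess are illustrative assumptions.

class-map match-all HTTP
 match protocol http
!
policy-map POLICE-HTTP
 class HTTP
  ! limit HTTP to 1 Mbps; the excess could instead be re-marked with
  ! "exceed-action set-dscp-transmit 0"
  police 1000000 conform-action transmit exceed-action drop
!
interface Serial0/0
 service-policy input POLICE-HTTP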

Shaping is similar: we define a limit for specific traffic, but once the traffic reaches the limit, instead of dropping the excess like policing does, we delay the packets, store them in a buffer, and send them later without dropping them. That is what we call traffic shaping. We’ll talk about this in more detail in separate sections, but here we are just getting an introduction to the different QoS mechanisms which ensure that your high-priority traffic is always forwarded and that, in case the network gets congested, low-priority and high-priority traffic are differentiated and preference is given accordingly.
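For comparison with the policing example, here is a hedged shaping sketch; the class name and the one-Mbps average rate are again illustrative assumptions, and shaping is applied outbound because it queues excess traffic rather than dropping it.

policy-map SHAPE-HTTP
 class HTTP
  shape average 1000000          ! buffer and delay the excess instead of dropping it
!
interface Serial0/0
 service-policy output SHAPE-HTTP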

3. QoS Models

In this video we’ll see the different models that can be used to implement quality of service. There are three different models used for implementing quality of service; only the last one, Differentiated Services, is used in today’s networks, but initially there were other models: best effort and Integrated Services. The first one is the best effort model, which is based on best-effort packet delivery. Best effort is the default mode for all traffic, where there is no differentiation of the traffic: all traffic is treated equally, whether it is voice traffic or FTP traffic. The device sends all traffic out the output interface, treating everything the same, and it simply tries its best to forward the traffic without any differentiation.

The best effort model is, simply put, no quality of service: we are not differentiating the traffic, we are not giving any priority, and we are not reserving any bandwidth for any particular traffic. It does have some benefits, like being highly scalable, since we are not using any specific mechanisms to forward the traffic. But the major drawback is that there is no guarantee for any traffic: voice traffic going over the network may get delayed or dropped because nothing is differentiated. Later on, we got something called the Integrated Services model, which is slightly different.

In Integrated Services, which is older than DiffServ, the end device requests and reserves a specific amount of bandwidth for a specific flow. Take an example: there is an IP phone connected here. The IP phone sends a signaling request to the network devices along the path, saying that it has to reach some destination and it needs a certain amount of bandwidth, say 20 kbps, for that flow. Based on that request, bandwidth is reserved along that particular path. This whole process is done by the RSVP protocol, the Resource Reservation Protocol. It reserves a specific amount of bandwidth for each and every flow, and that bandwidth stays reserved whether that particular device is sending information or not. So the key characteristic is that it reserves bandwidth by using the RSVP protocol.

RSVP stands for Resource Reservation Protocol. It is used for reserving a specific amount of bandwidth for a specific flow, and it runs as IP protocol number 46; it can also use TCP/UDP port 3455. One of the major drawbacks of this Integrated Services model is that once bandwidth is reserved for a specific flow, say 64 kbps reserved for a VoIP flow, no other traffic can use that bandwidth even when the voice traffic is not flowing. It is simply reserved and no one else can use it. It is also not really scalable for large networks, because it requires separate administrative configuration on each and every device, and a separate bandwidth reservation on every device.
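As a rough illustration of the per-link reservation idea, here is a hedged sketch enabling RSVP on a router interface; the interface name and the kbps figures (128 kbps total, 64 kbps per flow) are assumptions for the example.

interface Serial0/0
 ! allow RSVP reservations on this link: up to 128 kbps total, 64 kbps per single flow
 ip rsvp bandwidth 128 64

Every router along the path would need an equivalent statement, which is part of why IntServ scales poorly.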

And one more major drawback: what if any one of the transit routers does not support RSVP? Then no reservation is made along that path, which is a big disadvantage. In today’s networks we don’t use these two models: best effort means no differentiation, whereas IntServ reserves a specific amount of bandwidth for each and every flow. In today’s networks we use the Differentiated Services model, which was introduced to overcome the limitations of both previous models, best effort and IntServ. Here, traffic is identified and sorted into different classes; on Cisco devices we do this by creating class maps.

We’ll see more on the configuration side in the next videos. We differentiate the traffic by creating our own class maps, and we can choose the level of service for each class. We can define what traffic should fall into class map one, create another class map, class map two, and also create class map three. For example, I can say that voice traffic and video traffic come under class map one, my FTP or HTTP traffic comes under class map three, and other information like SQL or database traffic comes under the second class map. Then we define a set of priorities for each class map, for the case when class map one traffic and class map two traffic are both arriving at the same time.

We can define a priority level so that class map one traffic is prioritized higher, or we can even guarantee it a specific amount of bandwidth. That is how we differentiate: each traffic type is treated based on user-defined class maps. That is how Differentiated Services works. As you can see here, I have categorized the traffic into four different services: premium, gold, silver, and bronze. Once the traffic enters the router, each packet is treated in a differentiated way. In today’s networks we use the Differentiated Services model, and the major benefit is that it is highly scalable for large networks and we can define many different levels of quality.
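Putting the pieces together, here is a hedged end-to-end sketch of the class-map / policy-map / service-policy hierarchy described above; the three class names, the match criteria, the bandwidth figures, and the interface are illustrative assumptions rather than values from the course.

! Step 1: classification into user-defined classes
class-map match-any CLASS1-REALTIME
 match dscp ef
 match dscp af41
class-map match-any CLASS2-DATABASE
 match protocol sqlnet
class-map match-any CLASS3-BULK
 match protocol ftp
 match protocol http
!
! Step 2: per-class treatment
policy-map DIFFSERV-EDGE
 class CLASS1-REALTIME
  priority 512                  ! voice/video sent first
 class CLASS2-DATABASE
  bandwidth 1024                ! guaranteed bandwidth during congestion
 class CLASS3-BULK
  bandwidth 256
 class class-default
  fair-queue
!
! Step 3: attach the policy to the congested interface
interface Serial0/0
 service-policy output DIFFSERV-EDGE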

The drawback, we can say, is that it is a bit more complex to implement, and there is still no absolute service guarantee. But this is the model we use in today’s networks. We’ll see more on this Differentiated Services model, like how to create class maps and how to apply them on the interface using policy maps; we’ll see the configuration hierarchy in more detail in the next sections. So primarily we have three different models; let’s quickly revise them. The best effort model is basically no quality of service: no differentiation of the traffic and no priority at all. Whereas in IntServ we do differentiate:

we reserve a specific amount of bandwidth for each and every flow, and that bandwidth is reserved by using the RSVP protocol. But IntServ is not really scalable in large networks, and if there are any devices in between that do not support RSVP, then in those scenarios we cannot reserve bandwidth. Differentiated Services is the one we use in today’s networks, where traffic is classified using different class maps, and how each class of traffic is treated is defined manually by the administrator. We can define a specific guaranteed bandwidth, or we can define the prioritization of specific traffic so it is sent first, before the other traffic.
