4. Class Based Weighted Fair Queuing
Now, in this video we’ll discuss class-based weighted fair queuing. Class-based weighted fair queuing is an extension to the default queuing mechanism we used in our previous sections, weighted fair queuing. Here, in the class-based method, we are going to define our own user-defined classes. We are going to create our own class maps, like class map one, class map two, class map three, and we’re going to define what traffic should be classified in class map one, what traffic should be classified in class map two, and so on for class map three. Let’s take an example: all the traffic is coming in, and I’m going to differentiate the voice traffic, the video traffic and the web traffic into user-defined class maps.
And then we are going to define a guaranteed bandwidth for each and every class at the time of congestion. If there is any congestion in the network, in those kinds of scenarios we are going to define a specific minimum guaranteed bandwidth during the congestion. Let’s say this is your 10 Mbps or 100 Mbps link; I’m going to guarantee that in case there is any congestion, the voice traffic gets a minimum of 5 Mbps. If there is no congestion, it can use more than that, but 5 Mbps is guaranteed in the case of congestion. So here we are going to define the classes, which is what we call user-defined traffic classes, and we are going to define the guaranteed bandwidth for each.
Now, if you go back to the previous method we discussed, the weighted fair queuing mechanism, instead of classes there is an automatic classification based on precedence-based weighting. So even if you do not configure anything, it’s going to use the default weighted fair queuing mechanism, and it’s only supported on links of 2 Mbps or less. And there is a lack of control over classification, because we are not classifying anything; it’s done automatically. Now, the classification can again be done based on different options, like we did in the basic classification examples with a class map. You can match the input interface; maybe you have two interfaces, F0/0 and F0/1.
We can match the traffic based on the incoming interface, or based on the marking values like IP precedence and DSCP. Or you can write an ACL where we specify a particular source and destination. Or we can use something called NBAR, network-based application recognition, where we can directly use the match protocol option and match HTTP or any specific protocol. Remember, we did a basic example during the class map section. I had a small lab, a small task, where I’m configuring that the ICMP traffic from the 10 network to the 20 network should be guaranteed 128 Kbps when it goes from router 1 to router 2 on the S1/0 interface, and the remaining HTTP traffic should be guaranteed 64 Kbps.
That’s my requirement. Now, to configure this, we can simply go and create a class map and say match access-group. We need to create an ACL which is going to match the source and destination; if you don’t have a specific source and destination, we can directly write match protocol icmp here. When you say match protocol icmp, it’s going to match all the ICMP traffic irrespective of the source and the destination. But if you have a requirement where you need to specify the source and destination, we write them in an ACL and reference that ACL inside the class map. And then the web traffic goes in a separate class map.
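The two classification styles just described can be sketched roughly like this (a hedged example; the ACL number, class-map names and network addresses are assumptions based on the lab narration):

```
! Option 1: ACL-based classification (specific source/destination)
access-list 100 permit icmp 10.0.0.0 0.255.255.255 20.0.0.0 0.255.255.255
class-map ICMP
 match access-group 100
!
! Option 2: NBAR protocol-based classification (any source/destination)
class-map WEB
 match protocol http
```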
So we are going to create two different class maps. One class map is going to match my ICMP traffic and the second class map is going to match my web traffic. Then we can go to the policy map, call those class maps, starting with the ICMP traffic, and use a command called bandwidth, defined in terms of Kbps. I’m saying that whatever traffic is matched in this class should be guaranteed 128 Kbps of bandwidth in case there is any congestion. In a similar way we match the web traffic here by calling its class map, and we say that the web traffic should be guaranteed 64 Kbps of bandwidth during the congestion. If there is no congestion, they can still use more than that.
But in case of congestion, the minimum guaranteed bandwidth for this traffic is defined with the bandwidth command here, and then we apply this on the interface: we just need to go to the interface and enter service-policy output CCIE. Now, this is one of the labs; if you remember, we did this same lab in the basic classification examples, but in this example we are going to practically verify another example with class-based weighted fair queuing bandwidth reservations. So let’s take an example here. I’m going to take the same example, the same lab which I have documented in the workbook. I have two routers, router 1 and router 2, and between them I’m going to configure the EIGRP protocol just to provide reachability.
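Putting the pieces described above together, the conceptual configuration looks roughly like this (a sketch; the policy-map name CCIE and the class-map names are assumptions taken from the narration):

```
policy-map CCIE
 class ICMP
  bandwidth 128      ! minimum 128 Kbps guaranteed during congestion
 class WEB
  bandwidth 64       ! minimum 64 Kbps guaranteed during congestion
!
interface Serial1/0
 service-policy output CCIE
```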
Anyway, it’s not really compulsory here, because we are not going to generate any real-time traffic to test it out, but we are going to see how the classification is done and how we reserve the minimum bandwidth. So I’m just configuring the network commands with no auto-summary. Now, my requirement is: I want to ensure that my ICMP traffic is guaranteed 128 Kbps. That is my first requirement. Then HTTP traffic should be guaranteed 64 Kbps, and FTP traffic should be guaranteed 64 Kbps. And all the remaining unmatched traffic should use the default weighted fair queuing mechanism. Now, to make this possible, we need to create three different class maps.
One class map is going to match the ICMP traffic, another class map matches the HTTP traffic and the third class map is going to match the FTP traffic. Now, as this requirement does not need specific sources and destinations, we can directly get into the class map. I’m going to create a class map called HTTP and say match protocol http. I’ll start with HTTP first; it’s okay, we can write them in any order. Next we’ll create another class map which is going to match my ICMP traffic, so I’m simply using the same style of name, calling it ICMP, and then saying match protocol icmp. It matches all the ICMP traffic from any source to any destination. In a similar way I’m going to create another class map with match protocol ftp.
Okay, so if you verify the configuration here with show run class-map, I can see I created three different class maps: one matches the ICMP traffic, one the HTTP traffic and one the FTP traffic. The next thing, we need to define a guaranteed bandwidth of 128 Kbps for the ICMP traffic. To make that possible, we need to create a policy map. Any name works for the policy map; I’m using CCIE as the name, and then we call the ICMP class. If you use the question mark, you’ll find almost all the parameters here. Right now we are going to use the bandwidth command to define the minimum guaranteed bandwidth. And once we use the question mark here, you’ll find multiple options.
We can either define a specific minimum guaranteed bandwidth that the ICMP traffic should get, or we can define it in terms of a percentage. For example, out of 100%, we can say that 10% of the bandwidth should be reserved for the ICMP traffic. So on a 10 Mbps link, 10% means 1 Mbps will be reserved for the ICMP traffic. In my scenario it’s 1544 Kbps on this serial link, so 10% means around 154 Kbps will be reserved for the ICMP traffic. I have a separate lab documented where I use only percentage values instead of bits per second. In this scenario, I’m going to define a specific amount of bandwidth.
So we can either define the guaranteed bandwidth directly or define it in terms of percentages. The next thing: we define the class map HTTP with a guaranteed bandwidth of 64 Kbps, and then the FTP traffic with a guaranteed bandwidth of 64 Kbps. For all the remaining traffic, we generally use the class-default. By default, if the link is less than 2 Mbps, it is going to use weighted fair queuing automatically, even if you don’t define it. The automatic classification will be based on the precedence-based weight, even if you do not configure the fair-queue command.
And if you’re using a high-speed link, it will probably use the FIFO (first-in, first-out) mechanism. We can specifically tell it to use weighted fair queuing on high-speed links by using the fair-queue command. Now, if you verify with show run policy-map, you can see the policy map we created; inside, we have defined the classes and the guaranteed bandwidth. You can also use a command called show policy-map, which will show you the bandwidth allocated for each and every class. Then the last step is implementing it on the interface: you define the service policy, whether input or output. As per my scenario, the traffic is leaving the interface, so it’s going to be output, and the policy name is CCIE; then hit enter.
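The full lab configuration as narrated can be sketched like this (class-map and policy-map names are assumptions from the narration; the fair-queue line under class-default is how WFQ is explicitly kept for unmatched traffic):

```
class-map HTTP
 match protocol http
class-map ICMP
 match protocol icmp
class-map FTP
 match protocol ftp
!
policy-map CCIE
 class ICMP
  bandwidth 128      ! guaranteed 128 Kbps during congestion
 class HTTP
  bandwidth 64
 class FTP
  bandwidth 64
 class class-default
  fair-queue         ! remaining traffic uses weighted fair queuing
!
interface Serial1/0
 service-policy output CCIE
!
! Verification:
!  show run class-map
!  show run policy-map
!  show policy-map interface serial1/0
```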
Now, for verification, we can use a command called show policy-map interface s1/0. Here you can see that on this interface there is a separate class map which matches the ICMP traffic, and you can see the bandwidth of 128 Kbps reserved. You’ll also see some drops in case traffic is getting dropped, but right now we don’t have any real-time traffic going on. So what I’ll do is try to generate some traffic from router 1 to router 2, just some ping traffic, because we have matched the ICMP traffic as well. Now, once we generate the traffic, if you verify show policy-map... let me check, did I configure this on the right router? Actually, I configured these commands on router 2; it doesn’t matter, let me just verify with show run class-map. It’s okay.
Anyway, I have configured it on router 2 instead of router 1. Either I can copy and paste the configuration, or I can just generate the traffic leaving router 2. In my scenario, the interface in question is the one leaving router 2, so I’m going to generate the traffic to 10.1.1.1 with the source 20.1.1.1. And if you verify show policy-map interface s1/0, I should now see the packets matched. You can see the default class map matches some packets, because all the remaining traffic, such as control traffic like the EIGRP messages, matches there, and I should see the ICMP class map matching the packets. Here you can see there are ten packets matched: five ping messages generated before, and five now.
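The verification steps just described can be sketched as follows (the addresses are assumptions based on the lab’s 10 and 20 networks):

```
! Generate ICMP traffic sourced from the 20 network toward the 10 network
R2# ping 10.1.1.1 source 20.1.1.1
!
! Check per-class match counters and reserved bandwidth
R2# show policy-map interface serial1/0
```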
Whenever you see any packets matched here, you will see how many packets matched and how many bytes of information were transferred, using this specific command. Now, there is an alternate way to configure class-based weighted fair queuing. We can either define a specific bandwidth using the bandwidth option, like 128 Kbps or 64 Kbps, whatever the bandwidth, which is one possible option, or we can define it in terms of percentages. If you don’t want to give a specific bandwidth, we can say that my HTTP and FTP traffic... this is the next lab which I documented in the workbook here.
This one is based on the percentage options instead of manually defining the bandwidth. Here I’m saying that all my HTTP traffic, FTP traffic and TFTP traffic should be guaranteed a minimum of 20% of the bandwidth on this interface, the X Windows application should get a minimum of 10% of the bandwidth, and I have some database servers, SQL servers, which should get a guaranteed 25% of the bandwidth in case of congestion. To implement this, again we can go to the command line and configure the same thing: I can create a class map which is going to match all my traffic, and I can define what the percentage of the bandwidth should be.
I have a second lab here; you can see there’s a second lab where I create a class map called WEB which matches all three protocols, X-WINDOWS which matches the X Windows protocol, and SQL which matches my SQL server traffic. Then inside the policy map I call each class and define the bandwidth in terms of percentages rather than specific values. So there are two different options we can configure here, and applying it is going to be the same. I have three different scenarios with three different labs documented in the workbook, where you’ll see exactly how class-based weighted fair queuing can be used. Again, when it comes to implementation, it’s going to be the same.
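The percentage-based lab can be sketched like this (a hedged sketch; the class-map and policy-map names are assumptions, and the NBAR keywords for X Windows and SQL Server, typically xwindows and sqlserver on classic IOS, should be checked against your IOS release with match protocol ?):

```
class-map match-any WEB          ! match-any: any one of the three protocols qualifies
 match protocol http
 match protocol ftp
 match protocol tftp
class-map X-WINDOWS
 match protocol xwindows
class-map SQL
 match protocol sqlserver
!
policy-map PERCENT
 class WEB
  bandwidth percent 20           ! 20% of interface bandwidth guaranteed
 class X-WINDOWS
  bandwidth percent 10
 class SQL
  bandwidth percent 25
```

Note the match-any keyword on the WEB class map: the default is match-all, which would require a packet to match all three protocols at once and therefore never match.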
We need to create a class map, define in that class map what traffic should be matched, and inside the policy map define the guaranteed bandwidth options. Now, one of the major advantages we get here is that we define our own classification of the traffic, we define our own minimum bandwidth allocation for each and every class, and we get finer granularity and scalability. But again, there is a drawback with class-based weighted fair queuing: let’s say you have voice traffic coming in and we define some guaranteed bandwidth for that voice, but your voice traffic really has to be given priority rather than simply being reserved bandwidth.
That is where class-based weighted fair queuing has a small disadvantage. To overcome this, we can use another queuing mechanism called low latency queuing, where we combine class-based weighted fair queuing along with priority queuing. In the case of low latency queuing, what we are going to do is define a priority for specific traffic, like voice, which has to be sent first before any other traffic is sent. At the same time, we’ll ensure that each and every traffic class is differentiated. We’ll talk about LLQ in much more detail in our next session, with labs.
5. Low Latency Queueing
Now, in this video we’ll try to understand the concept of low latency queuing. Low latency queuing combines the functionality of priority queuing, which is the legacy priority queuing method, with class-based weighted fair queuing. In the previous section, if you remember, we discussed class-based weighted fair queuing: it provides a minimum bandwidth guarantee at the time of congestion. We define this with the bandwidth command, saying that in case there is any congestion, the ICMP traffic should be guaranteed 128 Kbps of bandwidth; if there is no congestion, it can use more than that, but that is the minimum bandwidth guaranteed.
In a similar way we can define specific classes, and for each and every class a specific guaranteed bandwidth. Now, all the packets are sent based on the bandwidth defined, and no packets are given any priority. One of the major drawbacks of class-based weighted fair queuing is that it can add delay to sensitive traffic like voice. Let’s say you have voice traffic here: even though we defined some guaranteed bandwidth for the voice, there is a possibility that the voice packets may get delayed, and that is something we don’t want. So the major drawback of class-based weighted fair queuing is that it has issues with applications like voice traffic that cannot tolerate delay or jitter.
To fix this, low latency queuing provides a strict priority queue, reducing the delay and jitter for voice communications. What we do is use the same class-based weighted fair queuing as in the previous section, where we define a minimum guaranteed bandwidth using the bandwidth option, and in addition we define a separate class; in that particular class we match the voice traffic and define a priority for it, with whatever bandwidth we choose. Once we define the priority for the voice traffic, it’s going to guarantee that amount of bandwidth to the voice, with high priority.
This will ensure that your voice traffic is not queued; it is sent first, before the other classified traffic is actually sent. The main advantage we get with low latency queuing is that it reduces the latency and jitter for voice communications. Now, let’s see the configuration. There’s not much difference compared with class-based weighted fair queuing: we still handle the other traffic, like ICMP, HTTP and FTP, with class-based weighted fair queuing, and we define the minimum guaranteed bandwidth for that traffic. The only difference is that here we match the voice traffic: I’m using an access list which matches the UDP port numbers from 16384 to 32767, used by voice (RTP) traffic, and I’m going to reference it in a class map.
Then here I’m going to use a command called priority. Once we use the priority command instead of bandwidth, it guarantees that amount of bandwidth as priority traffic; it is going to be higher-priority traffic than the rest. Now, how do bandwidth and priority differ? Whenever you define the priority option, it allocates the amount of bandwidth in Kbps: if I define it as 256 Kbps, it’s going to give a priority of 256 Kbps of bandwidth and ensure that traffic is forwarded immediately. As for whatever traffic exceeds that, it depends on whether there is congestion. Let’s say you define some specific traffic as priority and there is no congestion:
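The LLQ configuration described above can be sketched like this (a hedged sketch; the ACL number, class names and the 256 Kbps figure are assumptions from the narration, and UDP 16384-32767 is the usual RTP voice port range):

```
access-list 101 permit udp any any range 16384 32767
class-map VOICE
 match access-group 101
!
policy-map LLQ
 class VOICE
  priority 256       ! strict-priority queue; policed to 256 Kbps during congestion
 class ICMP
  bandwidth 128      ! the other classes keep CBWFQ minimum guarantees
 class HTTP
  bandwidth 64
!
interface Serial1/0
 service-policy output LLQ
```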
Anything exceeding this bandwidth is going to be sent as normal traffic without any prioritization. Whereas in case there is congestion, it implements strict policing, where it drops the excess traffic. The dropping of the excess traffic depends on what priority value we give. Let’s say I define 256 Kbps as the priority traffic: once it reaches this limit, if there is congestion, it is going to drop the excess traffic. That’s the strict policing it does if the traffic exceeds the limit and there is congestion.
If it exceeds the limit and there is no congestion, in those kinds of scenarios it still forwards the traffic, but without priority; it is treated like the normal traffic of the other classes. Again, we can define this either as a specific amount of bandwidth or in terms of a percentage. Let’s say I’m using 1000 Kbps of bandwidth and I define 10% as the priority traffic; it’s going to use 100 Kbps, that is, 10% of the complete interface bandwidth. Now, the major difference between these two options: in class-based weighted fair queuing we use the bandwidth command, whereas for priority we use the priority command.
In both cases it guarantees the minimum bandwidth: when we define a specific bandwidth, the class gets a minimum of that bandwidth. But with the bandwidth command there is no maximum bandwidth limit, so the class can use excess bandwidth; it depends on conditions, but there is no maximum it can utilize. Whereas with priority, if we define 256 Kbps of priority bandwidth, that is also the maximum it can utilize if there is congestion, and anything in excess will be dropped. With the bandwidth command, we don’t do that. So when we give the priority command, it has built-in policing.
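The contrast between the two commands can be summed up in a minimal fragment (class names assumed for illustration):

```
policy-map COMPARE
 class DATA
  bandwidth 128   ! min 128 Kbps guaranteed; no max cap, excess is queued or delayed
 class VOICE
  priority 256    ! min AND max 256 Kbps during congestion; excess is policed (dropped)
```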
Policing simply means anything exceeding the limit is dropped. Whereas with the bandwidth command, anything exceeding what we define is not dropped outright; it is delayed, so the router tries its best to forward it, though it can drop it in the end. But with the priority option, anything exceeding the limit is dropped. The major advantage we get with the priority option is very low latency: your voice traffic, which has to be forwarded immediately without any latency, is guaranteed that here, whereas with bandwidth there is a possibility of delay. Configuration-wise, we can get into the command line and verify the same; the only difference is that we define the priority option instead of using bandwidth. When it comes to configuration, it’s the same thing.