CompTIA Network+ N10-008 – Module 12: Streaming Voice and Video with Unified Communications, Part 3
February 27, 2023

6. 12.5 QoS Markings

In this video, let’s consider how we can mark our traffic so that the next router or switch can look at that marking and make a decision based on it, using some sort of quality of service mechanism. We have markings at both Layer 2 and Layer 3. The Layer 2 marking I want you to know about is called CoS, or Class of Service. We oftentimes see these markings on an 802.1Q trunk. An 802.1Q trunk adds four bytes to a Layer 2 frame, and those four bytes carry various types of information, such as which VLAN the frame belongs to. But for quality of service purposes, there are three bits called the priority bits.

And with three bits at our disposal, let’s do the math: how many different values can we come up with? Two raised to the power of three is eight, so we’ve got eight different values, in the range of zero through seven. However, it’s recommended that we not use six or seven for our production traffic; those should be reserved for network use. That means we’re left with only six values we can use, in the range of zero through five. So for our really high priority traffic, such as Voice over IP, we probably want to mark it at Layer 2 with a CoS value of five. However, there’s a challenge with a Layer 2 marking: it does not cross a router boundary. It’s just like a MAC address getting rewritten every time we route a packet, where the router swaps out the source and destination MAC addresses.
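To make the bit positions concrete, here is a minimal sketch of pulling the three priority bits (and the VLAN ID) out of an 802.1Q Tag Control Information field. The function name `parse_tci` is mine; the field layout (3 priority bits, 1 drop eligible bit, 12 VLAN ID bits) follows IEEE 802.1Q.

```python
# Sketch: extracting the 3 priority (CoS) bits from an 802.1Q Tag
# Control Information field. Layout: 3 bits priority, 1 bit DEI,
# 12 bits VLAN ID.

def parse_tci(tci: int):
    """Split a 16-bit TCI value into its CoS, DEI, and VLAN ID."""
    cos = (tci >> 13) & 0b111   # 3 leftmost bits -> values 0..7
    dei = (tci >> 12) & 0b1
    vlan_id = tci & 0x0FFF      # 12 rightmost bits -> VLANs 0..4095
    return cos, dei, vlan_id

# A voice frame on VLAN 100, marked with CoS 5:
tci = (5 << 13) | 100
print(parse_tci(tci))  # (5, 0, 100)
```

With only three bits, `2 ** 3` gives exactly the eight values zero through seven discussed above.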

A similar thing is going on here: this Layer 2 CoS marking will not cross a router. So what we want to do is have some sort of Layer 3 marking that can cross a router, and many of the Layer 2 switches out there can play an if-then game. We can say: if the CoS equals five, then the Layer 3 marking equals this. So, many times, we can translate a Layer 2 marking into a Layer 3 marking in the switch. And I want you to know about two different Layer 3 markings: IP precedence and DSCP. DSCP, by the way, stands for Differentiated Services Code Point. In an IP version 4 header, there’s a byte (eight bits) called the ToS byte, and ToS stands for Type of Service. There’s something very similar in an IP version 6 packet header.

That’s called the Traffic Class byte, but it works the same. We’ve got this byte, and if we use the three leftmost bits of that byte, we can come up with an IP precedence value. That’s a Layer 3 marking; it can cross a router boundary. But again, just like CoS, we only have six values at our disposal, because two raised to the power of three is eight, the values are zero through seven, and the recommendation is that we don’t use six or seven. That leaves us with only six different values, in the range of zero through five. So again, if I had Voice over IP traffic and I’m marking it with IP precedence, I might want to mark it with an IP precedence value of five. However, with only six different values at our disposal, it’s hard to be as granular as we might want to be.
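The “three leftmost bits” idea can be sketched in a couple of lines. This is my own illustration (the function name `ip_precedence` is mine), assuming the ToS byte is handed to us as an ordinary integer:

```python
# Sketch: IP precedence is the three leftmost bits of the IPv4 ToS byte
# (or the IPv6 Traffic Class byte).

def ip_precedence(tos_byte: int) -> int:
    return (tos_byte >> 5) & 0b111   # shift off the five rightmost bits

# A ToS byte of 101_00000 carries precedence 5, our VoIP marking:
print(ip_precedence(0b10100000))  # 5
```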

We might have more than six different classes of traffic in our network. How can we be more specific about how to treat different traffic types? Instead of limiting ourselves to the three leftmost bits of the ToS byte, what if we used the six leftmost bits of the ToS byte? That would give us a DSCP value. And two raised to the power of six is 64; that gives us a lot of granularity. In the range of zero through 63, we’ve got 64 different markings. In fact, it’s almost too granular. Here’s the challenge: let’s imagine that on my router, I decide that the number 26 is my favorite number, so I’m going to mark my best traffic with 26 and send it to your router.

Well, on your router, based on your opinion, you think 26 is dirt; you want to treat 42 as your high priority traffic. You see the issue: we don’t have any relative levels of priority to compare. Fortunately, the IETF standards body came to the rescue and predefined for us 21 different DSCP values, and they gave them names so we don’t have to remember those specific 21 decimal values. And I want to go through those 21 different names with you. By the way, sometimes you’ll hear this called a PHB, a per-hop behavior. When you hear that, we’re talking about the name of a DSCP value. For one of those names, many routers and switches allow you to type in the word “default” for the value of DSCP, and that’s going to equate to a decimal value of zero.

And if we’re using the six leftmost bits of the ToS byte, what is zero in binary, using six bits? It’s all zeros. Now, at the other end of the spectrum, for our very high priority traffic like Voice over IP, we might mark it with a value of EF, or Expedited Forwarding, and in decimal that’s 46. Now, when the IETF came up with these values, they kept in mind that not everybody is using DSCP. Maybe your router is configured to examine IP precedence values only. Well, if I send you this Voice over IP packet marked with a decimal value of 46, and you’re only paying attention to IP precedence, do you look at that and say, “I don’t speak DSCP, so I’m going to disregard this value”? No.

What your router is going to do, if it’s paying attention to IP precedence, is look at the three leftmost bits in isolation. What if we looked at the three leftmost bits of 46 in binary? I’ve got it highlighted on screen for you: it’s 101. If we looked at just those three bits by themselves, what is that in decimal? It’s a four plus a one; that’s a five. That’s the value that we should use to mark our highest priority traffic with IP precedence. So you see, there is some backwards compatibility. However, we may want to be purely backwards compatible, and by that I mean the ToS byte can be bit-for-bit identical between IP precedence markings and DSCP markings. This gives us complete backwards compatibility, and there are seven of those values, called class selector values. Class selector 1 has a decimal value of eight. Now, notice how this equates to an IP precedence value of one. Again, if we look at just the three leftmost bits of that ToS byte in isolation, 001, that’s a one in decimal. Meaning that if an IP precedence router evaluated a packet marked with a value of class selector 1, in other words a decimal value of eight, it would extract from that an IP precedence value of one, because it’s only examining those three leftmost bits. And the same thing continues through class selector 7. Again, if you look at just the three leftmost bits, those equate to IP precedence values of 1, 2, 3, 4, 5, 6, and 7. And notice that the last three bits in the ToS byte for these class selector values are all zeroes.
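This backwards compatibility is easy to check with a little bit-shifting. The sketch below is my own illustration (function names are mine): DSCP is the six leftmost bits of the ToS byte, and a precedence-only router simply reads the three leftmost of those.

```python
# Sketch: DSCP markings degrade gracefully, because a precedence-only
# router reads just the three leftmost bits of the six-bit DSCP field.

def dscp_from_tos(tos_byte: int) -> int:
    return (tos_byte >> 2) & 0b111111   # six leftmost bits of the byte

def precedence_from_dscp(dscp: int) -> int:
    return (dscp >> 3) & 0b111          # three leftmost of the six bits

ef = dscp_from_tos(0b10111000)          # ToS byte carrying EF traffic
print(ef)                               # 46 -> Expedited Forwarding
print(precedence_from_dscp(ef))         # 101 in binary -> precedence 5
print(precedence_from_dscp(8))          # class selector 1 -> precedence 1
```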

So if you were to take a class selector value, let’s say class selector 6, and put it under a microscope next to an IP precedence value of six, they would look bit-for-bit identical, because these class selector values are not marking bit positions four, five, and six. So that’s a total of nine different DSCP values we can use: one just for default traffic, sort of best effort, where we don’t really have a strong opinion about how to treat it; one, Expedited Forwarding, for our best traffic; and seven values that can give us some backwards compatibility. However, there are twelve more. The IETF gave us 21 different values.
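The seven class selector values follow one simple pattern, which this sketch (function name `class_selector` is mine) makes visible: precedence n in the three leftmost bits, and zeroes everywhere else.

```python
# Sketch: class selector CSn keeps the three rightmost DSCP bits at
# zero, so it is bit-for-bit identical to IP precedence n.

def class_selector(n: int) -> int:
    return n << 3   # precedence n in the leftmost bits, rest zero

for n in range(1, 8):
    cs = class_selector(n)
    print(f"CS{n} = {cs:2d}  (binary {cs:06b})")
# CS1 =  8  (binary 001000)
# ...
# CS7 = 56  (binary 111000)
```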

Let me show you those other twelve values. With those other twelve values, we’re going to pay attention not just to IP precedence, but also to the probability that a packet is going to be dropped if we start to experience congestion. Remember how random early detection works: as the queue starts to fill up in a router, in order to prevent that queue from filling to capacity and then discarding everything, leading to that symptom of TCP slow start for everybody called TCP synchronization, random early detection (or weighted random early detection) starts to somewhat randomly discard traffic as the queue begins to fill up, so we don’t fill it to capacity. And we’ve got three different probabilities of dropping with these assured forwarding values. First, let’s consider the fact that we have four different classes.

We’ve got classes one, two, three, and four, and the class number corresponds to the IP precedence-equivalent value. So notice that in class one we have three DSCP values: AF11, AF12, and AF13. They all have a one after the “AF” (for Assured Forwarding). And if we look at the three leftmost bits of the ToS byte, which give us the IP precedence-equivalent value, it’s 001 for all three of those values. If any of those three values were intercepted by a router speaking just IP precedence, they would all be interpreted as having an IP precedence value of one. But let’s say that our queue is starting to fill to capacity, and we want some traffic to be more likely to be discarded than other traffic. That’s where we turn to that second number in the AF values: AF11, AF12, and AF13 have different drop probabilities. Notice we have a low drop probability, and everything with a low drop probability, in all four of those classes, has a one as its second digit: AF11, AF21, AF31, AF41.
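The AFxy naming reduces to a little arithmetic, which this sketch (function name `af_value` is mine) spells out: the class x lives in the three leftmost DSCP bits and the drop probability y in the next two, so the decimal value is 8x + 2y.

```python
# Sketch: an AFxy DSCP value packs the class x into the three leftmost
# bits and the drop probability y into bits four and five,
# i.e. decimal value = 8x + 2y.

def af_value(af_class: int, drop: int) -> int:
    return (af_class << 3) | (drop << 1)

for c in (1, 2, 3, 4):
    print("  ".join(f"AF{c}{d}={af_value(c, d)}" for d in (1, 2, 3)))
# AF11=10  AF12=12  AF13=14
# AF21=18  AF22=20  AF23=22
# AF31=26  AF32=28  AF33=30
# AF41=34  AF42=36  AF43=38
```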

And in binary, that one comes from bit positions four and five in the ToS byte. You see that we’ve got 01 in isolation; that would be a one in bit positions four and five of our ToS byte. And if you look ahead, it’s the same thing for medium: AF12, AF22, AF32, AF42. If you look at bit positions four and five in isolation, it’s 10; that’s a two in decimal. And then bit positions four and five are 11 for all of the high drop probability markings, so they all have a three as their second value: AF13, AF23, AF33, AF43. Now, what does it really mean to have a high or medium or low drop probability? Let’s consider how random early detection works. Random early detection does not want a queue to fill to capacity and start to overflow.

So what we can do is set a couple of thresholds: a minimum threshold and a maximum threshold. As that queue begins to fill up, there is zero probability of discard until we exceed that minimum threshold. Once we exceed the minimum threshold, the router begins to get just a little bit nervous; it’s concerned that the queue might continue growing and overflow. So it introduces a very slight possibility of discard. But as the queue gets deeper and deeper, that probability of discard increases, until we hit the maximum threshold. And if we go over the maximum threshold, there is a 100% probability of discard. Here’s another way of viewing that.

This is called a RED profile. Let’s say that the minimum threshold is 25 packets and the maximum threshold is 45 packets in the queue. Until we exceed an average of 25 packets in the queue, there is zero probability of discard. But once we get to 26 packets, and then 27, notice the probability of discard begins to increase and increase, until we have an average of 45 packets in the queue; that’s our maximum threshold. And the way this example is configured, when we’re at exactly the maximum threshold, there is a 20% probability of discard. But once we exceed that maximum threshold, there suddenly is a 100% probability of discard. Now, this is a RED profile that we could apply to an entire queue if we were just doing RED.
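The RED profile just described can be sketched as a small function. This is my own illustration, using the example’s numbers (minimum threshold 25, maximum 45, 20% drop probability at exactly the maximum):

```python
# Sketch of the RED profile above: drop probability is zero below the
# minimum threshold, climbs linearly to 20% at the maximum threshold,
# and jumps to 100% beyond it.

def red_drop_probability(avg_queue_depth: float,
                         min_th: float = 25,
                         max_th: float = 45,
                         max_prob: float = 0.20) -> float:
    if avg_queue_depth <= min_th:
        return 0.0   # queue is comfortable: never drop
    if avg_queue_depth > max_th:
        return 1.0   # past the max threshold: drop everything
    # Between thresholds, probability climbs linearly toward max_prob.
    return max_prob * (avg_queue_depth - min_th) / (max_th - min_th)

print(red_drop_probability(20))  # 0.0
print(red_drop_probability(45))  # 0.2
print(red_drop_probability(46))  # 1.0
```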

However, if we’re doing weighted random early detection, then we can give different RED profiles to different markings. Check this out. Remember we said that the high drop probability assured forwarding values ended in a three? Look at this: all four of those high drop probability values (AF13, AF23, AF33, AF43) have the same minimum threshold. So, based on a minimum threshold of 25 for high drop probability, a minimum threshold of 30 for medium drop probability, and a minimum threshold of 35 for low drop probability, let’s imagine we had an average of 28 packets in the queue right now. At 28 packets in the queue, if we receive a packet marked with a value ending in a three (AF13, AF23, AF33, or AF43), are we going to discard it? And the answer is: maybe.

There is a possibility, but there is zero probability of discard for the medium and low drop probability markings until we exceed their minimum thresholds. So that’s how these assured forwarding values have two components. The first component, the 1, 2, 3, or 4, is the IP precedence-equivalent value. And the second component, which is either a 1, a 2, or a 3, refers to the drop probability, which translates into the minimum threshold.

7. 12.6 QoS Traffic Shaping and Policing

There is a category of quality of service tools called traffic conditioners, and we’ve got two tools in that category I want you to know about: shaping and policing. They both serve basically the same purpose: they set a speed limit on your traffic. We can say that we don’t want network gaming traffic, for example, to exceed a certain amount of bandwidth on our network. But let’s consider some of the differences between shaping and policing. Shaping is going to delay traffic that is exceeding the speed limit rather than dropping it. It’s going to say: if I send you now, you’re going to be violating the speed limit, so I’m going to store you in this buffer, and when the bandwidth demand dies down, I’ll take you out of the buffer and send you on your way.

And the recommendation is that we use shaping on slower speed interfaces, because shaping does not drop a packet and therefore cause it to be retransmitted; we’re trying to keep down the load on the network. Policing, on the other hand, is a bit more harsh. Policing can say: if you’re violating the speed limit, you’re going to be discarded, and you’re going to have to be retransmitted. And the recommendation is that it be used on higher speed interfaces. So again, the big distinction: shaping delays traffic, while policing drops traffic that happens to be exceeding the bandwidth. Shaping should be used on lower speed interfaces, and policing on higher speed interfaces. But one of the questions is: how do we send at less than the line rate? After all, we’re dealing with a synchronous circuit.

We send bits out of an interface at a certain rate. How can we send at a slower rate for certain types of traffic? Well, the metaphor you often hear uses the concept of a token bucket. Imagine that we have this big bucket, and we can dump bytes or bits into it. By the way, shaping is going to use bits and policing is going to use bytes, but other than that, they’re very similar. And let’s say that the capacity of the bucket is a value we call Bc; that’s the committed burst, the number of bytes or bits that we can send during a timing interval. You see, we’re going to take a one-second period of time and divide it up into multiple timing intervals, and during each timing interval, we’re allowed to send all the bytes or bits that happen to be in the bucket. But after the bucket is empty, we’re not allowed to send anything else until the bucket gets replenished.

That’s how we can send at less than the line rate. Now, there’s an option: we can extend the depth of this bucket. Instead of just having conforming traffic that obeys the law and stays below the speed limit, we might occasionally allow some exceeding traffic, traffic that goes beyond the speed limit for a brief period of time. Here’s the idea. Let’s say that we’re dumping 10,000 bits into the bucket during each timing interval. Well, during the first timing interval, maybe we didn’t need 10,000 bits; maybe we just needed 5,000. If we have a bigger bucket, we might have a residual of 5,000 bits in that bucket. Then, at the beginning of the next timing interval, we dump in another 10,000 bits, which might give us 15,000 bits in our bucket.

And we could send those at the line rate until we run out of bits in our bucket. So we can have a bigger bucket and periodically exceed the speed limit. Let’s go through a basic example of how this works, because a lot of people have trouble visualizing how we send at less than the line rate. The formula that governs all of this, whether it’s policing or shaping, is this: CIR = Bc / Tc. Let’s break that down. The CIR, that’s the speed limit; it stands for Committed Information Rate, and it’s the average speed over the period of a second. And I emphasize “average” because, as we’ve already said, we’re dealing with a synchronous circuit: if the line rate is ten megabits per second, we’ve got to send traffic at ten megabits per second. We’ve already talked about the Bc; that’s the committed burst, the number of bytes (for policing) or bits (for shaping) that we’re going to put into the bucket each timing interval. And the timing interval is measured by Tc; this is the fraction of a second after which we’re going to replenish our bucket.
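The formula can be checked with a couple of lines of arithmetic. This sketch uses this lesson’s example numbers (a 64 kbps speed limit and 125 ms timing intervals), rearranging CIR = Bc / Tc to solve for the committed burst:

```python
# Quick arithmetic sketch: CIR = Bc / Tc, rearranged as Bc = CIR * Tc.

cir = 64_000        # Committed Information Rate: the 64 kbps speed limit
tc = 0.125          # timing interval Tc: one eighth of a second
bc = int(cir * tc)  # committed burst Bc, in bits, per interval
print(bc)           # 8000
```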

So in our example, just to keep the math easy, let’s use a variant of traffic shaping that takes a one-second period of time and divides it into eight different time slots. What is one eighth of a second? That’s 125 milliseconds, or 0.125 seconds. And let’s say that we want to have a CIR, or speed limit, of 64 kbps; in other words, 64,000 bits per second. If we do the math, we take 64,000, multiply by 0.125, and that’s going to give us a Bc of 8,000 bits. So each timing interval, we’re going to send 8,000 bits. And let’s pretend we’re doing this on a circuit that has a line speed, again just to keep the numbers simple, of 128 kbps. Now, even though we don’t want to exceed 64K, when we start to send data, we must send at the line rate; we have no option. But we can only send as many bits as we have in our bucket right now, and we’ve only got 8,000 bits in our bucket. And the timing intervals, you see, we have eight of those.

They’re each one eighth of a second. And if we send traffic at 128 kbps, we’re going to send all of those 8,000 bits before we get to the next timing interval. So notice: we’re sending 8,000 bits for a period of time, and then we’re stopping, and then we get our bucket replenished. It’s like a big dump truck comes up, backs up, and pours 8,000 more bits into our bucket. And then we send nothing until the next timing interval rolls around. Here’s a metaphor I often give my students. Imagine that I’m going to go visit one of my friends this afternoon, and let’s say they live 60 miles away. I get in my car and start driving to my friend’s house, and I’m driving at a rate of 120 mph, which is violating the speed limit.

And let’s imagine that a law enforcement official pulls me over and says, “Mr. Wallace, you realize you were exceeding the speed limit.” I could accurately say, “Actually, officer, I was not exceeding the speed limit. The speed limit was 60 miles an hour, and I was going 60.” See, over the period of an hour, I would only have traveled 60 miles. Granted, my rate for the first 30 minutes would be 120 mph, but my rate for the last 30 minutes would be 0 mph, so on average, it equals 60. Okay, don’t try that with a real law enforcement official. But that is the way it works with policing and shaping: we dump 8,000 bits in our bucket, we send those at line rate, and when the bucket is empty, we send nothing until the next timing interval rolls around.

So what we end up doing is sending 8,000 bits at line rate eight times during this one-second period. But we’re not sending all the time; that’s the secret of sending at less than the line rate. We don’t transmit all the time. But if you count it up, we’ve sent 8,000 bits eight times over the period of a second. That means the average speed over the period of a second, our speed limit, in other words the CIR, equals 64,000 bits per second, even though our line rate is 128 kbps. That’s the way that both policing and shaping can send at less than the line rate and enforce a speed limit. And one of the big differences I want you to keep in mind: Bc is measured in bits for shaping and in bytes for policing.
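The whole example can be sketched as a tiny simulation. This is an illustrative sketch of my own (constant and variable names are mine), not a real shaper: one second on a 128 kbps line, eight bucket refills, 8,000 bits sent at line rate per interval.

```python
# Sketch: one simulated second of shaping on a 128 kbps line with a
# 64 kbps CIR. Each 125 ms interval sends its 8,000-bit bucket at line
# rate, then goes quiet until the next refill.

LINE_RATE = 128_000   # bits per second the interface physically clocks
BC = 8_000            # bits dumped into the bucket each interval
TC = 0.125            # interval length in seconds (8 intervals/second)

total_sent = 0
for interval in range(8):           # eight intervals make one second
    bucket = BC                     # the dump truck refills the bucket
    send_time = bucket / LINE_RATE  # 8000 / 128000 = 0.0625 s sending
    quiet_time = TC - send_time     # then 62.5 ms of silence
    total_sent += bucket            # everything in the bucket goes out

print(total_sent)                   # 64000 bits in one second: the CIR
```

So even though every bit leaves the interface at 128 kbps, the average over the second is exactly the 64 kbps speed limit.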
