Cisco CCIE Enterprise 350-401 Topic: Infrastructure Part 1
December 13, 2022

6. VTP Lab 01

Let’s just perform the lab task. So we have this arrangement where we have switch one, switch two, and switch three. Let me quickly draw the diagram. So I have switch one, and it is connected to switch two. Then switch two is linked to switch number three. One, two, and three. Switches three and one are also linked together; however, we will turn off or disable that interface. All right, so switches one, two, and three. What interfaces do they have? Say zero/zero and zero/one in between, and then here you have zero/two and zero/three. So it is connected like this. I’ll shut down the interfaces between switch one and switch three, because I don’t want any cross-communication between those switches. So this is the topology, and we want to run VTP on it. I’ll set up either switch one or switch two as the VTP server. So let’s do this. I’ll go and create VLANs 10 to 20. And here we can see the VLANs; we now have a list of VLANs, including some that we created earlier. Then I’ll go to the interface range toward switch three and do the shutdown, because I don’t want switch one and switch three to communicate directly.
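
As a rough sketch, the initial setup on the switch that will act as the VTP server could look like the following; the VLAN range comes from the narration, while the Ethernet interface numbers for the link toward switch three are assumptions based on the topology described above, so adjust them to your own wiring.

! On the switch acting as VTP server (for example, switch one)
Switch1(config)# vlan 10-20
Switch1(config-vlan)# exit
!
! Shut down the direct link toward switch three so there is no cross-communication
Switch1(config)# interface range e0/2 - 3
Switch1(config-if-range)# shutdown
Switch1(config-if-range)# end
!
! Verify the VLAN list
Switch1# show vlan brief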

Okay, we can go and check CDP neighbors and see which switches are in communication with us. So you can see the switches that I can directly communicate with, and that’s the agenda we have. All right, so let’s go and create the VTP domain. I’ll make this “mydomain,” then set the VTP version, and you can see the system stating that the domain name has changed from NULL to mydomain, running version two. And then under VTP, what options do we have? You can see that we can give the mode, we can give the password, and we have the pruning option; we’ll see that. So now I can set the mode to client, server, transparent, or off. This is the server; by default, it will be the server. I’ll go to the other switch here and check the number of VLANs, because if I go and check the VTP status before anything is learned, it will be empty. So here we can see that this is version two; it is running version two, although it understands versions one, two, and three. The domain is mydomain, and you can see that by default this switch is also a server. What is the configuration revision number that it has? That number is four. And this is the number of VLANs. Let’s double-check this on the first switch. So it means that by default, VTP is enabled on these switches. So I’ll go back to switch number one, and inside switch number one, we can check the VTP status.
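
As a minimal sketch, the server-side VTP configuration and the verification command could look like this; the domain name “mydomain” is taken from the narration, and the switch name is illustrative.

Switch1(config)# vtp domain mydomain
Switch1(config)# vtp version 2
Switch1(config)# vtp mode server        ! server is already the default mode
Switch1(config)# exit
Switch1# show vtp status                ! check version, mode, domain, revision, number of VLANs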

7. VTP Lab 02

Let me run the command “show vtp status.” And here you can see that you have the version, you have the mode, and you can see the number of VLANs supported. So everything is perfect and correct; there is no issue here. Now if I go ahead and create one more VLAN, say, for example, VLAN 55, and then check the status, you’ll notice that the number of VLANs has increased to 19 and the revision is five, and the same thing should be replicated to the other switches as well. So the revision goes from four to five. And now if I go ahead and check the VTP status, you can see 19 and five. Although the mode is server, I will change it to client, and we’ll check transparent mode as well. To check the VTP status, I’ll go to switch number three. So let’s double-check that. Here the number of VLANs is still five and the configuration revision is zero, and it is not able to get the information from switch number one, because switches two and three do not form a trunk. So what I’ll do is go quickly here, and we have an interface here, interface e0/2.
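
To make the replication step concrete, here is a rough sketch of what was just described; the switch names are illustrative, and the exact counters depend on how many VLANs already exist.

! On the server, adding one more VLAN bumps the revision number
Switch1(config)# vlan 55
Switch1(config-vlan)# exit
Switch1# show vtp status        ! revision and VLAN count should both increase
!
! On the second switch, change the mode to client
Switch2(config)# vtp mode client
Switch2# show vtp status        ! should show the same revision once it syncs over a trunk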

I can go and manually assign the trunk, or we can do this dynamically with DTP. So I’ll do this: switchport and then trunk. You can see that you have options for encapsulation, native, pruning, and allowed as well. What I want to do is switchport mode dynamic, and we have the desirable option, so switchport mode dynamic desirable will negotiate the trunk. The flip side of this is that if both sides negotiate, ISL may be chosen instead of dot1q. The recommendation is that we should use dot1q, correct? So that’s the reason. Let me quickly go and do switchport trunk encapsulation dot1q, because that’s the standard. So we ran these commands over the interface: switchport mode trunk. And now I don’t really need switchport mode dynamic desirable, because I can simply make the switchport mode trunk. But anyway, let’s go ahead and run these commands on interface e0/2, and then we can check the VTP status. Now you can see that they have formed the trunk, and multiple VLANs are passing. The next step is to change the VTP mode to client, and then we’ll go and check show vtp status. Then here, I will go and create some more VLANs, say VLAN 60 to 70, and then check the status. Under show vtp, you can see what options you have: counters, devices, interface, status, and password. So the total is now 30 VLANs, and the revision is six. You can go ahead and check: total is 30, revision six. I can now go to the other switch and check 30 and six there as well. So we can see that the server and client are exchanging VTP information, and the client is updating its database.
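
A hedged sketch of the trunk configuration between switch two and switch three follows; the interface e0/2 comes from the narration, while everything else is illustrative.

Switch2(config)# interface e0/2
Switch2(config-if)# switchport trunk encapsulation dot1q   ! force dot1q instead of ISL
Switch2(config-if)# switchport mode trunk                  ! static trunk
! Alternatively, let DTP negotiate it:
! Switch2(config-if)# switchport mode dynamic desirable
Switch2(config-if)# end
Switch2# show interfaces trunk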

Now I don’t want to do this; rather, what I want here is that the VTP mode is client first of all, and then I will go and create some more VLANs just to verify, say, VLAN 80 to 90, and we can go and verify the VTP status on switch number three. So now this is 41 VLANs and revision seven. Now, on the switch in between, I will go and make the VTP mode transparent, and then we’ll check the revision. You can see that its revision number has become zero, although it still has the 41 VLANs that it had already learned. So now if I go here and create some more VLANs on the server, and if we go and check the status there, it is revision eight and 51 VLANs. On the transparent switch you will still see zero and 41, and switch three should have become revision eight, but here you can see that its status is the same: the revision number is still seven and the number of VLANs is 41, when it should be 51. So clearly, the transparent switch is not relaying the updates in between, and this is an issue with the virtual devices, because ideally a transparent switch should forward the advertisements, according to theory. So I’ll do one thing: I’ll go back to switch number two, and I’ll make the VTP mode client one more time. Now we should see that they are replicating again in client mode. So server-to-client replication is working perfectly, while in transparent mode the switch resets its own revision number but does not propagate the correct number of VLANs.
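
As a small illustration of the mode changes being discussed, assuming switch two sits between the server and switch three:

! Put the middle switch into transparent mode
Switch2(config)# vtp mode transparent
Switch2# show vtp status          ! its own revision resets to 0, local VLANs are kept
!
! Put it back into client mode so updates replicate again
Switch2(config)# vtp mode client
Switch2# show vtp status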

If I set the VTP mode to off, we can check the status now. The revision number is zero, and no matter how many VLANs exist, a switch in off mode will not learn that data. So now if I go ahead and create some more VLANs, 150 to 160, we can go and check: revision nine and 62 VLANs, and we can check revision nine and 62 here as well. This is the final piece that we need to learn and understand: the pruning method. So if I go here, first of all, I should enable VTP pruning globally. Once you try that, you’ll see that you cannot modify pruning unless the VTP device is in server mode. So that’s the restriction: pruning has to be enabled on the server, because the server has the authority to create the VLANs, and it is replicating the database. So VTP pruning is on. Now what we can do is go to the switch port; let me go to the interface. Under interface e0/0, you have switchport trunk, then pruning, then vlan, and you have add, except, remove, none, et cetera. Now we can go and check the switchport details for interface e0/0. Here you can see the mode, the encapsulation, the trunking, and the pruning VLANs enabled; we have marked that as “none,” so that’s why it’s showing like that. Okay, so this is the way that we can configure VTP and do the verification as well.
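
A hedged sketch of the pruning configuration just described; the interface and VLAN keyword are illustrative.

! Pruning must be enabled on the VTP server
Switch1(config)# vtp pruning
!
! Per-trunk pruning eligibility can then be adjusted
Switch1(config)# interface e0/0
Switch1(config-if)# switchport trunk pruning vlan none   ! make no VLANs pruning-eligible
Switch1(config-if)# end
Switch1# show interfaces e0/0 switchport                 ! shows mode, encapsulation, pruning VLANs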

8. Etherchannel theory

In topic 3.1, we have EtherChannel. We know that EtherChannel is something we can use to bundle interfaces together to increase the overall throughput. So here you can see in the diagram that you have core, distribution, and access. There are generally two places where we want to do this bundling or aggregation of ports or interfaces: at the core and at the distribution layer, generally. That doesn’t mean that we can’t do it at the access layer; we can do it there as well if needed. So what we are achieving with EtherChannel, or port bundling, is that we want to get much more throughput than what we have when we connect single interfaces back to back. In very simple terms, suppose I have switches A and B. So let me try to draw here. Say, for example, one switch here and one switch there.

If I have a one-gig interface, then obviously the overall throughput is one gig. However, you can combine four interfaces to form a four-gig bundle, and the overall throughput will obviously increase to four gigabits. Now, the important thing to understand is that this four-gig bundle is distributed per flow. Suppose you have only one application and you think that one application gets a four-gig pipe; that’s not true. The algorithm or mechanism behind the scenes is that if you have multiple interfaces, they will do load balancing. So one gig and one gig and so on get load-balanced per application flow. That’s the key thing we have here. When we do port bundling, it does not mean that this becomes a single four-gig pipe with four gigs of speed for every application; it’s not the same thing. All right. So we have different methodologies behind this, and we’ll see that we have LACP, the Link Aggregation Control Protocol, and the Cisco-proprietary PAgP, the Port Aggregation Protocol. But the moral of the story is that you are aggregating ports for higher throughput. Now we have to go and check how we can create this. We can do it manually or dynamically; when we are talking about an EtherChannel or bundle, we have both options.
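
Since the per-flow behavior depends on a hashing method, a quick way to see and, if desired, change it is sketched below; the src-dst-ip method is just an example, not a recommendation from the lecture.

Switch(config)# port-channel load-balance src-dst-ip   ! hash on source and destination IP
Switch(config)# end
Switch# show etherchannel load-balance                 ! display the current hashing method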

So, for example, I have two interfaces I can use, and I can use the interface range command and then channel-group 1 mode on. Or maybe you can use some other number, like 20, 30, 40, et cetera. So, when you pick the mode, we have several options; for example, in LACP, we have options like active, passive, and so on. If you are using on, this is a manual thing: manually, you are turning on the channel without any negotiation protocol. Okay? Now, once we run this command, what is happening behind the scenes? Port-channel 1 will be created automatically. Now, this port-channel 1 interface that is created requires that you go and write your logical configuration there; that is where you have to do the configuration. So, for example, if I have two interfaces, 23 and 24, and I want to run commands related to switchport mode and allow certain VLANs, then I do not have to run the commands over the member interfaces, but rather inside the logical interface. So we should understand some of the technical terms. I have a switch, and I have two member interfaces, for example, 23 and 24. When you bundle these interfaces with mode on, you are using the manual method rather than a negotiation protocol such as LACP.
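
A minimal sketch of the manual method, assuming member interfaces 0/23 and 0/24 and channel group 1 as in the example; the switch name and VLAN list are illustrative.

SwitchA(config)# interface range gi0/23 - 24
SwitchA(config-if-range)# channel-group 1 mode on      ! manual bundling, no LACP/PAgP
SwitchA(config-if-range)# exit
!
! The logical interface is created automatically; configure it, not the members
SwitchA(config)# interface port-channel 1
SwitchA(config-if)# switchport trunk encapsulation dot1q
SwitchA(config-if)# switchport mode trunk
SwitchA(config-if)# switchport trunk allowed vlan 10,20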

So at that point in time, for example, the group you are using is one. So you’ll have an interface port-channel 1 here, and whatever configuration you want to put over these interfaces, you should put over the port channel: switchport mode trunk, allowed VLANs, and so on. For example, if you want to create a Layer 3 port channel, you can go to the port channel, do no switchport, and then assign an IP address. Okay, so this is the manual way to do the configuration, although we have the automatic way as well, with protocols to do this; we will see that in the upcoming slide. One important note here is that whenever we are creating a port channel between switches, these are point-to-point configurations. So whenever you create a port channel between two switches, the speed, duplex, and other hardware-related properties should match. So what I’m telling you is that you can create a port channel here, no problem, and you can create other port channels as well, for example, numbers 10 and 20, et cetera. But the member interfaces’ properties, such as speed and duplex, should be the same, right? Otherwise, they will not form the port channel, and they will constantly throw errors as well. All right, so next we have a dynamic way to do the configuration. We have the Link Aggregation Control Protocol, and we have the Port Aggregation Protocol. Now everyone is using LACP, and even in these labs, it is recommended that we learn and understand LACP, although if we learn one methodology, the second will come automatically. So I’m going to focus on LACP, the Link Aggregation Control Protocol. This is the dynamic method of creating bundles. The modes we have for LACP are active and passive. If both sides of the switches are active, they will form the port channel.
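
For the Layer 3 variant that was just mentioned, a hedged sketch could look like this; the channel group number and IP addressing are assumptions.

! Members and the logical interface both become routed ports
SwitchA(config)# interface range gi0/23 - 24
SwitchA(config-if-range)# no switchport
SwitchA(config-if-range)# channel-group 2 mode on
SwitchA(config-if-range)# exit
!
SwitchA(config)# interface port-channel 2
SwitchA(config-if)# no switchport
SwitchA(config-if)# ip address 10.1.1.1 255.255.255.252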

Assume that one side is active and the other is passive. Again, they will form the port channel. If both sides are passive, then they will not form the port channel, because no one is sending the LACP control packets. As a result, both are LACP listeners and will not auto-negotiate the EtherChannel or the bundle. So you can see that for the LACP negotiation, you can go to the interfaces, define the channel protocol, and then you have to go and give the channel-group number and mode active. Let’s say I’m using active on one side and passive on the other; they will go and form the port channel. It’s very easy and straightforward; there’s nothing complicated in this. Now, we have some important things related to LACP: we can have eight interfaces active inside a group, and eight interfaces can work as a backup. So eight plus eight equals a total of sixteen interfaces that can be assigned to an LACP bundle, but those extra eight interfaces are working as hot-standby backups. Now, who will work as an active member? That depends on port priority. By default, that is 32768, and lower is better. Apart from that, we have other variables as well. Another variable is system priority. So, suppose you have two switches; which switch will be the active decision maker and which will be secondary? Who is the primary, and who is the secondary?

So that depends on the system priority. By default, that is also 32768, and a lower system priority is better. So suppose this side is 500 and that side is 32768; obviously the first one is the active decision maker, and it has to take the decisions. By default, both will be the same, so if there is a tie, the lowest MAC address will always be preferred. So whoever has the lowest MAC address will take the decisions, because by default we are not changing the port priority and we are not changing the system priority. So, for system priority, the lowest MAC address wins the tiebreaker, and for port priority, the lowest port number is the tiebreaker. After you’ve completed the configuration, you can view show etherchannel summary; that’s a nice command. There are legends or flags, such as D for down, P for bundled in the port channel, I for standalone, s for suspended, R for Layer 3, S for Layer 2, and so on, so you can check the status. So “SU” against the port channel means Layer 2 and in use, which means the port channel was successfully formed. On a Cisco Nexus the syntax differs slightly from the Catalyst switch, but the concept is the same.
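
If you ever need to influence those elections, the knobs are sketched below; the priority values and interface are arbitrary examples.

! Lower system priority wins the right to decide which ports are active
SwitchA(config)# lacp system-priority 500
!
! Lower port priority makes a member more likely to be active rather than hot-standby
SwitchA(config)# interface gi0/23
SwitchA(config-if)# lacp port-priority 100
SwitchA(config-if)# end
SwitchA# show lacp sys-id            ! system priority plus MAC address
SwitchA# show etherchannel summary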

So this way, we can go and create the port channel. Here, we have some differences. What is the difference between the Port Aggregation Protocol and the Link Aggregation Control Protocol? In the Port Aggregation Protocol, we are using the modes auto and desirable. Desirable is equivalent to active; I think there is a typo on the slide, but desirable does correspond to active, so it’s not a conceptual mistake. In the diagram, there are different use cases. So manually, or via PAgP or LACP, they will all work. If one side is off, the channel will not form; that’s true. Then, if one side is auto and the other side desirable, they will form the port channel. You can see that you have auto there; so let me write auto here, and you’ll have desirable there. Auto is the equivalent of passive in LACP, and desirable in PAgP is the equivalent of active. So if both sides are desirable, they will form the port channel; if both sides are auto, the port channel will not be formed. As we’ve already discussed, LACP works the same way. These are the use cases; these are the scenarios that will work to form the EtherChannel.
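
For completeness, a PAgP version of the same negotiation could be sketched like this; the interfaces and group number are hypothetical.

! PAgP: desirable actively negotiates, auto only responds
SwitchA(config)# interface range gi0/23 - 24
SwitchA(config-if-range)# channel-group 3 mode desirable
!
SwitchB(config)# interface range gi0/23 - 24
SwitchB(config-if-range)# channel-group 3 mode auto     ! desirable+auto forms the bundle; auto+auto does not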

9. Etherchannel Lab

Let us perform the lab task here in our lab, where we have two switches, for example, switch one and switch two, with interfaces on both sides. So we’ll go and form the LACP port channel. First, we’ll try to form it with the manual method. You can use the manual method, and we will use LACP as well. We have the option for PAgP too, but we’ll focus on manual and LACP. So let’s go and do this. First of all, what I will do is default the interfaces that we have, e0/0 and e0/1, because we have done some configuration earlier and I don’t want to keep the old configuration here, so I can go and default it. After I delete this, I can go here and take the range of e0/0 to 1. Let me type it. So we have zero and one, and then there is the option of a channel group. For example, in the channel group, you can see that the number ranges from one to 255. I can pick 100 and then mode on, which means we are turning this on manually. Now here, because this is a virtual device, maybe we’ll get some errors: invalid group/slot number, et cetera, et cetera. Let’s go back and try to make this 10.
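
A rough sketch of those first lab steps, assuming the e0/0 and e0/1 interfaces from the narration:

! Wipe any leftover configuration from earlier labs
Switch1(config)# default interface e0/0
Switch1(config)# default interface e0/1
!
! Bundle them manually; the virtual switch rejected group 100, so group 10 is used
Switch1(config)# interface range e0/0 - 1
Switch1(config-if-range)# channel-group 10 mode on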

Okay, so this is a quirk of the virtual switch itself and not a problem with the actual hardware. So if I use ten, it takes ten. Let’s copy and paste this configuration on the other side as well. So here is the interface range from zero to one, and then I can paste the configuration, or we can go to channel-group ten, or whatever we have; you can see that we have created group ten with mode on. So mode on for group ten is activated. After you create this EtherChannel, we have the option to check show etherchannel summary. S stands for switched Layer 2, and U stands for in use, as you can see. So we have this up and running, and the ports are in the bundle. If you want more information, you can look further into show etherchannel, and you can also look into the load-balancing method. So I can go ahead and check the detail option as well, and you can see all of this information in detail. So here you can see all these detail options that we are getting now, and this is the manual method. If I want to do this with one of the protocols instead, I should start with the interfaces; we have the interface range, and then you can go and define the protocol.
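
The verification commands being referred to could look like this sketch:

Switch1# show etherchannel summary        ! flags: SU = Layer 2, in use; P = port bundled
Switch1# show etherchannel detail         ! per-member and per-group details
Switch1# show etherchannel load-balance   ! current hashing method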

So the channel protocol is either LACP or PAgP: PAgP is Cisco proprietary and LACP is the industry standard. Because the interfaces are already bundled with the manual method, we are getting an error. What I’ll do is exit here and remove that group. Or maybe we can go here, and then we can remove that statement. So I’ll say no channel-group here, and then I’ll go and remove that statement from the other switch as well. So let’s go to the range and paste the removal there too. All right, next, what can we do here? If we want to give this channel protocol the name LACP, then I can go and define the channel group, say, for example, 20. And then you can see the modes we have: active and passive for LACP, and auto and desirable for PAgP. But I want active. Say, for example, that you are active on this side; on the other side, I can go and type the channel protocol as LACP, then channel group 20, mode passive. So either both sides are active, or one is active and the other passive, and it will form the port channel. Now, if I go ahead and check the EtherChannel summary, you can see that we have this, but they haven’t finished negotiating everything yet; that’s why it was showing as suspended. Let’s see what the actual code is here. So there’s waiting (w) and SN against the port channel. So here you can see that you have Layer 2, not in use, and waiting for aggregation. LACP was not yet negotiated, and that’s why it was showing as waiting. Now you can see that port-channel 20 has been formed and it is working properly. So this is the way that we can go and bundle the ports; if you want to see more things related to this, we still have the option of going and checking the channel protocol as well.
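
A hedged sketch of the LACP negotiation that was just configured; the interface range and group number follow the narration.

! Remove the old manual bundle first
Switch1(config)# interface range e0/0 - 1
Switch1(config-if-range)# no channel-group
!
! Switch 1: initiate negotiation
Switch1(config-if-range)# channel-protocol lacp
Switch1(config-if-range)# channel-group 20 mode active
!
! Switch 2: respond to negotiation
Switch2(config)# interface range e0/0 - 1
Switch2(config-if-range)# channel-protocol lacp
Switch2(config-if-range)# channel-group 20 mode passive
!
Switch1# show etherchannel summary       ! Po20 should show SU once LACP converges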

So here you can see the LACP protocol; you have group 20 running LACP. So now we can go and check this out. Now suppose we have some issue that keeps coming back again and again. In that case, you have the option to do the debugging for LACP. Once you enable the debug for LACP, you can check who is the sender, who is the receiver, et cetera. Okay? So you must understand that this debugging is useful, and if I go here, I can debug LACP events, as well as go to config and monitor the terminal. Let’s see if I can monitor it on the screen; otherwise, I have to go and check the logs. If a problem occurs, it will begin sending us these LACP-related negotiation messages for us to investigate. So I think terminal monitor is already set on the console. All right, it will now appear if I go here and do anything. So let’s see. Then I change the channel protocol to PAgP to see if that creates any issues, and if we verify this, we can see some problems in the debug output. So I’ll do one thing: change the channel protocol back to LACP, and then you can go and look at the debug. So this is how we can go about verifying with debug. Keep in mind that it is not recommended to run debugs in the production network; you should have a maintenance window. If the problem is severe, we can bring Cisco TAC online and they can perform in-depth troubleshooting.
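
A sketch of the debugging workflow described here; remember to turn the debug off afterwards.

Switch1# terminal monitor        ! mirror log/debug output to the current session
Switch1# debug lacp event        ! watch LACP negotiation events
! ... reproduce the problem, e.g. a protocol mismatch on the far side ...
Switch1# undebug all             ! always disable debugging when finished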

10. RSTP Rapid Spanning Tree Protocol

In 3.1.c, we have to configure and verify RSTP and MST. So let’s start with RSTP, the Rapid Spanning Tree Protocol. It’s also very helpful to know the other spanning tree flavors; for example, it’s useful to understand PVST+, Rapid PVST+, and MST. But first I’m going to revise the Spanning Tree Protocol itself, or at least the underlying methodology. What exactly happens when we talk about a switching network or switching architecture? In that case, to prevent loops, we are actually running STP. Suppose you have three nodes connected like a triangle: switch A, B, and C. Now here, you can see that if you build your network like this, obviously you will have a loop; a frame can be looped around this triangle. Now, again, you may wonder why we are creating a network like a triangle, or a looped network. The answer is redundancy. We are creating a type of network that can provide us with redundancy. So for the sake of redundancy, we are creating this type of network. For example, if one link is down, you still have one backup link, or maybe you have other links as well; two links down, and you still have one link, et cetera. In a switched network, or a Layer 2 network, we are creating loops, or at least such a network design, for the sake of redundancy.

The second question may be, “Okay, since we are creating the loop, what is the loop prevention mechanism?” And STP, the Spanning Tree Protocol, is the answer: it is a loop prevention mechanism. What it does is elect a root bridge based on the switch priority, and then obviously you have the secondary root and the non-root switches. Whatever interfaces point toward the root are termed root ports, so you have root port, root port, and whatever interface faces a root port is referred to as a designated port. So you have DP, and then you have RP. Now, what about the switches that are neither the root nor the secondary root? In that case, they will compare their bridge IDs and see who has the lowest MAC address. As a result, one of the interfaces will go into blocking (BLK), and the other will act as the designated port. So this is the blocking port, and then you have the designated port. The summary of this explanation is that with this type of arrangement, one of the interfaces is working as a blocking port; suppose your main link goes down. The traffic can still go in the other direction, because the port will move from blocking to listening, then to learning, and then to forwarding. So it will take 30 to 50 seconds to converge, and only after that time will the traffic flow again, because the blocking port eventually becomes a forwarding port. Here’s the issue: we don’t want to wait that long to converge the network, so the solution is to use Rapid STP instead of STP. So let’s see what we have in Rapid STP. This is standardized under 802.1w, the Rapid Spanning Tree Protocol. It’s similar to STP, but there are similarities as well as differences, and we’ll see what the differences are. So here also you have BPDUs, bridge protocol data units; you can imagine that these are the kinds of messages that switches use to build their STP or RSTP topology. Here also, they will select one root bridge as per the lowest bridge ID, whose default priority is 32768; we’ll see that on the next slide. The root and designated port election is functionally identical to STP. These are the similarities; now, what is not similar? You can see that here you have a root port, an alternate port, a designated port, and a backup port. On the other hand, in STP, you have a root port, a designated port, and a blocked port. So let me show you the diagram; it’s always easier to understand with a diagram. We can see here that the switch with the lowest priority, priority 100, will be the root bridge.
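
Just to connect the theory to configuration, the priority that drives the root election can be set as sketched below; VLAN 1 and the value 4096 are arbitrary examples.

! Lower priority (in steps of 4096) makes this switch more likely to become root
SwitchA(config)# spanning-tree vlan 1 priority 4096
SwitchA(config)# end
SwitchA# show spanning-tree vlan 1      ! shows the root ID, local bridge ID, and port roles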

So this side is with respect to STP, and this side is with respect to RSTP. So this guy is the root; whatever is going toward the root bridge is a root port; there you can see these are the root ports, and then, as per the priority comparison, one port will become blocking and one will be the designated port. If you have any issue in the primary direction, then the blocking port will move to the forwarding state, and the packets will flow the other way, right? On the other hand, you can see that in RSTP this switch is the root bridge, for example. So these are the root ports. We have a designated port, just like STP. But instead of a blocking port here, you can see that you have an alternate port, correct? In addition, if you have two links into the same segment, one is the designated port and the other is the backup port. So instead of blocking, we have alternate ports inside RSTP, and that is one of the reasons for faster convergence; but there are other reasons as well. Here you can see that instead of blocking, listening, learning, and forwarding, we have the discarding, learning, and forwarding phases. So we are not waiting for that particular amount of time, but rather moving to the forwarding state as quickly as possible. Initially, a switch port is placed in a discarding state; obviously, then, it has to start its learning. Now, a discarding port will not forward frames or learn MAC addresses. A discarding port will still listen for BPDUs, and the alternate and backup ports will remain in the discarding state. So you have your alternate port and your backup port. They are still listening, but they are not actively participating. So they are listening, they are paying attention, and if a failure happens, they will start forwarding frames.

So they are not in the forwarding state, but they are still listening to BPDUs. Now, RSTP does not need a listening state, as stated. Instead, if a port is elected as a root or designated port, it will transition from discarding to learning. So here, you can see that: if a port is selected as root or designated, it will move from the discarding to the learning state. Now what will happen? In this case, a learning port will begin to add MAC addresses to the CAM table; however, a learning port cannot forward frames yet. So here you can see that you have three states. In discarding you are paying attention; in learning you are beginning to build the MAC table; but in discarding and learning, you are not forwarding packets until you reach the forwarding state. Finally, when a port reaches the forwarding state, it will send and receive BPDUs, learn MAC addresses, and forward frames. Root and designated ports eventually transition to the forwarding state. So once you have the selection of the ports, like root port, designated port, alternate port, et cetera, they will move from discarding and learning to the forwarding state. Now the main question here is why RSTP has faster convergence, and here you will see the core difference between STP and RSTP. In RSTP, BPDUs are generated by every switch and sent out every hello interval; a switch no longer requires an artificial forward delay timer. So what is happening in this case is that all the switches are generating their own bridge protocol data units and forwarding them to their peer switches. Okay. Now what is happening in STP? STP is 802.1D. In STP, BPDUs are generated by the root bridge; if a switch receives a BPDU on its root port from the root bridge, it will propagate the BPDU to the downstream switches, or its neighbors. This convergence process is slow, and STP relies on a forward delay timer to ensure a loop-free environment.

So now you can understand that in RSTP, the switches themselves are generating BPDUs and sending them to their neighbors. But in STP, which is 802.1D, once you have the root bridge selection, one switch becomes the higher authority, and it generates the BPDUs, and all downstream switches will relay them according to their timers. So you can see here that the default nature of STP is slow compared to RSTP. Now in RSTP, switches will handshake directly with their neighbors, allowing the topology to be quickly synced. This allows a port to rapidly transition from the discarding to the forwarding state without delay. So you have a one-to-one handshake, and the port will quickly move from discarding to forwarding and start forwarding packets. Next, we have some port types in RSTP, and first we have edge ports. If you already know about PortFast at this point, PortFast is one of the very popular features; it means that whenever you are connecting the switch to a terminal device, for example a server, you don’t want to run the full spanning tree transition toward that non-switch device. There is no meaning in that. So that’s why we make such an interface a PortFast port: it moves straight to the forwarding state. Likewise, the edge port in RSTP works the same way as a PortFast port.
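
A hedged sketch of making an access port toward a server an edge/PortFast port; the interface and VLAN are hypothetical, and on some platforms the keyword is spanning-tree portfast edge rather than spanning-tree portfast.

SwitchA(config)# interface gi0/10
SwitchA(config-if)# switchport mode access
SwitchA(config-if)# switchport access vlan 10
SwitchA(config-if)# spanning-tree portfast    ! treat this as an edge port; skips listening/learning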

First, an edge port goes immediately to the forwarding state. As a result, there is no listening, learning, and forwarding as in STP, or discarding, learning, and forwarding as in RSTP; it is moved to the forwarding state immediately. Now, for the second port type, you have the root ports, which are linked to other switches and have the best path cost to the root. On one side there is a root port, and on the other there is a designated port, so you can reach the root bridge quickly. Finally, you have the point-to-point port. A point-to-point port is a port that connects to another switch and has the potential to become the designated port for the segment, so it can become a DP. And now, if you go and refer to the actual diagram that we have here, you will understand. So you have a root port that is going toward your root bridge. And as you can see, the designated port is adjacent to the root port. But here you can see that you have a designated port and a backup port as well. All right, finally, we have some important notes here related to RSTP, so let me quickly cover this. Now, if an edge port receives a BPDU, it will lose its edge port status and transition back to a normal RSTP port. On a Cisco switch, any port configured with PortFast becomes an edge port. We know that an edge port, as I already explained, quickly moves to the forwarding state. That’s the summary for the edge port. Now, the RSTP convergence process is below.

So let us quickly understand RSTP’s convergence process and why it converges so quickly; we also discussed this in the previous slide, so let me quickly revise it. Switches exchange bridge protocol data units in order to select the root bridge; this is valid for all the spanning tree protocols. Edge ports immediately transition to the forwarding state, because they are nothing more than PortFast ports. All potential root and point-to-point ports begin in the discarding state. This is actually important: by default, the ports will start in a discarding state, correct, because they are actively listening to BPDUs. They are not learning MAC addresses and they are not forwarding frames. So you can think about it this way: it’s not learning, it’s listening; consider this to be active listening. Then, if a port receives a superior BPDU, it becomes the root port and transitions toward the forwarding state. So obviously, once you learn from the BPDUs, you will move to the forwarding state. Superior implies a BPDU that is better than your own, because in this case all of the switches are generating and exchanging their own BPDUs. Assume this switch has the default priority of 32768, and suppose the other one has priority 32769; that’s not actually a valid value, it’s just for the example. Obviously, when it received that BPDU, it understood, “Okay, I’m hearing from a higher, superior switch.” That specific point-to-point interface will then move toward forwarding as the root port. Okay? For a point-to-point port, each switch will exchange a handshake proposal to determine which port becomes the designated port. Obviously, you do the handshake, and you have to decide who is designated. Once the switches agree, the designated port moves immediately into the forwarding state. As you can see, the convergence is a little different from STP.

So ports start in the discarding state and then move to the forwarding state. A port becomes the root port as soon as it receives a superior BPDU. Remember that if the neighbor is the root bridge, that means your side is the root port and his side is the DP. Okay? So if he is sending the BPDU, it will be a superior BPDU, correct? And when you are sending your BPDU, it’s not a superior BPDU. That’s why it is said that the port will move to the forwarding state quickly: you only have one blocking type of state, discarding. You will be either in learning or in discarding before you reach forwarding, and when you receive a superior BPDU, there is little reason to stay in the discarding state. All right, so this is the summary for the RSTP convergence. Now, every switch will perform this handshake process with each of its neighbors until all the switches are synced. Complete convergence happens very quickly, within seconds. So all the switches are sending their BPDUs, and then the switches are calculating on the basis of the BPDUs and the superior BPDU. Whoever is sending the superior BPDU becomes the root bridge, and its interfaces become designated ports; the adjacent ports become root ports, because they have to reach the superior bridge. And according to that, all the switches will converge. Once they converge and sync, RSTP will be up and running on all the switches, and it only takes a few seconds. And that’s why RSTP convergence is so fast.

11. RSTP TCN

STP and RSTP handle TCNs in different ways. A TCN is nothing but a topology change notification. What happens in STP is that when a port moves to the forwarding state, or when it goes to the blocking or down state, the switch generates a TCN at that time. These TCNs go to the root bridge, especially in STP, which is 802.1D; we know the root bridge must then act on it. What the root bridge does is set the TC bit, the topology change bit, in the BPDUs it is sending, and then it forwards this to all the downstream switches. As a result, all downstream switches become aware that a port somewhere has moved to the forwarding, blocking, or down state. Correct. Again, this is more time-consuming than RSTP. What happens in RSTP? To begin with, only non-edge ports generate a TCN when moving to the forwarding state. So we know that edge ports are acting like PortFast, or that they are equivalent to PortFast, because they go to the forwarding state directly and quickly.

As a result, only non-edge ports generate a TCN when they transition to a forwarding state. The good thing about RSTP is that all the switches can generate the TC BPDU and send it. So, in this case, they are not waiting for the root bridge to recognize that specific topology change notification and then send it to all the downstream switches; rather, whenever a switch sees a change, it will update the other switches itself. So that’s the reason this process is faster. So here you can see that any switch can generate the TC BPDU, allowing the topology to quickly converge via the handshakes. Now, a switch receiving a TC BPDU will flush the MAC addresses learned on its ports, except the port where it received the topology change BPDU. So again, the core thing behind the scenes is that all the switches in RSTP can generate and send the BPDUs, and they can sync the RSTP database, rather than waiting for some max age timer to expire, or waiting for some central authority to send the updates to all the downstream switches as in STP. All right. So here again, you can see that we have some flavors of RSTP, and in the next section, we are going to do the lab related to that; we’ll learn more there. Now here you can see that not only is RSTP’s convergence faster, but it also takes care of two kinds of failures: direct and indirect failures. We know that if you study STP, you’ll discover certain mechanisms that you can use to optimize it, such as UplinkFast and BackboneFast; PortFast is also one of the optimization techniques.

So PortFast is already taken care of by the edge port, by the very nature of the edge port working as a PortFast port. Then, direct and indirect failures are also taken care of inside RSTP. So we don’t need those separate optimizations, because these features are already included. In case of any direct or indirect failure, the alternate ports or the backup ports will not wait; rather, they will move to the forwarding state. Okay, now we have two implementations of RSTP. One is Cisco’s proprietary Rapid Per-VLAN Spanning Tree (Rapid PVST+). Because we are going to run RSTP on a Cisco switch, we will go and use the spanning-tree mode rapid-pvst command. The other one is Multiple Spanning Tree, which we will study in the upcoming sections. So we have our lab set up. Let me quickly show you this lab setup and what we are going to do here. For example, this is the topology diagram that we have. I can say, for example, that I have switch one, switch two, and switch three. So here I’m going to run RSTP in between switches one, two, and three like this. For the time being, I will not use all of the interfaces and switches, but this will suffice if we run RSTP in between and then run a few of the verification commands. So let’s stop here, and in the upcoming section, we have a lab related to Rapid PVST+, or Rapid STP.
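
A minimal sketch of what that upcoming lab will boil down to; the VLAN number used for verification is an assumption.

! Enable the Cisco implementation of RSTP on each switch
Switch1(config)# spanning-tree mode rapid-pvst
Switch2(config)# spanning-tree mode rapid-pvst
Switch3(config)# spanning-tree mode rapid-pvst
!
! Verify the protocol in use and the port roles/states
Switch1# show spanning-tree vlan 1
Switch1# show spanning-tree summary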
