VMware 2V0-41.20 Topic: Logical Switching Part 1
December 21, 2022

1. N-VDS Logical Switch

In this lesson, we'll be looking at layer-two segments within our N-VDS. So let's start with the basics. In this case, let's assume that we already have four ESXi hosts that have been deployed as transport nodes. And of course, we've got our NSX Manager nodes deployed. So we've got the basic underlying pieces that we need here to start creating logical networks. When we establish our transport zone, we create an N-VDS in our NSX management cluster, and that's essentially where you're creating the logical construct that is an N-VDS. It doesn't rely upon vCenter in any way.

So you'll notice that vCenter is not in this diagram, but the N-VDS is very similar to a vSphere Distributed Switch, and distributed is the key word there. Within NSX Manager, we configure the N-VDS, and then that configuration is pushed down and distributed to all of the transport nodes that are participating within that transport zone. So basically, we create the configuration in one place, and host switches are dynamically created on all of these transport nodes. The N-VDS exists on each and every one of these transport nodes as an opaque switch. What I mean by that is that we can go into the vSphere Client, and you'll see this in some of the demos in this course. In the vSphere Client, for example, take a look at the configuration of ESXi 1. If you go to the configuration tab, you'll see this N-VDS as a virtual switch for this ESXi host. You can see the port groups, and you can see which virtual machines are connected to which layer-two segment. But you can't manage any of that in the vSphere Client; that is all managed through the NSX user interface. So on the vSphere side, it's presented as a port group. We can connect our VMs to these layer-two segments, but everything must be managed through the NSX management tools.

And remember, when you create a transport zone, it is mapped to a single N-VDS. So each transport node within a transport zone has this host switch for the N-VDS that is essentially used to connect within that transport zone. A good way to think of it is that the N-VDS essentially exists inside the transport zone. From a naming perspective, the name that you give the N-VDS and the name that you give the transport zone should be related in some way. For example, if we have a production transport zone, we'll also name our N-VDS "production" to keep those aligned. And remember, there can only be one overlay transport zone per transport node. So let's zoom out for a moment here and reiterate where we currently stand. We've got four ESXi hosts that are transport nodes, and they've already been configured for NSX. I've set up a transport zone, and that transport zone has an N-VDS associated with it. All four of these hosts are part of a cluster, and that cluster is part of this transport zone, so this N-VDS is present on all four of these transport nodes. Now, what does that mean? Can I start connecting my virtual machines to the N-VDS? Not yet. We can create layer-two logical switching segments within the N-VDS, and each of these layer-two segments has a unique VNI associated with it. So here we see VNI 5001, and we're calling this segment the App logical switch. VNIs are automatically assigned, and they start at 5000.

That way, they're easy to differentiate from VLANs. If you're familiar with NSX-V, the same numbering scheme is used here. The highest VLAN ID is 4095, so VNIs start at 5000 so that they're easy to distinguish from VLAN identifiers. I can create this layer-two segment within my N-VDS, and the VMs and containers running on my ESXi hosts can connect to these layer-two segments. It's essentially like connecting to a layer-two switch: they might be on different ESXi hosts, but they're still on the same layer-two network. And we can move VMs from one host to another without a problem. This layer-two segment is present on all of the hosts in the transport zone, so the VMs can move from host to host without issue. Do the VMs know which VNI they are connected to? Are the VMs aware in any way that they're connected to an NSX segment? No, they don't know which VNI they're participating in, and they don't care. They know which subnet they belong to and which other addresses are in their subnet. If any traffic is destined for something outside of their segment, it'll be sent to the default gateway, which will be a distributed router. We can create these layer-two segments as either overlay segments or VLAN segments, and which one we create depends on the transport zone we create it within. So let's zoom out here and simplify this diagram a little bit. We're going to bring it down to just two hosts, and each of these hosts has a TEP (tunnel endpoint). Each of these hosts is participating in the same transport zone, and this logical switch segment has been created. So now I've got these two virtual machines, VM One and VM Two.
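Backing up to the numbering scheme for a moment: because 802.1Q VLAN IDs fit in 12 bits and VNIs start at 5000, the two ranges never overlap, and any segment ID of 5000 or above is unambiguously a VNI. A minimal sketch of that distinction:

```python
# Sketch: distinguishing VNIs from VLAN IDs purely by number range.
# NSX-T assigns VNIs starting at 5000, while 802.1Q VLAN IDs only go
# up to 4095, so the two ranges never overlap.

VLAN_ID_MAX = 4095   # 12-bit 802.1Q VLAN ID space: 0-4095
VNI_START = 5000     # first VNI that NSX-T hands out

def classify_segment_id(seg_id: int) -> str:
    """Classify a numeric segment identifier as a VLAN ID or a VNI."""
    if 0 <= seg_id <= VLAN_ID_MAX:
        return "vlan"
    if seg_id >= VNI_START:
        return "vni"
    return "unknown"  # the 4096-4999 range is simply left unused

print(classify_segment_id(100))   # an ordinary VLAN ID
print(classify_segment_id(5001))  # the App logical switch's VNI
```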

They're connected to the same layer-two segment, and you can tell by their IP addresses that they're on the same subnet as each other. So in this case, if VM One wants to communicate with VM Two, there's no need for any routing; the traffic doesn't need to hit the default gateway. Let's assume that these VMs have communicated with each other before, so there's no need for any ARP requests. VM One has a local ARP table that knows the MAC address of VM Two, and vice versa. So they know each other's MAC addresses, and there's no need for ARP in that situation. As this traffic flows from VM One to VM Two, it's going to hit the TEP, and additional outer headers are going to be appended, identifying which TEP that traffic needs to be sent to and which VNI, which layer-two segment, this traffic belongs to. That's the purpose of the outer header that's appended by the TEP as the traffic flows over the physical underlay network. So how did the TEP know to forward this traffic to host two, specifically? That's the NSX controller tables. The NSX controllers maintain MAC tables that track which MAC address is present behind which TEP. So the MAC for VM Two is reachable through this TEP, and that's what the MAC table in the NSX controller tracks.
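Conceptually, the controller's MAC table is just a per-VNI mapping from VM MAC addresses to the TEP IPs that can reach them. Here's a toy model of that lookup; the MACs, TEP IPs, and VNI are made-up illustration values, not anything from a real deployment:

```python
# Sketch: a simplified model of the NSX controller's MAC table, which
# maps each VM's MAC address to the TEP that can reach it, per VNI.
# All addresses below are hypothetical illustration values.

mac_table = {
    5001: {  # VNI 5001, the App segment from the diagram
        "00:50:56:aa:00:01": "192.168.250.11",  # VM One, behind host 1's TEP
        "00:50:56:aa:00:02": "192.168.250.12",  # VM Two, behind host 2's TEP
    },
}

def lookup_tep(vni: int, dest_mac: str):
    """Which TEP should the Geneve-encapsulated frame be sent to?"""
    return mac_table.get(vni, {}).get(dest_mac)

# Host 1's TEP asks: on VNI 5001, where does VM Two's MAC live?
print(lookup_tep(5001, "00:50:56:aa:00:02"))  # → 192.168.250.12
```

If the lookup returns nothing, the destination MAC isn't known behind any TEP on that VNI, which is exactly the case where flooding or ARP suppression mechanisms come into play.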

As the traffic actually flows over the physical network, this outer header is appended. The destination MAC is the MAC address of the remote TEP, and the destination IP is the IP address of that TEP. The VNI in the outer header identifies which layer-two segment this traffic belongs to. As the traffic arrives at the TEP, it'll strip off that MAC and that IP (because it is the destination), see the VNI, and dump the traffic onto the appropriate layer-two segment. Now, one little technical side note here. What I would suggest in an NSX environment is that you go to the VMs and configure a maximum MTU of about 8900. Here's the reason why: the N-VDS doesn't have the ability to fragment and reassemble traffic. So if the VM's MTU is 9000 and those large frames come out of the VM and hit the TEP, the TEP then needs to append additional headers, and that may exceed the MTU of the physical network. What you basically want to do is ensure that as traffic comes out of the VM, it has a little headroom so that the TEP can add more headers and still have that traffic flow over the physical network. One other techy side note here, before I wrap up this lesson: Geneve is a UDP-based protocol, which means that it's unreliable by nature. It does not guarantee delivery.
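Backing up to the MTU point for a moment, the headroom math can be sketched out. The overhead values below are typical for Geneve over IPv4 without options; the Geneve header is variable-length, so real overhead can be larger, which is why recommending 8900 rather than the exact maximum leaves a safety margin:

```python
# Sketch: why the VM's MTU needs headroom below the physical MTU.
# Once encapsulated, the VM's own Ethernet header becomes payload,
# and Geneve adds outer IP, UDP, and Geneve headers on top.

INNER_ETHERNET = 14  # the VM's frame header, carried as payload
OUTER_IP = 20        # outer IPv4 header
OUTER_UDP = 8        # outer UDP header
GENEVE_BASE = 8      # Geneve base header (options add more)

def max_vm_mtu(physical_mtu: int, geneve_options: int = 0) -> int:
    """Largest inner-packet MTU that still fits the physical MTU."""
    overhead = INNER_ETHERNET + OUTER_IP + OUTER_UDP + GENEVE_BASE + geneve_options
    return physical_mtu - overhead

# With a 9000-byte physical MTU, ~50 bytes of base overhead leaves 8950;
# setting the VM to 8900 keeps extra room for Geneve options.
print(max_vm_mtu(9000))  # → 8950
```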

So let's say, for example, that some Geneve-encapsulated frame is leaving TEP one and heading for TEP two. There's no connection-oriented protocol there. It's not connection-oriented; there are no acknowledgments of receipt of that data. So it's not going to make sure that the other end is actually receiving that traffic. That may seem a little concerning, because this could be very important traffic. Well, the thing that you want to keep in mind is the inner header: this could be TCP. I don't really show layer four here, but we could have TCP originating from the virtual machine. So a connection-oriented protocol, with acknowledgements and verification that the traffic was actually received, can be carried out by the virtual machines themselves. If some traffic does not arrive at VM Two, it can be retransmitted by VM One, and that retransmission gets these additional headers and travels as UDP over the physical network. So even though the outer transport is UDP, we can still have a connection-oriented protocol on the inner headers between VM One and VM Two. Don't let the fact that Geneve is UDP make you feel like this is not going to be as reliable as traditional network communications. Again, on the inner header, we can be using TCP at layer four.
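The layering described above can be sketched as a simple data structure: UDP on the outside for the Geneve tunnel, TCP on the inside for end-to-end reliability. The addresses here are hypothetical illustration values; the UDP destination port 6081 is the IANA-assigned Geneve port:

```python
# Sketch: the header layering that makes "unreliable" Geneve safe.
# The outer transport is UDP, but the inner, VM-to-VM conversation can
# still be TCP, and TCP retransmissions happen end to end between VMs.
# Addresses are made-up illustration values.

encapsulated_frame = {
    "outer": {
        "eth_dst": "TEP-2-mac",
        "ip_dst": "192.168.250.12",  # destination TEP's IP
        "l4": "UDP",                 # Geneve rides on UDP
        "udp_dport": 6081,           # IANA-assigned Geneve port
        "geneve_vni": 5001,
    },
    "inner": {
        "eth_dst": "VM-Two-mac",
        "ip_dst": "172.16.61.12",    # hypothetical VM Two address
        "l4": "TCP",                 # reliability lives here, end to end
    },
}

# If a Geneve datagram is lost, the inner TCP session notices the gap
# and VM One retransmits; the retransmission is simply re-encapsulated
# in a fresh UDP datagram by the TEP.
print(encapsulated_frame["outer"]["l4"], "outside,",
      encapsulated_frame["inner"]["l4"], "inside")
```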

2. vSphere 7, vDS, and N-VDS

We really only have this one option, the N-VDS, as we saw in the last lesson. Basically everything related to the N-VDS is managed strictly within NSX-T, and this is great for non-VMware environments like KVM or bare metal. But let's take a step back and think about this from an implementation perspective. Let's say that we have an existing vSphere environment and we're looking to do a brownfield deployment of NSX-T. We already have a vSphere Distributed Switch deployed, and odds are, if we do, most of the physical adapters, most of the VMNICs that you have available on these ESXi hosts, are going to be dedicated to that vSphere Distributed Switch. So how do I handle setting up an N-VDS?

On top of that, I'm going to have to take adapters away from that vSphere Distributed Switch and give them to the N-VDS. But what about the virtual machines? I still have virtual machines on that vSphere Distributed Switch, virtual machines that have not been migrated to NSX-T yet, and now I'm taking adapters away from that switch and giving them to the N-VDS. So what I may end up with here is a situation in which I don't have enough physical adapters to support both running at the same time, and I either have to do some big cutover or I have to buy physical adapters just to supplement while I'm in this transition period. That can be a real headache when transitioning from an existing vSphere Distributed Switch environment to NSX-T. Now let's think about utilizing the VDS instead. And by the way, the N-VDS is going to be deprecated in a future release. We'll talk about that now.

It's not going to be deprecated for KVM, and it's not going to be deprecated for bare metal, but it is going to be deprecated for ESXi hosts that are transport nodes. With the release of vSphere 7 came vSphere Distributed Switch 7.0, and this version of the vSphere Distributed Switch now supports NSX distributed port groups. So I can use the same vSphere Distributed Switch for NSX-T 3.0 and for vSphere 7 networking simultaneously. I could create layer-two segments in NSX-T, and those layer-two segments would exist on top of the same vSphere Distributed Switch. So here's a layer-two segment, and by the way, I've also got my distributed port groups, and I've got those running on the same underlying vSphere Distributed Switch with the same underlying set of physical adapters. Now I don't have to pull adapters away from the existing platform to put them on an N-VDS. This is one of the major advantages of using the VDS instead of having a separate switching mechanism with the N-VDS. But there's also one more very significant benefit to the integration of NSX-T 3.0 with vSphere 7, and it comes down to micro-segmentation. So let's assume that's the reason you want to roll out NSX-T: you want micro-segmentation.

The other stuff is great, but you don't really care about it so much. You're not really interested in distributed routing, edge services, or anything like that. You're buying NSX-T strictly for micro-segmentation purposes. Now, I know we really haven't started talking about micro-segmentation yet, but let me break it down in very simple terms. Here I've got two virtual machines, and let's assume that these two virtual machines are connected to the same port group on the same vSphere Distributed Switch. What I want to do is establish a set of firewall rules. Let's assume that VM One is a web server, so traffic is going to flow into VM One. We also want to allow some traffic to leave VM One and flow to VM Two, which is providing a database for the web server. But we don't want the same type of traffic that's allowed to hit VM One to also be allowed to hit VM Two. We don't want to allow that web traffic to hit VM Two. We want to open up a very specific set of openings for VM Two: it should only be traffic coming from VM One, and it should be traffic on the database port.

So ideally, what I'll have here is a set of firewall rules attached to VM One and a different set of firewall rules attached to VM Two, even though those virtual machines are on the same layer-two network. Even though those virtual machines are connected to the same port group on the same vSphere Distributed Switch, I want to give them different sets of firewall rules. That's micro-segmentation, and it's one of the major features of NSX-T. Well, what I can do with vSphere 7 and NSX-T 3.0 is install NSX-T and use the vSphere Distributed Switch as my switch for NSX-T. And then I am immediately able to start creating these micro-segmentation rules. I could create these micro-segmentation rules for virtual machines even though I haven't migrated them to an NSX segment. That's a huge advantage if you're just trying to get to the point where you can do micro-segmentation and don't want to configure all of NSX-T's other features. You just want micro-segmentation. You don't have to re-architect your network here.
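The web-to-database rule set described above could be expressed through the NSX-T Policy API as a security policy. This is a hedged sketch: the group paths, service path, and policy name are hypothetical placeholders, and the exact schema should be verified against the NSX-T Policy API reference for your version:

```python
# Sketch: the kind of JSON body you could send to the NSX-T Policy API
# to express "only web-VM traffic on the database port may reach the
# DB VM". Group and service paths are hypothetical placeholders.
import json

def build_db_policy() -> dict:
    return {
        "display_name": "web-to-db",
        "category": "Application",
        "rules": [
            {   # allow only the web tier to talk to the DB tier
                "display_name": "allow-web-to-db",
                "source_groups": ["/infra/domains/default/groups/web-vms"],
                "destination_groups": ["/infra/domains/default/groups/db-vms"],
                "services": ["/infra/services/MySQL"],
                "action": "ALLOW",
            },
            {   # drop everything else aimed at the DB tier
                "display_name": "deny-other-to-db",
                "source_groups": ["ANY"],
                "destination_groups": ["/infra/domains/default/groups/db-vms"],
                "services": ["ANY"],
                "action": "DROP",
            },
        ],
    }

# You would PUT/PATCH this body to something like
# /policy/api/v1/infra/domains/default/security-policies/web-to-db
print(json.dumps(build_db_policy(), indent=2))
```

Note that rule order matters: the specific ALLOW precedes the broad DROP, which is what gives each VM its own effective rule set even on a shared port group.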

You can just roll out NSX-T and start creating micro-segmentation rules on the virtual machines that are already connected to your existing port groups. So just bear in mind that the N-VDS, the host switch that we install and control through the NSX-T user interface, will be deprecated in a future release. Going forward, the plan is to converge the NSX and ESXi host switches. However, the N-VDS is not going to be completely deprecated. It's going to remain for KVM, for the NSX-T Edge nodes, and for bare-metal workloads. So the N-VDS is only being deprecated on ESXi host transport nodes, not on other types of transport nodes. We'll really have to wait for a future release to see exactly how all of this plays out. There is a transition path from N-VDS to VDS. It involves manual steps, and you should be contacting VMware support to help you with that transition if you want to do it. But the bottom line right now is that if you are in a pure vSphere environment and you're rolling out NSX-T 3.0, you should be using the vSphere Distributed Switch as the underlying switch for NSX-T.

3. Demo – Create Layer 2 Segments

In this video, I'll demonstrate how to create a new layer-two segment in NSX-T 3.0, and I'll be demonstrating these tasks using the free labs that are available at hol.vmware.com. When we're creating a layer-two segment, what we're essentially doing is creating a layer-two network, and we're going to be using Geneve encapsulation to pass this layer-two traffic from one transport node to another. As you can see, I've already logged into the NSX-T user interface. I'm going to click on the Networking button here, and under Networking, I'm going to click on Segments. Because we're using these free labs at hol.vmware.com, there are already a bunch of segments that were automatically created along with the lab environment. But let's go ahead and create our own new segment, so I'm going to click on Add Segment here. I'll name this segment LS-Rick, and I'm going to choose either a Tier-1 or a Tier-0 gateway to connect to. We'll talk more about routing later on, but in this particular scenario, we're using what's called a single-tier routing topology.

So we don't have a Tier-1 gateway; I'm going to connect to a Tier-0 gateway. Basically, I'm choosing the router that this segment is going to connect to, and I'll also go ahead and configure the subnet on the segment itself. First, let me choose the transport zone here. I'm going to choose my overlay transport zone, and then I'm going to establish a subnet. I'm going to put in 172.16.61.1/24, and this is going to be the interface on my router. The Tier-0 router is actually going to be configured with this interface to connect to this subnet. So I am configuring the subnet on the segment itself, but the routing part of it isn't actually configured as part of the segment; it's configured on the upstream Tier-0 or Tier-1 gateway.
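The same segment could be created through the NSX-T Policy API instead of the UI. This is a sketch under assumptions: the Tier-0 path and transport-zone path below are hypothetical placeholders for a lab, and the field set reflects the Segment schema as I understand it; verify against the Policy API reference for your NSX-T version:

```python
# Sketch: building the Policy API body for the LS-Rick segment.
# The tier-0 and transport-zone paths are hypothetical placeholders.
import json

def build_segment(name, gateway_cidr, tier0_path, tz_path) -> dict:
    body = {
        "display_name": name,
        "transport_zone_path": tz_path,
        # The gateway address configured "on the segment" really becomes
        # an interface on the upstream gateway, as described above.
        "subnets": [{"gateway_address": gateway_cidr}],
    }
    if tier0_path:  # a segment can also be created with no gateway at all
        body["connectivity_path"] = tier0_path
    return body

seg = build_segment(
    "LS-Rick",
    "172.16.61.1/24",
    "/infra/tier-0s/T0-GW",  # hypothetical Tier-0 gateway path
    "/infra/sites/default/enforcement-points/default"
    "/transport-zones/overlay-tz",  # hypothetical transport zone path
)
# You would PATCH this body to /policy/api/v1/infra/segments/LS-Rick
print(json.dumps(seg, indent=2))
```

Passing `None` for the gateway path leaves out `connectivity_path`, which matches the "segment with no upstream gateway" case demonstrated later in this lesson.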

Any virtual machines that are connected to the segment should be using this address as their default gateway; that's where they should send traffic destined for other networks. So let's go ahead and scroll down a little bit here and click on Save. Now I've created a new layer-two segment, and it's connected to my Tier-0 gateway. It's asking me, "Do you want to continue configuring the segment?" I'll go ahead and click yes, and from there I can configure different settings: my segment profiles, some DHCP static bindings, and so on. But I'm going to just click on "close editing" for the time being, and let's move over to the vSphere Client. So here I am in the vSphere Client, in the Hosts and Clusters view, and I'm going to pick a virtual machine that is in my compute cluster here. I'm going to go with Web-04a. I'll right-click it and go to Edit Settings. What I want to do here is take this virtual machine and connect it to the new layer-two segment that I just created. Under Network Adapter 1, I'm going to change the network that we're connecting this virtual machine to, and I'll scroll down and find the new port group that I've just created. Here it is, my layer-two segment, LS-Rick. So I'll go ahead and click OK there.

So I went ahead and powered on Web-04a. Now that I've refreshed my screen, I can see the virtual machine is booted up, and I can see its IP address; notice that it falls right within the range that I configured on that layer-two segment. Remember that the gateway address I used for the layer-two segment was 172.16.61.1; that's going to be the default gateway for this virtual machine. Now that the virtual machine is booted up, let's launch the web console for this VM and get logged in. And now that we're in, let's just do a quick ifconfig command. We can see up here, towards the top, that eth0 is on that 172.16.61.0 network. So let's try a little ping command here. Let's try to ping the default gateway: I'm going to ping 172.16.61.1, and I can see that those responses are coming back successfully. That was the address that we configured as our default gateway, and it's responding. Now that I know that my default gateway is working, let's try to ping something on a different subnet. I'm going to try to ping one of the other virtual machines in this lab kit. It's a database VM, and those pings are coming back successfully as well.

So we just set up the segment, but we're already able to ping virtual machines that are on different segments. How is that possible? The quick answer: let's go back to the NSX-T user interface and take a look at our Tier-0 gateway. There's already a Tier-0 gateway that was built into my lab environment for me. We've already got this router established, and I can see all of the segments that are linked to it. One of the segments linked to it is LS-Rick, with this default gateway IP address. But there are other segments connected to it as well, including one on the 172.16.30.0 network. So we can route traffic between these two subnets through this Tier-0 gateway. When I created my LS-Rick segment, what happened in the background was that the subnet I configured, which was actually the IP address of the default gateway, resulted in this segment getting an interface on the Tier-0 gateway that is being used as the segment's default gateway. And so the question becomes: can I create a segment even if I don't have a gateway? Can I create one of these layer-two segments without putting in a gateway for an upstream connection? Let's give it a try. Let's go to Segments, and I'm going to click on Add Segment. I'm going to call this segment "test", and for connectivity, I'm not going to choose my Tier-0 gateway; I'm just going to leave that at None. I'm going to connect to my overlay transport zone, and I'm going to establish a subnet. I'll just put in a simple subnet with a gateway address, and so I've configured a subnet there as well.

I'm going to scroll down and save this. It asks me, "Do you want to continue configuring this segment?" and I'll just click no. So that shows that I don't necessarily have to have a router set up prior to creating a segment. I don't have to have a Tier-0 or Tier-1 gateway configured to connect this segment to. I can always create a segment, and then later on I can edit that segment and choose which routing component I want to connect it to. So I'm not going to connect this to my Tier-0 gateway; I'm just going to cancel this. But now I've got a new segment. It's a segment that is essentially not going to do anything; it's not connected to a router. Any machines that are connected to this particular segment can communicate with one another as long as they're in the same IP address range. And guess what? They don't even need to be in the IP address range I configured. They're on the same layer-two network, so as long as they're configured with addresses in the same subnet as each other, they'll be able to communicate with one another.

4. Demo – Configure Segment Profiles

In this lab, we'll explore segment profiles, and we'll do so using the free labs that are available at hol.vmware.com. So here we are on the home screen of the NSX-T 3.0 user interface, and we're going to click on Networking. Under Networking, we'll click on Segments, and then we'll click on Segment Profiles. There are some segment profiles that are already built here; some of them are default options, and some of them are configured specifically for the HOL environment. I'm going to add a new segment profile, and you can see here that there are a number of choices for the different types of segment profiles. Regardless of the type of segment profile that we choose to create, what we're essentially doing is creating a standardized configuration.

As I create new segments, these are a standardized set of configurations that I can apply to those segments, which keeps me from having to configure all of these settings on every segment manually. For our first example, let's start with QoS, quality of service. The basic idea of QoS is that I have certain types of traffic that I want to prioritize over others. I want to give certain traffic priority access to bandwidth, and I want to make certain traffic discard-eligible. So the basic idea here is to control which traffic gets prioritized during times of contention.

With NSX-T 3.0, we can set a class-of-service value that is actually applied at layer two, so class of service provides prioritization at layer two. DSCP priority is applied at layer three. Think of it this way: class of service is used by my switches, while DSCP is used by my routers. I'm going to name this segment profile something very simple, like RickDemo-QoS. The first configuration setting that we'll choose is the DSCP mode: do we want this to be trusted or untrusted? When you choose trusted, we're basically deciding what happens to these quality-of-service settings when the traffic eventually gets encapsulated. The traffic is going to get encapsulated using Geneve and passed across the physical underlay network, and the part that gets encapsulated includes whatever quality-of-service settings we establish here.

So how are we going to handle this quality-of-service configuration getting encapsulated by Geneve before it even hits the physical network? If the DSCP mode that we choose is trusted, that means whatever DSCP priority is set on the inner header will actually be copied to the outer headers of the Geneve-encapsulated packets as they hit the physical network. So the physical network itself can still see the DSCP priority that I'm configuring here. Trusted basically means the DSCP values I apply here get carried over and applied within the physical underlay network. Then we can set a priority value: the DSCP priority can be anywhere between 0 and 63, with 0 being the best-effort default. I'm going to give this a priority value of 40. The class of service, which is applied at layer two, can be anywhere from 0 to 7, with 0 being best-effort service. Over here on the right, we see some settings for ingress, ingress broadcast, and egress traffic. When you configure these options, you can use average bandwidth to reduce network congestion and peak bandwidth to allow bursting when we need to. You're not really guaranteeing bandwidth here; what you're essentially doing is setting some limitations on the amount of bandwidth that can actually be consumed by this segment.
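Everything configured in this QoS profile could also be expressed as a Policy API body. This is a hedged sketch: the field names follow the SegmentQoSProfile schema as best I recall, and the shaper `resource_type` names in particular are assumptions; verify them against the NSX-T Policy API reference before relying on this:

```python
# Sketch: a QoS segment profile as an NSX-T Policy API body.
# Shaper resource_type names are assumptions; bandwidth values are
# arbitrary illustration numbers.
import json

qos_profile = {
    "display_name": "RickDemo-QoS",
    "class_of_service": 0,   # layer-2 CoS: valid range 0-7
    "dscp": {
        "mode": "TRUSTED",   # carry the inner DSCP value onto the underlay
        "priority": 40,      # DSCP: valid range 0-63
    },
    "shaper_configurations": [
        {
            "resource_type": "EgressRateLimiter",  # assumed type name
            "enabled": True,
            "average_bandwidth": 100,  # sustained limit (Mbps)
            "peak_bandwidth": 200,     # short bursts allowed up to this
            "burst_size": 102400,      # burst allowance in bytes
        }
    ],
}
# You would PATCH this to something like
# /policy/api/v1/infra/segment-qos-profiles/RickDemo-QoS
print(json.dumps(qos_profile, indent=2))
```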

The goal here is to say: we've got this one physical underlay network, and we don't want one segment to consume all of that bandwidth. So what we may want to do is configure some limitations on a per-segment basis using this segment profile to prevent it from overwhelming the physical underlay network. That's the first type of segment profile. I'm just going to cancel this, click on Add Segment Profile, and create an IP Discovery segment profile. The IP Discovery profile is all about things like ARP snooping and DHCP snooping. For example, with DHCP snooping, if a client comes online and requests an IP address, DHCP snooping can observe that and determine which IP address is being assigned to which MAC address. The same goes for ARP snooping: if an ARP request is sent, that's essentially a machine trying to learn the MAC address associated with some specific IP address. VMware Tools can also be used to discover the IP addresses of virtual machines. That's what this whole profile is about: learning which IP addresses are associated with which MAC addresses and understanding all of those mappings, because those are going to be valuable for things like the distributed firewall.

It's also valuable because it's going to help us suppress ARP requests and eliminate unnecessary broadcasts from being distributed all over these layer-two segments. And the other big part of this is important for the distributed firewall. Let's say that I create a firewall rule for some particular virtual machine. Because we have IP discovery, the distributed firewall knows which IP addresses are associated with which virtual machine. As a result, the distributed firewall can use this to apply a rule created for a VM to the IP address associated with that virtual machine. Next, let's take a look at the SpoofGuard segment profile. The goal with SpoofGuard is to prevent spoofing attacks. What we can do here is create a SpoofGuard policy that will prevent virtual machines from sending traffic from an IP address that they're not authorized to send traffic from. Basically, if a virtual machine is trying to send traffic from a source IP address that does not actually belong to that virtual machine, SpoofGuard can be used to block that. This provides additional security because it can prevent rogue or compromised virtual machines from assuming the identity of a legitimate virtual machine.

The other big benefit of this is that it allows us to ensure that the firewall rules that we create can't be bypassed by simply changing the IP address of a virtual machine. Those IP address changes will need to be approved when we're utilizing SpoofGuard, so we can protect against that as well. SpoofGuard is all about understanding the actual IP and MAC addresses associated with virtual machines and preventing attempts to spoof those addresses. So I'm going to go ahead and cancel this, and let's take a look at the Segment Security type of segment profile. Here's where we can configure some basic security settings like BPDU filtering, DHCP snooping, and DHCP blocking. These are just the basic security settings that we may want to configure on any layer-two Ethernet segment. Do we want to block BPDUs? BPDUs are related to the Spanning Tree Protocol, which is used to establish a loop-free topology in an Ethernet network, and we may want to block those here. We may also want to block certain DHCP messages; maybe we don't want to allow clients connected to this segment to send out DHCP requests, and I can block that here. So let's cancel this.
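As with the QoS profile, the segment security settings above map to a small Policy API body. A hedged sketch, with field names following the SegmentSecurityProfile schema as best I recall; verify against the NSX-T API reference before using:

```python
# Sketch: a segment security profile as a Policy API body.
# Field names are my best recollection of the SegmentSecurityProfile
# schema and should be checked against the official API reference.
import json

security_profile = {
    "display_name": "RickDemo-Security",
    "bpdu_filter_enable": True,         # drop spanning-tree BPDUs from VMs
    "dhcp_client_block_enabled": True,  # stop VMs on this segment from
                                        # sending DHCP client requests
    "dhcp_server_block_enabled": True,  # stop VMs from answering as a
                                        # rogue DHCP server
}
# You would PATCH this to something like
# /policy/api/v1/infra/segment-security-profiles/RickDemo-Security
print(json.dumps(security_profile, indent=2))
```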

Lastly, let's take a look at the MAC Discovery segment profile. This is used to control how MAC addresses are learned and how MAC address changes are handled. The MAC address change option allows virtual machines to change their MAC addresses. These settings may look familiar if you've worked with vSphere Distributed Switches or vSphere Standard Switches and their port groups, where you may have seen settings like forged transmits. Are we going to allow virtual machines to do things like spoof their MAC address? And do we want to allow MAC learning, which can be useful if we have multiple MAC addresses associated with one specific port? An example could be a nested hypervisor: maybe an ESXi host running as a virtual machine within this environment, with many different virtual machines running on that hypervisor. In that case, there are going to be multiple MAC addresses associated with one single virtual network interface card. I can also configure a MAC limit that caps the number of MAC addresses that can be learned. So I'm just going to cancel this. In this video, we've learned about some of the different segment profiles that we have available to us. The primary thing that I want you to take away from this lesson is a simple understanding that we are doing policy-driven configuration here. We create and completely configure these profiles once, and then they can be applied to many different segments, so we're not having to configure each and every one of those segments individually.
