VMware 2V0-41.20 Topic: Preparing Transport Nodes for NSX-T
December 21, 2022

4. VLAN Transport Zone

So here in this situation, we're going to focus on VLAN transport zones. And just like in the last lesson, we have a single VLAN transport zone created here that's associated with a group of transport nodes. We have four ESXi hosts; those are our transport nodes. At the top right, you can see the VLAN-TZ, which is our VLAN transport zone. And within that VLAN transport zone, I can create segments. Each segment that I create is going to be associated with a specific VLAN. And what's happening is that VLAN-backed port groups are being created on my ESXi hosts.

So, in essence, these are the same as the port groups we've always had with the vSphere Distributed Switch. They are VLAN-backed port groups. And as we get further into this course, we'll learn about the edge nodes. The edge nodes are going to connect to this VLAN-backed transport zone, and the edge will also connect to a VNI-backed N-VDS segment. So essentially, the edge becomes the bridge between my overlay networks, the Geneve-backed segments that I create within NSX, and those VLAN-backed network segments. And remember, VLAN-backed segments are just port groups. So devices on the physical network within my data center can be on the same VLAN, and they can communicate with these VLAN segments natively.
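To make this concrete, here is a minimal Python sketch of what creating a VLAN-backed segment could look like through the NSX-T Policy REST API instead of the UI. This is an illustration under assumptions, not an authoritative recipe: the manager address, credentials, segment name, VLAN ID, and transport zone UUID are all placeholders, and the /policy/api/v1/infra/segments path reflects my reading of the NSX-T 3.0 Policy API, so verify it against the API guide for your version.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MGR = "https://nsxmgr.example.local"    # placeholder NSX Manager address
AUTH = HTTPBasicAuth("admin", "changeme")    # placeholder credentials

# Declarative intent for a VLAN-backed segment in the VLAN transport zone.
# The transport zone UUID below is a placeholder you would look up first.
segment_body = {
    "display_name": "demo-vlan-segment",
    "vlan_ids": ["100"],   # the VLAN backing this segment (example value)
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default/"
        "transport-zones/<vlan-tz-uuid>"
    ),
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/segments/demo-vlan-segment",
    json=segment_body,
    auth=AUTH,
    verify=False,   # lab-style self-signed certificates
)
resp.raise_for_status()
print("Segment created/updated:", resp.status_code)
```

The same PATCH is idempotent, which is why a declarative call like this maps nicely onto the "segments inside a transport zone" model described above.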

5. Demo – Configure Hosts for NSX-T

In this video, I'll walk through the host preparation process for ESXi hosts in an NSX-T 3.0 environment. I'll be using the free hands-on labs at hol.vmware.com. So here you can see I'm already logged into the NSX-T user interface, and I'm going to click on the System tab up at the top. Under System, I'll click on Fabric and then Nodes. Under Nodes, I'm going to specifically take a look at the nodes that are managed by a vCenter Server appliance. So let's expand here and see what kind of ESXi hosts we have deployed in this hands-on lab environment.

Now down here, towards the bottom, I have two ESXi hosts, ESXi-01a and ESXi-02a. And as you can see here, these are already configured for NSX, so I don't really need to do anything with these two. But the two hosts up here at the top are currently not configured for NSX-T. So why is the lab set up this way? Why are two hosts configured for NSX-T and two are not? Well, in the lab environment, those hosts are actually going to be used for the NSX Edge, and the NSX Edge nodes already have tunnel endpoints built right into them. So I don't actually need a tunnel endpoint on the ESXi host itself, which means those NSX Edge nodes do not need to run on hosts that are prepared for NSX, and that's why the lab is set up the way it is. However, that's not really what I'm here to demonstrate. I'm going to show you how to prepare these hosts for NSX.

Before we do that, let's take a look at the transport zones that have already been established here. A few transport zones have been created, and there are a couple that I want to focus on. First off, there is my overlay transport zone, so let's go ahead and click on that. We can see here that the name of the transport zone is TZ-Overlay, and we can see the N-VDS that this transport zone is associated with. And if we take a look at our TZ-VLAN transport zone, again we can see the name of it here, we can see the traffic type for this one is VLAN, and it's on that same switch, N-VDS 1. Okay, so let's go back to our nodes, and let's zoom in on ESXi-01a here. I just want to note that this transport node is part of both the TZ-Overlay and the TZ-VLAN transport zones. Let's take a look at ESXi-02a. It's also part of both the TZ-Overlay and the TZ-VLAN transport zones. Okay, so now I'm just going to select these two ESXi hosts in the RegionA01 management cluster, and I'm going to click on Configure NSX, and now it's asking me for a transport node profile. If I click the little drop-down menu, you can see I have one transport node profile called ESXi Transport Node Profile. The transport node profile is really going to determine how these transport nodes are configured. So let's actually click on Cancel for a moment, go over to Profiles, and take a quick look at our transport node profiles. Here you can see the transport node profile that I was just showing you as part of setting up these nodes. Let's go back to Nodes for a moment.
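If you want to check the same thing outside the UI, here is a small Python sketch that lists the transport zones and shows whether each is an overlay or VLAN zone and which host switch it belongs to. The manager address and credentials are placeholders, and the /api/v1/transport-zones path and the field names printed below are my recollection of the NSX-T 3.0 manager API, so treat them as assumptions to verify.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MGR = "https://nsxmgr.example.local"    # placeholder
AUTH = HTTPBasicAuth("admin", "changeme")    # placeholder

# List transport zones and show whether each one is OVERLAY or VLAN,
# plus the host switch (N-VDS) it is tied to.
resp = requests.get(f"{NSX_MGR}/api/v1/transport-zones", auth=AUTH, verify=False)
resp.raise_for_status()

for tz in resp.json().get("results", []):
    print(tz.get("display_name"), tz.get("transport_type"), tz.get("host_switch_name"))
```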

These nodes are going to need tunnel endpoints, or TEPs, that are going to allow them to communicate over the physical underlay network. So let's go to Networking here at the top and, under Networking, take a look at the IP address pools. By the way, if you're used to dealing with NSX-T 2.4, this used to be under the Advanced Networking and Security tab. In NSX-T 3.0, that tab is gone. Here we can see the IP address pools and the IP address blocks that are already created, and one of them is called the RegionA TEP pool. If I click on this little Subnets link, I can see the IP address range that is going to be used for all of the TEPs within this region. So I've got this IP range that's going to be used to address all of the TEPs. In the lab, that part is taken care of for you, but you could always add your own IP address pool here as well. I'm just going to call it Rick Demo, and I can set the subnets that should be a part of that pool, so I could add blocks of IP addresses there as well. But we're not going to do that.
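For reference, here is a hedged Python sketch of what defining a TEP pool like that could look like against the NSX-T Policy API: first the pool object, then a static subnet with the allocation range the TEPs will draw from. The pool name, subnet, gateway, and range are made-up example values, and the ip-pools/ip-subnets paths and the IpAddressPoolStaticSubnet body follow my reading of the NSX-T 3.0 Policy API, so double-check them before using this anywhere real.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MGR = "https://nsxmgr.example.local"    # placeholder
AUTH = HTTPBasicAuth("admin", "changeme")    # placeholder

# 1) Create (or update) the pool container itself.
requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/ip-pools/rick-demo-tep-pool",
    json={"display_name": "Rick-Demo-TEP-Pool"},
    auth=AUTH, verify=False,
).raise_for_status()

# 2) Add a static subnet with the range the TEPs will draw from.
subnet_body = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "192.168.130.0/24",                    # example subnet
    "gateway_ip": "192.168.130.1",                 # example gateway
    "allocation_ranges": [
        {"start": "192.168.130.51", "end": "192.168.130.100"}   # example range
    ],
}
requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/ip-pools/rick-demo-tep-pool/"
    "ip-subnets/range-1",
    json=subnet_body,
    auth=AUTH, verify=False,
).raise_for_status()
print("TEP pool and subnet defined.")
```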

We're just going to go with the built-in IP address pool that's already included in the lab environment. So what else do we need to configure before adding these new ESXi hosts as transport nodes? I'm going to go back to the System tab, and under Fabric, I'm going to go to Profiles, and I want to take a look at Uplink Profiles. Again, we can see there are a number of uplink profiles that have been automatically created for us here in the lab environment. Now, some of these uplink profiles are specifically for the NSX Edge, but there's also one called the NSX default uplink host switch profile. The uplink profile is going to determine things like the NIC teaming policy, where in this case we've got a failover order where uplink 1 is going to be active and uplink 2 is going to be standby. So the uplink profiles are used to configure the uplinks of our transport nodes and things like link aggregation groups and MTU. We've also got transport node profiles as well, and as you can see, there's one already created here called ESXi Transport Node Profile. I'm just going to edit this transport node profile to take a closer look. Here, we can see the switch type is an N-VDS.

We can see that the mode is standard for all hosts, and we can see the N-VDS that it's associated with and the transport zones that it's associated with as well. And there's a lot more information here. It has a network I/O control profile that can be used to prioritize certain types of traffic. It's got an uplink profile associated with it, an LLDP profile associated with it, and an IP assignment associated with it, with a pool of IP addresses that are going to be used for the TEPs. And any ESXi host that's configured using this transport node profile will use vmnic2 as its active uplink. So the transport node profile is really just a collection of these other profiles and other configuration settings that represent a more comprehensive set of configurations for the transport nodes that we want to configure. So now we've got a transport node profile that's ready to go, we've got an uplink profile that's created and ready to go, we've got an IP pool that we can use to assign all of the IP addresses to our TEPs, and we've got a transport zone created. So let's go back under Fabric, click on Nodes, and select this cluster here. At the top, I'm going to click on Configure NSX, and it's going to ask me for my transport node profile.
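That "collection of profiles" idea is easy to see in the API as well. The read-only Python sketch below pulls the transport node profiles and prints the pieces each one bundles together. The manager address and credentials are placeholders, and the exact field names (host_switch_spec, pnics, transport_zone_endpoints, and so on) are my recollection of the NSX-T 3.0 manager API, so treat them as assumptions; the defensive .get() calls just avoid errors if a field is named differently.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MGR = "https://nsxmgr.example.local"    # placeholder
AUTH = HTTPBasicAuth("admin", "changeme")    # placeholder

resp = requests.get(f"{NSX_MGR}/api/v1/transport-node-profiles",
                    auth=AUTH, verify=False)
resp.raise_for_status()

for tnp in resp.json().get("results", []):
    print("Transport node profile:", tnp.get("display_name"))
    # A profile bundles one or more host switch definitions, each pointing at
    # an uplink profile, an IP assignment (e.g. a TEP pool), pNIC-to-uplink
    # mappings, and the transport zones the switch joins.
    for hs in tnp.get("host_switch_spec", {}).get("host_switches", []):
        print("  host switch:", hs.get("host_switch_name"))
        print("  pNIC mappings:", hs.get("pnics"))
        print("  transport zones:", hs.get("transport_zone_endpoints"))
```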

So I'll choose that transport node profile that we just looked at, and all the other configuration is basically done for me at this point. I just picked the transport node profile that contains all of those settings, I clicked on Apply, and NSX Manager is going to do the rest for me. It's going to go ahead and configure all of the settings contained within that transport node profile on this group of ESXi hosts and basically make the overlay network available to them. And whatever segments have been created within that transport zone are also going to be available on these ESXi hosts. What I really wanted you to see here is what the configuration process for our transport nodes looks like. And once this configuration process is complete, we can look at things like the physical adapters of those transport nodes. We can see our VMNICs. We can look at switch visualization; this used to be called N-VDS visualization, and now it's called switch visualization. And I can see a nice little diagram here. So here are my VMNICs, and I can see what they're connected to; the uplink is actually vmnic2, and it has my TEP there as well. So this host preparation is some of the basic configuration that we need to do prior to rolling out our logical switches.
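As a rough idea of what that Configure NSX click translates to behind the scenes, the sketch below binds a vCenter cluster (what NSX-T calls a compute collection) to a transport node profile so NSX Manager prepares every host in the cluster. This is a hedged sketch only: the manager address, credentials, and both IDs are placeholders, and the /api/v1/transport-node-collections endpoint and body fields reflect my recollection of the NSX-T 3.0 API rather than a verified reference.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MGR = "https://nsxmgr.example.local"    # placeholder
AUTH = HTTPBasicAuth("admin", "changeme")    # placeholder

# Bind a vCenter cluster (a "compute collection" in NSX-T terms) to a
# transport node profile. NSX Manager then prepares every host in the
# cluster according to that profile. Both IDs below are placeholders;
# compute collections can be listed via GET /api/v1/fabric/compute-collections
# and transport node profiles via GET /api/v1/transport-node-profiles.
body = {
    "resource_type": "TransportNodeCollection",
    "display_name": "prep-compute-cluster",
    "compute_collection_id": "<compute-collection-id>",
    "transport_node_profile_id": "<transport-node-profile-id>",
}
resp = requests.post(f"{NSX_MGR}/api/v1/transport-node-collections",
                     json=body, auth=AUTH, verify=False)
resp.raise_for_status()
print("Cluster submitted for NSX preparation.")
```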

6. Demo – Monitor NSX TEPs

In the last video, we explored the transport zone and TEP configurations for a group of transport nodes. One of the things that we noticed here in the lab environment is that if we go to the System screen, examine our fabric, and go to our nodes, we had two different clusters of transport nodes managed by our vCenter Server appliance. We had a management cluster and a compute cluster, and we noticed that the management cluster was not configured for NSX. The reason behind that is the fact that the NSX Edge nodes already have TEPs built right in. So the TEPs on our edge nodes are actually inside the virtual machine that the edge node is running on. Keeping that in mind, let's move over to the vSphere Client. Here we can see those two clusters: we have the compute cluster, and we have the management cluster. The compute cluster is configured for NSX-T; the management cluster is not. So let's take a moment to examine some of the hosts in the compute cluster here. Again, this cluster is prepared for NSX-T.

So focusing on host ESXi-01a, let's go to the Configure tab and then to Virtual Switches. Here we can see that there is a vSphere Distributed Switch configured on this host, but there's also an N-VDS that's specific to NSX-T. And here you can see the physical adapter that's been allocated and dedicated to this N-VDS virtual switch. Over here on the left, we can see all the layer 2 segments that we've created. So we've got logical switches for our application, database, and web server virtual machines. We've got some Kubernetes-specific segments as well, and we've even got a VLAN segment called Phoenix-VLAN. So that's what's currently configured on my N-VDS. Now, if you remember NSX for vSphere (and if you've never worked with NSX for vSphere, this may be something you're not familiar with), we could actually go in and examine the VMkernel ports that were dedicated to our VTEPs. That's not the case here with NSX-T. If we navigate over to the VMkernel adapters section, you can see we have vmk0 for management; that's the only service enabled on it. We've got vmk1 with no services enabled, which must be used for storage, and vmk2, which is just for vMotion.

None of these VMkernel ports is marked for the Geneve traffic that is associated with NSX-T. The fact that we can't see the TEPs here in the vSphere Client is just another indication that as we move into NSX-T, the NSX configuration is separate from the vSphere configuration. We're not configuring these things in the vSphere Client; we can't make changes like this within the vSphere Client. They have to be done in the NSX-T user interface. So let's go back to the NSX-T user interface. Under the Networking tab, I'm just going to scroll all the way down, and we'll go to our IP address pools. Here you can see that there is a TEP pool that's been created for Region A. If I click on Subnets here, we can see the subnets that are associated with this IP address pool. So as I add transport nodes, they're going to get TEPs, and the TEPs are going to get IP addresses from this address range. So let's wrap this up and return to System. Let's look at one of these specific transport nodes under System. Let's look at ESXi-01a, the same transport node that we were looking at in the vSphere Client. I'm just going to click on this transport node itself, and I'm going to go to the Monitor tab. Under the Monitor tab, I'm just going to scroll down here. We can see the transport node status; I'm not really concerned with that, and I'm not really concerned with the system usage.

What I want to see here are the different network interfaces. You can see all the network interfaces for this transport node, and we can also see the status of our tunnels. So what we're looking at here is the source IP, which is the TEP of this particular transport node, and the remote IPs, which are the other transport nodes to which the Geneve tunnels have been established. And you can see the type of remote transport node here: I've got an ESXi host, I've got an NSX Edge, and I've got a KVM host. So these are all the tunnels that are being established over that physical underlay network, allowing the virtual machines on this transport node to communicate with components on those other transport nodes. So that's just a quick little lesson on what's actually available in the vSphere Client. As far as the vSphere Client goes, the visibility and configurability of a lot of these NSX-T components are really limited. We can see some of these things, such as segments and port groups and so on, but there are really no actions we can carry out on these networking constructs because they're shown as opaque objects in the vSphere Client. I can't see my TEPs, and I can't see their IP addresses. All of that configuration is now located in the NSX user interface, which is much different than what you're used to if you've been dealing with NSX for vSphere.
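The same tunnel view from the Monitor tab can be pulled programmatically, which is handy for monitoring TEP health across many hosts. The Python sketch below is hedged: the manager address, credentials, and node UUID are placeholders, and the /api/v1/transport-nodes/<node-id>/tunnels path and the field names are my recollection of the NSX-T 3.0 API, so confirm them in the API guide for your release.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MGR = "https://nsxmgr.example.local"    # placeholder
AUTH = HTTPBasicAuth("admin", "changeme")    # placeholder
NODE_ID = "<transport-node-uuid>"            # placeholder; list nodes via /api/v1/transport-nodes

resp = requests.get(f"{NSX_MGR}/api/v1/transport-nodes/{NODE_ID}/tunnels",
                    auth=AUTH, verify=False)
resp.raise_for_status()

# Each entry describes one Geneve tunnel: the local TEP IP, the remote TEP IP,
# and its status, mirroring the Monitor tab in the UI.
for tunnel in resp.json().get("tunnels", []):
    print(tunnel.get("local_ip"), "->", tunnel.get("remote_ip"),
          tunnel.get("status"))
```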

7. Uplinks and Teaming

So the primary configuration method that we're going to use in NSX is something called an uplink profile. The uplink profile determines the NIC teaming policy on my physical adapters, as well as which adapters are active versus standby. The active adapters are going to, as you can probably guess, actively pass traffic, whereas the standby adapters are there just in case one of the active adapters happens to fail. We will also establish a transport VLAN. This is the VLAN that's going to be used on the underlay network. So essentially the purpose of this VLAN is to say, hey, I've got traffic coming from one virtual machine running on ESXi 1, and it's bound for some other virtual machine on a different ESXi host. As that traffic gets encapsulated by these TEPs and hits the physical underlay network, the transport VLAN is the VLAN that's going to be used to carry it from host to host. So all of that Geneve-encapsulated traffic will be tagged with a certain VLAN value before it hits the physical network, and we'll also have to set up that VLAN on the physical network itself. That's the purpose of the transport VLAN. And then we also need to establish an MTU for our uplink profile as well; it's got to be at least 1600 to accommodate the Geneve encapsulation overhead.
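The three settings just described (teaming policy with active/standby uplinks, transport VLAN, and MTU) are exactly what an uplink profile object contains. Here is a hedged Python sketch of creating one via the NSX-T manager API. The manager address, credentials, profile name, transport VLAN, and MTU are example values, and the /api/v1/host-switch-profiles endpoint and the UplinkHostSwitchProfile body layout are my recollection of the NSX-T 3.0 API, so verify before relying on them.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MGR = "https://nsxmgr.example.local"    # placeholder
AUTH = HTTPBasicAuth("admin", "changeme")    # placeholder

# An uplink profile pulls the three settings discussed above together:
# the teaming policy (with active/standby uplinks), the transport VLAN,
# and the MTU for the overlay traffic.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "demo-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 120,   # example transport VLAN on the underlay
    "mtu": 1700,             # must be at least 1600 for Geneve
}
resp = requests.post(f"{NSX_MGR}/api/v1/host-switch-profiles",
                     json=uplink_profile, auth=AUTH, verify=False)
resp.raise_for_status()
print("Uplink profile created.")
```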

Okay, so we've got a couple of different NIC teaming policies that are possible, and we're just going to take a moment to review some of the different NIC teaming policies that we can choose from. But before we do that, let's talk about the concept of uplinks. What is an uplink? An uplink is essentially a logical construct of the N-VDS. So, for example, let's say we've configured originating virtual port ID for NIC teaming, and on the physical host we have four physical adapters, or VMNICs. Each of those VMNICs is going to be considered a unique uplink. There's no NIC bonding here, and there's no LACP configured here; each adapter is a separate, unique entity. And the same thing goes for source MAC hash. Again, we're not dealing with any kind of link aggregation. The wording is a little different, but it's the same idea: each adapter stands on its own, and each adapter is considered a separate, unique uplink. Whereas with something like LACP, we're going to actually bond together multiple physical connections as part of a link aggregation group. So essentially, what happens here is that the ESXi host treats a link aggregation group as one big physical adapter. And within LACP, there are hashing algorithms that can be used to distribute the traffic across those physical connections.

But the way that NSX sees this is as one uplink. Even though it's actually multiple physical connections, it's treated as one uplink. So here's the point that I'm trying to make with these three slides. Let's go back a little bit and take a look at the originating virtual port ID slide. The point I'm trying to make here is that, basically, this is how NSX is going to determine the number of uplinks that it needs to distribute traffic across. With originating virtual port ID, if I've got four physical adapters, I've got four uplinks. With source MAC hash, if I've got four physical adapters, I've got four uplinks. And with LACP, NSX treats two physical adapters that are bonded together into a link aggregation group as a single uplink. So essentially there are two different layers as to how we're configuring the traffic distribution here. So far, we've been looking at how the ESXi host handles it, using LACP and things like originating virtual port ID or source MAC hash. That's how the ESXi host itself handles it. And so now NSX is aware of these uplinks, and for NSX itself, we're going to configure a teaming method.

How does NSX actually distribute traffic across the uplinks that it's aware of? Let's say that we're configuring failover order as the teaming method in our uplink profile. And in this scenario, we have an ESXi host that has two physical adapters that are part of a link aggregation group. Let's also assume that it has a third physical adapter that is not part of the link aggregation group. What we end up with in this scenario is two uplinks. NSX doesn't really differentiate as to how the traffic is handled after it hands it off to the host. It just says, hey, we've got two uplinks. And so in the case of failover order, we are going to specify one of these uplinks as active, and the other uplink is going to be standby. If the active uplink fails, the standby uplink takes its place immediately. So, in this case, let's say we're making the link aggregation group the active uplink and this single standalone port the standby uplink. That means that all of the traffic for these VMs is going to flow over the link aggregation group. So let's take a little deeper look at this and kind of walk through it. Some traffic gets generated by VM 1, and it needs to flow over the underlay network. It needs to be Geneve-encapsulated, and it needs to flow over the underlay network. It's going to flow and hit the active uplink for NSX-T.

At that point, the ESXi host takes over, and we have a link aggregation group. So we've got some kind of hashing algorithm within this link aggregation group that is receiving the traffic from this uplink and distributing it across those physical adapters. That's where the NIC teaming method of the actual ESXi host takes over. If I'm using something like source MAC hash or originating virtual port ID, that changes things here, because when traffic hits this uplink, it's hitting a specific physical adapter. Okay, so now let's look at another teaming method that's available with NSX-T, the load balance source teaming method. This is going to make a one-to-one mapping between virtual interfaces and uplinks from the host. So, if you're familiar with originating virtual port ID, this is similar: it will take each virtual machine and bind it to a specific uplink.

So maybe VM 1 is going to go to uplink 1, maybe VM 2 is going to go to the second uplink, and maybe VM 3 is going to go to the third uplink. Each virtual machine is being tethered to a specific uplink, and that's going to be the uplink used for all of the traffic coming from that VM. So in this case, I've got two uplinks. One of them is a link aggregation group with two physical adapters; the other one is an uplink with a single physical adapter. The load balance source policy is going to make a one-to-one mapping between each virtual machine and one of the uplinks. And then we've also got the option of load balance source MAC address. This gives us a little bit more granularity, because I could potentially have certain virtual machines that have multiple network interfaces. So let's say VM 1 has two different virtual NICs. Well, one virtual NIC could be going to this uplink, and another virtual NIC could be going to that uplink. We could use different uplinks for the same virtual machine because each of the vNICs on VM 1 will have a different MAC address. So in this case, let's assume VM 1 is using uplink 1 for its first virtual NIC. VM 2 has a single virtual NIC, and it's going to use uplink 2. VM 3 actually has two virtual NICs; one of them might end up on uplink 1, and the other one might end up on uplink 2. So those are the different ways that NSX-T can be configured to spread traffic across the uplinks, and the uplinks themselves are associated with the physical adapters of the ESXi host.
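For reference when scripting uplink profiles, here is how the three teaming methods just discussed map, as best I recall, onto the policy strings the NSX-T API expects. These enum values are an assumption on my part; confirm them against the API guide for your release.

```python
# Mapping of the teaming methods discussed above to the policy strings the
# NSX-T API expects (my recollection for 3.0 -- verify against the API guide).
TEAMING_POLICIES = {
    "Failover Order":          "FAILOVER_ORDER",       # one active uplink, one or more standby
    "Load Balance Source":     "LOADBALANCE_SRCID",    # pin each virtual interface to an uplink
    "Load Balance Source MAC": "LOADBALANCE_SRC_MAC",  # pin traffic per source MAC address
}
```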

8. Demo – TEPs and NIC Teaming

In this video, I'll demonstrate how to configure uplink profiles in NSX-T 3.0 using the free hands-on labs available at hol.vmware.com. So here you can see that I'm logged into the NSX-T user interface. I'm going to click on the System tab, and under System, we're going to click on Fabric. Under Fabric, we're going to look at Profiles, which drops us into a section called Uplink Profiles. This is a hands-on lab that is built to be ready for you to experiment with, so VMware has already created some uplink profiles here, and there are also some default uplink profiles that you'll find in your own NSX-T environment. So let's start with the NSX default uplink host switch profile here and take a look at that.

So first off, I want to take a look at the NIC teaming method. You can see here that the failover order NIC teaming method is being used, and uplink 1 is going to be active and uplink 2 is going to be on standby. So that's the configuration that's automatically included in this NSX default uplink host switch profile. And just bear in mind that these uplinks could each be a single physical adapter, a single VMNIC, or an uplink could be a link aggregation group, which is actually a set of multiple physical adapters. So these uplinks could be associated with either VMNICs or link aggregation groups. So let's create an uplink profile here.

I'm just going to click on the little plus button, and I'm just going to call it the Rick Demo Uplink Profile. And you'll notice that NSX-T is big on profiles; there are a lot of different kinds of profiles here. The thought process is: let's allow the administrator to establish these profiles, which are standardized sets of configurations that can be applied to many transport nodes. So rather than configuring every single transport node one by one with the required settings, why not just create these profiles that we configure once?

And as long as the configuration of those transport nodes is consistent, this makes it really easy to do that. Anyway, here's the uplink profile that I'm creating now, and here you can see that I can modify the teaming policy. I can choose from some of the other methods that we learned about in the previous video: instead of failover order, I could choose load balance source or load balance source MAC address. So those are the options that we learned about in the prior video. I'm actually going to cancel this. Let's cancel this out and take a look at a specific transport node. What we can tell from this section of the user interface is that for this transport node, if traffic is being Geneve-encapsulated, if it's being sent to other transport nodes, or if it's northbound and being sent to an edge node, all of that traffic will actively flow over uplink 1. Uplink 2 is only going to be used in the event that uplink 1 fails. So under Fabric here, I'm going to click on Nodes, I'm going to go to ESXi-01a and click on that, and on that particular transport node, I'm going to click on Switch Visualization. This is going to give me a nice little diagram to see how this profile has actually been instantiated on this particular transport node.

And here we can see that on this particular ESXi host, I have four VMNICs. A couple of my VMNICs are dedicated to the vSphere Distributed Switch, probably carrying the VMkernel ports; that's where vmnic0 and vmnic1 are configured. And then I've got vmnic2 and vmnic3. vmnic2 is my active uplink; that is the uplink through which all traffic will be routed. I can see the TEP IP addresses that exist on this N-VDS. So this may seem a little strange, because we've configured active/standby for our uplinks. I'm currently running on vmnic2, but where's my standby? I don't see any standby here. And the reason it has worked out the way it has becomes clear if we go to our profiles and, instead of looking at the uplink profile, take a quick look at the transport node profile that is associated with these transport nodes.

And you can see under the transport node profile that we have the ability to map physical interfaces to particular uplinks. So if I go to uplink 2 here, I can configure a specific VMNIC to be associated with uplink 2 as a standby adapter. So let's experiment a little bit here. Let's add vmnic3 to the physical NICs, and we'll change our transport node profile. I'm just going to go ahead and click on Save and modify that transport node profile. Then I'm going to go back over to Nodes, and we're going to grab ESXi-01a one more time and take a look at what the configuration is now. Well, I'm still not getting my intended configuration: both vmnic2 and vmnic3 are now active adapters. Based on my uplink profile, I want active/standby. If I look at my uplink profile, under Teamings, uplink 1 is supposed to be active, and uplink 2 is supposed to be on standby. So what am I doing wrong here?

Let's take a closer look at this node. Let's take a look at the ESXi transport node profile, and maybe I'll find the answer there. Essentially, the transport node profile is the collection of profiles that are being assigned to our transport nodes. And you can see here that this particular transport node profile is being assigned to every transport node in my TZ-Overlay and TZ-VLAN transport zones. So all of my transport nodes are getting this transport node profile, which is essentially a list of profiles that are being applied to all of those transport nodes.

So if I scroll down a little bit here, we can see why the behavior is not what we were anticipating. We're using the ESXi-RegionA01 uplink profile, which is configured for active/active. We're not actually using the NSX default uplink host switch profile, which calls for active/standby. Let's give this a shot. Let's change our uplink profile to the NSX default uplink host switch profile, and under Physical NICs, I am now going to make vmnic2 my active and vmnic3 my standby. I'll go ahead and click on Save. Let's move back over to our node, examine ESXi-01a, and look at our switch visualization. And there we go: now vmnic2 is the active uplink (uplink 1), and vmnic3 is the standby uplink (uplink 2).
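The lesson from that troubleshooting detour is easier to see laid out as data. The fragment below is a sketch, with field names reflecting my reading of the NSX-T 3.0 API, of the relevant slice of a transport node profile's host switch spec. The point it illustrates is that the pnics list only maps physical NIC names to uplink names; whether a given uplink is active or standby comes from the teaming order in the attached uplink profile, which is exactly why swapping the uplink profile changed the behavior above.

```python
# Hypothetical slice of a transport node profile's host switch spec.
# The pnics list maps physical NICs to uplink *names* only; active vs.
# standby roles are decided by the uplink profile referenced below.
host_switch = {
    "host_switch_name": "nvds-1",
    "host_switch_profile_ids": [
        # Points at an uplink profile whose teaming is FAILOVER_ORDER:
        # uplink-1 active, uplink-2 standby.
        {"key": "UplinkHostSwitchProfile", "value": "<uplink-profile-uuid>"},
    ],
    "pnics": [
        {"device_name": "vmnic2", "uplink_name": "uplink-1"},   # becomes active
        {"device_name": "vmnic3", "uplink_name": "uplink-2"},   # becomes standby
    ],
    "ip_assignment_spec": {
        "resource_type": "StaticIpPoolSpec",
        "ip_pool_id": "<tep-pool-uuid>",   # TEP addresses come from this pool
    },
}
```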

Okay, so let's keep digging a little bit deeper here. Let's go back to our profiles; I want to look at this NSX default uplink host switch profile here. There are NIC teaming options here, and there are transport VLAN and MTU settings here, very similar to what we see for the custom-created ESXi-RegionA01 uplink profile. So I'm going to go back to my transport node profile one more time and edit it one more time. We can see that this transport node profile is associated with our overlay and our VLAN transport zones.

We can see what we've set up for the physical adapters associated with this profile. And I'm actually going to go back here and modify my uplink profile one more time. I'm going to change it back to the ESXi-RegionA01 uplink profile. That's going to clear out my physical adapter mappings, and I'm just going to put everything back the way it was. So uplink 1 is going to be my active uplink, and I'm not going to have a second uplink. I'll change vmnic2 back to my active uplink. And if we go back to our node here, you can see things returning to normal: vmnic2 is the only adapter being utilized by this N-VDS. So what we now have is a bunch of logical switch segments that are Geneve-encapsulated and sending traffic over vmnic2. But that's not really the only thing we have here.

If we look at ESXi-01a in the vSphere Client and we look at our virtual switches, not only are we dealing with Geneve-encapsulated layer 2 segments, but we've also got a VLAN segment that's associated with this transport node. So how are we identifying settings like which VLAN is being used, and which VLAN is being sent over this physical NIC towards the physical switch? Well, if we go back to the Networking area of the user interface and we look under Segments, we can see that this Phoenix-VLAN segment has been created here.

And if we edit this Phoenix-VLAN segment, we'll notice that we have the ability to modify the VLAN identifier here as well. So what you essentially want to think of when you're looking at this transport node diagram is that this uplink is carrying the Geneve-encapsulated overlay traffic from this transport node on one VLAN, the transport VLAN, but it also carries the VLAN segments that we've created. So this is actually a VLAN trunk into my physical switch, and any traffic on that VLAN segment is just going to hit the physical switch on the particular VLAN that we've designated in the segment itself.
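As a final check, you can confirm which segments are VLAN-backed and which VLAN IDs they carry without opening each one in the UI. The Python sketch below lists the segments through the Policy API; the manager address and credentials are placeholders, and the /policy/api/v1/infra/segments path and the vlan_ids field reflect my reading of the NSX-T 3.0 Policy API, so verify them for your version.

```python
import requests
from requests.auth import HTTPBasicAuth

NSX_MGR = "https://nsxmgr.example.local"    # placeholder
AUTH = HTTPBasicAuth("admin", "changeme")    # placeholder

resp = requests.get(f"{NSX_MGR}/policy/api/v1/infra/segments",
                    auth=AUTH, verify=False)
resp.raise_for_status()

# Overlay segments have no vlan_ids; VLAN-backed segments carry the VLAN ID
# that will be tagged toward the physical switch over the trunked uplink.
for seg in resp.json().get("results", []):
    print(seg.get("display_name"), "vlan_ids:", seg.get("vlan_ids", "overlay"))
```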
