VMware 2V0-21.20: Professional VMware vSphere 7.x
Managing Networking in vSphere 7
1. Foundation Review: Virtual Networking Concepts
In this video, I will explain some of the concepts behind virtual networking and how our virtual machines can connect to other resources, either within the same ESXi host or resources connected to our physical network. So how do virtual machines actually handle transmitting and receiving network traffic? Well, in many ways, they work exactly the same way that a physical machine does. So here we see a virtual machine, and it has a network interface card just like any other network-connected machine. But in this case, we're dealing with a virtual NIC. Our guest operating system, in this case Windows, is completely unaware that the virtual NIC is not a physical hardware device. Windows just sees a network interface card, and from the perspective of the guest operating system, that's really the end of the story. So Windows sends some packets to the virtual NIC, and just like a physical NIC would, my virtual NIC needs to connect to a switch. So our virtual machines will connect to a virtual machine port group on a virtual switch.
And the port group is used to define settings like VLAN membership, security policies, and so on. My traffic doesn't necessarily need to flow over a physical interface: if I've got multiple virtual machines connected to the same port group, they can communicate without their traffic ever flowing over a physical network. And then, of course, my ESXi host itself has some physical network interfaces that connect to a physical switch. These are called VMnics. If my traffic needs to flow to the Internet or to some physical server, it'll do so using these VMnics, or physical adapters. So a VMnic is basically an uplink for a virtual switch that gives connectivity to the actual physical network. But our virtual machine port groups are really only half the story. My VM port groups are for all of my virtual machine traffic; everything else is going to be handled by a VMkernel port. So the virtual machine port groups are kind of like the ports on a physical switch that a PC or a server would connect to.
VMkernel ports are special types of ports on a virtual switch that are used for traffic like vMotion, IP storage, or management. These are ports that the hosts and vCenter use to talk amongst themselves for purposes other than virtual machine traffic. Our hosts and virtual switches also support VLANs, and they support trunk ports as well. So, for example, let's say I've got two virtual machines here. The VM on top of my diagram is connected to a port group with VLAN 10 assigned to it, and the VM at the bottom is connected to a different port group with a different VLAN assigned. So as traffic flows into the virtual switch from the VM on the top of the screen, it's going to hit a port group that's tagged with VLAN 10. And if that VM is trying to communicate with the other VM, that traffic is actually going to have to flow out to the physical network, hit a router that can route between VLANs, and eventually flow back in.
And that's how VLAN segmentation works with a virtual switch: each VM connects to a port group, those port groups have VLANs defined, and we have a trunk port to a physical switch that can handle traffic for multiple VLANs on a single physical connection. That way, the physical switch sees a consistent set of VLANs and can tell which virtual machine traffic belongs on which VLAN as that traffic arrives. So you really want to understand that there are these things called virtual switches that exist within the ESXi host, and on them we've got virtual machine port groups. Each virtual machine is equipped with a virtual NIC; you may have heard of some of the options, like the VMXNET3 virtual network interface card. Those virtual NICs provide connectivity to the virtual machine port groups on the virtual switches. It's basically like having a software switch running inside of your ESXi host that interconnects all of the VMs on that host and also applies VLAN tagging and security policies based on those port group memberships. Then we've also got those special ports on the virtual switches called VMkernel ports. And this is another important concept for the exam: VMkernel ports are used for management traffic, storage traffic, and vMotion traffic.
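The local-versus-uplink forwarding decision described above can be sketched in Python. This is purely a conceptual illustration, not VMware code; the port group names, VLAN IDs, and VM names are invented for the example.

```python
# Conceptual sketch (not ESXi internals): how a virtual switch decides
# whether VM-to-VM traffic stays inside the host or needs a physical uplink.

port_groups = {
    "Web-PG": {"vlan": 10},
    "DB-PG":  {"vlan": 20},
}

vm_ports = {
    "web-vm1": "Web-PG",
    "web-vm2": "Web-PG",
    "db-vm1":  "DB-PG",
}

def forward(src_vm, dst_vm):
    """Return where a frame between two VMs on this host would travel."""
    src_vlan = port_groups[vm_ports[src_vm]]["vlan"]
    dst_vlan = port_groups[vm_ports[dst_vm]]["vlan"]
    if src_vlan == dst_vlan:
        # Same VLAN on the same host: delivered in software,
        # never touching a VMnic.
        return "local delivery inside the virtual switch"
    # Different VLANs: tagged, sent out a VMnic over the trunk,
    # routed by an external router, and back in.
    return "out the uplink trunk to an external router"

print(forward("web-vm1", "web-vm2"))  # local delivery inside the virtual switch
print(forward("web-vm1", "db-vm1"))   # out the uplink trunk to an external router
```

The key takeaway the sketch captures: same port group (same VLAN) means no physical hop at all, while crossing VLANs always involves the uplink and an external router.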
2. Foundation Review: vSphere 7 Standard Switches
In this video, I'll explain certain attributes of the vSphere Standard Switch. Specifically, we'll talk about how NIC teaming is performed and how we can configure our VMnics, or physical adapters, to tolerate failures. Our VMnics are the physical adapters of the ESXi host itself, and each VMnic can only be assigned to a single virtual switch; virtual switches can't share VMnics. So in this slide, we see a virtual switch with three physical adapters, or VMnics, and our network is in a healthy state: all three of my adapters are connected and have a nice green link light. At this moment, traffic for this virtual machine is flowing through the first physical adapter. And let's say that something happens. For example, let's say we have a new intern, and we send them into the data center and say, "Go ahead and clean up the cables." He goes in with his scissors, starts cutting cables, and just so happens to cut the wrong one. Well, now the nice green link light on that VMnic isn't green anymore.
The connection has been physically broken, and this is something that's very easy for the ESXi host to detect; it will simply redirect that traffic to another adapter that's still functional. Now, let's talk about a more complicated failure. Let's assume that the cable that just got cut has been fixed. So now all of my network adapters are connected, and again we have a nice green link light on all of those physical VMnics. In this case, our virtual machine traffic is flowing through the third adapter. And again, we send the intern into the data center and say, "Okay, careful this time, but go ahead and keep on cleaning up those cables." And this time, he cuts a cable that interconnects our two physical switches. Now this is a little bit different, because in this scenario, the link state of the VMnic doesn't change.
That nice green link light is going to remain green. But if we look at our diagram, traffic from this virtual machine is now flowing into a physical switch that's essentially isolated. It doesn't have any connectivity to the other physical switch, and therefore it doesn't have any connectivity to the Internet. So at this moment, adapter three is flowing into a dead end. This is where beacon probing can be helpful. Beacon probes are little packets that my VMnics send out to each other just to validate that they're still operating properly. So here we see the two VMnics at the top of our screen passing these beacon probes and successfully communicating with each other. However, our third adapter is flowing into a dead end.
So although those two adapters at the top can communicate, the adapter at the bottom is sending its beacon probes into a physical switch that the other VMnics can't reach right now due to this upstream network failure. When the host realises it isn't seeing the beacon probes from the third adapter, it will disable that adapter and redirect virtual machine traffic to another VMnic. Now, let's talk about some of our NIC teaming options. What we're trying to accomplish here is to have multiple VMnics, or physical adapters, connected to a virtual switch, and we want our virtual machine traffic to be able to utilise all of those physical adapters. So we're going to have to configure some sort of NIC teaming method to load-balance traffic across those adapters. The first option is a method called NIC teaming by originating virtual port ID. How this works is that each virtual machine connects to a virtual port on our virtual switch, and based on the virtual port ID that the VM connects to, all of its traffic will flow out of one specific VMnic. If our second VM connects to a different virtual port ID, its traffic will be sent through a different VMnic, and the same goes for our third virtual machine.
Now, in this scenario, each VM is essentially tethered to a specific physical adapter, and all of that virtual machine's traffic will flow through that one physical adapter. And this is how NIC teaming by originating virtual port ID spreads the workload out across all of these physical NICs. It's important to understand that, in this scenario, we want to configure the physical switch without port channels and without LACP. We don't want any NIC teaming to occur on the physical switch side; the physical switch has to see these physical connections as completely independent. And that's the same situation when we go to source MAC hash. Source MAC hash is actually very similar to the method we just looked at. Maybe virtual machine one has a unique MAC address, and based on that unique MAC address, that VM is going to be associated with a specific VMnic. The same is true for VMs two and three.
Based on their MAC addresses, they will be tethered to one specific physical adapter, and they'll use that for all traffic that needs to leave the ESXi host. Again, we don't want to configure LACP, EtherChannel, or port channels on the actual physical switch in this case. Now, the final NIC teaming method that's supported by the standard virtual switch is one called IP hash, and here's how it works. Say we have a virtual machine with a particular IP address. When that virtual machine sends traffic, the physical adapter is selected based not only on the source IP but on the destination IP as well. So if the same virtual machine happens to send traffic to some other destination with a different destination IP, that traffic can actually flow out of a different physical adapter.
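The three teaming policies just described can be sketched as hash functions that choose an uplink. This is a conceptual illustration only; ESXi's internal hash computations differ, and the CRC-based hashes here are stand-ins. It does show the essential property, though: port ID and source MAC produce one fixed uplink per VM, while IP hash keys on the destination too.

```python
# Conceptual sketch (not ESXi's actual hashing) of the three standard
# switch NIC teaming methods.
import zlib

UPLINKS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

def by_port_id(virtual_port_id: int) -> str:
    # One fixed uplink per virtual port, so per VM.
    return UPLINKS[virtual_port_id % len(UPLINKS)]

def by_source_mac(mac: str) -> str:
    # One fixed uplink per source MAC address, so per VM.
    return UPLINKS[zlib.crc32(mac.encode()) % len(UPLINKS)]

def by_ip_hash(src_ip: str, dst_ip: str) -> str:
    # Keyed on source AND destination IP: one VM talking to several
    # destinations can land on several different uplinks.
    key = f"{src_ip}->{dst_ip}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

# A VM on virtual port 5 always uses the same uplink for everything:
print(by_port_id(5))
# With IP hash, each destination may pick a different uplink:
print(by_ip_hash("10.1.1.5", "8.8.8.8"), by_ip_hash("10.1.1.5", "1.1.1.1"))
```

Because the first two functions ignore the destination entirely, the physical switch must treat each uplink independently (no port channel); only IP hash requires the uplinks to be bound together on the switch side.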
And this is different from the prior NIC teaming methods that we saw, because now my virtual machine can actually utilise multiple VMnics. In this scenario, it's important that the physical switch is appropriately configured: we want to set up a port channel, or LACP, on the physical switch to bind these physical adapters together. Another feature that's supported on the vSphere Standard Switch is something called traffic shaping. What traffic shaping does is apply settings such as peak bandwidth, average bandwidth, and burst size to a port group. So, for example, let's say that we have a group of virtual machines connected to a port group on a vSphere Standard Switch, and sometimes we test new applications on these VMs or do something else that's resource intensive. As a result, these VMs tend to dominate the bandwidth of that virtual switch, and other VMs on other port groups sometimes can't get the bandwidth they need.
Well, what we can do is apply traffic shaping to the port group so that each VM on this particular port group will have a peak bandwidth of 100 megabits per second and an average bandwidth of 50 megabits per second. Over time, 50 megabits per second is the maximum that each virtual machine must average out to. So because this particular virtual machine is connected to a port group that has this traffic shaping policy assigned, the policy will be enforced on this VM. Under normal circumstances, this VM should be averaging 50 megabits per second or less in bandwidth usage. However, let's say there's some large file that we need to upload. Well, for a short period, this VM can actually utilise 100 megabits per second, its peak bandwidth rate, until it completely uses up what we call the burst size. So for this VM, maybe we'll say our burst size is 100 megabytes. And again, that's defined at the port group level, not at the individual virtual machine level. So let's say at the port group level, our traffic shaping policy specifies a 100 megabyte burst size. This VM will be able to transmit at 100 megabits per second until it uses up that 100 megabyte burst size maximum, and then the virtual machine will be forced back down to the average bandwidth of 50 megabits per second.
And it will not be able to exceed that average until it builds the burst size back up. Now, how does it build the burst size back up? By staying below that 50 megabits per second average for enough time to essentially save up 100 megabytes' worth of burst size again. And so in that way, traffic shaping can really help us ensure that one particular port group doesn't completely overwhelm the physical adapters of an ESXi host. There may be many port groups with different types of traffic connected to the same vSphere Standard Switch, and traffic shaping gives us the ability to place limits on how much bandwidth each VM within a port group can actually consume. We can also configure some security settings on a vSphere Standard Switch.
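The peak/average/burst mechanics above can be modelled as a simple token bucket. This is a generic illustration, not ESXi's actual shaper; the numbers come straight from the lesson's example (100 Mbit/s peak, 50 Mbit/s average, 100 MB burst).

```python
# Token-bucket sketch of the port group traffic shaping example above.
# Sending above the average drains a "burst bank"; sending below refills it.

AVERAGE_BW = 50_000_000        # bits/s the VM may sustain long-term
PEAK_BW    = 100_000_000       # bits/s allowed while burst tokens remain
BURST_SIZE = 100 * 8_000_000   # 100 megabytes, expressed in bits

class Shaper:
    def __init__(self):
        self.tokens = BURST_SIZE   # start with a full burst bank

    def allowed_rate(self) -> int:
        # While the bank has tokens, the VM may send at peak rate.
        return PEAK_BW if self.tokens > 0 else AVERAGE_BW

    def tick(self, demand_bps: int, seconds: int = 1) -> int:
        """Simulate one interval of the VM trying to send demand_bps."""
        rate = min(demand_bps, self.allowed_rate())
        # Above-average sending drains the bank; below-average refills it.
        self.tokens += (AVERAGE_BW - rate) * seconds
        self.tokens = max(0, min(self.tokens, BURST_SIZE))
        return rate

# A large upload: the VM bursts at 100 Mbit/s until the 100 MB bank
# is drained (800M bits / 50M bits-per-second of overdraw = 16 seconds),
# then drops back to the 50 Mbit/s average.
s = Shaper()
rates = [s.tick(PEAK_BW) for _ in range(17)]
print(rates[0], rates[16])  # 100000000 50000000
```

Staying idle (or below average) for a while refills the bank, which is exactly the "building the burst size back up" behaviour described above.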
These can be configured at either the virtual switch or the port group level. If we configure a setting at the virtual switch level, it acts like a global setting: it applies to all port groups on that virtual switch. Now, if we go to an individual port group and modify that setting, the settings created at the port group level will override those switch-wide configurations. So one of the settings we can modify is called forged transmits. Let's say you have a virtual machine and, for some reason, you need to modify the MAC address of that virtual machine. Maybe it has been converted from a physical machine to a virtual machine, and you have some software that is licenced based on MAC addresses, so we need to keep the same MAC address that we had in the physical environment.
That's a good use case for MAC spoofing. In that case, when the virtual machine generates traffic, it's going to use the MAC address that we've specified in the guest operating system. Now, the virtual switch isn't going to like that. The virtual switch expects traffic to arrive on that port from the MAC address of that VM's virtual NIC. So if it sees traffic coming in on some other MAC address, that's called a forged transmit, and we can choose to either accept or reject that traffic; by default, the virtual switch is configured to accept it. And by setting this to accept, we're going to allow MAC spoofing for outbound traffic.
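The accept/reject decision just described (and its inbound counterpart, the MAC address changes setting covered next) can be sketched as two small policy checks. This is a simplified illustration; the policy names mirror the UI settings, but the logic is reduced to its essence.

```python
# Sketch of the standard switch MAC security checks. Policies are the
# strings "accept" or "reject", matching the UI settings.

def check_outbound(forged_transmits: str, vnic_mac: str, frame_src_mac: str) -> bool:
    """Forged transmits: an outbound frame whose source MAC differs
    from the MAC of the VM's virtual NIC."""
    if frame_src_mac == vnic_mac:
        return True                      # normal traffic always passes
    return forged_transmits == "accept"  # spoofed source MAC: policy decides

def check_inbound(mac_changes: str, vnic_mac: str, effective_mac: str) -> bool:
    """MAC address changes: the guest OS changed its effective MAC,
    so inbound frames target a MAC that doesn't match the vNIC."""
    if effective_mac == vnic_mac:
        return True
    return mac_changes == "accept"

# Defaults are "accept" (spoofing allowed); hardening to "reject" blocks it:
print(check_outbound("accept", "00:50:56:aa:bb:cc", "00:50:56:11:22:33"))  # True
print(check_outbound("reject", "00:50:56:aa:bb:cc", "00:50:56:11:22:33"))  # False
```

This mirrors the recommendation in the lesson: if you don't need MAC spoofing, set both policies to reject and only matching MACs get through.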
Another security setting involves MAC address changes, which is basically the exact same setting for inbound traffic. So now let's say some traffic is coming towards a virtual machine, and the destination MAC is some MAC address that we've configured in the guest operating system that is inconsistent with the actual MAC of the virtual NIC. To allow this traffic through, MAC address changes must be set to accept, which is the default: on a virtual switch, MAC address changes and forged transmits are both set to accept. Now, if you don't need this MAC spoofing capability, I recommend you go into those virtual switches and change those settings to reject, because that's more secure. Finally, the third security setting is something called promiscuous mode, which allows sniffing of all the traffic on a virtual switch. This is not a secure option to leave enabled all the time. Maybe you need to install sniffer software on a virtual machine and monitor all of the traffic on a virtual switch for some reason; that's a good reason to enable promiscuous mode, but you're essentially opening up your network and allowing it to be sniffed. So of course, promiscuous mode isn't something that we want on all the time, because it presents a pretty serious security risk. The recommendation is: turn promiscuous mode on when you need it, and when you're done, turn it back off. vSphere also supports multiple TCP/IP stacks. So what is a TCP/IP stack? Well, it provides DNS and default gateway settings. The default gateway is used when traffic is bound for some other network. Let's say you go into Windows on your machine and you type in a website address.
Well, that traffic is bound for some address on the Internet, and therefore it needs to hit something in your network that's capable of routing to that other network. That's your default gateway. And you may have different types of traffic that need to use different networks for different things; in that case, you might need multiple default gateways, and those are part of the TCP/IP stacks. So built right into your ESXi host is the default TCP/IP stack, and that's used for management traffic and all other types of traffic by default. But if you want to, you can utilise a separate TCP/IP stack for vMotion traffic. This is useful if you need to send vMotion traffic over some other network. Let's say, for example, you plan to do a lot of long-distance vMotions; you might have another network that you want to send that vMotion traffic over.
So by giving vMotion its own dedicated TCP/IP stack, you can direct that vMotion traffic to a different default gateway or a different DNS server. We can do something similar for provisioning traffic, which covers cold migrations, cloning, and snapshots: we can send that traffic through its own dedicated TCP/IP stack. And we can create custom TCP/IP stacks as well. Okay, so in this lesson, we learned about the following topics. We learned about virtual switches and how we can protect them from a network interface card failure by using either link state or beacon probing. We learned about the different NIC teaming methods and how they can be utilised to load-balance across our VMnics, or physical adapters. We learned about originating virtual port ID, which essentially ties each virtual machine to one VMnic based on its virtual switch port. Very similar to that was source MAC hash, which ties each virtual machine to a physical adapter based on its MAC address.
And the third method was a little bit different, right? IP hash was based on not only the source IP but the destination IP as well, and with that method, we saw that virtual machines have the ability to utilise multiple physical adapters. We talked about traffic shaping and how it can provide bandwidth control on a per-virtual-machine basis, and that traffic shaping is configured on a port group. And then finally, we talked a little bit about the multiple TCP/IP stacks so that we can use different default gateways for different types of traffic.
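The per-stack gateway idea from this lesson can be sketched as a simple lookup: each stack carries its own gateway and DNS, and traffic types bound to a dedicated stack use that stack's settings. The stack names loosely follow ESXi's built-in netstacks; the addresses are invented for illustration.

```python
# Sketch: why separate TCP/IP stacks matter. Each stack has its own
# default gateway and DNS, so vMotion can be routed differently from
# management traffic. All addresses below are made up.

netstacks = {
    "default":      {"gateway": "192.168.1.1", "dns": "192.168.1.53"},
    "vmotion":      {"gateway": "10.10.10.1",  "dns": "10.10.10.53"},
    "provisioning": {"gateway": "10.20.20.1",  "dns": "10.20.20.53"},
}

def gateway_for(traffic_type: str) -> str:
    # Traffic bound to a dedicated stack uses that stack's gateway;
    # everything else falls back to the default stack.
    stack = traffic_type if traffic_type in netstacks else "default"
    return netstacks[stack]["gateway"]

print(gateway_for("vmotion"))     # 10.10.10.1
print(gateway_for("management"))  # 192.168.1.1
```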
3. vSphere 7 Distributed Switch Concepts
In this video, we'll take a look at the vSphere Distributed Switch and talk about how it's different from the vSphere Standard Switch. We'll look at some scalability characteristics of the vSphere Distributed Switch, and we'll also take some time to examine private VLANs, LACP, Route Based on Physical NIC Load, and some other features that are specific to the vSphere Distributed Switch. Now, the primary benefit of the vSphere Distributed Switch is scalability. The vSphere Distributed Switch is only available with the Enterprise Plus licensing edition, and it provides a lot of features that a standard virtual switch does not. For scalability, here in this slide we see four ESXi hosts, and let's assume that we're using a vSphere Standard Switch. So on the first host, I create a virtual switch and a couple of port groups, and the process repeats itself every time I add a new ESXi host.
So if I want a matching configuration on some new host, I'm going to need to manually recreate that standard virtual switch configuration on each and every host. On each host, I'll also have to create VMkernel ports and associate physical adapters with the virtual switches. So this process is time-consuming, and it's also prone to error: when you manually create a large number of standard virtual switches, the chances of making a mistake somewhere along the way increase as you create more and more. Now, the goal of the vSphere Distributed Switch is to automate and centralise a lot of this process so that we don't have the same likelihood of human error.
Let's say that instead of a vSphere Standard Switch, we decide to go with a Distributed Switch. And what do I mean by this term "distributed"? Well, it really just means that the switch is created in one place, and then copies of that virtual switch are distributed to all of my ESXi hosts. The centralised copy of the vSphere Distributed Switch is contained in vCenter. So vCenter is the management plane for the distributed virtual switch, and it essentially gives us the feel of having only a single virtual switch to configure. Now, just to make it very clear: vCenter is the management plane, and no traffic flows through vCenter. If vCenter fails, our virtual machines don't lose connectivity. Our traffic flows through the data plane, which is actually made up of hidden virtual switches that vCenter distributes to all of our ESXi hosts.
So now we can create a distributed port group in vCenter, and it will be pushed out to all of those little hidden virtual switch instances that run on all of my hosts. When I create a distributed port group, I can configure settings like VLANs, security settings, and traffic shaping settings, and I only have to configure them once; those settings are distributed to all of my ESXi hosts. And again, not only does this speed up the process of creating these configurations, but it also greatly reduces the likelihood of human error. Another benefit is that if we have a specific security policy, it's much easier to ensure that all of our ESXi hosts are compliant with whatever network security policy we've defined. Now, one side note: VMkernel ports don't really change when you start using a distributed switch. Those are still managed on a per-ESXi-host basis.
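The management-plane/data-plane split described above can be sketched with a small class: port groups are defined once in the central object (playing the role of vCenter) and copied to a per-host proxy. The class and attribute names are invented for illustration; this is not the vSphere API.

```python
# Sketch of "configure once, distribute everywhere": a central switch
# definition pushed out to hidden per-host proxy switches.

class DistributedSwitch:
    def __init__(self, name: str):
        self.name = name
        self.port_groups = {}   # the single central definition (management plane)
        self.hosts = {}         # per-host proxy switch copies (data plane)

    def add_host(self, hostname: str):
        # A new host receives a copy of the current configuration.
        self.hosts[hostname] = dict(self.port_groups)

    def create_port_group(self, pg_name: str, vlan: int):
        # Defined once centrally...
        self.port_groups[pg_name] = {"vlan": vlan}
        # ...then distributed to every host's proxy switch.
        for proxy in self.hosts.values():
            proxy[pg_name] = {"vlan": vlan}

vds = DistributedSwitch("Demo-VDS")
vds.add_host("esxi-01")
vds.add_host("esxi-02")
vds.create_port_group("App-PG", vlan=20)          # one configuration step
print(all("App-PG" in proxy for proxy in vds.hosts.values()))  # True
```

Note what the sketch deliberately leaves out: per-host VMkernel ports and uplinks, which, as the lesson says, remain unique to each host even with a distributed switch.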
So we still need to create a unique management VMkernel port for every host with its own unique IP address. That part of the process doesn't really change when we go from a vSphere Standard Switch to a vSphere Distributed Switch. Now, with a vSphere Distributed Switch, there is a huge list of features that are supported that are not supported on a vSphere Standard Switch. So let's take a look at a few of these, and then the next video will take a look at some more. The first feature I want to talk about is called private VLANs, which again are supported only on the vSphere Distributed Switch. We can use a private VLAN to isolate traffic within a VLAN. So let's break down our diagram. Here we see a vSphere Distributed Switch, and we've configured primary VLAN 10. Within that VLAN, we've configured a number of secondary VLANs.
On the far left, we see secondary VLAN 110; that's an isolated VLAN. In the middle, we see secondary VLANs 111 and 112; those are community secondary VLANs. And on the far right, we see secondary VLAN 10, which is a promiscuous secondary VLAN. Down a little bit lower, we have a bunch of IP addresses; those represent my virtual machines. And notice something about those IP addresses: they're all in the same address range. All of these VMs are still on the same primary VLAN, so they can all still be part of the same IP address range. What we're looking to accomplish with these private VLANs is to create some controls within VLAN 10 to govern which virtual machines within that VLAN are actually allowed to communicate. Maybe these virtual machines are owned by different departments or different tenants, and we need to create some level of isolation while still maintaining a contiguous addressing scheme. So let's start by looking at my VMs. On the left, we have two VMs that are in an isolated secondary VLAN. Any virtual machine in an isolated secondary VLAN can only communicate with promiscuous ports.
Let's watch that last animation one more time. The virtual machine on the left attempts to communicate with 10.1.1.11, and that's not allowed. Even though they're on the same secondary VLAN, it's an isolated VLAN, and so those virtual machines can't communicate with each other; they can only communicate with devices on the promiscuous VLAN. That's what an isolated VLAN is. Now, in the middle, we have a couple of community VLANs. On the left, we have 10.1.1.12 and 10.1.1.13 in secondary VLAN 111, and in blue, we have 10.1.1.14 and 10.1.1.15 in secondary VLAN 112. The effect of a community VLAN is that members of the same community VLAN are able to communicate with each other: VMs on secondary VLAN 111 can all communicate with each other. But if they try to communicate with virtual machines in some other community, that's not allowed. And of course, a community VLAN can also communicate with any VMs connected to a promiscuous secondary VLAN.
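The reachability rules just described (isolated reaches only promiscuous; community reaches its own community plus promiscuous; promiscuous reaches everything) can be captured in one small function. The secondary VLAN IDs below follow the diagram in the lesson.

```python
# Sketch of private VLAN reachability rules on a distributed switch.
# Each endpoint is a (type, secondary_vlan) tuple.

def pvlan_can_talk(a, b) -> bool:
    type_a, vlan_a = a
    type_b, vlan_b = b
    if "promiscuous" in (type_a, type_b):
        return True                       # promiscuous reaches everything
    if type_a == "community" and type_b == "community" and vlan_a == vlan_b:
        return True                       # same community: allowed
    return False  # isolated-to-isolated and cross-community are blocked

iso  = ("isolated", 110)
comA = ("community", 111)
comB = ("community", 112)
prom = ("promiscuous", 10)

print(pvlan_can_talk(iso, prom))    # True:  isolated may reach promiscuous
print(pvlan_can_talk(iso, iso))     # False: even within the same isolated VLAN
print(pvlan_can_talk(comA, comA))   # True:  same community
print(pvlan_can_talk(comA, comB))   # False: different communities
```

Note that the secondary VLAN never changes anyone's IP subnet: all of these endpoints can still share the primary VLAN's address range, which is the whole point of private VLANs.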
Think of the promiscuous secondary VLAN almost like your default gateway: that's the one secondary VLAN that everything is allowed to communicate with. Another feature of the vSphere Distributed Switch is the NIC teaming method called Route Based on Physical NIC Load, or, as you'll sometimes hear it called, load-based teaming. When we were looking at the vSphere Standard Switch lessons, we looked at NIC teaming modes like originating virtual port ID, source MAC hash, and IP hash. And all of those NIC teaming methods had something in common: they're not very intelligent. None of those methods can detect that a physical NIC is being overwhelmed and adjust accordingly.
Load-based teaming, or Route Based on Physical NIC Load, is a little bit more sophisticated than those prior methods. So here we see three virtual machines: VM one, VM two, and VM three. Each of those VMs is bound to a specific VMnic, or physical adapter. At the top, VM one is flowing through one VMnic, and VM two and VM three are both utilising the same VMnic towards the bottom of the diagram. Let's assume that VM two and VM three generate a lot of traffic; the second VMnic is likely to be very busy, significantly busier than the first. If a physical NIC's utilisation exceeds 75 percent, virtual machines will be migrated to a less busy physical adapter, with the goal of reducing the workload on the really busy physical NIC. Even so, each virtual machine will still only be using a single physical adapter at a time.
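The rebalance rule above can be sketched as: when an uplink stays above the 75 percent threshold, move one of its VMs to the least-loaded uplink. The data shapes and the choice of which VM to move are invented for illustration; ESXi's actual algorithm (including its measurement window) is more involved.

```python
# Sketch of load-based teaming's rebalance step.

THRESHOLD = 0.75  # utilisation fraction that triggers a migration

def rebalance(assignments, utilisation):
    """assignments: vm -> vmnic; utilisation: vmnic -> fraction of capacity."""
    busy = [nic for nic, load in utilisation.items() if load > THRESHOLD]
    for nic in busy:
        vms_here = [vm for vm, n in assignments.items() if n == nic]
        if len(vms_here) > 1:                     # only move if a VM can remain
            quietest = min(utilisation, key=utilisation.get)
            assignments[vms_here[0]] = quietest   # remap one VM's traffic
    return assignments

# VM2 and VM3 share a VMnic that is 90% utilised; VM1's is nearly idle.
assignments = {"vm1": "vmnic0", "vm2": "vmnic1", "vm3": "vmnic1"}
utilisation = {"vmnic0": 0.20, "vmnic1": 0.90}
rebalance(assignments, utilisation)
print(assignments)  # vm2 has been moved to vmnic0
```

Contrast this with the hash-based methods: here the mapping from VM to VMnic can change in response to measured load, but each VM still rides exactly one VMnic at a time, which is why no port channel is configured on the physical switch.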
And this means that when we configure our physical switch, we want to make sure not to enable EtherChannel, port channels, or LACP. We don't want any of those NIC teaming configurations on the actual physical switch. Speaking of which, LACP is a feature that is only available with the vSphere Distributed Switch. It's kind of like EtherChannel, if you're familiar with Cisco switches; it's a way to bond together multiple physical adapters and make them essentially act like one big pipe. So here we see two VMnics that are connected to a physical switch, and in order to enable LACP, we'll start out by configuring link aggregation groups. Link aggregation groups are essentially a way to identify the ports that are going to participate in LACP, and it's important that both sides match. Once this is configured, the two physical adapters can act as one large pipe, and we can take advantage of the many load-balancing methods that are built into LACP. On the right-hand side, we see a list of all of these different hashing algorithms. Look at all the different ways we can load-balance this traffic. This gives us a tonne of options and allows us to choose a method that's well suited to the type of traffic our virtual machines generate. And LACP is an open standard; it's supported by a wide variety of physical switch vendors.
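A link aggregation group's per-flow hashing can be sketched as follows. The policy names echo common LACP hashing options, but the hashes themselves are simplified stand-ins, not what a real switch or ESXi computes.

```python
# Sketch of a LAG: the bonded links behave like one pipe, and a
# configurable hash policy picks the member link for each flow.
import zlib

def lag_member(links, policy, frame):
    """Pick one member link of the LAG for a frame (a dict of fields)."""
    if policy == "src-mac":
        key = frame["src_mac"]
    elif policy == "src-dst-ip":
        key = frame["src_ip"] + frame["dst_ip"]
    elif policy == "src-dst-port":
        key = f'{frame["src_port"]}-{frame["dst_port"]}'
    else:
        raise ValueError(f"unknown hash policy: {policy}")
    return links[zlib.crc32(key.encode()) % len(links)]

links = ["vmnic0", "vmnic1"]
frame = {"src_mac": "00:50:56:aa:bb:cc", "src_ip": "10.1.1.5",
         "dst_ip": "8.8.8.8", "src_port": 49152, "dst_port": 443}
# The same flow always hashes to the same member, preserving frame order,
# while different flows can spread across both links.
print(lag_member(links, "src-dst-ip", frame))
```

The variety of `policy` values is the point the lesson makes: LACP lets you pick a hashing key (MACs, IPs, ports, or combinations) that actually varies across your traffic, so the load spreads well.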
So in review, we learned about the vSphere Distributed Switch. We learned how it's centrally managed by vCenter and how hidden virtual switches are distributed to all of the ESXi hosts; that's the data plane, so if vCenter fails, there's no impact on traffic. We can create distributed port groups that span many ESXi hosts and have identical settings, such as security settings or traffic shaping settings. VMkernel ports and uplinks still need to be configured on a host-by-host basis; those objects are unique to an individual host. We also looked at private VLANs and how they can be used to create logical segmentation within a VLAN. We learned about Route Based on Physical NIC Load, or load-based teaming, which can intelligently migrate traffic from one physical adapter to another based on workload. And then we learned about LACP, which can be used to bond multiple Ethernet connections together and provides a wide variety of load-balancing algorithms.
4. Demo: Create a vSphere 7 Distributed Switch
In this video, I'll demonstrate how to create a vSphere Distributed Switch in vSphere 7. Here you can see I'm at the home screen of the vSphere Client. I'm going to browse to the Networking area, and here under Networking you can see I've got a couple of port groups created. These are port groups for a vSphere Standard Switch, so at the moment I do not have a vSphere Distributed Switch created. Let's start by creating a new vSphere Distributed Switch. I'm going to right-click on my virtual data centre here, select Distributed Switch, and choose New Distributed Switch. I'm just going to call it RickDemo-VDS and click Next here, and then I'll choose the version of my Distributed Switch. So the big question here is: which versions of ESXi should this vSphere Distributed Switch be compatible with? If I've still got some older hosts, like some ESXi 6.7 or 6.5 hosts, then I should choose a version of the vSphere Distributed Switch that's compatible with those older hosts. But in my environment, all of my hosts are ESXi 7 or later.
So I'm going to choose that version. Just keep in mind that if you choose one of these older versions, the set of features included will be different. You'll notice here some of the new features that were released with version 6.5.0, like port mirroring enhancements, and with 6.6.0, like MAC learning; and in version 7 there are new features introduced as well, most notably the NSX distributed port group. Now, we're not going to spend time talking about NSX here, but that is an important new feature with the vSphere Distributed Switch in vSphere 7. So like I said, I'm just going to pick the version that's compatible with ESXi 7 and later, and I'll hit Next. And now I'll choose the number of uplinks. What you're really choosing here is how many VMnics, or physical adapters, per host you can configure for this vSphere Distributed Switch.
So bear in mind that this vSphere Distributed Switch is going to be created on many ESXi hosts, and each of those ESXi hosts has its own set of physical adapters, and the virtual machines on that particular host are going to use those physical adapters. So the uplinks define how many physical adapters per host I can possibly allocate to this vSphere Distributed Switch. So I'm going to leave the number of uplinks set at four. Note that that doesn't mean I have to dedicate four vmnics per host to this vSphere Distributed Switch; that's just the maximum. Network I/O Control allows me to prioritise certain traffic. I'm not going to configure that just yet, but I'll leave it enabled, and I am going to create a default port group to connect my virtual machines to. I'm just going to call it RickCrisci-Demo-PG, and then I'll go ahead and click on Next. So I'll go ahead and click on Finish here, and that's about it. Now I've created a vSphere Distributed Switch, and if I hit the refresh button here in my vSphere Client, I can see my new vSphere Distributed Switch in my inventory. I can see the current version of the vSphere Distributed Switch here.
And underneath the vSphere Distributed Switch, I've got my port group that I created here, and I also have a little section here for the configuration of the uplinks. At this point, I've created a vSphere Distributed Switch, but it's not really doing anything. I've created a logical construct in vCenter. That's what I've done: I've created a virtual switch that's not actually present on any of my ESXi hosts. So I've created the configuration for my vSphere Distributed Switch in vCenter. Now I need to distribute that configuration to my ESXi hosts. So I clicked on my vSphere Distributed Switch here, and I'm going to go to the Hosts tab, and as you can see, at the moment this vSphere Distributed Switch has not been configured on any hosts.
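Before moving on, the uplink setting chosen earlier can be pictured with a minimal Python sketch. This is purely illustrative (the class and names are my own stand-ins, not the vSphere API): the uplink count is a per-host ceiling on physical adapters, not a requirement.

```python
# Illustrative model of a distributed switch's uplink limit. The uplink
# count caps how many physical adapters (vmnics) each member host can
# contribute; a host may use fewer.

class DistributedSwitch:
    def __init__(self, name, uplinks=4):
        self.name = name
        self.uplinks = uplinks          # max vmnics per host
        self.host_vmnics = {}           # host -> list of assigned vmnics

    def assign_vmnic(self, host, vmnic):
        assigned = self.host_vmnics.setdefault(host, [])
        if len(assigned) >= self.uplinks:
            raise ValueError(f"{host}: all {self.uplinks} uplinks in use")
        assigned.append(vmnic)

vds = DistributedSwitch("Rickdemo-VDS", uplinks=4)
vds.assign_vmnic("host14", "vmnic1")    # one adapter per host is enough
print(vds.host_vmnics["host14"])        # ['vmnic1']
```

Leaving the default of four uplinks just leaves headroom; as in the demo, each host here contributes only a single vmnic.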
Now, before I start adding hosts to this vSphere Distributed Switch, I do want to take a quick look at the hosts that I currently have and examine their configuration. Specifically, I want to look at the physical adapters that are currently enabled on these hosts and what they're being used for. And so, as you can see here, this host has two physical adapters, one of which is in use by vSwitch0. The other one looks like it's available. Let's take a look at the second host, where we have a similar situation. So what I want to make sure of is that I'm using adapters that are available, and if I need to take an adapter away from an existing virtual switch, I also want to make sure that I'm watching out for my management VMkernel port. I don't want to disrupt the connectivity of my management VMkernel port as I'm adding these hosts to a vSphere Distributed Switch. If any of the changes I make disrupt this management VMkernel port, I'm going to lose the ability to manage this host from vCenter or from the host client.
But what we basically know for both of these physical hosts is that each of them has a vmnic that is currently unused, vmnic1. So let's go back to my vSphere Distributed Switch here, and I'm going to click on Hosts, and the list of hosts is currently empty. I'm going to choose Actions, and then I'm going to go to Add and Manage Hosts. So I'll just choose to add hosts and I'll click on Next, and then I'll pick the hosts that I want to add to this vSphere Distributed Switch. So I'm going to choose the two hosts that we just took a look at here, and I'll click OK, and these are the two new hosts that I'm going to add to my vSphere Distributed Switch. Next, I'm going to pick the physical adapters that I want to assign to the vSphere Distributed Switch.
So I'm going to assign vmnic1, which is going to be Uplink 1. And I'm going to apply this same uplink assignment to the rest of the hosts so that I don't have to do this over and over again. So, on my first host, I have vmnic1 assigned to an uplink; on my second host, the same thing. And you can also see here that it's showing me vmnic0 on each host, and that one is dedicated to vSwitch0. I could technically click on this, assign it to an uplink, and steal it from the switch that it's currently assigned to, but I'm not going to do that. So now each host has a vmnic, and vmnic1 is going to be Uplink 1 on the vSphere Distributed Switch. And now, do I want to do anything to my VMkernel ports? You can see here that I've got my management VMkernel port that currently exists on a vSphere Standard Switch on each of those hosts. Do I want to move that over to my vSphere Distributed Switch? The most important thing to remember here is that if I'm thinking about moving that VMkernel port, I need to make sure that my vSphere Distributed Switch is connected to the appropriate VLAN and that everything is configured properly, because I don't want to move this and then lose connection to that management port.
So I'm actually going to leave it as is. And if I did want to migrate that management port over to my vSphere Distributed Switch, what I would probably do is go ahead and test out that port group prior to moving it, just to make sure that it's actually working properly. So that's the process that I'll typically follow. If I'm creating a vSphere Distributed Switch, I'll get it all set up; I'll test out those port groups on the distributed switch; I'll make sure that they can connect to the appropriate VLANs; and then, and only then, will I start moving VMkernel ports over. So, let's click on Next here. And now, do I want to migrate any virtual machine networks? If I had virtual machines running on these ESXi hosts, I could select them here and migrate them over to my new distributed port group. So I'm just going to click Next here and click Finish. And now my vSphere Distributed Switch has been distributed to these two ESXi hosts. And that's really what the vSphere Distributed Switch is, right? It's a switch that we manage centrally in vCenter, but it's also been distributed down to these two ESXi hosts.
So at this point, I've got my vSphere Distributed Switch. I've got two hosts that this distributed switch has been propagated to. I've got my port group, and at the moment, I don't have any virtual machines connected to that distributed port group yet. Now, just for the purposes of demonstration, what I'm going to do here is go to my distributed switch and actually remove one of these hosts from the vSphere Distributed Switch. You cannot do this if you have virtual machines on the host that are connected to a distributed port group on this vSphere Distributed Switch; the removal will fail. So step one, if I want to get a host off of the vSphere Distributed Switch, is to take all of the virtual machines on that host and move them over to a port group that is not on the vSphere Distributed Switch. But in this case, I don't have any VMs running on either of these hosts. So I can just easily go ahead and, under Actions, go to Add and Manage Hosts, where I can remove hosts from my vSphere Distributed Switch. And I'll just go ahead and pick which attached hosts I want to remove here. I'll pick the second host; we'll call it host15.
I'll click OK, then Next, and I'm going to remove one host. I'll click Finish here, and this should go successfully: the host should be removed from the vSphere Distributed Switch. And now it is. So that's how I can remove a host. I'm going to just quickly add it back in one more time. And I could always add more hosts after the fact at any time by just following the same simple process: adding the hosts, picking the vmnics that I want to assign and which uplink I want to assign them to, choosing which VMkernel ports I want to migrate (in this case, none), and which virtual machines I want to migrate. And so now we'll just add that host back in.
All right, so now I've created a new virtual machine. I just called it demo VM. And if you're following along at home, I didn't even install an operating system or anything like that. I just powered it on and created this barebones, basic virtual machine because I want to show you how to migrate this virtual machine from one port group over to my distributed port group. So here you can see it's connected to my VM Network, which is a port group on a vSphere Standard Switch. There are a couple of different ways that I can go about migrating this VM to my new port group. I can right-click it, go to Edit Settings, and under Network Adapter 1, I can choose which port group I want to connect that network adapter to, and I can choose my distributed port group there. So that's basically the same thing as unplugging my virtual machine from one network and plugging it into another: I'm connecting this virtual machine to a different port group. So that's one way that I can do this. Here's the other way.
So here in the networking view, I've got some VMs that are on the VM Network. I can right-click that VM Network port group and migrate VMs to another network. You can see here that the source network is the VM Network; the destination network is going to be my distributed port group. And so if I want to move multiple virtual machines simultaneously, I can just check each VM that I want to move to the distributed port group. I'll click Next here and Finish, and it can do a mass migration of virtual machines over to this port group.
Now, you can see I actually have a VM that's connected to my RickCrisci-Demo-PG port group, which is my distributed port group. And now that we've got a VM running on host14 here, let's see what happens if I try to remove that host from this vSphere Distributed Switch. So I'm going to go to Add and Manage Hosts, choose Remove Hosts, and try to remove host14. It should not work. I'll go ahead and click Next here, then Finish, and you can see it immediately fails. That's because I have a virtual machine running on that host that's connected to this distributed switch.
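The removal rule demonstrated above amounts to a simple precondition: a host can leave the distributed switch only if no VM on it is still connected to one of the switch's distributed port groups. Here is an illustrative Python sketch of that check; the function and data shapes are my own assumptions, not the vSphere API.

```python
# Sketch of the host-removal precondition. host_vms maps each VM name on
# the host to the port group its network adapter is connected to;
# dvs_port_groups is the set of distributed port groups on the switch.

def can_remove_host(host_vms, dvs_port_groups):
    blockers = [vm for vm, pg in host_vms.items() if pg in dvs_port_groups]
    return len(blockers) == 0, blockers

# A VM still attached to a distributed port group blocks removal:
ok, blockers = can_remove_host({"demo-vm": "Demo-PG"}, {"Demo-PG"})
print(ok, blockers)   # False ['demo-vm']

# Move the VM to a standard-switch port group first, and removal succeeds:
print(can_remove_host({"demo-vm": "VM Network"}, {"Demo-PG"}))  # (True, [])
```

This mirrors the fix shown in the demo: migrate the VM to a port group outside the distributed switch, then retry the removal.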
5. Demo: Configure Distributed Port Groups
So here we are at the vSphere Client, and I'm going to go to Networking. Here's my vSphere Distributed Switch, and I'm going to click on RickCrisci-Demo-PG; here's the port group that I created in the last video. And I'm just going to right-click this port group that I created and go to Edit Settings. And under Edit Settings, there are some simple things I can do. I can rename the port group, so I can change the name of the port group if I want. I can also choose the port binding method. So 99.99% of the time, you want to go with static binding here. What that means is that when you boot up a virtual machine and connect it to this vSphere Distributed Switch, the vSphere Distributed Switch creates a port for that VM, and that VM utilises that port consistently. Each VM has its own port that it uses, and those ports don't need to be recreated or rebuilt or anything like that.
As virtual machines connect or disconnect, each VM just keeps its own port. That's static port binding. Ephemeral port binding has certain use cases. For example, if vCenter is going to be down for an extended period of time, ephemeral port binding can be used to allow new VMs to connect. But the vast majority of the time, like I said, you should be using static port binding. And as you can see here, we're only creating eight ports on this distributed port group. So I might have more than eight virtual machines, but that doesn't really matter here because we're choosing elastic port allocation. And what elastic port allocation means is that this distributed port group is going to be created with a total of eight ports.
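A minimal sketch of what elastic allocation does with those eight ports, as an illustrative Python class rather than the real vSphere internals (the growth increment of eight is an assumption for the example):

```python
# Illustrative model of elastic port allocation: the port group starts
# with a fixed number of ports and grows when they are exhausted.

class ElasticPortGroup:
    def __init__(self, name, ports=8):
        self.name = name
        self.free = ports               # unused ports
        self.total = ports              # ports created so far
        self.connected = []             # VMs attached to this port group

    def connect(self, vm):
        if self.free == 0:              # exhausted: stretch like a rubber band
            self.total += 8
            self.free += 8
        self.free -= 1
        self.connected.append(vm)

pg = ElasticPortGroup("Demo-PG", ports=8)
for i in range(10):                     # connect more VMs than initial ports
    pg.connect(f"vm{i}")
print(pg.total, len(pg.connected))      # 16 10
```

Ten VMs against eight initial ports simply triggers one growth step, which is why the default of eight can be left alone.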
When those eight ports are exhausted, it is going to go ahead and create more ports. It's elastic. So think of a rubber band: it stretches or contracts to meet the changes in demand. So if I have more virtual machines, it's going to dynamically create more ports to accommodate those virtual machines. So even if I'm going to have more than eight VMs connected to this, I can leave the number of ports at eight because I've chosen elastic port allocation. That's always going to ensure that my virtual switch and my distributed port group have enough ports for all the VMs running on them. The next setting that we see here is the network resource pool. We'll talk a little bit more about that when we get into Network I/O Control. So next on the list, let's take a look at VLAN. I'm going to skip over the advanced policies for the moment, but these are really things like: do I want to allow individual ports to override these port group policies? Do I want to allow specific ports to be blocked, or things like that? We're not going to mess with any of these advanced settings at the moment. What we may want to configure, though, is the VLAN type. So the most common use here is to just simply configure a VLAN, and with our VLAN, what we're doing here is something called virtual switch tagging, which means if I connect a virtual machine to this port group, the traffic flows in from the VM to this port group, and at that point, the traffic can flow within the assigned VLAN. So let's say it's VLAN 20.
All of the VMs connected to this virtual switch that are on VLAN 20 will be able to communicate with each other. If I have other port groups that are on VLAN 10, let's say, the VMs on VLAN 10 and the VMs on VLAN 20 cannot communicate with each other by default. So VLANs are a way for me to segment switches and to ensure that only the appropriate VMs can actually communicate with each other. So what does this mean for me overall? There are going to be times when I have VMs on one VLAN that need to communicate with VMs on another VLAN. So what I can do is send that traffic out of this vSphere Distributed Switch to a router.
And so here is our router. Now, a router is a layer 3 network device that can take traffic that's on one network segment and route it to another network segment. So let's say that in this scenario, we have certain VMs on VLAN 10; maybe those are in the IP address range of 10.1.1.0/24. Now, I've got another group of VMs that are on VLAN 20, and those are on a different address range; let's call it 192.168.10.0/24. So I have these two distinct groups of VMs with distinct VLANs and port groups that, by default, cannot communicate with one another via the virtual switch. So now let's say that one of the VMs on VLAN 10 tries to communicate with a VM on VLAN 20, and that traffic is going to flow into the distributed switch. The distributed switch is going to analyse that traffic and determine that the destination MAC address is the MAC address of the default gateway. That's the role of our default gateway: it's there to receive traffic and point it towards the right network. So my virtual machine was smart enough to say, "Hey, this traffic is for some other network; let me send it to my default gateway."
That's my router. So as the traffic hits this port group, the port group is going to tag that traffic with a VLAN identifier. And as the traffic flows into the physical network, the physical network will see that all this traffic is on VLAN 10: it sees a VLAN tag, a VLAN identifier, and that VLAN identifier was applied by the vSphere Distributed Switch. And eventually the traffic can hit the router, which also has an interface connected to VLAN 10. The router can analyse the destination IP of that traffic and route it onto VLAN 20, where it can eventually reach its destination. On our vSphere Distributed Switch, that's virtual switch tagging; that's VLAN tagging. And so what I can do is put different port groups on different VLANs. And in doing so, the only way that the VMs on those different port groups can communicate with one another is through this router.
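The segmentation rule just described can be boiled down to a small, hypothetical Python sketch; the function and its parameters are illustrative assumptions, not anything from the vSphere API:

```python
# Illustrative rule only: VMs on the same VLAN reach each other through
# the switch directly; VMs on different VLANs can only reach each other
# if a router has an interface on both VLANs (modelled as a set of
# routed VLANs).

def can_communicate(vlan_a, vlan_b, routed_vlans=frozenset()):
    if vlan_a == vlan_b:
        return True                     # same VLAN: switched directly
    # different VLANs: only reachable via a router joining both segments
    return vlan_a in routed_vlans and vlan_b in routed_vlans

print(can_communicate(10, 10))                         # True
print(can_communicate(10, 20))                         # False
print(can_communicate(10, 20, routed_vlans={10, 20}))  # True
```

The `routed_vlans` set stands in for the router's interfaces; in practice, access lists or firewall rules on that router would further restrict which of these routed paths are actually allowed.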
And so I can create things like access lists or firewall rules at this router to control which VLANs can communicate with one another. That's the whole point of VLANs: to segment traffic and force it to flow through a layer 3 device. Okay, so now that we've completed a five-minute crash course on what VLANs are, I can choose the VLAN that I want to associate with this port group. I can also configure VLAN trunking. And what VLAN trunking means is that we're going to use something called virtual guest tagging. The virtual switch itself is not going to assign a VLAN to this traffic. If a VM connects to this port group, it's actually going to be the responsibility of the guest operating system of that VM to identify the VLAN. So I'll go into Windows, and I'll assign VLAN 20 to a network interface card. The virtual switch is set up in VLAN trunking mode, which basically means that the virtual switch is not going to mess with that VLAN tag; it's going to just pass it through. So that's VLAN trunking. Or we can set up private VLANs, and we'll get into those later. So that's one of the most important policy settings of a port group: which VLAN do we want to assign to it? And in the next video, we'll take a look at the security settings of a port group.
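To close out, here is a hedged sketch contrasting the two tagging modes covered above, virtual switch tagging (VST) and virtual guest tagging (VGT). The function name and mode strings are assumptions made for this illustration, not vSphere settings.

```python
# Illustrative contrast of the two tagging modes: under VST the port
# group stamps its own VLAN ID on frames; under VGT (a trunking port
# group) the guest OS tags the frame and the switch passes it through.

def egress_tag(frame_vlan, pg_mode, pg_vlan=None):
    """frame_vlan: VLAN tag set by the guest OS, or None if untagged."""
    if pg_mode == "VST":                # virtual switch tagging
        return pg_vlan                  # port group applies its VLAN ID
    if pg_mode == "VGT":                # virtual guest tagging (trunking)
        return frame_vlan               # guest's tag passed through untouched
    raise ValueError(pg_mode)

print(egress_tag(None, "VST", pg_vlan=20))  # 20: the switch applied the tag
print(egress_tag(30, "VGT"))                # 30: the guest's tag preserved
```

In other words, with VST the VLAN is a property of the port group, while with VGT it is a property of the guest's network configuration.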