Pass VMware 2V0-41.20 Exam in First Attempt Easily
Latest VMware 2V0-41.20 Practice Test Questions, Exam Dumps
- Premium File 92 Questions & Answers
Last Update: Mar 17, 2023
- Training Course 65 Lectures
1. About this Section
This section covers networking basics and the Spanning Tree Protocol. Now, if these are concepts that you're already very comfortable with, you can skip this section. But I wanted to include these lessons just in case you're coming to this course with very little networking background. That way, you'll have an introduction to some of the basic concepts that you're going to need to successfully complete this course. So, like I mentioned, if you already have strong prior knowledge of things like ARP requests, layer two broadcasts, and IP network basics, feel free to skip this section and move on to the next one.
2. The OSI Model
In this video, we're going to learn about the OSI model. And if you've dealt with networking for any period of time, you've probably had some exposure to the OSI model. Now, a lot of people don't really love to deal with the OSI model or think about it. But once you demystify what the OSI model means and get used to working with it, you'll find it really is an essential tool for talking about networking protocols. All the OSI model really is, is a logical representation of how devices on the network communicate with each other. The job of the model is to take all of the functions that have to be performed by the components connected to the network and break them down into simplified layers. So let's start out with the very basics. What does OSI stand for? It stands for the Open Systems Interconnection model, and the model includes seven layers. And what I want to add here is a little bit of an additional layer that I think of as sitting at the top of the OSI model. So here we see the application, presentation, and session layers, and on down the line toward the physical layer, which is the actual physical media, like a copper wire or fiber-optic cable. I like to put one additional layer at the top of the model, and I call it simply the user layer. At the top of my model, that's where the user resides. So as information flows up through the OSI model, it's getting closer to my user actually seeing and utilizing that data. Now that we have our user at the top of the OSI model, let's look at the layer below that: the application layer. And this is a really simple layer. This is what the user is going to use and see; an application like Outlook for email. This is what the user will actually interact with to receive the data. The next layer is the presentation layer.
The presentation and session layers are actually pretty closely linked to each other. Think of these as the things that my operating system needs to do to communicate with some other operating system. For example, I could have Windows running on my machine, and I may be establishing a session with another Windows machine. That's what the presentation and session layers do. And honestly, in this video, we're not super concerned with these top three layers: application, presentation, and session. These are the things that are happening within my computer, and we're not so worried about that from a networking perspective, so we don't spend a lot of time on them. So that's the presentation and session layers in a nutshell: they're how my operating system talks to other machines. And now we get into the layers that we're more concerned with when it comes to networking: the transport, network, data link, and physical layers. The transport layer involves ports and protocols; things like FTP, SSH, and HTTP are each associated with well-known transport-layer ports. And what the transport layer governs is: how is my data actually moved from point A to point B? Is it going to be reliable? If so, we're talking about TCP. Is it going to be unreliable? If so, we're talking about UDP. We have these different protocols because different types of traffic have different needs. So, for example, let's talk about an unreliable type of traffic. Let's say we're streaming video through Netflix. Well, we don't need to guarantee the reliable delivery of that data. Let's say that a packet is dropped or lost. We're not going to go back and retransmit that packet, because at that point it's too late, right? The time has already come and gone for that packet. So in this case, we don't need reliable delivery of data.
So we're going to go with an unreliable layer four protocol, and that's suitable for certain types of data. However, for other types of data that might not be suitable; we may need to ensure that every last bit of data that gets transmitted is received. And that's where we're going to use TCP: the reliable delivery of data. TCP involves significantly more overhead. With TCP, the two nodes that are communicating with each other are going to use tools like checksums and acknowledgments to ensure that everything the sender has sent is actually received by the receiver. This involves more overhead and more verification, and therefore it's a little bit less efficient. But for certain protocols we want to guarantee that level of delivery. And we'll see this manifested in a variety of well-known ports, such as SSH on port 22 and FTP on port 21. There is a wide variety of well-defined port numbers, and you can also utilize custom port numbers for your own applications. The next layer we'll take a look at is the network layer. This is layer three, and this is the layer where things like IP addresses and routers reside. Every device on a network has a unique identifier called an IP address. And we may send packets to things like a default gateway, which is commonly a router. As we work our way through this course, you'll learn a lot more about IP routing, layer three traffic, and the actual functions that a router performs. But that's what we're talking about when we talk about layer three: IP addresses and routers. Layer two is our data link layer, and that layer is commonly associated with Ethernet. So in the data link layer, we're talking about things like Ethernet switches, protocols like ARP and CDP, and our local area network: the ability to connect devices on the same network with a switch. Now the final layer, the lowest layer on our OSI model, is the physical layer. And this is the physical stuff, right? It's things like cables.
If you have a microwave wireless transmitter, that's the physical layer. If you have a fiber-optic cable, or if you just simply have an Ethernet cord connecting your computer to a switch, that's the physical layer. So if something happens at the physical layer, we're typically talking about things like a cable that's been cut or some other kind of actual physical component failure. Those are the things that create issues at the physical layer. Okay, so now that we've defined the seven layers and given you a really basic understanding of what is occurring at each layer, let's talk about some of the failures and some of the terminology that is used to associate failures with these seven layers. And we're going to work our way from the bottom up, because if I were troubleshooting an actual network, that's what I would do. If I have a network problem, I'm going to start by eliminating the physical layer as a potential issue. And honestly, more often than not, that's where you're going to find the problem. So what sorts of problems can occur at the physical layer? Well, at the physical layer, we're talking about something physical happening, like somebody having unplugged the wrong cable or somebody having accidentally cut a cable that wasn't supposed to be cut. Or, if we're talking about wireless, it's common to see things like interference. Even with Ethernet, that's why the cabling is shielded: other electrical devices can interfere with our cables. So if we're having a layer one problem, that means something's wrong at a physical level. Moving up a layer, what sort of failures can occur at the data link layer? We have a few listed here, so let's work our way through them. On modern switches, we will almost always have a VLAN associated with a port on the switch. So you're going to plug a device into a switch, and that device will be associated with a certain VLAN.
We're going to have a video on VLANs coming up shortly, but essentially, here's how it works. If one virtual machine is in VLAN 10, let's say, and another virtual machine is in VLAN 20, they won't be able to communicate with each other over layer two. It's a way to break my switch up into segments and make it almost feel like multiple switches. There are certain machines that we don't want communicating directly over the switch; I'd like to force their traffic through a firewall or something like that. That's what we accomplish with a VLAN. So if we've improperly configured a VLAN, or if we've configured a port in the wrong mode, those are data link layer issues. We can also have broadcast traffic on a layer two network. The most common form of broadcast traffic is ARP, which is where one device tries to discover the MAC address of another device. And if I have loops in my topology, that's going to result in something called a broadcast storm, where broadcast traffic just goes around and around infinitely from one switch to another until it brings our network to a screeching halt. We'll learn about the Spanning Tree Protocol in an upcoming video, and that'll help us ensure that these loops do not exist in our layer two network. So those are some layer two types of problems. And just a quick reminder: the address at layer two is the MAC address. The MAC address, short for Media Access Control address, is hard-coded into our network interface cards. So whether we're talking about a virtual machine or a physical machine, there is a unique MAC address assigned to that device. Now, how about the network layer? All sorts of problems can occur there. The most common problem is a duplicate IP address. Maybe I've got two virtual machines, physical machines, or whatever, that are on the same network and have been configured with the same IP address. That's not going to work.
Every machine on a network has to have a unique identifier; the IP address has to be unique. So if we have IP conflicts, that's going to be a layer three problem. Or, if we have a misconfigured router, perhaps with entries missing from its routing table, those are layer three issues as well. The next layer we'll talk about is the transport layer. And honestly, you're probably not going to run into a whole lot of issues here, right? At the transport layer, we don't often see a whole lot of misconfigurations or problems. As a matter of fact, I'm struggling to come up with a great example of a layer four problem that I've ever had to fix. Basically, as a networking person, you're not going to have to mess around a whole lot with layer four. And the same goes for layers five, six, and seven. If you're a programmer or an application designer, then yes, layers four through seven are really important to you. But from our perspective as networking people, we're very highly focused on layers one through three. Those are the layers that are really critical from a networking perspective, and they're very important for understanding NSX and network virtualization as well. So that's the area that we will focus on moving forward in this class.
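The TCP-versus-UDP distinction described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the course: it runs both protocols over the loopback interface, and the names `udp_roundtrip` and `tcp_roundtrip` are made up for this example. Notice that UDP just fires a datagram with no setup, while TCP has to establish a connection first.

```python
import socket

def udp_roundtrip(payload: bytes) -> bytes:
    # UDP: no handshake and no delivery guarantee -- the sender just emits a datagram.
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(("127.0.0.1", 0))          # let the OS pick a free port
    addr = recv_sock.getsockname()
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.sendto(payload, addr)           # fire and forget
    data, _ = recv_sock.recvfrom(4096)
    send_sock.close()
    recv_sock.close()
    return data

def tcp_roundtrip(payload: bytes) -> bytes:
    # TCP: listen/connect/accept handshake first, then a reliable byte stream.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    addr = server.getsockname()
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(addr)                      # three-way handshake happens here
    conn, _ = server.accept()
    client.sendall(payload)                   # retransmission/acks handled by the stack
    data = conn.recv(4096)
    for s in (client, conn, server):
        s.close()
    return data

if __name__ == "__main__":
    print(udp_roundtrip(b"datagram"))
    print(tcp_roundtrip(b"stream"))
```

On loopback both arrive, of course; the difference only shows up on a lossy network, where the UDP datagram can silently vanish while TCP retransmits until the data is acknowledged.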
3. Ethernet Basics
In this video, I'm going to take you through some of the basics of Ethernet layer two networking. And we're going to take a little trip back in time and think about how Ethernet has evolved over the years; that'll really help us gain a better understanding of where we are right now. Some of you might be old enough to remember this, and I'm probably exposing how old I am by covering it, but when Ethernet first came out, it used something called carrier sense multiple access with collision detection, or CSMA/CD. That was the first iteration of this type of layer two network. So let's talk about how that worked. With CSMA/CD, we'd have a long shared cable, like a coaxial cable, the kind you hook up your cable TV to. And when I wanted to deploy a new node on my network, I would connect it to that cable using something we used to call a vampire tap. Here's my new machine, and another machine, and another machine. I'm connecting all of these machines to my network simply by tapping into this shared wire, the shared physical cable that all of these workstations are using. So here I've labeled my four stations: one, two, three, and four. They're all using vampire taps; they're tapped into this shared wire. And when one of these nodes wanted to communicate on the network, here's the process it would go through. Let's say node one had some sort of traffic that it needed to transmit. Well, node one would basically shout onto the network, "Hey, I've got something I need to send," and then go ahead and send it. But if node three just so happened to be sending information at the same time, we would have what's called a collision. Two machines weren't allowed to transmit at the same time like this. And this is why we needed carrier sense multiple access with collision detection.
Our network would need to be able to detect the fact that two machines had tried to communicate at the same time. What would happen then is that machine one and machine three would each set a random backoff timer, and eventually they would try to transmit that data again. Okay, so that was the starting point for Ethernet. With that kind of configuration, collisions were pretty prevalent, and we ended up with a lot of overhead. It's just not an efficient way to transmit traffic across a network, so we had to come up with something better. It also wasn't really efficient to have to tap into this shared wire every single time you needed to connect a new machine. And so along came hubs, which were kind of like the next generation of local area networks. Hubs changed the way our network looked. Instead of having the shared wire down the middle of our network, with a hub what we now have is a box that looks kind of like a switch, with all of these ports that my machines can plug into. So now when I go and deploy a new host, I don't have to tap into a wire. I can simply connect my new host system to a port on that hub, and my machine will be able to communicate. But I still had the same problem with collisions, right? If I have two machines connected to a hub and they attempt to communicate at the same time, I'm going to have a collision. So that led to the need for something better yet again. And what we came up with is the Ethernet switch. The key to switching is something called the MAC address table. Each of the machines that we see here is going to have a unique MAC address. Every machine on an Ethernet network has a unique MAC address, and only one machine can have that MAC address.
On our old physical computers, the MAC address was burned into the physical adapter and was not changeable. And you can think of our virtual machines much the same way: they have a MAC address assigned, and it cannot be changed. So for the moment, let's just think of this in terms of physical computers. I have two physical computers plugged into a switch; the first computer has MAC address one, and the second computer has MAC address two. And let's say that these two machines want to transmit some traffic at the same time, and that they both want to communicate with a machine that has MAC address three. So let's add one more machine to our equation here: a third machine, with yet another unique MAC address. Both of these machines want to communicate with this device that has MAC three. With the previous generations we've seen, the hub or that long shared cable, this would have resulted in a collision: if both MAC one and MAC two tried to transmit at the same time, we would have had a collision. What a layer two switch does is utilize a MAC address table to avoid those collisions. So let's say that the first machine, whichever machine has MAC one, goes to transmit this data, and the destination address is MAC three. That data is going to flow into my Ethernet switch. The Ethernet switch will receive it, and it will store that layer two frame in memory (or at least that's how switches used to work; modern switches don't really do it that way anymore). It will look up the destination address, which is MAC three, and say, "Okay, MAC three is located on this port." So it'll take that Ethernet traffic and switch it over to that port, and the traffic will reach its destination. Meanwhile, the machine that has MAC two could potentially be transmitting data at the exact same time.
And that frame will be received into memory as well. The switch will then look up the destination MAC address in its MAC address table and forward the frame to the appropriate port. So in this scenario, our MAC address table will look something like this. You can see I've numbered all of the ports on my switch, one through seven. As these devices send and receive traffic, the switch will start to build its MAC address table. Let's say, for example, that the machine with MAC address one transmits some Ethernet frames. When a frame arrives at the switch, the switch will say, "I just got a frame from MAC one on port six." So now that this traffic has come in from MAC one, and the switch saw that traffic arrive on port six, the switch recognizes that the device with MAC one is connected via port six, and it adds that entry to its MAC address table. From now on, if it ever gets a frame destined for MAC one, it will know, "Hey, I have to send that frame out port six." It'll learn MAC two the same way: if a frame enters the switch on port five from source MAC two, it will associate MAC two with port five, and likewise MAC three with port three. It builds up this MAC address table so that it has the ability to receive a frame, store it, compare it against the MAC address table, and then forward it out the appropriate Ethernet port. That's the most basic function of the layer two switch. A layer two switch also performs many other functions, involving things like ARP and spanning tree, and we'll learn about those in upcoming lessons. But at the most basic level, that's what my layer two switch does.
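The MAC-learning behavior described above can be sketched as a tiny Python class. This is an illustrative model only (the class name, the seven-port default, and the MAC strings are invented for the example): learn the source MAC on the ingress port, then either forward out the one known port or flood out all other ports.

```python
class LearningSwitch:
    """Toy model of a layer two switch's MAC address table."""

    def __init__(self, num_ports: int = 7):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
        # Learn: the source MAC was just seen on in_port.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes out exactly one port...
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # ...an unknown destination is flooded out every port except the ingress.
        return [p for p in range(1, self.num_ports + 1) if p != in_port]

if __name__ == "__main__":
    sw = LearningSwitch()
    print(sw.receive("mac-1", "mac-3", in_port=6))  # unknown dest: flooded
    print(sw.receive("mac-3", "mac-1", in_port=3))  # learned dest: one port
```

After the first frame, the switch knows MAC one lives on port six, so the reply from MAC three goes out only that port instead of being flooded.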
4. Maximum Transmission Unit (MTU)
In this video, I'll explain the concept of the maximum transmission unit, or MTU, and the effect it has on the performance of our network. Let's start out by taking a look at an example scenario with an ESXi host. We have an ESXi host and a virtual machine running on that host, and that virtual machine is going to generate some sort of network traffic. So here on the left side of the screen we have our ESXi host, and within that ESXi host I have some virtual machines running. Let's create a virtual machine here; my VM is going to generate some network traffic that's going to be sent out of the host and into the physical network. Here we see the ESXi host is connected to a physical switch, and as VM one generates that traffic, we're going to have an MTU configuration on the virtual switch. The MTU specifies how many bytes an Ethernet frame can carry. So if we set our MTU at, let's say, 9000, we're going to have really large frames, but fewer of them. And that's really the benefit of a higher MTU: we don't send any less data, but we send fewer frames, and every Ethernet frame needs a source and destination MAC address, plus other overhead associated with generating those frames. Fewer frames means less overhead. That's the big benefit of a higher MTU: we can send fewer, larger frames. However, it doesn't come without its drawbacks, and one of the most important things is to make sure that we configure the MTU consistently across our network. So in this case, let's assume that the physical switch has been configured with the standard MTU of 1500. That MTU is significantly lower than what we've configured on our ESXi host's virtual switch. The result is that we're going to have these really large frames hitting the physical switch, frames that are larger than the physical switch can actually handle.
As these large frames leave the ESXi host and hit the physical switch, they're going to be jumbo frames, really big frames, and the physical switch is going to essentially say, "That's too big for me; I can't handle a frame that large." What will happen at the physical switch is that it will simply drop that frame. It can't do anything with a frame that exceeds the configured MTU, so the frame gets dropped. If we're talking about a physical router, on the other hand, a router has the ability to fragment and reassemble packets. Basically, what a physical router will do is break that large packet up into smaller pieces, and it will have to individually add headers to each of those smaller pieces. So even on a router it's still a big problem, because of the massive amount of extra resources required to fragment and reassemble every large packet that's received. I've always equated this to having four guitars that I want to ship to a friend of mine who lives in California. I take those four guitars, package them all up in one really big box, and put it out on my driveway while I wait for the mail carrier to show up. He comes and looks at it and says, "That box is too big; I can't fit it in the back of the mail truck. We're going to have to open up that box, put each of those four guitars into an individual box, put new addresses on each of those individual boxes to make sure they all make it to the right place, and then I can put them all in my truck." The process of taking all those guitars out, reboxing them, and readdressing them is going to be really lengthy. And if the mail carrier has to do that at every house, he's not going to get much done. That's kind of like my physical router.
In this scenario, if the router is constantly breaking packets up into smaller chunks, adding new headers, and reassembling them, that's going to really hamper its performance. The important takeaway when we talk about maximum transmission units is to make sure the MTU is configured consistently across your network.
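The "fewer, larger frames" argument above can be put into rough numbers. This is back-of-envelope arithmetic, not from the course: it assumes about 18 bytes of Ethernet header and trailer per frame (destination MAC, source MAC, EtherType, and frame check sequence), and it ignores preamble, inter-frame gaps, and IP/TCP headers.

```python
import math

FRAME_OVERHEAD = 18  # assumed bytes per frame: dst MAC + src MAC + EtherType + FCS

def frames_needed(payload_bytes: int, mtu: int) -> int:
    # Each frame carries at most `mtu` bytes of payload.
    return math.ceil(payload_bytes / mtu)

def overhead_bytes(payload_bytes: int, mtu: int) -> int:
    # Total framing overhead for shipping the payload at this MTU.
    return frames_needed(payload_bytes, mtu) * FRAME_OVERHEAD

if __name__ == "__main__":
    payload = 1_000_000  # send roughly 1 MB of data
    for mtu in (1500, 9000):
        print(f"MTU {mtu}: {frames_needed(payload, mtu)} frames, "
              f"{overhead_bytes(payload, mtu)} bytes of framing overhead")
```

With these assumptions, a 1 MB transfer takes 667 frames at an MTU of 1500 but only 112 frames at 9000, so jumbo frames cut the framing overhead (and per-frame processing) by roughly a factor of six, as long as every hop supports the larger MTU.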
5. Ethernet Broadcasts
In this video I'll explain layer two broadcast traffic and how it impacts the performance of either a physical or a virtual switch. There can be a variety of different reasons that a virtual machine or physical machine generates a layer two broadcast. The most common reason is an ARP request, and I'm going to do a video on that as well. But let's just focus on what actually happens when a layer two broadcast is generated. With Ethernet at layer two, and with IP at layer three, there are special addresses called broadcast addresses. Basically, if you send a frame to the Ethernet broadcast address, that frame is going to be flooded to every single port that's part of that layer two network. So let's take a look at our diagram here. I've got a bunch of virtual machines; those little white blocks up at the top represent virtual or physical machines, it doesn't really matter which. And one of those machines sends a layer two broadcast to the physical switch. What the physical switch is going to do is receive that frame and take a look at the destination MAC. The destination MAC address is the layer two broadcast address, so the switch will forward that frame to every single port on that switch, and every single machine will receive a copy. This is like when you're shopping at the grocery store or some department store, and somebody comes over the PA system and says, "Attention, everybody: the following license plate number left its lights on in the parking lot." Everybody stops, waits for a moment, and listens to hear if it's their car. That's what a broadcast is like: you're sending something out to everybody, but really only one recipient cares about it. That's what ARP requests are like, and that's what a lot of broadcast traffic is like.
So ideally, we'd like to limit this broadcast traffic as much as we possibly can, especially ARP requests. Now let's say, for example, that our switch is actually connected to another switch as well. Either a physical or a virtual switch, again it doesn't really matter, is connected to another switch via some sort of inter-switch link; maybe an Ethernet connection or a trunk port connects these two switches together. In that scenario, the broadcast will actually flow from one switch to the other, because this is all still one layer two network. So if my broadcast originates from a machine connected to the switch on top, it will also be passed to the switch on the bottom, and again, every device connected to that switch will receive a copy of the layer two broadcast. Now let's make one more change to our topology. Let's add a router and one more switch, and this new switch is connected to the network via the router. So there's a router in between the two switches on the left and the switch on the right. What do you suppose happens to our layer two broadcast in this scenario? Well, the broadcast will be passed to the router. The router will receive that layer two broadcast because the router is also connected to that layer two network. However, that's the end of it. A router never forwards layer two broadcast traffic out to other network segments. So that's the limiting factor for layer two broadcasts in our physical or virtual network: when they hit a router, they stop there.
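The flooding behavior just described can be modeled as a small graph walk. This is a toy simulation with a made-up topology (the device names are invented for the example): switches relay a broadcast to all their neighbors, while a router receives the broadcast but never forwards it onward.

```python
def flood_broadcast(links: dict[str, list[str]], start: str) -> set[str]:
    """Return the set of devices that receive a layer two broadcast."""
    seen = set()
    queue = [start]
    while queue:
        dev = queue.pop()
        if dev in seen:
            continue
        seen.add(dev)
        if dev.startswith("router"):
            continue  # a router receives the broadcast but does not forward it
        queue.extend(links.get(dev, []))  # a switch floods to all its neighbors
    return seen

# Hypothetical topology: two switches bridged together, a third behind a router.
topology = {
    "switch-a": ["switch-b", "router-1"],
    "switch-b": ["switch-a"],
    "router-1": ["switch-c"],
    "switch-c": ["router-1"],
}

if __name__ == "__main__":
    print(sorted(flood_broadcast(topology, "switch-a")))
```

Starting the broadcast at switch-a reaches switch-b and router-1, but switch-c, on the far side of the router, never sees it, which is exactly the boundary the video describes.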
6. Spanning Tree Protocol (STP)
In this video, we'll look at the Spanning Tree Protocol and explain how it helps prevent switching loops in an Ethernet network. Here we see a diagram with a physical or virtual machine, it doesn't matter which, connected to a switch. These could be physical switches or virtual switches; they both basically operate the same way. So I've got some machines here that are connected to either a physical or a virtual switch. And let's say that one machine generates some sort of broadcast traffic, like, for example, an ARP request. My switch receives that broadcast traffic, and what it's going to do is flood it out of every single port on that layer two network, including this port here that connects to another physical or virtual switch. When that broadcast is received by the other switch, it'll go through the same process: it'll flood the frame out all ports, even ports that connect to other network devices. And you can probably see where I'm going with this: that frame is going to eventually end up getting flooded back to the original switch. If our switches are connected in parallel, that creates a loop. And this will essentially result in an endless cycle where the traffic just goes around and around and eventually brings my network to a screeching halt. So switching loops are definitely not a good thing; we want to ensure that our topology is loop-free. Ethernet frames are also not configured with a time to live. An IP packet has a time to live, but that time to live is only decremented on hops across routers. So an Ethernet frame can loop through switch after switch: one, two, three, four, five, six, seven, and so on. There's no limit to the number of hops that traffic can take, because there's no time to live to confine it. The way that we resolve this is by enabling what's called the Spanning Tree Protocol, and spanning tree guarantees a loop-free topology. So how does it do this?
We might have a physical loop in our network, so how does spanning tree prevent a switching loop from occurring? What it basically does is detect these loops in the physical network, and it picks certain connections to block, putting certain ports into the blocking state and breaking up those loops. So now when my broadcast comes from this machine, it flows out, it hits this physical switch, and that's where it stops. This port is blocked, and therefore the broadcast stops there and the loop is broken. Spanning tree does a great job of automatically finding these loops, blocking the appropriate ports, and ensuring that we don't have these broadcast storms. But it does so at a cost: the network connection that we crossed out here is not usable now. We've lost some bandwidth as a result of blocking certain ports. So spanning tree definitely comes with a cost, but in most cases the benefit outweighs the bandwidth loss, because if we do have layer two switching loops in our environment, it's just a matter of time before the network comes to a screeching halt. That's the purpose of spanning tree. Now, if you're connecting a physical switch to an ESXi host, there are a few things you should know about configuring it for vSphere. Let's say here's my ESXi host, and I'm connecting a physical switch to it. In that case, we want to enable a special spanning tree mode called PortFast on those ports. Normally, all of the ports on my physical switches have to go through a certain process when I connect something to a physical port: spanning tree needs to check and validate that the port is not creating a switching loop, and if it is going to create a switching loop, it'll get blocked, and that takes a little time. This is what we call spanning tree convergence, where the switches agree on the topology and the appropriate ports are blocked.
We don't need that to happen on the ports that we connect to our ESXi hosts. So we'll configure those ports in a special mode called PortFast, which skips all of those spanning tree states and allows the port to start up faster.
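The outcome spanning tree produces, a loop-free subset of links with the redundant ones blocked, can be sketched with a simple union-find pass. This is an illustration of the result, not of the protocol itself: real STP elects a root bridge and exchanges BPDUs to decide which ports to block, while this hypothetical helper just picks the first loop-free set of links it encounters.

```python
def block_redundant_links(switches, links):
    """Split links into a loop-free forwarding set and a blocked set."""
    parent = {s: s for s in switches}

    def find(x):
        # Union-find with path compression: follow parents to the set's root.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forwarding, blocked = [], []
    for a, b in links:
        ra, rb = find(a), find(b)
        if ra == rb:
            blocked.append((a, b))    # this link would close a loop: block it
        else:
            parent[ra] = rb           # merge the two halves of the topology
            forwarding.append((a, b))
    return forwarding, blocked

if __name__ == "__main__":
    switches = ["s1", "s2", "s3"]
    links = [("s1", "s2"), ("s2", "s3"), ("s3", "s1")]  # a triangle: one loop
    print(block_redundant_links(switches, links))
```

For three switches wired in a triangle, two links end up forwarding and one is blocked, which mirrors the diagram in the video: the redundant path costs us bandwidth but removes the loop.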