1. Layer 2 Bridging
And that’s the basic purpose of layer 2 bridging: it allows you to connect an overlay segment with a VLAN. You can have virtual machines or even physical servers in the same IP address range as the virtual machines that are connected to an NSX layer 2 segment. So now we can take these layer 2 networks and actually extend them to virtual machines in a VLAN, or even to physical servers. And this allows virtual machines that exist on a regular distributed switch to communicate with virtual machines on a layer 2 NSX segment without any routing requirement.
That’s important because you might want to do some kind of gradual migration where you have a bunch of VMs that are currently on a specific VLAN and you want to move them over to an NSX layer 2 segment. Now you can do that with a more gradual approach, because of the ability of a layer 2 bridge to extend that network, and you don’t have to readdress those virtual machines as you’re moving them onto a layer 2 segment. And by the way, layer 2 bridging supports vMotion as well. So if the layer 2 segment that your virtual machine is connected to is attached to a layer 2 bridge, you can still vMotion that virtual machine. So let’s take a look at page 40 of the NSX-T reference design guide. And again, the page numbers may change if you’re looking at a newer version of the NSX-T reference design guide. But basically, what we’re doing here with the layer 2 bridge is extending an overlay-backed segment to a VLAN.
So, in this diagram, we can see that we have our Geneve-encapsulated network on the left side. This is our overlay segment, and we’ve got virtual machines that are connected to it. And then on the right, we’ve got a VLAN. And in the middle, we’ve got this edge bridge. On the left, we have VNIs, or virtual network identifiers. There’s no VLAN configuration there, whereas on the right we’ve got a VLAN identifier. So basically, the VNI identifies the layer 2 segment on the overlay side, whereas the VLAN ID identifies the layer 2 segment on the VLAN side. And so here are some frames with a source MAC and a destination MAC. And maybe a frame is destined for something on the VLAN segment. Well, in that case, it’s going to get switched over to this edge bridge. The edge bridge is going to pull out the VNI and prepare the header for transmission across a traditional VLAN-based layer 2 network. And in NSX-T 2.4, you can have one edge bridge per given segment. So you can have a single layer 2 bridge between a VLAN and a VNI-backed overlay network.
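To make that VNI-to-VLAN translation concrete, here’s a minimal Python sketch of what the edge bridge does to a frame: it strips the Geneve/VNI encapsulation and re-tags the inner Ethernet frame with the configured VLAN ID. The class and field names are purely illustrative, not any real NSX API, and the VNI and VLAN values are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class GeneveFrame:
    vni: int          # 24-bit virtual network identifier (overlay side)
    src_mac: str
    dst_mac: str
    payload: bytes

@dataclass
class VlanFrame:
    vlan_id: int      # 12-bit 802.1Q VLAN tag (physical side)
    src_mac: str
    dst_mac: str
    payload: bytes

class EdgeBridge:
    """Toy model of an edge bridge: one VNI bridged to one VLAN."""
    def __init__(self, vni: int, vlan_id: int):
        self.vni = vni
        self.vlan_id = vlan_id

    def to_vlan(self, frame: GeneveFrame) -> VlanFrame:
        # Strip the VNI header and re-tag the inner frame with the VLAN ID.
        # The MAC addresses and payload pass through unchanged.
        assert frame.vni == self.vni, "frame is not on the bridged segment"
        return VlanFrame(self.vlan_id, frame.src_mac, frame.dst_mac, frame.payload)

bridge = EdgeBridge(vni=5001, vlan_id=100)
out = bridge.to_vlan(GeneveFrame(5001, "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", b"data"))
print(out.vlan_id, out.dst_mac)  # 100 aa:bb:cc:00:00:02
```

The key point the sketch captures is that bridging is a layer 2 operation: only the encapsulation changes, never the addresses inside the frame.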
Now, you could have multiple different VNIs that are being bridged to the same VLAN. That’s possible. All right, so now let’s take a look at how traffic actually flows over the layer 2 bridge. Let’s start by breaking down this diagram really quickly. You can see we’ve got a couple of transport nodes here: ESXi 1 and ESXi 2. And we’ve got an overlay transport zone and a virtual machine connected to a layer 2 segment; notice VM 1’s IP address. Up here, we’ve got a VLAN transport zone. And notice VM 2’s IP address is on the same network as VM 1. And then over here in the physical external network, we’ve got a physical server, which again is in the same IP address range. So when you go to set up a layer 2 bridge, the first step is to create an ESXi bridge cluster. And in this example, I’ve only used a single transport node for the bridge cluster. But ideally, you’ll have two ESXi hosts defined as transport nodes that can be used in the bridge cluster. And by the way, the transport node has to be an ESXi host.
And so, step one is to simply identify one or, ideally, two ESXi hosts that can be part of the bridge cluster. And then you’ll go through the process of creating a bridge profile, where you’ll select an NSX edge cluster. You’ll pick a primary node, you’ll pick a backup node, and you’ll select how you want it to fail over. So again, in my design here, I’ve just kept it really simple. I only have a single edge node running, and I only have a single transport node, but in reality, I would want two of each for failover purposes. And then another step you’re going to have to take is to enable either promiscuous mode or MAC learning for this edge node. And the reason behind that is that it has to have the ability to learn which MAC addresses are on which parts of the network. So think of this NSX edge node as having an understanding of which systems are on each side of the layer 2 bridge. And then you’ll go through the process of creating a layer 2 segment. So the process of selecting the NSX edge cluster, selecting a primary edge node, selecting a backup edge node, and selecting the failover mode is known as creating a bridge profile. And I can then subsequently go to my layer 2 segment and associate it with a bridge profile. And what you’re doing when you do that is essentially connecting that layer 2 segment to that bridge.
And so when I connect my overlay segment to the bridge, it’ll ask me for the VLAN identifier that I want to bridge it to. And once that’s done, you have created the layer 2 bridge. So here’s how traffic is actually going to flow: let’s say that VM 1 sends an Ethernet frame destined for the MAC address of VM 2. It’s going to flow out of VM 1 and hit the edge node. The edge node is my layer 2 bridge. It can also learn MAC addresses so that it knows which interface and which side of the bridge to forward that traffic out of. And so now let’s say that VM 2 generates an Ethernet broadcast. Not only will it be received by everyone on this VLAN transport zone, but it will also be forwarded to the overlay transport zone by the layer 2 bridge. So what you’ve done at this point is create a layer 2 network that spans both the VLAN and the overlay. Now, in NSX-T 2.5, there is some additional functionality that is supported.
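The MAC learning behavior described above can be sketched as a toy learning bridge in Python: record which side each source MAC was seen on, forward known unicast frames toward the learned side, and flood broadcasts and unknown destinations. This illustrates the general mechanism, not NSX code; the MAC strings are made up.

```python
class LearningBridge:
    """Toy MAC-learning table for a two-sided bridge (overlay vs. VLAN)."""
    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self):
        self.table = {}  # MAC address -> "overlay" or "vlan"

    def receive(self, src_mac, dst_mac, in_side):
        self.table[src_mac] = in_side      # learn which side the sender is on
        if dst_mac == self.BROADCAST or dst_mac not in self.table:
            # unknown or broadcast destination: flood to the other side too
            return "flood"
        return self.table[dst_mac]         # known unicast: forward to learned side

b = LearningBridge()
print(b.receive("vm1-mac", "vm2-mac", "overlay"))  # flood (vm2 not learned yet)
print(b.receive("vm2-mac", "vm1-mac", "vlan"))     # overlay (vm1 was learned)
```

This is exactly why the edge node needs MAC learning (or promiscuous mode) enabled: without the table, every frame would have to be flooded across the bridge.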
The same layer 2 segment can be attached to several different bridges on several different NSX edge nodes. And the purpose of this design is, let’s say I’ve got this overlay transport zone and I want to connect it to a VLAN for one rack in my data center, and maybe there’s a second rack in my data center that’s on a different VLAN. Well, having these multiple edge bridges is going to allow bridging to VLANs in separate racks without depending on the physical network to actually provide that connectivity. So again, that’s a newer feature that’s supported only in NSX-T 2.5 and later. So finally, let’s wrap this up by taking a look at how the layer 2 bridge integrates with the routing here in my NSX domain. Distributed routing is going to route between the different segments within my NSX domain. So in this example, and by the way, this example is coming from the NSX-T reference design guide, we have a multi-tier routing architecture.
We have a Tier 0 gateway interconnecting multiple Tier 1 gateways. So with our Tier 1 gateway here on the left, we see just a regular overlay segment. And I’ve got multiple virtual machines connected to that overlay segment. I’ve also got a segment here that is connected to an edge bridge. And here’s my physical server. So this physical server is connected to a VLAN-backed network, and it could actually be using the Tier 1 gateway here as its default gateway. Tier 1 or Tier 0 gateways can potentially act as default gateways even for physical systems. And so I could have a virtual machine here that needs to communicate with this physical server over here. It sends a packet destined for that physical server. It’s going to get routed by the Tier 1 gateway up to the Tier 0 gateway, then down to the other Tier 1 gateway, a different tenant’s router, and out to the appropriate segment. So in conclusion, routing seamlessly integrates with layer 2 bridges. We can use our distributed routers as default gateways for the segments that are being extended out to a VLAN.
2. Network Address Translation (NAT)
So what is the purpose of network address translation? Well, basically, it’s used to allow inbound or outbound communication between external networks and privately addressed virtual machines. So you may have virtual machines inside your NSX domain that exist on one of the private IP address ranges. This is a really common practice. There aren’t a lot of public IPs out there that are available, so often you’ll configure your internal machines with private IP addresses. Those private IP addresses don’t make any sense on the Internet. So in order to allow traffic from these privately addressed virtual machines to flow out to the Internet, or in order to allow traffic to flow in from the Internet, we need to modify the IP addresses used. And we do that using network address translation. And network address translation has been around forever. It’s a standards-based feature.
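As a quick illustration of that private/public distinction, Python’s standard ipaddress module knows the RFC 1918 private ranges. The two addresses here are the sample addresses used later in this lesson:

```python
import ipaddress

# RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
# is_private tells us whether an address is Internet-routable or not.
for ip in ["172.16.10.13", "80.80.80.1"]:
    print(ip, ipaddress.ip_address(ip).is_private)
```

172.16.10.13 falls inside 172.16.0.0/12, so it needs NAT to reach the Internet; 80.80.80.1 is publicly routable.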
It’s supported by many vendors. Both source NAT and destination NAT are supported in NSX-T. Port address translation is also supported; so if you have an entire range of IP addresses that need to be translated, that’s when you would use port address translation. So let’s take a look at how this works. Let’s start with a really simple example. Here’s a virtual machine called Web 3. You can also see the source IP address: Web 3 has a private IP address of 172.16.10.13. So the source IP is private, but it needs to communicate with some external machine, maybe something on the Internet. So Web 3 generates some packets destined for a machine on the Internet. When it does that, it’s going to send those packets to its default gateway, which in this case is a Tier 1 distributed router. So we’ve got a multi-tier routing architecture here. The Tier 1 router is being used as the default gateway for this segment. And so the distributed router is going to determine, “Hey, this is a packet that requires network address translation.” It’s not just going to get routed; it’s going to get translated. And none of those centralized services happen in the Tier 1 distributed router. They happen over here in the Tier 1 service router.
So now this traffic needs to get encapsulated, flow over the physical network on the intra-tier transit segment, and arrive at the Tier 1 service router. And, as you may recall from previous lessons, there is an automatically created segment between the Tier 1 service router and the Tier 1 distributed router that carries Geneve-encapsulated traffic. So basically, at this point, the traffic has been delivered to the Tier 1 service router, where network address translation can happen. The traffic is encapsulated by a TEP, carried across the underlay, and delivered to the Tier 1 service router at the edge node. And the Tier 1 service router says, “Okay, for IP address 172.16.10.13, that’s the source IP, and I’m performing source NAT. I need to pull out that original source IP, replace it with this publicly routable source IP, and then forward it on to the next hop.” So when it gets forwarded on to the next hop, which is our Tier 0 service router, the source IP is going to look like 80.80.80.1. And so when that traffic gets routed out to the Internet, the external machine on the Internet is seeing a source IP that it actually understands. And if it needs to respond back, it can. If that traffic had been delivered with the original source IP, the external machine would not have been able to respond, because that’s a private IP. So that’s an example of source NAT.
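The source NAT step the service router performs can be modeled in a few lines of Python. The rule format is purely illustrative, and the addresses match the example above:

```python
def snat(packet, rules):
    """Apply the first matching source-NAT rule: replace the private
    source IP with the configured translated (public) IP."""
    for original, translated in rules:
        if packet["src"] == original:
            return {**packet, "src": translated}
    return packet  # no rule matched: the packet is simply routed unchanged

# One rule: Web 3's private address maps to the public address.
rules = [("172.16.10.13", "80.80.80.1")]
pkt = {"src": "172.16.10.13", "dst": "8.8.8.8"}
print(snat(pkt, rules)["src"])  # 80.80.80.1
```

Note the destination IP is untouched; only the source is rewritten on the way out.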
And that’s actually the exact example that we’re going to look at when we do a demo. In source NAT, the router is removing the source IP and replacing it with a different source IP, and it’s useful for outbound traffic. With destination NAT, we’re doing the exact opposite. So now we’ve got traffic coming from some external machine, and the external machine sends that traffic to this public IP. Well, the traffic arrives at the Tier 0 service router, and the Tier 0 service router has a route table entry for that. So how does the Tier 0 service router know how to get to 80.80.80.1? Basically, here’s how it works: when we set up NAT, we’re going to configure the Tier 1 service router to say, “Hey, if there are any NAT rules, advertise those networks to the Tier 0 service router.” So the Tier 0 service router is going to get an advertisement from this Tier 1 router saying, “Hey, if you need to send traffic to 80.80.80.1, send it to me. I’ve got a NAT rule for that.” So the traffic flows into this Tier 0 service router from the Internet, or from my physical router in my data center.
The Tier 0 service router has that route table entry saying 80.80.80.1 goes to the Tier 1 service router. And once the traffic hits the Tier 1 service router, the Tier 1 service router is going to do its job. It’s going to perform a destination NAT. It’s going to pull out that destination IP and replace it with this private IP address. And now the destination IP is the actual private IP address of this Web 3 VM. And now it’s just a simple matter of routing, right? The Tier 1 service router is going to say, “I’ve got an interface on the 172.16.10.0 network; let me forward it out over that interface.” And at this point, the traffic has flowed in and has hit this segment, 5001. That’s the segment that the Web 3 VM is connected to. And from that moment on, it’s just a simple matter of encapsulation, forwarding over the physical underlay, decapsulation, and delivery. So again, it is destination NAT that is used to replace the destination IP for inbound traffic.
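Destination NAT is the mirror image of the source NAT sketch above: for inbound traffic, the public destination IP is swapped for the VM’s private IP, and then ordinary routing delivers the packet. Again, the rule format is illustrative only:

```python
def dnat(packet, rules):
    """Inbound direction: replace the public destination IP with the
    VM's private IP; normal routing then delivers the packet."""
    for public_ip, private_ip in rules:
        if packet["dst"] == public_ip:
            return {**packet, "dst": private_ip}
    return packet

# One rule: the advertised public IP maps to Web 3's private address.
rules = [("80.80.80.1", "172.16.10.13")]
inbound = {"src": "203.0.113.9", "dst": "80.80.80.1"}
print(dnat(inbound, rules)["dst"])  # 172.16.10.13
```

This time the source IP passes through untouched, so the VM can reply directly to the external machine.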
3. Demo – Configuring Network Address Translation (NAT)
In this video, I’ll show you how to set up Network Address Translation (NAT) in an NSX-T 3.0 environment. And I’ll be demonstrating these tasks with the help of the free labs available at hol.vmware.com. So here I am logged into the NSX-T user interface, and I’m going to go over to the networking section, and notice here we currently have a Tier 0 gateway, but we do not have a Tier 1 gateway. So I’m going to go over to Tier 1 gateways, and I’m going to add a new Tier 1 gateway. So I’m just going to call my Tier 1 gateway Rick NAT, and I’m going to link it to my Tier 0 gateway.
And I’m going to pick an edge cluster here because we are going to be utilizing a centralized service. Network address translation is a centralized service that happens in the service router of the Tier 1 gateway. And so now I’ll just go ahead and use all of the rest of the defaults and click on Save here. And that’s all of the configuration that I’m going to do on this Tier 1 gateway at the moment. So I’ll just click on No here. Now, also in this lab environment, there are a number of segments that have already been created. What I’m going to do is grab one of those segments, and instead of having it connect to the Tier 0 gateway, I’m going to move it over and have it connected to my Tier 1 gateway. So I’m going to pick the segment called LS Web. I’ll click on the ellipsis here, and I’ll go to Edit. And under Connectivity, I’m going to change it from my Tier 0 gateway to the new Tier 1 gateway that I just created. And then I’ll just scroll down, save the changes that I made, and click on the little Close Editing button here.
So now I’ve got multi-tier routing enabled. I’ve got a Tier 1 gateway. I’ve got a segment connected to that gateway. So under Network Services, let’s go to NAT, and I am going to add a NAT rule. And the rule that I’m going to be creating here is going to be called Web One A. And under Action, the only option it’s giving me here is Reflexive. That’s my mistake. That’s because I created this rule on the Tier 0 gateway instead of the Tier 1 gateway. Let’s move over to the Tier 1 gateway and create the NAT rule there instead. So the first rule that I’m going to create is a destination NAT. And so if traffic hits this Tier 1 gateway and the destination IP is 80.80.80.1, I want to translate that destination IP of the packet to 172.16.10.11. That’s my internal private IP for the Web One A virtual machine. So let’s go ahead and click on Save here. And now I’ve got my destination NAT rule created. And let’s go ahead and add the corresponding source NAT rule. So I’m going to call this rule Web, and the action is going to be performing a source network address translation.
So the original source address is going to be 172.16.10.11. The translated address is going to be 80.80.80.1. So if some traffic originates from this private IP and hits this Tier 1 service router destined for the Internet, the source IP is going to be modified and changed to 80.80.80.1, which is a public IP that will actually be reachable on the Internet. Okay, so now we’ve got our source NAT rule and our destination NAT rule created. Let’s go over here to the Tier 1 gateway. And so here’s my Tier 1 gateway. I can see that I’ve got a linked segment, this LS Web segment, connected to it. So I’ve got the 172.16.10.0 network directly connected to this Tier 1 gateway. And if I look at my route advertisement, we can see that this Tier 1 router is not advertising connected segments and service ports. So I’ve got to make a change there, because I want to advertise these networks to the Tier 0 gateway. So I’m going to edit this Tier 1 gateway, and I’m going to advertise all connected segments and service ports. I’ll also advertise all NAT IPs. And really, if we think about it, do we really need to advertise the connected segments and service ports? Well, not really, because if people are trying to connect to this Web virtual machine, they’re going to be trying to connect using that public IP. So what do I really need to advertise to the Tier 0 gateway?
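For reference, here is roughly what the two rules from this demo look like as JSON request bodies. This sketch builds the payloads as I recall them from the NSX-T 3.x Policy API, where each rule is PATCHed under /policy/api/v1/infra/tier-1s/&lt;gateway-id&gt;/nat/USER/nat-rules/&lt;rule-id&gt;; treat the endpoint and field names as assumptions and verify them against your version’s API documentation.

```python
import json

# Hypothetical helper: builds the JSON body of a Policy API NAT rule.
# Field names (action, source_network, destination_network,
# translated_network) should be checked against your NSX-T version.
def nat_rule_body(action, translated, source=None, destination=None):
    body = {"action": action, "translated_network": translated}
    if source:
        body["source_network"] = source            # match on original source IP
    if destination:
        body["destination_network"] = destination  # match on original destination IP
    return body

# The two rules from the demo: inbound DNAT and outbound SNAT.
dnat_rule = nat_rule_body("DNAT", "172.16.10.11", destination="80.80.80.1")
snat_rule = nat_rule_body("SNAT", "80.80.80.1", source="172.16.10.11")
print(json.dumps(dnat_rule, sort_keys=True))
print(json.dumps(snat_rule, sort_keys=True))
```

Notice the symmetry: the DNAT rule matches the public IP and translates to the private one, while the SNAT rule matches the private IP and translates to the public one.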
If the Tier 0 gateway sees traffic bound for 80.80.80.1, it’s going to forward it to the Tier 1 gateway, as long as I am advertising all NAT IPs. And so now what I’m going to do is just go ahead and save this configuration so that those NAT IPs are advertised to the Tier 0 gateway. And so now that we’ve got that set up, let’s try a little test here. In my hands-on lab kit, I’m going to open a command prompt. Now, this is my console, and my console is not part of the NSX domain. So if I now try to ping this web server virtual machine and I can successfully hit it, what that means is that my ping is flowing through the Tier 0 gateway, being routed to the Tier 1 gateway, going through this network address translation, and successfully getting a response. So it looks like my NAT configuration is working properly. So let’s go back and review one of the diagrams that we saw in the last video. Here’s this Web 3 virtual machine that we’ve been working with in our lab, connected to this Tier 1 distributed router. And as it sends out traffic, that traffic is flowing over to the Tier 1 service router over the physical underlay network. And the Tier 1 service router is replacing the IP address.
At that point, it’s performing that source NAT before the traffic ever hits the Tier 0 service router. And so what we are essentially doing here is hiding this segment of IP addresses by not advertising it to the Tier 0 router. We’re not making any other routing components in our environment aware of this 172.16.10.0 network. And that’s one of the purposes of NAT: to have that security. So really, as far as the Tier 0 router is concerned, the only address it needs to know about is 80.80.80.1. That’s it. The Tier 1 can perform that source NAT, send the traffic to the Tier 0, and the Tier 0 can route it out wherever it needs to go. And on the inbound side of things, when traffic flows in from some external machine destined for the 80.80.80.1 IP address, the Tier 0 router knows that traffic needs to get sent to the Tier 1 service router, where this NAT translation can be performed and the traffic can reach the destination.
4. Demo – Reflexive NAT
In the last video, we saw some basic configuration of network address translation. Now in this video, I want to dig a little bit deeper and look at some of the other NAT options that we have available. And so here I am in the NSX-T user interface. I’m going to go to my Tier 0 gateway, and in this environment, I have a Tier 0 gateway configured. There is currently no Tier 1 gateway. And in this lab environment, the Tier 0 gateway is configured in active-active high availability mode. And because of that, we cannot configure stateful NAT. We have to configure what’s called reflexive NAT. Stateful NAT is a service that is not supported by a Tier 0 gateway when active-active high availability is enabled. So here at the NAT screen, we’re going to use our Tier 0 gateway, and I’m going to create a NAT rule.
I’m just going to call the new NAT rule Rick Demo, and for Action, I don’t see any options other than Reflexive. I don’t see source NAT, and I don’t see destination NAT. Reflexive is the only option available to me. So, in this video, we’ll set up a NAT rule very similar to the one we set up in the previous video for Web One A. And just bear in mind what we’ve got configured here: single-tier routing. There’s no Tier 1 gateway. A lot of those things that I did in the last lab have been undone. So the source IP that it’s going to match is going to be the same source IP from the previous video, 172.16.10.11. And again, the translated IP is going to be 80.80.80.1. And then I’ll just go ahead and click on Save here. So now I’ve created a reflexive NAT rule on my active-active Tier 0 gateway. So let’s go back to the Tier 0 gateway here. And so I’m just going to edit the Tier 0 gateway quickly here.
And I specifically want to take a look at route redistribution. And basically, what I need to decide here is: do I want the Tier 0 gateway to inform the physical router about these NAT IPs? So I’m going to go ahead and click on Add Route Redistribution here, and I’m going to call my route redistribution Tier Zero NAT, and I’m going to click Set under Route Redistribution. And I’m going to say that for Tier 0 subnets, I want to redistribute NAT IPs into the BGP routing protocol. So now the upstream physical routers will learn about my Tier 0 NAT IP addresses. So that 80.80.80.1 address that I configured, my Tier 0 gateway is now going to advertise that network to the external router. So I’m going to close editing here. And now these changes are live. So let’s go to the home screen of my console here. And you can kind of think of this console as an external computer. I’m going to try to ping 80.80.80.1. And look at that, it’s able to ping it. So it looks like my reflexive NAT rule is working properly. Even though I have an active-active Tier 0 gateway, I can still enable the reflexive NAT rule. I can advertise that public IP into the physical routing domain, and I can reach it from an external machine.
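The reason reflexive NAT works in active-active mode is that it is stateless: the same fixed one-to-one mapping is applied symmetrically in both directions, so no per-flow session table has to be shared between the edge nodes. A toy Python model, using the demo’s addresses:

```python
# A reflexive NAT rule is just a static 1:1 mapping, applied to the
# source IP outbound and to the destination IP inbound. Either edge
# node can apply it independently, with no shared state.
MAPPING = {"172.16.10.11": "80.80.80.1"}
REVERSE = {v: k for k, v in MAPPING.items()}

def reflexive(packet):
    src, dst = packet
    if src in MAPPING:             # outbound: translate the source IP
        return (MAPPING[src], dst)
    if dst in REVERSE:             # inbound: translate the destination IP
        return (src, REVERSE[dst])
    return packet

print(reflexive(("172.16.10.11", "8.8.8.8")))   # ('80.80.80.1', '8.8.8.8')
print(reflexive(("8.8.8.8", "80.80.80.1")))     # ('8.8.8.8', '172.16.10.11')
```

Contrast this with stateful SNAT, which must remember each outbound flow in order to translate the replies; that flow table is exactly what an active-active pair cannot easily share.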
5. DHCP

So you can configure a DHCP server on either a Tier 1 or Tier 0 service router. Either one can act as a DHCP server, and you can connect interfaces on your logical switches to either a DHCP relay or a DHCP server. It’s important that you don’t connect to both. If you’ve got a segment in NSX and you connect it to both a DHCP relay and a DHCP server, the DHCP relay will be used, but the DHCP server will not. And that’s essentially how it works. You’ll have DHCP requests coming from virtual machines on your router downlinks, and either the Tier 1 or Tier 0 service router will respond to those requests. And for each segment that you’re connecting to DHCP, you’ll configure a range of IP addresses that should be distributed to virtual machines on that segment. And in the next video, you’ll see me demonstrate that process.
But before we get there, let’s take a conceptual look at how this works. So here on the left, we see a transport node, and we’ve got a Tier 1 distributed router. And we’ve got segment 5001, which is one of the layer 2 segments connected to that Tier 1 distributed router. On the right, we’ve got an edge node. We’ve got our distributed router instance running on that edge node, and we’ve also got the Tier 1 service router. So we’ve got an intra-tier transit connection between the distributed router and the service router. And on the left, I’ve got a virtual machine that’s connected to this segment. And the virtual machine does not currently have an IP address. So this VM is configured to boot up and issue a layer 2 broadcast, which is a DHCP request. So the VM boots, sends the DHCP request, and the Ethernet broadcast reaches the distributed router, which recognizes, “Hey, this is a DHCP request. I need to forward this along to the Tier 1 service router on the edge node.” And so the DHCP request is encapsulated by the TEP and sent over the physical underlay network, where it arrives at the receiving TEP, which decapsulates it.
And then the DHCP request is received by the Tier 1 service router. So once the Tier 1 service router receives that request, it’ll offer an IP address to the virtual machine, and that DHCP offer will go right back over the same underlay network and arrive at the VM, at which point the virtual machine can request that IP address and the DHCP server can acknowledge that request. So yeah, basically in this scenario, the Tier 1 service router is acting as a DHCP server that is able to provide IP addresses to virtual machines on this particular segment. Now, the other way that we could configure DHCP is through a DHCP relay. So in this situation, let’s assume that we already have a DHCP server in place and that we want to continue using that existing DHCP server. So when a virtual machine boots up, it sends that Ethernet broadcast, it sends that DHCP request, and it hits the Tier 1 gateway. And the Tier 1 gateway will encapsulate it and send it over the underlay network, and it will arrive at the Tier 1 service router. And the service router is configured with a DHCP relay, so it’s basically configured with the IP address of the DHCP server that this request should be forwarded to. And so now the request is actually hitting this external DHCP server, and the external DHCP server will offer an IP address to the virtual machine. And I could also have multiple DHCP servers. So I could set up multiple DHCP servers on the Tier 1 router and connect different DHCP servers to different segments. I can establish for each segment what the IP range is that should be used for that particular segment. And you’ll see that in the next video, where I’ll walk you through the complete configuration process for DHCP and how to establish an IP address range for different segments.
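The Discover/Offer/Request/Ack exchange described above, with a per-segment address pool, can be modeled with a toy DHCP server in Python. The pool range and MAC addresses are made up for the example:

```python
import ipaddress

class DhcpServer:
    """Toy DHCP server for one segment: a pool of addresses and a lease table."""
    def __init__(self, pool_start, pool_end):
        start = int(ipaddress.IPv4Address(pool_start))
        end = int(ipaddress.IPv4Address(pool_end))
        self.free = [str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)]
        self.leases = {}  # client MAC -> leased IP

    def discover(self, mac):
        # DHCPOFFER: re-offer an existing lease, or the next free address.
        return self.leases.get(mac) or self.free[0]

    def request(self, mac, ip):
        # DHCPACK: commit the lease and remove the address from the pool.
        if ip in self.free:
            self.free.remove(ip)
        self.leases[mac] = ip
        return ip

server = DhcpServer("172.16.50.100", "172.16.50.200")
offered = server.discover("aa:bb:cc:00:00:01")
print(server.request("aa:bb:cc:00:00:01", offered))  # 172.16.50.100
```

In NSX terms, one DhcpServer instance corresponds to one segment’s configured DHCP range; a relay would instead forward the Discover to an external server holding the pool.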
6. Demo: DHCP
In this video, I’ll demonstrate how to configure DHCP in NSX-T 3.0. And I’ll be using the free labs that are available at hol.vmware.com. So here you can see that I’m logged into the NSX-T user interface. I’m just going to go ahead and click on Networking here. And under Networking, I’m going to scroll down to IP Management and click on DHCP, where I’m going to create a new DHCP profile. So I’ll click on Add DHCP Profile, and I can either set up a DHCP server here or I can set up a DHCP relay to a different DHCP server. So if I do have a different DHCP server and I just want to forward my DHCP requests there, I would set up DHCP relay and configure the server that I want to send those requests to. But I’m not going to do that here. I’m going to set up a DHCP server. And for the profile name, I’m just going to call it Rick DHCP, and I’ll assign an IP address to the DHCP server. And a little tricky thing here with the console is that you have to click on this little Add Item before the server’s IP address is actually added. And then I’ll pick the edge cluster that I want my DHCP server to run on; I’ll pick edge cluster one.
And I have to choose an edge cluster because the DHCP server will be running in a service router. So I’ll go ahead and click on Save here. And now my DHCP profile has been successfully created. So now that we’ve got the DHCP server set up, let’s go to our Tier 0 gateway. And I’m just going to click on the little ellipsis here, and I’m going to edit my Tier 0 gateway. And under IP address management here, I’m going to click where it says No Dynamic IP Allocation. And instead of the default here, I’m going to establish a DHCP server. And I’ve got my new DHCP server, called Rick DHCP, that I just created. And then I’ll just go ahead and click on Save here. And now we can see that my Tier 0 gateway is configured to use this DHCP server.
So let me just finish by clicking Save, and then I’ll click on Close Editing. Now at this point, I’ve got a DHCP server, and I’ve associated it with the Tier 0 gateway. However, I haven’t done anything to establish the IP address ranges that should be distributed to the virtual machines that issue DHCP requests. So what I’m going to do now is go to the Segments area. So what I’ll do here is add a new layer 2 segment. I’m just going to call it Rick DHCP, and I’m going to connect it to my Tier 0 gateway. I’m going to put it in my overlay transport zone. And then for Subnets, this is where I’m actually going to configure my pool of DHCP addresses. So I’m going to set the default gateway to 172.16.50.1/24. And then I can click on where it says Set DHCP Config. And then from there, I could choose either DHCP local server, DHCP relay, or gateway DHCP. I’m going to pick my DHCP local server, and I am going to enable DHCP.
Or I could utilize gateway DHCP, which is just going to use the DHCP server that I’ve already associated with my Tier 0 gateway. So that’s what I’ll do. So I’ll enable the DHCP configuration, and then I’ll establish a range of IP addresses that should be distributed to any of the virtual machines that connect to that segment. So there’s my range of IP addresses. And again, you have to click on the little Add Item here. I can set my DHCP lease time, and I can set my DNS servers here as well, so that all of my virtual machines will be automatically configured with those values also. So now that we’ve got this all configured, I’m just going to scroll down a little bit here. I’m going to save, and I am going to close editing. And so what we have now is a segment that is connected to my Tier 0 gateway. And on this segment, any virtual machine that I attach to it should be able to complete a DHCP request.